
Machine Vision Approach Using Leaf-Based Multi-Feature Analysis for Citrus Varieties Classification

Salman Qadri1 and Mutiullah2

1 Computer Science Department, The Islamia University of Bahawalpur, Punjab, Pakistan
2 Computer Science Department, Quaid-i-Azam University, Islamabad, Pakistan

Corresponding author:
First Author1
Email address: f.author@email.com

ABSTRACT

The objective of this study is to assess the potential of a machine vision (MV) approach for the classification of eight citrus varieties. Leaf images of eight citrus varieties, namely grapefruit, Moussami, Malta, Lemon, Kinnow, Local lemon, Fuetrells, and Malta Shakri, have been acquired by a digital camera in an open environment without any complex laboratory setup. The acquired digital image dataset has been transformed into a multi-feature dataset, which is the combination of binary, histogram, texture, spectral, and rotation, scaling and translation (RST) invariant features. For each citrus leaf image, a total of 57 multi-features have been acquired on every non-overlapping region of interest (ROI), that is, (64x64), (128x128), (256x256), and (512x512). An optimized set of 15 features has been obtained by employing the supervised correlation-based feature selection (CFS) technique, and this optimized multi-feature dataset has been deployed to different MV classifiers, namely Multilayer Perceptron (MLP), Random Forest (RF), J48, and Naive Bayes, using the 10-fold cross-validation method. The Multilayer Perceptron (MLP) has shown an excellent overall accuracy of 98.14 percent on ROIs (512x512) among the deployed classifiers. The classification accuracies of these eight varieties on MLP, namely grapefruit, Moussami, Malta, Lemon, Kinnow, Local lemon, Fuetrells, and Malta Shakri, have been observed as 98, 98.75, 99.25, 97.5, 97, 95.875, 95.5, and 99.375 percent respectively.

INTRODUCTION

Early and accurate discrimination of plant varieties is a major objective of precision agriculture [1]. Leaves are the basic unit of plants, and they differ in shape, size, color, structure, and texture [2]. Leaves have therefore been used to differentiate one variety from another. Citrus fruit is a natural source of vitamin C, fiber, potassium, and glucose, and its use is very beneficial for heart and diabetic patients. It has a high water content of more than 85 percent, and consuming food with high water content helps to prevent dehydration and provides plenty of energy with fewer calories, which supports weight reduction [3]. Citrus is the second most important fruit after grapes in terms of cultivation and production in the world. In Pakistan, almost 200,000 hectares are used for citrus cultivation, and the Punjab province produces almost 64 percent of the citrus crop. Pakistan is the sixth largest producer of citrus fruit. Different varieties are produced locally, such as lemon, Malta, Moussami, and Kinnow (mandarin). Citrus reticulata Kinnow (mandarin) is the most popular variety; almost 95 percent of it is produced in the country, Pakistan is its largest supplier, and it earns millions in foreign exchange every year [4]. Conventional methodologies for the classification of plant varieties rely on field surveys, domain knowledge, and personal experience, which is time-consuming, costly, and inefficient. Many kinds of citrus varieties are grown in the world, and discriminating these varieties at an early stage is very significant for high-quality production, management, nutrient supply, and disease control. The leaves of citrus occupy most of the plant and are the easiest part to examine visually; the majority of plant diseases are recognized through leaf symptoms. Our study therefore focuses on citrus leaves rather than the whole citrus plant. It has been observed that visually examining citrus leaves is very difficult due to the close resemblance of leaf features such as shape, size, color, texture, and geometry. Confusion regarding varietal discrimination may cause serious issues, such as the blind use of fertilizer and pesticides, resulting in low yield quantity and quality. This study describes leaf-based multi-feature analysis using machine vision approaches for the classification of citrus varieties.

Many researchers have applied machine vision approaches to the classification of plants and their diseases [5-7].
Kumar and his research companions proposed a system that automatically recognizes plant species using image processing techniques, implemented as a mobile leaf-image recognition application [8]. Wu and his fellows categorized thirty-two plants using image processing techniques; twelve leaf features were extracted from a dataset of 1,800 leaf images and deployed to a probabilistic neural network (PNN), achieving more than 90 percent accuracy [9]. Shifa and his research fellows correctly identified the leaves of cotton and sugarcane plants using multispectral radiometric data and achieved 98 percent classification accuracy [10]. Rehmani and his fellows acquired temporal data of five different wheat varieties; they collected two types of remote sensing data, radiometric and photographic, and observed classification accuracies of 96 percent and 93.14 percent respectively [11]. Kurtulmus and his research team correctly classified eight varieties of pepper seeds using color and shape features with an artificial neural network (ANN) and observed an accuracy of 84.94 percent [12]. Shahid and his fellows proposed a hybrid feature selection model to discriminate five wheat varieties; when the optimized feature dataset was deployed to an artificial neural network (ANN), more than 95 percent classification accuracy was observed [13]. Szczypinski and his co-authors identified eleven barley varieties using texture, color, and geometry features; linear discriminant analysis (LDA) and principal component analysis (PCA) were used to optimize the feature dataset before deploying an artificial neural network (ANN), and the employed classifier correctly recognized the barley varieties with accuracies ranging from 67 percent to 86 percent [14]. Patil and Bodhe implemented machine vision techniques for disease detection in sugarcane leaves; image segmentation was used to find the affected leaf area, and 98.60 percent overall accuracy was observed [15]. Another software model was suggested by Babu and Rao using a feed-forward back-propagation neural network (BPNN); it was used to identify leaf species, pests, and crop diseases [16]. Pydipati and his colleagues identified citrus disease using a machine vision technique; color texture features were extracted from citrus leaves, discriminant analysis was employed for feature reduction, and more than 95 percent accuracy was observed for this optimized and efficient framework [17]. Murat and his colleagues proposed a plant classification model by implementing six classifiers with three feature selection techniques. Four types of shape descriptors were used in their study, namely morphological shape descriptors (MSD), Histogram of Oriented Gradients (HOG), Hu invariant moments (Hu), and Zernike moments (ZM). The myDAUN dataset, which contains 45 tropical shrub species, was classified using an artificial neural network (ANN), random forest (RF), support vector machine (SVM), k-nearest neighbor (k-NN), linear discriminant analysis (LDA), and a directed acyclic graph multiclass least squares twin support vector machine (DAG MLSTSVM). ANN showed the best classification results, 98.23 percent on the training myDAUN dataset, and was validated at 95.25 percent on the Flavia dataset and 99.89 percent on the Swedish Leaf dataset [18]. Lee and his research team described leaf-feature-based identification of plants using deep learning; simple leaf images were used as input to convolutional neural networks (CNN), different leaf venation features were compared with shape boundary features, and multi-level leaf feature data represented the hierarchical change of features from lower-level to higher-level representations of the related plant species [19]. Pierre and his team developed a deep-learning-based plant identification system using discriminative features of leaf images; they used the publicly available leaf image datasets Leaf Snap, Flavia, and Foliage, and observed that a convolutional neural network (CNN) provides better feature representation for leaf images compared with hand-crafted features [20].
Qureshi and his research colleagues proposed two techniques, a texture-based dense segmentation and a shape-based mango fruit detection, and compared these methods with existing ones: K-nearest neighbor pixel-based classification with contour segmentation, and superpixel-based classification using a support vector machine. The support vector machine outperformed the K-nearest neighbor classifier for mango fruit counting [21]. Tharwat and his colleagues proposed two feature extraction approaches based on one-dimensional (1D) and two-dimensional (2D) methods, together with a Bagging classifier, for plant identification using 2D digital leaf images. For the 1D-based methods, Principal Component Analysis (PCA), Direct Linear Discriminant Analysis (DLDA), and PCA+LDA were employed, while the 2DPCA and 2DLDA algorithms were used for the 2D-based methods. The five techniques, i.e. PCA, PCA+LDA, DLDA, 2DPCA, and 2DLDA, were evaluated on the public Flavia dataset, which contains 1,907 colored leaf images; the accuracy of these techniques was computed, and the results showed that the 2DPCA and 2DLDA methods were much better than PCA, PCA+LDA, and DLDA [22]. Elhariri and his research team proposed an approach based on Random Forests (RF) and Linear Discriminant Analysis (LDA) for classifying different types of plants by extracting a combination of shape, first-order texture, Gray Level Co-occurrence Matrix (GLCM), HSV color moment, and vein features; 340 leaf images of different plant species were downloaded from the UCI Machine Learning public database, and classification accuracies of 92.65 percent for LDA and 88.82 percent for RF were obtained [23]. It has also been observed that machine vision based systems have already been implemented successfully for crop identification [10, 11], land cover classification [24-26], and medical image analysis [27-29]. As discussed above, the literature survey discloses that, prior to this study, no research work has been done on the classification of citrus plant varieties using a multi-feature approach. The final objective of this study is to develop an efficient, cost-effective, and reliable system for the identification of citrus plant varieties using a machine vision approach.

Table 1. Local Citrus Variety Cultivars in Pakistan

General Variety | Local Varieties | Region
Grapefruit (Citrus paradisi) | Mash Seedless, Duncan, Foster, and Shamber | Pakistan
Mandarin (Citrus reticulata Blanco) | Fuetrells Early and Kinnow | Pakistan
Sweet Orange (Citrus sinensis (L.)) | Moussami, Washington Navel, Succri, Red Blood, Jaffa, Ruby Red, and Valencia Late | Pakistan
Bitter Orange (Citrus aurantium L.) | Seville orange, Sour Orange, Marmalade orange or bigarade orange | Pakistan
Lime (Citrus aurantifolia (Christm.)) | Sweet Lime and Kaghazi Lime | Pakistan
Lemon (Citrus limon (L.)) | Eureka and Lisbon Lemon | Pakistan
Rough Lemon (Citrus jambhiri Lush.) | Most common rootstock for the propagation of citrus in the subcontinent | Pakistan
Grapefruit (Citrus paradisi) | First developed in California in 1935; Punjab Agriculture College and Research Institute Faisalabad (then Lyallpur), Pakistan, introduced it in the subcontinent in 1940 | Pakistan

Source: http://pakagrifarming.blogspot.com, August 2017 [30].

MATERIALS AND METHODS

IMAGE DATASET:
This study comprises eight varieties of citrus plants, namely grapefruit, Moussami, Malta, Lemon, Kinnow, Local lemon, Fuetrells, and Malta Shakri. All image acquisition has been performed in a natural environment in the agricultural experimentation field of The Islamia University of Bahawalpur, Pakistan, located at 29°23'44"N and 71°41'1"E, using a digital Nikon Coolpix camera with a resolution of 10.1 megapixels. Twenty healthy citrus plants have been selected for each variety, and two hundred healthy leaves of each variety have been collected. The image background also plays an important role in image processing; for this reason, a white paper sheet was placed under the leaves before acquiring the leaf images. All images have been captured with the camera mounted still at a height of one foot and, to avoid sun shadow effects, images have been captured at noontime on an open sunny day. Two hundred leaf images of each variety have been acquired at different camera angles to reveal the maximal area of the citrus leaves. Finally, a high-quality image dataset of 1600 (200x8) colored images with dimensions of (4190x3010) pixels and a 24-bit depth, in Joint Photographic Experts Group (.jpg) format, has been created to accomplish this research.

Image Preprocessing:

Eight varieties of citrus leaf images have been selected for experimentation, as shown in figure 1.

Figure 1. Digital Leaf Images of Eight Citrus Varieties.

In order to acquire the relevant portion of each healthy leaf image, the Picture Manager software available in Microsoft Office has been used. All 1600 (200x8) cropped color images have been resized to (800x600) pixel dimensions, converted into gray level (8 bit), and stored in bitmap (.bmp) format, as shown in figure 2.

Figure 2. Gray Level Leaf Images of Eight Citrus Varieties

To obtain maximal information from each leaf image and also to increase the size of the dataset, four non-overlapping ROIs with pixel dimensions of 64 by 64, 128 by 128, 256 by 256, and 512 by 512 have been generated using the Computer Vision and Image Processing (CVIP) tools software, version 3.6.12. In this way, a dataset of 6400 (1600x4) sub-images has been developed for each ROI size to accomplish this research work.

Figure 3. Gray Level Citrus Leaf Images of Four Non-Overlapping Regions of Interest (ROIs)
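The ROI tiling itself is straightforward to reproduce. The following is a minimal sketch, assuming Python with Pillow and NumPy rather than the CVIP tools software actually used in this study; the helper function and the example file name are illustrative only.

```python
from PIL import Image
import numpy as np

def extract_rois(image_path, roi_size=512):
    """Convert a leaf image to 8-bit gray level and cut non-overlapping square ROIs."""
    gray = np.array(Image.open(image_path).convert("L"))  # 8-bit grayscale
    rows, cols = gray.shape
    rois = []
    for r in range(0, rows - roi_size + 1, roi_size):
        for c in range(0, cols - roi_size + 1, roi_size):
            rois.append(gray[r:r + roi_size, c:c + roi_size])
    return rois

# Example: tile one resized leaf image into 256x256 ROIs (hypothetical file name).
# rois = extract_rois("kinnow_leaf_001.bmp", roi_size=256)
```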

Multi-Feature Acquisition:

For this study, a multi-feature dataset of leaf images has been acquired, comprising first-order histogram, statistical texture, binary, spectral, and rotation, scaling and translation (RST) invariant features. These features are grouped as: 5 first-order histogram features; 5 texture features, including 5 average texture values over all four directions; 28 binary features composed of 10-pixel height and width normalized projection values; 6 spectral features, including 3 rings and 3 sectors with an additional average value of these spectral features; and 7 RST invariant features [26]. In total, 57 multi-features have been acquired for each sub-image or ROI. In this way, a multi-feature dataset of 364800 (6400 by 57) values has been developed for each size of ROI. All these features have been acquired using the Computer Vision and Image Processing (CVIP) tools software, version 3.6.12. All experimentation for this study has been performed on an Intel Core i3 processor at 2.4 gigahertz (GHz) with 2 gigabytes (GB) of RAM and the 64-bit Windows 7 operating system. The experimental framework is described in figure 4.

Figure 4. Multi-Feature Design Framework for the Citrus Leaf Varieties Classification.
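As an illustration of how the histogram and texture portions of such a feature vector can be computed, the sketch below uses NumPy and scikit-image (0.19+). It assumes that the CVIP texture features correspond to standard GLCM descriptors (energy, inertia/contrast, correlation, inverse difference/homogeneity, entropy); the function names are not from the original study.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def histogram_features(roi):
    """First-order histogram statistics of an 8-bit gray-level ROI."""
    p = np.bincount(roi.ravel(), minlength=256) / roi.size
    levels = np.arange(256)
    mean = (levels * p).sum()
    std = np.sqrt(((levels - mean) ** 2 * p).sum())
    skew = ((levels - mean) ** 3 * p).sum() / (std ** 3 + 1e-12)
    energy = (p ** 2).sum()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return [mean, std, skew, energy, entropy]

def texture_features(roi, distance=1):
    """GLCM texture descriptors averaged over the four standard directions."""
    glcm = graycomatrix(roi, [distance], [0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=256, symmetric=True, normed=True)
    feats = [graycoprops(glcm, prop).mean()
             for prop in ("energy", "contrast", "correlation", "homogeneity")]
    p = glcm.mean(axis=3)[:, :, 0]          # direction-averaged co-occurrence matrix
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return feats + [entropy]
```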

Feature Reduction:

Prior to classification, it has been observed that the 57 features extracted for each image are not all equally important for citrus leaf classification, and handling the resulting large-scale dataset, that is, a 364800-value multi-feature data space, is not an easy task. It is therefore necessary to reduce the extracted feature vector space [31]. For this purpose, a supervised feature selection technique called correlation-based feature selection (CFS) has been employed on this dataset. CFS has the ability to extract the most prominent features in the dataset and is defined in equation (1):

$$H_S = \frac{k\,\bar{\sigma}_{ck}}{\sqrt{k + k(k-1)\,\bar{\sigma}_{kk}}} \qquad (1)$$

Here, H_S denotes the heuristic "merit" of a feature subset S containing k features, σ̄ck is the mean feature-class correlation (k ∈ S), and σ̄kk is the average feature-feature inter-correlation. The numerator of equation (1) expresses how predictive of the class the features in the subset are, while the denominator expresses the redundancy among those features. When CFS is applied to the original feature space, a reduced dimensionality of 15 features is obtained for further processing. The reduced-dimensionality feature space is described in table 2.
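For illustration, the merit of a candidate subset can be computed directly from Pearson correlations as in the sketch below. This is a minimal re-implementation of equation (1) in Python, not the CFS implementation used in the study, which would normally also drive a search strategy (e.g. greedy forward selection) over candidate subsets.

```python
import numpy as np

def cfs_merit(X, y, subset):
    """Heuristic merit H_S of a feature subset (equation 1).

    X: (n_samples, n_features) feature matrix, y: numeric class labels,
    subset: list of feature column indices."""
    k = len(subset)
    # mean absolute feature-class correlation (numerator term)
    r_cf = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in subset])
    # mean absolute feature-feature correlation (redundancy term)
    pairs = [(a, b) for i, a in enumerate(subset) for b in subset[i + 1:]]
    r_ff = (np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1]) for a, b in pairs])
            if pairs else 0.0)
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)
```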

Table 2. Correlation Based Feature Selection (CFS) Table for ROIs (512 by 512).

S.No. Features S.No. Features


1 Histo-Mean 9 Inverse-Diff-Range
2 Histo-Standard-Deviation 10 Texture-Entropy-Range
3 Texture-Energy-Average 11 Spectral-DC
4 Texture-Energy-Range 12 Ring1
5 Inertia-Average 13 Sector1
6 Correlation Average 14 Sector2
7 Correlation Range 15 Sector3
8 Inverse-Diff-Average

Finally, the 364800 (6400 by 57) multi-feature vector space has been reduced to a 96000 (6400 by 15) CFS-based dataset for each size of ROI for the varietal discrimination of citrus plants, and this optimized multi-feature dataset has been deployed to different machine vision classifiers.

Dataset Preparation:

For classification, the 10-fold stratified cross-validation method has been used for training and testing. The total dataset of 6400 images has been used to develop the optimized (96000-value) multi-feature data space, which has been divided into 10 equal folds of data instances of the eight citrus varieties. Each fold comprises almost 640 image instances with the eight citrus varieties in equal proportions. In each iteration, the classifier is trained on the union of 9 folds and tested on the remaining fold; this process is repeated until the last iteration, and the overall accuracy over the 10 folds is calculated.
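The evaluation protocol can be sketched as follows with scikit-learn; the estimator, the loader, and the random seed are assumptions for illustration, not the tool used in the original experiments.

```python
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier

# X: (6400, 15) optimized CFS feature matrix, y: (6400,) variety labels (8 classes).
# X, y = load_citrus_features()   # hypothetical loader, not part of the study

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)  # 10 stratified folds
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500)

# One accuracy score per fold; the mean gives the reported overall accuracy.
# scores = cross_val_score(clf, X, y, cv=cv)
# print("Overall accuracy: %.2f%%" % (100 * scores.mean()))
```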

Classification
Different machine vision classifiers, namely Multilayer Perceptron (MLP), Naive Bayes (NB), Random Forest (RF), and J48, have been employed on this optimized multi-feature citrus leaf dataset [32]. First, it has been observed that employing these classifiers on ROIs of (64 by 64) and (128 by 128) gives a very low accuracy of less than 60 percent. Similarly, when the same classifiers with the same strategy are deployed on ROIs of (256 by 256), an overall accuracy of less than 88 percent is observed for MLP, which is slightly better than the other implemented classifiers. Finally, on ROIs of (512 by 512), very impressive results have been observed for the deployed classifiers. Here, the MLP classifier again performed best among the implemented classifiers, because MLP generally performs well on noisy, open-environment data [33]. In the generalized artificial neural network (ANN) model, the number of input terminals equals the number of input features Y, while the output vector Z has dimension M_Z determined by the number of classes to be classified; thus, the ANN has M_Z output terminals. Overall, the ANN shows the best results compared with the other three classifiers. The generalized ANN model is described in figure 5, and its computation is given below.

$$Z_k = \left[ V_{k0} + \sum_{j=1}^{M_h} V_{kj}\, h_j \right]$$

$$h_j = \left[ W_{j0} + \sum_{i=1}^{M_Y} W_{ji}\, Y_i \right]$$

The following error function is then minimized by adjusting the weights V and W:

$$E = \frac{1}{2} \sum_{i=1}^{M} \sum_{n=1}^{M_z} \big( o_{in} - Z_{in}(Y_i; V, W) \big)^2$$
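A minimal numerical sketch of these expressions, assuming plain linear units exactly as written (any activation function used in the actual MLP is omitted), is:

```python
import numpy as np

def ann_forward(Y, W, W0, V, V0):
    """Hidden values h_j and outputs Z_k as in the expressions above."""
    h = W0 + W @ Y          # h_j = W_j0 + sum_i W_ji * Y_i
    Z = V0 + V @ h          # Z_k = V_k0 + sum_j V_kj * h_j
    return Z

def sse_error(outputs, targets):
    """Sum-of-squares error E minimized by adjusting the weights V and W."""
    return 0.5 * np.sum((targets - outputs) ** 2)

# Example with the dimensions used here: 15 inputs, 10 hidden units, 8 classes.
rng = np.random.default_rng(0)
Y = rng.random(15)
W, W0 = rng.random((10, 15)), rng.random(10)
V, V0 = rng.random((8, 10)), rng.random(8)
Z = ann_forward(Y, W, W0, V, V0)
```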

Figure 5. Generalized Artificial Neural Network model

Table 3. Employed MLP Tuning Parameters for the Citrus Leaf Image Dataset.

Input layers  Hidden layers  Neurons  Learning rate  Momentum  Validation threshold  Epochs  Output layers
15            1              10       0.3            0.2       20                    500     8

The implemented MLP model is shown graphically in figure 6 with all tuning parameters. The first layer, shown in green, is the input layer of 15 features; the second layer, shown in red, is the hidden layer with 11 neurons; the third layer, with 8 yellow nodes, shows the weighted sums coming from the hidden layer; and the final yellow layer is the output layer of the 8 citrus classes. At the bottom of the deployed model, the total of 500 epochs is shown together with the learning rate, momentum, and error rate.

Figure 6. The Implemented MLP Model for Citrus Leaf Images Dataset
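The tuning parameters of Table 3 map naturally onto a standard MLP implementation. The sketch below uses scikit-learn's MLPClassifier as a stand-in (the study's own tool is not given in code form), so the mapping of the validation threshold onto n_iter_no_change is an assumption.

```python
from sklearn.neural_network import MLPClassifier

# Parameters from Table 3: 15 inputs, 1 hidden layer of 10 neurons,
# learning rate 0.3, momentum 0.2, validation threshold 20, 500 epochs, 8 output classes.
mlp = MLPClassifier(
    hidden_layer_sizes=(10,),
    solver="sgd",
    learning_rate_init=0.3,
    momentum=0.2,
    max_iter=500,
    early_stopping=True,
    n_iter_no_change=20,     # assumed analogue of the validation threshold
    random_state=0,
)

# With X (6400 x 15) and y (8 classes) as prepared above:
# mlp.fit(X_train, y_train); accuracy = mlp.score(X_test, y_test)
```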

RESULTS AND DISCUSSION:

For this study, four machine vision classifiers, namely NB, RF, J48, and MLP, have been used for the eight citrus plant varieties using the reduced multi-feature dataset. The deployed dataset has been evaluated with the 10-fold (90-10) cross-validation method. As discussed earlier, four ROIs of pixel size (64 by 64), (128 by 128), (256 by 256), and (512 by 512) have been taken for each leaf image. In the first attempt, the two smaller ROIs, that is, (64 by 64) and (128 by 128), did not give good results with the employed classifiers, with accuracy of less than 60 percent. In the second attempt, the same strategy was applied to the ROI of size (256 by 256), and overall classification accuracies of 87.03 percent, 81.75 percent, 80.34 percent, and 75.70 percent were observed for the employed classifiers MLP, RF, J48, and NB respectively. These results are improved compared with the previous ones, but still not very inspiring. It has been observed that only MLP shows a better accuracy, 87.03 percent, among the employed classifiers. The overall accuracy results of MLP and the other machine vision classifiers on ROI (256 by 256), together with other performance evaluation measures, are shown in table 4.

Table 4. The Overall Classification Accuracy Table of Employed Machine Vision Classifiers on ROIs
(256 by 256)

Classifiers              Kappa Statistics  TP Rate  FP Rate  ROC    MAE     RMSE    Time (sec)  OA
Multilayer Perceptron    0.851             0.87     0.015    0.935  0.0412  0.1791  9.86        87.03
Random Forest            0.8090            0.818    0.045    0.962  0.0730  0.2550  6.02        81.75
J48                      0.8013            0.810    0.050    0.890  0.0850  0.2760  12.20       80.34
Naive Bayes              0.7360            0.757    0.060    0.941  0.0988  0.3062  16.03       75.70

The results of all the implemented MV classifiers on ROI (256 by 256) are shown in figure 7. It has been observed that, among the employed classifiers on ROI (256 by 256), the MLP classifier shows a relatively better classification accuracy of 87.03 percent compared with the other deployed classifiers.

Figure 7. The overall accuracy Graph of Employed MV Classifiers on ROIs (256 by 256)

Similarly, the confusion matrix (CM) of the MLP classifier for the optimized multi-feature dataset on ROIs (256 by 256) is shown in table 5. The diagonal of the confusion table shows the instances classified into the appropriate classes, while the other entries show the misclassifications among those classes; the table contains the actual and predicted data for the MLP classifier. The MLP classifier has shown a relatively better overall accuracy among the implemented classifiers. The accuracy results of the eight citrus plant varieties, that is, grapefruit, Moussami, Malta, Lemon, Kinnow, Local lemon, Fuetrells, and Malta Shakri, are 86.875 percent, 84.375 percent, 78.75 percent, 82.875 percent, 81.25 percent, 75.625 percent, 76.875 percent, and 74.375 percent respectively. The accuracy results of the eight citrus plant varieties using the MLP classifier on ROIs (256 by 256) are shown graphically in figure 8.

Table 5. Confusion Table of MLP Classifier on ROIs (256 by 256).

Classes        G-fruit  Moussami  Malta  Lemon  Kinnow  Local Lemon  Fuetrells  Malta Shakri  Total
G-fruit 695 20 8 10 26 21 10 10 800
Moussami 20 675 25 0 27 15 8 30 800
Malta 14 15 630 20 40 10 60 11 800
Lemon 5 45 15 663 0 30 35 7 800
Kinnow 10 13 15 28 650 48 20 16 800
Local Lemon 45 50 0 5 45 605 40 10 800
Fuetrells 7 30 45 28 0 20 615 55 800
Malta Shakri 0 0 6 0 8 13 0 595 800
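The per-variety accuracies quoted above follow directly from this matrix as the row-normalized diagonal. A short sketch of that calculation, using the Table 5 values, is:

```python
import numpy as np

# Rows/columns: G-fruit, Moussami, Malta, Lemon, Kinnow, Local Lemon, Fuetrells, Malta Shakri
cm = np.array([
    [695,  20,   8,  10,  26,  21,  10,  10],
    [ 20, 675,  25,   0,  27,  15,   8,  30],
    [ 14,  15, 630,  20,  40,  10,  60,  11],
    [  5,  45,  15, 663,   0,  30,  35,   7],
    [ 10,  13,  15,  28, 650,  48,  20,  16],
    [ 45,  50,   0,   5,  45, 605,  40,  10],
    [  7,  30,  45,  28,   0,  20, 615,  55],
    [  0,   0,   6,   0,   8,  13,   0, 595],
])

per_class = 100 * cm.diagonal() / cm.sum(axis=1)   # e.g. 695/800 = 86.875 %
print(per_class)
```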

Figure 8. Classification Accuracy Graph of Eight Citrus Varieties of Leaf Images Using the MLP Classifier on ROI (256 by 256)

Finally, since the overall accuracy results on the ROIs (64 by 64), (128 by 128), and (256 by 256) were not so impressive, the same strategy with the same deployed classifiers was applied to the ROIs (512 by 512) of the eight citrus varieties dataset, and very promising results were observed, with classification accuracy varying from 91.93 percent to 98.14 percent. The overall classification accuracies of the employed classifiers MLP, RF, J48, and NB have been observed as 98.14 percent, 97.51 percent, 91.93 percent, and 96.36 percent respectively. These results were very inspiring, and it has been observed that MLP showed the best accuracy among all the implemented classifiers. The overall accuracy results of MLP and the other machine vision classifiers on ROI (512 by 512), together with the performance evaluation parameters, are shown in table 6.

Table 6. The Overall Classification Accuracy Table of Employed Machine Vision Classifiers on ROIs (512 by 512).

Classifiers              Kappa Statistics  TP Rate  FP Rate  ROC    MAE     RMSE    Time (sec)  OA
Multilayer Perceptron    0.9787            0.981    0.003    0.986  0.0145  0.0624  1.23        98.14
Random Forest            0.9716            0.975    0.003    0.996  0.0213  0.081   0.02        97.512
J48                      0.9077            0.919    0.012    0.958  0.0231  0.1402  0.01        91.93
Naive Bayes              0.9572            0.964    0.004    0.999  0.0092  0.0953  0.02        96.36

It has been observed that, among the employed classifiers MLP, RF, J48, and NB on ROI (512 by 512), the MLP classifier has shown an excellent overall accuracy of 98.14 percent compared with the other deployed classifiers, as shown in figure 9.

Figure 9. The Overall Classification accuracy Graph of Employed Machine Vision Classifiers on ROIs
(512 by 512).

Similarly, the confusion matrix (CM) of the MLP classifier on ROI (512 by 512) is described in detail in table 7.

Table 7. The Confusion Matrix of the Multi-Feature Dataset for the MLP Classifier on ROIs (512 by 512).

Classes        G-fruit  Moussami  Malta  Lemon  Kinnow  Local Lemon  Fuetrells  Malta Shakri  Total
G-fruit 784 0 1 10 5 0 0 0 800
Moussami 1 790 0 0 5 0 4 0 800
Malta 1 0 794 2 0 0 3 0 800
Lemon 1 2 0 780 0 10 0 7 800
Kinnow 0 5 5 0 776 0 5 9 800
Local Lemon 0 6 7 5 5 767 0 10 800
Fuetrells 3 3 0 0 0 10 764 20 800
Malta Shakri 0 1 0 0 2 1 0 795 800

The classification accuracies of the eight citrus plant varieties, that is, grapefruit, Moussami, Malta, Lemon, Kinnow, Local lemon, Fuetrells, and Malta Shakri, have been observed as 98 percent, 98.75 percent, 99.25 percent, 97.5 percent, 97 percent, 95.875 percent, 95.5 percent, and 99.375 percent respectively. The diagonal of confusion table 7 shows the instances classified into the appropriate classes, while the other entries show the misclassifications among those classes. The accuracy results of the eight citrus plant varieties using the MLP classifier on ROIs (512 by 512) are shown in figure 10.

Figure 10. Classification Accuracy Graph of Eight Citrus Varieties of leaf images Using MLP classifier
on ROIs (512 by 512)

Finally, a comparative classification graph of the citrus plant varieties for the multi-feature dataset using the MLP classifier is shown in figure 11. This graph shows the better overall accuracy (blue, series 1) of the citrus varieties obtained with the MLP classifier on ROIs (512 by 512) compared with the results on ROIs (256 by 256) (dark brown, series 2).

CONCLUSIONS:
This study has focused on the classification of eight citrus plant varieties based on a multi-feature leaf dataset. Four machine vision classifiers, that is, MLP, RF, NB, and J48, have been employed using this optimized multi-feature dataset. The optimized multi-feature dataset has been evaluated not only in terms of overall classification accuracy but also with several other performance evaluation parameters, as discussed above in the results and discussion section. The employed classifiers have shown satisfactory results, but the multilayer perceptron (MLP) results were exceptionally high among all the implemented classifiers. It has been observed that, after deploying the MLP classifier, an overall accuracy of 98.14 percent has been achieved on these eight citrus plant varieties. It is necessary to note that if the multi-feature space had not been optimized by employing the supervised correlation-based feature selection (CFS) technique to reduce the dimensionality of the overall feature space, it would not have been possible to achieve such excellent accuracy results within the short execution-time limits. In the future, the effect of variation in texture feature values with illumination factors will be verified.

Figure 11. A Comparison Classification Accuracy Graph of Eight Citrus Varieties Using MLP
Classifier on ROI (256 by 256) and (512 by 512).

REFERENCES
1. B. Ellis, D. Daly, L. Hickey, K. Johnson, J. Mitchell, P. Wilf and S. Wing, "Manual of leaf architecture," Cornell University Press: Ithaca, NY, 2009.
2. S. A. Miller, F. D. Beed and C. L. Harmon, "Plant disease diagnostic capabilities and networks," Annual Review of Phytopathology, vol. 47, pp. 15-38, 2009.
3. S. Gorinstein, A. Caspi, I. Libman, H. T. Lerner, D. Huang, H. Leontowicz, M. Leontowicz, Z. Tashma, E. Katrich, and S. Feng, "Red grapefruit positively influences serum triglyceride level in patients suffering from coronary atherosclerosis: studies in vitro and in humans," Journal of Agricultural and Food Chemistry, vol. 54, no. 5, pp. 1887-1892, 2006.
4. G. Johnson, "Pakistan citrus industry challenges: Opportunities for Australia-Pakistan collaboration in research, development, and extension," in Pakistan: Citrus Industry Survey and Workshops, 2006.
5. D. Al Bashish, M. Braik, and S. Bani-Ahmad, "Detection and classification of leaf diseases using K-means-based segmentation and neural-networks-based classification," Information Technology Journal, vol. 10, no. 2, pp. 267-275, 2011.
6. P. Chaudhary, A. K. Chaudhari, A. Cheeran and S. Godara, "Color transform based approach for disease spot detection on a plant leaf," International Journal of Computer Science and Telecommunications, vol. 3, no. 6, pp. 65-70, 2012.
7. S. Wang, D. He, W. Li, and Y. Wang, "Plant leaf disease recognition based on kernel K-means clustering algorithm," Nongye Jixie Xuebao (Transactions of the Chinese Society for Agricultural Machinery), vol. 40, no. 3, pp. 152-155, 2009.
8. N. Kumar, P. N. Belhumeur, A. Biswas, D. W. Jacobs, W. J. Kress, I. C. Lopez and J. V. Soares, "Leafsnap: A computer vision system for automatic plant species identification," in Computer Vision - ECCV 2012, pp. 502-516, Springer, 2012.
9. S. G. Wu, F. S. Bao, E. Y. Xu, Y.-X. Wang, Y.-F. Chang and Q.-L. Xiang, "A leaf recognition algorithm for plant classification using the probabilistic neural network," in 2007 IEEE International Symposium on Signal Processing and Information Technology, pp. 11-16, IEEE, 2007.
10. M. S. Shifa, M. S. Naweed, M. Omar, M. Z. Jhandir and T. Ahmed, "Classification of cotton and sugarcane plants on the basis of their spectral behavior," Pakistan Journal of Botany, vol. 43, no. 4, pp. 2119-2125, 2011.
11. E. Rehmani, M. Naweed, M. Shahid, S. Qadri, M. Ullah, and Z. Gilani, "A comparative study of crop classification by using radiometric and photographic data," Sindh University Research Journal-SURJ (Science Series), vol. 47, no. 2, 2015.
12. F. Kurtulmus, I. Alibas, and I. Kavdir, "Classification of pepper seeds using machine vision based on neural network," International Journal of Agricultural and Biological Engineering, vol. 9, no. 1, p. 51, 2016.
13. M. Shahid, M. Naweed, S. Qadri and E. R. Mutiullah, "Varietal discrimination of wheat seeds by machine vision approach," Life Science Journal, vol. 11, no. 6, pp. 245-252, 2014.
14. P. M. Szczypinski, A. Klepaczko, and P. Zapotoczny, "Identifying barley varieties by computer vision," Computers and Electronics in Agriculture, vol. 110, pp. 1-8, 2015.
15. S. B. Patil and S. K. Bodhe, "Leaf disease severity measurement using image processing," International Journal of Engineering and Technology, vol. 3, no. 5, pp. 297-301, 2011.
16. M. P. Babu and B. S. Rao, "Leaves recognition using back propagation neural network - advice for pest and disease control on crops," India Kisan.Net: Expert Advisory System, 2007.
17. R. Pydipati, T. Burks, and W. Lee, "Identification of citrus disease using color texture features and discriminant analysis," Computers and Electronics in Agriculture, vol. 52, no. 1, pp. 49-59, 2006.
18. M. Murat, S.-W. Chang, A. Abu, H. J. Yap and K.-T. Yong, "Automated classification of tropical shrub species: a hybrid of leaf shape and machine learning approach," PeerJ, vol. 5, e3792, 2017.
19. S. H. Lee, C. S. Chan, S. J. Mayo and P. Remagnino, "How deep learning extracts and learns leaf features for plant classification," Pattern Recognition, vol. 71, pp. 1-13, 2017.
20. P. Barré, B. C. Stöver, K. F. Müller and V. Steinhage, "LeafNet: A computer vision system for automatic plant species identification," Ecological Informatics, 2017.
21. W. Qureshi, A. Payne, K. Walsh, R. Linker, O. Cohen and M. Dailey, "Machine vision for counting fruit on mango tree canopies," Precision Agriculture, vol. 18, no. 2, pp. 224-244, 2017.
22. A. Tharwat, T. Gaber and A. E. Hassanien, "One-dimensional vs. two-dimensional based features: Plant identification approach," Journal of Applied Logic, vol. 24, pp. 15-31, 2017.
23. E. Elhariri, N. El-Bendary and A. E. Hassanien, "Plant classification system based on leaf features," in 2014 9th International Conference on Computer Engineering and Systems (ICCES), pp. 271-276, IEEE, 2014.
24. S. Qadri, D. Khan, F. Ahmad, S. Qadri, M. U. Rehman, S. Muhammad and M. Ullah, "A novel optimized land cover classification framework using data mining techniques," Sindh University Research Journal-SURJ (Science Series), vol. 49, no. 2, 2017.
25. S. Qadri, D. M. Khan, F. Ahmad, S. F. Qadri, M. E. Babar, M. Shahid, M. Ul-Rehman, A. Razzaq, S. Shah Muhammad and M. Fahad, "A comparative study of land cover classification by using multispectral and texture data," BioMed Research International, vol. 2016, 2016.
26. S. Qadri, D. M. Khan, S. F. Qadri, A. Razzaq, N. Ahmad, M. Jamil, A. Nawaz Shah, S. Shah Muhammad, K. Saleem and S. A. Awan, "Multisource data fusion framework for land use/land cover classification using machine vision," Journal of Sensors, vol. 2017, 2017.
27. R. S. C. Boss, K. Thangavel and D. A. P. Daniel, "Mammogram image segmentation using fuzzy clustering," in 2012 International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME), pp. 290-295, IEEE, 2012.
28. A. Eklund, P. Dufort, D. Forsberg and S. M. LaConte, "Medical image processing on the GPU: past, present and future," Medical Image Analysis, vol. 17, no. 8, pp. 1073-1094, 2013.
29. M. Strzelecki, P. Szczypinski, A. Materka and A. Klepaczko, "A software tool for automatic classification and segmentation of 2D/3D medical images," Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, vol. 702, pp. 137-140, 2013.
30. R. B. Foundation, "AgriForming," Ruqaiyya Bano Foundation, October 28, 2017.
31. R. O. Duda, P. E. Hart and D. G. Stork, Pattern Classification, John Wiley and Sons, 2012.
32. S. A. M. Rodrigues, "Motivations, experiences and potential impacts of visitors to a monastery in New Zealand: A case study," University of Waikato, 2012.
33. S. C. Park, J. Pu and B. Zheng, "Improving performance of computer-aided detection scheme by combining results from two machine learning classifiers," Academic Radiology, vol. 16, no. 3, pp. 266-274, 2009.
