• Open access
  • Published: 24 February 2021

Plant diseases and pests detection based on deep learning: a review

  • Jun Liu   ORCID: orcid.org/0000-0001-8769-5981 &
  • Xuewei Wang

Plant Methods volume 17, Article number: 22 (2021)


Plant diseases and pests are important factors determining the yield and quality of plants. Their identification can be carried out by means of digital image processing, and in recent years deep learning has made breakthroughs in this field, far surpassing traditional methods. How to use deep learning technology for plant diseases and pests identification has therefore become a research issue of great concern. This review defines the plant diseases and pests detection problem and compares deep learning approaches with traditional detection methods. According to differences in network structure, it surveys recent deep learning-based research from three aspects, classification networks, detection networks and segmentation networks, and summarizes the advantages and disadvantages of each kind of method. Common datasets are introduced, and the performance of existing studies is compared. On this basis, the review discusses possible challenges in practical applications of plant diseases and pests detection based on deep learning, proposes possible solutions and research ideas for these challenges, and gives several suggestions. Finally, it analyses and forecasts future trends in plant diseases and pests detection based on deep learning.

Plant diseases and pests detection is an important research topic in the field of machine vision. It is a technology that uses machine vision equipment to acquire images and judge whether diseases and pests are present in the collected plant images [ 1 ]. At present, machine vision-based plant diseases and pests detection equipment has been initially applied in agriculture and has, to some extent, replaced traditional identification by the naked eye.

Traditional machine vision-based plant diseases and pests detection methods usually rely on conventional image processing algorithms or manually designed features plus classifiers [ 2 ]. Such methods exploit the distinctive properties of plant diseases and pests to design the imaging scheme, choosing an appropriate light source and shooting angle to obtain images with uniform illumination. Although a carefully constructed imaging scheme can greatly reduce the difficulty of designing a classical algorithm, it also increases the application cost. At the same time, it is often unrealistic to expect classical algorithms to completely eliminate the impact of scene changes on recognition results in a natural environment [ 3 ]. In a real, complex natural environment, plant diseases and pests detection faces many challenges: small differences between the lesion area and the background, low contrast, large variations in the scale and type of lesion areas, and considerable noise in lesion images. In addition, many disturbances arise when collecting plant diseases and pests images under natural light conditions. Under these circumstances, traditional classical methods often appear helpless, and it is difficult for them to achieve good detection results.

In recent years, deep learning models represented by the convolutional neural network (CNN) have been applied successfully in many fields of computer vision, for example traffic detection [ 4 ], medical image recognition [ 5 ], scene text detection [ 6 ], expression recognition [ 7 ] and face recognition [ 8 ]. Several plant diseases and pests detection methods based on deep learning have been applied in real agricultural practice, and some domestic and foreign companies have developed a variety of deep learning-based WeChat applets and photo-recognition apps for plant diseases and pests. Therefore, plant diseases and pests detection methods based on deep learning not only have important academic research value, but also a very broad market application prospect.

In view of the lack of a comprehensive and detailed discussion of plant diseases and pests detection methods based on deep learning, this study surveys the relevant literature from 2014 to 2020, aiming to help researchers quickly and systematically understand the methods and technologies in this field. The content of this study is arranged as follows: “ Definition of plant diseases and pests detection problem ” section gives the definition of the plant diseases and pests detection problem; “ Image recognition technology based on deep learning ” section introduces image recognition technology based on deep learning in detail; “ Plant diseases and pests detection methods based on deep learning ” section analyses the three kinds of plant diseases and pests detection methods based on deep learning according to network structure, namely classification, detection and segmentation networks; “ Dataset and performance comparison ” section introduces datasets for plant diseases and pests detection and compares the performance of existing studies; “ Challenges ” section puts forward the challenges of plant diseases and pests detection based on deep learning; “ Conclusions and future directions ” section discusses possible research focuses and development directions in the future.

Definition of plant diseases and pests detection problem

Definition of plant diseases and pests

Plant diseases and pests are a kind of natural disaster that affects the normal growth of plants, and may even cause plant death, throughout the whole growth process from seed development to seedling growth. In machine vision tasks, plant diseases and pests tend to be concepts drawn from human experience rather than purely mathematical definitions.

Definition of plant diseases and pests detection

Compared with the well-defined classification, detection and segmentation tasks in computer vision [ 9 ], the requirements of plant diseases and pests detection are very general. In fact, they can be divided into three different levels: what, where and how [ 10 ]. In the first stage, “what” corresponds to the classification task in computer vision: as shown in Fig.  1 , the label of the category to which the image belongs is given. The task at this stage can be called classification, and it only gives the category information of the image. In the second stage, “where” corresponds to the localization task in computer vision, and the localization at this stage is detection in the strict sense: it not only determines what types of diseases and pests exist in the image, but also gives their specific locations. As shown in Fig.  1 , the plaque area of gray mold is marked with a rectangular box. In the third stage, “how” corresponds to the segmentation task in computer vision: as shown in Fig.  1 , the lesions of gray mold are separated from the background pixel by pixel, and information such as the length, area and location of the lesions can be further obtained, which can assist higher-level severity evaluation of plant diseases and pests.

Classification describes the image globally through feature expression and then determines whether a certain kind of object exists in the image by means of a classification operation, whereas object detection focuses on local description, that is, answering what object exists at what position in an image. Therefore, in addition to feature expression, object structure is the most obvious feature that distinguishes object detection from object classification: feature expression is the main research line of object classification, while structure learning is the research focus of object detection.

Although the functional requirements and objectives of the three stages of plant diseases and pests detection are different, in fact the three stages are mutually inclusive and can be converted into one another. For example, the “where” of the second stage contains the “what” process of the first stage, and the “how” of the third stage can accomplish the “where” task of the second stage. Also, the “what” of the first stage can achieve the goals of the second and third stages through certain methods. Therefore, the problem in this study is collectively referred to as plant diseases and pests detection by convention in the following text, and the terminology is differentiated only when different network structures and functions are adopted.

Figure 1

Comparison with traditional plant diseases and pests detection methods

To better illustrate the characteristics of plant diseases and pests detection methods based on deep learning, a comparison with traditional plant diseases and pests detection methods, drawn from existing references [ 11 , 12 , 13 , 14 , 15 ], is given from four aspects: essence, method, required conditions and applicable scenarios. Detailed comparison results are shown in Table 1 .

Image recognition technology based on deep learning

Compared with other image recognition methods, image recognition technology based on deep learning does not need to extract specific features manually; appropriate features are found through iterative learning. The approach can acquire global and contextual features of images, and it has strong robustness and higher recognition accuracy.

Deep learning theory

The concept of deep learning (DL) originated from a paper published in Science by Hinton et al. [ 16 ] in 2006. The basic idea of deep learning is to use a neural network for data analysis and feature learning: data features are extracted by multiple hidden layers, each of which can be regarded as a perceptron. The perceptrons extract low-level features, which are then combined to obtain abstract high-level features; this can significantly alleviate the local minimum problem. Deep learning overcomes the disadvantage of traditional algorithms that rely on artificially designed features and has attracted more and more attention from researchers. It has now been successfully applied in computer vision, pattern recognition, speech recognition, natural language processing and recommendation systems [ 17 ].

Traditional image classification and recognition methods based on manually designed features can only extract low-level features, and it is difficult for them to extract deep and complex image feature information [ 18 ]. Deep learning can overcome this bottleneck: it can perform unsupervised learning directly from the original image to obtain multi-level image feature information, from low-level features through intermediate features to high-level semantic features. Traditional plant diseases and pests detection algorithms mainly adopt manually designed features, which are difficult to devise, depend on experience and luck, and cannot be learned and extracted automatically from the original image. By contrast, deep learning can automatically learn features from large amounts of data without manual intervention. A deep model is composed of multiple layers, has good autonomous learning and feature expression abilities, and can automatically extract image features for image classification and recognition. Therefore, deep learning can play a great role in the field of plant diseases and pests image recognition. A number of well-known deep neural network models have been developed, including the deep belief network (DBN), deep Boltzmann machine (DBM), stacked de-noising autoencoder (SDAE) and deep convolutional neural network (CNN) [ 19 ]. In the area of image recognition, using these models to automate feature extraction from a high-dimensional feature space offers significant advantages over traditional manual feature design. In addition, as the number of training samples grows and computational power increases, the characterization power of deep neural networks is being further improved. Nowadays, the boom in deep learning is sweeping both industry and academia, and the performance of deep neural network models is significantly ahead of that of traditional models. In recent years, the most popular deep learning framework has been the deep convolutional neural network.

  • Convolutional neural network

The convolutional neural network, abbreviated as CNN, has a complex network structure and can perform convolution operations. As shown in Fig.  2 , a CNN model is composed of an input layer, convolution layers, pooling layers, fully connected layers and an output layer. In one model, convolution layers and pooling layers alternate several times, and when the neurons of a convolution layer are connected to the neurons of a pooling layer, no full connection is required. CNN is a popular model in the field of deep learning. The reason lies in the huge model capacity and the rich information brought about by its basic structural characteristics, which give CNN an advantage in image recognition. At the same time, the successes of CNN in computer vision tasks have boosted the growing popularity of deep learning.

Figure 2. The basic structure of CNN

In the convolution layer, a convolution kernel is defined first. The kernel can be considered a local receptive field, and the local receptive field is the greatest advantage of the convolutional neural network. When processing data, the kernel slides over the feature map to extract part of the feature information. After feature extraction in the convolution layer, the neurons are input into the pooling layer, which extracts features again. At present, the commonly used pooling methods compute the mean, maximum or a random value of all values in the local receptive field [ 20 , 21 ]. After passing through several convolution and pooling layers, the data enter the fully connected layer, whose neurons are fully connected to the neurons of the preceding layer. Finally, the data in the fully connected layer are classified by the softmax method, and the values are transmitted to the output layer as results.
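To make this structure concrete, the following is a minimal sketch of such a network in PyTorch; the layer sizes, class count and input resolution are illustrative assumptions rather than a configuration taken from any study reviewed here.

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """A minimal CNN: alternating convolution/pooling, then a fully connected classifier."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # convolution layer
            nn.ReLU(),
            nn.MaxPool2d(2),                              # pooling layer (max pooling)
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Fully connected layer mapping the flattened feature map to class scores.
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x):
        x = self.features(x)                  # feature extraction
        x = torch.flatten(x, 1)
        logits = self.classifier(x)
        return torch.softmax(logits, dim=1)   # softmax classification, as described above

model = SimpleCNN(num_classes=5)
probs = model(torch.randn(1, 3, 224, 224))    # one hypothetical 224x224 RGB leaf image
```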

Open source tools for deep learning

The commonly used third-party open source tools for deep learning are TensorFlow [ 22 ], Torch/PyTorch [ 23 ], Caffe [ 24 ] and Theano [ 25 ]. The characteristics of each open source tool are shown in Table 2 .

All four commonly used deep learning open source tools support cross-platform operation; supported platforms include Linux, Windows, iOS and Android. Torch/PyTorch and TensorFlow have good scalability, support a large number of third-party libraries and deep network structures, and offer the fastest training speed when training large CNNs on GPUs.

Plant diseases and pests detection methods based on deep learning

This section gives a summary overview of plant diseases and pests detection methods based on deep learning. Since their goals are consistent with those of general computer vision tasks, these methods can be seen as applications of the relevant classical networks in the field of agriculture. As shown in Fig.  3 , the networks can be subdivided into classification networks, detection networks and segmentation networks according to their structures, and each type of method is further subdivided into several sub-methods according to its processing characteristics.

Figure 3. Framework of plant diseases and pests detection methods based on deep learning

Classification network

In a real natural environment, the great differences in shape, size, texture, color, background, layout and imaging illumination of plant diseases and pests make recognition a difficult task. Owing to the strong feature extraction capability of CNNs, CNN-based classification networks have become the most commonly used pattern in plant diseases and pests classification. Generally, the feature extraction part of a CNN classification network consists of cascaded convolution and pooling layers, followed by a fully connected layer (or average pooling layer) plus a softmax structure for classification. Most existing plant diseases and pests classification networks use the mature network structures of computer vision, including AlexNet [ 26 ], GoogLeNet [ 27 ], VGGNet [ 28 ], ResNet [ 29 ], Inception V4 [ 30 ], DenseNets [ 31 ], MobileNet [ 32 ] and SqueezeNet [ 33 ]. Some studies have also designed network structures based on practical problems [ 34 , 35 , 36 , 37 ]. Given a test image, the classification network analyses it and returns a label that classifies the image. According to the tasks achieved, classification network methods can be subdivided into three subcategories: using the network as a feature extractor, using the network for classification directly, and using the network for lesion localization.

Using network as feature extractor

In early studies on plant diseases and pests classification based on deep learning, many researchers took advantage of the powerful feature extraction capability of CNNs and combined the networks with traditional classifiers [ 38 ]. First, the images are input into a pretrained CNN to obtain image characterization features, and the acquired features are then input into a conventional machine learning classifier (e.g., an SVM) for classification. Yalcin et al. [ 39 ] proposed a convolutional neural network architecture to extract image features and performed experiments with SVM classifiers with different kernels as well as feature descriptors such as LBP and GIST; the experimental results confirmed the effectiveness of the approach. Fuentes et al. [ 40 ] put forward the idea of a CNN-based meta-architecture with different feature extractors, where input images including healthy and infected plants were identified as their respective classes after going through the meta-architecture. Hasan et al. [ 41 ] identified and classified nine different types of rice diseases by extracting features with a DCNN model and inputting them into an SVM, achieving an accuracy of 97.5%.
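A minimal sketch of this pattern is given below: a torchvision CNN pretrained on ImageNet acts as a fixed feature extractor, and a scikit-learn SVM is trained on the extracted features. ResNet-18 stands in for whichever backbone a given study used, and the random tensors are placeholders for real preprocessed leaf images.

```python
import torch
import torchvision.models as models
from sklearn.svm import SVC

# Pretrained CNN with its classification head removed: a pure feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

# Placeholder data; real code would load preprocessed disease/pest images here.
X_train, y_train = torch.randn(8, 3, 224, 224), [0, 1] * 4
X_test, y_test = torch.randn(4, 3, 224, 224), [0, 1] * 2

with torch.no_grad():
    feats_train = backbone(X_train).numpy()   # 512-dimensional deep features
    feats_test = backbone(X_test).numpy()

svm = SVC(kernel="rbf")          # conventional classifier on top of deep features
svm.fit(feats_train, y_train)
print("test accuracy:", svm.score(feats_test, y_test))
```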

Using network for classification directly

Directly using a classification network to classify lesions was the earliest common way of applying CNNs to plant diseases and pests detection. According to the characteristics of existing work, it can be further subdivided into original image classification, classification after locating the region of interest (ROI), and multi-category classification.

Original image classification. That is, the complete collected plant diseases and pests image is put directly into the network for learning and training. Thenmozhi et al. [ 42 ] proposed an effective deep CNN model, using transfer learning to fine-tune a pre-trained model. Insect species were classified on three public insect datasets with accuracies of 96.75%, 97.47% and 95.97%, respectively. Fang et al. [ 43 ] used ResNet50 for plant diseases and pests detection; the focal loss function was used instead of the standard cross-entropy loss, the Adam optimization method was used to identify the leaf disease grade, and the accuracy reached 95.61%.

Classification after locating the ROI. For the acquired image, the question is whether a lesion is present in a particular area, so the region of interest (ROI) is often obtained in advance and then input into the network to judge the category of diseases and pests. Nagasubramanian et al. [ 44 ] used a new three-dimensional deep convolutional neural network (DCNN) and saliency map visualization to identify healthy and infected samples of soybean stem rot, and the classification accuracy reached 95.73%.

Multi-category classification. When the number of plant diseases and pests classes to be distinguished exceeds two, the conventional approach is the same as original image classification, that is, the number of output nodes of the network is the number of plant diseases and pests classes + 1 (including the normal class). However, multi-category classification methods often first use a basic network to separate lesions from normal samples, and then share the feature extraction part of the same network while modifying or adding classification branches for the lesion categories. This is equivalent to preparing pre-trained weights, obtained by binary training on normal versus diseased samples, for the subsequent multi-objective plant diseases and pests classification network. Picon et al. [ 45 ] proposed a CNN architecture to identify 17 diseases in 5 crops, which seamlessly integrates context metadata and allows training of a single multi-crop model. The model achieves the following goals: (a) it obtains richer and more robust shared visual features than the corresponding single-crop models; (b) it is not affected by different diseases in which different crops have similar symptoms; (c) it seamlessly integrates context to perform crop-conditional disease classification. Experiments show that the proposed model alleviates the data imbalance problem, and its average balanced accuracy of 0.98 is superior to other methods, eliminating 71% of classifier errors.

Using network for lesions location

Generally, a classification network can only complete classification at the image label level. In fact, it can also achieve lesion localization and even pixel-by-pixel classification when combined with other techniques and methods. According to the means used, such approaches can be divided into three forms: sliding window, heatmap and multi-task learning networks.

Sliding window. This is the simplest and most intuitive method for coarse lesion localization. A window smaller than the image slides over the original image with overlap, the image patch inside the window is input into the classification network for plant diseases and pests detection, and finally the results from all windows are combined to obtain the lesion locations. Chen et al. [ 46 ] used a sliding-window CNN classification network to build a framework for automatic feature learning, feature fusion, recognition and localization regression of plant diseases and pests species; the recognition rate for 38 common symptoms in the field was 50–90%.
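The sketch below illustrates the scheme; `classifier` is assumed to be a trained network that returns lesion/background probabilities, and the window size, stride and threshold are illustrative values.

```python
import torch

def sliding_window_detect(image, classifier, win=64, stride=32, thresh=0.5):
    """Coarse lesion localization: score overlapping crops with a classifier."""
    _, H, W = image.shape                      # image: tensor of shape (3, H, W)
    hits = []
    for top in range(0, H - win + 1, stride):
        for left in range(0, W - win + 1, stride):
            crop = image[:, top:top + win, left:left + win].unsqueeze(0)
            with torch.no_grad():
                p_lesion = classifier(crop)[0, 1].item()  # assume class 1 = lesion
            if p_lesion > thresh:
                hits.append((left, top, win, win, p_lesion))
    return hits   # overlapping windows can then be merged into lesion regions
```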

Heatmap. A heatmap is an image that reflects the importance of each region: the darker the color, the more important the region. In plant diseases and pests detection, the darker the color in the heatmap, the greater the probability that the region is a lesion. In 2017, DeChant et al. [ 47 ] trained CNNs to produce heatmaps showing the probability of infection in each region of maize disease images, and these heatmaps were used to classify the complete images, dividing them into those containing and those not containing infected leaves. At runtime, it took about 2 min to generate a heatmap for one image (1.6 GB of memory) and less than one second to classify a set of three heatmaps (800 MB of memory). Experiments showed an accuracy of 96.7% on the test dataset. In 2019, Wiesner-Hanks et al. [ 48 ] used the heatmap method to obtain accurate contours of maize disease areas; the model could depict lesions down to millimeter scale in images collected by UAVs, with an accuracy of 99.79%, the best scale of aerial plant disease detection achieved so far.

Multi-task learning network. A pure classification network without any other techniques can only realize image-level classification. Therefore, to accurately locate plant diseases and pests, the designed network often adds an extra branch, with the two branches sharing the feature extraction results. The network then has both classification and segmentation outputs, forming a multi-task learning network that takes into account the characteristics of both networks. For the segmentation branch, each pixel in the image can be used as a training sample, so the multi-task learning network not only uses the segmentation branch to output specific lesion segmentation results, but also greatly reduces the sample requirements of the classification network. Ren et al. [ 49 ] constructed a Deconvolution-Guided VGNet (DGVGNet) model to identify plant leaf diseases that are easily disturbed by shadows, occlusion and light intensity. Deconvolution was used to guide the CNN classifier to focus on the real lesion sites. The test results show that the accuracy of disease class identification is 99.19% and the pixel accuracy of lesion segmentation is 94.66%, and the model is robust under occlusion, low light and other conditions.
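The following is a minimal sketch of such a two-branch design: a shared encoder feeds an image-level classification head and a pixel-level segmentation head. The layer sizes are illustrative assumptions and do not reproduce the DGVGNet architecture.

```python
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Shared feature extractor with classification and segmentation branches."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.encoder = nn.Sequential(          # shared feature extraction
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.cls_head = nn.Sequential(         # image-level disease class
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes),
        )
        self.seg_head = nn.Sequential(         # pixel-level lesion mask
            nn.Conv2d(64, 1, 1),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        f = self.encoder(x)
        return self.cls_head(f), self.seg_head(f)   # one loss per branch in training
```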

To sum up, methods based on classification networks are widely used in practice, and many scholars have carried out applied research on the classification of plant diseases and pests [ 50 , 51 , 52 , 53 ]. At the same time, the different sub-methods have their own advantages and disadvantages, as shown in Table 3 .

Detection network

Object localization is one of the most basic tasks in the field of computer vision and is also the closest to plant diseases and pests detection in the traditional sense; its purpose is to obtain accurate location and category information of the object. At present, object detection methods based on deep learning continue to emerge. Generally speaking, plant diseases and pests detection networks based on deep learning can be divided into two-stage networks, represented by Faster R-CNN [ 54 ], and one-stage networks, represented by SSD [ 55 ] and YOLO [ 56 , 57 , 58 ]. The main difference is that a two-stage network first generates candidate boxes (proposals) that may contain lesions and then performs object detection on them, whereas a one-stage network directly uses the features extracted by the network to predict the locations and classes of lesions.

Plant diseases and pests detection based on two-stage networks

The basic process of the two-stage detection network (Faster R-CNN) is as follows: first obtain the feature map of the input image through the backbone network, then compute anchor box confidences with the region proposal network (RPN) to obtain proposals, then feed the proposal regions' feature maps, after ROI pooling, into the network to fine-tune the initial detection results, and finally obtain the locations and classes of the lesions. Accordingly, methods tailored to plant diseases and pests detection often improve the backbone structure or its feature maps, the anchor ratios, ROI pooling or the loss function. In 2017, Fuentes et al. [ 59 ] first used Faster R-CNN to locate tomato diseases and pests directly; combined with deep feature extractors such as VGG-Net and ResNet, the mAP reached 85.98% on a dataset containing 5000 images of tomato diseases and pests in 9 categories. In 2019, Ozguven et al. [ 60 ] proposed a Faster R-CNN structure for automatic detection of beet leaf spot disease by changing the parameters of the CNN model; 155 images were used for training and testing, and the overall correct classification rate was 95.48%. Zhou et al. [ 61 ] presented a fast rice disease detection method based on the fusion of FCM-KM and Faster R-CNN. Application results on 3010 images showed detection accuracies and times of 96.71%/0.65 s for rice blast, 97.53%/0.82 s for bacterial blight and 98.26%/0.53 s for sheath blight. Xie et al. [ 62 ] proposed a Faster DR-IACNN model based on a self-built grape leaf disease dataset (GLDD) and the Faster R-CNN detection algorithm, introducing the Inception-v1 module, the Inception-ResNet-v2 module and SE blocks. The proposed model achieved higher feature extraction ability, with an mAP of 81.1% and a detection speed of 15.01 FPS. Two-stage detection networks have been steadily improving their detection speed to increase the real-time capability and practicability of detection systems, but compared with one-stage networks they are still not concise enough, and their inference speed remains slower.
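For orientation, the sketch below shows one common way to adapt an off-the-shelf two-stage detector, torchvision's Faster R-CNN implementation, by replacing its box predictor for lesion classes. The class count is a hypothetical example; this is a generic recipe, not the exact setup of any study cited above.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 9 + 1   # e.g., 9 hypothetical disease/pest classes + background
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Swap the COCO box predictor for one sized to the lesion classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# The model can now be fine-tuned on (image, {"boxes", "labels"}) pairs.
```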

Plant diseases and pests detection based on one-stage networks

The one-stage object detection algorithm eliminates the region proposal stage and directly adds a detection head to the backbone network for classification and regression, thus greatly improving the inference speed of the detection network. One-stage detection networks fall into two main families, SSD and YOLO, both of which take the whole image as network input and directly regress the bounding box positions and their categories at the output layer.

Compared with the traditional convolutional neural network, SSD selects VGG16 as the network backbone and adds extra feature layers so as to obtain features from different scales and make predictions. Singh et al. [ 63 ] built the PlantDoc dataset for plant disease detection. Considering that the application should run in real time on a mobile CPU, an application based on MobileNets and SSD was established to reduce the model parameters for detection. Sun et al. [ 64 ] presented an instance detection method with multi-scale feature fusion based on a convolutional neural network, improved on the basis of SSD, to detect maize leaf blight under complex backgrounds. The proposed method combined data preprocessing, feature fusion, feature sharing, disease detection and other steps. The mAP of the new model was higher (rising from 71.80 to 91.83%) than that of the original SSD model, and the FPS also improved (from 24 to 28.4), reaching the standard of real-time detection.

YOLO treats the detection task as a regression problem and uses global information to directly predict the bounding boxes and categories of objects, achieving end-to-end detection with a single CNN. YOLO can be optimized globally and greatly improves detection speed while maintaining high accuracy. Prakruti et al. [ 65 ] presented a method to detect pests and diseases in images captured under uncontrolled conditions in tea gardens. YOLOv3 was used, and about 86% mAP at 50% IOU was achieved while ensuring real-time availability of the system. Zhang et al. [ 66 ] combined spatial pyramid pooling with an improved YOLOv3, implementing deconvolution through a combination of up-sampling and convolution operations, which enables the algorithm to effectively detect small crop pests in the image and mitigates the relatively low recognition accuracy caused by the diversity of crop pest postures and scales. The average recognition accuracy reached 88.07% when testing 20 classes of pests collected in real scenes.
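Both families are typically evaluated at an intersection-over-union (IoU) threshold, such as the 50% IOU cited above. For reference, a minimal IoU computation for axis-aligned boxes in (x1, y1, x2, y2) format looks as follows.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)      # intersection rectangle
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```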

In addition, there are many other studies using detection networks to identify diseases and pests [ 47 , 67 , 68 , 69 , 70 , 71 , 72 , 73 ]. With the development of object detection networks in computer vision, it is expected that more and more new detection models will be applied to plant diseases and pests detection in the future. In summary, at this stage, applications that emphasize detection accuracy mostly use two-stage models, while applications that pursue detection speed mostly use one-stage models.

Can the detection network replace the classification network? The task of the detection network is to solve the localization problem of plant diseases and pests, while the task of the classification network is to judge their class. Intuitively, the detection network contains the category information implicitly: the categories of the plant diseases and pests to be located must be known beforehand, and the corresponding annotations must be given in advance so that their locations can be judged. From this point of view, the detection network seems to include the steps of the classification network, that is, it can answer “what kind of plant diseases and pests are in what place”. But there is a misconception here: “what kind of plant diseases and pests” is given a priori, and what is labelled during training is not necessarily the true class. When the model is strongly discriminative, that is, when the detection network gives accurate results, it can answer “what kind of plant diseases and pests are in what place” to a certain extent. In the real world, however, detection often cannot uniquely determine the category of plant diseases and pests; it can only answer “what kind of plant diseases and pests may be in what place”, and then the involvement of the classification network becomes necessary. Thus, the detection network cannot replace the classification network.

Segmentation network

The segmentation network converts the plant diseases and pests detection task into semantic or even instance segmentation of lesions versus normal areas. It not only finely delineates the lesion area, but also obtains its location, category and corresponding geometric properties (including length, width, area, outline, center, etc.). Segmentation approaches can be roughly divided into Fully Convolutional Networks (FCN) [ 74 ] and Mask R-CNN [ 75 ].

The fully convolutional network (FCN) is the basis of image semantic segmentation; at present, almost all semantic segmentation models are based on FCN. An FCN first extracts and encodes the features of the input image using convolutions, then gradually restores the feature maps to the size of the input image by deconvolution or up-sampling. Based on the differences in FCN network structure, plant diseases and pests segmentation methods can be divided into conventional FCN, U-net [ 76 ] and SegNet [ 77 ].

Conventional FCN. Wang et al. [ 78 ] presented a maize leaf disease segmentation method based on a fully convolutional network to address the susceptibility of traditional computer vision to varying illumination and complex backgrounds; the segmentation accuracy reached 96.26%. Wang et al. [ 79 ] proposed a plant diseases and pests segmentation method based on an improved FCN, in which convolution layers extracted multi-layer feature information from the input maize leaf lesion image and deconvolution operations restored the size and resolution of the input image. Compared with the original FCN, this method not only guaranteed the integrity of the lesions but also highlighted the segmentation of small lesion areas, with an accuracy of 95.87%.
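To make the encode-then-upsample idea concrete, the following is a minimal FCN-style sketch: convolutions and pooling compress the leaf image, and transposed convolutions restore the input resolution so that every pixel receives a lesion/background score. The sizes are illustrative and this is not the architecture of either study above.

```python
import torch.nn as nn

class TinyFCN(nn.Module):
    """Minimal fully convolutional network for per-pixel lesion segmentation."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(        # feature extraction and encoding
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(        # deconvolution / up-sampling
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, num_classes, 2, stride=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))   # per-pixel class scores
```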

U-net. U-net is both a classical FCN structure and a typical encoder-decoder structure. It is characterized by skip connections that fuse feature maps from the encoding stage with those of the decoding stage, which benefits the recovery of segmentation details. Lin et al. [ 80 ] used a U-net based convolutional neural network to segment 50 cucumber powdery mildew leaves collected in a natural environment. Compared with the original U-net, a batch normalization layer was added after each convolution layer, making the network insensitive to weight initialization. The experiments show that this U-net based network can accurately segment powdery mildew on cucumber leaves at the pixel level, with an average pixel accuracy of 96.08%, superior to the existing K-means, Random-forest and GBDT methods. The U-net method can segment lesion areas against a complex background and still maintains good segmentation accuracy and speed with fewer samples.

SegNet. SegNet is also a classical encoder-decoder structure. Its distinguishing feature is that the up-sampling operation in the decoder reuses the indices of the max pooling operations in the encoder. Kerkech et al. [ 81 ] presented an image segmentation method for unmanned aerial vehicles. Visible and infrared images (480 samples from each range) were segmented using SegNet to identify four categories: shadows, ground, healthy and symptomatic grape vines. The detection rates of the proposed method on grape vines and leaves were 92% and 87%, respectively.

Mask R-CNN is one of the most commonly used image instance segmentation methods at present and can be considered a multi-task learning method based on detection and segmentation networks. When multiple lesions of the same type are adhered or overlap, instance segmentation can separate the individual lesions and further count their number, whereas semantic segmentation often treats them as a whole. Stewart et al. [ 82 ] trained a Mask R-CNN model to segment maize northern leaf blight (NLB) lesions in unmanned aerial vehicle images. The trained model could accurately detect and segment individual lesions: at an IOU threshold of 0.50, the IOU between the ground truth and the predicted lesions was 0.73, and the average precision was 0.96. Some studies also combine the Mask R-CNN framework with object detection networks for plant diseases and pests detection. Wang et al. [ 83 ] used two different models, Faster R-CNN and Mask R-CNN, in which Faster R-CNN identified the class of tomato disease and Mask R-CNN detected and segmented the location and shape of the infected area. The results showed that the proposed model can quickly and accurately identify 11 classes of tomato diseases and segment the locations and shapes of infected areas, with Mask R-CNN reaching a high detection rate of 99.64% for all classes of tomato diseases.

Compared with classification and detection network methods, segmentation methods have advantages in obtaining lesion information. However, like detection networks, they require large amounts of annotated data, and the annotations must be made pixel by pixel, which takes considerable effort and cost.

Dataset and performance comparison

This section first gives a brief introduction to the plant diseases and pests related datasets and the evaluation index of deep learning model, then compares and analyses the related models of plant diseases and pests detection based on deep learning in recent years.

Datasets for plant diseases and pests detection

Plant diseases and pests detection datasets are the basis for research work. Compared with ImageNet, PASCAL-VOC2007/2012 and COCO in general computer vision, there is no large and unified dataset for plant diseases and pests detection. Such datasets can be acquired by self-collection, network collection or the use of public datasets. Self-collected image datasets are often obtained by unmanned aerial remote sensing, ground camera photography, Internet of Things monitoring video, aerial photography by unmanned aerial vehicles with cameras, hyperspectral imagers, near-infrared spectrometers, and so on. Public datasets typically come from PlantVillage, a well-known public standard library. Relatively speaking, self-collected datasets of plant diseases and pests in real natural environments are more practical. Although more and more researchers have made the images they collected in the field publicly available, it is difficult to compare studies uniformly because they address different classes of diseases under different detection objects and scenarios. Table 4 provides links to a variety of plant diseases and pests detection datasets in conjunction with existing studies.

Evaluation indices

Evaluation indices can vary depending on the focus of the study. Common evaluation indices include \(Precision\) , \(Recall\) , mean Average Precision (mAP) and the F1 score, the harmonic mean of \(Precision\) and \(Recall\) .

\(Precision\) and \(Recall\) are defined as:

\(Precision = \frac{TP}{TP + FP}\) (1)

\(Recall = \frac{TP}{TP + FN}\) (2)

In Formula ( 1 ) and Formula ( 2 ), TP (True Positive) is true-positive, predicted to be 1 and actually 1, indicating the number of lesions correctly identified by the algorithm. FP (False Positive) is false-positive, predicted to be 1 and actually 0, indicating the number of lesions incorrectly identified by the algorithm. FN (False Negative) is false-negative, predicted to be 0 and actually 1, indicating the number of unrecognized lesions.

Detection accuracy is usually assessed using mAP. The average precision (AP) of each category in the dataset is calculated first:

\(AP\left( j \right) = \int_{0}^{1} {Precision\left( j \right)\,\mathrm{d}Recall\left( j \right)}\) (3)

mAP is then defined as the mean of the AP values over all categories:

\(mAP = \frac{1}{{N\left( {class} \right)}}\sum\limits_{j = 1}^{N\left( {class} \right)} {AP\left( j \right)}\) (4)

In Formulas ( 3 ) and ( 4 ), \(N\left( {class} \right)\) represents the number of categories, and \(Precision\left( j \right)\) and \(Recall\left( j \right)\) represent the precision and recall of class j, respectively. The greater the value of \(mAP\) , the higher the recognition accuracy of the algorithm; conversely, the lower the accuracy of the algorithm.

The F1 score is also introduced to measure the accuracy of the model; it takes into account both the precision and the recall of the model:

\(F1 = \frac{2 \times Precision \times Recall}{Precision + Recall}\) (5)

Frames per second (FPS) is used to evaluate the recognition speed. The more frames per second, the faster the algorithm recognition speed; conversely, the slower the algorithm recognition speed.
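As a worked illustration of these indices, a minimal computation from raw counts might look as follows; the counts are made-up numbers.

```python
def precision_recall_f1(tp, fp, fn):
    """Compute Precision, Recall and F1 from detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def mean_average_precision(ap_per_class):
    """mAP: the mean of per-class average precision values."""
    return sum(ap_per_class) / len(ap_per_class)

print(precision_recall_f1(tp=80, fp=10, fn=20))   # ≈ (0.889, 0.800, 0.842)
print(mean_average_precision([0.9, 0.8, 0.7]))    # 0.8
```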

Performance comparison of existing algorithms

At present, the research on plant diseases and pests based on deep learning involves a wide range of crops, including all kinds of vegetables, fruits and food crops. The tasks completed include not only the basic tasks of classification, detection and segmentation, but also more complex tasks such as the judgment of infection degree.

At present, most deep learning-based methods for plant diseases and pests detection are applied to specific datasets, many of which are not publicly available; there is still no single publicly available and comprehensive dataset that allows all algorithms to be compared uniformly. With the continuous development of deep learning, the performance of some typical algorithms on different datasets has gradually improved, with gains in mAP, F1 score and FPS.

The breakthroughs achieved in existing studies are impressive, but there is still a certain gap between the complexity of the diseases and pests images used in existing studies and that faced by real-time field detection based on mobile devices. Subsequent studies will need to seek breakthroughs on larger, more complex and more realistic datasets.

Challenges

Small dataset size problem

At present, deep learning methods are widely used in various computer vision tasks, and plant diseases and pests detection is generally regarded as a specific application in the field of agriculture, for which too few samples are available. Compared with open standard libraries, self-collected datasets are small and laborious to label. Against the more than 14 million samples in the ImageNet dataset, the most critical problem facing plant diseases and pests detection is the small-sample problem. In practice, some plant diseases have a low incidence and a high cost of disease image acquisition, so only a few or a few dozen training images can be collected, which limits the application of deep learning methods in the field of plant diseases and pests identification. For the small-sample problem, there are currently three different solutions, described below.

Data amplification, synthesis and generation

Data amplification is a key component of training deep learning models, and an optimized amplification strategy can effectively improve the plant diseases and pests detection effect. The most common way to expand plant diseases and pests images is to acquire more samples from the original ones using image processing operations such as mirroring, rotating, shifting, warping, filtering and contrast adjustment. In addition, generative adversarial networks (GANs) [ 93 ] and the variational autoencoder (VAE) [ 94 ] can generate more diverse samples to enrich limited datasets.
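A minimal sketch of such an augmentation pipeline using torchvision transforms is shown below; the specific operations and parameter values are illustrative choices, not a recipe from the cited studies.

```python
from torchvision import transforms

# Each transform mirrors one of the expansion operations mentioned above.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),                     # mirroring
    transforms.RandomRotation(degrees=30),                      # rotating
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1)),   # shifting
    transforms.ColorJitter(brightness=0.3, contrast=0.3),       # contrast adjustment
    transforms.GaussianBlur(kernel_size=3),                     # filtering
    transforms.ToTensor(),
])
# augmented = augment(pil_leaf_image)  # each call yields a new training sample
```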

Transfer learning and fine-tuning classical network model

Transfer learning (TL) transfers knowledge learned from generic large datasets to specialized areas with relatively small amounts of data. When developing a model for newly collected samples, transfer learning can start from a model trained on a similar known dataset; after fine-tuning the parameters or modifying components, the model can be applied to the local plant diseases and pests detection task, which reduces the cost of model training and enables the convolutional neural network to adapt to small sample data. Oppenheim et al. [ 95 ] collected infected potato images of different sizes, hues and shapes under natural light and classified them by fine-tuning a VGG network; the results showed that transfer learning and training of new networks were both effective. Too et al. [ 96 ] fine-tuned and compared various classical networks; the experimental results showed that the accuracy of DenseNets improved with the number of iterations. Chen et al. [ 97 ] used transfer learning and fine-tuning to identify rice disease images under complex background conditions, achieving an average accuracy of 92.00% and proving that transfer learning performs better than training from scratch.
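The sketch below illustrates this recipe with a VGG-16 backbone in PyTorch: ImageNet-pretrained features are frozen and a replaced final layer is trained on the small disease dataset. The class count and learning rate are hypothetical; the exact settings of the studies above may differ.

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import models

model = models.vgg16(weights="DEFAULT")        # knowledge transferred from ImageNet
for p in model.features.parameters():
    p.requires_grad = False                    # freeze the generic visual features
model.classifier[6] = nn.Linear(4096, 5)       # new head for 5 hypothetical classes

optimizer = optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
# A standard training loop over the small labelled dataset then fine-tunes the head.
```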

Reasonable network structure design

By designing a reasonable network structure, the sample requirements can be greatly reduced. Zhang et al. [ 98 ] constructed a three-channel convolutional neural network (TCCNN) model for plant leaf disease recognition by combining three color components, with each channel fed by one of the RGB color components of the leaf disease image. Liu et al. [ 99 ] presented an improved CNN method for identifying grape leaf diseases. The model uses depthwise separable convolutions instead of standard convolutions to alleviate overfitting and reduce the number of parameters, and, to handle grape leaf lesions of different sizes, applies the Inception structure to improve multi-scale feature extraction. Compared with the standard ResNet and GoogLeNet structures, this model converges faster and reaches higher accuracy during training, with a recognition accuracy of 97.22%.
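As an illustration of the parameter savings such designs exploit, the following is a generic depthwise separable convolution block in PyTorch (not the exact block used in [ 99 ]): a per-channel depthwise convolution followed by a 1×1 pointwise convolution.

```python
import torch.nn as nn

def separable_conv(in_ch, out_ch, kernel=3):
    """Depthwise separable convolution: depthwise then pointwise."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, kernel, padding=kernel // 2,
                  groups=in_ch),          # depthwise: one filter per input channel
        nn.Conv2d(in_ch, out_ch, 1),      # pointwise: 1x1 conv mixes channels
        nn.BatchNorm2d(out_ch),
        nn.ReLU(),
    )

# A standard 3x3 conv needs in_ch*out_ch*9 weights; the separable version needs
# in_ch*9 + in_ch*out_ch, which is far fewer when out_ch is large.
```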

Fine-grained identification of small-size lesions in early identification

Small-size lesions in early identification

Accurate early detection of plant diseases is essential to maximize yield [ 36 ]. In actual early identification of plant diseases and pests, the lesion objects themselves are small, and the multiple down-sampling steps in a deep feature extraction network tend to cause such small-scale objects to be ignored. Moreover, because of background noise in the collected images, large-scale complex backgrounds may lead to more false detections, especially on low-resolution images. In view of the shortcomings of existing algorithms, improvement directions for small object detection are being analyzed, and several strategies, such as the attention mechanism, have been proposed to improve small object detection performance.

The attention mechanism allows resources to be allocated more rationally. Its essence is to quickly find the region of interest and ignore unimportant information. By learning the characteristics of plant diseases and pests images, features can be separated using a weighted sum with learned coefficients, and the background noise in the image can be suppressed. Specifically, an attention module can produce a saliency image that separates the object from the background; the softmax function can be used to process the feature map, and combining it with the original feature map yields new fused features for noise reduction. In future studies on early recognition of plant diseases and pests, attention mechanisms can be used to select information effectively and allocate more resources to the region of interest to achieve more accurate detection. Karthik et al. [ 100 ] applied an attention mechanism to a residual network and carried out experiments on the PlantVillage dataset, achieving 98% overall accuracy.
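A minimal channel attention (squeeze-and-excitation style) module is sketched below to make the reweighting idea concrete; the reduction ratio is an illustrative choice, and this is not the specific module used in [ 100 ].

```python
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Reweight feature channels so lesion-related ones are emphasized."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global context
        self.fc = nn.Sequential(                       # excitation: channel weights
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                   # background channels suppressed
```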

Fine-grained identification

First, there are large differences within a class, that is, the visual characteristics of plant diseases and pests belonging to the same class vary considerably. The reason lies in external factors such as uneven illumination, dense occlusion and blur caused by equipment jitter, which make image samples of the same kind of diseases and pests differ greatly. Plant diseases and pests detection in complex scenarios is thus a very challenging fine-grained recognition task [ 101 ]. Moreover, diseases and pests change as they develop, so the same disease or pest presents distinctly different characteristics at different stages, forming the “intra-class difference” fine-grained characteristic.

Secondly, there is fuzziness between classes, that is, objects of different classes can be somewhat similar. There are many detailed biological subspecies and subclasses of diseases and pests, and the subclasses share similarities in biological morphology and living habits, which leads to the fine-grained identification problem of “inter-class similarity”. Barbedo pointed out that different diseases can produce similar symptoms that even phytopathologists cannot correctly distinguish [ 102 ].

Thirdly, background disturbance means that, in the real world, plant diseases and pests never appear against a perfectly clean background. The background can be very complex and can interfere with the objects of interest, which makes plant diseases and pests detection more difficult. Much of the literature ignores this issue because images are captured under controlled conditions [ 103 ].

Existing deep learning methods cannot effectively identify the fine-grained characteristics of diseases and pests that occur naturally in the actual agricultural scenarios described above, resulting in technical difficulties such as low identification accuracy and poor generalization robustness, which have long restricted improvements in the decision-making management of diseases and pests by the intelligent agricultural Internet of Things [ 104 ]. Existing research is only suitable for fine-grained identification of a few classes of diseases and pests; it cannot solve the problem of large-scale, large-category, accurate and efficient identification, and it is difficult to deploy directly to the mobile terminals of smart agriculture.

Detection performance under the influence of illumination and occlusion

Lighting problems

Previous studies have mostly collected plant diseases and pests images in indoor light boxes [ 105 ]. Although this effectively eliminates the influence of external light and simplifies image processing, such images differ considerably from those collected under real natural light. Natural light changes very dynamically, and the dynamic range of light a camera can accept is limited; above or below this limit, image colors are easily distorted. In addition, differences in view angle and distance during image collection greatly change the apparent characteristics of plant diseases and pests, which creates great difficulties for visual recognition algorithms.

Occlusion problem

At present, most researchers intentionally avoid recognizing plant diseases and pests in complex environments: they focus on a single background and directly crop the region of interest from the collected images, seldom considering the occlusion problem. As a result, recognition accuracy under occlusion is low and practicability is greatly reduced. Occlusion is common in real natural environments, including leaf occlusion caused by changes in leaf posture, branch occlusion, light occlusion caused by external illumination, and mixed occlusion combining several types. The difficulties of plant diseases and pests identification under occlusion are the missing features and overlapping noise that occlusion causes; different occlusion conditions affect the recognition algorithm to different degrees, resulting in false or even missed detections. In recent years, as deep learning algorithms have matured under restricted conditions, some researchers have begun to tackle the identification of plant diseases and pests under occlusion [ 106 , 107 ], and significant progress has been made, laying a good foundation for applications in real-world scenarios.

However, occlusion is random and complex, training the basic frameworks remains difficult, and dependence on the performance of hardware devices persists. We should strengthen the innovation and optimization of basic frameworks, including the design of lightweight network architectures, and deepen the exploration of GANs and related techniques so that, while detection accuracy is preserved, the difficulty of model training is reduced. GANs have prominent advantages in dealing with posture changes and cluttered backgrounds, but their design is not yet mature; training easily collapses and the model can become uncontrollable. Exploration of network behaviour should be strengthened to make model quality easier to quantify.

Detection speed problem

Compared with traditional methods, deep learning algorithms achieve better results, but their computational complexity is also higher. If detection accuracy is to be guaranteed, the model needs to learn the characteristics of the image thoroughly, which increases the computational load and inevitably slows detection, failing to meet real-time needs. Ensuring detection speed usually means reducing the amount of computation, but this can cause insufficient training and lead to false or missed detections. Therefore, it is important to design efficient algorithms that balance detection accuracy and detection speed.

Plant diseases and pests detection methods based on deep learning involve three main links in agricultural applications: data labeling, model training and model inference. In real-time agricultural applications, most attention is paid to model inference, yet most current detection methods focus on recognition accuracy and pay little attention to inference efficiency. In reference [ 108 ], to improve the efficiency of model computation and meet actual agricultural needs, a depthwise separable convolution structure was introduced for plant leaf disease detection. Several models were trained and tested: the classification accuracy of Reduced MobileNet was 98.34%, with 29 times fewer parameters than VGG and 6 times fewer than MobileNet. This represents an effective compromise between latency and accuracy, suitable for real-time crop disease diagnosis on resource-constrained mobile devices.
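When comparing such models, inference speed can be estimated with a simple timing loop like the sketch below; actual FPS naturally depends on the hardware and input resolution.

```python
import time
import torch

def measure_fps(model, input_size=(1, 3, 224, 224), runs=100):
    """Rough frames-per-second estimate for a model's forward pass."""
    model.eval()
    x = torch.randn(*input_size)
    with torch.no_grad():
        for _ in range(10):                 # warm-up iterations
            model(x)
        start = time.time()
        for _ in range(runs):
            model(x)
    return runs / (time.time() - start)     # frames per second
```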

Conclusions and future directions

Traditional image processing methods handle plant diseases and pests detection in several separate steps and links, whereas deep learning-based methods unify them into end-to-end feature extraction, which has broad development prospects and great potential. Although plant diseases and pests detection technology is developing rapidly and has been moving from academic research to agricultural application, it is still some distance from mature application in real natural environments, and several problems remain to be solved.

Plant diseases and pests detection dataset

Deep learning technology has made some achievements in the identification of plant diseases and pests, and various image recognition algorithms have been further developed and extended, providing a theoretical basis for the identification of specific diseases and pests. However, the image samples collected in previous studies mostly capture the appearance of disease spots, of insects, or of pests on leaves. Most research results are limited to laboratory environments and apply only to the plant diseases and pests images obtained at the time. The main reason is that plant growth is cyclical, continuous, seasonal and regional: the characteristics of the same disease or pest differ across the growing stages of a crop, and images of the same plant species vary from region to region. As a result, most existing research results are not universal; even with a high recognition rate in a single trial, validity on data obtained at other times cannot be guaranteed.

Most existing studies rely on images in the visible range, yet wavelengths outside the visible range also carry abundant information. Comprehensive information such as visible light, near-infrared and multi-spectral data should therefore be fused when building plant diseases and pests datasets, and future research should focus on multi-information fusion methods for acquiring and identifying plant diseases and pests information.
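As a simple illustration of what such fusion can look like at the input level, visible and near-infrared bands can be stacked into a single multi-channel tensor and the first convolution widened to accept them (early fusion). The sketch below is a hypothetical PyTorch example; the band count, backbone and class count are placeholders rather than a method from the literature reviewed here.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical early fusion: 3 visible (RGB) bands plus 1 near-infrared band.
rgb = torch.rand(1, 3, 256, 256)
nir = torch.rand(1, 1, 256, 256)
x = torch.cat([rgb, nir], dim=1)  # a single 4-channel input tensor

# Widen the first convolution of a stock ResNet-18 so it accepts 4 channels;
# the rest of the network is unchanged (10 is a placeholder class count).
model = models.resnet18(num_classes=10)
model.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3, bias=False)
logits = model(x)  # shape: (1, 10)
```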

In addition, image databases of different kinds of plant diseases and pests in real natural environments are still largely missing. Future research should make full use of data acquisition platforms such as portable automatic field spore-capture instruments, unmanned aerial vehicle aerial photography systems, and agricultural Internet of Things monitoring equipment, which perform large-area, full-coverage identification of farmland and make up for the lack of randomness of image samples in previous studies. This would also ensure the comprehensiveness and accuracy of datasets and improve the generality of the algorithms.

Early recognition of plant diseases and pests

In plant diseases and pests identification, early symptoms are often not obvious, so early diagnosis is very difficult whether by visual observation or computer interpretation; yet the research significance of, and demand for, early diagnosis is greater, since it is more conducive to preventing and controlling plant diseases and pests before they spread and develop. The best image quality is obtained in sufficient sunlight, while shooting in cloudy weather increases the complexity of image preprocessing and reduces recognition performance. In addition, in the early stage of disease or pest occurrence, even high-resolution images are difficult to analyze, and meteorological and plant protection data such as temperature and humidity must be combined to recognize and predict diseases and pests. A survey of the existing literature shows few reports on the early diagnosis of plant diseases and pests.

Network training and learning

When plant diseases and pests are identified visually by humans, it is difficult to collect samples of all types, and often only healthy data (positive samples) are available. However, most current plant diseases and pests detection methods based on deep learning rely on supervised learning over large numbers of diseased and pest samples, and manually collecting such labelled datasets requires great manpower; unsupervised learning therefore needs to be explored. Deep learning is a black box that requires a large number of labelled training samples for end-to-end learning and has poor interpretability, so how to use prior knowledge from brain-inspired computing and human-like visual cognitive models to guide the training and learning of the network is also a direction worth studying. At the same time, deep models need large amounts of memory and are extremely time-consuming at test time, making them unsuitable for deployment on resource-limited mobile platforms; it is important to study how to reduce complexity and obtain fast-executing models without losing accuracy. Finally, the selection of appropriate hyper-parameters, such as the learning rate and the size, stride and number of filters, has always been a major obstacle to applying deep learning models to new tasks: these hyper-parameters have strong internal dependencies, and even small adjustments can greatly affect the final training results.
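As one concrete direction for the healthy-samples-only setting mentioned above, reconstruction-based anomaly detection trains a model on healthy leaves alone and flags images that reconstruct poorly. The sketch below is a minimal, generic convolutional autoencoder in PyTorch, offered as an illustration of the idea rather than a method from the surveyed studies.

```python
import torch
import torch.nn as nn

# A small convolutional autoencoder trained only on healthy-leaf images.
# At test time, a high reconstruction error suggests the image deviates
# from the healthy distribution, i.e., possible disease or pest damage.
class LeafAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_score(model, x):
    # Per-image mean squared reconstruction error.
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=(1, 2, 3))
```

A threshold on the reconstruction error, chosen on held-out healthy images, then separates normal from suspicious inputs without any diseased training samples.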

Interdisciplinary research

Only by integrating empirical data more closely with theories such as agronomy and plant protection can a field diagnosis model be established that better conforms to the rules of crop growth, further improving the effectiveness and accuracy of plant diseases and pests identification. Future work needs to move from surface-level image analysis to identifying the mechanisms by which diseases and pests occur, and from simple experimental environments to practical application research that comprehensively considers crop growth laws, environmental factors, and so on.

In summary, with the development of artificial intelligence technology, the research focus of machine vision-based plant diseases and pests detection has shifted from classical image processing and machine learning methods to deep learning methods, solving difficult problems that traditional methods could not. There is still a long way to go before widespread practical deployment, but this technology has great development potential and application value. To fully explore this potential, the joint efforts of experts from the relevant disciplines are needed to effectively integrate empirical knowledge of agriculture and plant protection with deep learning algorithms and models, so that plant diseases and pests detection based on deep learning can mature. The research results should also be integrated into agricultural machinery so that the corresponding theoretical results can truly be put into practice.

Availability of data and materials

For relevant data and codes, please contact the corresponding author of this manuscript.

Lee SH, Chan CS, Mayo SJ, Remagnino P. How deep learning extracts and learns leaf features for plant classification. Pattern Recogn. 2017;71:1–13.


Tsaftaris SA, Minervini M, Scharr H. Machine learning for plant phenotyping needs image processing. Trends Plant Sci. 2016;21(12):989–91.


Fuentes A, Yoon S, Park DS. Deep learning-based techniques for plant diseases recognition in real-field scenarios. In: Advanced concepts for intelligent vision systems. Cham: Springer; 2020.


Yang D, Li S, Peng Z, Wang P, Wang J, Yang H. MF-CNN: traffic flow prediction using convolutional neural network and multi-features fusion. IEICE Trans Inf Syst. 2019;102(8):1526–36.

Sundararajan SK, Sankaragomathi B, Priya DS. Deep belief cnn feature representation based content based image retrieval for medical images. J Med Syst. 2019;43(6):1–9.

Melnyk P, You Z, Li K. A high-performance CNN method for offline handwritten chinese character recognition and visualization. Soft Comput. 2019;24:7977–87.

Li J, Mi Y, Li G, Ju Z. CNN-based facial expression recognition from annotated rgb-d images for human–robot interaction. Int J Humanoid Robot. 2019;16(04):504–5.

Kumar S, Singh SK. Occluded thermal face recognition using bag of CNN(BoCNN). IEEE Signal Process Lett. 2020;27:975–9.

Wang X. Deep learning in object recognition, detection, and segmentation. Found Trends Signal Process. 2016;8(4):217–382.


Boulent J, Foucher S, Théau J, St-Charles PL. Convolutional neural networks for the automatic identification of plant diseases. Front Plant Sci. 2019;10:941.


Kumar S, Kaur R. Plant disease detection using image processing—a review. Int J Comput Appl. 2015;124(2):6–9.

Martineau M, Conte D, Raveaux R, Arnault I, Munier D, Venturini G. A survey on image-based insect classification. Pattern Recogn. 2016;65:273–84.

Jayme GAB, Luciano VK, Bernardo HV, Rodrigo VC, Katia LN, Claudia VG, et al. Annotated plant pathology databases for image-based detection and recognition of diseases. IEEE Latin Am Trans. 2018;16(6):1749–57.

Kaur S, Pandey S, Goel S. Plants disease identification and classification through leaf images: a survey. Arch Comput Methods Eng. 2018;26(4):1–24.


Shekhawat RS, Sinha A. Review of image processing approaches for detecting plant diseases. IET Image Process. 2020;14(8):1427–39.

Hinton GE, Salakhutdinov R. Reducing the dimensionality of data with neural networks. Science. 2006;313(5786):504–7.

Liu W, Wang Z, Liu X, et al. A survey of deep neural network architectures and their applications. Neurocomputing. 2017;234:11–26.

Fergus R. Deep learning methods for vision. CVPR 2012 Tutorial; 2012.

Bengio Y, Courville A, Vincent P. Representation learning: a review and new perspectives. IEEE Trans Pattern Anal Mach Intell. 2013;35(8):1798–828.


Boureau YL, Le Roux N, Bach F, Ponce J, LeCun Y. Ask the locals: multi-way local pooling for image recognition. In: 2011 IEEE international conference on computer vision (ICCV), Barcelona, Spain; 2011. p. 2651–8.

Zeiler MD, Fergus R. Stochastic pooling for regularization of deep convolutional neural networks. Eprint Arxiv. arXiv:1301.3557 . 2013.

TensorFlow. https://www.tensorflow.org/ .

Torch/PyTorch. https://pytorch.org/ .

Caffe. http://caffe.berkeleyvision.org/ .

Theano. http://deeplearning.net/software/theano/ .

Krizhevsky A, Sutskever I, Hinton G. ImageNet classification with deep convolutional neural networks. In: Proceedings of the conference on neural information processing systems (NIPS), Lake Tahoe, NV, USA, 3–8 December; 2012. p. 1097–105.

Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A. Going deeper with convolutions. In: Proceedings of the 2015 IEEE conference on computer vision and pattern recognition, Boston, MA, USA, 7–12 June; 2015. p. 1–9.

Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv. arXiv:1409.1556 . 2014.

Xie S, Girshick R, Dollár P, Tu Z, He K. Aggregated residual transformations for deep neural networks. arXiv. arXiv:1611.05431 . 2017.

Szegedy C, Ioffe S, Vanhoucke V, et al. Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proceedings of the AAAI conference on artificial intelligence. 2016.

Huang G, Liu Z, van der Maaten L, et al. Densely connected convolutional networks. In: IEEE conference on computer vision and pattern recognition. 2017. p. 2261–9.

Howard AG, Zhu M, Chen B, Kalenichenko D, Wang W, Weyand T, Andreetto M, Adam H. MobileNets: efficient convolutional neural networks for mobile vision applications. arXiv. arXiv:1704.04861 . 2017.

Iandola FN, Han S, Moskewicz MW, Ashraf K, Dally WJ, Keutzer K. SqueezeNet: AlexNet-level accuracy with 50 × fewer parameters and < 0.5 MB model size. arXiv. arXiv:1602.07360 . 2016.

Priyadharshini RA, Arivazhagan S, Arun M, Mirnalini A. Maize leaf disease classification using deep convolutional neural networks. Neural Comput Appl. 2019;31(12):8887–95.

Wen J, Shi Y, Zhou X, Xue Y. Crop disease classification on inadequate low-resolution target images. Sensors. 2020;20(16):4601.


Thangaraj R, Anandamurugan S, Kaliappan VK. Automated tomato leaf disease classification using transfer learning-based deep convolution neural network. J Plant Dis Prot. 2020. https://doi.org/10.1007/s41348-020-00403-0 .

Atila Ü, Uçar M, Akyol K, Uçar E. Plant leaf disease classification using EfficientNet deep learning model. Ecol Inform. 2021;61:101182.

Sabrol H, Kumar S. Recent studies of image and soft computing techniques for plant disease recognition and classification. Int J Comput Appl. 2015;126(1):44–55.

Yalcin H, Razavi S. Plant classification using convolutional neural networks. In: 2016 5th international conference on agro-geoinformatics (agro-geoinformatics). New York: IEEE; 2016.

Fuentes A, Lee J, Lee Y, Yoon S, Park DS. Anomaly detection of plant diseases and insects using convolutional neural networks. In: ELSEVIER conference ISEM 2017—The International Society for Ecological Modelling Global Conference, 2017. 2017.

Hasan MJ, Mahbub S, Alom MS, Nasim MA. Rice disease identification and classification by integrating support vector machine with deep convolutional neural network. In: 2019 1st international conference on advances in science, engineering and robotics technology (ICASERT). 2019.

Thenmozhi K, Reddy US. Crop pest classification based on deep convolutional neural network and transfer learning. Comput Electron Agric. 2019;164:104906.

Fang T, Chen P, Zhang J, Wang B. Crop leaf disease grade identification based on an improved convolutional neural network. J Electron Imaging. 2020;29(1):1.

Nagasubramanian K, Jones S, Singh AK, Sarkar S, Singh A, Ganapathysubramanian B. Plant disease identification using explainable 3D deep learning on hyperspectral images. Plant Methods. 2019;15(1):1–10.

Picon A, Seitz M, Alvarez-Gila A, Mohnke P, Echazarra J. Crop conditional convolutional neural networks for massive multi-crop plant disease classification over cell phone acquired images taken on real field conditions. Comput Electron Agric. 2019;167:105093.

Tianjiao C, Wei D, Juan Z, Chengjun X, Rujing W, Wancai L, et al. Intelligent identification system of disease and insect pests based on deep learning. China Plant Prot Guide. 2019;039(004):26–34.

Dechant C, Wiesner-Hanks T, Chen S, Stewart EL, Yosinski J, Gore MA, et al. Automated identification of northern leaf blight-infected maize plants from field imagery using deep learning. Phytopathology. 2017;107:1426–32.

Wiesner-Hanks T, Wu H, Stewart E, Dechant C, Nelson RJ. Millimeter-level plant disease detection from aerial photographs via deep learning and crowdsourced data. Front Plant Sci. 2019;10:1550.

Shougang R, Fuwei J, Xingjian G, Peishen Y, Wei X, Huanliang X. Deconvolution-guided tomato leaf disease identification and lesion segmentation model. J Agric Eng. 2020;36(12):186–95.

Fujita E, Kawasaki Y, Uga H, Kagiwada S, Iyatomi H. Basic investigation on a robust and practical plant diagnostic system. In: IEEE international conference on machine learning & applications. New York: IEEE; 2016.

Mohanty SP, Hughes DP, Salathé M. Using deep learning for image-based plant disease detection. Front Plant Sci. 2016;7:1419. https://doi.org/10.3389/fpls.2016.01419 .

Brahimi M, Arsenovic M, Laraba S, Sladojevic S, Boukhalfa K, Moussaoui A. Deep learning for plant diseases: detection and saliency map visualisation. In: Zhou J, Chen F, editors. Human and machine learning. Cham: Springer International Publishing; 2018. p. 93–117.


Barbedo JG. Plant disease identification from individual lesions and spots using deep learning. Biosyst Eng. 2019;180:96–107.

Ren S, He K, Girshick R, Sun J. Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans Pattern Anal Mach Intell. 2017;39(6):1137–49.

Liu W, Anguelov D, Erhan D, Szegedy C, Berg AC. SSD: Single shot MultiBox detector. In: European conference on computer vision. Cham: Springer International Publishing; 2016.

Redmon J, Divvala S, Girshick R, Farhadi A. You only look once: unified, real-time object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2015.

Redmon J, Farhadi A. Yolo9000: better, faster, stronger. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2017. p. 6517–25.

Redmon J, Farhadi A. Yolov3: an incremental improvement. arXiv preprint. arXiv:1804.02767 . 2018.

Fuentes A, Yoon S, Kim SC, Park DS. A robust deep-learning-based detector for real-time tomato plant diseases and pests detection. Sensors. 2017;17(9):2022.

Ozguven MM, Adem K. Automatic detection and classification of leaf spot disease in sugar beet using deep learning algorithms. Phys A Statal Mech Appl. 2019;535(2019):122537.

Zhou G, Zhang W, Chen A, He M, Ma X. Rapid detection of rice disease based on FCM-KM and faster R-CNN fusion. IEEE Access. 2019;7:143190–206. https://doi.org/10.1109/ACCESS.2019.2943454 .

Xie X, Ma Y, Liu B, He J, Wang H. A deep-learning-based real-time detector for grape leaf diseases using improved convolutional neural networks. Front Plant Sci. 2020;11:751.

Singh D, Jain N, Jain P, Kayal P, Kumawat S, Batra N. Plantdoc: a dataset for visual plant disease detection. In: Proceedings of the 7th ACM IKDD CoDS and 25th COMAD. 2019.

Sun J, Yang Y, He X, Wu X. Northern maize leaf blight detection under complex field environment based on deep learning. IEEE Access. 2020;8:33679–88. https://doi.org/10.1109/ACCESS.2020.2973658 .

Bhatt PV, Sarangi S, Pappula S. Detection of diseases and pests on images captured in uncontrolled conditions from tea plantations. In: Proc. SPIE 11008, autonomous air and ground sensing systems for agricultural optimization and phenotyping IV; 2019. p. 1100808. https://doi.org/10.1117/12.2518868 .

Zhang B, Zhang M, Chen Y. Crop pest identification based on spatial pyramid pooling and deep convolution neural network. Trans Chin Soc Agric Eng. 2019;35(19):209–15.

Ramcharan A, McCloskey P, Baranowski K, Mbilinyi N, Mrisho L, Ndalahwa M, Legg J, Hughes D. A mobile-based deep learning model for cassava disease diagnosis. Front Plant Sci. 2019;10:272. https://doi.org/10.3389/fpls.2019.00272 .

Selvaraj G, Vergara A, Ruiz H, Safari N, Elayabalan S, Ocimati W, Blomme G. AI-powered banana diseases and pest detection. Plant Methods. 2019. https://doi.org/10.1186/s13007-019-0475-z .

Tian Y, Yang G, Wang Z, Li E, Liang Z. Detection of apple lesions in orchards based on deep learning methods of CycleGAN and YOLOV3-dense. J Sens. 2019. https://doi.org/10.1155/2019/7630926 .

Zheng Y, Kong J, Jin X, Wang X, Zuo M. CropDeep: the crop vision dataset for deep-learning-based classification and detection in precision agriculture. Sensors. 2019;19:1058. https://doi.org/10.3390/s19051058 .

Arsenovic M, Karanovic M, Sladojevic S, Anderla A, Stefanović D. Solving current limitations of deep learning based approaches for plant disease detection. Symmetry. 2019;11:21. https://doi.org/10.3390/sym11070939 .

Fuentes AF, Yoon S, Lee J, Park DS. High-performance deep neural network-based tomato plant diseases and pests diagnosis system with refinement filter bank. Front Plant Sci. 2018;9:1162. https://doi.org/10.3389/fpls.2018.01162 .

Jiang P, Chen Y, Liu B, He D, Liang C. Real-time detection of apple leaf diseases using deep learning approach based on improved convolutional neural networks. IEEE Access. 2019. https://doi.org/10.1109/ACCESS.2019.2914929 .

Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. IEEE Trans Pattern Anal Mach Intell. 2015;39(4):640–51.

He K, Gkioxari G, Dollár P, Girshick R. Mask R-CNN. In: 2017 IEEE international conference on computer vision (ICCV). New York: IEEE; 2017.

Ronneberger O, Fischer P, Brox T. U-net: convolutional networks for biomedical image segmentation. In: International conference on medical image computing and computer-assisted intervention. Berlin: Springer; 2015. p. 234–41. https://doi.org/10.1007/978-3-319-24574-4_28 .

Badrinarayanan V, Kendall A, Cipolla R. Segnet: a deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans Pattern Anal Mach Intell. 2019;39(12):2481–95.

Wang Z, Zhang S. Segmentation of corn leaf disease based on fully convolution neural network. Acad J Comput Inf Sci. 2018;1:9–18.

Wang X, Wang Z, Zhang S. Segmenting crop disease leaf image by modified fully-convolutional networks. In: Huang DS, Bevilacqua V, Premaratne P, editors. Intelligent computing theories and application. ICIC 2019, vol. 11643. Lecture Notes in Computer Science. Cham: Springer; 2019. https://doi.org/10.1007/978-3-030-26763-6_62 .

Lin K, Gong L, Huang Y, Liu C, Pan J. Deep learning-based segmentation and quantification of cucumber powdery mildew using convolutional neural network. Front Plant Sci. 2019;10:155.

Kerkech M, Hafiane A, Canals R. Vine disease detection in UAV multispectral images using optimized image registration and deep learning segmentation approach. Comput Electron Agric. 2020;174:105446.

Stewart EL, Wiesner-Hanks T, Kaczmar N, Dechant C, Gore MA. Quantitative phenotyping of northern leaf blight in UAV images using deep learning. Remote Sens. 2019;11(19):2209.

Wang Q, Qi F, Sun M, Qu J, Xue J. Identification of tomato disease types and detection of infected areas based on deep convolutional neural networks and object detection techniques. Comput Intell Neurosci. 2019. https://doi.org/10.1155/2019/9142753 .

Hughes DP, Salathé M. An open access repository of images on plant health to enable the development of mobile disease diagnostics through machine learning and crowdsourcing. arXiv preprint; 2015.

Shah JP, Prajapati HB, Dabhi VK. A survey on detection and classification of rice plant diseases. In: IEEE international conference on current trends in advanced computing. New York: IEEE; 2016.

Prajapati HB, Shah JP, Dabhi VK. Detection and classification of rice plant diseases. Intell Decis Technol. 2017;11(3):1–17.

Barbedo JGA, Koenigkan LV, Halfeld-Vieira BA, Costa RV, Nechet KL, Godoy CV, Junior ML, Patricio FR, Talamini V, Chitarra LG, Oliveira SAS. Annotated plant pathology databases for image-based detection and recognition of diseases. IEEE Latin Am Trans. 2018;16(6):1749–57.

Brahimi M, Arsenovic M, Laraba S, Sladojevic S, Boukhalfa K, Moussaoui A. Deep learning for plant diseases: detection and saliency map visualisation. In: Zhou J, Chen F, editors. Human and machine learning. Human–computer interaction series. Cham: Springer; 2018. https://doi.org/10.1007/978-3-319-90403-0_6 .

Tyr WH, Stewart EL, Nicholas K, Chad DC, Harvey W, Nelson RJ, et al. Image set for deep learning: field images of maize annotated with disease symptoms. BMC Res Notes. 2018;11(1):440.

Thapa R, Snavely N, Belongie S, Khan A. The plant pathology 2020 challenge dataset to classify foliar disease of apples. arXiv preprint. arXiv:2004.11958 . 2020.

Wu X, Zhan C, Lai YK, Cheng MM, Yang J. IP102: a large-scale benchmark dataset for insect pest recognition. In: 2019 IEEE/CVF conference on computer vision and pattern recognition (CVPR). New York: IEEE; 2019.

Huang M-L, Chuang TC. A database of eight common tomato pest images. Mendeley Data. 2020. https://doi.org/10.17632/s62zm6djd2.1 .

Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde Farley D, Ozair S, Courville A, Bengio Y. Generative adversarial nets. In: Proceedings of the 2014 conference on advances in neural information processing systems 27. Montreal: Curran Associates, Inc.; 2014. p. 2672–80.

Pu Y, Gan Z, Henao R, et al. Variational autoencoder for deep learning of images, labels and captions. arXiv. arXiv:1609.08976 . 2016.

Oppenheim D, Shani G, Erlich O, Tsror L. Using deep learning for image-based potato tuber disease detection. Phytopathology. 2018;109(6):1083–7.

Too EC, Yujian L, Njuki S, Yingchun L. A comparative study of fine-tuning deep learning models for plant disease identification. Comput Electron Agric. 2018;161:272–9.

Chen J, Chen J, Zhang D, Sun Y, Nanehkaran YA. Using deep transfer learning for image-based plant disease identification. Comput Electron Agric. 2020;173:105393.

Zhang S, Huang W, Zhang C. Three-channel convolutional neural networks for vegetable leaf disease recognition. Cogn Syst Res. 2018;53:31–41. https://doi.org/10.1016/j.cogsys.2018.04.006 .

Liu B, Ding Z, Tian L, He D, Li S, Wang H. Grape leaf disease identification using improved deep convolutional neural networks. Front Plant Sci. 2020;11:1082. https://doi.org/10.3389/fpls.2020.01082 .

Karthik R, Hariharan M, Anand S, et al. Attention embedded residual CNN for disease detection in tomato leaves. Appl Soft Comput J. 2020;86:105933.

Guan W, Yu S, Jianxin W. Automatic image-based plant disease severity estimation using deep learning. Comput Intell Neurosci. 2017;2017:2917536.

Barbedo JGA. Factors influencing the use of deep learning for plant disease recognition. Biosyst Eng. 2018;172:84–91.

Barbedo JGA. Impact of dataset size and variety on the effectiveness of deep learning and transfer learning for plant disease classification. Comput Electron Agric. 2018;153:46–53.

Nawaz MA, Khan T, Mudassar R, Kausar M, Ahmad J. Plant disease detection using internet of thing (IOT). Int J Adv Comput Sci Appl. 2020. https://doi.org/10.14569/IJACSA.2020.0110162 .

Martinelli F, Scalenghe R, Davino S, Panno S, Scuderi G, Ruisi P, et al. Advanced methods of plant disease detection. A review. Agron Sustain Dev. 2015;35(1):1–25.

Liu J, Wang X. Early recognition of tomato gray leaf spot disease based on MobileNetv2-YOLOv3 model. Plant Methods. 2020;16:83.


Liu J, Wang X. Tomato diseases and pests detection based on improved Yolo V3 convolutional neural network. Front Plant Sci. 2020;11:898.

Kamal KC, Yin Z, Wu M, Wu Z. Depthwise separable convolution architectures for plant disease classification. Comput Electron Agric. 2019;165:104948.


Acknowledgements

Appreciation is given to the editors and reviewers of the journal Plant Methods.

Funding

This study was supported by the Facility Horticulture Laboratory of Universities in Shandong with project numbers 2019YY003, 2018YY016, 2018YY043 and 2018YY044; school-level High-level Talents Project 2018RC002; Youth Fund Project of Philosophy and Social Sciences of Weifang University of Science and Technology with project numbers 2018WKRQZ008 and 2018WKRQZ008-3; Key Research and Development Plan of Shandong Province with project numbers 2019RKA07012, 2019GNC106034 and 2020RKA07036; Research and Development Plan of Applied Technology in Shouguang with project number 2018JH12; 2018 Innovation Fund of the Science and Technology Development Centre of the China Ministry of Education with project number 2018A02013; 2019 Basic Capacity Construction Project of Private Colleges and Universities in Shandong Province; Weifang Science and Technology Development Programme with project numbers 2019GX081 and 2019GX082; and the Special Project of Ideological and Political Education of Weifang University of Science and Technology (W19SZ70Z01).

Author information

Authors and affiliations

Shandong Provincial University Laboratory for Protected Horticulture, Blockchain Laboratory of Agricultural Vegetables, Weifang University of Science and Technology, Weifang, 262700, Shandong, China

Jun Liu & Xuewei Wang


Contributions

JL designed the research. JL and XW conducted the experiments and data analysis and wrote the manuscript. XW revised the manuscript. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Xuewei Wang .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Liu, J., Wang, X. Plant diseases and pests detection based on deep learning: a review. Plant Methods 17 , 22 (2021). https://doi.org/10.1186/s13007-021-00722-9


Received : 20 September 2020

Accepted : 13 February 2021

Published : 24 February 2021

DOI : https://doi.org/10.1186/s13007-021-00722-9


Keywords

  • Deep learning
  • Plant diseases and pests
  • Classification
  • Object detection
  • Segmentation



Plant leaf disease classification and damage detection system using deep learning models

  • Published: 19 March 2022
  • Volume 81 , pages 24021–24040, ( 2022 )

  • B. Sai Reddy 1 &
  • S. Neeraja 1

Agriculture is the primary source of livelihood for about 70% of the rural population in India, and the crops cultivated are very diverse, with more than 500 crop varieties grown. Despite technological advances, agricultural practices are still largely manual and involve less automation than in Western countries. Most diseases affecting a plant reflect their damage in the leaves, so plant diseases can be identified from leaf images. This paper presents an automatic plant leaf damage detection and disease identification system. The first stage identifies the type of disease from the plant leaf image using DenseNet; the DenseNet model is trained on images categorized by their nature, i.e., healthy or a specific disease type, and is then used to test new leaf images. The proposed DenseNet model produced a classification accuracy of 100%, with fewer images used during the training stage. The second stage identifies the damage in the leaf using deep learning-based semantic segmentation: each RGB pixel value combination in the image is extracted, and supervised training is performed on the pixel values using a 1D convolutional neural network (CNN), so the trained model can detect damage in the leaves at the pixel level. Evaluation of the proposed semantic segmentation resulted in an accuracy of 97%. The third stage suggests a remedy for the disease based on the disease type and the damage state. In the experimental analysis, the proposed method detects various defects in different plants, namely apple, grape, potato, and strawberry, and obtains better performance in comparison with existing techniques.
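The first stage described in the abstract amounts to fine-tuning a pretrained DenseNet as an image classifier. The paper's exact training setup is not reproduced in this excerpt; the sketch below is a generic PyTorch/torchvision version of that pattern, with the class count and pretrained weights as assumptions.

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 9  # placeholder: healthy plus the disease categories

# Start from an ImageNet-pretrained DenseNet-121 and replace the
# classifier head so it predicts the leaf categories instead.
model = models.densenet121(weights="IMAGENET1K_V1")
model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)

# All layers are left trainable, so the whole network is fine-tuned
# on the labelled leaf images.
```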





Author information

Authors and affiliations

Department of Electrical, Electronics and Communications Engineering, GITAM (Deemed to be University), Visakhapatnam, Andhra Pradesh, India

B. Sai Reddy & S. Neeraja


Corresponding author

Correspondence to B. Sai Reddy .



About this article

Sai Reddy, B., Neeraja, S. Plant leaf disease classification and damage detection system using deep learning models. Multimed Tools Appl 81 , 24021–24040 (2022). https://doi.org/10.1007/s11042-022-12147-0


Received : 03 June 2021

Revised : 19 August 2021

Accepted : 03 January 2022

Published : 19 March 2022

Issue Date : July 2022

DOI : https://doi.org/10.1007/s11042-022-12147-0


Keywords

  • Plant disease
  • Leaf damage
  • Semantic segmentation

Using deep learning for image-based plant disease detection

Sharada P. Mohanty, David P. Hughes and Marcel Salathé

  • 1 Digital Epidemiology Lab, EPFL, Geneva, Switzerland
  • 2 School of Life Sciences, EPFL, Lausanne, Switzerland
  • 3 School of Computer and Communication Sciences, EPFL, Lausanne, Switzerland
  • 4 Department of Entomology, College of Agricultural Sciences, Penn State University, State College, PA, USA
  • 5 Department of Biology, Eberly College of Sciences, Penn State University, State College, PA, USA
  • 6 Center for Infectious Disease Dynamics, Huck Institutes of Life Sciences, Penn State University, State College, PA, USA

Crop diseases are a major threat to food security, but their rapid identification remains difficult in many parts of the world due to the lack of the necessary infrastructure. The combination of increasing global smartphone penetration and recent advances in computer vision made possible by deep learning has paved the way for smartphone-assisted disease diagnosis. Using a public dataset of 54,306 images of diseased and healthy plant leaves collected under controlled conditions, we train a deep convolutional neural network to identify 14 crop species and 26 diseases (or absence thereof). The trained model achieves an accuracy of 99.35% on a held-out test set, demonstrating the feasibility of this approach. Overall, the approach of training deep learning models on increasingly large and publicly available image datasets presents a clear path toward smartphone-assisted crop disease diagnosis on a massive global scale.

Introduction

Modern technologies have given human society the ability to produce enough food to meet the demand of more than 7 billion people. However, food security remains threatened by a number of factors including climate change ( Tai et al., 2014 ), the decline in pollinators ( Report of the Plenary of the Intergovernmental Science-PolicyPlatform on Biodiversity Ecosystem and Services on the work of its fourth session, 2016 ), plant diseases ( Strange and Scott, 2005 ), and others. Plant diseases are not only a threat to food security at the global scale, but can also have disastrous consequences for smallholder farmers whose livelihoods depend on healthy crops. In the developing world, more than 80 percent of the agricultural production is generated by smallholder farmers ( UNEP, 2013 ), and reports of yield loss of more than 50% due to pests and diseases are common ( Harvey et al., 2014 ). Furthermore, the largest fraction of hungry people (50%) live in smallholder farming households ( Sanchez and Swaminathan, 2005 ), making smallholder farmers a group that's particularly vulnerable to pathogen-derived disruptions in food supply.

Various efforts have been developed to prevent crop loss due to diseases. Historical approaches of widespread application of pesticides have in the past decade increasingly been supplemented by integrated pest management (IPM) approaches ( Ehler, 2006 ). Independent of the approach, identifying a disease correctly when it first appears is a crucial step for efficient disease management. Historically, disease identification has been supported by agricultural extension organizations or other institutions, such as local plant clinics. In more recent times, such efforts have additionally been supported by providing information for disease diagnosis online, leveraging the increasing Internet penetration worldwide. Even more recently, tools based on mobile phones have proliferated, taking advantage of the historically unparalleled rapid uptake of mobile phone technology in all parts of the world ( ITU, 2015 ).

Smartphones in particular offer very novel approaches to help identify diseases because of their computing power, high-resolution displays, and extensive built-in sets of accessories, such as advanced HD cameras. It is widely estimated that there will be between 5 and 6 billion smartphones on the globe by 2020. At the end of 2015, already 69% of the world's population had access to mobile broadband coverage, and mobile broadband penetration reached 47% in 2015, a 12-fold increase since 2007 ( ITU, 2015 ). The combined factors of widespread smartphone penetration, HD cameras, and high performance processors in mobile devices lead to a situation where disease diagnosis based on automated image recognition, if technically feasible, can be made available at an unprecedented scale. Here, we demonstrate the technical feasibility using a deep learning approach utilizing 54,306 images of 14 crop species with 26 diseases (or healthy) made openly available through the project PlantVillage ( Hughes and Salathé, 2015 ). An example of each crop—disease pair can be seen in Figure 1 .


Figure 1. Example of leaf images from the PlantVillage dataset, representing every crop-disease pair used. (1) Apple Scab, Venturia inaequalis (2) Apple Black Rot, Botryosphaeria obtusa (3) Apple Cedar Rust, Gymnosporangium juniperi-virginianae (4) Apple healthy (5) Blueberry healthy (6) Cherry healthy (7) Cherry Powdery Mildew, Podoshaera clandestine (8) Corn Gray Leaf Spot, Cercospora zeae-maydis (9) Corn Common Rust, Puccinia sorghi (10) Corn healthy (11) Corn Northern Leaf Blight, Exserohilum turcicum (12) Grape Black Rot, Guignardia bidwellii , (13) Grape Black Measles (Esca), Phaeomoniella aleophilum, Phaeomoniella chlamydospora (14) Grape Healthy (15) Grape Leaf Blight, Pseudocercospora vitis (16) Orange Huanglongbing (Citrus Greening), Candidatus Liberibacter spp. (17) Peach Bacterial Spot, Xanthomonas campestris (18) Peach healthy (19) Bell Pepper Bacterial Spot, Xanthomonas campestris (20) Bell Pepper healthy (21) Potato Early Blight, Alternaria solani (22) Potato healthy (23) Potato Late Blight, Phytophthora infestans (24) Raspberry healthy (25) Soybean healthy (26) Squash Powdery Mildew, Erysiphe cichoracearum (27) Strawberry Healthy (28) Strawberry Leaf Scorch, Diplocarpon earlianum (29) Tomato Bacterial Spot, Xanthomonas campestris pv. vesicatoria (30) Tomato Early Blight, Alternaria solani (31) Tomato Late Blight, Phytophthora infestans (32) Tomato Leaf Mold, Passalora fulva (33) Tomato Septoria Leaf Spot, Septoria lycopersici (34) Tomato Two Spotted Spider Mite, Tetranychus urticae (35) Tomato Target Spot, Corynespora cassiicola (36) Tomato Mosaic Virus (37) Tomato Yellow Leaf Curl Virus (38) Tomato healthy.

Computer vision, and object recognition in particular, has made tremendous advances in the past few years. The PASCAL VOC Challenge ( Everingham et al., 2010 ), and more recently the Large Scale Visual Recognition Challenge (ILSVRC) ( Russakovsky et al., 2015 ) based on the ImageNet dataset ( Deng et al., 2009 ) have been widely used as benchmarks for numerous visualization-related problems in computer vision, including object classification. In 2012, a large, deep convolutional neural network achieved a top-5 error of 16.4% for the classification of images into 1000 possible categories ( Krizhevsky et al., 2012 ). In the following 3 years, various advances in deep convolutional neural networks lowered the error rate to 3.57% ( Krizhevsky et al., 2012 ; Simonyan and Zisserman, 2014 ; Zeiler and Fergus, 2014 ; He et al., 2015 ; Szegedy et al., 2015 ). While training large neural networks can be very time-consuming, the trained models can classify images very quickly, which makes them also suitable for consumer applications on smartphones.

Deep neural networks have recently been successfully applied in many diverse domains as examples of end-to-end learning. Neural networks provide a mapping from an input (such as an image of a diseased plant) to an output (such as a crop-disease pair). The nodes in a neural network are mathematical functions that take numerical inputs from the incoming edges and provide a numerical output as an outgoing edge. Deep neural networks simply map the input layer to the output layer over a series of stacked layers of nodes. The challenge is to create a deep network in such a way that both the structure of the network as well as the functions (nodes) and edge weights correctly map the input to the output. Deep neural networks are trained by tuning the network parameters in such a way that the mapping improves during the training process. This process is computationally challenging and has in recent times been improved dramatically by a number of both conceptual and engineering breakthroughs ( LeCun et al., 2015 ; Schmidhuber, 2015 ).

In order to develop accurate image classifiers for the purposes of plant disease diagnosis, we needed a large, verified dataset of images of diseased and healthy plants. Until very recently, such a dataset did not exist, and even smaller datasets were not freely available. To address this problem, the PlantVillage project has begun collecting tens of thousands of images of healthy and diseased crop plants ( Hughes and Salathé, 2015 ), and has made them openly and freely available. Here, we report on the classification of 26 diseases in 14 crop species using 54,306 images with a convolutional neural network approach. We measure the performance of our models based on their ability to predict the correct crop-diseases pair, given 38 possible classes. The best performing model achieves a mean F 1 score of 0.9934 (overall accuracy of 99.35%), hence demonstrating the technical feasibility of our approach. Our results are a first step toward a smartphone-assisted plant disease diagnosis system.

Dataset Description

We analyze 54,306 images of plant leaves, which have a spread of 38 class labels assigned to them. Each class label is a crop-disease pair, and we make an attempt to predict the crop-disease pair given just the image of the plant leaf. Figure 1 shows one example each from every crop-disease pair from the PlantVillage dataset. In all the approaches described in this paper, we resize the images to 256 × 256 pixels, and we perform both the model optimization and predictions on these downscaled images.

Across all our experiments, we use three different versions of the whole PlantVillage dataset. We start with the PlantVillage dataset as it is, in color; then we experiment with a gray-scaled version of the PlantVillage dataset, and finally we run all the experiments on a version of the PlantVillage dataset where the leaves were segmented, hence removing all the extra background information which might have the potential to introduce some inherent bias in the dataset due to the regularized process of data collection in case of PlantVillage dataset. Segmentation was automated by the means of a script tuned to perform well on our particular dataset. We chose a technique based on a set of masks generated by analysis of the color, lightness and saturation components of different parts of the images in several color spaces (Lab and HSB). One of the steps of that processing also allowed us to easily fix color casts, which happened to be very strong in some of the subsets of the dataset, thus removing another potential bias.
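The segmentation script itself is tuned to this particular dataset and not reproduced in the text; as a rough, hedged illustration of mask-based segmentation driven by saturation, the sketch below thresholds the HSV saturation channel with OpenCV to separate a leaf from a plain background (the threshold and morphology settings are arbitrary).

```python
import cv2
import numpy as np

def segment_leaf(path, sat_thresh=40):
    """Crude illustration of mask-based leaf segmentation: leaves are
    usually more saturated than a plain laboratory background, so a
    threshold on the HSV saturation channel yields a foreground mask."""
    bgr = cv2.imread(path)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = (hsv[:, :, 1] > sat_thresh).astype(np.uint8)
    # Clean small holes and specks before applying the mask.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return bgr * mask[:, :, None]
```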

This set of experiments was designed to understand if the neural network actually learns the “notion” of plant diseases, or if it is just learning the inherent biases in the dataset. Figure 2 shows the different versions of the same leaf for a randomly selected set of leaves.


Figure 2. Sample images from the three different versions of the PlantVillage dataset used in various experimental configurations. (A) Leaf 1 color, (B) Leaf 1 grayscale, (C) Leaf 1 segmented, (D) Leaf 2 color, (E) Leaf 2 gray-scale, (F) Leaf 2 segmented.

Measurement of Performance

To get a sense of how our approaches would perform on new unseen data, and to track whether any of them overfit, we run all our experiments across a whole range of train-test set splits, namely 80–20 (80% of the whole dataset used for training, and 20% for testing), 60–40 (60% training, 40% testing), 50–50 (50% training, 50% testing), 40–60 (40% training, 60% testing) and finally 20–80 (20% training, 80% testing). It must be noted that in many cases, the PlantVillage dataset has multiple images of the same leaf (taken from different orientations); we have the mappings of such cases for 41,112 of the 54,306 images, and during all these train-test splits we make sure that all images of the same leaf go either into the training set or into the testing set. Further, for every experiment, we compute the mean precision, mean recall and mean F 1 score, along with the overall accuracy, at regular intervals (at the end of every epoch) over the whole period of training. We use the final mean F 1 score for the comparison of results across all of the different experimental configurations.
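Keeping all images of one physical leaf on the same side of the split is a grouped split. The original pipeline is not shown in this excerpt, but the sketch below expresses the same constraint with scikit-learn's GroupShuffleSplit (function and variable names are illustrative).

```python
from sklearn.model_selection import GroupShuffleSplit

# image_paths: list of image files; labels: crop-disease class per image;
# leaf_ids: identifier of the physical leaf each image was taken from.
def grouped_split(image_paths, labels, leaf_ids, test_size=0.2, seed=0):
    splitter = GroupShuffleSplit(n_splits=1, test_size=test_size,
                                 random_state=seed)
    # All images sharing a leaf_id land entirely in train or in test,
    # so near-duplicate views of one leaf never leak across the split.
    train_idx, test_idx = next(splitter.split(image_paths, labels,
                                              groups=leaf_ids))
    return train_idx, test_idx
```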

We evaluate the applicability of deep convolutional neural networks for the classification problem described above. We focus on two popular architectures, namely AlexNet ( Krizhevsky et al., 2012 ), and GoogLeNet ( Szegedy et al., 2015 ), which were designed in the context of the “Large Scale Visual Recognition Challenge” (ILSVRC) ( Russakovsky et al., 2015 ) for the ImageNet dataset ( Deng et al., 2009 ).

The AlexNet architecture (see Figure S2) follows the same design pattern as the LeNet-5 ( LeCun et al., 1989 ) architecture from the 1990s. The LeNet-5 architecture variants are usually a set of stacked convolution layers followed by one or more fully connected layers. The convolution layers may optionally have a normalization layer and a pooling layer right after them, and all the layers in the network usually have ReLU non-linear activation units associated with them. AlexNet consists of 5 convolution layers, followed by 3 fully connected layers, and finally ends with a softmax layer. The first two convolution layers (conv{1, 2}) are each followed by a normalization and a pooling layer, and the last convolution layer (conv5) is followed by a single pooling layer. The final fully connected layer (fc8) has 38 outputs in our adapted version of AlexNet (equaling the total number of classes in our dataset), which feeds the softmax layer. The softmax layer exponentially normalizes the input it gets from fc8, producing a distribution of values across the 38 classes that add up to 1. These values can be interpreted as the network's confidences that a given input image is represented by the corresponding classes. All of the first 7 layers of AlexNet have a ReLU non-linearity associated with them, and the first two fully connected layers (fc{6, 7}) each have a dropout layer associated with them, with a dropout ratio of 0.5.
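In torchvision's AlexNet, the layer called fc8 in the Caffe naming corresponds to the last entry of the classifier block; adapting it to the 38 classes can be sketched as below. The original work used Caffe, so this PyTorch snippet is an illustrative translation rather than the authors' code.

```python
import torch.nn as nn
from torchvision import models

# Load AlexNet with pretrained weights (the transfer-learning setting).
model = models.alexnet(weights="IMAGENET1K_V1")

# The final fully connected layer ("fc8" in the Caffe naming) originally
# maps to ImageNet's 1000 classes; re-initialize it with 38 outputs, one
# per crop-disease pair, feeding the softmax.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 38)
```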

The GoogLeNet architecture, on the other hand, is a much deeper and wider architecture with 22 layers, while still having a considerably lower number of parameters (5 million) than AlexNet (60 million). An application of the "network in network" architecture ( Lin et al., 2013 ) in the form of inception modules is a key feature of GoogLeNet. An inception module applies parallel 1 × 1, 3 × 3, and 5 × 5 convolutions along with a max-pooling layer, enabling it to capture a variety of features in parallel. In terms of practicality of the implementation, the amount of associated computation needs to be kept in check, which is why 1 × 1 convolutions are added before the above-mentioned 3 × 3 and 5 × 5 convolutions (and also after the max-pooling layer) for dimensionality reduction. Finally, a filter concatenation layer simply concatenates the outputs of all these parallel layers. While this forms a single inception module, a total of 9 inception modules is used in the version of the GoogLeNet architecture that we use in our experiments. A more detailed overview of this architecture can be found for reference in ( Szegedy et al., 2015 ).
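A compact PyTorch rendering of the inception module just described (parallel 1 × 1, 3 × 3 and 5 × 5 convolutions plus max-pooling, with 1 × 1 convolutions for dimensionality reduction, followed by filter concatenation) is sketched below; the branch widths are illustrative.

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """Parallel 1x1, 3x3 and 5x5 convolutions plus max-pooling, with 1x1
    'bottleneck' convolutions for dimensionality reduction, concatenated
    along the channel dimension (branch widths here are illustrative)."""
    def __init__(self, in_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 64, 1)
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, 96, 1), nn.ReLU(),
                                nn.Conv2d(96, 128, 3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, 16, 1), nn.ReLU(),
                                nn.Conv2d(16, 32, 5, padding=2))
        self.pool = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                  nn.Conv2d(in_ch, 32, 1))

    def forward(self, x):
        # Spatial size is preserved by the padding, so the four branch
        # outputs can be concatenated: 64 + 128 + 32 + 32 = 256 channels.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.pool(x)],
                         dim=1)
```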

We analyze the performance of both these architectures on the PlantVillage dataset by training the model from scratch in one case, and then by adapting already trained models (trained on the ImageNet dataset) using transfer learning. In case of transfer learning, we re-initialize the weights of layer fc8 in case of AlexNet, and of the loss {1,2,3}/classifier layers in case of GoogLeNet. Then, when training the model, we do not limit the learning of any of the layers, as is sometimes done for transfer learning. In other words, the key difference between these two learning approaches (transfer vs. training from scratch) is in the initial state of weights of a few layers, which lets the transfer learning approach exploit the large amount of visual knowledge already learned by the pre-trained AlexNet and GoogleNet models extracted from ImageNet ( Russakovsky et al., 2015 ).

To summarize, we have a total of 60 experimental configurations, which vary on the following parameters:

1. Choice of deep learning architecture:

AlexNet,

GoogLeNet.

2. Choice of training mechanism:

Transfer Learning,

Training from Scratch.

3. Choice of dataset type:

Color,

Gray scale,

Leaf Segmented.

4. Choice of training-testing set distribution:

Train: 80%, Test: 20%,

Train: 60%, Test: 40%,

Train: 50%, Test: 50%,

Train: 40%, Test: 60%,

Train: 20%, Test: 80%.

Throughout this paper, we have used the notation of Architecture:TrainingMechanism:DatasetType:Train-Test-Set-Distribution to refer to particular experiments. For instance, to refer to the experiment using the GoogLeNet architecture, which was trained using transfer learning on the gray-scaled PlantVillage dataset on a train—test set distribution of 60–40, we will use the notation GoogLeNet:TransferLearning:GrayScale:60–40 .

Each of these 60 experiments runs for a total of 30 epochs, where one epoch is defined as the number of training iterations in which the particular neural network has completed a full pass of the whole training set. The choice of 30 epochs was made based on the empirical observation that in all of these experiments, the learning always converged well within 30 epochs (as is evident from the aggregated plots (Figure 3 ) across all the experiments).


Figure 3. Progression of mean F 1 score and loss through the training period of 30 epochs across all experiments, grouped by experimental configuration parameters . The intensity of a particular class at any point is proportional to the corresponding uncertainty across all experiments with the particular configurations. (A) Comparison of progression of mean F 1 score across all experiments, grouped by deep learning architecture, (B) Comparison of progression of mean F 1 score across all experiments, grouped by training mechanism, (C) Comparison of progression of train-loss and test-loss across all experiments, (D) Comparison of progression of mean F 1 score across all experiments, grouped by train-test set splits, (E) Comparison of progression of mean F 1 score across all experiments, grouped by dataset type. A similar plot of all the observations, as it is, across all the experimental configurations can be found in the Supplementary Material.

To enable a fair comparison between the results of all the experimental configurations, we also standardized the hyper-parameters across all the experiments, using the following values in every experiment (an illustrative PyTorch approximation is sketched after the list):

• Solver type: Stochastic Gradient Descent,

• Base learning rate: 0.005,

• Learning rate policy: Step (the learning rate decreases by a factor of 10 every 10 epochs, i.e., 30/3),

• Momentum: 0.9,

• Weight decay: 0.0005,

• Gamma: 0.1,

• Batch size: 24 (in case of GoogLeNet), 100 (in case of AlexNet).
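The sketch below restates these solver settings in PyTorch terms; since the original runs used Caffe, this is an approximation of the configuration, not the authors' code, and the model is a stand-in.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 38)  # placeholder for AlexNet or GoogLeNet

# Stochastic Gradient Descent with the listed base LR, momentum, and weight decay
optimizer = torch.optim.SGD(model.parameters(), lr=0.005,
                            momentum=0.9, weight_decay=0.0005)

# "Step" policy: multiply the learning rate by gamma = 0.1 every 10 (= 30/3) epochs
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)

for epoch in range(30):
    # ... one full pass over the training set goes here ...
    scheduler.step()
```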

All the above experiments were conducted using our own fork of Caffe (Jia et al., 2014), a fast, open-source framework for deep learning. The basic results, such as the overall accuracy, can also be replicated using a standard instance of Caffe.

At the outset, we note that on a dataset with 38 class labels, random guessing will only achieve an overall accuracy of 2.63% on average. Across all our experimental configurations, which include three visual representations of the image data (see Figure 2), the overall accuracy we obtained on the PlantVillage dataset varied from 85.53% (in the case of AlexNet:TrainingFromScratch:GrayScale:80–20) to 99.34% (in the case of GoogLeNet:TransferLearning:Color:80–20), hence showing strong promise of the deep learning approach for similar prediction problems. Table 1 shows the mean F1 score, mean precision, mean recall, and overall accuracy across all our experimental configurations. All the experimental configurations run for a total of 30 epochs each, and they almost consistently converge after the first step down in the learning rate.

www.frontiersin.org

Table 1. Mean F1 score across various experimental configurations at the end of 30 epochs.

To address the issue of over-fitting, we vary the test-set-to-train-set ratio and observe that, even in the extreme case of training on only 20% of the data and testing the trained model on the remaining 80%, the model achieves an overall accuracy of 98.21% (mean F1 score of 0.9820) in the case of GoogLeNet:TransferLearning:Color:20–80. As expected, the overall performance of both AlexNet and GoogLeNet does degrade if we keep increasing the test-set-to-train-set ratio (see Figure 3D), but the decrease in performance is not as drastic as we would expect if the model were indeed over-fitting. Figure 3C also shows that there is no divergence between the validation loss and the training loss, confirming that over-fitting does not contribute to the results we obtain across all our experiments.

Among the AlexNet and GoogLeNet architectures, GoogLeNet consistently performs better than AlexNet (Figure 3A ), and based on the method of training, transfer learning always yields better results (Figure 3B ), both of which were expected.

The three versions of the dataset (color, gray-scale, and segmented) show a characteristic variation in performance across all the experiments when we keep the rest of the experimental configuration constant. The models perform best on the colored version of the dataset. When designing the experiments, we were concerned that the neural networks might only learn to pick up the inherent biases associated with the lighting conditions and the method and apparatus of data collection. We therefore experimented with the gray-scaled version of the same dataset to test the model's adaptability in the absence of color information and its ability to learn higher-level structural patterns typical of particular crops and diseases. As expected, the performance did decrease compared with the experiments on the colored version of the dataset, but even in the case of the worst performance, the observed mean F1 score was 0.8524 (overall accuracy of 85.53%). The segmented version of the whole dataset was also prepared to investigate the role of the image background in overall performance, and as shown in Figure 3E, the performance of the model using segmented images is consistently better than that of the model using gray-scaled images, but slightly lower than that of the model using the colored version of the images.

While these approaches yield excellent results on the PlantVillage dataset, which was collected in a controlled environment, we also assessed the model's performance on images sampled from trusted online sources, such as academic agriculture extension services. Such images are not available in large numbers, and using a combination of automated downloads from Bing Image Search and IPM Images with a visual verification step, we obtained two small, verified datasets of 121 images (dataset 1) and 119 images (dataset 2), respectively (see Supplementary Material for a detailed description of the process). Using the best model on these datasets, we obtained an overall accuracy of 31.40% on dataset 1 and 31.69% on dataset 2 in successfully predicting the correct class label (i.e., crop and disease information) from among 38 possible class labels. We note that a random classifier would obtain an average accuracy of only 2.63%. Across all images, the correct class was in the top-5 predictions in 52.89% of the cases in dataset 1 and in 65.61% of the cases in dataset 2. The best models were GoogLeNet:TransferLearning:Segmented:80–20 for dataset 1 and GoogLeNet:TransferLearning:Color:80–20 for dataset 2. An example image from these datasets, along with a visualization of activations in the initial layers of an AlexNet architecture, can be seen in Figure 4.


Figure 4. Visualization of activations in the initial layers of an AlexNet architecture, demonstrating that the model has learned to efficiently activate against the diseased spots on the example leaf. (A) Example image of a leaf suffering from Apple Cedar Rust, selected from the top-20 images returned by Bing Image Search for the keywords "Apple Cedar Rust Leaves" on April 4th, 2016. Image reference: Clemson University - USDA Cooperative Extension Slide Series, Bugwood.org. (B) Visualization of activations in the first convolution layer (conv1) of an AlexNet architecture trained as AlexNet:TrainingFromScratch:Color:80–20 when doing a forward pass on the image shown in panel (A).

So far, all results have been reported under the assumption that the model needs to detect both the crop species and the disease status. We can limit the challenge to a more realistic scenario where the crop species is provided, as it can be expected to be known by those growing the crops. To assess the performance of the model under this scenario, we limit ourselves to crops with at least n ≥ 2 (to avoid trivial classification) or n ≥ 3 classes per crop. In the n ≥ 2 case, dataset 1 contains 33 classes distributed among 9 crops. Random guessing in such a dataset would achieve an accuracy of 0.225, while our model has an accuracy of 0.478. In the n ≥ 3 case, the dataset contains 25 classes distributed among 5 crops. Random guessing in such a dataset would achieve an accuracy of 0.179, while our model has an accuracy of 0.411.

Similarly, in the n ≥ 2 case, dataset 2 contains 13 classes distributed among 4 crops. Random guessing in such a dataset would achieve an accuracy of 0.314, while our model has an accuracy of 0.545. In the n ≥ 3 case, the dataset contains 11 classes distributed among 3 crops. Random guessing in such a dataset would achieve an accuracy of 0.288, while our model has an accuracy of 0.485.
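One plausible way to compute such a baseline, assuming the random guesser picks uniformly among the known crop's classes for each image, is sketched below; the class counts in the example are invented placeholders, not the datasets' actual counts.

```python
def random_baseline(images_per_crop_class):
    """images_per_crop_class maps crop -> list of per-class image counts.
    A uniform guess among a crop's n classes is correct with probability 1/n;
    the baseline is the expected hit rate averaged over all images."""
    total, expected_hits = 0, 0.0
    for counts in images_per_crop_class.values():
        n_classes = len(counts)
        for c in counts:
            total += c
            expected_hits += c / n_classes
    return expected_hits / total

# Illustrative counts only: two crops with three and two disease classes
print(random_baseline({"tomato": [40, 25, 10], "apple": [30, 16]}))
```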

The performance of convolutional neural networks in object recognition and image classification has made tremendous progress in the past few years (Krizhevsky et al., 2012; Simonyan and Zisserman, 2014; Zeiler and Fergus, 2014; He et al., 2015; Szegedy et al., 2015). Previously, the traditional approach for image classification tasks was based on hand-engineered features, such as SIFT (Lowe, 2004), HoG (Dalal and Triggs, 2005), and SURF (Bay et al., 2008), followed by some form of learning algorithm in these feature spaces. The performance of these approaches thus depended heavily on the underlying predefined features. Feature engineering itself is a complex and tedious process which needs to be revisited every time the problem at hand or the associated dataset changes considerably. This problem occurs in all traditional attempts to detect plant diseases using computer vision, as they lean heavily on hand-engineered features, image enhancement techniques, and a host of other complex and labor-intensive methodologies.

In addition, traditional approaches to disease classification via machine learning typically focus on a small number of classes, usually within a single crop. Examples include a feature extraction and classification pipeline using thermal and stereo images to classify tomato powdery mildew against healthy tomato leaves (Raza et al., 2015); the detection of powdery mildew in uncontrolled environments using RGB images (Hernández-Rabadán et al., 2014); the use of RGBD images for detection of apple scab (Chéné et al., 2012); the use of fluorescence imaging spectroscopy for detection of citrus huanglongbing (Wetterich et al., 2012); the detection of citrus huanglongbing using near-infrared spectral patterns (Sankaran et al., 2011) and aircraft-based sensors (Garcia-Ruiz et al., 2013); the detection of tomato yellow leaf curl virus using a set of classic feature extraction steps followed by classification with a support vector machine pipeline (Mokhtar et al., 2015); and many others. A very recent review on the use of machine learning in plant phenotyping (Singh et al., 2015) extensively discusses the work in this domain. While neural networks have been used before in plant disease identification (Huang, 2007) (for the classification and detection of Phalaenopsis seedling diseases such as bacterial soft rot, bacterial brown spot, and Phytophthora black rot), the approach required representing the images using a carefully selected list of texture features before the neural network could classify them.

Our approach is based on the recent work of Krizhevsky et al. (2012), which showed for the first time that end-to-end supervised training using a deep convolutional neural network architecture is a practical possibility even for image classification problems with a very large number of classes, beating the traditional approaches using hand-engineered features by a substantial margin in standard benchmarks. The absence of the labor-intensive feature engineering phase and the generalizability of the solution make deep convolutional neural networks a very promising candidate for a practical and scalable approach to computational inference of plant diseases.

Using the deep convolutional neural network architecture, we trained a model on images of plant leaves with the goal of classifying, on images that the model had not seen before, both the crop species and the presence and identity of disease. Within the PlantVillage dataset of 54,306 images containing 38 classes of 14 crop species and 26 diseases (or absence thereof), this goal has been achieved, as demonstrated by the top accuracy of 99.35%. Thus, without any feature engineering, the model correctly classifies crop and disease from 38 possible classes in 993 out of 1000 images. Importantly, while the training of the model takes a lot of time (multiple hours on a high-performance GPU cluster), the classification itself is very fast (less than a second on a CPU) and can thus easily be implemented on a smartphone. This presents a clear path toward smartphone-assisted crop disease diagnosis on a massive global scale.

However, there are a number of limitations at the current stage that need to be addressed in future work. First, when tested on a set of images taken under conditions different from the images used for training, the model's accuracy is reduced substantially, to just above 31%. It's important to note that this accuracy is much higher than the 2.63% expected from random guessing among 38 classes, but a more diverse set of training data is nevertheless needed to improve the accuracy. Our current results indicate that more (and more variable) data alone will be sufficient to substantially increase the accuracy, and corresponding data collection efforts are underway.

The second limitation is that we are currently constrained to the classification of single leaves, facing up, on a homogeneous background. While these are straightforward conditions, a real world application should be able to classify images of a disease as it presents itself directly on the plant. Indeed, many diseases don't present themselves on the upper side of leaves only (or at all), but on many different parts of the plant. Thus, new image collection efforts should try to obtain images from many different perspectives, and ideally from settings that are as realistic as possible.

At the same time, by using 38 classes that contain both crop species and disease status, we have made the challenge harder than ultimately necessary from a practical perspective, as growers are expected to know which crops they are growing. Given the very high accuracy on the PlantVillage dataset, limiting the classification challenge to the disease status won't have a measurable effect. However, on the real world datasets, we can measure noticeable improvements in accuracy. Overall, the presented approach works reasonably well with many different crop species and diseases, and is expected to improve considerably with more training data.

Finally, it's worth noting that the approach presented here is not intended to replace existing solutions for disease diagnosis, but rather to supplement them. Laboratory tests are ultimately always more reliable than diagnoses based on visual symptoms alone, and early-stage diagnosis via visual inspection alone is often challenging. Nevertheless, given the expectation of more than 5 billion smartphones in the world by 2020, of which almost a billion in Africa (GSMA Intelligence, 2016), we do believe that the approach represents a viable additional method to help prevent yield loss. What's more, in the future, image data from a smartphone may be supplemented with location and time information for additional improvements in accuracy. Last but not least, it would be prudent to keep in mind the stunning pace at which mobile technology has developed in the past few years, and will continue to do so. With the ever-improving number and quality of sensors on mobile devices, we consider it likely that highly accurate diagnoses via the smartphone are only a question of time.

Author Contributions

MS, DH, and SM conceived the study and wrote the paper. SM implemented the algorithm described.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

We thank Boris Conforty for help with the segmentation. We thank Kelsee Baranowski, Ryan Bringenberg, and Megan Wilkerson for taking the images and Kelsee Baranowski for image curation. We thank Anna Sostarecz, Kaity Gonzalez, Ashtyn Goodreau, Kalley Veit, Ethan Keller, Parand Jalili, Emma Volk, Nooeree Samdani, Kelsey Pryze for additional help with image curation. We thank EPFL, and the Huck Institutes at Penn State University for support. We are particularly grateful for access to EPFL GPU cluster computing resources.

Supplementary Material

The Supplementary Material for this article can be found online at: http://journal.frontiersin.org/article/10.3389/fpls.2016.01419

The data and the code used in this paper are available at the following locations:

Data: https://github.com/salathegroup/plantvillage_deeplearning_paper_dataset

Code: https://github.com/salathegroup/plantvillage_deeplearning_paper_analysis

More image data can be found at https://www.plantvillage.org/en/plant_images

References

Bay, H., Ess, A., Tuytelaars, T., and Van Gool, L. (2008). Speeded-up robust features (SURF). Comput. Vis. Image Underst. 110, 346–359. doi: 10.1016/j.cviu.2007.09.014


Chéné, Y., Rousseau, D., Lucidarme, P., Bertheloot, J., Caffier, V., Morel, P., et al. (2012). On the use of depth camera for 3d phenotyping of entire plants. Comput. Electron. Agric. 82, 122–127. doi: 10.1016/j.compag.2011.12.007

Dalal, N., and Triggs, B. (2005). “Histograms of oriented gradients for human detection,” in Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on. (IEEE) (Washington, DC).

Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). "ImageNet: a large-scale hierarchical image database," in Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on (IEEE).


Ehler, L. E. (2006). Integrated pest management (IPM): definition, historical development and implementation, and the other IPM. Pest Manag. Sci. 62, 787–789. doi: 10.1002/ps.1247


Everingham, M., Van Gool, L., Williams, C. K., Winn, J., and Zisserman, A. (2010). The pascal visual object classes (voc) challenge. Int. J. Comput. Vis. 88, 303–338. doi: 10.1007/s11263-009-0275-4

Garcia-Ruiz, F., Sankaran, S., Maja, J. M., Lee, W. S., Rasmussen, J., and Ehsani, R. (2013). Comparison of two aerial imaging platforms for identification of huanglongbing-infected citrus trees. Comput. Electron. Agric. 91, 106–115. doi: 10.1016/j.compag.2012.12.002

GSMA Intelligence (2016). The Mobile Economy- Africa 2016 . London: GSMA.

Harvey, C. A., Rakotobe, Z. L., Rao, N. S., Dave, R., Razafimahatratra, H., Rabarijohn, R. H., et al. (2014). Extreme vulnerability of smallholder farmers to agricultural risks and climate change in Madagascar. Philos. Trans. R. Soc. Lond. B Biol. Sci. 369:20130089. doi: 10.1098/rstb.2013.0089

He, K., Zhang, X., Ren, S., and Sun, J. (2015). Deep residual learning for image recognition. arXiv:1512.03385.


Hernández-Rabadán, D. L., Ramos-Quintana, F., and Guerrero Juk, J. (2014). Integrating soms and a bayesian classifier for segmenting diseased plants in uncontrolled environments. Sci. World J. 2014:214674. doi: 10.1155/2014/214674

Huang, K. Y. (2007). Application of artificial neural network for detecting phalaenopsis seedling diseases using color and texture features. Comput. Electron. Agric. 57, 3–11. doi: 10.1016/j.compag.2007.01.015

Hughes, D. P., and Salathé, M. (2015). An open access repository of images on plant health to enable the development of mobile disease diagnostics. arXiv:1511.08060

ITU (2015). ICT Facts and Figures – the World in 2015. Geneva: International Telecommunication Union.

Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., et al. (2014). Caffe: Convolutional architecture for fast feature embedding. arXiv:1408.5093.

Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems , eds F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger (Curran Associates, Inc.), 1097–1105.

LeCun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., et al. (1989). Backpropagation applied to handwritten zip code recognition. Neural Comput. 1, 541–551. doi: 10.1162/neco.1989.1.4.541

LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep learning. Nature 521, 436–444. doi: 10.1038/nature14539

Lin, M., Chen, Q., and Yan, S. (2013). Network in network. arXiv:1312.4400.

Lowe, D. G. (2004). Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60, 91–110. doi: 10.1023/B:VISI.0000029664.99615.94

Mokhtar, U., Ali, M. A., Hassanien, A. E., and Hefny, H. (2015). “Identifying two of tomatoes leaf viruses using support vector machine,” in Information Systems Design and Intelligent Applications , eds J. K. Mandal, S. C. Satapathy, M. K. Sanyal, P. P. Sarkar, A. Mukhopadhyay (Springer), 771–782.

Raza, S.-A., Prince, G., Clarkson, J. P., Rajpoot, N. M., et al. (2015). Automatic detection of diseased tomato plants using thermal and stereo visible light images. PLoS ONE 10:e0123262. doi: 10.1371/journal.pone.0123262. Available online at: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0123262

Report of the Plenary of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services on the work of its fourth session (2016). Kuala Lumpur. Available online at: http://www.ipbes.net/sites/default/files/downloads/pdf/IPBES-4-4-19-Amended-Advance.pdf

Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., et al. (2015). ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115, 211–252. doi: 10.1007/s11263-015-0816-y

Sanchez, P. A., and Swaminathan, M. S. (2005). Cutting world hunger in half. Science 307, 357–359. doi: 10.1126/science.1109057

Sankaran, S., Mishra, A., Maja, J. M., and Ehsani, R. (2011). Visible-near infrared spectroscopy for detection of huanglongbing in citrus orchards. Comput. Electron. Agric. 77, 127–134. doi: 10.1016/j.compag.2011.03.004

Schmidhuber, J. (2015). Deep learning in neural networks: an overview. Neural Netw. 61, 85–117. doi: 10.1016/j.neunet.2014.09.003

Simonyan, K., and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556.

Singh, A., Ganapathysubramanian, B., Singh, A. K., and Sarkar, S. (2015). Machine learning for high-throughput stress phenotyping in plants. Trends Plant Sci. 21, 110–124. doi: 10.1016/j.tplants.2015.10.015


Strange, R. N., and Scott, P. R. (2005). Plant disease: a threat to global food security. Phytopathology 43, 83–116. doi: 10.1146/annurev.phyto.43.113004.133839

Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., et al. (2015). “Going deeper with convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition .

Tai, A. P., Martin, M. V., and Heald, C. L. (2014). Threat to future global food security from climate change and ozone air pollution. Nat. Clim. Chang 4, 817–821. doi: 10.1038/nclimate2317

UNEP (2013). Smallholders, Food Security, and the Environment. Rome: International Fund for Agricultural Development (IFAD). Available online at: https://www.ifad.org/documents/10180/666cac2414b643c2876d9c2d1f01d5dd

Wetterich, C. B., Kumar, R., Sankaran, S., Junior, J. B., Ehsani, R., and Marcassa, L. G. (2012). A comparative study on application of computer vision and fluorescence imaging spectroscopy for detection of huanglongbing citrus disease in the usa and brazil. J. Spectrosc. 2013:841738. doi: 10.1155/2013/841738

Zeiler, M. D., and Fergus, R. (2014). “Visualizing and understanding convolutional networks,” in Computer Vision–ECCV 2014 , eds D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars (Springer), 818–833.

Keywords: crop diseases, machine learning, deep learning, digital epidemiology

Citation: Mohanty SP, Hughes DP and Salathé M (2016) Using Deep Learning for Image-Based Plant Disease Detection. Front. Plant Sci. 7:1419. doi: 10.3389/fpls.2016.01419

Received: 19 June 2016; Accepted: 06 September 2016; Published: 22 September 2016.


Copyright © 2016 Mohanty, Hughes and Salathé. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Marcel Salathé, [email protected]



Early Detection and Classification of Tomato Leaf Disease Using High-Performance Deep Neural Network

Naresh K. Trivedi

1 Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, India; [email protected] (N.K.T.); [email protected] (A.A.)

Vinay Gautam

2 School of Computing, DIT University, Dehradun 248009, India; [email protected]

Abhineet Anand

Hani Moaiteq Aljahdali

3 Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 37848, Saudi Arabia; [email protected]

Santos Gracia Villar

4 Higher Polytechnic School/Industrial Organization Engineering, Universidad Europea del Atlántico, Isabel Torres 21, 39011 Santander, Spain; [email protected]

5 Department of Project Management, Universidad Internacional Iberoamericana, Campeche 24560, Mexico

Divya Anand

6 Department of Computer Science and Engineering, Lovely Professional University, Phagwara 144411, India; [email protected]

Nitin Goyal

Seifedine Kadry

7 Faculty of Applied Computing and Technology, Noroff University College, 4608 Kristiansand, Norway; [email protected]

Associated Data

Not applicable.

Tomato is one of the most essential and widely consumed crops in the world. Tomato yields vary depending on how the plants are fertilized, and leaf disease is the primary factor impacting the quantity and quality of the crop. As a result, it is critical to diagnose and classify these disorders appropriately. Different kinds of diseases influence tomato production, and earlier identification of these diseases would reduce their effect on tomato plants and enhance crop yield. Many innovative ways of identifying and classifying such diseases have been used extensively. The motive of this work is to support farmers in accurately identifying early-stage diseases and to inform them about these diseases. A Convolutional Neural Network (CNN) is used to effectively define and classify tomato diseases. Google Colab is used to conduct the complete experiment with a dataset containing 3000 images of tomato leaves affected by nine different diseases plus a healthy class. The complete process is as follows: first, the input images are preprocessed and the targeted areas are segmented from the original images; second, the images are further processed with varying hyper-parameters of the CNN model; finally, the CNN extracts characteristics from the pictures, such as colors, texture, and edges. The findings demonstrate that the proposed model's predictions are 98.49% accurate.

1. Introduction

Plants are an integral part of our lives because they produce food and shield us from dangerous radiation. Without plants, no life is imaginable; they sustain all terrestrial life and defend the ozone layer, which filters ultraviolet radiation. Tomato is a food-rich plant, a consumable vegetable that is widely cultivated [ 1 ]. Worldwide, approximately 160 million tons of tomatoes are consumed annually [ 2 ]. The tomato, a significant contributor to reducing poverty, is seen as an income source for farm households [ 3 ]. Tomatoes are one of the most nutrient-dense crops on the planet, and their cultivation and production have a significant impact on the agricultural economy. Not only is the tomato nutrient-dense, but it also possesses pharmacological properties that protect against diseases such as hypertension, hepatitis, and gingival bleeding [ 1 ]. Tomato demand is also increasing as a result of its widespread use. According to statistics, small farmers produce more than 80% of agricultural output [ 2 ], and about 50% of their crops are lost to diseases and pests. Diseases and parasitic insects are the key factors impacting tomato growth, making it necessary to research field-crop disease diagnosis.

The manual identification of pests and pathogens is inefficient and expensive; it is therefore necessary to provide farmers with automated, AI-based image solutions. Images are accepted as a reliable means of identifying disease in image-based computer vision applications owing to the availability of appropriate software packages and tools. Such solutions rely on image processing, an intelligent image-identification technology that increases recognition efficiency, lowers costs, and improves recognition accuracy [ 3 ].

Although plants are necessary for existence, they face numerous obstacles. An early and accurate diagnosis helps decrease the risk of ecological damage. Without systematic disease identification, product quality and quantity suffer, which has a further detrimental effect on a country's economy [ 1 ]. Agricultural production must expand by 70% by 2050 to meet global food demands, according to the United Nations Food and Agriculture Organization (FAO) [ 2 ]. At the same time, chemicals used to prevent diseases, such as fungicides and bactericides, negatively impact the agricultural ecosystem. We therefore need quick and effective disease classification and detection techniques that can help the agro-ecosystem. Advanced disease-detection technology, such as image processing and neural networks, will allow the design of systems capable of early disease detection for tomato plants; plant production can otherwise be reduced by 50% as a result of such stress [ 1 ]. Inspecting the plant is the first step in finding disease; the next step is deciding how to treat it based on prior experience [ 3 ]. This method lacks scientific consistency because farmers' backgrounds differ, making the process less reliable. There is a possibility that farmers will misclassify a disease, and an incorrect treatment will damage the plant. Similarly, field visits by domain specialists are expensive. There is a need for the development of automated, image-based disease detection and classification methods that can take the role of the domain expert.

It is necessary to tackle the leaf disease issue with an appropriate solution [ 4 , 5 ]. Tomato disease control is a complex process that consistently accounts for a substantial fraction of production cost during the season [ 6 , 7 , 8 , 9 ]. Vegetable diseases such as bacterial spot, late blight, leaf spot, tomato mosaic, and yellow leaf curl are prevalent. They seriously affect plant growth, leading to reduced product quality and quantity [ 10 ]. As per past research, 80–90% of plant diseases appear on leaves [ 11 ]. Tracking the farm and recognizing the different forms of disease on infected plants takes a long time, and farmers' evaluation of the type of plant disease might be wrong. Such a decision could lead to insufficient and counterproductive defense measures being applied to the plant. Early detection can reduce processing costs, reduce the environmental impact of chemical inputs, and minimize the risk of loss [ 12 , 13 , 14 ].

Many solutions have been proposed with the advent of technology, and in this paper similar solutions are used to recognize leaf diseases. The main objective is to make the lesion more apparent compared with other image regions. Problems such as (1) shifts in illumination and spectral reflectance, (2) poor input-image contrast, and (3) variation in image size and form have been encountered. Pre-processing operations include image contrast enhancement, grayscale conversion, image resizing, and image cropping and filtering [ 15 , 16 , 17 ]. The next step is the division of the image into objects, which are used to determine regions of interest, i.e., infected regions in the image [ 18 ]. Unfortunately, the segmentation method has several problems:

  • Color-based segmentation fails when the lighting conditions differ from those of the eligible photographs.
  • Region-based segmentation is sensitive to the initial seed selection.
  • Texture-based varieties take too long to process.

The next step, classification, determines which class a sample belongs to; one or more input variables of the procedure are examined, and occasionally the method is employed to identify a particular type of input. Improving classification accuracy is by far the greatest classification challenge. Finally, real data, dissimilar to the training set, are used to create and validate test datasets.

The rest of the paper is organized as follows: Section 2 reviews the extant literature. Then, the material method and process are described in Section 3 . Next, the results analysis and discussion are explained in Section 4 . Finally, Section 5 is the conclusion.

2. Related Work

Various researchers have used cutting-edge technology such as machine learning and neural network architectures like Inception V3 net, VGG 16 net, and SqueezeNet to construct automated disease detection systems. These use highly accurate methods for identifying plant disease in tomato leaves. In addition, researchers have proposed many deep learning-based solutions in disease detection and classification, as discussed below in [ 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , 40 , 41 , 42 , 43 , 44 , 45 , 46 , 47 , 48 ].

A pre-trained network model for detecting and classifying tomato disease has been proposed with 94–95% accuracy [ 19 , 20 ]. A tree classification model with segmentation has been used to detect and classify six different types of tomato leaf disease on a dataset of 300 images [ 21 ]. A technique has been proposed to detect and classify plant leaf disease with an accuracy of 93.75% [ 22 ]. Image processing technology and classification algorithms can detect and classify plant leaf disease with better quality [ 23 ]. Here, an 8-megapixel smartphone camera was used to collect sample data, which were divided into 50% healthy and 50% unhealthy categories. The image processing procedure includes three elements: contrast improvement, segmentation, and feature extraction. Classification was performed via an artificial neural network using a multi-layer feed-forward structure, and two types of network structures were compared; the result was better than those of the Multilayer Perceptron (MLP) and Radial Basis Function (RBF) networks. This approach only divides the leaf image into healthy and unhealthy classes; it cannot detect the form of the disease. Other authors identified leaf diseases through color space analysis, color moments, histograms, and color coherence, achieving 87.2% classification accuracy [ 24 ].

AlexNet and VGG 19 models have been used to diagnose diseases affecting tomato crops on a dataset of 13,262 images, achieving 97.49% precision [ 25 ]. A transfer learning CNN model has been used to accurately detect crop-infecting diseases, reaching 95% [ 26 ]. A neural network for determining and classifying tomato plant leaf conditions, using transfer learning with an AlexNet-based deep learning mechanism, achieved an accuracy of 95.75% [ 27 , 28 ]. ResNet-50 has been used to identify tomato leaf diseases from 1000 pictures per class, a total of 3000 pictures, covering lesion blight, late blight, and yellow curl leaf. The network's activation function was changed to Leaky-ReLU, and the kernel size of the first convolution layer was updated to 11 × 11. The model predicts the class of diseases with an accuracy of 98.30% and a precision of 98.0% after several repetitions [ 29 ]. A simplified eight-layered CNN model has been proposed to detect and classify tomato leaf disease [ 30 ]. In that paper, the author utilized the PlantVillage dataset [ 31 ], which contains datasets for several crops; the tomato leaf dataset was selected and used to perform deep learning over the disease classes, achieving a better accuracy rate.

A simple CNN model with eight hidden layers has been used to identify the conditions of a tomato plant, and the proposed techniques show optimal results compared with other classical models [ 32 , 33 , 34 , 35 ]. An image processing technique using deep learning methods has also been applied to identify and classify tomato plant diseases [ 36 ]; here, the author used a segmentation technique together with a CNN to implement a complete system, and a variation of the CNN model was adopted and applied to achieve better accuracy.

LeNet has been used to identify and classify tomato diseases with minimal CPU processing resources, and an automatic feature extraction technique has been employed to improve classification accuracy [ 37 ]. The ResNet 50 model has been used to classify and identify tomato disease in multiple steps: firstly, by segregating the disease dataset; secondly, by adapting and adjusting the model based on the transfer learning approach; and lastly, by enhancing the quality of the model using data augmentation. The model was then validated on the dataset, outperforming various legacy methods and achieving 97% accuracy [ 38 ]. Hyperspectral images have been used to identify rice leaf diseases by evaluating the different spectral responses of leaf blade fractions, identifying sheath blight (ShB) [ 39 ], and a spectral library has been created using different disease samples [ 40 ]. An improved VGG16 has been used to identify apple leaf disease with an accuracy rate of 99.01% [ 41 ].

The author of [ 42 ] employed image processing, segmentation, and a CNN to classify leaf disease, attempting to identify and classify tomato diseases in fields and greenhouse plants; deep learning and a robot were used in real time to identify plant diseases from the sensor's images, with AlexNet and SqueezeNet as the deep learning architectures [ 42 ]. Other authors built convolutional neural network models using leaf pictures of healthy and sick plants: an open-source PlantVillage dataset with 87,848 images of 25 plants classified into 58 [plant, disease] categories was used, and the model identified plant/disease pairs (or healthy plants) with a 99.53% success rate. The authors suggest constructing a real-time plant disease diagnosis system based on the proposed model [ 43 ].

In [ 44 ], the authors reviewed CNN variants for plant disease classification and summarized the deep learning principles used for leaf disease identification and classification. They focused mainly on the latest CNN models, such as VGG16, VGG19, and ResNet, evaluated their performance, and discussed the pros, cons, and future prospects of the different variants [ 44 ].

Another work focused on investigating an optimal solution for plant leaf disease detection, proposing a segmentation-based CNN as the best solution to the defined problem. The model is trained on segmented images, in contrast to other models trained on complete images. It outperformed the alternatives, achieving 98.6% classification accuracy, and was trained and tested on independent data with ten disease classes [ 45 ].

A detailed learning technique for the identification of disease in tomato leaves using enhanced CNNs is presented in this article.

  • The dataset for tomato leaves is built using data augmentation and image annotation tools. It consists of laboratory photos and images captured in actual field situations.
  • The recognition of tomato leaf diseases is proposed using a Deep Convolutional Neural Network (DCNN); rainbow concatenation and the GoogLeNet Inception V3 structure are included.
  • In the proposed INAR-SSD model, the Inception V3 module and rainbow concatenation detect five frequent tomato leaf diseases.

The testing results show that the INAR-SSD model achieves a detection rate of 23.13 frames per second and a detection performance of 78.80% mAP on the Apple Leaf Disease Dataset (ALDD). Furthermore, the results indicate that the innovative INAR-SSD (SSD with Inception module and rainbow concatenation) model produces more accurate and faster results for the early identification of tomato leaf diseases than other methods [ 46 ].

An EfficientNet convolutional neural network, with 18,161 plain and segmented tomato leaf images, has been used to classify tomato diseases. Two leaf segmentation models, U-net and Modified U-net, were evaluated, and the models' ability was examined in binary (healthy vs. diseased) as well as six-class and ten-class settings. The improved U-net segmentation model correctly segmented 98.66% of the leaf pictures. EfficientNet-B7 surpassed 99.95% and 99.12% accuracy for binary and six-class classification, respectively, and EfficientNet-B4 classified images into ten classes with 99.89% accuracy [ 47 ].

Disease detection is crucial for crop output, which has led academics to focus on agricultural ailments. One such study presents a deep convolutional neural network with an attention mechanism for analyzing tomato leaf diseases. The network structure contains attention-extraction blocks and modules, so it can detect a broad spectrum of diseases. In tests, the model achieves 99.24% accuracy, with modest network complexity and real-time adaptability [ 48 ].

Convolutional Neural Networks (CNNs) have revolutionized image processing, especially deep learning methods. Over the last two years, numerous potential autonomous crop disease detection applications have emerged. These models can be used to develop an expert consultation app or a screening app. These tools may help enhance sustainable farming practices and food security. The authors looked at 19 studies that employed CNNs to identify plant diseases and assess their overall utility [ 49 ].

The studies above relied largely on the PlantVillage dataset for their illustrations, and their authors did not evaluate the neural network topologies using typical performance metrics such as F1-score, recall, and precision; instead, they assessed only the model's accuracy and inference time. This article proposes a new deep neural network model and evaluates it using a variety of evaluation metrics.

3. Materials and Methods

This section describes the methodologies, models, and datasets used to obtain the results.

3.1. Dataset

There were ten unique classes in the sample: tomato leaves of nine types were infected, and one class was healthy. We used reference photos and disease names to identify our dataset classes, as shown in Figure 1.

Figure 1. Sample leaf image with disease and pathogen for (a) Bacterial_Spot (Xanthomonas vesicatoria), (b) Early_Blight (fungus Alternaria solani), (c) Late_Blight (Phytophthora infestans), (d) Leaf_Mold (Cladosporium fulvum), (e) Septoria_Leaf_Spot (fungus Septoria lycopersici), (f) Spider_Mites (floridana), (g) Target_Spot (fungus Corynespora), (h) Tomato_Mosaic_Virus (Tobamovirus), (i) Tomato_Yellow_Leaf_Curl_Virus (genus Begomovirus), (j) Healthy_Leaf.

In the experiment, the complete dataset was split in an 80:20 ratio between training data and testing/validation data.

3.2. Image Pre-Processing and Labelling

Before training the model, image pre-processing was used to adjust or enhance the raw images to be processed by the CNN classifier. Building a successful model requires analyzing both the design of the network and the format of the input data. We pre-processed our dataset so that the proposed model could extract the appropriate features from each image. The first step was to normalize the size of the picture by resizing it to 256 × 256 pixels. The images were then converted to grayscale. This pre-processing stage means that a considerable amount of training data is required for explicit learning of the training-data features. The next step was to group the tomato leaf pictures by type and then mark all images with the correct acronym for the disease. The dataset thus comprised ten classes in both the test collection and the training set.
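A minimal sketch of this pre-processing step, assuming the Pillow library and an illustrative file path, could look as follows.

```python
from PIL import Image

def preprocess(path):
    """Normalize a leaf image: resize to 256 x 256 pixels, then grayscale."""
    img = Image.open(path)
    img = img.resize((256, 256))  # normalize the picture size
    return img.convert("L")       # grayscale conversion

leaf = preprocess("tomato_leaf.jpg")  # placeholder filename
```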

3.3. Training Dataset

Preparing the dataset was the first stage in processing the existing data. The Convolutional Neural Network then takes the image data as input, eventually forming a model whose performance is assessed. The normalization steps applied to tomato leaf images are shown in Figure 2.

Figure 2. Classifier model used.

3.4. Convolutional Neural Network

The CNN is a neural network technology widely employed today to process and train on image data. Convolution in matrix form is designed to filter the pictures. The Convolutional Neural Network is built from the following layers, each used for data training: an input layer, convolutional layers, pooling layers, fully connected layers, a dropout layer, and a final classification layer. Each layer maps a series of calculations onto its input. The complete architecture is shown in Figure 3, and a description of the model is given in Table 1.

Figure 3. CNN model architecture.

Table 1. Hyper-parameters of the deep neural network.

3.4.1. Convolutional Layer

A convolutional layer maps input characteristics to feature maps using the convolution procedure: each filter is convolved with every part of the input, generating a 2D feature map, and each feature map combines several input characteristics. Convolution can be defined as a two-function operation and constitutes the basis of CNNs; the convolutional layers also account for a significant share of the model's computational cost. For the input z of the i-th convolutional layer, the output is calculated as in Equation (1):

z_i = f(z_{i-1} × w_i)   (1)

where × is the convolution operation, f is the activation function, and w_i = [Q_i^1, Q_i^2, …, Q_i^J] is the set of the layer's convolution kernels, J being the number of kernels in the layer. Each kernel Q_i^j is a weight matrix of size K × K × L, where K is the window size and L is the number of input channels.
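To ground Equation (1), the following is a minimal PyTorch sketch of a single convolutional layer followed by an activation; the channel counts and kernel size (L = 3, J = 16, K = 3) are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
f = nn.ReLU()                            # activation function f in Eq. (1)
z_prev = torch.randn(1, 3, 256, 256)     # z_{i-1}: one 256 x 256 RGB image
z_i = f(conv(z_prev))                    # z_i = f(z_{i-1} * w_i), 16 feature maps
```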

3.4.2. Pooling Layer

The pooling layer reduces the size of the feature maps, and with it the number of parameters, while aiming to preserve precision. It subsamples the overall output of the convolution layer, reducing the number of training parameters by summarizing the spatial properties of an area with a single representative value, and it passes the pooled activations on to the subsequent layer in the chain. In the m-th max-pooled band, the J related filters are combined over a pooling window of R bands, where N ∈ {1, …, R} is the pooling shift, allowing overlap between pooling zones when N < R. Pooling reduces the output dimensionality from K convolution bands to M = (K − R)/N + 1 pooled bands, and the resulting layer is p = [p_1, …, p_M] ∈ R^(M·J). With a 2 × 2 window, for example, max pooling keeps the maximum of the four quadrants, whereas average pooling keeps their mean.
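As a quick check of the pooled-band count, the snippet below evaluates M = (K − R)/N + 1 for illustrative values (assuming K − R divides evenly by N).

```python
K, R, N = 10, 2, 2    # bands, pooling window size, pooling shift
M = (K - R) // N + 1  # number of pooled bands
print(M)              # 5
```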

3.4.3. Fully Connected Layer

Each layer in the fully connected network is connected to its previous and subsequent layers. The first fully connected layer is connected to every node in the last pooling layer's output. The parameters used in this part of the CNN model take more time because of the complex computation, which is the critical drawback of the fully connected layer. Reducing the number of nodes and links overcomes this limitation, and the dropout technique supplies exactly such deleted nodes and connections.

3.4.4. Dropout

Dropout is an approach in which randomly selected neurons are ignored during training; they are "dropped out" at random. This means that their contribution to the activation of downstream neurons is temporarily removed on the forward pass, and no weight updates are applied to them on the backward pass. Dropout thus avoids overfitting and speeds up the learning process. Overfitting occurs when most of the training data achieve an excellent score during training but a discrepancy appears in the prediction process. Dropout can be applied to neurons in both the hidden and visible layers of the network.
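A minimal PyTorch sketch of dropout in a classifier head follows; the layer sizes are illustrative, and the dropout probability of 0.5 is an assumption rather than a value stated in the paper.

```python
import torch.nn as nn

head = nn.Sequential(
    nn.Linear(4096, 1024),
    nn.ReLU(),
    nn.Dropout(p=0.5),    # each activation is zeroed with probability 0.5 in training
    nn.Linear(1024, 10),  # ten tomato-leaf classes
)
head.eval()  # dropout is disabled automatically at inference time
```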

Performance Evaluation Metrics. The accuracy, precision, recall, and F1-score measures are used to evaluate the model's performance. To avoid being misled by any single view of the confusion matrix, we applied all of the evaluation criteria below. The abbreviations TP, TN, FP, and FN stand for true positive, true negative, false positive, and false negative, respectively.

  • Accuracy. Accuracy (Acc) is the proportion of predictions classified correctly. It is calculated as Acc = (TP + TN) / (TP + TN + FP + FN).
  • Precision. Precision (Pre) is the proportion of positive predictions that are true positives. It is calculated as Pre = TP / (TP + FP).
  • Recall. Recall (Re) is the proportion of actual positives that are successfully detected. It is calculated as Re = TP / (TP + FN).
  • F1-Score. The F1-score is the harmonic mean of precision and recall: F1 = (2 × Pre × Re) / (Pre + Re).
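These definitions translate directly into code; the sketch below computes all four metrics from raw confusion-matrix counts (it assumes the denominators are non-zero).

```python
def metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    pre = tp / (tp + fp)
    rec = tp / (tp + fn)
    f1 = 2 * pre * rec / (pre + rec)  # harmonic mean of precision and recall
    return acc, pre, rec, f1

print(metrics(tp=90, tn=85, fp=10, fn=15))  # illustrative counts
```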

Proposed Algorithm: Steps involved in Disease Detection

  • Step 1: Input the color image I_RGB of the leaf procured from the PlantVillage dataset.
  • Step 2: Given I_RGB, generate the mask M_veq using CNN-based segmentation.
  • Step 3: Overlay I_RGB with M_veq to get M_mask.
  • Step 4: Divide the image M_mask into smaller square regions K_tiles.
  • Step 5: Classify the tiles K_tiles from M_mask as tomato leaf regions.
  • Step 6: Finally, K_tiles gives the leaf parts on which disease is detected.
  • Step 7: Stop.

Disease detection starts with the input image I_RGB from the multiclass dataset. From I_RGB, the mask M_veq is segmented using the CNN, and overlaying it yields M_mask. The masked image is divided into different regions K_tiles; the Region of Interest (RoI) is then selected from these tiles and used to detect the leaf disease.
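The tiling step (Steps 4–6) can be sketched as follows; this is an illustrative NumPy implementation under assumed names (masked_img, tile_size), not the authors' code, and the CNN segmentation and tile classification steps are omitted.

```python
import numpy as np

def to_tiles(masked_img: np.ndarray, tile_size: int = 64):
    """Split the masked leaf image M_mask into non-overlapping square tiles."""
    h, w = masked_img.shape[:2]
    tiles = []
    for y in range(0, h - tile_size + 1, tile_size):
        for x in range(0, w - tile_size + 1, tile_size):
            tiles.append(masked_img[y:y + tile_size, x:x + tile_size])
    return tiles  # K_tiles: candidate regions for disease classification
```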

4. Results Analysis and Discussion

The complete experiment was performed on Google Colab. The results of the proposed method are described for different numbers of epochs and learning rates in the following sub-sections.

This research used 50 epochs and 100 epochs for comparison, with a learning rate of 0.0001. Figure 4a shows the comparison between training and validation loss, and Figure 4b shows the comparison between training accuracy and validation accuracy.

Figure 4. (a) Training loss vs. validation loss (learning rate 0.0001, epoch 50). (b) Training accuracy vs. validation accuracy (learning rate 0.0001, epoch 50).

Figure 5a shows the comparison between training loss and validation loss, and Figure 5b shows training accuracy and validation accuracy. Figure 5b shows that an accuracy rate of 98.43% is achieved when training for 100 epochs at a learning rate of 0.0001. It is therefore reasonable to infer that more iterations result in higher accuracy. However, the training phase lengthens as the number of epochs increases.

Figure 5. (a) Training loss vs. validation loss (learning rate 0.0001, epoch 100). (b) Training accuracy vs. validation accuracy (learning rate 0.0001, epoch 100).

This assessment evaluates the role that the learning rate plays in training, since the learning rate is one of the training variables used to calculate the weight-correction value. This test is based on 50 and 100 epochs, with learning rates of 0.001 and 0.01 used for comparison. Figure 6a shows the comparison between training loss and validation loss, and Figure 6b shows training accuracy and validation accuracy.

Figure 6. (a) Training loss vs. validation loss (learning rate 0.001, epoch 50). (b) Training accuracy vs. validation accuracy (learning rate 0.001, epoch 50).

Figure 7a shows the comparison between training loss and validation loss, and Figure 7b shows training accuracy and validation accuracy; according to Figure 7b, an accuracy of 98.42% is reached at epoch 50 with a learning rate of 0.001. Furthermore, Figure 8a shows the comparison between training loss and validation loss, and Figure 8b shows training accuracy and validation accuracy, with Figure 8b showing that an accuracy of 98.52% is achieved. Figure 9a shows the training and validation losses, and Figure 9b shows training and validation accuracy, with an accuracy rate of 98.5% at 100 epochs and a 0.01 learning rate. Based on the assessment process used, a greater learning rate allows a more accurate evaluation of the data.

Figure 7. (a) Training loss vs. validation loss (learning rate 0.001, epoch 100). (b) Training accuracy vs. validation accuracy (learning rate 0.001, epoch 100).

Figure 8. (a) Training loss vs. validation loss (learning rate 0.01, epoch 50). (b) Training accuracy vs. validation accuracy (learning rate 0.01, epoch 50).

Figure 9. (a) Training loss vs. validation loss (learning rate 0.01, epoch 100). (b) Training accuracy vs. validation accuracy (learning rate 0.01, epoch 100).

As shown in Table 2, the accuracy rate depends on both the learning rate and the number of epochs: the larger the epoch count, the more precise the result. Table 2 reports the experiment performed by varying the epoch (two values) and the learning rate (three values); the detection accuracy obtained for each combination of these values is shown there.

Table 2. Test results.

The precision, recall, and F1-score of the model are shown in Figure 10a–c. The performance is primarily measured by accuracy, but precision, recall, and F1-score also contribute to it. These factors are computed for all the classes in the experiment performed, based on the true-positive, true-negative, false-positive, and false-negative values for each class. High precision indicates that accuracy will also be high, and a high recall value indicates that most of the relevant positives were retrieved. The F1-score represents the weighted harmonic mean of precision and recall.

Figure 10. (a) Precision; (b) Recall; (c) F1-Score.

In Table 3, the proposed approach's performance is compared to that of three standardized models. The results show that, by using segmentation and an added extra layer in the model, the proposed model outperforms the other classical models.

Table 3. Comparison with other models.

5. Conclusions and Future Scope

This article discussed a deep neural network model for detecting and classifying tomato plant leaf diseases into predefined categories, taking into account morphological traits such as the color, texture, and edges of the leaves. It introduced standard deep learning models along with their variants, and addressed biotic diseases caused by fungal and bacterial pathogens, specifically blight, blast, and browning of tomato leaves. The proposed model's detection rate was 98.49% accurate. On the same dataset, the proposed model was compared with VGG and ResNet variants; after analyzing the results, the proposed model outperformed the other models, offering a novel approach to identifying tomato disease. In the future, we will expand the model to include abiotic disorders caused by nutrient deficiencies in the crop leaf. Our long-term objective is to expand our data collection and accumulate a vast amount of data on several plant diseases, and we will apply subsequent technology to improve accuracy further.

Acknowledgments

This project was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah. The authors, therefore, gratefully acknowledge DSR’s technical and financial support.

Author Contributions

Conceptualization, N.K.T., V.G. and A.A.; methodology, H.M.A., S.G.V. and D.A.; validation, S.K. and N.G.; formal analysis, A.A. and N.G.; investigation, N.K.T. and V.G.; resources, A.A.; data curation, S.G.V. and H.M.A.; writing—original draft, N.K.T., V.G. and A.A.; writing—review and editing, S.K., H.M.A. and N.G.; supervision, S.G.V. and D.A.; project administration, H.M.A. and S.G.V. All authors have read and agreed to the published version of the manuscript.

Funding

This project was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under grant No. (D438-830-1442). The authors, therefore, gratefully acknowledge DSR’s technical and financial support.

Institutional Review Board Statement

Not Applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Conflicts of Interest

The authors declare no conflict of interest.

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
