Acta Entomologica Sinica, 2021, Vol. 64, Issue (12): 1444-1454. doi: 10.16380/j.kcxb.2021.12.010

• Research Papers •

An automatic identification and counting method of Spodoptera frugiperda (Lepidoptera: Noctuidae) adults based on sex pheromone trapping and deep learning

QIU Rong-Zhou1, ZHAO Jian2, HE Yu-Xian1, CHEN Shao-Ping1, HUANG Mei-Ling3, CHI Mei-Xiang1, LIANG Yong2, WENG Qi-Yong1,*   

  (1. Fujian Key Laboratory for Monitoring and Integrated Management of Crop Pests, Institute of Plant Protection, Fujian Academy of Agricultural Sciences, Fuzhou 350013, China; 2. Institute of Digital Agriculture, Fujian Academy of Agricultural Sciences, Fuzhou 350003, China; 3. Plant Protection Station of Changtai District of Zhangzhou City, Zhangzhou, Fujian 363900, China)
  • Online: 2021-12-20    Published: 2021-11-26

Abstract: 【Aim】 To explore the feasibility of deep learning for the automatic recognition and counting of Spodoptera frugiperda adults, and to evaluate the recognition and counting accuracy of the resulting models, so as to provide an image recognition and counting method for machine-based intelligent pest monitoring. 【Methods】 A self-designed pest image monitoring device based on sex pheromone trapping was used to collect images of trapped S. frugiperda adults automatically at fixed intervals. Combined with images of S. frugiperda adults collected from the sticky boards of ship-shaped traps, these images were used to construct a dataset. The YOLOv5 deep learning object detection model was used for feature learning. Four models, Yolov5s-A1, Yolov5s-A2, Yolov5s-AB and Yolov5s-ABC, were obtained by training on differently processed datasets: the original images of S. frugiperda adults; images with incomplete objects at the image edges removed; images with a similar detection target (Spodoptera litura adults) added; and images with negative samples containing no detection objects further added. The detection results of the models on test samples with different degrees of occlusion were compared, and precision (P), recall (R), F1-measure, average precision (AP) and counting accuracy (CA) were used to evaluate the differences among the models. 【Results】 The recognition precision, recall and F1-measure of Yolov5s-A1, trained on the original image set, were 87.37%, 90.24% and 88.78, respectively. Yolov5s-A2, trained on images with incomplete edge objects removed, had a recognition precision of 93.15%, a recall of 84.77% and an F1-measure of 88.76. Yolov5s-AB, trained with added S. litura adult images, reached a recognition precision of 96.23%, a recall of 91.85% and an F1-measure of 93.99. Yolov5s-ABC, trained with both added S. litura adult images and negative samples without detection objects, had a recognition precision of 94.76%, a recall of 88.23% and an F1-measure of 91.38. The AP values of the four models, from high to low, ranked as Yolov5s-AB > Yolov5s-ABC > Yolov5s-A2 > Yolov5s-A1, with Yolov5s-AB and Yolov5s-ABC giving similar results; the CA values ranked in the same order: Yolov5s-AB > Yolov5s-ABC > Yolov5s-A2 > Yolov5s-A1. 【Conclusion】 The results show that the proposed method is applicable to the recognition and counting of S. frugiperda adults both on the pest image monitoring device and on the sticky boards of traps under controlled conditions, and that deep learning is effective for the identification and counting of S. frugiperda adults. The deep learning-based automatic recognition and counting method is robust to changes in insect body posture, interference from debris and similar factors, and can automatically count S. frugiperda adults across various body postures and damaged specimens. It therefore has broad application prospects in pest population monitoring.
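For readers who want to see how the reported evaluation metrics fit together, the following minimal Python sketch computes precision, recall, F1 and a counting-accuracy score from per-image detection results. It is not the authors' implementation: the class and function names, the assumption that detections have already been matched to ground-truth boxes (e.g. by greedy IoU >= 0.5 matching), and the counting-accuracy formula CA = 1 - |N_pred - N_true| / N_true are illustrative assumptions, since the paper does not publish its code.

```python
# Minimal sketch (not the authors' code): computing precision, recall,
# F1 and a counting-accuracy score from per-image detection results.
# Assumptions: detections are already matched to ground-truth boxes,
# and CA is defined as 1 - |N_pred - N_true| / N_true per image.

from dataclasses import dataclass


@dataclass
class ImageResult:
    true_positives: int   # detections matched to a ground-truth S. frugiperda adult
    false_positives: int  # detections with no matching ground-truth box
    false_negatives: int  # ground-truth adults the model missed


def precision_recall_f1(results):
    # Pool TP/FP/FN over all test images, then apply the standard formulas.
    tp = sum(r.true_positives for r in results)
    fp = sum(r.false_positives for r in results)
    fn = sum(r.false_negatives for r in results)
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1


def counting_accuracy(results):
    # One hedged definition of CA: 1 minus the relative counting error,
    # averaged over images that contain at least one target.
    scores = []
    for res in results:
        n_true = res.true_positives + res.false_negatives
        n_pred = res.true_positives + res.false_positives
        if n_true == 0:
            continue
        scores.append(1.0 - abs(n_pred - n_true) / n_true)
    return sum(scores) / len(scores) if scores else 0.0


if __name__ == "__main__":
    # Toy example with three trap images (counts are made up for illustration).
    demo = [ImageResult(12, 1, 2), ImageResult(30, 2, 1), ImageResult(7, 0, 0)]
    p, r, f1 = precision_recall_f1(demo)
    ca = counting_accuracy(demo)
    print(f"P={p:.2%}  R={r:.2%}  F1={100 * f1:.2f}  CA={ca:.2%}")
```

Computing AP would additionally require ranking detections by confidence and integrating precision over recall, as in the standard PASCAL VOC-style protocol; that step is omitted here for brevity.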

Key words: Spodoptera frugiperda, machine vision, deep learning, YOLO algorithm, population monitoring, image recognition, automatic counting