Acta Entomologica Sinica (昆虫学报) ›› 2021, Vol. 64 ›› Issue (5): 611-617. doi: 10.16380/j.kcxb.2021.05.008

• Research Article •


F3Net-based salient object detection for automatic foreground-background segmentation of butterfly images

HUANG Shi-Guo1,2, HONG Ming-Lin2, ZHANG Fei-Ping1, HE Hai-Yang2, CHEN Yi-Qiang2, LI Xiao-Lin2,*   

  (1. Key Laboratory of Integrated Pest Management in Ecological Forests, Fujian Province University, Fujian Agriculture and Forestry University, Fuzhou 350002, China; 2. Key Laboratory of Smart Agriculture and Forestry, Fujian Province University, Fujian Agriculture and Forestry University, Fuzhou 350002, China)
  • Online: 2021-05-20  Published: 2021-05-31


Abstract: 【Aim】 Foreground-background segmentation of butterfly images with complex backgrounds is difficult. This study aims to explore an automatic foreground-background segmentation method based on deep-learning salient object detection. 【Methods】 The F3Net salient object detection algorithm was trained on the DUTS-TR dataset to obtain a foreground-background prediction model. The model was then applied to a dataset of butterfly images with complex backgrounds to perform automatic foreground-background segmentation. To further improve segmentation accuracy, transfer learning was applied: the ResNet backbone was kept unchanged, and the network was retrained on butterfly images and their foreground masks through the cross feature module, the cascaded feedback decoder, and the pixel position-aware loss to optimize the model parameters, yielding a better automatic segmentation model. Five other deep-learning-based salient object detection algorithms were also applied to automatic segmentation and compared with F3Net in performance. 【Results】 All algorithms achieved good foreground-background segmentation of butterfly images. Among them, F3Net performed best, with values of 0.940, 0.945, 0.938, 0.024, 0.929, 0.978 and 0.909 for the seven indexes S-measure, E-measure, F-measure, mean absolute error (MAE), precision, recall and mean IoU, respectively. Transfer learning further improved these indexes of F3Net to 0.961, 0.964, 0.963, 0.013, 0.965, 0.967 and 0.938, respectively. 【Conclusion】 The experimental results show that F3Net with transfer learning is the best segmentation algorithm among those tested. The method developed in this study can be applied to automatic segmentation of insect images taken in field surveys and extends the application range of salient object detection methods.
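The per-pixel metrics reported above (MAE, precision, recall, IoU, F-measure) are standard for comparing a predicted binary mask against a ground-truth mask. A minimal pure-Python sketch of how they can be computed (illustrative only; function names and the beta^2 = 0.3 weighting commonly used in salient object detection are our assumptions, not details from the paper):

```python
def confusion(pred, gt):
    """Count true/false positives and false negatives over flat binary masks."""
    tp = sum(1 for p, g in zip(pred, gt) if p and g)
    fp = sum(1 for p, g in zip(pred, gt) if p and not g)
    fn = sum(1 for p, g in zip(pred, gt) if not p and g)
    return tp, fp, fn

def mae(pred, gt):
    """Mean absolute error between masks (lower is better)."""
    return sum(abs(p - g) for p, g in zip(pred, gt)) / len(gt)

def iou(pred, gt):
    """Intersection over union of the foreground regions."""
    tp, fp, fn = confusion(pred, gt)
    return tp / (tp + fp + fn)

def f_measure(pred, gt, beta2=0.3):
    """Weighted F-measure; beta^2 = 0.3 is a common choice in SOD benchmarks."""
    tp, fp, fn = confusion(pred, gt)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)

# Toy example: 6-pixel masks (1 = butterfly foreground, 0 = background).
pred = [1, 1, 0, 0, 1, 0]
gt   = [1, 0, 0, 0, 1, 1]
```

In practice these would be applied to each predicted saliency map after thresholding it to a binary mask, then averaged over the test set.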

Key words: Butterfly, salient object detection, deep learning, image segmentation, automatic segmentation, F3Net