Citation: QIAN Cheng, ZHANG Jiapeng, TU Xueying, LIU Huang, QIAO Gan, LIU Shijing. Turbot fish egg recognition and counting method based on CBAM-UNet[J]. South China Fisheries Science, 2024, 20(6): 132-144. DOI: 10.12131/20240123
Accurately counting turbot (Scophthalmus maximus) eggs is a crucial step in the seedling selection process. Because turbot eggs are small, highly transparent, and prone to adhesion, manual counting is inefficient and error-prone. To count turbot eggs automatically, rapidly, and accurately, a counting method based on a U-shaped convolutional neural network with a convolutional block attention module (CBAM-UNet) is proposed. First, to suit the imaging characteristics of turbot eggs, a standardized egg sampling rig consisting of an industrial camera, a diffuse-reflection light source, and an image acquisition box was designed and built to capture shadow-free, high-definition egg images and to construct an egg image sample set. Second, with the UNet network as the base semantic segmentation model, a dual (channel and spatial) attention mechanism was introduced to sharpen the segmentation of egg boundaries and fine details against the background, improving the model's ability to represent egg features and its segmentation accuracy. Finally, a multiple linear regression model relating the segmented egg area, the shooting height, and the number of eggs was fitted to obtain accurate counts. Experimental results show that the proposed CBAM-UNet recognition and counting method effectively improves counting accuracy, with an average counting error of 6.32%, lower than that of other models and of the manual mass-comparison (weighing) method.
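As a rough illustration of the dual attention step described above, the sketch below implements a CBAM-style block (channel attention followed by spatial attention) that could be appended to a UNet convolution stage. The TensorFlow/Keras framework, the `cbam_block` name, the reduction ratio of 16, and the 7×7 spatial kernel are assumptions made for illustration; the abstract does not specify these implementation details.

```python
import tensorflow as tf
from tensorflow.keras import layers

def cbam_block(x, reduction=16):
    """Channel attention followed by spatial attention on a feature map x of shape (B, H, W, C)."""
    channels = x.shape[-1]

    # Channel attention: a shared two-layer MLP over global average- and max-pooled descriptors
    avg_pool = tf.reduce_mean(x, axis=[1, 2], keepdims=True)   # (B, 1, 1, C)
    max_pool = tf.reduce_max(x, axis=[1, 2], keepdims=True)
    shared_mlp = tf.keras.Sequential([
        layers.Dense(channels // reduction, activation="relu"),
        layers.Dense(channels),
    ])
    channel_att = tf.sigmoid(shared_mlp(avg_pool) + shared_mlp(max_pool))
    x = x * channel_att

    # Spatial attention: a 7x7 convolution over the channel-wise average and max maps
    avg_map = tf.reduce_mean(x, axis=-1, keepdims=True)         # (B, H, W, 1)
    max_map = tf.reduce_max(x, axis=-1, keepdims=True)
    spatial_att = layers.Conv2D(1, kernel_size=7, padding="same",
                                activation="sigmoid")(tf.concat([avg_map, max_map], axis=-1))
    return x * spatial_att
```

The counting step can likewise be pictured as an ordinary least-squares fit of a multiple linear regression. The sketch below uses hypothetical variable names and assumes the form count ≈ b0 + b1·area + b2·height, since the abstract states only that a multiple linear regression was built on the segmented egg area, the shooting height, and the egg count.

```python
import numpy as np

def fit_count_model(areas, heights, counts):
    """Least-squares fit of count ≈ b0 + b1*area + b2*height (illustrative only)."""
    X = np.column_stack([np.ones(len(areas)),
                         np.asarray(areas, dtype=float),
                         np.asarray(heights, dtype=float)])
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(counts, dtype=float), rcond=None)
    return coeffs

def predict_count(coeffs, area, height):
    """Predict an egg count from a segmented egg area and shooting height."""
    b0, b1, b2 = coeffs
    return b0 + b1 * area + b2 * height
```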