Turbot fish egg recognition and counting method based on CBAM-UNet

Abstract: Accurate counting of turbot (Scophthalmus maximus) eggs is a key factor in the seedling selection process. Because turbot eggs are tiny, highly transparent, and prone to adhesion, manual counting is inefficient and error-prone. To achieve automated, rapid, and accurate counting, a turbot egg counting method based on a convolutional block attention module and a U-shaped convolutional neural network (CBAM-UNet) is proposed, designed around the imaging characteristics of turbot eggs. First, a standardized egg-sampling setup consisting of an industrial camera, a diffuse-reflection light source, and an image acquisition box was built to capture shadow-free, high-definition egg images and to construct an egg image sample set. Then, with the UNet network as the baseline semantic segmentation model, a dual (channel and spatial) attention mechanism was introduced to address the difficulty of separating egg boundaries and fine details from the background, strengthening the model's representation of egg features and improving segmentation accuracy. Finally, a multiple linear regression model relating the segmented egg area, the shooting height, and the egg count was constructed to achieve accurate counting. Experimental results show that the proposed CBAM-UNet-based recognition and counting method effectively improves the counting accuracy of turbot eggs, with an average counting error of 6.32%, lower than that of other models and of the manual statistical method (mass comparison method).
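The abstract describes inserting a CBAM-style dual attention mechanism (channel attention followed by spatial attention) into the UNet segmentation backbone. The sketch below is a minimal, generic CBAM block in PyTorch under assumed hyperparameters (reduction ratio 16, 7×7 spatial convolution); the paper's exact placement within the UNet encoder/decoder stages and its settings may differ.

```python
# Minimal CBAM sketch: channel attention followed by spatial attention.
# Hyperparameters and the example feature-map size are illustrative assumptions.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average-pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max-pooling branch
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * scale

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)    # channel-wise mean map
        mx, _ = x.max(dim=1, keepdim=True)   # channel-wise max map
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale

class CBAM(nn.Module):
    """Refines a feature map with channel attention, then spatial attention."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x):
        return self.sa(self.ca(x))

# Example: refine an encoder feature map before it enters a UNet skip connection.
features = torch.randn(1, 64, 128, 128)
refined = CBAM(64)(features)
```

In this arrangement the attention block is a drop-in module, so it can be applied to any UNet stage without changing the surrounding convolution layers.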
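For the final counting step, the abstract states that egg count is regressed on the segmented egg area and the shooting height. The sketch below shows one way such a multiple linear regression could be fitted; the predictors follow the abstract, but the calibration samples and units are invented placeholders, not data from the paper.

```python
# Hedged sketch of the counting regression: count ~ segmented area + shooting height.
# All numeric values below are placeholders for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical calibration samples: [segmented egg area in pixels, shooting height in cm]
X = np.array([
    [12500, 20],
    [25300, 20],
    [11800, 25],
    [24100, 25],
])
y = np.array([48, 97, 60, 122])  # manually verified egg counts for each sample

model = LinearRegression().fit(X, y)

# Predict the egg count for a new image from its segmentation result.
new_sample = np.array([[18700, 22]])
print(round(float(model.predict(new_sample)[0])))
```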
