Automated detection and counting of Artemia using U-shaped fully convolutional networks and deep convolutional networks
Wang, G.; Van Stappen, G.; De Baets, B. (2021). Automated detection and counting of Artemia using U-shaped fully convolutional networks and deep convolutional networks. Exp. Syst. Appl. 171: 114562. https://hdl.handle.net/10.1016/j.eswa.2021.114562
In: Expert Systems With Applications. Elsevier: New York; Oxford. ISSN 0957-4174; e-ISSN 1873-6793.
Author keywords
Object detection; Target classification; Artemia detection and counting; Marker proposal network; U-shaped fully convolutional network; Deep convolutional network
Abstract
The brine shrimp Artemia is a widely used, cost-effective diet in aquaculture. In many Artemia studies, e.g., in the quality assessment of Artemia hatching, an automated method for detecting and counting Artemia objects in images would be highly desirable, yet few such methods exist in the literature. Moreover, separating Artemia objects that are highly adjacent is very challenging. In this paper, we propose a two-stage method for Artemia detection and counting that combines a target marker proposal network with a target classification network. In the first stage, the marker proposal network is implemented using U-shaped fully convolutional networks; this module simultaneously indicates target candidates, separates adjacent objects and captures object structural information. In the second stage, using deep convolutional networks, we design a classifier that assigns each target candidate to a category or labels it as a non-target, thereby yielding the Artemia detection and counting results. Furthermore, an Artemia detection and counting dataset is collected to train and test the proposed method. Experimental results confirm that the proposed method can accurately detect and count Artemia objects with high degrees of adjacency in images, outperforming both an ad hoc method based on hand-crafted features and the state-of-the-art YOLO-v3 method.
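The record contains no code, but the two-stage pipeline the abstract describes can be sketched as follows. This is a minimal, hypothetical PyTorch sketch, not the authors' implementation: the module names (MarkerProposalNet, CandidateClassifier), the layer widths, the single-channel grayscale input, and the number of target categories are all illustrative assumptions.

# Hypothetical sketch of the two-stage pipeline described in the abstract
# (assumed architecture; not the authors' code).
# Stage 1: a small U-shaped fully convolutional network proposes object markers.
# Stage 2: a compact deep CNN classifies each candidate into a category or non-target.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class MarkerProposalNet(nn.Module):
    """Stage 1: U-shaped FCN emitting a per-pixel marker probability map.
    Assumes input height and width are divisible by 4 (two pooling steps)."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottom = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.head = nn.Conv2d(16, 1, 1)  # single-channel marker map

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottom(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return torch.sigmoid(self.head(d1))

class CandidateClassifier(nn.Module):
    """Stage 2: CNN classifying candidate patches; the last class is 'non-target'.
    The class count is an illustrative assumption."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, num_classes)

    def forward(self, patch):
        return self.fc(self.features(patch).flatten(1))

Under these assumptions, counting reduces to thresholding the marker map, treating connected components as candidate markers (which is what separates highly adjacent objects), cropping a patch around each marker, classifying it, and tallying all candidates not labeled non-target per category.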