Abstract
Purpose: Breast cancer ranks first among cancers affecting women's health. Our goal is to
develop a fast, high-precision, and fully automated breast cancer detection algorithm to improve the
rate of early detection.
Methods: We compare different object detection algorithms, both anchor-based and anchor-free,
for detecting breast lesions. We find that fully convolutional one-stage object detection (FCOS),
an anchor-free algorithm, shows the best performance in detecting breast lesions. We then improve
FCOS in two ways. 1) Because detecting breast lesions requires contextual information from the
ultrasound image, we add a non-local module, which models long-range dependencies between pixels,
to the FCOS algorithm, providing global context for the detection of breast lesions. 2) Because the
variety of shapes and sizes of breast lesions makes detection difficult, we propose a new deformable
spatial attention (DSA) module and add it to the FCOS algorithm.
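For readers unfamiliar with the non-local operation mentioned above, the embedded-Gaussian form can be sketched as follows. This is a minimal NumPy illustration of the general technique, not the paper's implementation; the channel sizes, weight matrices, and function names are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def non_local_block(x, w_theta, w_phi, w_g, w_out):
    """Embedded-Gaussian non-local operation (illustrative sketch).

    x: feature map flattened to shape (N, C), where N = H*W spatial
    positions. Each output position aggregates features from *all*
    positions, weighted by pairwise similarity -- the long-range
    dependency that supplies global context.
    """
    theta = x @ w_theta              # queries, shape (N, C')
    phi = x @ w_phi                  # keys,    shape (N, C')
    g = x @ w_g                      # values,  shape (N, C')
    attn = softmax(theta @ phi.T)    # (N, N) pairwise weights
    y = attn @ g                     # global context per position
    return x + y @ w_out             # residual connection

# Toy example: 16 spatial positions, 8 channels (shapes are arbitrary).
rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))
w_theta, w_phi, w_g = (rng.standard_normal((8, 4)) * 0.1 for _ in range(3))
w_out = rng.standard_normal((4, 8)) * 0.1
y = non_local_block(x, w_theta, w_phi, w_g, w_out)
print(y.shape)  # (16, 8) -- same shape as the input, as the residual form requires
```

Because the attention matrix is N × N, every output position can attend to every other position in a single step, unlike a convolution, whose receptive field grows only gradually with depth.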
Results: With the original FCOS, the average precision (AP) is 0.818 for benign lesions and 0.888
for malignant lesions. Adding the non-local module improves breast lesion detection: the AP is 0.819
for benign lesions and 0.894 for malignant lesions. Combining the DSA module with FCOS improves
detection further: the AP is 0.840 for benign lesions and 0.899 for malignant lesions.
Conclusion: We propose two methods that improve the FCOS algorithm, from different perspectives, for
detecting breast lesions. We find that combining FCOS with the DSA module improves the localization
and classification of breast tumors and can provide auxiliary diagnostic advice to ultrasound
physicians, giving it clinical application value.
Keywords:
Object detection algorithm, breast cancer, non-local module, deformable spatial attention, FCOS, ultrasound.
Graphical Abstract