Recognition of Motion-blurred CCTs based on Deep and Transfer Learning

Authors

  • Yun Shi, College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing
  • Yanyan Zhu

DOI:

https://doi.org/10.4114/intartif.vol23iss66pp1-8

Keywords:

Chinese character coded target (CCT), deep learning, image recognition, motion blur, transfer learning

Abstract

To address the need for a large number of samples and the long training times of conventional approaches, this paper applies deep and transfer learning to the recognition of motion-blurred Chinese character coded targets (CCTs). First, a set of CCTs is designed, and a motion-blur image generation system provides samples for the recognition network. Second, the Otsu algorithm, dilation, and the Canny operator are applied to the real-shot blurred images, and the target area is segmented with a minimum bounding box. Third, the samples are split into a training set and a test set at a 4:1 ratio. Under the TensorFlow framework, the convolutional layers of the AlexNet model are frozen, and the fully-connected layers are retrained for transfer learning. Finally, experiments are carried out on both simulated and real-shot motion-blurred images. The results show that network training and testing take only 30 minutes and two seconds, respectively, and that the recognition accuracy reaches 98.6% on simulated images and 93.58% on real ones. Our method therefore achieves high recognition accuracy without requiring a large number of training samples or long training times, and can serve as a reference for the recognition of motion-blurred CCTs.
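The target-segmentation step described in the abstract (Otsu thresholding followed by extraction of the target area with a minimum bounding box) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names and the synthetic image are hypothetical, and the dilation/Canny edge stage of the paper's pipeline is omitted for brevity.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold for an 8-bit grayscale image.

    Otsu's method picks the threshold that maximizes the
    between-class variance of the foreground/background split.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    probs = hist / hist.sum()
    cum_w = np.cumsum(probs)                       # class-0 weight up to t
    cum_mean = np.cumsum(probs * np.arange(256))   # cumulative mean
    global_mean = cum_mean[-1]

    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0 = cum_w[t - 1]
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = cum_mean[t - 1] / w0
        mu1 = (global_mean - cum_mean[t - 1]) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def min_bounding_box(mask):
    """Axis-aligned minimum bounding box (x, y, w, h) of a binary mask."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return (int(xs.min()), int(ys.min()),
            int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))

# Synthetic example: a bright 50x40 target on a dark background.
gray = np.full((100, 100), 30, dtype=np.uint8)
gray[20:60, 30:80] = 200
t = otsu_threshold(gray)
x, y, w, h = min_bounding_box(gray > t)   # crop region for the recognizer
```

In the paper's pipeline the cropped region would then be resized to AlexNet's input size and fed to the network, whose convolutional layers stay frozen while only the fully-connected layers are fine-tuned.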

Published

2020-08-27

How to Cite

Shi, Y., & Zhu, Y. (2020). Recognition of Motion-blurred CCTs based on Deep and Transfer Learning. Inteligencia Artificial, 23(66), 1-8. https://doi.org/10.4114/intartif.vol23iss66pp1-8