DOI 10.17586/0021-3454-2024-67-9-767-775
UDC 004.8.81
HANDWRITTEN TEXT RECOGNITION OF HISTORICAL DOCUMENTS USING DEEP NEURAL NETWORK TECHNOLOGIES
A. M. Unterberg
Siberian Federal University, Institute of Space and Information Technologies, Department of Artificial Intelligence Systems;
A. V. Pyataeva
Siberian Federal University, Institute of Space and Information Technologies, Department of Artificial Intelligence Systems;
S. S. Zamyslova
Siberian Federal University, Institute of Space and Information Technologies, Department of Artificial Intelligence Systems;
E. D. Rukosueva
Siberian Federal University, Institute of Space and Information Technologies, Department of Artificial Intelligence Systems;
K. V. Bogdanov
Siberian Federal University, Institute of Space and Information Technologies, Department of Software Engineering;

Reference for citation: Unterberg A. M., Pyataeva A. V., Zamyslova S. S., Rukosueva E. D., Bogdanov K. V. Handwritten text recognition of historical documents using deep neural network technologies. Journal of Instrument Engineering. 2024.
Vol. 67, N 9. P. 767–775 (in Russian). DOI: 10.17586/0021-3454-2024-67-9-767-775
Abstract. The application of deep neural network technologies to the problem of handwritten text recognition in pre-reform Russian orthography is considered. The initial data are scanned JPG images of 19th-century historical documents, which contain various kinds of noise and interference that complicate the work of the recognition algorithm. Text recognition is performed in three stages: noise removal; segmentation (extraction) of text lines in the image, since the deep neural network takes individual lines as input; and recognition of the text of the extracted lines using a pre-trained Tesseract OCR model, which converts images of handwritten or printed text into electronic text data. The model used is a convolutional recurrent neural network: a combination of a convolutional neural network that extracts local features from the image and a recurrent neural network, represented by two bidirectional LSTM layers, that processes the resulting sequence. Using this model allows handwritten text to be recognized reliably.
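As an illustration of the three-stage pipeline described above (noise removal, line segmentation, line-level recognition), a minimal Python sketch using OpenCV and the pytesseract wrapper for Tesseract OCR is given below. The projection-profile segmentation, the denoising parameters, and the "rus" language model are illustrative assumptions rather than the authors' exact implementation; pre-reform orthography would in practice require a Tesseract model fine-tuned on such handwriting.

```python
import cv2
import pytesseract


def segment_lines(binary, min_height=10):
    """Split a binarized page into text-line images using the
    horizontal projection profile (rows that contain ink)."""
    ink_per_row = (binary == 0).sum(axis=1)            # black pixels per row
    has_text = ink_per_row > 0.01 * binary.shape[1]
    lines, start = [], None
    for y, flag in enumerate(has_text):
        if flag and start is None:
            start = y
        elif not flag and start is not None:
            if y - start >= min_height:
                lines.append(binary[start:y])
            start = None
    if start is not None:
        lines.append(binary[start:])
    return lines


def recognize_page(path, lang="rus"):
    """Denoise, binarize, segment into lines, and recognize each line."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    denoised = cv2.fastNlMeansDenoising(gray, None, h=30)           # stage 1: noise removal
    _, binary = cv2.threshold(denoised, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    recognized = []
    for line_img in segment_lines(binary):                          # stage 2: line segmentation
        text = pytesseract.image_to_string(line_img, lang=lang,
                                           config="--psm 7")        # stage 3: psm 7 = single text line
        recognized.append(text.strip())
    return "\n".join(recognized)


if __name__ == "__main__":
    print(recognize_page("scan_page_001.jpg"))     # hypothetical input file
```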
Keywords: neural networks, natural language processing, historical documents, deep learning, Tesseract OCR library
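For clarity, a schematic sketch of the convolutional recurrent architecture mentioned in the abstract (a CNN feature extractor followed by two bidirectional LSTM layers) is shown below in PyTorch. It is a simplified stand-in for Tesseract's internal LSTM recognizer, not its actual implementation; the layer sizes and the per-step output intended for CTC decoding are assumptions.

```python
import torch.nn as nn


class CRNN(nn.Module):
    """CNN feature extractor + two bidirectional LSTM layers over the width axis."""

    def __init__(self, num_classes, img_height=32):
        super().__init__()
        # Convolutional part: grayscale line image -> local feature maps
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
        )
        feat_dim = 128 * (img_height // 4)
        # Recurrent part: two stacked bidirectional LSTM layers over the horizontal sequence
        self.rnn = nn.LSTM(feat_dim, 256, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * 256, num_classes)    # per-step character scores

    def forward(self, x):                  # x: (batch, 1, img_height, width)
        f = self.cnn(x)                    # (batch, 128, H/4, W/4)
        b, c, h, w = f.shape
        seq = f.permute(0, 3, 1, 2).reshape(b, w, c * h)   # width becomes the time axis
        out, _ = self.rnn(seq)
        return self.fc(out)                # (batch, W/4, num_classes), e.g. for CTC decoding
```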