ISSN 0021-3454 (print version)
ISSN 2500-0381 (online version)
Summaries of the issue

INFORMATICS AND INFORMATION PROCESSES

731
An approach to the implementation of an algorithm for identifying the emotional state of a person using convolutional neural networks is presented. Based on the general concept of the research, a variant of complicating the hierarchy of identifiable emotions is considered. A comparative analysis of the windowed Fourier transform and the MFCC algorithm as tools for processing the input data is carried out. The proposed complication of the method is treated as a logical transition from a simpler mathematical apparatus, the windowed Fourier transform, to mel-frequency cepstral coefficients. This made it possible to form a more informative input data set without complicating the neural network architecture; after the research methodology was adjusted, identification accuracy close to 100% was achieved on an idealized database. The rationale for using Deep Network Designer as a tool for creating the neural network architecture is given.
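As a minimal illustration of the two feature-extraction options compared in the abstract, the sketch below computes a windowed-Fourier (STFT) log-spectrogram and an MFCC feature map from a speech fragment. The librosa library and all parameter values are assumptions made for illustration; the paper itself uses MATLAB's Deep Network Designer toolchain.

```python
# Illustrative sketch (not the authors' code): extracting windowed-Fourier (STFT)
# and MFCC feature maps as alternative inputs for a convolutional neural network.
# librosa and the chosen parameter values are assumptions for illustration only.
import numpy as np
import librosa

def stft_features(y, sr, n_fft=512, hop=128):
    # Windowed Fourier transform: log-magnitude spectrogram.
    spec = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop))
    return librosa.amplitude_to_db(spec, ref=np.max)

def mfcc_features(y, sr, n_mfcc=13, n_fft=512, hop=128):
    # Mel-frequency cepstral coefficients: a more compact, perceptually
    # motivated representation fed to the same CNN architecture.
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=n_fft, hop_length=hop)

if __name__ == "__main__":
    y, sr = librosa.load(librosa.example("trumpet"), duration=2.0)
    print(stft_features(y, sr).shape, mfcc_features(y, sr).shape)
```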

SYSTEM ANALYSIS, CONTROL, AND INFORMATION PROCESSING

741
A method for dynamically updating a formal model of parallel processes, intended for debugging and verification of microcontroller software during field testing, is considered. The proposed method is based on process mining techniques and differs from previous approaches in that it allows the observed behavior of the system to be recorded in a formal model and this model to be updated in real time during system operation. This approach makes it possible to significantly reduce the memory costs of event logging, preserve the cause-and-effect relationships between events, monitor the system in cases where access to it is limited for a long time, and build process models for distributed systems in real time. The method, embodied as a C library, is implemented as a set of pre-prepared tables representing a dynamically updated model of the system processes in the form of an event graph whose frequency characteristics are updated as information about events in the system is received. A formula for estimating the resources required on target platforms is given, along with instructions for using the developed toolkit.
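The sketch below conveys the general idea of a frequency-annotated event graph that is updated online as events arrive, in the spirit of process mining. It is a simplified Python illustration, not the authors' C library; the event names, the per-process bookkeeping, and the update policy are assumptions.

```python
# Illustrative sketch (not the authors' C library): a dynamically updated
# directly-follows event graph with frequency counts, built as events arrive.
from collections import defaultdict

class EventGraph:
    def __init__(self):
        self.edge_count = defaultdict(int)   # (prev_event, event) -> frequency
        self.last_event = {}                 # process id -> last observed event

    def record(self, pid, event):
        """Update the model in real time as an event arrives from process pid."""
        prev = self.last_event.get(pid)
        if prev is not None:
            self.edge_count[(prev, event)] += 1
        self.last_event[pid] = event

    def dump(self):
        for (a, b), n in sorted(self.edge_count.items()):
            print(f"{a} -> {b}: {n}")

if __name__ == "__main__":
    g = EventGraph()
    for pid, ev in [(1, "init"), (1, "read"), (2, "init"), (1, "write"), (2, "read")]:
        g.record(pid, ev)
    g.dump()
```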
751
The use of intervals of equal length or intervals of equal probability when applying χ²-type criteria is discussed. Intervals of equal probability are predetermined by the distribution law being tested. An initial sample formed from real production data is often already grouped, with grouping boundaries that are predetermined and unchangeable in production, and may not satisfy the recommendations for applying χ²-type criteria. A method is proposed for constructing a set of optimal grouping intervals by merging some of the intervals present in the initial sample. An optimal set of such intervals is understood as the set whose weighted hit frequencies have the smallest squared deviation from a discrete uniform distribution; this makes it possible to keep the same set of intervals when the tested distribution law is changed and to solve the problem of choosing the optimal number of intervals automatically. Some properties of such sets are listed, situations arising during their construction are considered, and an example of forming such an optimal set is given.
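A minimal brute-force sketch of this idea, under the assumption that the objective is simply the sum of squared deviations of relative hit frequencies from a discrete uniform distribution, is given below. The frequencies, the minimum number of merged intervals, and the exhaustive search itself are illustrative choices, not the authors' procedure.

```python
# Brute-force illustration (an assumption, not the authors' algorithm): merge
# adjacent fixed grouping intervals so that the relative hit frequencies of the
# merged intervals deviate least, in the squared sense, from a discrete uniform
# distribution; the number of merged intervals k is chosen automatically.
from itertools import combinations

def best_merge(freqs, k_min=3):
    n = sum(freqs)
    m = len(freqs)
    best = None
    for k in range(k_min, m + 1):
        # choose k-1 cut points between the m original intervals
        for cuts in combinations(range(1, m), k - 1):
            bounds = (0,) + cuts + (m,)
            groups = [sum(freqs[a:b]) for a, b in zip(bounds, bounds[1:])]
            dev = sum((g / n - 1.0 / k) ** 2 for g in groups)
            if best is None or dev < best[0]:
                best = (dev, bounds)
    return best

if __name__ == "__main__":
    freqs = [2, 5, 9, 14, 18, 15, 10, 6, 3, 1]   # hypothetical grouped sample
    dev, bounds = best_merge(freqs)
    print("cut points between original intervals:", bounds, "deviation:", dev)
```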
759
For multi-user systems with random multiple access to a common communication channel built on the basis of the ALOHA algorithm, a method for organizing conflict resolution algorithms is considered. In such systems, time in the common channel is divided into slots equal in duration to the message transmission time, and subscribers randomly select a slot for transmission. In some systems the slots have different durations, which under certain conditions can increase the speed of the algorithm. To determine these conditions, the influence of the slot duration on the speed of the algorithm is analyzed. It is shown that the speed of algorithms built on the basis of the ALOHA algorithm can be increased if the relative duration of the empty slot differs from one. An algorithm is proposed that provides the maximum speed when this condition is met. An optimization problem for choosing the parameter value at which the speed of the proposed algorithm is maximal is formulated and solved. Similar results are demonstrated for the case when the relative duration of the empty slot is much greater than one.
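The following sketch computes the throughput of a slotted-ALOHA-type scheme in a standard textbook-style model, assuming Poisson traffic of intensity lam per slot, unit duration for success and collision slots, and relative duration a for empty slots. This model and the grid search are illustrative assumptions, not necessarily the formulation used in the paper, but they show why the maximum attainable speed depends on whether a differs from one.

```python
# Illustrative model (textbook-style, not necessarily the paper's formulation):
# throughput of a slotted-ALOHA-type algorithm when the empty slot has relative
# duration a, while success and collision slots have duration 1.
import math

def throughput(lam, a):
    p_empty = math.exp(-lam)
    p_success = lam * math.exp(-lam)
    mean_slot = a * p_empty + (1.0 - p_empty)   # expected slot duration
    return p_success / mean_slot

def max_throughput(a, grid=10000):
    lams = [0.01 + 5.0 * i / grid for i in range(grid)]
    return max((throughput(l, a), l) for l in lams)

if __name__ == "__main__":
    for a in (1.0, 0.5, 0.1):
        r, l = max_throughput(a)
        print(f"a = {a}: max throughput ~ {r:.3f} at lambda ~ {l:.2f}")
```

For a = 1 this reproduces the classical slotted-ALOHA maximum of about 0.368; shortening the empty slot (a < 1) raises the achievable value, in line with the statement above.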
HANDWRITTEN TEXT RECOGNITION OF HISTORICAL DOCUMENTS USING DEEP NEURAL NETWORK TECHNOLOGIES Aleksander M. Unterberg, Anna V. Pyataeva, Svetlana S. Zamyslova, Ekaterina D. Rukosueva, Konstantin V. Bogdanov
767
The application of deep neural network technologies to the problem of handwriting recognition in pre-reform Russian is considered. The initial data are scanned JPEG images of 19th-century historical documents that contain various kinds of noise and interference, which complicates the work of the recognition algorithm. Text recognition is performed in three stages: noise removal; segmentation (extraction) of text lines in the image, since lines are the input data for the deep neural network; and recognition of the text of the extracted lines using the pre-trained Tesseract OCR model, which converts images of handwritten or printed text into text data. The model used is a convolutional recurrent neural network: a combination of a convolutional neural network that extracts local features from the image and a recurrent neural network, represented by two layers of bidirectional LSTM networks, that processes the resulting sequence. Using this model allows for reliable recognition of handwritten text.
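A schematic PyTorch version of such a CRNN is shown below: a small CNN extracts local features from a text-line image, and two bidirectional LSTM layers process the resulting horizontal sequence before a per-timestep classifier over the alphabet. The layer sizes, input height, and number of classes are assumptions; this is not the exact model from the paper or from Tesseract.

```python
# Schematic CRNN sketch under assumed layer sizes (not the paper's exact model):
# CNN feature extractor + two bidirectional LSTM layers over the width axis.
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, n_classes, img_height=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
        )
        feat_h = img_height // 4
        self.rnn = nn.LSTM(128 * feat_h, 256, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * 256, n_classes)

    def forward(self, x):                    # x: (B, 1, H, W) text-line image
        f = self.cnn(x)                      # (B, C, H/4, W/4)
        b, c, h, w = f.shape
        seq = f.permute(0, 3, 1, 2).reshape(b, w, c * h)   # width as time axis
        out, _ = self.rnn(seq)
        return self.fc(out)                  # (B, W/4, n_classes), e.g. for CTC loss

if __name__ == "__main__":
    model = CRNN(n_classes=80)
    logits = model(torch.randn(2, 1, 32, 256))
    print(logits.shape)                      # torch.Size([2, 64, 80])
```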

OPTICAL AND OPTO-ELECTRONIC INSTRUMENTS AND SYSTEMS

776
An improved engineering methodology for assessing the probability of detection and recognition of terrain objects using air- and ground-based television cameras operating in the visible and/or near-infrared range of the spectrum during the day and at night is considered. The methodology is conceptually similar to that used for thermal imaging devices, which makes it possible to obtain comparable estimates of the performance indicators of these types of surveillance equipment and, therefore, to predict the performance of an entire optical-electronic complex consisting of television and thermal imaging channels. Unlike known methods, the proposed calculation-and-analytical method takes into account a number of additional significant factors: operation of the television camera, depending on the level of natural illumination of the area, in both noise-limited and contrast-limited modes, where its efficiency is limited, respectively, by camera noise or by the limited contrast sensitivity of the human operator's visual analyzer; the type and density of cloud cover attenuating the solar irradiation of the object; atmospheric turbulence; the selected image interpretation conditions (brightness, contrast, apparent magnification); an improved model of the operator's visual analyzer with spatial-temporal integration of visual signals; and the operator's qualifications.
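For reference, engineering methodologies of this kind are commonly built around an empirical target transfer probability function of the Johnson-criteria type; the widely used textbook form is reproduced below purely as an illustration and is not necessarily the expression adopted by the authors.

```latex
P(N) = \frac{(N/N_{50})^{E}}{1 + (N/N_{50})^{E}},
\qquad
E = 2.7 + 0.7\,\frac{N}{N_{50}},
```

where N is the number of resolvable cycles across the object's critical dimension and N_{50} is the number required to perform the given task (detection or recognition) with 50% probability.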
STUDY OF A LOW-COHERENCE INTERFEROMETRIC PROBE OPERATING IN THE SCANNING MEASUREMENT MODE Evgeniy E. Majorov, Aleksander V. Arefiev, Ramiz B. Guliyev, Vera P. Pushkina, Alexander V. Dagaev
790
A low-coherence interferometric probe operating in the scanning measurement mode is presented. Data on the surface relief are obtained as the developed probe is moved and the path difference introduced by the reference mirror in the interferometer arm is changed. The functional diagram of the optical measuring unit and the scanning measurement mode are described, and the processing of the photodetector signals is analyzed. A pattern of irregular wave fronts produced when low-coherence radiation falls on a rough surface, the intensity curve of the interference pattern as the surface relief changes along the OZ axis, and the results of measuring the amplitude and envelope of the interference signal under defocusing are obtained.
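As an illustration of the kind of signal involved, the sketch below models a low-coherence interference signal along the OZ scan axis with a Gaussian coherence envelope and recovers the envelope via the Hilbert transform. The source wavelength, coherence length, and fringe visibility are assumed values, and the processing chain is not the one described in the paper.

```python
# Illustrative model (an assumption, not the authors' signal processing chain):
# a low-coherence interference signal along the OZ scan axis and its envelope
# extracted with the Hilbert transform.
import numpy as np
from scipy.signal import hilbert

def interference_signal(z, z0=0.0, wavelength=0.82e-6, l_c=10e-6, i0=1.0, v=0.8):
    # i0 - background intensity, v - fringe visibility, l_c - coherence length
    envelope = np.exp(-((z - z0) / l_c) ** 2)
    fringes = np.cos(4 * np.pi * (z - z0) / wavelength)
    return i0 * (1.0 + v * envelope * fringes)

if __name__ == "__main__":
    z = np.linspace(-50e-6, 50e-6, 4001)
    s = interference_signal(z)
    env = np.abs(hilbert(s - s.mean()))      # recovered envelope of the AC part
    print("envelope peak at z =", z[np.argmax(env)])
```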

MEDICAL DEVICES, SYSTEMS, AND PRODUCTS

CONTRAST AGENT DISTRIBUTION IN THE LUMEN AND WALL OF THE ABDOMINAL AORTA ACCORDING TO CT-ANGIOGRAPHIC STUDY DATA Maria R. Kodenko, Yury A. Vasilev, Andrey V. Samorodov, Nicholay S. Kulberg, Roman V. Reshetnikov
798
An approach to the approximation and analysis of the CT density signal component associated with an intravascular radiocontrast agent (RCA), based on computed tomography angiography (CTA) images of the abdominal aorta, is presented. The aim of the work is to study the possibility of extracting and analyzing the RCA-induced component in the lumen and wall of the abdominal aorta on CTA images. A functional describing the one-dimensional and two-dimensional distributions of the RCA as a set of sums of sigmoids of a special type is proposed. The nonlinear least squares method with Levenberg–Marquardt optimization is used for the approximation. The algorithm is tested on an open data set of 594 CTA images. Data preparation is performed using the specialized software 3D Slicer. The results demonstrate the absence of statistically significant differences in CT density values between the original images and the approximation results (p > 0.05, paired Wilcoxon test). The sensitivity of the model to different distributions of the RCA in the areas of aneurysm, thrombosis, and origin of the main arteries is demonstrated. Sensitivity is defined as the presence of statistically significant differences in the calculated model parameters between areas of homogeneous and nonhomogeneous RCA distribution within each CT study. The root-mean-square approximation errors for these areas do not differ statistically significantly and are unimodally distributed (p > 0.7) within a single CT study. The proposed approach can be useful for personalizing CTA, developing algorithms for processing CTA data, synthesizing non-contrast CT data, and training artificial intelligence algorithms.
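A schematic one-dimensional example of this kind of fit is given below: a sum of generic logistic sigmoids is fitted to a synthetic density profile by nonlinear least squares with the Levenberg-Marquardt method available in scipy.optimize.least_squares. The "sigmoid of a special type" parameterization from the paper is not reproduced; the profile, the number of sigmoids, and the initial guesses are hypothetical.

```python
# Schematic 1D illustration (generic logistic sum, not the paper's special-type
# sigmoids): fit a sum of sigmoids to a synthetic density profile by nonlinear
# least squares with the Levenberg-Marquardt method.
import numpy as np
from scipy.optimize import least_squares

def model(params, x, k):
    # params = [c0, A_1, x_1, w_1, ..., A_k, x_k, w_k]
    y = np.full_like(x, params[0])
    for i in range(k):
        a, x0, w = params[1 + 3 * i: 4 + 3 * i]
        y += a / (1.0 + np.exp(-(x - x0) / w))
    return y

def fit_profile(x, density, k=2):
    centres = np.quantile(x, np.linspace(0.25, 0.75, k))  # spread initial centres
    p0 = [density.min()]
    for c in centres:
        p0 += [np.ptp(density) / k, c, 1.0]
    res = least_squares(lambda p: model(p, x, k) - density, p0, method="lm")
    return res.x, np.sqrt(np.mean(res.fun ** 2))           # parameters, RMS error

if __name__ == "__main__":
    x = np.linspace(-20, 20, 200)                           # hypothetical profile axis, mm
    true = model(np.array([40.0, 300.0, -8.0, 1.5, -250.0, 8.0, 1.5]), x, 2)
    noisy = true + np.random.default_rng(0).normal(0, 5, x.size)
    params, rms = fit_profile(x, noisy, k=2)
    print("RMS approximation error:", round(rms, 2), "HU")
```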