ISSN 0021-3454 (print version)
ISSN 2500-0381 (online version)
Summaries of the issue


The features of heart rhythm disturbance classification based on a single-lead electrocardiogram are studied. A primary set of eight informative features is proposed: three for RR-interval duration and five for R-wave shape. An effective combination of the proposed features for classifying three heart rhythm states (normal cardiac cycle, ventricular extrasystole, atrial extrasystole) using logistic regression and random forest algorithms is proposed. Lead II records from the multi-channel electrocardiogram databases MIT-BIH Arrhythmia DB and the St. Petersburg Institute of Cardiological Engineering (INCART) are used. It is found that the most informative features for the considered classes of cardiac rhythm disorders are the coupling coefficient and the kurtosis coefficient γi of the i-th R-wave. The best classification accuracy by average balanced F-measure on the dataset without class balancing is 92.58 % for logistic regression and 92.11 % for random forest; with class balancing, the results are 86.17 % and 84.55 %, respectively. The experimental results show that, to classify the heart rhythm disturbances under consideration, it is advisable to use one duration criterion and one shape criterion. The obtained results can be used in the synthesis and analysis of classification systems for heart rhythm disorders.
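As a rough illustration of the classification setup this abstract describes, the sketch below trains logistic regression and a random forest on synthetic two-feature data (stand-ins for a duration feature and a shape feature; the data, class separation, and split are assumptions, not the paper's dataset) and reports the macro-averaged (balanced) F-measure with scikit-learn:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-ins for two informative features per cardio cycle
# (e.g. a duration feature and an R-wave shape feature) -- hypothetical data.
n = 300
X = np.vstack([rng.normal(loc=m, scale=0.3, size=(n, 2)) for m in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], n)  # normal, ventricular extrasystole, atrial extrasystole

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
for clf in (LogisticRegression(max_iter=1000), RandomForestClassifier(random_state=0)):
    clf.fit(X_tr, y_tr)
    # macro-averaged F-measure weights each rhythm class equally
    f1 = f1_score(y_te, clf.predict(X_te), average="macro")
    print(type(clf).__name__, round(f1, 3))
```

On this toy data both models score similarly; the paper's reported figures come from real MIT-BIH and INCART records.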
USING DEEP LEARNING IN PNEUMONIA DIAGNOSIS FROM X-RAY PATTERNS Anastasia S. Raskopina, Viktoriya V. Bozhenko, Tatiana M. Tatarnikova
With the development of neural networks, new effective solutions are opening up in the field of medical diagnosis. The level of accuracy and reliability achieved by neural networks reduces the risk of false positives and diagnostic errors. For the task of diagnosing pneumonia from X-ray images, several machine learning algorithms are compared: the support vector machine (SVM), K-nearest neighbors (KNN), and convolutional neural networks (CNN). The advantages of these methods for medical diagnostics are discussed. The machine learning algorithms are implemented in software, and training parameters for each are selected experimentally. The methods are compared by the standard accuracy metric and by training time. The corresponding experiments are conducted on real X-ray images of patients with pneumonia. The experimental results demonstrate better accuracy of deep neural networks compared to traditional machine learning methods, which confirms their potential effectiveness for the diagnosis and treatment of this disease.
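A minimal sketch of the comparison protocol (accuracy plus training time per method) is shown below on scikit-learn's small digits dataset as a stand-in for X-ray images, with an MLP standing in for the CNN so the example stays self-contained; the dataset, models, and hyperparameters are illustrative assumptions, not the paper's setup:

```python
import time
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)  # small images as a stand-in for X-rays
X = X / 16.0                         # scale pixel values to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

for name, clf in [("SVM", SVC()),
                  ("KNN", KNeighborsClassifier()),
                  ("NN (MLP stand-in for a CNN)", MLPClassifier(max_iter=500, random_state=0))]:
    t0 = time.perf_counter()
    clf.fit(X_tr, y_tr)             # training time is part of the comparison
    dt = time.perf_counter() - t0
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"{name}: accuracy={acc:.3f}, train time={dt:.2f}s")
```

A real CNN on chest X-rays would use a deep learning framework and convolutional layers; the point here is only the accuracy-plus-time comparison loop.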
DATA MINING IN THE DIAGNOSIS OF ANEMIA BY CLINICAL INDICATORS Viktoriya V. Bozhenko, Natalia Yu. Chernysh, Tatiana M. Tatarnikova
A set of medical data obtained from the information system of a network laboratory for outpatient observation, containing test indicators of patients diagnosed with anemia, is studied. The set contains indicators of a general blood test, reticulocytes, and additional biochemical markers of iron metabolism and the inflammatory process. A program is developed to automate the analysis of the test set according to the proposed processing algorithm, taking the characteristics of medical data into account. Preliminary preparation and data cleaning are completed, and statistical and factor analysis are carried out. Analysis of the selected groups of data makes it possible to find some common indicators for patients with anemic syndrome. Using factor analysis, the number of variables is reduced and four main factors (groups of initial characteristics) necessary to describe the data under study are identified. The results obtained can be used to provide statistical reports to a medical organization. The studied data are also prepared for the application of machine learning methods and deeper analysis aimed at more effective diagnosis of anemia at early stages.
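The dimensionality-reduction step described above can be sketched with scikit-learn's factor analysis; the synthetic "laboratory" matrix below (200 patients, 12 indicators, four latent factors) is a hypothetical stand-in for the real clinical data:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Hypothetical cleaned laboratory data: 200 patients x 12 blood indicators,
# generated from four underlying factors plus noise.
latent = rng.normal(size=(200, 4))
loadings = rng.normal(size=(4, 12))
X = latent @ loadings + 0.3 * rng.normal(size=(200, 12))

X_std = StandardScaler().fit_transform(X)   # indicators have different units
fa = FactorAnalysis(n_components=4, random_state=0)
scores = fa.fit_transform(X_std)
print(scores.shape)  # each patient is now described by four factor scores
```

In practice the number of factors is chosen from the data (e.g. by explained variance or a scree plot), not fixed in advance as here.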
Methods for building optimized deep learning accelerators are discussed. Traditional approaches to fault-tolerant deep learning accelerators are shown to rely on redundant computation, which results in significant overheads, including training time, power consumption, and integrated circuit size. A method is proposed that accounts for differences in the vulnerability of individual neurons and of the bits within each neuron, which partially solves the problem of computational redundancy. The method makes it possible to selectively protect model components at the architectural and circuit levels, reducing overhead without compromising model reliability. It is shown that quantization of the deep learning accelerator model allows data to be represented in fewer bits, which reduces hardware resource requirements.
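The quantization step mentioned in the last sentence can be illustrated with a minimal sketch of uniform symmetric post-training quantization; the 8-bit width and the scaling scheme are generic assumptions, not the paper's exact method:

```python
import numpy as np

def quantize(w, n_bits=8):
    """Uniform symmetric quantization of a weight tensor to n_bits integers."""
    scale = np.abs(w).max() / (2 ** (n_bits - 1) - 1)
    q = np.clip(np.round(w / scale), -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1)
    return q.astype(np.int8 if n_bits <= 8 else np.int32), scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=1000).astype(np.float32)  # hypothetical weights
q, s = quantize(w, n_bits=8)
err = np.abs(dequantize(q, s) - w).max()
print(q.dtype, float(err))  # int8 storage, small reconstruction error
```

Storing int8 instead of float32 cuts memory four-fold, which is the hardware-resource reduction the abstract refers to; selective bit-level protection would then guard only the most significant bits of the most vulnerable values.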


A method is presented for obtaining pseudo-random numbers for further use in developing interactive applications on the Unity engine, with information collected from pressure and color sensors connected to an Arduino microcontroller. The method uses the results of periodic measurements of pressure, temperature, illumination, and RGB-channel colors in a room, bit-shifts them by a random number of digits, and obtains the seed of a pseudo-random number generator by taking the remainder of dividing the result by the current UNIX time. An application implementing the proposed method of generating pseudo-random numbers has been developed. The uniformity of the distribution is checked and the correlation coefficient is assessed using a sample of random numbers.
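A minimal sketch of such sensor-based seeding is shown below; the shift amounts, the XOR mixing, and the sensor values are illustrative assumptions, since the abstract does not fix the exact mixing scheme:

```python
import time
import random

def seed_from_sensors(readings, unix_time=None):
    """Mix sensor readings (pressure, temperature, light, RGB channels)
    into a PRNG seed: bit-shift each reading, XOR-combine, then take the
    remainder by the current UNIX time. Shift amounts are an assumption."""
    unix_time = int(unix_time if unix_time is not None else time.time())
    mixed = 0
    for i, value in enumerate(readings):
        mixed ^= int(value) << (i * 7 % 29)   # shift each reading differently
    return mixed % unix_time                   # remainder by UNIX time

readings = [101325, 23, 540, 120, 87, 200]     # hypothetical sensor values
seed = seed_from_sensors(readings, unix_time=1700000000)
rng = random.Random(seed)
sample = [rng.random() for _ in range(5)]
print(seed, sample)
```

Uniformity of the resulting stream could then be checked with a chi-squared test, and correlation with a lag-1 autocorrelation coefficient, as the abstract describes.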
The problem of load balancing in large information systems is discussed. Under the high load caused by big data and a growing number of users, effective distribution of load across system resources becomes critical. Existing load balancing algorithms are reviewed with regard to centralized and distributed approaches to building an information system's architecture. Different architectures for building information systems are described, highlighting the features of their load balancing mechanisms. The results of a full-scale experiment on virtual servers to evaluate the effectiveness of load balancing algorithms are presented. The computing characteristics of the servers were set differently, and the expected request execution time was randomized, which brings the experiment close to real conditions.
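The experiment's setup, heterogeneous server capacities plus randomized request costs, can be sketched with a capacity-aware least-loaded policy; the capacities, request costs, and policy choice are assumptions for illustration, not the paper's exact configuration:

```python
import random

servers = {"s1": 1.0, "s2": 2.0, "s3": 4.0}   # hypothetical relative capacities
loads = {s: 0.0 for s in servers}

def least_loaded():
    # pick the server with the lowest load normalized by its capacity,
    # so faster servers absorb proportionally more requests
    return min(servers, key=lambda s: loads[s] / servers[s])

random.seed(0)
for _ in range(100):
    s = least_loaded()
    loads[s] += random.uniform(0.5, 1.5)      # randomized request cost
print({s: round(l, 1) for s, l in loads.items()})
```

A round-robin baseline would instead give each server the same number of requests regardless of capacity; comparing completion times of the two policies is the essence of such an experiment.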
The features of developing a self-balancing data structure focused on accelerated access to high-priority elements are considered. Such structures can be used in problems of modeling discrete information sources. A self-balancing binary search tree is proposed, optimized for efficient storage and retrieval of data based on priorities that correlate with the probability of symbol generation. The solution overcomes the limitations of existing data structures, taking memory and performance requirements into account in the context of specific information processing tasks.
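The abstract does not specify the exact structure, but a treap is one standard way to combine binary-search ordering on keys with heap ordering on priorities, so that the most probable symbol sits at the root and is found fastest; the sketch below is that generic technique, not the paper's implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    key: str                       # symbol of the discrete source
    priority: float                # e.g. probability of symbol generation
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

def rot_right(n):
    l = n.left
    n.left, l.right = l.right, n
    return l

def rot_left(n):
    r = n.right
    n.right, r.left = r.left, n
    return r

def insert(root, key, priority):
    """BST insert by key, then rotate to restore the max-heap on priority."""
    if root is None:
        return Node(key, priority)
    if key < root.key:
        root.left = insert(root.left, key, priority)
        if root.left.priority > root.priority:
            root = rot_right(root)
    else:
        root.right = insert(root.right, key, priority)
        if root.right.priority > root.priority:
            root = rot_left(root)
    return root

root = None
for sym, p in [("a", 0.4), ("b", 0.1), ("c", 0.3), ("d", 0.2)]:
    root = insert(root, sym, p)
print(root.key)  # the most probable symbol ends up at the root
```

High-probability symbols therefore lie near the root, so expected lookup depth tracks the source's entropy rather than the worst-case tree height.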


The features of studying intraseasonal variability of natural-territorial complexes in the Arctic zone of the Russian Federation using multispectral and radar space monitoring are considered. Due to special lighting conditions and high cloudiness, the capabilities of spacecraft with optical instruments on board for surveying the territories of the Russian Arctic are limited. To ensure space monitoring of natural territories of the Russian Arctic, it is necessary to develop techniques using radar methods that do not depend on shooting conditions. Using the example of a single region of the Russian Arctic, the Taz Peninsula, the intraseasonal variability of the most characteristic types of natural-territorial complexes (shrub-lichen tundra; sphagnum bogs; grass willows; sandbanks and anthropogenic objects) is analyzed. Research methods include interferometric processing of original SAR radar data from the Sentinel-1B spacecraft and processing of data from the Sentinel-2A and 2B spacecraft using algorithms for classification and calculation of spectral vegetation indices. Based on the results of classifying the surfaces of the Taz Peninsula test site, carried out using multispectral space information, four reference areas of the natural-territorial complex are selected. An analysis of the stability and variability of surfaces in the selected areas of the test site is carried out on the basis of calculated series of interferometric coherence for each type of natural-territorial complex in the snow-free period of 2021. To interpret the results obtained, statistically processed series of meteorological observations of air temperature and precipitation and vegetation index data are used.
The results of the study may be most in demand in industrial and environmental monitoring of the oil and gas industry and environmental protection in order to maintain technosphere safety and identify the degree of anthropogenic disturbance in the territories of the Russian Arctic.
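The interferometric coherence series mentioned above are built from the standard sample coherence of two co-registered complex SAR acquisitions; the sketch below applies that textbook estimator to synthetic signals (a global averaging window rather than the sliding window used in real processing, and random data in place of Sentinel-1B scenes):

```python
import numpy as np

def coherence(s1, s2):
    """Sample interferometric coherence:
    |<s1 * conj(s2)>| / sqrt(<|s1|^2> * <|s2|^2>), averaged over the scene."""
    num = np.abs(np.mean(s1 * np.conj(s2)))
    den = np.sqrt(np.mean(np.abs(s1) ** 2) * np.mean(np.abs(s2) ** 2))
    return num / den

rng = np.random.default_rng(2)
s1 = rng.normal(size=1000) + 1j * rng.normal(size=1000)

# A stable surface keeps its phase structure between acquisitions
stable = coherence(s1, s1 * np.exp(1j * 0.3))
# A changed surface (e.g. vegetation growth, moisture) decorrelates
noise = rng.normal(size=1000) + 1j * rng.normal(size=1000)
changed = coherence(s1, noise)
print(round(float(stable), 3), round(float(changed), 3))
```

High coherence over a season thus marks stable surfaces such as sandbanks and anthropogenic objects, while low coherence marks variable vegetated surfaces, which is what the seasonal series are used to separate.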
The problem of designing a system for monitoring lightning activity is considered. The designed system is based on sensors that evaluate lightning discharge parameters by receiving electromagnetic radiation generated when an electric charge moves along the lightning channel. It is shown that mathematical modeling can be used to ensure the basic characteristics of the designed systems for monitoring lightning activity, such as the probability of detection, the accuracy of determining the coordinates and current of a lightning discharge for a selected working area. A model is proposed that makes it possible to design a system configuration capable of providing specified characteristics by obtaining model estimates and their subsequent analysis. The created model of a lightning activity monitoring system is used to estimate the probability of detecting a lightning discharge by an experimental network deployed in St. Petersburg.
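A Monte Carlo sketch of the kind of model estimate described above is given below; the sensor layout, the inverse-distance field model, the detection threshold, and the three-sensor location requirement are all illustrative assumptions, not the paper's model:

```python
import math
import random

sensors = [(0, 0), (50, 0), (0, 50), (50, 50)]  # hypothetical network, km
THRESHOLD = 1.0      # minimal detectable field amplitude, arbitrary units
MIN_SENSORS = 3      # sensors needed to locate a discharge

def detected(x, y, current):
    """A discharge is detected if enough sensors receive a field above
    threshold; field amplitude is modeled as current / distance."""
    hits = 0
    for sx, sy in sensors:
        d = max(math.hypot(x - sx, y - sy), 1.0)
        if current / d >= THRESHOLD:
            hits += 1
    return hits >= MIN_SENSORS

random.seed(0)
trials = 10000
hits = sum(detected(random.uniform(0, 50), random.uniform(0, 50),
                    random.uniform(10, 100))  # random location and current
           for _ in range(trials))
p_detect = hits / trials
print(f"estimated detection probability: {p_detect:.2f}")
```

Running such a model over candidate sensor configurations lets one pick a network geometry that meets a specified detection probability for the working area, which is how the abstract describes the model being used.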


An approach to analyzing and managing data when conducting satellite environmental monitoring of territories and interpreting the results obtained is presented. The approach is based on the experience of applied research during annual satellite monitoring of specified territories in the North-West region of the Russian Federation. Digital maps of geospatial information layers, built from space imagery of various spatial resolutions and from field survey materials for the given territories, are considered as geodata for monitoring the parameters of objects controlled from space and identifying environmentally potentially hazardous areas. A geoportal, a software and hardware complex for processing data and presenting the results of land-characteristic recognition, is developed. The results of environmental monitoring of the state of objects serve both as the geoportal's output geodata for users and, once stored in the archive, as a source of data for further monitoring of intertemporal and spatial variability of environmental components. Currently, the results of the studies are most in demand in industrial and environmental monitoring in the oil and gas industry, construction, and agriculture.