DOI 10.17586/0021-3454-2025-68-12-1011-1019
UDC 004.622
ENERGI: A MULTIMODAL DATA CORPUS OF INTERACTION OF PARTICIPANTS IN VIRTUAL COMMUNICATION
A. A. Dvoynikova, Junior Researcher, Speech and Multimodal Interfaces Laboratory, St. Petersburg Federal Research Center of the Russian Academy of Sciences (SPC RAS), Saint Petersburg, 199178, Russian Federation
A. N. Velichko, Senior Researcher, St. Petersburg Federal Research Center of the Russian Academy of Sciences (SPC RAS), Saint Petersburg, 199178, Russian Federation
A. A. Karpov, Professor, Head of Laboratory, St. Petersburg Federal Research Center of the Russian Academy of Sciences (SPC RAS), Saint Petersburg, 199178, Russian Federation
Reference for citation: Dvoynikova A. A., Velichko A. N., Karpov A. A. ENERGI: a multimodal data corpus of interaction of participants in virtual communication. Journal of Instrument Engineering. 2025. Vol. 68, N 12. P. 1011–1019 (in Russian). DOI: 10.17586/0021-3454-2025-68-12-1011-1019.
Abstract. A statistical analysis of the multimodal ENERGI (ENgagement and Emotion Russian Gathering Interlocutors) data corpus is presented; the corpus contains audio-video recordings of group communication in Russian captured via the Zoom teleconferencing system. The data are annotated along three behavioral parameters, each with three classes: participant engagement (high, medium, low), emotional arousal (high, medium, low), and emotional valence (positive, neutral, negative), as well as ten classes of communicative gestures. The corpus comprises 6.4 hours of video recordings of group communications with 18 unique speakers; the data are annotated in 10-second time intervals. ENERGI’s advantages over other corpora include its multimodality, Russian-language content, speaker diversity, natural recording conditions, and extensive annotation across several behavioral parameters of communication participants. The corpus can be used to develop a multimodal automated system for analyzing the behavioral aspects of participants in virtual group communications.
Keywords: data corpus, engagement of participants, emotional arousal, valence of emotions, communicative gestures
Acknowledgement: The work was carried out within the framework of the budget theme of the St. Petersburg Federal Research Center of the RAS, No. FFZF-2025-0003.
References:
- Uzdiaev M.Yu., Karpov A.A. Scientific and Technical Journal of Information Technologies, Mechanics and Optics, 2024, no. 5(24), pp. 834–842. (in Russ.)
- Gupta A., Balasubramanian V. arXiv preprint, arXiv:1609.01885, 2016.
- Ben-Youssef A., Clavel C., Essid S. et al. Proceedings of the 19th ACM International Conference on Multimodal Interaction (ICMI), 2017, pp. 464–472, DOI: 10.1145/3136755.3136814.
- Del Duchetto F., Baxter P., Hanheide M. Frontiers in Robotics and AI, 2020, vol. 7, DOI: 10.3389/frobt.2020.00116.
- Kaur A., Mustafa A., Mehta L., Dhall A. 2018 Digital Image Computing: Techniques and Applications (DICTA), 2018, pp. 1–8, DOI: 10.1109/DICTA.2018.8615851.
- Delgado K., Origgi J.M., Hasanpoor T. et al. Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 3628–3636.
- Churaev E.N. Personalizirovannyye modeli raspoznavaniya psikhoemotsional’nogo sostoyaniya i vovlechonnosti lits po video (Personalized Models for Recognizing the Psycho-Emotional State and Engagement of Persons from Video), Candidate’s thesis, St. Petersburg, 2025, 134 p. (in Russ.)
- Karimah S.N., Hasegawa S. Smart Learning Environments, 2022, no. 1(9), p. 31, DOI: 10.1186/s40561-022-00212-y.
- Celiktutan O., Skordos E., Gunes H. IEEE Transactions on Affective Computing, 2017, no. 4(10), pp. 484–497, DOI: 10.1109/TAFFC.2017.2737019.
- Pabba C., Kumar P. Expert Systems, 2022, no. 1(39), p. e12839, DOI: 10.1111/exsy.12839.
- Chatterjee I., Goršič M., Clapp J.D., Novak D. Frontiers in Neuroscience, 2021, vol. 15, p. 757381, DOI: 10.3389/fnins.2021.757381.
- Sümer Ö., Goldberg P., D’Mello S. et al. IEEE Transactions on Affective Computing, 2021, no. 2(14), pp. 1012–1027, DOI: 10.1109/TAFFC.2021.3127692.
- Vanneste P., Oramas J., Verelst T. et al. Mathematics, 2021, no. 3(9), p. 287, DOI: 10.3390/math9030287.
- Dresvyanskiy D., Sinha Y., Busch M. et al. Speech and Computer. SPECOM 2022. Lecture Notes in Computer Science, 2022, pp. 163–177, DOI: 10.1007/978-3-031-20980-2_15.
- Cafaro A., Wagner J., Baur T. et al. Proceedings of the ICMI, 2017, pp. 350–359, DOI: 10.1145/3136755.3136780.
- Busso C., Bulut M., Lee C.C. et al. Language Resources and Evaluation, 2008, no. 4(42), pp. 335–359, DOI: 10.1007/s10579-008-9076-6.
- Ringeval F., Sonderegger A., Sauer J., Lalanne D. 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), 2013, pp. 1–8, DOI: 10.1109/FG.2013.6553805.
- Kossaifi J., Walecki R., Panagakis Y. et al. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, no. 3(43), pp. 1022–1040, DOI: 10.1109/TPAMI.2019.2944808.
- Dvoynikova A.A. Almanac of Scientific Works of Young Scientists of ITMO University, 2023, vol. 1, pp. 251–256. (in Russ.)
- Certificate of registration of the database 2023624954, Baza dannykh proyavleniy vovlechennosti i emotsiy russkoyazychnykh uchastnikov telekonferentsiy (ENERGI — ENgagement and Emotion Russian Gathering Interlocutors) (Database of Manifestations of Engagement and Emotions of Russian-Speaking Participants in Teleconferences (ENERGI — ENgagement and Emotion Russian Gathering Interlocutors)), A.A. Dvoynikova, A.A. Karpov, Priority 25.12.2023. (in Russ.)
- Dvoynikova A.A., Karpov A.A. Journal of Instrument Engineering, 2024, no. 11(67), pp. 984–993, DOI: 10.17586/0021-3454-2024-67-11-984-993. (in Russ.)
- Sloetjes H., Wittenburg P. Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008), 2008.
- Lyusin D.V. Psychological Diagnostics, 2006, vol. 4, pp. 3–22. (in Russ.)








