ISSN 0021-3454 (print version)
ISSN 2500-0381 (online version)

Vol. 67, No. 4, April 2024

DOI: 10.17586/0021-3454-2024-67-4-330-337

UDC 004.85

METHODS FOR OPTIMIZING NEURAL NETWORK MODELS

N. S. Mokretsov
Saint Petersburg Electrotechnical University “LETI”, Saint Petersburg, 197022, Russian Federation; Department of Information Systems

E. D. Arkhiptsev
Saint Petersburg Electrotechnical University “LETI”, Saint Petersburg, 197022, Russian Federation; Post-Graduate Student

Reference for citation: Mokretsov N. S., Arkhiptsev E. D. Methods for optimizing neural network models. Journal of Instrument Engineering. 2024. Vol. 67, N 4. P. 330–337 (in Russian). DOI: 10.17586/0021-3454-2024-67-4-330-337.

Abstract. Methods for building optimized deep learning accelerators are discussed. Traditional approaches to fault-tolerant deep learning accelerators are shown to rely on redundant computation, which incurs significant overheads in training time, power consumption, and integrated-circuit area. A method is proposed that accounts for the differing vulnerability of individual neurons and of the individual bits within each neuron, partially eliminating this computational redundancy. The method makes it possible to protect model components selectively at the architectural and circuit levels, which reduces overhead without compromising the reliability of the model. It is shown that quantization of the deep learning accelerator model allows data to be represented with fewer bits, which reduces hardware resource requirements.
Keywords: deep learning, deep learning accelerator, fault tolerance, cross-layer optimization, learning model quantization
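
To make the two ideas in the abstract concrete, below is a minimal NumPy sketch (not the authors' implementation): post-training uniform quantization to a reduced bit width, and selective protection in which only the most significant bits of each quantized value are stored redundantly and majority-voted. The 8-bit width, the two-bit protection mask, and the bitwise triple-modular-redundancy scheme are illustrative assumptions.

import numpy as np

# Illustrative sketch only: uniform 8-bit quantization plus selective
# (vulnerability-aware) protection of the two high-order bits via bitwise
# triple modular redundancy. Bit width, mask, and scheme are assumptions.

def quantize_uniform(w, n_bits=8):
    """Map float weights to signed n_bits integers with a per-tensor scale."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = float(np.max(np.abs(w))) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=1024).astype(np.float32)
q, scale = quantize_uniform(w)
print("8-bit quantization RMSE:", np.sqrt(np.mean((w - dequantize(q, scale)) ** 2)))

# Selective protection: store the two most significant bits (sign and top
# magnitude bit) in three copies, the remaining six bits only once.
qb = q.view(np.uint8)
msb_mask = np.uint8(0xC0)                  # hypothetical "vulnerable" bits
protected = np.stack([qb & msb_mask] * 3)  # triplicated high bits
unprotected = qb & np.uint8(0x3F)          # low bits, left unprotected

protected[0, 0] ^= np.uint8(0x80)          # simulate a single-event upset

# Bitwise majority vote corrects any single corrupted copy.
voted = (protected[0] & protected[1]) | (protected[0] & protected[2]) \
        | (protected[1] & protected[2])
recovered = (voted | unprotected).view(np.int8)
assert np.array_equal(recovered, q)        # the upset has been corrected

In hardware, the same selectivity would correspond to applying redundancy (e.g., ECC or TMR) only to the high-order bit lines, which is where the overhead savings described in the abstract would come from.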

References:
  1. Chen Y., Luo T., Liu S., Zhang S., He L., Wang J., Li L., Chen T., Xu Z., Sun N. Annual IEEE/ACM Intern. Symp. on Microarchitecture, 2014, vol. 47, pp. 609–622.
  2. Liu C., Chu C., Xu D., Wang Y., Wang Q., Li H., Li X., Cheng K.-T. IEEE Transact. on Computer-Aided Design of Integrated Circuits and Systems, 2021, no. 10(41), pp. 3400–3413.
  3. Dixit A., Wood A. 2011 Intern. Reliability Physics Symp., IEEE, 2011, paper 5B.4.
  4. Hoang L.H., Hanif M.A., Shafique M. Design, Automation & Test in Europe Conf. & Exhibition (DATE), IEEE, 2020, pp. 1241–1246.
  5. Ardakani A., Gross W.J. IEEE Workshop on Signal Processing Systems (SiPS), IEEE, 2021, pp. 52–57.
  6. Mittal S. Journal of Systems Architecture, 2020, vol. 104, p. 101.
  7. Chen Z., Li G., Pattabiraman K. Annual IEEE/IFIP Intern. Conf. on Dependable Systems and Networks (DSN), IEEE, 2021, vol. 51, pp. 1–13.
  8. Chen Y. H., Emer J., Sze V. ACM SIGARCH Computer Architecture News, 2016, no. 3(44), pp. 367–379.
  9. Libano F., Wilson B., Anderson J., Wirthlin M. J., Cazzaniga C., Frost C., Rech P. IEEE Transact. on Nuclear Science, 2018, no. 1(66), pp. 216–222.
  10. Mahdiani H. R., Fakhraie S. M., Lucas C. IEEE Transact. on Neural Networks and Learning Systems, 2012, no. 8(23), pp. 1215–1228.
  11. Schorn C., Guntoro A., Ascheid G. Design, Automation & Test in Europe Conference & Exhibition (DATE), IEEE, 2018, pp. 979–984.
  12. Mokretsov N.S., Tatarnikova T.M. Proc. of Saint Petersburg Electrotechnical University, 2023, no. 7(16), pp. 68–75. (in Russ.)
  13. Sovetov B.Y., Tatarnikova T.M., Cehanovsky V.V. Proc. of 22nd Intern. Conf. on Soft Computing and Measurements, SCM 2019, 2019, pp. 121–124.
  14. Wang H., Feng R., Han Z.F., Leung C.S. IEEE Transact. on Neural Networks and Learning Systems, 2017, no. 8(29), pp. 3870–3878.
  15. Bertoa T.G., Gambardella G., Fraser N. J., Blott M., McAllister J. IEEE Design & Test, 2022, https://doi.org/10.1109/MDAT.2022.3174181.
  16. Rabe M., Milz S., Mader P. Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition, 2021, pp. 129–141.