UDC 004.032.26
DOI: 10.36871/2618-9976.2022.08.001

Authors

Victor V. Erokhin
Doctor of Technical Sciences, Associate Professor, Moscow State Institute of International Relations (University) of the Ministry of Foreign Affairs of Russia, Moscow, Russia
Elena V. Eliseeva
Candidate of Pedagogical Sciences, Associate Professor, I.G. Petrovski Bryansk State University, Bryansk, Russia

Abstract

The article presents research on how scaling the parameters of artificial neural networks affects their performance at fixed levels of training, generalization, and classification error. A fundamental relationship between the size of an artificial neural network's architecture (model) and its performance is revealed. This relationship makes it possible to select optimal hyperparameters on small artificial neural networks and then apply those hyperparameters to large-scale artificial neural networks.
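A minimal sketch of the idea described in the abstract, assuming a scikit-learn workflow: a hyperparameter (here, the learning rate) is selected by grid search on a deliberately small network and then reused when training a much larger one. The dataset, architectures, and parameter grid are illustrative assumptions, not the authors' experimental setup.

```python
# Hypothetical illustration: tune a hyperparameter on a small network,
# then transfer it to a larger network. Not the authors' original code.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic classification data (illustrative only).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: select the learning rate on a small architecture, where the
# search is cheap.
small = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300, random_state=0)
search = GridSearchCV(
    small,
    {"learning_rate_init": [1e-4, 1e-3, 1e-2, 1e-1]},
    cv=3,
)
search.fit(X_train, y_train)
best_lr = search.best_params_["learning_rate_init"]

# Step 2: reuse the selected hyperparameter on a larger architecture,
# avoiding a direct (and expensive) search at full scale.
large = MLPClassifier(
    hidden_layer_sizes=(512, 512),
    max_iter=300,
    learning_rate_init=best_lr,
    random_state=0,
)
large.fit(X_train, y_train)
print(f"learning rate selected on small model: {best_lr}")
print(f"large-model test accuracy: {large.score(X_test, y_test):.3f}")
```

The design choice this sketch reflects is the one stated in the abstract: the cost of the hyperparameter search is paid on the small model, while the large model is trained only once with the transferred settings.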

Keywords

Artificial neural networks, Neural network hyperparameters, Deep neural networks, Modeling, Optimization