Please use this identifier to cite or link to this item: https://dspace.ncfu.ru/handle/20.500.12258/21937
Title: Accelerating Extreme Search of Multidimensional Functions Based on Natural Gradient Descent with Dirichlet Distributions
Authors: Abdulkadirov, R. I.
Lyakhov, P. A.
Nagornov, N. N.
Keywords: Dirichlet distribution; Optimization; Natural gradient descent; K–L divergence; Generalized Dirichlet distribution
Issue Date: 2022
Publisher: MDPI
Citation: Abdulkadirov, R., Lyakhov, P., Nagornov, N. Accelerating Extreme Search of Multidimensional Functions Based on Natural Gradient Descent with Dirichlet Distributions // Mathematics. - 2022. - Volume 10. - Issue 19. - Article No. 3556. - DOI: 10.3390/math10193556
Series/Report no.: Mathematics
Abstract: Attaining high accuracy with less complex neural network architectures remains one of the most important problems in machine learning. In many studies, recognition and prediction quality is improved by extending neural networks with ordinary or specialized neurons, which significantly increases training time. However, employing an optimization algorithm that brings the loss function into the neighborhood of the global minimum can reduce the number of layers and epochs required. In this work, we explore the extremum search of multidimensional functions by the proposed natural gradient descent based on Dirichlet and generalized Dirichlet distributions. The natural gradient describes a multidimensional surface with probability distributions, which allows us to reduce variation in the gradient accuracy and step size. The proposed algorithm is equipped with step-size adaptation, which allows it to reach higher accuracy in a small number of iterations during minimization, compared with ordinary gradient descent and adaptive moment estimation (Adam). We provide experiments on test functions in three- and four-dimensional spaces, where natural gradient descent demonstrates its ability to converge to the neighborhood of the global minimum. Such an approach can find application in minimizing the loss function in various types of neural networks, such as convolutional, recurrent, spiking, and quantum networks.
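As background for the abstract, the following is a minimal Python sketch of a natural-gradient update preconditioned by the Fisher information matrix of a Dirichlet distribution, F(alpha) = diag(psi'(alpha_i)) - psi'(sum(alpha)) * ones, where psi' is the trigamma function. This is not the authors' algorithm (their method additionally involves step-size adaptation and generalized Dirichlet distributions); the function names and the toy quadratic objective are illustrative assumptions.

    import numpy as np
    from scipy.special import polygamma

    def fisher_dirichlet(alpha):
        # Fisher information of Dirichlet(alpha):
        # F = diag(psi'(alpha_i)) - psi'(sum(alpha)) * (all-ones matrix)
        return np.diag(polygamma(1, alpha)) - polygamma(1, alpha.sum())

    def natural_gradient_step(alpha, grad, lr=0.01):
        # One natural-gradient update: alpha <- alpha - lr * F(alpha)^{-1} grad
        nat_grad = np.linalg.solve(fisher_dirichlet(alpha), grad)
        return np.maximum(alpha - lr * nat_grad, 1e-6)  # keep parameters positive

    # Toy usage (assumed objective): f(alpha) = ||alpha - target||^2
    target = np.array([2.0, 3.0, 5.0])
    alpha = np.ones(3)
    for _ in range(1000):
        grad = 2.0 * (alpha - target)  # Euclidean gradient of f
        alpha = natural_gradient_step(alpha, grad)
    print(alpha)  # close to [2, 3, 5]

Solving the linear system F(alpha) x = grad instead of inverting F explicitly is the standard numerically stable way to apply the inverse Fisher matrix.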
URI: http://hdl.handle.net/20.500.12258/21937
Appears in Collections: Articles indexed in SCOPUS, WoS

Files in This Item:
File                     Description        Size       Format
scopusresults 2376 .pdf  Restricted Access  2.37 MB    Adobe PDF
WoS 1484 .pdf            Restricted Access  113.63 kB  Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.