Please use this identifier to cite or link to this item:
https://dspace.ncfu.ru/handle/20.500.12258/23476

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Abdulkadirov, R. I. | - |
| dc.contributor.author | Абдулкадиров, Р. И. | - |
| dc.contributor.author | Lyakhov, P. A. | - |
| dc.contributor.author | Ляхов, П. А. | - |
| dc.date.accessioned | 2023-05-12T12:40:37Z | - |
| dc.date.available | 2023-05-12T12:40:37Z | - |
| dc.date.issued | 2023 | - |
| dc.identifier.citation | Abdulkadirov, R.I., Lyakhov, P.A. A new approach to training neural networks using natural gradient descent with momentum based on Dirichlet distributions // Computer Optics. - 2023. - 47 (1), pp. 160-169. - DOI: 10.18287/2412-6179-CO-1147 | ru |
| dc.identifier.uri | http://hdl.handle.net/20.500.12258/23476 | - |
| dc.description.abstract | In this paper, we propose a natural gradient descent algorithm with momentum based on Dirichlet distributions to speed up the training of neural networks. This approach takes into account not only the direction of the gradients but also the convexity of the minimized function, which significantly accelerates the search for extrema. Calculations of natural gradients based on Dirichlet distributions are presented, and the proposed approach is introduced into an error backpropagation scheme. The results of image recognition and time series forecasting experiments show that the proposed approach achieves higher accuracy and requires fewer iterations to minimize loss functions than stochastic gradient descent, adaptive moment estimation, and the adaptive parameter-wise diagonal quasi-Newton method for nonconvex stochastic optimization. | ru |
| dc.language.iso | en | ru |
| dc.relation.ispartofseries | Computer Optics | - |
| dc.subject | Dirichlet distributions | ru |
| dc.subject | Natural gradient descent | ru |
| dc.subject | Pattern recognition | ru |
| dc.subject | Machine learning | ru |
| dc.title | A new approach to training neural networks using natural gradient descent with momentum based on Dirichlet distributions | ru |
| dc.type | Article | ru |
| vkr.inst | Faculty of Mathematics and Computer Science named after Professor N.I. Chervyakov | ru |
| vkr.inst | North Caucasus Center for Mathematical Research | ru |
Appears in Collections: Articles indexed in SCOPUS and WoS
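The abstract describes natural gradient descent with momentum, where the gradient is preconditioned by the Fisher information of a Dirichlet distribution. The following is a minimal illustrative sketch of that general idea, not the authors' backpropagation scheme from the paper: it fits Dirichlet parameters to samples by natural gradient descent with heavy-ball momentum, using the closed-form Fisher information matrix F = diag(ψ′(αᵢ)) − ψ′(α₀)𝟙𝟙ᵀ, where α₀ = Σαᵢ and ψ′ is the trigamma function. All function names and hyperparameters are illustrative assumptions.

```python
import numpy as np
from scipy.special import digamma, polygamma


def dirichlet_fisher(alpha):
    # Closed-form Fisher information of the Dirichlet distribution:
    # F = diag(psi'(alpha_i)) - psi'(alpha_0) * 1 1^T, alpha_0 = sum(alpha)
    return np.diag(polygamma(1, alpha)) - polygamma(1, alpha.sum())


def nll_grad(alpha, mean_log_x):
    # Gradient of the average negative log-likelihood of Dirichlet samples,
    # where mean_log_x[i] is the sample mean of log(x_i)
    return digamma(alpha) - digamma(alpha.sum()) - mean_log_x


def fit_dirichlet_ngd(samples, lr=0.05, momentum=0.9, steps=1000):
    # Natural gradient descent with heavy-ball momentum (illustrative sketch):
    # precondition the Euclidean gradient with the inverse Fisher matrix,
    # accumulate it into a velocity buffer, and step the parameters.
    mean_log_x = np.log(samples).mean(axis=0)
    alpha = np.ones(samples.shape[1])     # initial guess
    v = np.zeros_like(alpha)              # momentum buffer
    for _ in range(steps):
        g = nll_grad(alpha, mean_log_x)
        nat_g = np.linalg.solve(dirichlet_fisher(alpha), g)  # F^{-1} g
        v = momentum * v + nat_g
        alpha = np.clip(alpha - lr * v, 1e-3, None)  # keep alpha > 0
    return alpha


rng = np.random.default_rng(0)
alpha_true = np.array([2.0, 5.0, 3.0])
samples = rng.dirichlet(alpha_true, size=20000)
alpha_hat = fit_dirichlet_ngd(samples)
```

Because the Fisher matrix accounts for the local curvature of the parameter manifold, the preconditioned step behaves like a damped Fisher-scoring update and recovers the true parameters in far fewer iterations than a plain Euclidean gradient step of the same size would.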
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| WoS 1593 .pdf | Restricted Access | 107.48 kB | Adobe PDF | View/Open |
| scopusresults 2527 .pdf | Restricted Access | 131.64 kB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.