Please use this identifier to cite or link to this item: https://dspace.ncfu.ru/handle/123456789/29181
Title: Analysis of an Existing Method for Detecting Adversarial Attacks on Deep Neural Networks
Authors: Lapina, M. A.
Dyudyun, G. D.
Kotlyarov, D. V.
Rjevskaya, N. V.
Keywords: Adversarial attack; Pattern recognition; Artificial intelligence; Attack algorithm; Information security; Machine learning; Malicious machine learning; Neural network
Issue Date: 2024
Publisher: Springer Science and Business Media Deutschland GmbH
Citation: Lapina M., Dudun G., Kotlyarov D., Rjevskaya N., Subramanian S.J. Analysis of an Existing Method for Detecting Adversarial Attacks on Deep Neural Networks // Lecture Notes in Networks and Systems. - 2024. - 1044 LNNS. - pp. 316 - 329. - DOI: 10.1007/978-3-031-64010-0_29
Series/Report no.: Lecture Notes in Networks and Systems
Abstract: This paper analyzes an existing method for detecting adversarial attacks on deep neural networks, proposed in 2021 by researchers Ko, G. and Lim, G. from Carnegie Mellon University and the Korea Advanced Institute of Science and Technology (KAIST). It examines adversarial attacks and the history of research on the topic. The paper considers the concepts of interpretable and non-interpretable neural networks and the particular defense methods suited to each type, and argues that the detection method under study is applicable to both. An example of an attack simulation is given, which makes it possible to identify an indicator showing that an attack has taken place.
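The metadata above does not reproduce the paper's attack simulation. As a generic illustration of the kind of adversarial perturbation the abstract refers to, the sketch below applies an FGSM-style step (sign of the loss gradient, a standard attack technique, not necessarily the one used in the paper) to a toy linear classifier; the weights, inputs, and epsilon are made up for the example.

```python
import numpy as np

# Hypothetical 2-class linear classifier: logits = W @ x + b.
# Weights are random placeholders, chosen only for illustration.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))   # 2 classes, 4 input features
b = np.zeros(2)

def predict(x):
    """Return the index of the highest-scoring class."""
    return int(np.argmax(W @ x + b))

def fgsm(x, true_label, eps):
    """One FGSM-style step: perturb x in the sign of the gradient of the
    margin (wrong-class logit minus true-class logit) with respect to x."""
    wrong = 1 - true_label
    grad = W[wrong] - W[true_label]   # gradient of the margin w.r.t. x
    return x + eps * np.sign(grad)

x = np.array([0.5, -1.0, 0.3, 0.8])
y = predict(x)                 # treat the model's own prediction as the label
x_adv = fgsm(x, y, eps=0.5)    # bounded perturbation: |x_adv - x| <= eps
print(predict(x), predict(x_adv))
```

Each component of the perturbation stays within the epsilon bound, while the wrong class's margin over the predicted class can only grow, which is why even a visually small change can flip the model's output.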
URI: https://dspace.ncfu.ru/handle/123456789/29181
Appears in Collections: Articles indexed in SCOPUS, WOS

Files in This Item:
File: scopusresults 3200.pdf (Restricted Access)
Size: 133.23 kB
Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.