Please use this identifier to cite or link to this item: https://dspace.ncfu.ru/handle/123456789/32630
Title: Procedural-semantic models in the description of text-generating functions of language neural networks
Authors: Gusarenko, S. V.
Gusarenko, M. K.
Keywords: Semantic model; Deep understanding; Linguistic neural network; Generated text; Semantic operation; Ontologies
Issue Date: 2025
Publisher: INOIT ALMAVEST
Citation: Gusarenko, S. V., Gusarenko, M. K. Procedural-semantic models in the description of text-generating functions of language neural networks // FILOLOGICHESKIE NAUKI-NAUCHNYE DOKLADY VYSSHEI SHKOLY-PHILOLOGICAL SCIENCES-SCIENTIFIC ESSAYS OF HIGHER EDUCATION. – 2025. – No. 6. – Pp. 117–126. – DOI 10.20339/PhS.6-25.117
Series/Report no.: FILOLOGICHESKIE NAUKI-NAUCHNYE DOKLADY VYSSHEI SHKOLY-PHILOLOGICAL SCIENCES-SCIENTIFIC ESSAYS OF HIGHER EDUCATION
Abstract: Language neural networks are a product of human intelligence, yet the semantic procedure that produces the individual meanings, and the overall meaning, of a generated text has not been fully described either by artificial-intelligence specialists or by linguists working in this field. Given this state of affairs, it seems advisable to study meaningful interpretations of how neural networks operate, in particular by constructing models of the semantic operations minimally necessary for deep understanding of texts by neural networks. It is concluded that the procedure for generating a direct answer to a question about a text can be represented as a general model comprising the following semantic operations: converting the inverted structure of the question in the prompt into the direct structure of a representative answer sentence; determining which descriptions in the text analyzed by the neural network are coreferential with descriptions in the prompt; consulting frame structures (or ontologies) to detect semantic links between these descriptions; and identifying a semantic structure in the text that corresponds to the task and to the structure of the previously formed answer sentence. The neural networks were also able to recognize the humorous nature of a previously unpublished text unfamiliar to them, which suggests an ability to identify a comic device regardless of the material on which that device is realized. This, in turn, allows the assumption that the so-called attention mechanisms of the studied neural networks identify latent connections and dependencies relevant to the task, connections that, under certain textual conditions and within certain linguistic cultures, can serve as the basis for creating a comic effect.
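The four-operation model summarized in the abstract can be illustrated with a minimal sketch. All function names, data structures, and the toy ontology below are hypothetical illustrations of the model's logic, not an implementation taken from the article:

```python
# Illustrative sketch of the four semantic operations described in the
# abstract. Everything here (names, toy grammar, toy ontology) is an
# assumption for demonstration purposes only.

def invert_question(question_words):
    """Operation 1: convert the inverted (interrogative) structure of the
    prompt question into the direct structure of an answer sentence.
    Toy rule: drop the wh-word, keep subject-verb order, leave a slot."""
    wh_word, verb, subject = question_words        # e.g. ["where", "lives", "the cat"]
    return [subject, verb, "<ANSWER>"]

def find_coreferences(prompt_terms, text_terms, ontology):
    """Operation 2: determine which descriptions in the analyzed text are
    coreferential with descriptions in the prompt (toy criterion:
    identity or ontology-listed synonymy)."""
    matches = {}
    for p in prompt_terms:
        for t in text_terms:
            if p == t or t in ontology.get(p, set()):
                matches[p] = t
    return matches

def identify_answer_structure(answer_frame, text_facts, coref):
    """Operations 3-4: consult the coreference/ontology links to find a
    semantic structure in the text that fills the slot of the
    previously formed answer sentence."""
    subject = coref.get(answer_frame[0], answer_frame[0])
    for fact_subject, fact_value in text_facts:
        if fact_subject == subject:
            return [answer_frame[0], answer_frame[1], fact_value]
    return None

# Toy demonstration of the whole pipeline.
ontology = {"the cat": {"the animal"}}             # hypothetical frame/ontology link
frame = invert_question(["where", "lives", "the cat"])
coref = find_coreferences(["the cat"], ["the animal"], ontology)
answer = identify_answer_structure(frame, [("the animal", "in the barn")], coref)
# answer -> ["the cat", "lives", "in the barn"]
```

The sketch only shows the order and interfaces of the operations; in an actual neural network these steps are realized implicitly by attention weights rather than by explicit symbolic lookups.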
URI: https://dspace.ncfu.ru/handle/123456789/32630
Appears in Collections: Articles indexed in SCOPUS, WoS

Files in This Item:
File: WoS 2278.pdf (Restricted Access)
Size: 113.56 kB
Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.