Scientific discoveries by LLM methods: facts and interpretation
Research Article
How to Cite
Filimonov V.Y. Scientific discoveries by LLM methods: facts and interpretation. Humanities of the South of Russia. 2026. Vol. 15. No. 2. P. 161-172. DOI: https://doi.org/10.18522/2227-8656.2026.2.12 (in Russ.).
Abstract
Objective of the study. To analyze the levels at which the categories of “fact” and “interpretation” manifest themselves in large language models.
The methodological basis of the study includes T. Sejnowski's social-constructivism model, D.J. Chalmers's model of sensory thinking in artificial intelligence, V.N. Sokolchik's model of ethical patterns in artificial intelligence, and B.V. Orekhov's model of text interpretation in large language models.
Results of the study. The research systematizes the manifestations of fact and interpretation across the following roles of an LLM: interpreter of personal opinion, interpreter of a possible interpersonal discussion, and interpreter of scientific facts and case studies within information databases. The following problem areas in the development of the LLM as an interpreter, and in the conceptual framework for LLM-driven scientific discovery, are identified: difficulty generating content on rare or poorly studied phenomena; incorrect and fabricated bibliographic references; errors in elementary tasks; and independent coinage of new words untypical of native speakers.
Prospects of the study. The author develops approaches to a metaphor in which fact and interpretation serve as means of evaluating information obtained from large language models in various subjective forms, from personal interpretation to interpretations of large arrays of scientific data.
Keywords:
fact, interpretation, scientific discoveries, large language models, LLM, LLM methods
References
Bochkova A. A. Artificial intelligence: strategies and methods for solving complex problems. Nadezhnost = Reliability. 2025; 25 (1): 46–57. https://doi.org/10.21683/1729-2646-2025-25-1-46-57. (In Russ.)
Grebenshchikova E. G. Scientific publications in the era of artificial intelligence. Nauchno-tekhnicheskaya informatsiya. Seriya 1: Organizatsiya i metodika informatsionnoy raboty = Scientific and technical information. Series 1: Organization and methodology of information work. 2024; 11: 39–43. https://doi.org/10.36535/0548-0019-2024-11-5. (In Russ.)
Zaitsev D. V. Why do large language models not (always) reason like humans? Vestnik Moskovskogo universiteta. Seriya 7: Filosofiya = Moscow University Bulletin. Series 7: Philosophy. 2024; 48 (1): 76–93. https://doi.org/10.55959/MSU0201-7385-7-2024-1-76-93. (In Russ.)
Zakharova M. V. Intelligent assistants for scientific research in universities. Mir nauki. Pedagogika i psikhologiya = World of Science. Pedagogy and psychology. 2024; 12 (4). (In Russ.)
Kuzminov Ya., Kruchinskaya E. The potential of generative artificial intelligence for solving professional tasks. Forsayt = Foresight. 2024; 18 (4): 67–76. https://doi.org/10.17323/2500-2597.2024.4.67.76. (In Russ.)
Kuzminov V. G., Shvetsov A. A. Artificial intelligence: techno-linguistic phenomenon or something more? Innovatsionnyye tekhnologii v obrazovatel’noy deyatel’nosti: Materialy XXVI Mezhdunarodnoy nauchno-metodicheskoy konferentsii, Nizhniy Novgorod, 07 fevralya 2024 goda = Innovative technologies in educational activity: Proceedings of the XXVI international scientific and methodological conference. Nizhny Novgorod; 2024: 451–460. (In Russ.)
Orekhov B. V. Text and knowledge in the context of large language models. Istoricheskaya informatika = Historical informatics. 2023; 4 (46): 104–113. https://doi.org/10.7256/2585-7797.2023.4.44180. (In Russ.)
Sejnowski T. The deep learning revolution: the most important AI research over the past 60 years. Moscow: Bombora; 2022. 304 p. ISBN 978-5-04-101347-9. (In Russ.)
Sokolchik V. N. Open science and modern scientific publications: ethical requirements and new ethical problems. Trudy BGTU. Seriya 6: Istoriya, filosofiya = Proceedings of BSTU. Series 6: History, philosophy. 2024; 1 (281): 142–148. https://doi.org/10.52065/2520-6885-2024-281-27. (In Russ.)
Chernyak M. A., Morozova S. A. “Mirror, GPT, tell me…” or the phenomenon of literary text in the post-literary era. Mir russkogo slova = World of Russian language. 2024; 4: 50–62. https://doi.org/10.21638/spbu30.2024.406. (In Russ.)
Epstein M. The future of the humanities: techno-humanism, creatorics, erotology, electronic philology and other sciences of the XXI century. Moscow: RIPOL Classic / Pangloss; 2019. 239 p. ISBN 978-5-386-12499-1. (In Russ.)
Baryshnikov P. What is scientific knowledge produced by large language models? Philosophical Problems of IT & Cyberspace. 2024: 89–103. https://doi.org/10.17726/philIT.2024.1.6.
Bhattacharyya M., Miller V. M., Bhattacharyya D., Miller L. E. High rates of fabricated and inaccurate references in ChatGPT-generated medical content. Cureus. 2023; 15 (5): e39238. https://doi.org/10.7759/cureus.39238.
Chalmers D. J. Does thinking require sensory grounding? From the history of philosophy to artificial intelligence. PhilArchive. 2023: 22–45.
Liu Y., Nan Y., Xu W., Hu X., Ye L., Qin Zh., Liu P. AlphaGo Moment for Model Architecture Discovery. 2025. arXiv. — URL: https://arxiv.org/abs/2507.18074.
Si Ch., Yang D., Hashimoto T. Can LLMs Generate Novel Research Ideas? A Large-Scale Human Study with 100+ NLP Researchers. 2024. arXiv. — URL: https://arxiv.org/abs/2409.04109.
Zheng Y., Koh H. Y., Ju J., Nguyen A. T. N., May L. T., Webb G. I., Pan Sh. Large Language Models for Scientific Synthesis, Inference and Explanation. 2023. arXiv. — URL: https://arxiv.org/abs/2310.07984.
Zhu K., Zhang J., Qi Z., Shang N., Liu Z., Han P., Su Y., Yu H., You J. SafeScientist: Toward Risk-Aware Scientific Discoveries by LLM Agents. 2025. arXiv. — URL: https://arxiv.org/abs/2505.23559.
Article
Received: 13.12.2025
Accepted: 23.04.2026
Citation Formats
APA
Filimonov, V. Y. (2026). Scientific discoveries by LLM methods: facts and interpretation. Humanities of the South of Russia, 15(2), 161-172. https://doi.org/10.18522/2227-8656.2026.2.12
Section
INVESTIGATIONS OF YOUNG SCIENTISTS