The ability of large language models to reason logically

DOI: https://doi.org/10.46282/afi.2025.2.04

Keywords: artificial intelligence, logic, reasoning

Abstract
The rise of large language models (LLMs) has sparked a new wave of debate in legal scholarship about the extent to which these systems can replace human thinking, particularly logical reasoning. This article examines whether LLMs can consistently apply the basic principles of formal logic in the interpretation and subsumption of legal norms. Methodologically, the research is based on a series of experiments built around five types of typical logical errors in legal reasoning. Several models (ChatGPT-5, Gemini 2.5 Pro, Copilot, Grok 4, and Perplexity) were tested at two levels of input: lay and expert prompts. The results show recurring logical inconsistencies across all models, with more convincing outputs achieved only for explicitly formulated expert tasks. The findings underscore the fundamental difference between statistical text generation and actual legal reasoning, which requires logical deduction and transparency. In conclusion, while LLMs can be useful tools to support lawyers, they are not yet capable of fully replacing their work.
License
Copyright (c) 2025 Acta Facultatis Iuridicae Universitatis Comenianae

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.