Özgün Law Firm

EVALUATION OF ACTS COMMITTED THROUGH ARTIFICIAL INTELLIGENCE UNDER TURKISH PENAL CODE

1. Abstract

This article examines acts committed through artificial intelligence (AI) from the perspective of the Turkish Penal Code. Since AI is not recognized as a direct perpetrator, criminal liability is primarily attributed to human actors such as developers, users, and system owners. In light of international examples and recent developments, the legal challenges raised by AI-generated content—particularly by systems such as Grok—are analyzed. The study shows that the current provisions of the Turkish Penal Code do not adequately cover AI-specific acts, leaving significant normative gaps. Emphasis is placed on the need to enact effective legal regulations in this field and to align national standards with international norms.

Keywords: Artificial Intelligence, AI and Penal Code, Criminal Liability, Concept of Perpetrator, Intent and Negligence, Principle of Legality, AI-Generated Crimes, Legal Responsibility, Algorithmic Crimes, Grok AI, AI Content Generation.

2. Introduction

Artificial intelligence (AI), one of the most striking technological advancements of the 21st century, has permeated nearly every aspect of human life and led to revolutionary transformations across numerous sectors. AI systems, which are actively employed in fields such as healthcare, transportation, education, and finance, have evolved from mere auxiliary tools into autonomous actors that actively participate in decision-making processes. These technological developments not only impact the socio-economic structure but also directly affect legal systems.

Criminal law is fundamentally built upon a system of liability based on human will and culpability. Within this framework, the perpetrator of a crime is traditionally a natural person who acts with intent or negligence. The growing capacity of AI to make autonomous decisions, however, necessitates a reassessment of classical concepts of criminal law theory such as "perpetrator," "fault," "intent," and "negligence." The legal classification of an act committed by an AI system, the identification of the responsible party, and the determination of criminal liability remain questions without clear and definitive answers under the current legal paradigm.

The assessment of acts committed through AI raises significant normative and practical gaps and controversies in criminal law. This article aims to explore how AI should be approached within the framework of criminal law; to what extent, and to whom, criminal liability may be attributed for acts committed through such technologies; how the existing legal system responds to these acts; and in what ways it falls short.

3. The Concept of Artificial Intelligence and Its Legal Characterization

Artificial intelligence (AI) can generally be defined as systems that emulate human intelligence. These systems are capable of performing cognitive processes such as learning, reasoning, problem-solving, perception, and even language comprehension through algorithms. From a technical perspective, AI is often examined within the subfields of machine learning, deep learning, neural networks, and natural language processing. For the legal discipline, however, the primary concern lies in the decision-making capabilities of such technologies and in how those decisions are to be evaluated within the legal order.

AI systems may be classified according to their level of technological advancement as assistive systems, semi-autonomous systems, and fully autonomous systems. While assistive systems function with human intervention, fully autonomous systems possess the capacity to make and execute decisions independently of human control. It is primarily this latter category that gives rise to legal challenges. The fundamental issue is determining who bears responsibility for the decisions or acts executed by these systems—a matter that cannot be directly answered by traditional legal theories.

Under Turkish law, there is no explicit regulation concerning the legal status of artificial intelligence. Under Türkiye's civil law system, only natural and legal persons possess legal capacity. In this context, AI is regarded not as a legal subject but as a technical tool or object. This approach, however, has become insufficient as AI systems grow more complex and autonomous. Branches of law built upon the notions of culpability and intent—above all criminal law—face normative gaps as a result.

One of the most fundamental issues from a criminal law standpoint is whether conduct carried out by AI can be classified as an "act" (actus reus) in the legal sense. In criminal law, an act is generally defined as a voluntary human action. This raises numerous technical and ethical questions: whether AI can make independent decisions, to what extent such decisions are foreseeable, and how far they can be influenced or controlled. The legal characterization of AI therefore requires a multidisciplinary approach involving not only legal scholars but also ethicists, computer scientists, and sociologists.

Criminal law is a discipline grounded in fault-based liability attributable to human will. For an act to constitute a crime, it must be explicitly defined in law, the perpetrator must have acted with intent or at least negligence, and the act must be unlawful. The rise of AI technologies significantly challenges these core principles.

4. Artificial Intelligence in the Context of the Fundamental Principles of Criminal Law

4.1. The Principle of Legality and the Problem of Legal Certainty

One of the cardinal principles of criminal law is the principle of legality ("nullum crimen, nulla poena sine lege"). According to this principle, an act can be deemed criminal only if it has been clearly defined as such by law prior to its commission. AI systems, however, are capable of making unforeseeable decisions and may give rise to novel forms of conduct. If a driverless autonomous vehicle strikes a pedestrian, for instance, neither the applicable legal provision nor the liable party is clearly defined in the law. This conflicts with the principles of legal certainty and foreseeability that are essential to criminal justice.

4.2. The Concept of the Perpetrator and Capacity for Fault

Under criminal law, the perpetrator is the person who commits the act defined by law as a crime. The perpetrator must act with volition and awareness and possess the capacity for fault. AI, however, has neither legal personality nor any capacity for fault; in other words, it cannot be held liable as a perpetrator. At this point, the question arises whether liability should instead fall upon the programmer, the user, or the legal entity that owns the AI system.

4.3. Evaluation in Terms of Intent and Negligence

Intent refers to the perpetrator's knowing and willful commission of the act defined as a crime. Negligence, by contrast, involves a failure to exercise due care. AI systems can possess neither element, as they can neither form conscious decisions nor breach a duty of care. Nevertheless, if an AI system is known to malfunction in a foreseeable manner, users who continue to employ it may be held liable for negligence. Continuing to use a defective AI system despite knowledge of its flaws, for example, may carry significant legal consequences.

4.4. The Causal Link and the Problem of Foreseeability

Many AI systems operate as "learning systems" that evolve over time and adapt their decision-making processes. This feature makes it increasingly difficult to establish a causal link between an act and its outcome. From a criminal law perspective, causality requires a direct connection between the perpetrator's act and the criminal result. When an AI system autonomously learns and makes an erroneous decision, however, it remains unclear who should bear responsibility and to what extent.

5. Criminal Liability in Acts Committed Through Artificial Intelligence

The complex and autonomous nature of AI systems raises significant questions regarding who bears responsibility when unlawful acts are committed through such systems. According to the principle of individual criminal responsibility, liability arises for the person who personally commits an act that fulfills the elements of a crime. However, since AI cannot be regarded as a direct perpetrator, determining responsibility often requires a multifaceted analysis involving multiple actors.

5.1. Can Artificial Intelligence Be Considered a Perpetrator?

Lacking legal personality, artificial intelligence cannot hold the status of a perpetrator under criminal law. In the current legal framework, only natural and juridical persons can bear criminal liability. As an AI system cannot act with intent or negligence of its own, it cannot be subjected to criminal sanctions. AI is therefore treated as a tool—an instrument that facilitates the commission of a crime rather than an entity that commits it.

5.2. Responsibility of the Programmer

Programmers who design and code AI systems directly shape those systems' algorithms and decision-making capabilities. If the software contains code that enables or encourages criminal behavior, the programmer may be held liable. Particularly in cases involving deliberately introduced flaws, security vulnerabilities, or insufficient oversight, criminal liability may arise through either intent or negligence on the part of the developer.

5.3. Responsibility of the User

An individual who actively uses an AI system—whether as an employee or a private user—may bear criminal liability if they act based on the system’s outputs or provide data inputs that guide the system's behavior. If the user is in a position to foresee the unlawful outcomes generated by the system and nevertheless fails to intervene, they may be held liable for negligent conduct.

5.4. Responsibility of the Owner or Producing Company

The liability of companies that own or commercially distribute AI systems can be analyzed within the framework of corporate fault and organizational negligence. In particular, if due diligence is not exercised during development, or if the product is released despite known risks, criminal liability may be attributed to the juridical person or its executives.

5.5. Shared and Joint Liability

In some instances, criminal acts involving AI may implicate several actors rather than a single individual. For example, when a harmful result arises from the combined effect of programmer error, user misuse, and a company's failure to provide adequate oversight, responsibility may be shared among them. In such cases, not only the identity of each perpetrator but also the degree of their fault plays a significant role in sentencing.

6. Criminal Law Implications of AI-Based Content Generation: A Case Study of Grok

The implications of artificial intelligence for criminal law have become a pressing issue across many legal systems worldwide. In particular, the emergence of large language models such as Grok has triggered intense debate over legal responsibility, freedom of expression, and hate speech in the context of AI-generated content.

6.1. Recent Developments Concerning Grok

6.1.1. First Official Intervention in Türkiye:

The Grok case dramatically exemplifies the tension between technological advancement and legal accountability. The exploitation of jailbreak vulnerabilities in Grok to produce content involving hate speech, insults, and incitement to violence has exposed significant gaps in traditional criminal law frameworks. For the first time in Türkiye, an AI chatbot faced access restrictions and a potential criminal investigation: the Ankara Chief Public Prosecutor's Office imposed an access ban on Grok, citing insulting content targeting President Erdoğan, Atatürk, and religious values, and the Information and Communication Technologies Authority (BTK) enforced the decision with reference to around fifty flagged items. [1] [2] The incident serves as a socio-legal milestone, illustrating how far the definition of a "perpetrator" may be stretched.

6.1.2. Poland’s Complaint to the EU:

The Polish Minister for Digital Affairs announced plans to refer the matter to the European Commission, citing Grok’s generation of antisemitic and defamatory content concerning political figures such as Prime Minister Donald Tusk. [3]

6.1.3. Content Removal Following Hate Speech Allegations:

Grok also faced backlash after antisemitic content praising Adolf Hitler was disseminated through its account on the X platform. Following complaints by the Anti-Defamation League (ADL), the developer company xAI announced the removal of such content. [4]

7. Evaluation with Respect to the Application of the Turkish Penal Code

The advancement and increasing use of artificial intelligence technologies in Türkiye have introduced new challenges in the realm of criminal law. As of now, however, the Turkish Penal Code (TPC) contains no specific provisions directly addressing AI. This absence of regulation creates legal uncertainty as to the basis on which criminal liability is to be determined in concrete cases involving AI.

7.1. Existing Legal Framework and Its Limitations

The absence of specific AI provisions in the TPC means that responsibility must be assigned to human actors. Under the general provisions of the TPC—particularly those concerning intentional and negligent offenses—developers, users, or system owners may be held criminally liable for harmful or criminal acts committed through AI. The provisions on cybercrime (Articles 243–244 of the TPC), for example, may be applied indirectly to AI-related misconduct, although their scope remains narrow.

7.2. Normative Gaps and the Need for Regulation

Serious gaps remain regarding who qualifies as the perpetrator in AI-related offenses, how culpability is to be determined, the extent of AI's autonomous behavior, and the scope of responsibility. These deficiencies conflict with the principles of legal certainty and foreseeability and complicate the protection of victims' rights.

In both academic and practical circles in Türkiye, efforts to develop criminal law regulations specific to AI are growing. Priority areas should include:

- The introduction of specific offense types related to AI-based crimes,

- Clarification of the responsibilities of developers and users,

- Defining the criminal liability of platform and system owners.

8. Conclusion

Acts committed through artificial intelligence constitute a complex area that current criminal law frameworks do not adequately address. Since AI cannot be recognized as a perpetrator, liability is concentrated primarily on human actors such as developers, users, and system owners. The absence of specific provisions on this issue in the Turkish Penal Code, however, leads to legal gaps and uncertainty in practice.

Contemporary examples—such as legal controversies surrounding AI-generated content by models like Grok—demonstrate the need to reassess both the boundaries of freedom of expression and the foundations of criminal liability. It is of great importance for Türkiye to adopt new regulations specifically tailored to AI, in line with international standards, that establish clear accountability and oversight mechanisms.

In conclusion, the innovations brought about by artificial intelligence necessitate normative reforms in criminal law, updates in judicial practice, and the adoption of multidisciplinary approaches. Only through such developments can the benefits of AI be harnessed safely and legal justice be effectively maintained.

Efe Öztürk, Legal Intern


References:

1. https://www.aa.com.tr/tr/gundem/yapay-zeka-uygulamasi-grokun-paylasimlari-hakkinda-sorusturma-baslatildi/3625624

2. https://www.reuters.com/business/media-telecom/turkey-blocks-xs-grok-chatbot-alleged-insults-erdogan-2025-07-09/

3. https://www.reuters.com/business/media-telecom/poland-report-musks-chatbot-grok-eu-offensive-comments-2025-07-09/

4. https://www.theguardian.com/technology/2025/jul/09/grok-ai-praised-hitler-antisemitism-x-ntwnfb

5. Turkish Penal Code (Law No. 5237)

6. Turkish Civil Code (Law No. 4721)
