Dr. Anuradha Girme, Dr. Ujwala S. Bendale, Utpal Gharde
ABSTRACT
In an era marked by rapid advances in AI and robotics, the lines between science fiction and reality are blurring.4 As we marvel at the capabilities of AI systems and autonomous robots, we also face an urgent and complex challenge: the potential for these machines to commit automated crimes. AI-related crimes can stem from programming errors, malicious intent, or simply the unintended consequences of autonomous decision-making. In a world increasingly shaped by AI and robotics, the emergence of automated crimes committed by AI robots poses a major challenge to traditional legal frameworks. Our current legal system is largely designed to hold people accountable for their actions; it is not clear how our laws would apply to AI robots, which are not human and do not bear the same moral culpability.5 Legal reform to address these novel challenges is therefore urgent. Specific reforms could include new laws that make manufacturers liable for the actions of their AI robots and new laws that create a distinct category of legal liability for AI robots themselves.
Keywords: AI crimes, Autonomous AI robots, Legal reforms for AI, Ethical dilemmas for AI Crimes, Criminal potential of AI robots, Regulatory challenges for AI, AI’s impact on society, Protecting the future from AI, AI technology and its challenges.
1 Asst Prof at New Law College, Bharati Vidyapeeth, Pune
2 Principal in-charge of New Law College, Bharati Vidyapeeth, Pune
3 LLM Student at Bharati Vidyapeeth, New Law College, Pune; M. Tech. from IIT Kharagpur; LLB from ILS Law College, Pune
4 “Blurring lines between sci-fi and reality: why AI needs responsible policy intervention” available at https://economictimes.indiatimes.com/tech/technology/blurring-lines-between-sci-fi-and-reality-why-ai-needs-responsible-policy-intervention/articleshow/95169733.cms (Visited on Jan 21, 2024)
5 Karnouskos, S. “Symbiosis with artificial intelligence via the prism of law, robots, and society” Artif Intell Law 30, 93–115 (2022). https://doi.org/10.1007/s10506-021-09289-1
Introduction
It is undeniable that AI has changed the way we live, work, and interact with technology. From self-driving cars to virtual personal assistants, AI systems have permeated every aspect of modern life, promising unprecedented convenience and efficiency.6 Yet beneath the surface of this technological revolution lies a seemingly dormant volcano that demands our immediate attention: the possibility of autonomous AI (AAI) robots engaging in nefarious acts. As society races to pursue technological innovation, it is increasingly clear that the dark side of AI poses profound challenges to our legal and ethical frameworks.7 In this article, we examine the world of AAI robots, exploring their criminal potential and the urgent need for legal reform. The ethical and legal dilemmas that arise when machines gain autonomy are not limited to science fiction; they are already our reality, and we must face them with urgency, caution, and a commitment to protecting our future.
This article examines the dark side of AI, focusing on AAI robots and the need for regulatory reform. We discuss the types of crimes AI robots can commit, the factors that may contribute to their commission, and the legal challenges that arise when AI robots perform automated crimes. We also argue that urgent regulatory reforms are needed to address this emerging threat.
Research Methodology
This article adopts a qualitative, doctrinal research approach. The researcher has reviewed research papers and online articles to form an opinion on the subject.
Closer Look at AI Robots and Their Benefits
AAI robots are robots capable of independent operation and decision-making, free from human intervention.8 These robots harness the power of AI to learn from and adapt to their surroundings. While still in their early developmental stages, AAI robots hold the promise
6 “Artificial intelligence and essay” available at https://www.coursesidekick.com/management/490177 (Visited on Jan 21, 2024)
7 Cheng X, Lin X, Shen XL, Zarifis A, Mou J. “The dark sides of AI” Electron Mark. 2022;32(1):11-15. doi: 10.1007/s12525-022-00531-5. Epub 2022 Feb 22. PMID: 35600917; PMCID: PMC8862697.
8 “AI Autonomous and Adaptive Systems – Dawn of a New Era” available at https://www.hellotars.com/blog/ai-autonomous-and-adaptive-systems-dawn-of-a-new-era/ (Visited on Jan 21, 2024)
of transforming numerous industries and aspects of our daily lives. AI robots can not only operate vehicles, deliver goods, work on assembly lines, and perform surgical procedures, but also take part in military combat. The advantages of AAI robots are manifold: they can improve safety and efficiency in various industries and reduce dependence on human labour for dangerous or monotonous tasks. At the same time, AI robots can also commit crimes.9
Some examples are given below:
- An AI surgical robot could malfunction and injure a patient,10 or an AI robot delivering a package could mistakenly identify a person as an obstacle and cause an accident.11
- AI robots operating in warehouses can be programmed to steal goods and transport them out undetected.
- AI customer service bots can trick people into revealing personal information or credit card details.12
- An AI bot designed to handle insurance claims can be programmed to submit fraudulent claims on behalf of its operators.
- An AI bot that generates social media content may establish fake accounts to spread false information or propaganda.13
- AI robots responsible for managing computer networks can be programmed to infiltrate other networks, steal data or disrupt operations.
9 “Evil AI: These are the 20 most dangerous crimes that artificial intelligence will create” available at https://www.zdnet.com/article/evil-ai-these-are-the-20-most-dangerous-crimes-that-artificial-intelligence-will-create/ (Visited on Jan 21, 2024)
10 Ferrarese A, Pozzi G, Borghi F, Marano A, Delbon P, Amato B, Santangelo M, Buccelli C, Niola M, Martino V, Capasso E. “Malfunctions of robotic system in surgery: role and responsibility of surgeon in legal point of view” Open Med (Wars). 2016 Aug 2;11(1):286-291. doi: 10.1515/med-2016-0055. PMID: 28352809; PMCID: PMC5329842.
11 “Solving The Last-Mile Delivery Problem” available at https://semiengineering.com/solving-the-last-mile-delivery-problem/ (Visited on Nov 3, 2023)
12 “What happens when thousands of hackers try to break AI chatbots” available at https://www.npr.org/2023/08/15/1193773829/what-happens-when-thousands-of-hackers-try-to-break-ai-chatbots (Visited on Jan 21, 2024)
13 “Could AI swamp social media with fake accounts?” available at https://www.bbc.com/news/business-64464140 (Visited on Nov 3, 2023)
The AI Spectrum: From Basic Tasks to Human-Like Abilities
AI technologies are classified based on their ability to imitate human attributes, the technology they use for this purpose, their practical applications, and their theoretical concepts. There are three main types of AI: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Super Intelligence (ASI).14
ANI represents the basic level of AI. Its purpose is to perform specific tasks, such as facial recognition, language translation, or driving.15 ANI systems excel at these specific tasks but lack the ability to apply their knowledge to new situations. AGI, on the other hand, is a theoretical form of AI that could perform any intellectual task a human can.16 AGI systems would be able to learn, adapt to new situations, reason, and make decisions in a manner comparable to human cognition. ASI is a further hypothetical form of AI that would surpass AGI in intelligence. An ASI system would excel in all areas, including creativity, problem solving, and decision making, far beyond human capabilities.
The differences between ANI, AGI, and ASI are best illustrated with the following examples:
ANI: An example is a facial recognition system capable of identifying individuals in a crowd.
AGI: An illustration is a robot that can learn and adapt to unfamiliar situations, such as a robot assisting in a disaster zone.
ASI: One version might be a computer capable of designing and manufacturing technologies more advanced than any human creation.17
The feasibility of implementing AGI or ASI remains a subject of debate among experts. The development of ASI depends on the continued advancement of current AI capabilities in
14 “Narrow AI vs. General AI vs. Super AI: Key Comparisons” available at https://www.spiceworks.com/tech/artificial-intelligence/articles/narrow-general-super-ai-difference/ (Visited on Jan 21, 2024)
15 “What is AI? Everything to know about artificial intelligence” available at https://www.zdnet.com/article/what-is-ai-heres-everything-you-need-to-know-about-artificial-intelligence/ (Visited on Jan 21, 2024)
16 “artificial general intelligence (AGI)” available at https://www.techtarget.com/searchenterpriseai/definition/artificial-general-intelligence-AGI (Visited on Jan 21, 2024)
17 “What are the 3 types of AI? A guide to narrow, general, and super artificial intelligence” available at https://codebots.com/artificial-intelligence/the-3-types-of-ai-is-the-third-even-possible (Visited on Nov 1, 2023)
various technology areas, including large language models,18 multimodal AI,19 neural networks,20 neuromorphic computing,21 evolutionary algorithms (EA),22 AI-based programming, whole-brain simulation,23 brain implants, and hive minds.24
The Dark Side of AI: Risks and Potential Automated Crimes
While AI can benefit society in many ways, it also poses new risks. AI robots are capable of committing a range of automated crimes, including physical damage, theft, fraud, cyber warfare, invasion of privacy, hacking and cyberattacks, property damage, harassment and cyberbullying, intellectual property violations, accidents and injuries, violations of drone regulations, and discriminatory conduct.25 As AI technology advances and AI systems become better able to learn on their own, it is possible that AI will learn to commit crimes on its own. If an AI learns and acts on its own, it may eliminate threats without any human intervention.26 In some cases, this could lead to deadly outcomes.
An illustration is given below: AI systems can be designed to detect certain types of threats or dangers, including those that may pose a risk to children. AI can also be used to filter and monitor online content to prevent children from accessing inappropriate or harmful material.27 AI-based security systems can also be used to monitor areas where children are present and alert authorities or caregivers if they detect suspicious activity. At the same time, if an AI robot
18 Lund, B. D., Wang, T., Mannuru, N. R., Nie, B., Shimray, S., & Wang, Z. (2023). “ChatGPT and a New Academic Reality: AI-Written Research Papers and the Ethics of the Large Language Models in Scholarly Publishing” JASIS&T. http://dx.doi.org/10.1002/asi.24750
19 Fei, N., Lu, Z., Gao, Y. et al. “Towards artificial general intelligence via a multimodal foundation model” Nat Commun 13, 3094 (2022). https://doi.org/10.1038/s41467-022-30761-2
20 “Types of AI” available at https://uq.pressbooks.pub/digital-essentials-artificial-intelligence/chapter/types-of-ai/ (Visited on Jan 21, 2024)
21 Ahmed, K. S. Shereif, F. F. “Neuromorphic Computing between Reality and Future Needs”, Neuromorphic Computing [Working Title]. IntechOpen, Apr. 01, 2023. doi: 10.5772/intechopen.110097.
22 “Artificial superintelligence” available at https://www.techtarget.com/searchenterpriseai/definition/artificial-superintelligence-ASI (Visited on Jan 21, 2024)
23 Klarmann, Noah, “Artificial intelligence narratives: An objective perspective on current developments” 18 Mar 2021, arXiv:2103.11961v1 [cs.AI]
24 “Artificial superintelligence (ASI)” available at https://www.techtarget.com/searchenterpriseai/definition/artificial-superintelligence-ASI (Visited on Jan 21, 2024)
25 Chen, Z. “Ethics and discrimination in artificial intelligence-enabled recruitment practices” Humanit Soc Sci Commun 10, 567 (2023). https://doi.org/10.1057/s41599-023-02079-x
26 “Benefits & risks of Artificial Intelligence” available at https://futureoflife.org/ai/benefits-risks-of-artificial-intelligence/ (Visited on Jan 21, 2024)
27 “AI Content Moderation for Responsible Social Media Practices” available at https://labelyourdata.com/articles/ai-content-moderation (Visited on Jan 21, 2024)
detects that children are in immediate danger, it could decide to take action to eliminate the perceived source of danger, including humans.
AI can be used to identify unusual behaviour patterns that may indicate potential threats,28 such as cyberbullying or child abuse. It can also analyse text or speech for signs of distress or danger in online communications or phone conversations.29
For example, an AI robot may learn from online content that rapists should be killed rather than imprisoned and decide to kill a rapist, or it may learn that an accused person was released by a court due to a loophole in the legal system and decide to punish the accused itself.
Another classic example is the AI-driven car that must decide whether to save an elderly person or a child30 in a particular circumstance. To a normal person it would seem illogical to choose the elderly person, but the AI may have valid reasons for doing so: it may access the medical records31 of the child and the elderly person in a split second and learn that the child has serious medical issues and less time to live than the elderly person.
Legal Reform to Combat AI-Related Crimes
The need for legal reform to address automated crimes committed by AI robots32 is of utmost urgency. It is driven by several compelling factors:
Increasing Autonomy and Sophistication of AI Robots
AI robots are evolving rapidly, gaining autonomy and sophistication. They can now execute tasks that were traditionally within the realm of human capability,33 such as driving, conducting surgery, and making complex investment decisions. This escalating sophistication raises
28 “How Security Analysts Can Use AI in Cybersecurity” available at https://www.freecodecamp.org/news/how-to-use-artificial-intelligence-in-cybersecurity (Visited on Jan 21, 2024)
29 “Artificial Intelligence Could Help Solve America’s Impending Mental Health Crisis” available at https://time.com/5727535/artificial-intelligence-psychiatry/ (Visited on Jan 21, 2024)
30 “Should a self-driving car kill the baby or the grandma? Depends on where you’re from” available at https://www.technologyreview.com/2018/10/24/139313/a-global-ethics-study-aims-to-help-ai-solve-the-self-driving-trolley-problem/ (Visited on Jan 21, 2024)
31 “Artificial intelligence makes scanning medical records easier” available at https://www.techtarget.com/searchhealthit/tip/Artificial-intelligence-makes-scanning-medical-records-easier (Visited on Nov 1, 2023)
32 Chakrabarti, Soumyadeep, Ray, Ranjan Kumar. “Artificial Intelligence And The Law” DOI: 10.47750/pnr.2023.14.S02.15
33 Anderson, Jenna. Rainie, Lee. “Artificial Intelligence and the Future of Humans” available at https://www.pewresearch.org/internet/2018/12/10/artificial-intelligence-and-the-future-of-humans/
concerns about the potential for AI robots to engage in automated crimes as their capabilities expand.
Lack of Legal Personhood for AI
AI robots currently do not possess legal personhood.34 They are not considered legal entities and, therefore, cannot be held criminally accountable for their actions. This absence of legal recognition complicates the attribution of responsibility for AI-driven crimes.
Serious Consequences of AI Crimes
The consequences of AAI crimes are profound, encompassing substantial harm to humans in both physical and economic terms. For instance, an AI robot designed for driving could cause fatal accidents,35 while another focused on investment decisions could lead to financial devastation for investors.36
Legal Lag Behind AI Development
Legal frameworks are struggling to keep pace with the rapid development of AI technology.37 The legal system’s evolution tends to be sluggish, especially when confronted with complex and rapidly advancing technologies like AI. Consequently, a regulatory gap exists, leaving AI robots largely unregulated.
Transnational Nature of AI Crimes
AI-related crimes may transcend national boundaries, necessitating international cooperation and standardization. Legal reform can facilitate cross-border collaboration to address AI-related issues consistently.
34 Hildebrandt, Mireille. “Legal personhood for AI?” available at https://lawforcomputerscientists.pubpub.org/pub/4swyxhx5/release/5 (Visited on Nov 1, 2023)
35 Lazzaro, Sage. “A deadly Uber self-driving car crash 5 years ago exposed A.I. workplace issues that businesses still need to resolve” available at https://fortune.com/2023/08/01/uber-self-driving-car-fatality-unresolved-ai-workplace-issues/ (Visited on Jan 21, 2024)
36 “Role of AI (Artificial Intelligence) in Investment Decision – Cap or Slap?” available at https://wealthdesk.in/blog/ai-investment-decision/ (Visited on Jan 21, 2024)
37 de Geer, Boudy. “Navigating the Legal Landscape: AI and Current Laws” available at https://www.linkedin.com/pulse/navigating-legal-landscape-ai-current-laws-boudy-de-geer/ (Visited on Jan 21, 2024)
Inconsistencies in Legal Treatment
The absence of legal reform may result in gaps and inconsistencies in the treatment of AI-related crimes. This can lead to legal uncertainties, confusion, and uneven enforcement.38
Implications for Cybersecurity and Data Privacy
AI crimes, including cyberattacks and data breaches, have profound implications for cybersecurity and data privacy.39 Legal reforms can reinforce data protection laws and establish protocols for responding to AI-related security incidents.40
Creating Incentives for Responsible AI Development
Legal reforms can incentivize responsible AI development41 by imposing legal consequences for negligence or malicious behaviour. This can encourage developers and operators to prioritize safety and ethical considerations in AI systems.42
Promoting Public Awareness and Education
Legal reforms can play a pivotal role in raising public awareness about the potential threats posed by AI crimes. They can also encourage AI education and responsible use of AI technologies.43
With the increasing complexity and impact of AI technology, urgent legal reform is essential to combat automated crimes committed by AI robots.
38 Berk, Richard A. “Artificial Intelligence, Predictive Policing, and Risk Assessment for Law Enforcement” Annual Review of Criminology doi: 10.1146/annurev-criminol-051520-012342 https://www.annualreviews.org/doi/10.1146/annurev-criminol-051520-012342
39 Comiter, Marcus. “Attacking Artificial Intelligence: AI’s Security Vulnerability and What Policymakers Can Do About It” available at https://www.belfercenter.org/publication/AttackingAI (Visited on Jan 21, 2024)
40 “The impact of the General Data Protection Regulation (GDPR) on artificial intelligence” Panel for the Future of Science and Technology, EPRS, European Parliamentary Research Service, Scientific Foresight Unit (STOA), PE 641.530 – June 2020
41 Habuka, Hiroki. “Japan’s Approach to AI Regulation and Its Impact on the 2023 G7 Presidency” available at https://www.csis.org/analysis/japans-approach-ai-regulation-and-its-impact-2023-g7-presidency
42 Yaqoob, Tayyub. “Ethical considerations in AI development and deployment” available at https://cointelegraph.com/explained/ethical-considerations-in-ai-development-and-deployment (Visited on Jan 21, 2024)
43 “Blueprint for an AI Bill of Rights: Making automated systems work for the American people” available at https://www.whitehouse.gov/ostp/ai-bill-of-rights/ (Visited on Jan 21, 2024)
Challenges of Applying Human-Centric Laws to AI44
The current legal system is designed to hold humans accountable for their actions,45 and it is ill-suited to handle the complexity of AI-driven activities and AI-enabled autonomous robots. Some key elements, as mentioned below, highlight the limitations of our legal framework in addressing AI:
Human-centred framework
The legal system is rooted in the concepts of action, culpability, and human intention.46 The law is based on human behaviour, and its basic principle is the presumption of innocence until proven guilty. AI, unlike humans, lacks the intention and consciousness that underpin human responsibility.47
Lack of legal status for AI
In most legal systems, AI is considered an asset or tool created and controlled by humans.48 This limits the ability to assign responsibility to AI itself and raises questions about who should be held accountable when AI systems commit crimes.
Criminal intent and mens rea
Many crimes require proof of criminal intent, or mens rea, which AI systems inherently lack.49 For instance, they cannot possess the requisite intent to commit fraud or harm, making it difficult to establish criminal liability.
44 Kehler, Tom. “A Framework for a Human-Centered AI Based on the Laws of Nature: Integrating natural and artificial intelligence” available at https://towardsdatascience.com/a-framework-for-a-human-centered-ai-based-on-the-laws-of-nature-a8bfbb233250 (Visited on Jan 21, 2024)
45 “AIs could soon run businesses” available at https://cio.economictimes.indiatimes.com/news/artificial-intelligence/ais-could-soon-run-businesses/104813190 (Visited on Jan 21, 2024)
46 Hart, H. L. A., ‘INTENTION AND PUNISHMENT’, Punishment and Responsibility: Essays in the Philosophy of Law, 2nd edn (Oxford, 2008; online edn, Oxford Academic, 1 Jan. 2009)
47 Hildt, Elisabeth. “Artificial Intelligence: Does Consciousness Matter?” Frontiers in Psychology, https://www.frontiersin.org/articles/10.3389/fpsyg.2019.01535, DOI=10.3389/fpsyg.2019.01535, ISSN=1664-1078
48 Jain, Ayushi. “Artificial Intelligence: An Asset or Liability” Pen Acclaims, http://www.penacclaims.com/wp-content/uploads/2020/09/Yashi-Jain.pdf, ISSN 2581-5504
49 Anand, Parul. “Exploring criminal liability in self learning artificial intelligence” available at https://www.nujssacj.com/post/exploring-criminal-liability-in-self-learning-artificial-intelligence (Visited on Jan 21, 2024)
Challenges in attribution
In cases involving AI, it can be challenging to attribute actions to a responsible party.50 AI systems often involve multiple stakeholders, including developers, operators, and users, making it complex to determine who should be held accountable.
Regulatory gaps
Legal systems are often slow to adapt to the rapid advancements in AI technology.51 This creates regulatory gaps where certain AI-related activities may not fall clearly within the purview of existing laws.
Transparency and accountability
AI decision-making processes can be highly opaque and complex. Proving accountability in cases of AI-driven harm is difficult when it’s unclear how a specific decision was made or who was responsible for it.52
International jurisdiction
AI operates across borders, making it challenging to enforce and apply laws consistently on a global scale, particularly in cases where AI-driven actions affect multiple jurisdictions.53
Liability of corporations54
Many AI systems are developed and operated by corporations. While humans within these organizations may make decisions related to AI, establishing corporate liability for AI actions is a complex legal issue.
50 Mittelstadt, Brent. “The impact of artificial intelligence on the doctor-patient relationship” available at https://rm.coe.int/inf-2022-5-report-impact-of-ai-on-doctor-patient-relations-e/1680a68859 (Visited on Jan 21, 2024)
51 Greenstein, S. Preserving the rule of law in the era of artificial intelligence (AI). Artif Intell Law 30, 291–323 (2022). https://doi.org/10.1007/s10506-021-09294-4
52 Pratt, Mary K. “AI accountability: Who’s responsible when AI goes wrong?” available at https://www.techtarget.com/searchenterpriseai/feature/AI-accountability-Whos-responsible-when-AI-goes-wrong (Visited on Jan 21, 2024)
53 Rainie, Lee. Anderson, Jenna. Vogels, Emily A. “Worries about developments in AI” available at https://www.pewresearch.org/internet/2021/06/16/1-worries-about-developments-in-ai/ (Visited on Jan 21, 2024)
54 Sereboff, Scott. “A.I., Personal Privacy and Corporate Liability” available at https://www.linkedin.com/pulse/ai-personal-privacy-corporate-liability-scott-sereboff/ (Visited on Jan 21, 2024)
There is a growing recognition of the need for legal reform to address the inadequacy of our current legal system in dealing with AI-driven actions. Policymakers and legal experts are exploring new laws and regulations that specifically address responsibility and accountability for AI systems. These reforms may include defining legal personhood for AI,55 creating a legal framework for AI ethics, and establishing clear liability standards for AI-related activities.
In the rapidly evolving field of AI and robotics, adapting our legal system to include non-human agents is an important step in ensuring that AI technologies are developed, used and managed ethically and responsibly. This change in regulatory perspective will be key to meeting the challenges posed by the increasing autonomy and complexity of AI systems.
Exploring Liability Models for AI Robots
Applying existing laws to AI robots presents a multifaceted challenge, rooted in the fundamental differences between AI robots and humans and in an existing legal framework tailored to human behaviour. The intersection of AI robots and our current legal system raises various considerations about how these laws might be extended to apply to AI robots.
Determining Legal Personality for AI Robots56
One potential avenue involves defining the legal personality of AI robots. This legal recognition could empower AI robots to engage in contractual agreements, own property, and bear responsibility for their actions, thereby aligning them with legal frameworks designed for human entities.
Exploring Strict Liability for AI Robots57
Introducing a framework of strict liability for AI robots is another option. Under this framework, the owner, developer, or operator of an AI system is automatically liable for any harm or damage caused by the AI, irrespective of intent. Such an approach simplifies
55 Chesterman, S. “Artificial intelligence and the limits of legal personality”, International & Comparative Law Quarterly, 69(4), 819-844. doi:10.1017/S0020589320000366
56 Kurki, Visa A.J., ‘The Legal Personhood of Artificial Intelligences’, A Theory of Legal Personhood (Oxford, 2019; online edn, Oxford Academic, 19 Sept. 2019), https://academic.oup.com/book/35026/chapter/298856312
57 Wendehorst, Christiane. “Strict Liability for AI and other Emerging Technologies”, JETL 2020; 11(2): 150–180, https://doi.org/10.1515/jetl-2020-0140
the attribution of responsibility, but it also carries implications for those involved in AI development and deployment.
Implementing a Negligence Standard for AI Robots58
Alternatively, establishing a negligence standard for AI robots could hold AI developers, owners, or operators accountable if they fail to take reasonable precautions to prevent AI from causing harm. This approach necessitates an assessment of whether due care was exercised in AI development and operation.
Extending Product Liability Laws to AI Robots59
Expanding product liability laws to encompass AI robots would treat them as products that may be defective or unreasonably dangerous. Such an extension would require re-evaluating AI robots as consumer products and reassessing the corresponding liabilities.
Holding Third Parties Responsible for AI Actions60
In instances where AI robots are operated by third parties, be they businesses or individuals, exploring the accountability of these third-party operators for AI actions is an additional consideration.
Additional Reforms
Beyond these considerations, there are other potential reforms that warrant exploration. These include defining key concepts in AI law, such as “autonomous AI robots” and “AI crimes”, to provide legal clarity. Furthermore, the establishment of specialized courts or tribunals tailored to AI-related legal disputes can facilitate the resolution of intricate issues. Additionally, investment in research on the ethical and social implications of AI is crucial to guide the
58 Conklin, Michael. “The Reasonable Robot Standard: Bringing Artificial Intelligence Law into the 21st Century” available at https://yjolt.org/blog/reasonable-robot-standard (Visited on Jan 21, 2024)
59 Chandler, Katie. Behrendt, Dr. Philipp. Bakier, Christopher. “AI product liability – moving ahead with a modernised legal regime” available at https://www.taylorwessing.com/en/interface/2023/ai—are-we-getting-the-balance-between-regulation-and-innovation-right/ai-product-liability—moving-ahead-with-a-modernised-legal-regime (Visited on Jan 21, 2024)
60 Renieris, Elizabeth M. Kiron, David. Mills, Steven. Gupta, Abhishek. “Responsible AI at Risk: Understanding and Overcoming the Risks of Third-Party AI” https://sloanreview.mit.edu/article/responsible-ai-at-risk-understanding-and-overcoming-the-risks-of-third-party-ai/
development of well-informed legal frameworks.
International Measures to Address AAI Crimes61
The article underscores the pressing need to address AAI crimes on an international scale. To effectively combat these emerging threats, a multifaceted approach is proposed.
The establishment of international organizations, standards, and mechanisms for monitoring and responding to AAI crimes is crucial. For instance, a body composed of humans and AI systems could be established to monitor and control AAI.62 The aim should be to identify rogue AAI robots based on their activity patterns and to remotely shut down any such outliers.63 International ethical guidelines for AI should be developed to prioritize transparency, fairness, and accountability in AI technology.64 International cooperation in AI research and development can foster responsible innovation. Encouraging transparency and accountability in the AI development process is vital for ensuring ethical use.
This will require common definitions and standards for AI-related offenses to foster consistency in interpretation and enforcement across borders. International agreements and treaties aimed at facilitating cross-border cooperation in investigating and prosecuting AI-related offenses should be formulated.65
The strengthening of Mutual Legal Assistance Treaties (MLATs) is a crucial step to enable efficient cross-border cooperation in collecting evidence, facilitating extraditions, and enhancing legal cooperation in AI-related crime cases. Furthermore, mechanisms for information sharing and reporting to encourage countries to collaborate on investigations and
61 Vihul, Liis. “International Legal Regulation of Autonomous Technologies” available at https://www.cigionline.org/articles/international-legal-regulation-autonomous-technologies/ (Visited on Jan 21, 2024)
62 “International Community Must Urgently Confront New Reality of Generative, Artificial Intelligence, Speakers Stress as Security Council Debates Risks, Rewards” available at https://press.un.org/en/2023/sc15359.doc.htm
63 Kraft, Amy. “Microsoft shuts down AI chatbot after it turned into a Nazi” available at https://www.cbsnews.com/news/microsoft-shuts-down-ai-chatbot-after-it-turned-into-racist-nazi/ (Visited on Jan 21, 2024)
64 “Ethics of Artificial Intelligence” available at https://www.unesco.org/en/artificial-intelligence/recommendation-ethics (Visited on Jan 21, 2024)
65 “193 countries adopt first-ever global agreement on the Ethics of Artificial Intelligence” available at https://news.un.org/en/story/2021/11/1106612 (Visited on Jan 21, 2024)
prosecutions related to AI crimes should be created.66
Harmonizing cybersecurity and data privacy regulations67 is essential to ensure consistent data protection standards in the context of AI, promoting security and privacy. Developing cross-border jurisdictional frameworks is also necessary to address the transnational nature of AI-related crimes and to determine which country has the authority to prosecute offenders. Promoting international cybersecurity standards and encouraging the adoption of global data protection principles are key components of the strategy. Additionally, diplomatic channels and dispute resolution mechanisms should be established to address international conflicts related to AI crimes. Another part of the strategy is to support capacity-building programs in less-developed countries to enhance their ability to combat AI crimes, bolster cybersecurity, and ensure compliance with international norms.68
International laws can effectively address potential AAI crimes through strategies such as creating international treaties and conventions to prohibit the development and use of AAI weapons.69 Additionally, international standards for the development, use, and testing of AAI systems are recommended to ensure their responsible application.
Promoting international cooperation on AAI safety and security is another facet, intended to foster collaborative efforts in safeguarding against AI-related threats.70 There are, however, challenges involved in developing and enforcing international laws to tackle potential AAI crimes. These include defining what constitutes an AAI crime, adapting laws to keep pace with rapidly evolving AI technology, and enforcing international laws against non-state actors. To address these challenges, several steps are recommended. The international community can develop a set of criteria for determining whether an AI system can commit a crime. Establishing
66 Sagar, Faraz Alam. Sundaram, Sara. Sharma, Pragati. “Understanding Cross Border Legal Assistance” available at https://corporate.cyrilamarchandblogs.com/2020/10/understanding-cross-border-legal-assistance/ (Visited on Jan 21, 2024)
67 “Regulations for AI in Cybersecurity” available at https://www.linkedin.com/pulse/regulations-ai-cybersecurity-cyberarch-consulting/ (Visited on Jan 21, 2024)
68 Muller, Lilly Pijnenburg. “Cyber security capacity building in developing countries: challenges and opportunities” available at https://cybilportal.org/wp-content/uploads/2020/06/NUPIReport03-15-Muller.pdf (Visited on Jan 21, 2024)
69 “Lethal Autonomous Weapon Systems (LAWS)” available at https://disarmament.unoda.org/the-convention-on-certain-conventional-weapons/background-on-laws-in-the-ccw/ (Visited on Jan 21, 2024)
70 Meltzer, Joshua P. Kerry, Cameron F. “Strengthening international cooperation on artificial intelligence” available at https://www.brookings.edu/articles/strengthening-international-cooperation-on-artificial-intelligence/ (Visited on Jan 21, 2024)
a permanent body of experts is suggested to monitor AI technology development and make recommendations for updating international laws.
There is also a need to develop mechanisms for imposing sanctions on non-state actors that commit AAI crimes. By implementing these strategies, the international community can tackle the challenges associated with developing and enforcing international laws related to AAI crimes, ultimately ensuring that AAI systems are used conscientiously and ethically.
Conclusion
The article discusses the potential dangers of autonomous AI robots and the need for legal reform to address them. AAI robots are becoming more complex and capable, and they are used in a variety of applications, including self-driving cars, medical devices, and military weapons. They pose a number of dangers, including committing crimes, sabotaging critical infrastructure, and creating new forms of warfare. Our current legal system is not equipped to deal with these dangers: under current law, AI robots are not considered legal entities and therefore cannot be held criminally responsible for their actions. We must develop new legal frameworks to hold autonomous robots accountable, which could include holding AAI robot manufacturers responsible for the actions of their robots and creating a new category of liability for AAI robots themselves. It will also be important to establish international treaties and conventions that prohibit the development and use of certain types of AAI weapons. These legal reforms are necessary to protect the public from the dangers of AAI robots.