Liability for damages resulting from AI systems

Published on: December 27, 2023

In the contemporary world, the rise of Artificial Intelligence (“AI”) systems has driven innovation across various sectors at a pace that was unimaginable until very recently. However, this rapid evolution also brings numerous challenges that need to be evaluated and discussed. As the use of AI becomes increasingly common, questions arise about liability for damages arising from the use of these systems.

According to Tim Cook, Apple's CEO: “What we must all do is make sure that we are using AI in a way that benefits society, not that deteriorates it”.

This article will explore legal, ethical, and practical aspects of this topic.

Importance of Artificial Intelligence

Increasingly, our society is immersed in a digital environment, where technology plays a fundamental role in our personal and professional lives. In this scenario, AI emerges as a transformative force, changing the way we interact with the world around us. AI is present everywhere, from virtual assistants on our smartphones to advanced algorithms that drive business decisions.

Whether through the automation of routine tasks, the personalization of product recommendations, or the optimization of industrial processes, AI has the potential to significantly improve efficiency and convenience in a variety of areas. On a personal level, one example is the growing integration of virtual assistants and voice recognition technologies in our homes, simplifying daily tasks and providing a more seamless experience with technology.

In the professional context, AI can play an important role in sectors such as finance, health, manufacturing, and others. Advanced data analysis and machine learning systems are being used to make complex decisions, anticipate market trends, and optimize supply chains. The automation of repetitive processes frees up human resources for more strategic and creative activities.

However, because AI is a very recent reality that still lacks specific regulation, many challenges need to be mapped and studied. Among them are those related to legal liability for damages caused by failures or inadequate decisions of these systems.

Responsibility in Artificial Intelligence

Responsibility, in its broadest sense, is the legal obligation to be held accountable for actions, decisions, or impacts resulting from a specific activity. It is the recognition that individuals or entities have ethical and legal duties associated with their actions and that they must bear their consequences, whether positive or negative. Responsibility serves as a foundation for social and legal order, ensuring balance between freedom of action and the need to protect individual and collective interests.

In contexts where AI is used, accountability becomes a challenge due to the autonomous and complex nature of these systems, which can make decisions without direct human intervention. Accountability may attach both to human actions in the development and deployment of the systems and to the impacts of their autonomous actions. This means that everyone involved in an autonomous decision of an AI system, from its developers to its end users, has an obligation to understand and manage the risks associated with the use of that system.

Determining accountability when an action is performed by an AI system is a complex challenge. The decentralized and autonomous nature of AI blurs the line of responsibility.

Risks and challenges in AI systems

First of all, it is worth highlighting the main risks and challenges that arise with the frequent use of AI systems.

Algorithmic bias

It manifests as discriminatory patterns in the decisions made by these systems, which may result in unequal treatment based on characteristics such as race, gender, or other attributes. An example is hiring processes conducted by AI, where algorithms may inadvertently favor candidates of a certain gender, amplifying existing disparities, as happened with Amazon. This occurs because AI trains itself on historical data, reproducing discriminatory behaviors that already exist in our society. The presence of bias compromises equity and raises ethical and legal questions about responsibility for the consequences of these decisions.
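
To make this concrete, here is a minimal, illustrative sketch in Python (with entirely hypothetical hiring data) of how an audit might flag this kind of bias using the "four-fifths rule" common in disparate-impact analysis:

```python
from collections import Counter

# Hypothetical screening outcomes: (group, approved) pairs.
# In a real audit, these would come from the system's decision logs.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = Counter(group for group, _ in decisions)
approvals = Counter(group for group, approved in decisions if approved)

# Selection rate per group: the share of candidates the system approved.
rates = {group: approvals[group] / totals[group] for group in totals}
print("selection rates:", rates)

# Four-fifths rule: flag any group whose rate falls below 80% of the best rate.
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"possible adverse impact against {group}: {rate:.0%} vs {best:.0%}")
```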

Lack of transparency

It is often difficult to fully understand how an algorithm reaches its decisions, which becomes a barrier to explaining the reasoning behind the system's actions. A famous case reflecting this aspect involved an AI created by Facebook that developed its own language, unintelligible to humans, to communicate with another bot. This opacity can generate distrust among users and stakeholders, who may hesitate to rely on systems whose operation is obscure. In addition, a lack of transparency is an obstacle to identifying and resolving biases, errors, or unwanted behavior, making it difficult to implement corrections and improvements effectively.
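
One practical response is to probe the system from the outside. The sketch below (in Python, with an invented stand-in model and invented inputs) shows a crude sensitivity analysis: vary one input at a time and record how the output moves, producing at least a documented account of which factors drive a decision:

```python
# Treat the model as an opaque function and probe it from the outside.
# This model is an invented stand-in; an auditor would not know its weights.

def opaque_model(features):
    return 0.6 * features["income"] + 0.3 * features["tenure"] - 0.4 * features["debt"]

baseline = {"income": 1.0, "tenure": 0.5, "debt": 0.8}
base_score = opaque_model(baseline)

# Nudge each input by 10% and record how much the score moves.
for name in baseline:
    probe = dict(baseline)
    probe[name] *= 1.10
    delta = opaque_model(probe) - base_score
    print(f"{name:>7}: score change {delta:+.3f}")
```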

Autonomous decision-making

AI systems are capable of making choices without human intervention. Determining who is responsible for these decisions and how they are held to account for potential errors is a major challenge. This demonstrates the urgent need to develop legal and ethical structures that clearly define responsibilities in autonomous decision-making scenarios.

Legislation and regulations

In this context, the creation of legislation and regulations that establish clear rules for the responsible use of AI is fundamental. In Brazil, there is still no specific law approved regulating the issue. However, there are some legal regulations that are applicable to the context of harm caused by AI.

General Data Protection Law - LGPD

It is the federal law that establishes rules for the processing of personal data, including those collected by an AI system. The law ensures fundamental rights, such as the right to privacy, transparency, and non-discrimination. The LGPD imposes specific obligations on the collection, storage, and sharing of personal data, ensuring that AI systems respect the privacy of individuals.

Consumer Protection Code - CDC

It establishes specific rules for consumer protection. These rules can be applied to protect consumers of AI systems in the event of a violation of legislation. The CDC also ensures that products and services that use AI systems meet quality and safety standards, protecting consumers from abusive practices and potential harm.

Brazilian Civil Code - CCB

It establishes civil liability rules, which include liability for damages caused by products and services. These rules can be applied to hold developers and users of AI systems responsible for damages caused. This guarantees legal protection to deal with adverse consequences resulting from algorithmic decisions or AI system failures.

Federal Constitution of Brazil - CFB

It is the fundamental document that establishes the fundamental rights of citizens and can be applied to protect individuals from AI systems that violate those rights. It serves as a guide to ensure that the development and use of technologies respect constitutional principles, including dignity, privacy, and equality.

Ordinance No. 271/2020 of the CNJ

It regulates the use of AI in the Judiciary, determining that responsibility for its use is shared between the bodies that create and maintain the AI models and their users, which reflects a concern with managing the risks and responsibilities associated with AI in the judicial context.

CNJ Resolution No. 332/2020

It addresses ethics, transparency, and governance in the production and use of AI in the Judiciary. It establishes that responsibility for the use of AI in the Judiciary is shared between Courts, developers, and users of AI systems. This resolution highlights the importance of ethical standards and transparent practices in the implementation of AI-based technologies in the judicial sector.

Bill No. 2,338/2023

It is still pending before the National Congress, where it was introduced in the Federal Senate. The bill provides for the regulation of AI use in Brazil and establishes that responsibility is shared among developers, operators, and users. If approved, it will contribute significantly to the creation of an updated legal framework for the use of AI in Brazil.

It can be seen, therefore, that the trend is to regulate liability for damages resulting from the use of AI systems on a shared basis, so that each party that contributed in any way to the system's action bears a share of the responsibility.

Responsibility of AI in practice

The following cases illustrate the increasing complexity and challenges associated with the use of AI in our daily lives.

Judicial Procedure before the Federal District and Territories Court of Justice - Case No. 0720848-94.2020.8.07.0001

In August 2023, the 8th Civil Panel of the Federal District and Territories Court of Justice ordered the developers of an Artificial Intelligence designed to carry out financial investments, together with the brokerage firm and the brokerage agency, to compensate the plaintiffs for material damages in the amount invested. The planned financial transactions were never carried out, resulting in significant losses for the plaintiffs. The decision was based on the Consumer Protection Code and the Civil Code and highlights the responsibility of developers and intermediaries in the implementation and operation of AI systems, especially when third-party financial resources are involved.

CNJ Investigation

An investigation conducted by the National Justice Council (“CNJ”) revealed that a federal judge assigned to Acre was using ChatGPT to draft his judgments, citing false case law attributed to the Superior Court of Justice (“STJ”). This case highlights the challenge of transparency and ethics in the use of AI in the professional environment. The situation raises questions about the verification and validation of information generated by AI systems, as well as about the responsibility of the operator (in this case, the judge) in the correct use of the technology.

Non-existent case law

In the United States, a lawyer cited non-existent case law in his client's defense against Avianca and was reprimanded by the judge. This example underscores the risks of uncritical reliance on AI systems and the importance of due diligence by professionals when using information generated by them. It also reinforces the need for transparency and accountability in the professional use of AI, and the importance of validating the sources of AI-generated information before relying on it.

AI systems offer real benefits, such as automation and efficiency. However, their creation, implementation, and use demand clear regulation, ethical responsibility, and transparency.

How companies can protect themselves when using AI

The responsible incorporation of Artificial Intelligence into business operations requires strategic analysis to mitigate risks and ensure ethical and legal compliance. Below, we highlight some steps companies can take to protect themselves when using AI.

Policies and Procedures on the Use of AI

Establishing clear policies and procedures regarding the use of AI is critical. The objectives must be precisely defined, identifying the expected benefits and the associated risks. This includes the specification of security measures to be adopted, such as encryption and access control, in addition to the delimitation of the responsibility of those involved, such as developers and end users. Creating transparent policies fosters trust and ensures that the implementation of AI is aligned with the company's values and objectives.

Documenting the use of AI

Documenting the use of AI is an essential practice to ensure transparency and accountability. This involves specifying the data used for training and operating the algorithms, clearly describing the algorithms applied, and recording the decisions made by the AI systems. Documentation facilitates audits and reviews and is important for answering ethical or legal questions that may arise regarding the use of AI.
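
As an illustration, a minimal Python sketch of such decision logging (the file name, field names, and model version are hypothetical); each record is timestamped and hashed so that later audits can detect tampering:

```python
import datetime
import hashlib
import json

def log_decision(model_version, inputs, output, path="ai_decisions.jsonl"):
    """Append one AI decision to an append-only audit log (JSON Lines)."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    # The hash binds the record to its exact content, so later edits are detectable.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode("utf-8")
    ).hexdigest()
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Hypothetical usage: record a credit decision made by an AI system.
log_decision("credit-model-v1.3",
             {"income": 4200, "tenure_months": 18},
             {"approved": False, "score": 0.31})
```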

Purchase appropriate insurance

Contracting appropriate insurance is an important strategy to protect companies against harm resulting from AI systems. Such policies may cover legal liability, financial damages, or other losses caused by algorithm failures, mistaken autonomous decisions, or privacy violations. Tailoring the policy to the company's specific AI-related risks ensures effective coverage and provides an additional layer of financial protection.

Monitoring and control mechanisms

Implementing monitoring and control mechanisms is important to mitigate risks related to the use of AI. This includes detecting flaws in AI systems, monitoring performance and behavior, and identifying potential cases of misuse or algorithmic discrimination. Control mechanisms allow for rapid intervention in problematic situations and the implementation of continuous improvements to the algorithms to ensure compliance and effectiveness.
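
A minimal sketch of one such mechanism (in Python, with hypothetical reference values): compare the system's live approval rate against the rate observed during validation and raise an alert when it drifts beyond a set tolerance:

```python
REFERENCE_RATE = 0.42  # approval rate observed during validation (hypothetical)
TOLERANCE = 0.10       # alert if the live rate drifts more than 10 points

def check_drift(recent_outcomes):
    """recent_outcomes: list of booleans, True meaning the system approved."""
    if not recent_outcomes:
        return
    live_rate = sum(recent_outcomes) / len(recent_outcomes)
    if abs(live_rate - REFERENCE_RATE) > TOLERANCE:
        # In production this would notify the responsible team, not just print.
        print(f"ALERT: live approval rate {live_rate:.0%} "
              f"deviates from the {REFERENCE_RATE:.0%} reference")

# Hypothetical recent decisions: 2 approvals out of 8 -> triggers the alert.
check_drift([True, False, False, False, False, False, True, False])
```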

Conclusion

As AI establishes itself as a transformative force in various spheres of society, from business decision-making to the administration of justice, its implementation must be guided by ethical principles, responsibility, and legal compliance.

Although there is still no specific law regulating the use of Artificial Intelligence, the analysis of concrete cases reveals a trend toward shared responsibility among developers, intermediaries, and operators of AI systems, a trend also reflected in the bill still under consideration by the National Congress.