
Is it possible to reconcile privacy and artificial intelligence?

Published on: 11/01/2024

Artificial Intelligence (AI) is an area of computer science that studies and develops algorithm-based systems and technologies to perform tasks, classifications, and predictions and to make decisions, reproducing human behavior and work. AI applications are diverse, from a chatbot that facilitates interactions with clients to trend analysis for credit granting, to name a few examples.

A point these technologies have in common is their dependence on the collection of large amounts of data to train the algorithms and adapt them to the context of use. From these data sets, AI is able to extract valuable information, automating repetitive work and enabling more objective decisions.

Despite its advantages, the use of AI systems raises concerns for society, and its rapid development drives legal discussions in several countries regarding the regulation of the technology.

In this regard, the European Parliament approved in June the European Regulation on Artificial Intelligence (EU AI Act), defining obligations for systems considered high-risk. The EU AI Act has not yet been definitively passed, but it already provides important insights for companies that intend to develop, or are already developing, this type of technology. In the same vein, the Brazilian Congress has several bills on the subject still pending, some more mature than others.

In parallel, legal challenges involving civil liability, intellectual property, and privacy have materialized with the rapid expansion of this technology. When the data processed relates to identified or identifiable individuals, personal data protection regulations apply to these systems, and there seems to be a contradiction between the operation of AI systems and the principles and rules of privacy. The purpose of this text is to shed light on these apparent contradictions and to point out possible solutions that have been designed by the market and by regulatory authorities.

Key privacy challenges in AI systems

Artificial intelligence systems present several challenges for privacy professionals. This text presents the three main ones, based on the principles and obligations provided for in the Brazilian General Personal Data Protection Act (LGPD, Law No. 13,709/2018):

Transparency and non-discrimination

Artificial intelligence algorithms learn from the data that feeds them, a technique called machine learning. In other words, the algorithm is effectively reprogrammed by the data in the databases used to train it. This has two consequences in terms of privacy.

First, it means that the “reasoning” behind an AI decision or conclusion may not be known or understood even by the developers of the technology themselves. This is the so-called “black box effect” of AI, which hampers compliance with the principle of transparency in data processing.

Second, because of this operating logic, the data directly influences the results the AI achieves, since correlations are generated and patterns are identified from those databases. If the developers or the dataset are biased, the technology's results may reflect structural injustices in society, reinforcing biased representations of reality.
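To illustrate, here is a minimal sketch of the kind of disparity check a development team could run on a model's outputs. The column names, data, and the 80% threshold are hypothetical examples, not a legal standard:

```python
import pandas as pd

# Hypothetical model outputs: one row per applicant, with a protected
# attribute ("group") and the model's binary decision (e.g., credit approval).
results = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   0,   0,   1,   0,   1],
})

# Approval rate per group: large gaps suggest the training data or
# the model may be reproducing a structural bias.
rates = results.groupby("group")["approved"].mean()
print(rates)

# An illustrative heuristic: flag the model for review if the lowest
# approval rate falls below 80% of the highest one.
if rates.min() / rates.max() < 0.8:
    print("Disparity above threshold - review training data and features.")
```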

Adequacy, need, and purpose

Because of this evolving structure of artificial intelligence systems, it can be difficult to predict what information will be extracted from a large database of personal data. This makes it difficult to pre-define a specific and legitimate purpose for complex algorithms. Without identifying a legitimate purpose, it also becomes a challenge to determine which data represents the minimum necessary for the technology to work properly and whether the data used is adequate for the intended purpose.

Security and re-identification

Even if the data used to train an algorithm is de-identified, the large amount of information processed can lead to the re-identification of data subjects - the so-called “mosaic effect”. From a set of anonymized data, it is thus possible to infer who the information refers to through the patterns identified.
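A minimal sketch of how this kind of re-identification by linkage can happen, assuming two small hypothetical datasets that share quasi-identifiers (ZIP code, birth date, and gender):

```python
import pandas as pd

# "Anonymized" health dataset: direct identifiers removed, but
# quasi-identifiers (zip, birth_date, gender) remain.
health = pd.DataFrame({
    "zip": ["01310", "04538"],
    "birth_date": ["1985-03-02", "1990-11-17"],
    "gender": ["F", "M"],
    "diagnosis": ["diabetes", "hypertension"],
})

# Public dataset (e.g., a voter roll) containing names and the same
# quasi-identifiers.
public = pd.DataFrame({
    "name": ["Ana Silva", "Bruno Costa"],
    "zip": ["01310", "04538"],
    "birth_date": ["1985-03-02", "1990-11-17"],
    "gender": ["F", "M"],
})

# Joining on the shared quasi-identifiers re-identifies the subjects.
reidentified = health.merge(public, on=["zip", "birth_date", "gender"])
print(reidentified[["name", "diagnosis"]])
```

Techniques such as k-anonymity or the aggregation of quasi-identifiers aim precisely at making this kind of join inconclusive.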

Privacy best practices in the development of AI systems

Both in the use and in the development of AI systems that process personal data, there are best practice guides developed by the market and by regulatory authorities. Their objective, in general, is to minimize risks and demonstrate the commitment of technology developers to the topic throughout the development and use of these systems.

In the case of companies, organizations, or public bodies whose core activity or future strategy involves technology development, these good practices can be managed by an Artificial Intelligence Committee with representatives from different areas, such as legal, privacy and data protection, integrity, and security, in addition to the developers themselves. In high-risk cases, the participation of users and representatives of civil society can even be considered.

Policies and documentation

One of the starting points for technology development to comply with data protection rules is to define specific processes aimed at this objective, formalized in policies and procedures. These documents should also reflect the responsibilities and role of each team on the Artificial Intelligence Committee, as well as the strategy for recording the phases of technology development, to demonstrate the commitment to privacy at each stage.

Whether using or developing AI systems, companies can keep an inventory of these technologies. One of the main objectives of this registry is to define the determined, legitimate, and specific purpose of the use of personal data in each system, as well as a list of the types of data and the forms of storage and processing. Defining the legal basis is also a fundamental step and must be included in the inventory; after all, it is what authorizes the company to process personal data.
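For illustration only, an inventory entry could be represented as a structured record like the one below. The fields are assumptions drawn from the elements listed above, not a mandated format:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemInventoryEntry:
    """One entry in a hypothetical AI-system inventory."""
    system_name: str
    purpose: str              # determined, legitimate, and specific purpose
    legal_basis: str          # what authorizes the processing under the LGPD
    data_categories: list[str] = field(default_factory=list)
    storage: str = ""         # where and how the data is stored
    processing: str = ""      # how the system processes the data

entry = AISystemInventoryEntry(
    system_name="credit-scoring-model",
    purpose="Assess credit risk of loan applicants",
    legal_basis="Legitimate interest",
    data_categories=["income", "payment history", "age range"],
    storage="Encrypted relational database, 5-year retention",
    processing="Batch scoring; model retrained quarterly",
)
print(entry)
```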

Development and testing

The purpose recorded in the inventory must be observed both in the learning phase (that is, in the design, development, and training of the system with unbiased databases) and in production (that is, in the implementation of the system). During testing, it is important to consider the vulnerable groups that may be affected by the technology, documenting the measures implemented to prevent specific harm.

To ensure that the law is complied with and that potentially affected groups are considered, it is advisable to involve people who follow the outputs of the algorithm in the decision-making process carried out by the AI. This human influence can have different levels: total control over the tool's decisions; control only to overturn certain decisions; or monitoring and supervision, intervening on an ad hoc basis in case of unexpected events. Human monitoring can also help comply with the principle of transparency, making it possible to explain to data subjects the reasons for the decisions taken, should they be harmed by an AI decision.

Another relevant point, especially considering the AI Act, is the registration and retention of logs of technology events - that is, a record of the activities carried out by the developers (access, editing, deletion, etc.) - to allow internal and external monitoring, if necessary.
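A minimal sketch of what such event logging could look like, using Python's standard logging module; the event names and fields are illustrative assumptions:

```python
import json
import logging
from datetime import datetime, timezone

# Append-only audit log for system events (access, editing, deletion).
logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                    format="%(message)s")
audit = logging.getLogger("ai_audit")

def log_event(actor: str, action: str, target: str) -> None:
    """Record who did what, to which artifact, and when."""
    audit.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,   # e.g., "access", "edit", "delete"
        "target": target,   # e.g., a dataset or model version
    }))

log_event("dev_maria", "edit", "training-dataset-v3")
log_event("dev_joao", "delete", "model-v1.2")
```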

Data Protection Impact Assessments and Reports (RIPD)

Documenting the privacy risks of an AI system, whether in its use or its development, is essential to establish governance of the technology and reduce the likelihood and impact of any harm it may cause. European personal data protection authorities have extensive documentation on how to conduct these reports (Data Protection Impact Assessment - DPIA), which can serve as a reference in this context, given that the Brazilian National Data Protection Authority (ANPD) does not yet have regulations on the subject. In general, the objective is to establish a management system by anticipating, identifying, and analyzing the risks associated with AI and defining appropriate measures to mitigate or eliminate them. Such measures may be, for example, improving the database used to train the algorithm (reducing or eliminating biases that generate discriminatory results) or establishing redundancy and contingency processes in the system (such as when a supplier involved in the operation of the technology fails).
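To make the likelihood-and-impact logic concrete, here is an illustrative risk-scoring sketch. The risks, scales, and scores are hypothetical examples, not part of any official DPIA methodology:

```python
# Illustrative risk register: each risk gets a likelihood and an
# impact score on a 1-5 scale; the product ranks mitigation priority.
risks = [
    {"risk": "Biased training data produces discriminatory output",
     "likelihood": 4, "impact": 5,
     "mitigation": "Rebalance dataset; run disparity tests before release"},
    {"risk": "Supplier outage disables the scoring service",
     "likelihood": 2, "impact": 4,
     "mitigation": "Redundancy and contingency process with second supplier"},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

# Highest scores first: these risks are mitigated or eliminated first.
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f"[{r['score']:2d}] {r['risk']} -> {r['mitigation']}")
```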

Transparency and information

Both privacy and data protection laws and artificial intelligence bills around the world have transparency as a fundamental principle. This is because it is through accessible information that users can make informed decisions and trust the technology. Thus, when using or developing AI systems that process personal data, it is relevant to inform data subjects how the data is collected and used by the models (respecting intellectual property limits), as well as what risk mitigation measures have been implemented in the technology. In this sense, it is recommended that AI systems be presented as such, so that data subjects know that it is not a person making that decision and carrying out that interaction. The company must also plan the steps to be taken in the event of an incident involving AI, to assess the need to notify authorities and data subjects, ensuring transparency.

Monitoring

Once the technology has been launched or implemented in a company's workflow, it is important to keep monitoring its operation and impact. Developing a plan for this monitoring organizes the internal work and enables preventive action to anticipate possible harm.
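One simplified way to operationalize this monitoring is to compare a production metric against a baseline recorded at launch and alert on drift; the metric, baseline value, and threshold below are assumptions for illustration:

```python
# Simplified drift check: compare the approval rate observed in
# production against the rate measured at launch (the baseline).
BASELINE_APPROVAL_RATE = 0.62   # hypothetical value recorded at launch
ALERT_THRESHOLD = 0.10          # hypothetical tolerated absolute drift

def check_drift(production_decisions: list[int]) -> None:
    current = sum(production_decisions) / len(production_decisions)
    drift = abs(current - BASELINE_APPROVAL_RATE)
    if drift > ALERT_THRESHOLD:
        print(f"ALERT: approval rate drifted {drift:.2f} - investigate.")
    else:
        print(f"OK: approval rate {current:.2f} within tolerance.")

# Example: a batch of recent production decisions (1 = approved).
check_drift([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])
```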

Conclusion

The security and privacy risks of artificial intelligence are concerns for society and are on the regulatory agenda of many countries. Proof of this is the Bletchley Declaration, signed in November 2023 by 28 countries, including the USA, China, and the United Kingdom. The document aims to identify common risks and concerns among the signatories regarding AI safety.

In addition to this document, the legislative journey of the AI Act in the European Parliament also demonstrates the global effort to define rules for the development and application of the technology. In Brazil, the debate is also ongoing: Congress has several bills on the subject (such as Bill 759 of 2023 and Bill 2338 of the same year), and, recently, the president of the Brazilian National Data Protection Authority (ANPD) took the position that AI should be part of the Authority's regulatory scope.

This is because, in relation to privacy, the challenges posed by AI are fundamental, since the logic of the technology seems to collide with the principles of personal data protection laws, such as purpose, security, non-discrimination, and necessity. Some European Union data protection authorities, in this sense, already have national guidelines and regulations on how AI systems that process personal data can comply with the legislation.

These good practices, which have been consolidating in the market, help companies that develop or use AI algorithms to reconcile their legal responsibilities with technological progress. Establishing adequate governance is essential not only to prevent technological advances from being predatory or harmful to vulnerable groups, but also to promote fairer relations.