Ana Clara Costa Cerceau
Enacted in August 2018, Brazil’s General Data Protection Law (LGPD) aims to safeguard citizens’ fundamental rights to freedom and privacy. The law came into force in September 2020, and the National Data Protection Authority (ANPD) is the body responsible for its enforcement and oversight.
In recent years, Artificial Intelligence (AI) has become both a significant challenge and a powerful ally in people's daily lives. The growing use of AI in the legal sector, especially for contract drafting, raises issues that warrant critical analysis.
Tools like ChatGPT offer practical and agile solutions, making it easier to prepare legal documents while significantly reducing costs. However, automating these processes requires careful reflection on the risks involved. Despite the efficiency of such tools, they have important limitations that can compromise the quality and legal security of the contracts generated.
Firstly, AI can produce contracts based on predefined models or templates, but its ability to tailor them to the requester's specific needs is limited. Although the process is faster, AI often fails to capture the unique aspects of each situation, resulting in clauses that are generic and ill-suited to the parties involved. This can lead to misinterpretation and even the invalidation of the contract in the event of a breach.
Additionally, AI tools may not be up to date with recent legislative changes or local regulations, increasing the risk of including outdated clauses or those that conflict with current laws. This misalignment can render a contract invalid or disadvantage one of the parties, undermining its legal effectiveness. While highly efficient at processing large volumes of data, AI lacks the precision needed to assess such legal nuances, potentially producing documents with significant gaps or errors.
For example, consider a company that uses ChatGPT to draft a service agreement. The tool may generate standard clauses that overlook the specifics of the relationship between the parties. In a clause on working hours or subordination, the AI might include terms that impose excessive control over how the service is to be performed, such as fixed schedules or mandatory adherence to internal company rules. These conditions could recharacterize the arrangement as an employment relationship, even when the intention was to maintain an independent contractor agreement. By introducing features typical of an employment contract, such as subordination and the personal performance of the work, the AI-generated contract could lead to legal reclassification, exposing the company to labor-related liabilities.
The lack of critical legal analysis and professional experience is another crucial point—even acknowledged by AI tools themselves. Technology does not have the interpretive capacity of a legal professional and therefore cannot reliably anticipate legal risks or identify key aspects of a contract. This becomes especially problematic with essential clauses, such as those involving dispute resolution, penalties for breach, or deadlines. For this reason, ChatGPT itself recommends that users consult a qualified attorney to assist with any legal matters or review of contracts it generates.
Another important concern involves data privacy and security. Using automated tools to create contracts often entails processing sensitive information. In this context, protecting such information is critical, as AI platforms may not offer the security required to safeguard data, especially in compliance with data protection laws such as Brazil's LGPD. The leakage or misuse of confidential information can lead to serious legal consequences for the parties involved.
Despite technological advancements and the advantages of AI in automating legal processes, the role of the lawyer must remain essential. Having AI-generated contracts reviewed by a qualified professional is critical to ensure compliance with current legislation, protect the parties' rights, and secure the legal soundness of the document. Human oversight ensures that clauses are customized to the specific needs of each contract, preventing risks and potential financial or legal harm.
Finally, regulation and ethics in the use of AI are also central issues. For this technology to be used responsibly, clear guidelines must be established to safeguard individual and collective rights, as required by the LGPD, and to ensure that automation does not compromise the quality of legal services provided.