
The Spanish judiciary faces a new challenge: the use of artificial intelligence in legal documents has resulted in fake court rulings appearing in official records. This incident has served as a wake-up call for everyone working with new technologies in the legal field. The court’s decision not only determined the fate of a particular lawyer, but also set boundaries for accountability when using AI.
In October 2025, attorney I.G. filed a complaint on behalf of a company in the Sala de lo Social del Tribunal Superior de Justicia de Navarra. The document reportedly cited excerpts from supposed decisions of the Tribunal Constitucional, Tribunal Supremo, TSJ de Navarra, and TSJ de Madrid. However, judges quickly discovered that these quotes didn’t match any real court rulings, and the dates and case numbers didn’t correspond with official records. In fact, the case involved entirely fabricated decisions generated by artificial intelligence.
After uncovering these violations, the court initiated disciplinary proceedings in February on the grounds of bad faith. The law establishes fines ranging from 180 to 6,000 euros for such actions, and also allows forwarding the case to a professional association for further sanctions. The court’s decision noted that such errors not only breach good faith but also undermine respect for the judiciary and hinder the work of magistrates.
Reaction and consequences
The lawyer promptly admitted the mistake, explaining it as unintentional and due to insufficient review of text prepared with AI assistance. In her explanation, she emphasized that she was fully retracting the disputed citations, did not intend to mislead the court, and asked for her sincere apologies to be taken into account. The attorney also requested that, if the court decided to take action, it should limit itself to a minimal warning.
A few days later, the lawyer sent an additional statement in which she again expressed regret and stressed that this incident had been a serious lesson for her. She asked that her actions not be considered disrespectful or malicious, and for the disciplinary case to be closed. In her view, a verbal warning would have been sufficient as a last resort.
In the final decision, prepared under the guidance of Judge María José Ramo, the court noted that the use of new technologies and materials generated by artificial intelligence requires special attention to ethical and legal issues. Responsibility for the accuracy and compliance of documents remains with lawyers, even when parts of the process are automated.
Technology and legal practice
The court in Navarra cited studies indicating that so-called AI ‘hallucinations’ are common in the legal field. The judges stressed that careless use of AI can lead to serious consequences, including accusations of bad faith and abuse of process. Professional ethics require lawyers to thoroughly verify all submitted materials.
In this case, the judges decided not to sanction the lawyer, taking into account her prompt response and admission of the mistake. However, the ruling emphasizes that this incident should serve as a warning to all legal professionals using AI without proper verification. Such incidents may lead to tighter control over the use of technology in courtrooms.
In Spain, new approaches to regulating the use of artificial intelligence in legal practice are already being discussed. According to RUSSPAIN.COM, such cases are becoming more frequent, requiring the professional community to develop unified standards and accountability measures.
Context and trends
In recent years, Spanish courts have increasingly faced questions related to the use of new technologies and ethics in legal work. For example, in a recently discussed case, a court refused to lift restrictions on a police officer accused of harassment, sparking debate about the balance between protecting rights and following procedure. Such situations demonstrate that Spain’s judicial system is being forced to adapt to new challenges linked to digitalization and automation.
The surge in cases where artificial intelligence generates unreliable data for legal documents is prompting courts and professional associations to review their internal regulations. Crucially, even in the absence of malicious intent, it is the person, not the software, who remains responsible for the final document. This is becoming a key principle for everyone working with AI in the legal sector.
In recent years, Spain has seen a rise in disciplinary cases related to mistakes made when using digital tools in court. In 2024, there was a case in which a lawyer submitted documents containing incorrect references to legal acts, resulting in an internal investigation. At the same time, the debate in Europe is intensifying around the need for specific standards for working with AI in legal practice. These trends reflect growing attention to issues of accountability and transparency in the era of digital technology.