Medina Osorio Advogados
ABSTRACT
This essay addresses the strategic and essential relevance of the constitutional right to understand public authorities’ decisions, as a result of the consolidation and maturation of constitutional principles such as publicity, transparency, reasoning of administrative and judicial acts, substantive due process of law, equality, adversarial proceedings, broad defense, human dignity, proportionality, reasonableness, and other fundamental rights enshrined in the 1988 Brazilian Constitution. In the digital era of complexity, the effective application of these rights depends fundamentally on systemic traceability, structured through intelligent, digital, auditable legal databases supported by statistics and statistical models. In this sense, the work discusses the concept of the right to understanding as a systemic and more profound evolution derived from the set of fundamental rights, modifying the traditional paradigm of publicity and transparency in public decision-making bodies—especially in light of the mandatory observance of judicial precedents by judges, courts, and administrative authorities. Accordingly, this essay aims to contextualize the right to understanding within the normative framework that enables public access to structured legal databases, artificial intelligence, statistics, and the auditability of algorithms and databases. Simultaneously, it seeks to demonstrate the importance of these contemporary tools in understanding public decision-making acts. Finally, this essay also emphasizes the importance of strengthening a culture of coherence, traceability, predictability, and institutional self-criticism regarding the content and identification of decision-making patterns, as well as reinforcing a culture of education, precedents, artificial intelligence, and data access across institutions—from educational frameworks to mechanisms of access to justice.
KEYWORDS
Right to Understanding; Precedents; Transparency; Publicity; Artificial Intelligence; Legal Databases; Statistics; Auditability; Public Decisions; Due Process of Law.
1. Introduction
The classic right to publicity and transparency of judicial and administrative acts, provided for in Articles 5, items LX and XXXIII, 93, IX, and 37, caput, all of the 1988 Constitution, derives from the framework of contemporary liberal democracies. These same requirements of transparency and publicity coexist with the possibility of protecting intimacy, privacy, and confidentiality, whether in the cases provided for in the Constitution or in cases provided for by law 2. However, the digital age, intertwined with the concepts inherent to the era of complexity, in which transformations and the speed of events and plural thoughts interconnect, demands a rethinking of the hermeneutics regarding the scope of transparency and publicity surrounding decisions restricting fundamental rights. In the Brazilian Constitution of 1988, there is no doubt that it is necessary to interpret in a coherent and harmonious manner the requirements of transparency (art. 5, LX and XXXIII; art. 37, caput), publicity (art. 93, IX) and justification for judicial and administrative decisions (art. 93, IX; art. 37, caput), together with the mandatory observance of the prohibition of arbitrariness of public powers, resulting from substantive due process of law (art. 5, LIV), obedience to substantive and formal due process of law (art. 5, LIV and LV), compliance with legal certainty (art. 5, caput and XXXVI), equality (art. 5, caput and I), adversarial proceedings (art. 5, LV), full defense (art. 5, LV) and respect for human dignity (art. 1, III).
As if this indispensable integration of constitutional requirements connected to the transparency and publicity of state decisions were not enough, the legislator, under the aegis of the democratic principle, also embraced the duties of coherence, objective good faith, and institutional loyalty by providing for the mandatory observance of precedents by the Judiciary and by administrative authorities when issuing their decisions and shaping case law in the application of statutes (Articles 926, 927, 928, 489, § 1, items V and VI, and 1,036 to 1,041 of the 2015 Code of Civil Procedure).
In this context, the problem of judicial congestion, slowness, and overload affecting the judiciary is nothing new in Brazil. Furthermore, there is another serious structural problem: the unpredictability that plagues the system, given the lack of a culture of judicial precedent formation. The system adopted by articles 926, 927, 928, 489, §1, items V and VI, and 1036 to 1041 of the 2015 Code of Civil Procedure was not accompanied by a corresponding cultural implementation effort in the areas of education, training of judges, lawyers, and members of institutions essential to justice, much less by a national mobilization to foster this new culture.
There is no doubt that there is high-quality national literature on the theory of precedents, which originated in English law and was embodied, with adaptations, in North American law, from which it migrated into Brazilian law 3. Nevertheless, the Romano-Germanic culture of which Brazil is heir does not automatically adapt to a distinct, centuries-old culture that presupposes an entire tradition in the construction of precedents, and obstacles persist. In this essay, our purpose is not to address the theoretical obstacles, which belong to the field of the great scholars of civil procedural law and to the dogmatics of precedent theory.
As we warned initially, one of the first obstacles we will address in this work is related to the culture of teaching. However, there are essential pragmatic difficulties connected to the implementation of structured databases, artificial intelligence tools, statistics, transparency, and integration with the accessibility of the administrative jurisprudence of institutions essential to justice, as well as of institutions that provide public services and enforce administrative norms. In short, the delivery of justice is not an exclusive prerogative of the Judiciary. The constitutional principles that govern public administration, enshrined in Article 37, caput, of the 1988 Constitution, must gain substantial depth to allow access to justice as a genuine fundamental right of the people. This presupposes access to structured databases, closely interconnected with advanced artificial intelligence and statistical technologies, so that an authentic theory of judicial precedents can be constructed and administrative jurisprudence can be linked to these precedents in a manner that is accessible and transparent to the public.
Furthermore, this essay also proposes to demonstrate that this new culture will strengthen institutions and the market with new paradigms of institutional integrity, quality, and protection of fundamental rights, as well as efficiency and competitiveness. In this scenario, our conclusion is that this new framework will be made possible through new transformative compliance models, both in the public and private sectors.
Regarding technological innovation in the legal field, it is important to reflect on three priority areas of action: (a) the careful implementation of artificial intelligence in the organization and analysis of documentary and case law collections; (b) systematic education on precedents in higher education institutions and government entities; and (c) expanding the application of the logic of precedents beyond the Judiciary, also encompassing the administrative sphere, such as audit courts, public defenders’ offices, and regulatory agencies.
One of the hallmarks of the contemporary world, especially in Brazil, is the normative tangle and the constant profusion of laws, constitutional amendments, sub-legal normative acts, regulations, and rules of all kinds—a phenomenon that constitutes a permanently complex and continually changing normative network. As if this sophisticated machinery were not enough, this entire abstract normative apparatus undergoes a surprising metamorphosis when applied to concrete cases submitted for judgment in judicial and administrative instances, where the most diverse authorities boast decision-making autonomy.
In practice, we are talking about thousands of judges, appellate judges, and justices of the higher courts, as well as members of public prosecutors’ offices, public attorneys’ offices, regulatory agencies, audit courts, autonomous agencies, decentralized administrations, and multiple state agencies, including institutions overseeing activities essential to justice. The abstract normative tangle transforms into an even more complex and unpredictable jurisprudential multiplicity, aggravated by the difficulty of access for those under administration and jurisdiction. This scenario greatly accentuates the compromise of expectations related to legal certainty, equality, transparency, impartiality, substantial publicity, and the prohibition of arbitrariness by public authorities.
2. Decisional transparency and databases as a constitutional imperative inherent to the right to understand decisions restricting fundamental rights
2.1. Legal Database: Architecture of Transparency and State Accountability
The State’s actions, in their contemporary form, transcend the limits of formal legislation and judicial decisions. The legal binding of a person, whether natural or legal, does not derive solely from rules developed by the Legislative Branch or decisions handed down by courts. It arises from a multitude of legally binding State acts: administrative decisions, judicial and administrative jurisprudence, agreements concluded with public authorities, administrative contracts, sub-legal normative acts, and other administrative acts that, directly or indirectly, restrict, modulate, or recognize fundamental rights.
This plurality of decision-making, dispersed across different spheres and instances of public power, demands an institutional response commensurate with its complexity and impact. It is not enough for such manifestations to be formally accessible; it is essential that they be organized in a systematic, intelligible, and technically structured manner. Thus, the need for a legal database arises, not as a merely archival instrument, but as a material foundation for institutional transparency and state accountability.
This database should gather and make accessible the State’s decision-making acts that, even if they do not have a typical normative form, produce relevant legal effects on the sphere of human rights. It is an infrastructure focused on qualified publicity, structured research, institutional auditing, and democratic governance.
In this scenario, it is imperative that we conceptually address the legal database and its impact on redefining the understanding of public authorities’ decisions.
A legal database is, initially, a digital structure, whether public or private, but necessarily organized, intelligent, and auditable, whose purpose is to capture, gather, receive, classify, order, explain, enable third-party interaction, and make data and information intelligible, in order to optimize the institutional performance of the respective holder of this database and of its users, respecting the fundamental and individual rights involved and preserving, when necessary, the limits inherent to the duties of confidentiality, as well as the institutional memory of the decisions and patterns detected.
This definition of a database, which always involves legal aspects, recognizes the database as a necessarily intelligent entity. In this context, the legal database must perform at least some essential, fully auditable functions: ordering and classification; intelligence and statistical functions; interactive and organizational functions; and institutional and protective security functions 4.
This scope includes judicial decisions, administrative decisions, judicial and administrative jurisprudence, judicial and extrajudicial agreements signed before or with public authorities, administrative contracts, normative acts, and any administrative acts that produce legal effects on the sphere of freedom, property, self-determination, or legal prerogatives of the person.
The legal database, in this sense, is not just an informational repository, but a technical-normative instrument aimed at consolidating public integrity, institutional predictability and social control over state acts.
We will examine the legal, constitutional, and technical foundations of the proposed concept, as well as the operational challenges and potential of its application in the context of the digital transformation of the State and the consolidation of data-driven governance models.
The function of a database in the legal field transcends the instrumental concept of a mere document repository. It is an institutional infrastructure oriented toward the systematization, rationalization, and transparency of legal knowledge, which carefully gathers and organizes judicial decisions, administrative decisions, case law, formal agreements, and administrative acts with significant legal impact. By adopting logical, chronological, thematic, and functional criteria, this type of database provides strategic support to the interpretation and application of the law, allowing qualified access to precedents, normative foundations, and coherent lines of argument.
When well-structured, a legal database directly contributes to promoting legal certainty, institutional predictability, and the effectiveness of justice. Its function is not limited to consultation: it acts as a tool for consolidating understandings, supporting legal research, and reinforcing the integrity of state decisions. Thus, it ceases to be a secondary technical instrument and establishes itself as a fundamental pillar in the architecture of institutional trust, especially in a context of increasing regulatory complexity and the need for public oversight of state actions.
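To make the idea of a structured, auditable record concrete, the sketch below shows, in Python, one possible shape for an entry in such a legal database. The field names and the example data are illustrative assumptions, not a prescribed standard; a real schema would follow each institution's own taxonomy and confidentiality rules.

```python
# Minimal sketch of a structured record for a legal database. Field names are
# illustrative assumptions; real schemas would reflect institutional taxonomies
# and confidentiality constraints.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class DecisionRecord:
    case_id: str                      # docket or protocol number
    issuing_body: str                 # court, chamber, agency or panel
    rapporteur: Optional[str]         # reporting judge, if applicable
    decision_date: date
    subject: str                      # thematic classification
    thesis: str                       # legal thesis or holding
    outcome: str                      # e.g. "granted", "denied", "settled"
    grounds: List[str] = field(default_factory=list)          # main legal grounds
    cited_precedents: List[str] = field(default_factory=list)
    confidential: bool = False        # flags records subject to secrecy

# Hypothetical example entry:
record = DecisionRecord(
    case_id="0001234-56.2024.8.00.0000",
    issuing_body="3rd Civil Chamber",
    rapporteur="Judge X",
    decision_date=date(2024, 5, 10),
    subject="administrative sanctions",
    thesis="Prior notice is required before suspending benefits",
    outcome="granted",
    grounds=["art. 5, LV, CF/1988"],
    cited_precedents=["Theme 1234/STJ"],
)
print(record.case_id, record.outcome)
```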
2.2. Artificial Intelligence and Statistics as Infrastructure for Institutional Inference and Public Motivation
Statistics, in the era of mass, standardized decisions and institutional and technological complexity 5, should be understood as the scientific form of rational listening. It is a structured system of inference about regularities and exceptions, capable of identifying patterns, indicating risks of arbitrariness, and offering epistemic support for the legitimacy of public decisions. By transforming data into judgments, statistics allows the State to understand itself, revise its language, and act with predictability, responsibility, and prudence.
It is not merely a technical tool for quantification, but a formal language of reasonableness. It organizes the relationship between variability and coherence, offering objective criteria for distinguishing acceptable fluctuations from unjustified deviations. In a scenario where public decisions produce massive, immediate, and cross-cutting effects, statistics become a basis for structural accountability: it provides judges, managers, and regulators with a methodical mirror of the institution itself.
It is in this same context that artificial intelligence should be understood, especially in its predictive and explainable aspects. Artificial intelligence is not a substitute for public reason, but a technical extension of its analytical capacity. When guided by structured legal data and combined with statistical inference, AI can detect patterns of institutional behavior, recognize decisions outside expected parameters, suggest relevant precedents, and reinforce argumentative consistency 6.
Artificial intelligence acts as an instrument of interpretative traceability, allowing public motivation to be transformed from a formal or rhetorical gesture into a reconstructible, auditable, and comparable process. Its function is to broaden the scope of institutional attention, detect inconsistencies before they consolidate as structural ambiguity, and provide technical support for normative coherence. Like statistics, artificial intelligence doesn’t decide: it illuminates, signals, and suggests—so that human judgment can act with greater depth, context, and prudence.
Integrated, statistics and artificial intelligence become the infrastructure for public motivation, institutional traceability, and the prohibition of arbitrary action. They operate as invisible pillars of a new form of decision-making accountability, which is no longer supported solely by the authority of the function, but by the verifiable coherence of its foundations.
Ultimately, it is about equipping public discourse with technical tools that reinforce its commitment to legality, predictability, and integrity in the 21st century.
Statistics and artificial intelligence, in the context of public institutions in the 21st century, must be understood as convergent expressions of the same applied rationality: institutional inference under uncertainty. Both operate not only on data; they operate on doubts, asymmetries, variations, and repetitions—those things that, in the daily grind of public decision-making, require prudence, comparison, and motivation. Statistics provides the method of rational listening; artificial intelligence expands the scale, speed, and capacity of pattern recognition. Together, they structure a silent verification architecture.
Statistics is not just a measurement technique. It is a scientific way of interpreting regularities, recognizing exceptions, and estimating risks based on evidence. It transforms dispersion into structure, variation into signal, and noise into diagnosis. In a complex institutional environment, marked by repeated decisions and conflicting interpretations, statistics act as a filter for reasonableness: it allows us to distinguish between legitimate variations and unjustified deviations. Its function is to anchor public discourse in criteria that can be audited, compared, and eventually revised.
Artificial intelligence, in its predictive and explainable aspects, should be understood as a computational continuation of statistical inference. Every model that suggests, classifies, or anticipates the outcome of an institutional decision does so based on probabilistic structures—sometimes hidden, but always inferential. When combined with structured legal databases and guided by traceability principles, AI becomes an interpretative extension of institutional memory: it allows us to identify relevant precedents, suggest argumentative convergences, and flag decisions that deviate from recognizable norms.
Public motivation — which, at the legal level, requires clear and verifiable grounds—finds legitimate technical support in statistics and artificial intelligence. It’s not about replacing judgment, but about qualifying it. Decisions that incorporate statistical inference and computational intelligence are no less human; they are more nuanced, more contextualized, and more open to public criticism. Motivation ceases to be a ritual of language and becomes a manifestation of institutional coherence fueled by standards, references, and accountability.
By integrating statistics and artificial intelligence, the State reinforces its commitment to the traceability of decision-making and the prohibition of arbitrariness. Where there are identifiable patterns, there must be criteria to justify ruptures. Where there is normative regularity, there must be control over exceptions. The role of these technologies is not to decide—it is to illuminate. They are tools for institutional listening: they allow the State to listen to itself, compare itself, explain itself, and, when necessary, correct itself.
This understanding — that statistics and artificial intelligence, integrated and based on structured legal databases, should act as the technical infrastructure for public motivation and as guarantees of traceability, coherence, and the prohibition of arbitrariness — was recently corroborated, on an international scale, by the study of Chutisant Kerdvibulvech (Big Data and AI-driven evidence analysis: a global perspective on citation trends, accessibility, and future research in legal applications, 2024) 7.
Kerdvibulvech demonstrates, through empirical analysis and a global literature review, that artificial intelligence systems applied to legal analysis — especially in document review, litigation prediction, forensic image analysis, and contract evaluation — only produce legitimate and admissible effects when accompanied by rigorous statistical validation, methodological traceability, and transparent ethical parameters. Statistics, in this context, do not appear as an accessory technique, but as an epistemic guarantee of institutional rationality.
Kerdvibulvech argues that statistical inference is essential for controlling biases, measuring uncertainty, and identifying patterns and exceptions. At the same time, artificial intelligence must be applied under interpretable and auditable guidelines, so that institutional decisions—administrative, judicial, or investigative—do not become automatic gestures, but rather motivated acts with depth, prudence, and inferential responsibility.
This finding confirms the central thesis of this topic: contemporary public decision-making requires, in addition to legal grounds, a technical basis for inference, verification, and explanation to enable the right to understanding. Furthermore, decisions need to be interpreted within a systemic context to enable societal understanding. By integrating statistics and AI, the public institution commits to a systemic and complex decision-making language model, authentically integrated, capable of resisting structural ambiguity, preventing normative inconsistencies, and ensuring predictability without rigidity. Motivation ceases to be a formal requirement and becomes a continuous exercise of institutional listening: listening to data, patterns, ruptures, and the limits of one’s own decision-making power.
The structuring of case law, administrative, and business databases should not only serve retrospective statistics or predictive artificial intelligence. Their deeper role is to provide a stable, auditable, and technically grounded language for the conclusion of out-of-court settlements and to guide public authorities in their decisions, whose legitimacy depends on consistency with past decisions—whether judicial, administrative, or business. The lack of this anchoring in precedents and prior agreements compromises not only fairness between parties in similar situations, but also the logical integrity of the normative function exercised by institutions. As Chutisant Kerdvibulvech (2024) demonstrates, disorganized or disjointed data weaken artificial intelligence systems, impede pattern detection, and obscure systemic deviations. Therefore, to be legitimate, effective, and transparent, judicial or extrajudicial agreements must be integrated into an institutional memory formalized within contemporary technological standards rather than 20th-century methodology, reflecting already recognized standards and allowing public control over their consistency with the historical language of decisions.
In this context, the coordinated application of statistics and artificial intelligence offers public and private institutions the possibility of developing a cognitive infrastructure focused on analyzing, cross-referencing, and validating these databases. Statistics allow for mapping decision-making frequencies, identifying recurring argumentative patterns, and recognizing hermeneutical inflection points—including in historical series of out-of-court settlements and administrative decisions. Artificial intelligence, fueled by this structured universe, becomes capable of performing more sophisticated tasks: detecting internal inconsistencies, pointing out unjustified divergences between similar cases, assessing adherence to precedents, and predictively flagging risks arising from solutions outside the institutional framework.
This combined functionality acts as a silent engine of coherence. Systems trained with properly classified, versioned, and traceable data can offer, for example, suggestions for clauses aligned with the terms of previous similar agreements, alert to the risks of contradictory decisions, or project regulatory impacts not yet perceived by traditional legal rationality. The potential of this analytical architecture is not limited to efficiency, but reaches a deeper level: institutional self-awareness. It allows organizations to observe themselves, review themselves, learn from their own records, and, above all, establish a language that can be recognized and replicated responsibly.
The constitutional right to understanding inaugurates a new interpretative paradigm and breaks the limits of administrative and jurisdictional transparency and publicity. Decision-making acts must be understandable; it is not enough for them to be public and transparent. They must be well-founded, rational, and coherent. No one understands an arbitrary act. Coherence ceases to be merely a rhetorical aspiration and becomes monitorable, auditable, and measurable by metrics informed by empirical standards. Even an out-of-court or judicial settlement ceases to be an isolated gesture of convenience and becomes part of an ecosystem of interdependent decisions, subject to technical scrutiny and comparative review. This expands the capacity of the State and corporations to engage with their own precedents—judicial, administrative, and business—without losing sight of the uniqueness of each case. Artificial intelligence and statistics, together, do not replace institutional deliberation, but rather accompany it with an ethical horizon: avoiding distortions, preserving memory, and preventing arbitrariness from masquerading as discretion.
For the theory of precedents to operate as the rational core of the legal system—as required by stare decisis—it is essential that case law be analyzed not only hermeneutically but also through statistical and computational resources capable of diagnosing its internal fractures. Statistics, in this field, allows us to identify divergent patterns between decisions on analogous cases, map the dispersion of reasoning across different chambers and courts, and quantify the degree to which decisions adhere to or deviate from qualified precedents, signaling which areas offer legitimate controversy or hermeneutical discretion and which areas are pure anomalies and arbitrariness. This is a diagnostic and predictive function: it highlights where the system behaves coherently and where, due to interpretative or contextual flaws, it begins to lose its consistency and even enter suspect zones.
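As a minimal illustration of this diagnostic use of statistics, the Python sketch below computes per-chamber adherence rates to a qualified precedent and flags chambers that deviate markedly from the system-wide rate. The toy data, the boolean "follows precedent" tag, and the deviation threshold are assumptions chosen only to show the mechanics.

```python
# Minimal sketch: per-chamber adherence to a qualified precedent, flagging
# chambers whose rate deviates from the overall rate by more than a chosen
# threshold. The data and the 0.15 threshold are illustrative assumptions.
from collections import defaultdict

decisions = [  # (chamber, follows_precedent)
    ("1st Chamber", True), ("1st Chamber", True), ("1st Chamber", False),
    ("2nd Chamber", True), ("2nd Chamber", True), ("2nd Chamber", True),
    ("3rd Chamber", False), ("3rd Chamber", False), ("3rd Chamber", True),
]

totals, followed = defaultdict(int), defaultdict(int)
for chamber, follows in decisions:
    totals[chamber] += 1
    followed[chamber] += int(follows)

overall_rate = sum(followed.values()) / sum(totals.values())
THRESHOLD = 0.15  # maximum tolerated deviation from the overall rate

for chamber in sorted(totals):
    rate = followed[chamber] / totals[chamber]
    status = "review" if abs(rate - overall_rate) > THRESHOLD else "ok"
    print(f"{chamber}: adherence {rate:.0%} (overall {overall_rate:.0%}) -> {status}")
```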
Artificial intelligence, powered by structured databases with robust metadata (topic, thesis, adjudicating body, rapporteur, outcome, main grounds, legal provisions invoked), can go further: it can not only identify these inconsistencies but also project future deviations based on emerging decision-making trends. Through supervised algorithms, it is possible to train models that indicate, for example, the likelihood of a given thesis being revised, challenged, or ignored by certain instances or regions. This monitoring is vital for preserving the integrity of the precedent system, as it allows not only early warning of the erosion of consolidated understandings but also the continuous calibration of judicial language based on its own decisional memory.
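The predictive side can be sketched in the same spirit: the example below trains a simple logistic model on synthetic metadata to estimate the probability that a given thesis will later be challenged or revised. The features and labels are invented for illustration; a real system would require the curated metadata described above and careful validation.

```python
# Illustrative sketch of the supervised approach described above: a logistic
# model estimating the probability that a thesis will be challenged, trained
# on synthetic metadata. Features and labels are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Features per decision: [years since leading case (scaled), dissent rate in
# the panel, share of lower-court rulings departing from the thesis]
X = rng.random((200, 3))
# Synthetic label: 1 = thesis later challenged/revised, 0 = stable
y = (0.5 * X[:, 1] + 0.4 * X[:, 2] + 0.1 * rng.random(200) > 0.5).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
new_case = np.array([[0.2, 0.7, 0.6]])     # hypothetical metadata vector
risk = model.predict_proba(new_case)[0, 1]
print(f"Estimated probability of revision: {risk:.0%}")
```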
For this process to be reliable, however, the databases’ feed is crucial. Without adequate technical curation, artificial intelligence becomes blind and statistics become illusory. It is essential that decisions are correctly classified, that qualified precedents are clearly marked, and that there is an institutional protocol for recording relevant grounds. The data needs to be cleaned, updated, standardized, and enriched with context. It is not just about digitizing judgments: it is necessary to transform decisions into structured language, with specific fields that allow their algorithmic interpretation without loss of legal density. This structure will allow us to filter relevant cases, distinguish ratio decidendi from obiter dicta, and accurately map the cores of normative meaning that radiate from the precedents.
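A minimal curation step of this kind might look like the sketch below, which checks required metadata fields, normalizes dates, and flags records where the ratio decidendi has not been marked. The required-field list and the date format are assumptions, not an established protocol.

```python
# Minimal curation sketch: checks required metadata, normalizes dates and flags
# records that do not separate ratio decidendi from obiter dicta. Field names
# are illustrative assumptions, not an established standard.
from datetime import datetime

REQUIRED = {"case_id", "issuing_body", "decision_date", "thesis",
            "ratio_decidendi", "qualified_precedent"}

def curate(record: dict) -> tuple[dict, list[str]]:
    issues = [f"missing field: {f}" for f in sorted(REQUIRED - record.keys())]
    cleaned = dict(record)
    raw_date = cleaned.get("decision_date")
    if isinstance(raw_date, str):                       # normalize date format
        try:
            cleaned["decision_date"] = datetime.strptime(raw_date, "%d/%m/%Y").date()
        except ValueError:
            issues.append(f"unparseable date: {raw_date}")
    if not cleaned.get("ratio_decidendi"):
        issues.append("ratio decidendi not marked; obiter dicta may contaminate analysis")
    return cleaned, issues

cleaned, issues = curate({"case_id": "123", "decision_date": "10/05/2024",
                          "thesis": "prior notice required"})
print(issues)
```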
Thus, the preservation of stare decisis in the age of complexity will not be guaranteed solely by formal declarations of binding force, but by the institutional capacity to systematically monitor, audit, and project legal coherence. Statistics and AI, in this sense, operate as instruments of lucidity: they reveal the structure behind discourse, the regularity behind exceptions, and the instability behind the appearance of uniformity. Law ceases to be a self-centered narrative and becomes a field of empirical observation and interpretative responsibility.
The logic of structuring, statistical analysis, and predictive interpretation applied to case law and out-of-court settlements should be extended to public contracts and regulatory acts, whose regulatory effects are often broader and more lasting than specific court decisions. The creation of structured databases containing contractual clauses, performance conditions, addenda, legal opinions, and practical results allows artificial intelligence to detect abusive recurrences, strategic omissions, inconsistencies in interpretation, and asymmetries in treatment between different contracting parties in similar situations. In the regulatory field, the systematic organization of resolutions, instructions, ordinances, and decrees, with metadata on legal basis, issuing agencies, express motivation, and validity, enables analyses that reveal the degree of interpretative uniformity between federative entities and regulatory agencies. The statistics applied to this set become a tool for continuous constitutional auditing, capable of verifying whether normative acts comply with legal frameworks and align with the system of precedents, avoiding contradictions, normative redundancies, or regulatory gaps. When fed with technical rigor and processed by explainable AI, these collections become part of an integrated institutional intelligence, in which contracts and norms not only produce legal effects but also feed back into the state’s normative memory, allowing public governance to learn from its own actions and anticipate recurring deviations
2.3. Artificial intelligence, traceability and prohibition of arbitrariness
The application of artificial intelligence in public administration is a topic of growing importance. In this context, the relevance of this technology in systematizing and categorizing the reasons underlying the decisions of different agencies in similar situations stands out. This organizational capacity not only allows for the identification of inconsistencies but also helps clarify interpretative inconsistencies. Furthermore, systematization favors the promotion of institutional consistency, which is highly desirable.
However, it is essential that the implementation of artificial intelligence in this area is guided by public and auditable criteria 8, respecting the constitutional principles in force. This guidance is especially important regarding the principles that guarantee legal certainty and adequate justification for administrative acts. Motivation, understood as the requirement of a rational basis, is closely related to the concept of traceability. An administrative act that presents adequate motivation is one whose reasons can be easily reconstructed, verified, and challenged.
It is worth noting that this requirement of motivation is not limited to judicial decisions. It also extends to administrative sanctioning acts, binding opinions, regulatory resolutions, and agreements signed between public entities. In this sense, the organization of administrative jurisprudence in structured, improved databases, powered by artificial intelligence and made publicly available, represents an effective strategy. This approach not only ensures transparency but also establishes a model of administrative justice based on the coherence, rationality, and integrity of legal language.
Finally, it is imperative that technological advances in the public sector be accompanied by a firm commitment to the values that underpin the democratic rule of law. The responsible integration of artificial intelligence can thus significantly contribute to a more efficient, fair, and transparent public administration.
2.4. Integrative hermeneutics of the constitutional right to understanding: convergence between administrative and judicial jurisprudence through databases
The constitutional right to understand the content of public decisions restricting fundamental rights requires a hermeneutics that views the set of decisions as a whole—that is, a normative system properly structured, organized, classified, and capable of in-depth research in the technological age. In this sense, this hermeneutics, regardless of the current it purports to designate, is based on an unavoidable contemporary assumption: viewing the decision in its comprehensive and organized normative context. It is impossible to ignore that this context is part of the age of complexity, as has been stated from the outset 9.
Statistics, in its contemporary approach, transcends the mere measurement of quantitative phenomena. It establishes itself as a science that encompasses structure, inference, and decision-making, focusing on organizing information, identifying patterns, anticipating risks, and providing a rational basis for institutional choices. In the legal context, this function acquires crucial importance, proving to be a vital element for public rationality, consistency in decisions, and the integrity of governance.
Legal statistics are based on three fundamental pillars: first, the systematic and structured collection of relevant public data; second, the mathematical modeling of identifiable patterns in different contexts, ranging from regulatory to judicial; and finally, responsible inference, which must be auditable and publicly justifiable, regarding risks, trends, repetitions, and deviations. Thus, its object of study goes beyond numbers, encompassing institutional behavior, the language of decision-making, and the logic of institutions in uncertain situations.
In this sense, statistics transform the legal database into a space for systemic observation, enabling the identification of asymmetries, the prediction of conflicts, and the rationalization of state action. The relationship between statistics, databases, and legal language is, therefore, structural in nature. Without reliable, organized, and auditable data, legitimate inferences cannot be made; similarly, without institutionalized statistics, databases become mere technical repositories, devoid of analytical value.
Furthermore, without a standardized legal language, classifying, cross-referencing, and interpreting decisions becomes a challenge.
Within the justice system, statistics perform four central functions. First, a diagnostic function, which seeks to identify interpretative patterns, areas of instability, inconsistencies in decisions, and unequal treatment in similar cases. The preventive function, in turn, anticipates the emergence of conflicts, legal risks, or argumentative distortions, based on historical data and institutional patterns. Next, the strategic function underpins the management of case provisioning, the prioritization of agendas, and the structuring of coherent public responses. Finally, the restorative function provides objective support for reviewing dysfunctional practices and correcting institutional biases, also encompassing the perspective of external oversight.
For these reasons, it is essential that statistics be understood as a principle of democratic governance, especially in complex and sensitive legal environments. It strengthens the state’s capacity for introspection, allowing it to review its structures and promote actions based on standards of coherence, efficiency, and equity. The application of this rationality in the justice system is imperative in times of complexity. To this end, courts, regulatory agencies, public prosecutors, audit courts, and internal control bodies must incorporate statistical tools guided by clear public purposes and ethical, auditable routines that are permanently aligned with the constitutional language.
Thus, legal statistics should not be seen merely as an auxiliary technique, but as an epistemological foundation that sustains institutional integrity. It does not replace argumentation, but enhances it; it does not supplant the norm, but structures it; and it does not automate justice, but anchors it in evidence, memory, and public accountability. By translating legal data into analytical language, statistics ground justice as an expression of collective intelligence.
3. Metric Transparency in the Age of Algorithms
3.1. The role of statistics in the legal world and in improving the justice system
It is important to note, from the outset, the conceptual difference between data and algorithms, although it is assumed that these concepts underlie the logic of this essay. The concepts adopted here are taken from the Artificial Intelligence Risk Management Framework (AI RMF 1.0), prepared by the National Institute of Standards and Technology – NIST (2023), as well as from Regulation (EU) 2024/1689 of the European Parliament and of the Council, which deals with artificial intelligence within the European Union (EUROPEAN UNION, 2024). Both are based on essential premises inherent to digital governance 10.
The conceptual distinction between data, metadata, and algorithms constitutes a structuring element in contemporary debates on artificial intelligence, automated governance, and digital regulation. This distinction is essential for the proper interpretation of the obligations imposed on institutions seeking to ensure fair and equal access, comprehensible reading, and informational protection in the ethical use of data within the Brazilian legal, economic, and communication space—especially in digital environments such as social media, where data processing reaches massive proportions and amplified effects.
Data are digital records or representations—structured or unstructured—of any fact, act, occurrence, state, process, or event that can be captured and has relevance. Metadata, in turn, is data about the data, according to the context of its collection. In the European legal and technical context, it encompasses both raw data (such as the date and time of a purchase) and processed data (such as a user’s consumption history). In the RFM model (Recency, Frequency, Monetary value, a common customer-segmentation technique), data corresponds to the observable elements that feed the metrics: number of purchases made, date of the last transaction, and total amount spent.
Algorithms, on the other hand, are finite sets of logical or mathematical rules and instructions used to process data, extract patterns, categorize subjects, or make automated decisions. An algorithm based on the RFM model must be explainable through predefined formulas, based on risk models contextualized by segment.
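Taking the text's own RFM example, the sketch below shows what such an explainable, rule-based algorithm can look like: raw data (dates and amounts) are turned into scores by predefined, auditable formulas. The thresholds and the 1-to-3 scale are assumptions chosen for illustration.

```python
# Sketch of an explainable RFM-style scoring rule (Recency, Frequency,
# Monetary value), in the spirit of the example above. Thresholds and the
# 1-3 scale are assumptions chosen for illustration, not a fixed standard.
from datetime import date

def rfm_score(purchase_dates: list, amounts: list, today: date) -> dict:
    recency = (today - max(purchase_dates)).days          # data point
    frequency = len(purchase_dates)                       # data point
    monetary = sum(amounts)                               # data point
    # Predefined, auditable rules turn raw data into scores (the "algorithm"):
    r = 3 if recency <= 30 else 2 if recency <= 90 else 1
    f = 3 if frequency >= 10 else 2 if frequency >= 4 else 1
    m = 3 if monetary >= 1000 else 2 if monetary >= 200 else 1
    return {"recency_days": recency, "frequency": frequency,
            "monetary": monetary, "score": f"{r}{f}{m}"}

print(rfm_score([date(2024, 4, 1), date(2024, 6, 15)], [150.0, 320.0],
                today=date(2024, 7, 1)))
```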
From a regulatory perspective, European Regulation (EU) 2023/2854 classifies algorithms as operations on data that must comply with fundamental principles such as proportionality, non-discrimination, necessity, and transparency. This classification implies recognizing that algorithms are not immune to oversight, especially in contexts where they impact fundamental rights, access to essential services, or the classification of standards, agreements, and decisions that directly affect these rights.
In the Brazilian context, the General Data Protection Law (LGPD – Law No. 13,709/2018) establishes fundamental guidelines for the processing of personal data, highlighting the principles of purpose, adequacy, necessity, free access, data quality, transparency, security, prevention, and non-discrimination. The LGPD also imposes obligations regarding the protection of confidentiality and information security, requiring both the controller and the processor to adopt effective technical and administrative measures to safeguard personal data against unauthorized access or incidents—whether accidental or unlawful—that may result in the destruction, loss, modification, improper communication, or dissemination of such information.
The distinction between data, metadata, and algorithms, therefore, is not merely bureaucratic: it is a structural element for the ethical and normative governance of digital systems. Data constitutes the raw material; algorithms, the instruments of transformation. Protecting the integrity of the process requires oversight of both: the origin, classification, and use of data, as well as the criteria and impacts associated with algorithms.
The structuring of documentation, rigorous data processing, and the auditability of digital systems, including algorithms, provide new paradigms for the legitimacy of public decisions.
The consolidation and strengthening of precedents are connected to the right to understanding, which derives from a constitutional system integrated into a dynamic, self-critical, and regenerative institutional culture. This culture must not only defend itself but also constantly renew itself and engage in dialogue with the State. In this context, transformative compliance in companies also emerges as the link between the technical structure and the institutional essence, representing the ethical intelligence that guides artificial intelligence, the integrity that organizes data, and the trust that underpins predictability, with regulatory autonomy and a new perspective on the organization’s identity.
As we have mentioned, the complexity of decision-making, the lack of transparency, and the difficulty in accessing state decisions and acts stem from multiple factors. The legal database cannot dispense with integration with statistics, an intelligent tool that enables a sophisticated reading of institutional memory. It has become clear that statistics reveal patterns and deviations, allow for the critical interpretation of trends, hidden flaws, fissures, inequalities, and asymmetries, and support diagnoses regarding normative and decision-making implications. In the current context, the improvement of statistical tools shows trends toward integration with artificial intelligence, thus fulfilling extremely important functions in the analysis, diagnosis, and anticipation of scenarios and risks in the institutional decision-making architecture. In these new scenarios, AI considerably expands the scale and scope of statistical impact, and technological advances increasingly enable the identification of invisible and sophisticated patterns. Methodologies exist that allow for continuous revisions as evidence emerges or undergoes transformation. In any case, it should be noted that statistics and artificial intelligence do not replace human judgment, but constitute auxiliary instruments and have auditable and variable methodologies 11.
3.2. How algorithms impact and structure legal databases
Regarding the concept of an algorithm in information technology, it’s crucial to remember that it’s a structured set of instructions designed to solve problems or perform tasks automatically. In the digital world, an algorithm operates like a recipe that guides computers in their decisions, based on data. Rather than adopting a random approach, the algorithm follows meticulously designed steps aimed at classifying, ordering, correlating, or predicting information. This dynamic proves crucial, for example, when an electronic legal system is able to identify analogous decisions, when a platform detects contractual risks, or even when a compliance program identifies inconsistencies in business operations.
Although developed by technology experts, algorithms are not neutral; on the contrary, they reflect human choices about what should be valued, what can be disregarded, and which paths deserve to be prioritized. Therefore, their application in legal spheres requires a commitment to accountability, transparency, and oversight. When properly designed and audited, algorithms have the ability to organize vast volumes of data, reduce the incidence of errors, and increase the consistency of decisions. However, when they operate without proper oversight or with biased data, they risk perpetuating inequalities, creating legal vulnerabilities, and compromising institutional integrity.
Algorithms currently constitute the dynamic axis that transforms legal databases into functional systems of institutional rationality. If the database is responsible for storing and organizing decisions, normative acts, contracts, and opinions, they are the algorithms that give form, intelligibility, and operability to this collection 12. Through them, it becomes possible to classify documents, extract linguistic and normative patterns, identify recurrences, assess argumentative coherence, and build inferences applicable to new cases or regulatory scenarios.
In contemporary legal databases, algorithms perform a variety of structural functions, such as semantic indexing and hierarchical organization of content, where classification algorithms categorically organize decisions according to subject matter, legal basis, adjudicating body, cited case law, type of request or legal thesis, ensuring accurate information retrieval, even in massive and heterogeneous collections.
Furthermore, detecting patterns and inconsistencies becomes a vital function, with clustering and anomaly detection algorithms facilitating the identification of inconsistencies between similar decisions, revealing areas of jurisprudential instability and locating interpretations outside the historical norm. Temporal traceability and predictive inference are also crucial, with time-series algorithms enabling the detection of decision-making seasonality, interpretative ruptures, and normative effects over time, which is essential for statistical inferences, resource allocation, and institutional impact simulations.
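A simplified version of this pattern-and-anomaly detection is sketched below: decision texts are vectorized with TF-IDF, grouped with k-means, and those unusually far from their own cluster are flagged for review. The toy corpus and the 90th-percentile cutoff are assumptions; production systems would work on far richer features and metadata.

```python
# Sketch of the clustering/anomaly idea above: group decision texts with
# TF-IDF + KMeans and flag texts unusually far from their own cluster centre.
# The toy corpus and the 90th-percentile cutoff are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

texts = [
    "appeal granted, prior notice required before sanction",
    "appeal granted, notice required before administrative sanction",
    "appeal denied, statute of limitations expired",
    "appeal denied, limitation period had run",
    "sanction upheld without notice despite identical facts",  # potential outlier
]

X = TfidfVectorizer().fit_transform(texts)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
dist_to_own_centre = km.transform(X)[np.arange(len(texts)), km.labels_]
cutoff = np.percentile(dist_to_own_centre, 90)

for text, d in zip(texts, dist_to_own_centre):
    flag = "REVIEW" if d >= cutoff else "ok"
    print(f"{flag:6s} dist={d:.2f}  {text}")
```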
Explainability and legal inference in artificial intelligence are equally relevant; in more advanced systems, algorithms become logical inference mechanisms, used to suggest arguments, reconstruct legal grounds, predict procedural outcomes, or recommend contractual clauses, with all of this functionality dependent on the integrity of the database and the ethical governance of the applied algorithm.
Finally, filtering, cleaning, and anonymizing sensitive data through pre-processing algorithms is essential to eliminate duplication, correct inconsistencies, and ensure the anonymization of personal data before it is used for AI training or public analysis.
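The sketch below illustrates this pre-processing stage in a deliberately simplified way: it removes exact duplicates and masks obvious identifiers such as CPF-like numbers and e-mail addresses. The regular expressions are minimal assumptions; genuine anonymization requires broader techniques and human review.

```python
# Simplified sketch of the pre-processing step described above: deduplicate
# texts and mask obvious identifiers (Brazilian CPF-like numbers, e-mails).
# Real anonymization needs far more (names, addresses, human review); the
# regex patterns here are deliberately minimal assumptions.
import re

CPF_RE = re.compile(r"\b\d{3}\.?\d{3}\.?\d{3}-?\d{2}\b")
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def scrub(text: str) -> str:
    text = CPF_RE.sub("[CPF]", text)
    return EMAIL_RE.sub("[EMAIL]", text)

def preprocess(docs: list) -> list:
    seen, cleaned = set(), []
    for doc in docs:
        key = " ".join(doc.split()).lower()   # normalize whitespace/case
        if key not in seen:                   # drop exact duplicates
            seen.add(key)
            cleaned.append(scrub(doc))
    return cleaned

print(preprocess(["Party CPF 123.456.789-01, contact a@b.com",
                  "party  cpf 123.456.789-01, contact a@b.com"]))
```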
As a result, legal databases are transformed into algorithmic environments, ceasing to be mere passive repositories and becoming dynamic infrastructures for inference, institutional memory, and public or corporate intelligence. Without algorithms, there is only a mass of documents; with their application, there is structure, meaning, criticality, and decision-making potential. However, this transformation is only legitimate if the algorithms are: explainable, so that one can understand how they perform classifications, recommendations, or exclusions; auditable, so that errors and biases can be corrected; and controllable, so that they do not replace human judgment but strengthen it with empirical evidence.
3.3. How statistics operate preliminarily in the construction of the algorithm
Regarding the intersection between statistics and algorithms in the legal context, it is crucial to emphasize that statistical reasoning serves as the foundation for building any algorithmic system. Initially, statistics plays three crucial roles: first, it guides data selection and curation, defining which variables will be considered, which will be excluded, how they will be normalized, and how they will be distributed. This process is vital to avoid distortions that could compromise the integrity of analyses, such as the overrepresentation of irrelevant categories or the underrepresentation of vulnerable groups. In legal data environments, this implies a careful selection among case law, contracts, opinions, and regulations, which will serve as primary sources for algorithmic learning.
Second, after defining the problem—be it predicting legal risks, classifying decisions, or identifying inconsistencies—statistics is responsible for mapping which interactions between variables should be modeled. This task can range from correlations and conditional probabilities to logistic regressions or other statistical models that will later be converted into algorithmic language. Thus, statistics establishes the inferential logic that will be adopted by the algorithm in question.
Finally, it’s worth noting that statistics also allows us to assess the adequacy of models in relation to the data type and the complexity of the proposed challenge— whether linear models, decision trees, classifiers, or probabilistic networks.
Furthermore, it provides performance metrics, such as accuracy and sensitivity, that guide the initial calibration process, ensuring that the algorithm learns consistently, avoiding pitfalls such as overfitting or bias.
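These calibration metrics can be illustrated with a short sketch: a model is trained on synthetic data, accuracy and sensitivity (recall) are measured on a held-out test set, and a large gap between training and test accuracy is treated as a crude overfitting warning. The data and the gap threshold are assumptions.

```python
# Sketch of the calibration step described above: hold out test data, report
# accuracy and sensitivity (recall), and compare train vs. test accuracy as a
# crude overfitting signal. Data are synthetic; thresholds are assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score

X, y = make_classification(n_samples=400, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
train_acc = accuracy_score(y_tr, model.predict(X_tr))
test_acc = accuracy_score(y_te, model.predict(X_te))
sensitivity = recall_score(y_te, model.predict(X_te))

print(f"train accuracy {train_acc:.2f} | test accuracy {test_acc:.2f} "
      f"| sensitivity {sensitivity:.2f}")
if train_acc - test_acc > 0.10:   # illustrative gap threshold
    print("Warning: possible overfitting; review features or regularization.")
```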
In short, statistics is not limited to being a post-processing application of the algorithm; it is, in fact, the language that underpins its rationale. It defines the data to be used, the organizational logic, and the decision-making parameters. Without the statistical structure, the algorithm is reduced to mere code devoid of content: a mechanism devoid of criteria, a structure lacking epistemology. Furthermore, artificial intelligence itself, in its multiple manifestations, can be used to create new algorithms, especially in systems involving deep learning, self-tuning, and evolutionary programming. This reality represents a significant epistemological transformation, as it challenges the traditional conception that only humans are capable of consciously and deliberately designing algorithmic logic.
3.4. Artificial intelligence in the construction of algorithms: when the system learns to design logic
In the era of self-reflective artificial intelligence, computer systems transcend the mere execution of predetermined commands and enter a new stage of operational autonomy, in which they reconfigure their own algorithms based on iterative mechanisms of performance evaluation, environmental contextualization, and statistical debugging.
Thus, a paradigmatic shift is observed: the algorithm, once conceived as a rigid structure, becomes a malleable and contingent entity, shaped by the confluence of statistical inference, continuous learning, and a constant flow of data. We are facing a new epistemology of automated decision-making, which profoundly alters the foundations of control, predictability, and accountability in both the public and private spheres.
In the most sophisticated domains of machine learning, with particular emphasis on automated machine learning (AutoML), a structural shift in the role assigned to artificial intelligence is occurring. Instead of simply executing models previously designed by human agents, artificial intelligence now actively and autonomously intervenes in the engineering of inference systems. In these contexts, artificial intelligence performs functions such as careful variable selection, strategic model selection, statistical parameter optimization, and even the creation of new adaptive logic structures. This advancement is enabled by a triad of interdependent mechanisms that constitute the core of self-organized computational learning.
The first of these mechanisms is meta-learning, a process through which artificial intelligence learns to learn—that is, it acquires the ability to identify, based on accumulated experience, which algorithms perform best in given contexts, dynamically adjusting to the peculiarities of each database. The second mechanism is AutoML itself, in which artificial systems are designed to design, test, and calibrate their own models, guided by statistical performance criteria and predictive efficiency metrics. Finally, Neural Architecture Search (NAS) emerges, a refinement modality in which neural networks are used to design other neural networks, optimizing the architecture of models through the intelligent reorganization of their layers and internal connections.
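The selection-by-performance loop at the heart of these mechanisms can be shown in miniature. The sketch below is a deliberately simplified stand-in for AutoML: it compares a few candidate models by cross-validation and keeps the best scorer. Real AutoML, meta-learning, and NAS pipelines are far more elaborate; only the statistical selection logic is illustrated.

```python
# Deliberately simplified stand-in for the AutoML idea above: compare a few
# candidate models by cross-validation and keep the best scorer. Real AutoML,
# meta-learning and NAS pipelines are far more elaborate; this only shows the
# selection-by-statistical-performance loop.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "-> selected:", best)
```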
This new level of algorithmic autonomy poses unprecedented challenges to traditional control and accountability structures, as it shifts the decision-making center from human engineering to systems that iteratively improve based on data and statistics. Therefore, it becomes imperative to rethink the normative and epistemological foundations of technological governance, otherwise the institutional anchoring of decision-making processes mediated by artificial intelligence will be compromised.
In this scenario, the growing technical autonomy of artificial intelligence in building its own algorithms requires the formulation of new paradigms of control, responsibility, and traceability, capable of preserving the structuring principles of the rule of law, especially regarding accountability and technological governance. This evolution does not imply the abdication of human oversight or the weakening of transparency requirements. On the contrary, it intensifies the need for robust regulatory protocols that ensure the auditability of systems and the intelligibility of their automated decisions.
It is against this backdrop that cutting-edge regulatory instruments, such as the European Union’s AI Act—notably in its Articles 9, 17, and 25, which address risk management, internal governance systems, and continuous performance monitoring, respectively—and the NIST AI Risk Management Framework (2023), with emphasis on Section 3.1, establish guidelines that reaffirm the essential nature of rigorous documentation and algorithmic traceability mechanisms. These regulatory instruments recognize that, given the growing technical autonomy of artificial intelligence systems in building their own models, it is crucial to ensure the maintenance of audit trails that allow for the accurate and transparent reconstruction of the paths through which certain inferences, structures, or decisions were reached. Such audit trails should contain, in an accessible and technically validated format, information on the data used, the criteria for selecting and combining models, the adjusted parameters, and the testing methods employed, enabling external verification, regulatory oversight, and objective accountability. It is, in essence, about ensuring the reliability, validity and robustness of automated decisions, as required by the aforementioned frameworks, as an indispensable condition for its technical-legal legitimacy and the preservation of the structuring values of the Rule of Law 13.
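In the same spirit, the sketch below shows one possible shape for such an audit-trail entry: a hash of the data used, the model and hyperparameters chosen, the selection criterion, and the test metrics, serialized in a verifiable record. The field names are assumptions of this essay, not the literal schema of the AI Act or the NIST AI RMF.

```python
# Illustrative audit-trail entry in the spirit of the documentation duties
# discussed above. Field names are assumptions, not the literal schema of the
# AI Act or the NIST AI RMF; the point is a reconstructible, verifiable record.
import hashlib
import json
from datetime import datetime, timezone

def audit_entry(dataset_bytes: bytes, model_name: str, params: dict,
                selection_criterion: str, test_metrics: dict) -> str:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "dataset_sha256": hashlib.sha256(dataset_bytes).hexdigest(),
        "model": model_name,
        "hyperparameters": params,
        "selection_criterion": selection_criterion,
        "test_metrics": test_metrics,
    }
    return json.dumps(entry, indent=2)

print(audit_entry(b"...exported training data...", "random_forest",
                  {"n_estimators": 100}, "mean 5-fold CV accuracy",
                  {"accuracy": 0.91, "recall": 0.88}))
```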
From a legal perspective, this new topography of artificial intelligence demands the implementation of permanent safeguards, including: continuous human curation, which ensures responsible monitoring of the critical phases of training, adjustment, and deployment of models; external and cross-validation systems, which allow independent verification of the consistency and legitimacy of the generated algorithms; and explainability mechanisms, designed to enable the logical reconstruction of inferences and decisions produced automatically. These requirements are not merely technical, but express constitutional demands of legality, motivation, and control, with a particular impact on the public and regulatory spheres.
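The following Python sketch, again over synthetic data, illustrates two of these safeguards in their simplest technical form: cross-validation as an independent consistency check and permutation importance as a rudimentary explainability mechanism. It is one possible instantiation under assumed parameters, not a complete compliance solution.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=400, n_features=10, random_state=1)
    model = RandomForestClassifier(random_state=1)

    # Cross-validation: performance estimated on folds the model never saw during fitting.
    scores = cross_val_score(model, X, y, cv=5)
    print("cross-validated accuracy:", scores.mean().round(3))

    # Explainability: which inputs the fitted model actually relies on,
    # supporting the logical reconstruction of its inferences.
    model.fit(X, y)
    importance = permutation_importance(model, X, y, n_repeats=10, random_state=1)
    for idx in importance.importances_mean.argsort()[::-1][:3]:
        print(f"feature {idx}: importance {importance.importances_mean[idx]:.3f}")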
Ultimately, the use of artificial intelligence to build algorithmic structures cannot take place outside a normative architecture anchored in public rationality, at the risk of eroding institutional trust and the very legality of contemporary decision-making systems.
This finding gains particular relevance when one considers that, although the General Data Protection Law (LGPD) and the main international regulatory frameworks explicitly refer to automated decisions as a legally sensitive category, the requirement for auditability must be expanded—and deepened—with regard to algorithms that structure legal databases, even if they are not directly linked to individual decisions. The reason is conceptual: such algorithms, although they do not make decisions, shape the decision space. They determine what will be indexed, grouped, highlighted, or interpreted as recurring or atypical. In this way, they silently shape the cognitive horizons of legal, regulatory, and corporate agents, influencing the very construction of normativity.
In this sense, international reference standards, such as the European Union’s AI Act (2024, art. 17 and Recitals 10 and 12), the UNESCO Recommendation on the Ethics of Artificial Intelligence (2021, arts. 35 to 41), and the NIST AI Risk Management Framework (2023), clearly establish that algorithmic systems intended for the structuring of institutional knowledge—even if they do not exercise decision-making power—must be traceable, auditable, and subject to continuous oversight, especially human oversight. This guideline applies with particular acuity to legal databases, which classify case law, organize contracts, create decision-making categories, and consolidate interpretative foundations used in judicial, administrative, and regulatory settings.
Additionally, it is worth noting that the OECD Recommendation on Artificial Intelligence (2019, revised in 2024) establishes the guideline that algorithmic systems designed to organize data affecting the public sphere must comply with requirements of explainability, traceability, and responsible governance, even if they do not make final or binding decisions on individuals. The normative concern transcends the specific result of automated inferences, encompassing the structural configuration of the interpretative field in which such systems operate. Indeed, when automated computational logic is mobilized to structure institutional memory through classificatory filters, informational hierarchies, and taxonomic schemes applicable to legal data—contracts, precedents, normative foundations—it directly impacts the formation of public rationality. And, whenever institutional rationality is produced, as a corollary of legality and administrative morality, the duty to provide reasoning, the possibility of critical reconstruction, and the presence of institutional mechanisms of control and accountability are imposed. It is therefore a question of ensuring that artificial intelligence systems do not operate in opaque zones of informational power, but are subject to a regime of transparency compatible with the structuring principles of the contemporary rule of law.
Thus, legal databases should be understood as cognitive infrastructures with a high normative density, whose governance demands transparency and auditability criteria comparable to those applicable to automated decisions. Algorithmic accountability, at this level, transcends mere access to information and is linked to the requirement to understand, review, and justify the logical structures that guide legal production and shape institutional action. Ultimately, it is about ensuring that public rationality remains anchored in accessible, verifiable foundations that are compatible with the tenets of the Democratic Rule of Law.
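To make the point concrete, the sketch below groups a handful of invented decision summaries with standard text-mining tools and, crucially, records how that grouping was produced; the corpus, parameters, and log fields are hypothetical, serving only to show that the structuring of a legal database can itself be documented for later audit.

    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    # Invented decision summaries standing in for entries of a legal database.
    decisions = [
        "appeal dismissed for lack of standing",
        "appeal granted, precedent on standing applied",
        "contract annulled for violation of bidding rules",
        "bidding procedure upheld, no irregularity found",
    ]

    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(decisions)

    # The grouping step: this choice silently shapes how the corpus will be read.
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
    labels = kmeans.fit_predict(X)

    # Structuring log: records which tools and parameters produced the grouping,
    # so the classification itself can be reconstructed and audited.
    structuring_log = {
        "vectorizer": "TfidfVectorizer (default parameters)",
        "clustering": "KMeans(n_clusters=2, random_state=0)",
        "vocabulary_size": len(vectorizer.vocabulary_),
        "assignments": dict(zip(decisions, labels.tolist())),
    }
    for key, value in structuring_log.items():
        print(key, ":", value)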
4. From Linear Thinking to Systemic Intelligence
Reflecting on the world of artificial intelligence and structured databases requires going beyond the linear logic we typically employ. In this context, we encounter dynamic systems characterized by constant feedback and an inherent unpredictability. Decision-making, whether in the legal, political, economic, or administrative spheres, must therefore be guided by three fundamental principles: the interdependence of the various elements involved, a solid statistical foundation supported by artificial intelligence tools, and participation in continuous cycles that encompass data, inferences, learning, and endless updating.
Edgar Morin reminds us that only a way of thinking that considers the interrelationships of elements and the contradictions that arise from them will be able to capture the essence of reality. From this perspective, artificial intelligence is not limited to being an isolated technology; it actually represents a profound reconfiguration of knowledge and power within an interconnected network. This understanding must be enriched by the recognition that statistics, in this context, presents itself not simply as a set of numbers, but as an intelligence that reveals patterns and trends crucial to our understanding of the world.
4.1. Tension as a Structural Matter
The intersection between transparency and privacy, as well as between predictability and the speed of change, requires us to recognize the importance of treating these elements not as opposites, but rather as integral parts of a complex system. Complex thinking, by its very nature, does not seek to promote a simplistic synthesis; rather, it invites us to embrace and live with the ambivalence that characterizes these relationships.
In this context, artificial intelligence must be governed by parameters that go beyond simple standards. It is essential that we consider epistemic, ethical, and technical dimensions, which are not intended to eliminate dissent, but rather to create an environment in which this dissent can manifest itself in an auditable and controllable manner. This approach is fundamental to guaranteeing the legitimacy of the processes involved and ensuring a plurality of voices.
Ultimately, the goal is to build a space where complexity is not only tolerated but celebrated. This space should allow for the coexistence of different perspectives, thus enriching our understanding of the dynamics at work. By doing so, we can foster a deeper and more comprehensive understanding of the issues that surround us.
4.2. The Decision as a Composite Object
In the current context of decisions influenced by artificial intelligence, it is crucial to emphasize that the act of decision-making goes beyond simply choosing between normative alternatives. At its core, it is a complex epistemological process that requires consideration of four main elements. These elements include a rigorous logical path, the development of robust structural documentation, the provision of accessible public explanations, and the definition of shared responsibility among the different agents involved.
I call this approach composite decision-making, in which data, algorithms, context, and values interact to form a singularly complex operational unit. This concept not only reflects modern demands for governance and transparency but also highlights the importance of an appropriate regulatory framework to regulate the use of emerging technologies in the decision-making process. It is crucial to ensure that the decisions made are not only effective but also ethical and fair.
Thus, the integration of these components becomes essential to strengthening public trust in institutions operating under this new decision-making logic. Only through this approach can we envision a future in which technology and ethics work hand in hand, contributing to a more equitable and well-informed society.
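As a hedged illustration of what such a composite decision might look like when documented, the Python sketch below maps the four elements mentioned above onto explicit fields of a record; the structure and values are invented for this example and do not reflect any prescribed standard.

    import json
    from dataclasses import asdict, dataclass
    from typing import Dict, List

    @dataclass
    class CompositeDecision:
        logical_path: List[str]             # the ordered inference steps behind the outcome
        documentation_refs: List[str]       # structural documentation: datasets, audit trails
        public_explanation: str             # explanation accessible to the affected person
        responsible_agents: Dict[str, str]  # shared responsibility among the agents involved
        outcome: str

    decision = CompositeDecision(
        logical_path=[
            "retrieve precedents matching the factual pattern",
            "rank them by a documented similarity score",
            "submit the ranking to a human reviewer for the final act",
        ],
        documentation_refs=["dataset:case-law-2024-q1", "audit-trail:entry-0042"],
        public_explanation="The suggestion relied on three binding precedents on the same issue.",
        responsible_agents={"model_operation": "court IT unit", "final_decision": "reporting judge"},
        outcome="suggestion accepted with written reasons",
    )

    # Serializing the record keeps data, algorithm, context, and responsibility together.
    print(json.dumps(asdict(decision), indent=2))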
4.3. Ethics as a Form of Structure
In reflecting on the intersection between ethics and technology within complex thinking, it is crucial to emphasize that ethics should not be understood as a mere adornment or accessory to technology. On the contrary, ethics emerges as the condition of legitimacy that underpins the entire decision-making process. In this sense, it is essential that the decision-making structure respect three criteria: first, technical validity, which ensures the coherence and effectiveness of the actions performed; second, epistemic traceability, which provides verification and transparency of the paths that led to a certain conclusion; and, last but not least, ethical justification, which ensures that decisions are in accordance with moral and social precepts.
In such a context, complexity ethics proposes an approach that demands continuous reflexivity, articulated with a process of systematic self-correction. This approach not only promotes plural participation but also emphasizes the importance of attentively listening to marginalized voices, errors, and the excluded. Thus, the challenge is to build a decision-making space that, in addition to respecting, seeks to incorporate the diversity of perspectives and experiences. It is essential that this space recognize the importance of maintaining a dialogue that transcends the conventional limits of technology and rationality, allowing for a more harmonious coexistence between these diverse elements.
4.4. The Culture of Integration and the Right to Understanding
The need to cultivate an institutional culture focused on integration becomes increasingly pressing in the face of contemporary transformations. Harmonious alignment is crucial among the various elements that constitute the complex social web, encompassing the integration of data and rights, algorithms and values, statistics and narratives, as well as systems and individuals. This interconnection goes far beyond the simple right to information; it is the right to understanding. Each individual must have the ability to decipher the mechanisms that include or exclude them, that anticipate or govern them, especially in a context where systems based on artificial intelligence, judicial decisions, actions by public authorities, regulations, statistical analyses, databases, and democratic processes are applied.
In this scenario, promoting and strengthening an integrative culture should not be seen simply as an option, but as an ethical imperative. This dynamic is fundamental to ensuring harmonious and respectful coexistence among the diverse subjects and multiple instances that interact in an environment of constant technological evolution. Therefore, building a space where transparency and accountability become essential pillars is a challenge that cannot be postponed. This task requires the collaboration of all social actors, thus creating an environment conducive to dialogue and mutual understanding.
This space for exchange is vital for the consolidation of a future that not only respects human dignity but also affirms the fundamental rights of every individual.
The pursuit of this integration is a decisive step so that we can collectively move towards a more fair and inclusive scenario.
5. The Culture of Precedents as an Infrastructure for Rational, Traceable, and Coherent Decision-Making
Regarding the implementation of decision-making mechanisms that meet rigorous traceability and transparency criteria, seamlessly integrating with artificial intelligence and grounded in meticulously structured databases, it is imperative to recognize that such an endeavor presupposes the consolidation of an institutional culture focused on precedents. This culture, which presents itself as an essential pillar, cannot be established by merely imposing norms or by isolated case law decisions; on the contrary, it requires a true epistemological and pedagogical revolution in the education, updating, and training of legal professionals.
Although theoretical works addressing the topic of precedents are highly sophisticated and erudite, it must be stated that they are insufficient if the cognitive and symbolic structure of jurists does not undergo a significant transformation. The lack of a technical, methodical, and philosophical understanding of the precedent system not only hinders its practical assimilation but also condemns it to superficiality or manipulation in the service of momentary interests. In this sense, building a legal environment that values and implements precedents requires a deep and sustained commitment that transcends the mere adoption of practices and is rooted in the institutional culture of law.
5.1. Systemic Cultural Insertion
The integration of precedent culture into the Brazilian legal framework is an extremely important issue, requiring in-depth and comprehensive reflection that goes beyond the superficiality of the debate. This integration should not be approached in isolation, but rather should manifest itself in a structural and systematic manner, fostering the emergence of a new educational paradigm. To this end, this topic must be mandatory in law school curricula, with clear guidelines defined by the Ministry of Education. This step is crucial to ensuring robust and cross-disciplinary training, capable of encompassing the essence of the phenomenon of precedent.
Furthermore, it is essential that the ongoing and specialized training of judges, public defenders, members of the Public Prosecutor’s Office, and lawyers, both public and private, prioritize the culture of precedents. This training should be especially fostered in educational institutions focused on the judiciary, the schools of the courts of accounts, and centers of legal excellence, which play a crucial role in preparing legal practitioners.
Moreover, postgraduate programs and the core disciplines that comprise legal education, including areas such as constitutional law, procedural law, general legal theory, and the various branches of administrative, criminal, civil, tax, and environmental law, must incorporate the precedent approach as a central axis for reflection and study.
Finally, it is vital that the culture of precedents be treated as a transdisciplinary discipline, fostering the integration of multiple areas of knowledge. It is necessary to encompass aspects such as argumentative logic, decision theory, legal statistics, judicial ethics, hermeneutic theory, and the fundamentals of artificial intelligence applicable to law. This articulation will create a network of knowledge, enriching contemporary legal practice. Therefore, adopting this model will provide a more solid and coherent education, in addition to encouraging a legal practice that values precedent decisions, ensuring greater legal certainty and predictability in social relations.
5.2. Precedent as a Framework and not as an Isolated Norm
Regarding the relevance of precedent in the contemporary legal context, it is important to emphasize that it is not limited to a mere decision-making norm of the past, but rather reveals itself as a dynamic structure of institutional articulation, capable of promoting systemic predictability and rational social control of decisions issued by the Judiciary. Thus, precedent emerges as a shared language, constituting the foundation for institutional learning within the judiciary. It is, therefore, the grammar that sustains intersubjective rationality within the legal system.
Without proper understanding and application of this grammar, the very use of artificial intelligence in the legal field could result in an unstable, incoherent, and, therefore, unfair system. This is because predictive models, databases, and traceability tools will be subject to decisions that, by their nature, may be ad hoc, contradictory, opaque, or even arbitrary. In this sense, the construction of a solid and reliable legal framework depends on strict adherence to precedents, which ultimately confer stability and legitimacy on the jurisdictional act, thus promoting an environment of trust and legal certainty that is essential for harmonious social coexistence.
5.3. Precedent Culture as the Foundation of Complex Justice
The importance of a culture of precedents in contemporary law deserves to be emphasized, as its adoption proves essential for building a justice system that aims not only to be efficient but also ethical and accessible. It is crucial to understand that judicial precedents form the basis for the development of consistent and predictable decisions, which goes beyond a mere normative whim. In fact, they are an indispensable condition for the consolidation of a fully functioning democratic state governed by the rule of law.
The absence of a robust culture of precedents undermines society’s trust in institutions, weakening the pillars of equal treatment and predictability in judicial decisions. On the other hand, adopting this culture allows the legal system to transition seamlessly between tradition and innovation. This dynamic is crucial to enabling interaction with emerging technologies, such as artificial intelligence and data analytics models, without compromising the fundamental principles that guide legal practice.
Furthermore, it is essential that the culture of precedents be seen as a symbolic infrastructure that not only supports the law but also projects it into the future. This perspective ensures that justice is aligned with democratic values and the fundamental rights enshrined in the Constitution. Therefore, building a legal framework that values predictability and transparency in judicial decisions is a decisive step toward building a more just and equitable society. In this context, the articulation between the tradition of precedents and technological innovation must occur harmoniously, always respecting the principles of due process and ethics that should guide all judicial deliberations.
6. Public Data Infrastructure, Interinstitutional Agreements and National Artificial Intelligence Culture
The upcoming decision-making transformation, driven by innovations in artificial intelligence, statistics, and structured database management, demands significant care. For this change to occur effectively, it is essential to create a robust institutional and intersectoral architecture, which requires not only substantial public investment but also effective collaboration between the government, universities, private institutions, oversight bodies, and civil society organizations.
This issue transcends mere technical aspects; it constitutes an essential public policy, which seeks to build a new national ecosystem focused on rational decision-making. In this scenario, inter-institutional cooperation must be valued, as true transformation depends on the capacity for dialogue and the integration of diverse knowledge and experiences. This will enable the construction of a future in which decisions are based on solid data and careful analysis.
Finally, it is vital that there be a collective commitment that translates into concrete actions, not mere promises. Only then will Brazil be able to reap the rewards of this new era, marked by the effective use of data and artificial intelligence.
6.1. Education as a Pillar of Algorithmic Sovereignty
Regarding our country’s information sovereignty, it is essential to emphasize that the ability to develop predictive models, create evidence-based public policies, and protect our own inference criteria is inextricably linked to an educational revolution grounded in data science and artificial intelligence. Therefore, there is a need to promote curricular inclusion that addresses disciplines such as statistics, data science, and artificial intelligence in courses related to the legal, administrative, economic, and social fields.
Furthermore, it is essential to encourage graduate studies and applied research, focusing on the analysis of public data, legal information banks, digital governance platforms, and algorithmic models that support decision-making. Likewise, funding and prioritizing transdisciplinary research lines at the national level are essential. These lines should integrate the fields of law, computing, ethics, political science, sociology, and computational linguistics.
Therefore, it cannot be considered an exaggeration to state that the construction of this educational framework is a sine qua non condition for advancing the strengthening of the State’s informational autonomy. This autonomy, in turn, is crucial to ensuring more efficient public management aligned with society’s contemporary demands.
6.2. Public Agreements and Shared Data Governance
Regarding the role of interinstitutional agreements in strengthening the educational foundation, it is crucial to emphasize partnerships between public and private entities. These partnerships are key tools for accelerating the development of structured, integrated, and reliable databases. To this end, the policy to be developed must encompass several interconnected aspects, aiming to achieve effective transformation in the education sector.
First, a synergistic integration between judicial case law, administrative precedents, and judicial and extrajudicial agreements must be promoted. This integration will facilitate an analytical approach to normative and factual interconnections, enabling a deeper understanding of the dynamics involved. Simultaneously, the creation of public platforms is crucial to ensuring the traceability of administrative acts, public contracts, bids, and digital services. This initiative not only facilitates the monitoring of government actions but also contributes to the democratization of social control.
Furthermore, it is essential to encourage partnerships with the private sector and universities, aiming to develop open, auditable, and interoperable data platforms. This transparency and accessibility of information are essential for building a more efficient and participatory education system. Another relevant aspect is strengthening the investigative capacity of oversight and control bodies, which must be guaranteed through facilitated access to relevant data. Such information is essential for preventing fraud and combating mismanagement.
Finally, it is necessary to address the challenge posed by the underground market for parallel databases. To this end, a data policy based on principles of legality, ethics, transparency, and equity must be established, thus ensuring integrity and accountability in the actions undertaken. By articulating these guidelines, a significant advance in public governance can be achieved, promoting a more integral and responsible educational environment that will undoubtedly benefit society as a whole.
6.3. Data as Democratic Infrastructure
The growing relevance of data in public administration deserves in-depth reflection, since, in addition to being considered a technical resource, they play the role of fundamental democratic infrastructure. Their importance is evident in the creation of an essential link between social oversight and effective citizen participation. The systematic documentation of administrative information and decisions made is crucial for holding institutions accountable, in addition to being a determining factor in strengthening public trust.
In a contemporary scenario characterized by the rise of artificial intelligence, data quality goes beyond a simple representation of reality. Data becomes an active agent in the construction of institutional reality, shaping the contours of the justice we seek to achieve. In this context, data management is a fundamental pillar for promoting transparency and accountability. Therefore, it is essential that institutions adopt an ethical commitment to the handling and use of this data.
Therefore, it is necessary to broaden the reflection on the mechanisms that guarantee the integrity and veracity of the information collected. Only through this constant vigilance will it be possible to ensure that public administration meets the precepts of a truly democratic society, where access to information and institutional accountability are core values.
Conclusion: A new decision-making architecture
The emergence of a new decision-making framework stands out for its systemic, transparent, interdisciplinary, and, above all, traceable nature. It is crucial to understand that this framework is based on robust pillars, such as auditable artificial intelligence, the ethical application of statistics, and interoperability between structured databases. The culture of precedents also presents itself as an essential foundation of rationality in the decision-making process. However, it is important to emphasize that this architecture is not restricted to a technical innovation; it is a civilizational challenge that requires a new ethical and cultural approach.
The expansion of forecasting and control tools, which is achieved through algorithms, predictive models, and statistical analysis, invokes a new ethos of responsibility that cannot be ignored. In this context, transparency emerges as an essential value, configuring itself as the infrastructure of legitimacy. This transparency must be understood in its entirety: it must be documented, comparable, legible, actionable, and robust, as advocated by the CLeAR Documentation Framework.14 Documenting therefore reveals itself as synonymous with accountability, active listening, and humanization of the machine, becoming an imperative in this new algorithmic rationality.
It is also essential to consider that the Brazilian legal system, in particular, faces resistance to adopting the culture of precedents, which is still seen as mere theory, lacking effective practical substance. To achieve sustainable predictability and institutional integrity, this approach must be included in academic curricula, Ministry of Education (MEC) programs, and the training of judges. Furthermore, it is urgent to strengthen structural partnerships between universities and the public and private sectors, with the goal of building large, integrated databases that accurately feed the State’s decision-making systems.
This movement must be realized both in the educational sphere, through the insertion of relevant content in curricula and investment in postgraduate studies, and in institutional practice, promoting interoperability between administrative, judicial, and extrajudicial decisions. Effective social control over contracts, bids, and public acts is also necessary. In this scenario, the concept of transformative compliance emerges as a fundamental element. This concept transcends mere observance of standards, proposing, in fact, a cultural reconfiguration of institutions. Among its principles are internal listening, ethical empathy, symbolic self-regulation, and the capacity for continuous transformation.
Structuring documentation, rigorous data processing, auditability of algorithms, and the legitimacy of precedents will only make sense if embedded within a dynamic, self- critical, and regenerative institutional culture. This culture must not only defend itself but also constantly renew itself. Therefore, transformative compliance emerges as the link between the technical structure and the institutional essence, representing the ethical intelligence that guides artificial intelligence, the integrity that organizes data, and the trust that underpins predictability.
We are, therefore, on the verge of a historic inflection point. Justice, public administration, and organizations will be compelled to operate with increased intelligence, continuous documentation, and connected accountability. The new decision-making architecture, far from representing merely a technological advancement, is, above all, a cultural transformation that requires a continuous commitment to evolution and innovation.
by:
Fabio Medina Osório 1.
References
ARRIETA, Alejandro Barredo et al. Explainable Artificial Intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, v. 58, p. 82–115, 2020. DOI: https://doi.org/10.1016/j.inffus.2019.12.012. Accessed on: May 14, 2025.
BRAZIL. Constitution of the Federative Republic of Brazil of 1988. Brasília, DF: Federal Senate, 1988.
BRAZIL. Law No. 13,105, of March 16, 2015. Code of Civil Procedure. Official Gazette of the Union: section 1, Brasília, DF, March 17, 2015.
BRAZIL. Law No. 13,709, of August 14, 2018. General Personal Data Protection Law (LGPD). Official Gazette of the Union: section 1, Brasília, DF, August 15, 2018.
CHMIELINSKI, Kasia et al. The CLeAR Documentation Framework for AI Transparency: recommendations for practitioners and context for policymakers. Cambridge, MA: Shorenstein Center on Media, Politics and Public Policy, Harvard Kennedy School, 2024. Available at: https://shorensteincenter.org/clear-documentation-framework-ai-transparency-recommendations-practitioners-context-policymakers/. Accessed on: June 1, 2025.
EUROSTAT. European Statistics Code of Practice. Luxembourg: European Statistical System, 2017. Available at: https://ec.europa.eu/eurostat/documents/4031688/8971242/KS-02-18-142-PT-N.pdf. Accessed on: May 28, 2025.
INTERNATIONAL MONETARY FUND – IMF. Data Quality Assessment Framework (DQAF). Washington, DC: IMF, 2012. Available at: https://dsbb.imf.org/dqrs. Accessed on: May 28, 2025.
KERDVIBULVECH, Chutisant. Big data and AI-driven evidence analysis: a global perspective on citation trends, accessibility, and future research in legal applications. Journal of Big Data, v. 11, n. 180, 2024. DOI: https://doi.org/10.1186/s40537-024-01046-w. Accessed on: May 14, 2025.
KÜÇÜK, Dilek; CAN, Fazli. Computational law: datasets, benchmarks, and ontologies. arXiv preprint, arXiv:2503.04305, 2025. Available at: https://arxiv.org/abs/2503.04305. Accessed on: June 1, 2025.
MORIN, Edgar. Method 6: Ethics. Barcelona: Editorial Seix Barral, 2015.
MORIN, Edgar. Introduction to complex thinking. Translated by Marcelo Pakman. 8th reprint. Barcelona: Editorial Gedisa, 2005.
MORIN, Edgar. The head is good: rethinking the reform, reforming the thinking. Translated by Paula Mahler. 1st ed., 5th reprint. Buenos Aires: Nueva Visión, 2002.
NIST. AI Risk Management Framework. Gaithersburg: National Institute of Standards and Technology (NIST), 2023. Available at: https://www.nist.gov/itl/ai-risk-management-framework. Accessed on: May 20, 2025.
OECD. Recommendation of the Council on Artificial Intelligence. 2019. Revised in 2024. Available at: https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449. Accessed on: May 20, 2025.
ORGANIZATION FOR ECONOMIC COOPERATION AND DEVELOPMENT – OECD. Recommendation of the Council on Enhanced Access to and Sharing of Data. Paris: OECD, 2019. Available at: https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0463. Accessed on: May 28, 2025.
PADIU, Bogdan; IACOB, Radu; REBEDEA, Traian; DASCALU, Mihai. To what extent have LLMs reshaped the legal domain so far? A scoping literature review. Information, Basel, v. 15, n. 11, art. 662, 2024. Available at: https://doi.org/10.3390/info15110662. Accessed on: June 1, 2025.
EUROPEAN UNION. AI Act – Artificial Intelligence Act. Brussels: European Parliament, 2024. Available at: https://eur-lex.europa.eu. Accessed on: May 20, 2025.
EUROPEAN UNION. Regulation (EC) No 223/2009 of the European Parliament and of the Council of 11 March 2009 on European statistics. Official Journal of the European Union, L 87, p. 164–173, March 31, 2009. Available at: https://eur-lex.europa.eu/legal-content/PT/TXT/?uri=CELEX:32009R0223. Accessed on: May 28, 2025.
UNESCO. Recommendation on the Ethics of Artificial Intelligence. Paris: UNESCO, 2021. Available at: https://unesdoc.unesco.org. Accessed on: May 20, 2025.
UNITED NATIONS. Fundamental Principles of Official Statistics. New York: United Nations Statistics Division, 2014. Available at: https://unstats.un.org/unsd/dnss/gp/fundprinciples.aspx. Accessed on: May 28, 2025.
Footnotes
1. Full partner of Medina Osório Advogados. Former Minister of the Attorney General’s Office. PhD in Administrative Law from the Complutense University of Madrid (Spain). Master’s in Public Law from the Federal University of Rio Grande do Sul (UFRGS). Chairman of the Special Committee on Administrative Sanctioning Law of the Federal Council of the Brazilian Bar Association (2019–2022, 2022–2025, and 2025– ongoing). President of the International Institute for State Law Studies (IIED). Associate professor in the master’s and doctoral programs at the Federal University of Rio Grande do Sul School of Law.
2. In this context, the following constitutional provisions stand out: art. 5, items X, XII, XIV and XXXIII; art. 37, § 3, item II; art. 93, item IX; art. 102, § 2; and art. 103-B, § 3, all from the Constitution of the Federative Republic of Brazil.
3. Regarding the theory of precedents, excellent work has been produced in Brazil. See: FUX, Luiz; MENDES, Aluisio Gonçalves de Castro; FUX, Rodrigo. Brazilian precedent system: main characteristics and challenges. Electronic Journal of Procedural Law – REDP, Rio de Janeiro, v. 23, n. 3, p. 221–237, Sep./Dec. 2022. Available at: https://www.redp.uerj.br. Accessed on: May 14, 2025; TAUK, Caroline Somesom; SALOMÃO, Luis Felipe. Artificial intelligence in the Brazilian Judiciary: an empirical study on algorithms and discrimination. Diké – Journal of the Graduate Program in Law at the State University of Santa Cruz, Ilhéus, v. 22, n. 23, p. 2–32, Jan./Jun. 2023. Available at: https://periodicos.uesc.br/index.php/dike/article/download/3819/2419/. Accessed on: May 14, 2025; DIDIER JR., Fredie. Brazilian system of mandatory judicial precedents and the institutional duties of the courts: uniformity, stability, integrity and coherence of jurisprudence. Magazine of the Public Prosecutor’s Office of the State of Rio de Janeiro, Rio de Janeiro, n. 64, p. 135–147, Apr./Jun. 2017; and MARINONI, Luiz Guilherme. Mandatory precedents. 7th ed., rev., updated and expanded. São Paulo: Thomson Reuters Brazil, Courts Review, 2022.
4. Incidentally, this concept is consistent with the principles of purpose, transparency, and active accountability in the processing of personal data, as provided for in Article 6, items I, VI, and X, of the General Data Protection Law. Furthermore, it aligns with the guarantees granted to the data subject regarding the right to an explanation of automated decisions and to clear, facilitated, structured, and intelligible access to their personal data, as per Articles 9, 18, and 20 of the same law. These guidelines are also paralleled in Articles 15 to 22 of the European Union’s General Data Protection Regulation (GDPR), which require a legal basis, a legitimate purpose, and verifiable documentation, ensuring the data subject the right to access, review, contest, and understand automated decisions.
5. To better understand the growing functional complexity of contemporary law, see the article by KÜÇÜK, Dilek; CAN, Fazli. Computational law: datasets, benchmarks, and ontologies. arXiv preprint, arXiv:2503.04305, 2025. Available at: https://arxiv.org/abs/2503.04305. Accessed on: May 25, 2025. This work demonstrates the possibility of systematizing and optimizing the structuring of contemporary legal knowledge through technical-computational resources, including datasets, benchmarks, and ontologies. Legal ontologies are authentic conceptual maps that influence machine programming for reading decisions to be classified and tracked, which impacts document organization itself. Datasets are the legal databases themselves, enabling intelligent analysis of large volumes of mass-produced data. Benchmarks are standardized tests that enable the auditability of artificial intelligence.
6. See the relevant work by PADIU, Bogdan; IACOB, Radu; REBEDEA, Traian; DASCALU, Mihai. To what extent have LLMs reshaped the legal domain so far? A scoping literature review. Information, Basel, v. 15, n. 11, art. 662, 2024. Available at: https://doi.org/10.3390/info15110662. Accessed on: June 1, 2025. In this work, the authors analyze artificial intelligence models trained to evaluate large volumes of texts. To function properly, these models require normative databases and real databases with accurate document classifications. Among the aspects mentioned in the text, the document structuring of the databases stands out as essential for the proper functioning of artificial intelligence, emphasizing the structured classification of normative acts and decisions. In the case of decisions, these must be classified according to certain standards that allow for accurate indexing, according to transparent semantic values, in order to ensure measurability and traceability. It is important to assess the institutional and epistemological quality of legal data based on the programming of the algorithms used to structure the databases.
7. KERDVIBULVECH, Chutisant. Big data and AI-driven evidence analysis: a global perspective on citation trends, accessibility, and future research in legal applications. Journal of Big Data, v. 11, n. 180, 2024. DOI: https://doi.org/10.1186/s40537-024-01046-w. Accessed on: May 14, 2025.
8. In this regard, we cannot forget the provisions of Article 20 of the General Data Protection Law. On this point, see the article by ARRIETA, Alejandro Barredo et al. Explainable Artificial Intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, v. 58, p. 82–115, 2020. DOI: https://doi.org/10.1016/j.inffus.2019.12.012. Accessed on: May 14, 2025. In this highly relevant work, the authors highlight the essential nature of artificial intelligence audit parameters connected to the explainability of the conclusions adopted, based on consistency, fidelity, usefulness for the recipient, stability (without arbitrary variations), and methodological transparency. "To summarize the most commonly used nomenclature, in this section we clarify the distinction and similarities among terms often used in the ethical AI and XAI communities. • Understandability (or equivalently, intelligibility) denotes the characteristic of a model to make a human understand its function – how the model works – without any need for explaining its internal structure or the algorithmic means by which the model processes data internally [18]. • Comprehensibility: when it comes to a ML model, comprehensibility refers to the ability of a learning algorithm to represent its learned knowledge in a human understandable fashion [19, 20, 21]. This notion of model comprehensibility stems from the postulates of Michalski [22], which stated that "the results of computer induction should be symbolic descriptions of given entities, semantically and structurally similar to those a human expert might produce observing the same entities. Components of these descriptions should be comprehensible as single 'chunks' of information, directly interpretable in natural language, and should relate quantitative and qualitative concepts in an integrated fashion." Given its difficult quantification, comprehensibility is normally tied to the evaluation of the model complexity [17]. • Interpretability: it is defined as the ability to explain or to provide the meaning in understandable terms to a human. • Explainability: explainability is associated with the notion of explanation as an interface between humans and a decision maker that is, at the same time, both an accurate proxy of the decision maker and comprehensible to humans [17]. • Transparency: a model is considered to be transparent if by itself it is understandable. Since a model can feature different degrees of understandability, transparent models in Section 3 are divided into three categories: simulatable models, decomposable models and algorithmically transparent models [5]."
9. MORIN, Edgar. Method 6: Ethics. Barcelona: Editorial Seix Barral, 2015; MORIN, Edgar. Introduction to complex thinking. Translated by Marcelo Pakman. 8th reprint. Barcelona: Editorial Gedisa, 2005; and MORIN, Edgar. The head is good: rethinking the reform, reforming the thinking. Translated by Paula Mahler. 1st ed., 5th reprint. Buenos Aires: Nueva Visión, 2002. It is no coincidence that Morin proposes education as an essential pillar for ethical transformation in the cognitive and pluralistic sphere, enabling individuals to better deal with uncertainty and doubt. Morin is a reference in complex institutional listening. In this sense, he emphasizes the importance of a language that allows for an ever-increasing understanding of its recipients. Complex thinking must integrate the whole and its parts, and in this regard, the decision-making phenomenon cannot be understood, if we interpret this concept in the field of law, without the technological dimension that connects publicity, transparency, the duty to substantiate decisions, and their systematization, structuring, and technological organization in databases accessible to society.
10. EUROPEAN UNION. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 on Artificial Intelligence and amending Regulations (EU) 2022/2065 and (EU) 2022/1925 and Directive (EU) 2020/1828. Official Journal of the European Union, L 168, p. 1–254, 12 June 2024. Available at: https://eur-lex.europa.eu/legal-content/PT/TXT/?uri=OJ:L:2024:168:TOC. Accessed on: June 1, 2025; and NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY (NIST). Artificial Intelligence Risk Management Framework (AI RMF 1.0). Gaithersburg, MD: US Department of Commerce, Jan. 2023. NIST.AI.100-1. Available at: https://doi.org/10.6028/NIST.AI.100-1.
11. In the international context, there are standards that regulate the use of statistics. See the following documents: UNITED NATIONS. Fundamental Principles of Official Statistics. New York: United Nations Statistics Division, 2014. Available at: https://unstats.un.org/unsd/dnss/gp/fundprinciples.aspx. Accessed on: May 28, 2025; EUROPEAN UNION. Regulation (EC) No 223/2009 of the European Parliament and of the Council of 11 March 2009 on European statistics. Official Journal of the European Union, L 87, p. 164–173, 31 March 2009. Available at: https://eur-lex.europa.eu/legal-content/PT/TXT/?uri=CELEX:32009R0223. Accessed on: May 28, 2025; EUROSTAT. European Statistics Code of Practice. Luxembourg: European Statistical System, 2017. Available at: https://ec.europa.eu/eurostat/documents/4031688/8971242/KS-02-18-142-PT-N.pdf. Accessed on: May 28, 2025; ORGANIZATION FOR ECONOMIC COOPERATION AND DEVELOPMENT – OECD.
12. Recommendation of the Council on Enhanced Access to and Sharing of Data. Paris: OECD, 2019. Available at: https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0463. Accessed on: May 28, 2025; INTERNATIONAL MONETARY FUND – IMF. Data Quality Assessment Framework (DQAF). Washington, DC: IMF, 2012. Available at: https://dsbb.imf.org/dqrs. Accessed on: May 28, 2025. In this sense, one can take as an example the obligation to protect the confidentiality of statistical data and the integrity of sources, which is highlighted in Article 5, §1, letter "e", of Regulation (EC) No. 223/2009, which establishes that European statistics must respect "statistical confidentiality" as a fundamental principle. Convergently, Principle 6 of the UN Fundamental Principles of Official Statistics determines that individual data collected for statistical purposes must be strictly protected and used exclusively for those purposes. In the same vein, the European Statistics Code of Practice establishes "statistical confidentiality" as one of its 16 principles, requiring that informants’ data be protected from unauthorized access. These provisions demonstrate an international normative consensus that the integrity and protection of information are fundamental conditions for public trust in statistical systems.
13. Regarding the auditability of multiple artificial intelligence systems, and here I understand that the concept applies to legal databases, it is essential to consult the Artificial Intelligence Risk Management Framework (AI RMF 1.0) document, which was prepared by the National Institute of Standards and Technology (NIST), an agency affiliated with the United States Department of Commerce, and officially published in January 2023. It is an internationally recognized technical reference, the product of multi-stakeholder public consultation, peer review, and inter-institutional scientific validation. This document proposed, as a paradigm, a framework guiding good practices in risk management for artificial intelligence systems across various public and private sectors in the United States and globally. Its official reference number is NIST.AI.100-1. The main objective of this framework is to provide international credibility to artificial intelligence systems, ensuring they have methodological pillars that confer safety, reliability, and accountability in their purposes and functions. To this end, this international framework promotes risk management and a scientifically based and structured framework to mitigate these risks.
14. The work by CHMIELINSKI, Kasia et al. The CLeAR Documentation Framework for AI Transparency: recommendations for practitioners and context for policymakers. Cambridge, MA: Shorenstein Center on Media, Politics and Public Policy, Harvard Kennedy School, 2024. Available at: https://shorensteincenter.org/clear-documentation-framework-ai-transparency-recommendations-practitioners-context-policymakers/. Accessed on: June 1, 2025 — is a methodological tool developed by researchers at the Shorenstein Center at the Harvard Kennedy School to validate transparent and auditable artificial intelligence systems and, consequently, reliable databases. To this end, this framework supports the importance of structuring transparent, interpretable, and institutionally accountable documentation in the digital space. In this sense, CLeAR presents an algorithmic governance methodology based on interdependent pillars: (a) explicit description of the context and detailed circumstances of the system’s use; (b) formal identification of its operational, legal, and epistemic limitations, in order to identify risks in advance and establish limits and guarantees; (c) presentation of the reasonable motivation for automated decisions and inferences; (d) transparent mapping of the premises and assumptions incorporated into modeling and training; and (e) assessment of the risks involved, with respective mitigation strategies and contingencies. Its applicability to legal databases in Brazil is evident, as it allows validating not only the technical robustness of statistical models, but all the variables inherent to artificial intelligence systems and algorithms, enabling high levels of auditability.