Teisė ISSN 1392-1274 eISSN 2424-6050
2025, vol. 137, pp. 167–179 DOI: https://doi.org/10.15388/Teise.2025.137.11
Signe Skutele
PhD Candidate at the Department of Legal Theory and History of Law
Faculty of Law, University of Latvia
Raiņa bulvāris 19, Rīga, LV-1050
Phone: (+371) 29729522
E-mail: signe.skutele@lu.lv
Signe Skutele
(University of Latvia (Latvia))
Artificial intelligence (AI) has rapidly transformed the public and private spheres, offering new opportunities for efficiency and innovation while raising complex legal, ethical, and social challenges. Its integration into the judiciary presents a key legal and human rights issue, with significant implications for judicial efficiency and modernization. The research aims to explore the potential of AI in improving judicial efficiency without compromising the right to a fair trial. It examines the impact of justice system digitalization by defining the scope of judicial efficiency and identifying the fundamental principles that should be clarified and promoted in the process. The research encompasses national and international legal frameworks, reports, and policy documents, as well as academic literature on the use of AI in the judiciary. While the application of AI in the judiciary is often associated with functional optimization – such as automated judgment drafting – the research also highlights the importance of addressing broader dimensions of efficiency, including procedural, institutional, and individual aspects. Thus, in this research, the author presents the introduction of AI into the justice system as a broader and strategically targeted set of actions, focusing on the use of different AI systems to improve various elements of judicial efficiency.
Keywords: artificial intelligence, human rights, digitalization, judicial efficiency, right to a fair trial.
________
* Preparation of the article was funded by the state research programme project “Vectors of social cohesion: from cohesion around the nation-state (2012-2018) to a cohesive civic community for the security of the state, society and individuals (2024-2025)”, No. VPP-KM-SPASA-2023/1-0002.
Received: 30/09/2025. Accepted: 12/12/2025
Copyright © 2025 Signe Skutele. Published by Vilnius University Press
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
An effective judiciary plays a significant role not only in implementing the rule of law and protecting individual rights, but also in strengthening public trust in state power in a broader sense1, ensuring legal certainty, as well as promoting the sustainable social2 and economic3 development of the country4.
The importance of judicial efficiency is reflected in institutions and mechanisms tasked with evaluating, monitoring, and improving the performance of the judiciary5. At the national level, the efficiency of the judicial system of the Republic of Latvia has been repeatedly emphasized by the Judicial Council6.
International and national institutions have consistently highlighted the role of digitalization and Artificial Intelligence (hereinafter – AI) in improving judicial efficiency by accelerating processes, optimizing resources, and ensuring timely access to justice7. Several important regulatory acts and guidelines were also adopted in 2024, such as Regulation (EU) 2024/1689 of the European Parliament and of the Council (hereinafter – AI Act),8 and the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (hereinafter – Framework Convention on AI)9. The AI Act and the Framework Convention on AI constitute an important milestone in the development of the legal framework, providing a strategic basis for the responsible and effective integration of AI into the judicial system. In addition, courts are adopting self-regulatory internal rules, thereby underscoring the ethical dimensions of using AI tools in judicial work.
The aim of the research is to critically assess how AI tools can be integrated into judicial processes without undermining fundamental rights. To achieve this, the study examines the definitions of AI and judicial efficiency, analyzes the impact of digitalization on judicial efficiency and public trust, and identifies both the scope and limitations of the regulatory framework concerning the use of AI in courts, as well as examples of good practices implemented in the judicial system. The research applies comparative and legal-dogmatic methods, supported by document and policy analysis. It examines regulatory enactments, policy planning documents, and reports by international institutions on digitalization and AI in the judiciary, as well as sources of legal doctrine and academic literature, to form the theoretical basis.
Given the multifaceted nature of AI and the growing interest in both its opportunities and risks, researchers10 as well as national11 and international12 institutions have increasingly engaged in efforts to define it.
The development of AI, and consequently the evolving nature of its definition, is illustrated, for example, by the guidelines issued by the Organisation for Economic Co-operation and Development (OECD) – the 2019 AI Principles and the updated definition adopted in 2024.13 A comparison of the AI system definitions in the OECD AI Principles of 201914 and 202415 reveals significant differences. Specifically, by changing and clarifying the purpose and functions of AI systems and emphasizing the element of adaptability, the revised definition reflects the advancement of AI and the emergence of new technological solutions. These include dynamic interaction, data interpretation, the ability to self-adapt and learn without direct human involvement at all stages, self-modification, and the progression of inference processes.
Similar to the revised 2024 OECD definition, the AI Act enshrines the concept of an ‘AI system’, identifying it as a machine-based system with three additional elements16. The first additional element states that an AI system is “designed to operate with varying levels of autonomy”, and therefore it can be human-controlled, or partially or fully autonomous. Secondly, an AI system “can be adaptive after implementation”, in the sense that it can learn and adapt based on new information. Finally, “for explicit or implicit purposes, it infers from the information it receives how to generate outcomes [..] that can affect the physical or virtual environment”. Therefore, an AI system can operate on both explicit and implicit tasks, analyzing data and producing results that are not a direct reflection of the input, but rather content generated from it.
Such a largely harmonized definition can be found across key policy and regulatory instruments, including the AI Act, the Framework Convention on AI17, and the OECD AI Principles18.
As for the right to a fair trial, the principle is widely regarded as one of the cornerstones of human rights law, safeguarding every individual’s entitlement to a fair, independent, and impartial judicial process19. Moreover, the right to a fair trial applies to the entire judicial process, from the initiation of the case to the execution of the court judgment20. The right to a fair trial encompasses several core principles, among which judicial efficiency plays a significant role. The European Court of Human Rights has also repeatedly emphasized this principle, mostly in relation to the ‘reasonable time’ requirement21.
Judicial efficiency refers to the ability of an institution to achieve its stated goals – fair judicial decisions – with the least possible resources, without losing legitimacy or quality. The efficiency of courts is often, and seemingly too simplistically, associated with three pillars: the duration of legal proceedings, the costs of legal proceedings, and the quality of decisions. Although these aspects are important, efficiency should be assessed in a broader context.
In the author’s opinion, judicial efficiency has several interrelated aspects. The first, procedural efficiency, is characterized by the duration of the case and the provision of procedural rights. The second, institutional efficiency – or the optimization of resource use – is characterized by the workload of judges, their qualifications and remuneration, and the involvement of court employees. The third, functional efficiency, concerns the ability to ensure a fair, high-quality decision. Finally, individual efficiency covers the possibility of individual involvement, accessibility – the opportunity for an individual to protect their rights – as well as general public trust in the court and public awareness of its activities.
Particular and focused attention should be directed towards improving procedural, institutional, and individual aspects of efficiency – rather than narrowing the application of AI to mere functional optimization, such as the partial or complete replacement of judges. From the author’s perspective, a more effective approach is the strategic integration of AI into judicial organization, including case allocation, workload management, training of the court staff, and public information initiatives. This would simultaneously improve the efficiency of the courts, while reducing the risk of restricting the fundamental right to a fair trial.
Regarding the digitalization of judicial systems, Joss Smit, Strategic Advisor to the Dutch Council for the Judiciary, distinguished four stages of development22.
The first stage is analogue, where no digital technologies are used. In the second stage, or the first wave of digitalization, data digitalization takes place. In this context, the author of this paper identifies two key directions. The first is the digitalization of court documents, namely, the development and use of various digital collections and resources; the main target group here is judges and court employees. The second is information sharing and engaging in dialogue with the public, explaining the functions of the court to a wider circle of society. This is achieved both passively, through the creation of databases of court rulings, and actively, by providing explanations of court rulings and judicial activities, e.g., press releases and explanatory materials; the main target group here is the individual and society. Thus, the second stage can be characterized by the principles of streamlined court work and of informing the public.
The third stage, or the second wave of digitalization, occurs when not only data but also the court process itself is digitalized. This was rapidly accelerated by the COVID-19 pandemic23, when legal proceedings had to be organized remotely. In the Republic of Latvia, within the framework of this stage, the e-case (e-lieta) platform is being developed to facilitate access to the court, the circulation of e-documents is being expanded, written proceedings are becoming more prevalent, and participation in court proceedings is ensured via remote videoconference. The third stage is characterized by the principle of court accessibility.
Finally, the fourth stage of development – the introduction of AI – represents the phase we are currently experiencing, albeit in varying forms and degrees across jurisdictions and institutional settings. Furthermore, considering its development, it can be stated that the introduction of AI significantly affects all the above-mentioned principles.
According to the EU Justice Scoreboard report on the progress of judicial reforms in 2023, eleven Member States had adopted, and eight more were planning, reforms related to the introduction of information and communication technologies (ICT) in the judicial system, while five planned the use of AI as part of these reforms24. Given the rapid development of AI and its regulation, it is reasonable to assume that the number of reforms planned in 2025, as well as the range of Member States envisaging the integration of AI into their judicial systems, has increased significantly compared to previous data25. At the same time, a comparison of the EU Justice Scoreboard reports for 2024 and 2025 reveals no significant changes in digitalization developments during 2023–202426. Furthermore, the EU Justice Scoreboard report for 2024 omits reform updates27, likely due to the AI Act, which came into force on 1 August 2024 and introduced restrictions on the use of AI, including in the judicial system. Consequently, courts are refraining from the hasty implementation of AI, choosing instead a cautious and gradual approach that meets the requirements of legal certainty and the protection of human rights.
At the same time, the 2024 Evaluation report of the European Commission for the Efficiency of Justice (CEPEJ), which analyzes data from 2022, clearly indicates a correlation between the implementation of ICT and the duration of case processing. In countries with a higher ICT index, cases are processed on average twice as fast28. The report also identifies a notable trend in administrative cases29: with a low ICT index (below 2.5), the average case duration is 376 days. At a moderate level (2.5–5), the duration even increases, while a significant reduction is observed only at higher levels (5–7.5 and 7.5–10), where the duration drops to 200 and 128 days, respectively. Two key implications can be drawn from this. Firstly, the implementation of such systems is likely to require adaptation time, and therefore the benefits of ICT implementation become apparent only at higher levels. Secondly, the report demonstrates that implementing ICT alone, without proper integration and a coherent strategy, is insufficient to improve court efficiency.
Digitalization and the introduction of AI do not always correlate with the public’s perception of their necessity, as people tend to place greater trust in human-led legal processes30. Individuals expect to remain subjects, and not objects, in decisions that affect their lives. A particularly illustrative and notable discrepancy can be observed in the Republic of Latvia, which holds one of the leading positions in the field of digitalization according to the EU Justice Scoreboard and has introduced various digital solutions31. At the same time, public opinion surveys reveal a skeptical attitude towards digitalization, accompanied by criticism of judicial efficiency – particularly regarding transparency, costs, and the length of proceedings32.
In turn, the practical implementation of AI in courts can be analyzed by looking at the currently identified tools. For example, the platform “There’s an AI for that” has identified a total of 41 119 AI tools, corresponding to 13 040 tasks and 5 121 different jobs33. CEPEJ’s first report on AI in the judiciary notes that the Resource Centre on Cyberjustice and AI has identified 125 tools aimed at improving judicial efficiency and accessibility34. Six months later, this number had already grown to 160, reflecting ongoing AI development trends in courts35.
In total, the Resource Centre has identified eight main categories for which the AI tools are designed: 1) Document search; 2) Online Dispute Resolution; 3) Prediction of Outcomes; 4) Decision support; 5) Anonymization; 6) Workflow automation; 7) Recording, transcription and translation; 8) Information and assistance services36. Judges and court clerks form the primary target group for most AI tools, with ‘decision support’ – i.e., solutions that facilitate or automate decision-making in the justice system – being the most common area of application. This is followed by triaging, allocation and workflow automation, document search systems, and anonymization systems. Therefore, it can be concluded that AI can encompass all aspects of judicial efficiency – procedural, institutional, functional and individual – but the greatest attention is paid to improving functional efficiency.
Therefore, the author emphasizes the need to prioritize AI tools that address institutional, procedural, and individual aspects of judicial efficiency, rather than focusing solely on functional tasks like judgment drafting. Additionally, courts should be encouraged to consider, test, and adopt tools already used elsewhere to support these additional efficiency dimensions. This could not only shorten legal proceedings but also strengthen public trust in the courts and the judicial system.
With regard to AI and its application, a broad regulatory framework has now emerged, aiming to strike a balance between innovation and the protection of human rights.
The AI Act also applies to AI systems used in judicial proceedings, emphasizing that an AI system can “support the decision-making power of judges or judicial independence, but should not replace it”37. Taking into account the AI system classifications set out in the AI Act, the systems potentially used or banned in the judicial context can be grouped into four categories: prohibited systems, high-risk systems, limited-risk systems, and low-risk systems.
The AI Act expressis verbis defines prohibited AI systems in general terms, rather than specifically within the context of the judiciary. Consequently, the list of prohibited systems is set out in Article 5 of the AI Act.
High-risk AI systems are defined as “AI systems intended to be used by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or to be used in a similar way in alternative dispute resolution”38. The AI Act does not elaborate on what is meant by “assisting ... in researching and interpreting ... and applying ...”. To the extent that ‘assistance’ involves a specific interpretation, it is understandable that an AI system is classified as high-risk. This includes, for example, tasks such as interpreting facts, explaining legal provisions, or determining whether a particular situation complies with the law.
However, one of the most important functional stages of judicial activity is the search for relevant sources – case law, legal doctrine, and professional literature, including materials from specialized fields such as medicine or construction. Whether performing such a function would qualify a system as high-risk remains a matter of debate. In the author’s view, if an AI system performs any type of analysis – such as identifying keywords or synonyms, ranking, or calculating a percentage-based relevance score – it should be considered a high-risk system.
For the purpose of determining whether an AI system used in legal proceedings qualifies as high-risk, further guidance is provided in Recital 61 of the AI Act. First, a system is considered high-risk if it has a significant impact on the rule of law, democracy, or fundamental rights. Second, this classification aims to address risks related to bias, error, and lack of transparency. Therefore, an AI system that may affect core legal principles and carries a risk of distortion or opacity falls within the high-risk category39.
Limited-risk systems, as defined in Article 6(3) and (4) of the AI Act, are those that do not pose a significant risk of harm or those which do not materially affect decision outcomes. They are designed to perform narrow tasks under meaningful human oversight and must fall within the activities listed in Article 6(3). Finally, low-risk systems are those intended for performing administrative support tasks, such as anonymization or internal communication40.
At the same time, the use of AI systems in legal proceedings carries risks, including limited explainability, bias, threats to judicial independence, confidentiality, and data protection. Therefore, the AI Act promotes human-centred AI under risk-based supervision and introduces several restrictions41.
Human rights are also highlighted in other international instruments, most notably, the Framework Convention on AI. It affirms key principles such as human dignity, autonomy, transparency, accountability, equality, non-discrimination, privacy, data protection, and system reliability42.
Other institutions have likewise emphasized respect for human rights by issuing methodological materials and guidelines. Examples include the UNESCO Recommendation43, the OECD Recommendation44, and the CEPEJ’s European Ethical Charter45 as well as its report on the use of generative AI by judicial professionals in a work-related context46.
However, the strategies and procedures developed by the courts themselves – aiming to enhance judicial efficiency through AI while clearly upholding the primacy of human rights over innovation-driven automation – are particularly recognized as good practice.
For example, the AI strategy of the Court of Justice of the European Union47 highlights the use of AI for legal research, translation and interpretation to enhance accessibility and modernize justice. It also outlines key principles, risks and safeguards, while emphasizing human oversight, transparency, privacy, and other fundamental rights48.
Similar AI strategies and procedures have been adopted globally, aiming to support judicial work without compromising individual rights, including the right to a fair trial49. An analysis of court-adopted procedures and guidelines reveals several key legal-policy aspects that significantly shape the AI use in the judicial system.
First, there are varying levels of awareness regarding the possibilities and risks of using AI tools. This includes recognizing that judges and court staff, as well as individuals and other legal professionals, may apply such tools, but that they must understand the limits of AI use. Promoting awareness also helps define clear policy directions, while ensuring that AI is introduced into the judicial process as a structured and strategic improvement, rather than as an ad hoc reaction to technological change50.
Secondly, the core principles are the independence of the judiciary and the non-substitution of judges: AI systems should not replace judges, court employees, individuals, or other legal professionals. The non-substitution of judges concerns two essential elements of the right to a fair trial. First, every individual has the right to have their case decided by a human legal professional. Second, there is the right to the development of law, namely, the formation and change of case law and the development of legal doctrine51. In relation to the formation of case law, one can observe not only its amendment but also its development, clarification, and detailing52. Although AI is more or less able to apply specific factual circumstances to legal norms, judicial decision-making is not a mathematical activity; rather, it is related to society and the processes within it. Changes in the structure of society also bring changes in the interpretation of law; thus, case law, like law itself, is dynamic.
Thirdly, information verification is recognized as an essential principle: the information provided by AI must be verifiable, correct, and relevant; it must not infringe intellectual property rights; and it must be traceable. Ultimately, such verification can only be carried out by a human actor.
Finally, any use of AI must be aligned with fundamental human rights; it must not infringe upon, restrict, or in essence deprive individuals of those rights.
Consequently, the regulatory framework for AI embodies a balanced approach, simultaneously promoting the use of innovative solutions in the judicial system and requiring an assessment of their suitability, with particular emphasis on the protection of human rights as a fundamental condition for any technological integration into public power.
1. A truly effective use of AI in the judiciary requires a strategic focus beyond mere functional optimization. AI tools hold clear potential to improve judicial efficiency, but their impact will be most valuable when strategically applied to procedural, institutional, and individual dimensions – not just to functional tasks. By targeting broader efficiency dimensions, the judicial system can enhance performance without compromising fundamental rights.
2. While there is a clear correlation between the use of ICT and certain aspects of judicial efficiency, the public perception of the judiciary may remain negative regardless of digitalization. Therefore, digitalization and AI integration – without a coherent and strategic framework – cannot by themselves ensure meaningful progress in the justice system.
3. While the AI Act enables the use of AI tools in judicial settings, its full implementation depends on further guidance to resolve interpretive ambiguities. At the same time, proactive institutional policies and strategies setting out clear principles for AI use should be considered essential for responsible and rights-based adoption.
Resolution Res (2002)12 establishing the European Commission for the efficiency of justice. Council of Europe Committee of Ministers (2002) [online]. https://search.coe.int/cm?i=09000016804ddb99
Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. Council of Europe Treaty Series – No. 225. 5.09.2024 [online]. https://rm.coe.int/1680afae3c
European Convention on Human Rights, as amended by Protocols Nos. 11, 14 and 15 and supplemented by Protocols Nos. 1, 4, 6, 7, 12, 13 and 16. Council of Europe, version in force from 1 August 2021 [online]. https://www.echr.coe.int/documents/d/echr/convention_eng
Charter of Fundamental Rights of the European Union. Official Journal of the European Union, C 202, 7 June 2016, 389–405.
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). Official Journal of the European Union. 12 July 2024.
Mākslīgā intelekta centra likums [Artificial Intelligence Centre Law]. The law of the Republic of Latvia. March 20, 2025. Latvijas Vēstnesis, 55.
Covid-19 infekcijas izplatības seku pārvarēšanas likums [Law on the Management of the Spread of Covid-19 Infection]. The law of the Republic of Latvia. June 10, 2020. Latvijas Vēstnesis, 110A.
Par Digitālās transformācijas pamatnostādnēm 2021.-2027. gadam [On the Guidelines for Digital Transformation 2021-2027]. The Cabinet of Ministers of the Republic of Latvia. No. 490. July 7, 2021 [online]. https://likumi.lv/ta/id/324715
Informatīvais ziņojums “Par mākslīgā intelekta risinājumu attīstību” [Informative report “On the development of artificial intelligence solutions”]. The Cabinet of Ministers of the Republic of Latvia. February 4, 2020 [online]. https://likumi.lv/ta/id/342405
The Constitutional Court of the Republic of Latvia. Judgment of 6 October 2023, 2003-08-01. Latvijas Vēstnesis, 138.
The Constitutional Court of the Republic of Latvia. Judgment of 7 February 2014, 2013-04-01. Latvijas Vēstnesis, 30.
KAPOPOULOS, Panayotis; RIZOS, Anastasios (2023). Judicial efficiency and economic growth: Evidence based on European Union data. Scottish Journal of Political Economy, 71(1), 101–131 [online]. https://doi.org/10.1111/sjpe.12357
KRŪKLE, Ginta (2022). Judikatūras maiņa, tiesiskā noteiktība un tiesiskās paļāvības princips. [Overruling, legal certainty and the principle of legitimate expectations]. In: Latvijas Republikas Satversmei – 100. Latvijas Universitātes 80. starptautiskās zinātniskās konferences rakstu krājums. The Constitution of the Republic of Latvia – 100. Collection of research papers of the 80th International Scientific Conference of the University of Latvia. Riga: LU Akadēmiskais apgāds [publishing house], 310–318.
MARTINEZ, Rex (2019). Artificial Intelligence: Distinguishing between types & definitions. Nevada Law Journal, 19(3), 1015–1042.
VIDAKI, Anastasia; PAPAKONSTANTINOU, Vagelis (2025). Democratic legitimacy of AI in judicial making. AI and Society [online]. https://doi.org/10.1007/s00146-025-02411-w
VĪDUŠA, Rudīte (2023). Tiesāšanās un mākslīgais intelekts – filozofisks skatījums. [Judiciary and the Artificial Intelligence – A Philosophical Perspective]. Augstākās Tiesas Biļetens [The Bulletin of the Supreme Court of the Republic of Latvia], 27, 151–152 [online]. https://www.at.gov.lv/bulletin-publication-files/AT_BILETENS27_WEB.pdf#page=151
WALLACE, Anne; GOODMAN-DELAHUNTY, Jane (2021). Measuring Trust and Confidence in Courts. International Journal for Court Administration, 12(3), 1–17 [online]. https://doi.org/10.36745/ijca.418
ZHANG, Limao; PAN, Yue; WU, Xianguo; SKIBNIEWSKI, Miroslav (2021). Artificial Intelligence in Construction Engineering and Management. Singapore: Springer Singapore.
Delcourt v. Belgium [ECHR], No. 2689/65, [17.1.1970], ECLI:CE:ECHR:1970:0117JUD000268965.
H. v. France [ECHR], No. 10073/82, [24.10.1989], ECLI:CE:ECHR:1989:1024JUD001007382.
Katte Klitsche de la Grange v. Italy [ECHR], No. 12539/86, [27.10.1994], ECLI:CE:ECHR:1994:1027JUD001253986.
Hornsby v. Greece [ECHR], No. 18357/91, [19.3.1997], ECLI:CE:ECHR:1997:0319JUD001835791.
Ezeoke v. The United Kingdom [ECHR], No. 61280/21, [25.2.2025], ECLI:CE:ECHR:2025:0225JUD006128021.
BOSIO, Erica (2023). A Survey of Judicial Effectiveness: The Last Quarter Century of Empirical Evidence. Policy Research working paper, WPS 10501, World Bank Group [online]. http://documents.worldbank.org/curated/en/099330206262335739
Communication from the Commission to the European Parliament, the Council, the European Central Bank, the European Economic and Social Committee and the Committee of the Regions COM(2024)950 (2024). The 2024 EU Justice Scoreboard [online]. https://commission.europa.eu/strategy-and-policy/policies/justice-and-fundamental-rights/upholding-rule-law/eu-justice-scoreboard_en
Communication from the Commission to the European Parliament, the Council, the European Central Bank, the European Economic and Social Committee and the Committee of the Regions COM(2025)375 (2025). The 2025 EU Justice Scoreboard [online]. https://commission.europa.eu/strategy-and-policy/policies/justice-and-fundamental-rights/upholding-rule-law/eu-justice-scoreboard_en
Court of Justice of the European Union. Artificial Intelligence Strategy [online]. https://curia.europa.eu/jcms/upload/docs/application/pdf/2023-11/cjeu_ai_strategy.pdf
Decision of the Judicial Council of the Republic of Latvia (2021). Par darba grupas tiesu efektivitātei stiprināšanu izveidi. [On the establishment of a working group for strengthening the efficiency of courts.] No. 55 [online]. https://www.tieslietupadome.lv/lv/lemumi
European Commission for the Efficiency of Justice (CEPEJ) (2018). European ethical Charter on the use of Artificial Intelligence in judicial systems and their environment. [online] https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c
European Commission for the Efficiency of Justice (CEPEJ) (2024). Evaluation report. General analyses. 2024 Evaluation cycle (2022 data) [online]. https://www.coe.int/en/web/cepej/special-file
European Commission for the Efficiency of Justice (CEPEJ) (2024). Use of Generative Artificial intelligence (AI) by judicial professionals in a work-related context [online]. https://rm.coe.int/cepej-gt-cyberjust-2023-5final-en-note-on-generative-ai/1680ae8e01
European Commission for the Efficiency of Justice (CEPEJ) (2025). 1st AIAB Report on the use of artificial intelligence in the judiciary based on the information contained in the resource centre on cyberjustice and AI [online]. https://rm.coe.int/cepej-aiab-2024-4rev5-en-first-aiab-report-2788-0938-9324-v-1/1680b49def
Latvijas Republikas Valsts Kontrole [State Audit Office of the Republic of Latvia] (2025). Mākslīgā intelekta ieviešana un izmantošana Latvijā [Introduction and use of artificial intelligence in Latvia] [online]. https://lrvk.gov.lv/lv/getrevisionfile/29802-kdDeizr20XEB2uoEZrVd5HxISMQ1zFcr.pdf
Mākslīgā intelekta centra likums [Artificial Intelligence Centre Law]. The materials on the law of the Republic of Latvia. No. 811/Lp14 [online]. https://titania.saeima.lv/LIVS14/SaeimaLIVS14.nsf/webSasaiste?OpenView&restricttocategory=811/Lp14
Organisation for Economic Co-operation and Development (OECD) (2021). State of implementation of the OECD AI Principles. Insights from national AI policies [online]. https://doi.org/10.1787/1cd40c44-en
Organisation for Economic Co-operation and Development (OECD) (2021). Framework and Good Practice Principles for People-Centred Justice [online]. https://doi.org/10.1787/cdc3bde7-en
Organisation for Economic Co-operation and Development (OECD) (2024). Survey on Drivers of Trust in Public Institutions. Results: Building Trust in a Complex Policy Environment [online]. https://doi.org/10.1787/9a20554b-en
Organisation for Economic Co-operation and Development (OECD) (2024). Recommendation of the Council on Artificial Intelligence [online]. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
Organisation for Economic Co-operation and Development (OECD) (2024) Explanatory memorandum on the updated OECD definition of an AI System [online]. https://www.oecd.org/content/dam/oecd/en/publications/reports/2024/03/explanatory-memorandum-on-the-updated-oecd-definition-of-an-ai-system_3c815e51/623da898-en.pdf
Resource centre cyberjustice and AI by CEPEJ [online]. https://public.tableau.com/app/profile/cepej/viz/ResourceCentreCyberjusticeandAI/AITOOLSINITIATIVESREPORT?publish=yes
Speech by President-elect von der Leyen in the European Parliament Plenary on the occasion of the presentation of her College of Commissioners and their programme, 27.11.2019 [online]. https://ec.europa.eu/commission/presscorner/detail/hr/speech_19_6408
SKDS (2024). Attieksme pret tiesām un uzskati par tiesvedības procesiem. Latvijas iedzīvotāju aptauja. [Attitudes towards courts and views on judicial processes. Survey of Latvian residents] [online]. https://www.ta.gov.lv/lv/media/4317/download?attachment
Stanford University Human-Centered Artificial Intelligence (HAI) (2025). Artificial Intelligence Index Report [online]. https://hai-production.s3.amazonaws.com/files/hai_ai_index_report_2025.pdf
State courts of the Republic of Singapore (2024). In the State Courts of the Republic of Singapore. Registrar’s circular No. 9 of 2024. Issue of the Guide on the use of generative artificial intelligence tools by court users [online]. https://www.judiciary.gov.sg/docs/default-source/circulars/2024/registrar‘s_circular_no_9_2024_state_courts.pdf?sfvrsn=d038ec05_1
The Canadian Judicial Council (2024). Guidelines for the Use of Artificial Intelligence in Canadian Courts [online]. https://cjc-ccm.ca/sites/default/files/documents/2024/AI%20Guidelines%20-%20FINAL%20-%202024-09%20-%20EN.pdf
The Courts and Tribunals Judiciary (2025). Artificial Intelligence. Guidance for Judicial Office Holders [online]. https://www.judiciary.uk/wp-content/uploads/2025/04/Refreshed-AI-Guidance-published-version-website-version.pdf
There’s an AI for that [online]. https://theresanaiforthat.com/
Tieslietu padomes darbības stratēģija 2021.–2025. gadam (2020) [The Strategy of the Judicial Council 2021–2025] [online]. https://www.tieslietupadome.lv/lv/strategija
UNESCO (2021). Recommendation on the Ethics of Artificial Intelligence. [online] https://unesdoc.unesco.org/ark:/48223/pf0000381137
Signe Skutele is a PhD candidate and a teaching assistant at the University of Latvia, Faculty of Law. Her research interests include the evolution of the justice system, artificial intelligence, and the legal history of the Republic of Latvia. As an expert within a national research programme project, she examines the interaction between the judiciary and social cohesion, with a particular focus on public trust in the judicial system. Her research also explores the use of AI in judicial systems, with an emphasis on safeguarding the right to a fair trial.
1 WALLACE, Anne; GOODMAN-DELAHUNTY, Jane (2021). Measuring Trust and Confidence in Courts. International Journal for Court Administration, 12(3), 2–4 [online]. https://doi.org/10.36745/ijca.418; OECD (2024). Survey on Drivers of Trust in Public Institutions. Results: Building Trust in a Complex Policy Environment, 27 [online] https://doi.org/10.1787/9a20554b-en.
2 OECD (2021). Framework and Good Practice Principles for People-Centred Justice, 39–41 [online]. https://doi.org/10.1787/cdc3bde7-en; SKDS (2024). Attieksme pret tiesām un uzskati par tiesvedības procesiem. Latvijas iedzīvotāju aptauja [Attitudes towards courts and views on judicial processes. Survey of Latvian residents], 12 [online]. https://www.ta.gov.lv/lv/media/4317/download?attachment.
3 KAPOPOULOS, Panayotis; RIZOS, Anastasios (2023). Judicial efficiency and economic growth: Evidence based on European Union data. Scottish Journal of Political Economy, 71(1), 101–131 [online]. https://doi.org/10.1111/sjpe.12357; BOSIO, Erica (2023). A Survey of Judicial Effectiveness: The Last Quarter Century of Empirical Evidence. Policy Research working paper, WPS 10501, World Bank Group, 2–4 [online]. http://documents.worldbank.org/curated/en/099330206262335739.
4 CEPEJ (2024). Evaluation report. General analyses. 2024 Evaluation cycle (2022 data), 109 [online]. https://www.coe.int/en/web/cepej/special-file
5 Article 1. Resolution Res (2002)12 establishing the European Commission for the efficiency of justice. Council of Europe Committee of Ministers (2002) [online]. https://search.coe.int/cm?i=09000016804ddb99; Communication from the Commission to the European Parliament, the Council, the European Central Bank, the European Economic and Social Committee and the Committee of the Regions COM(2024)950. (2024) The 2024 EU Justice Scoreboard [online]. https://commission.europa.eu/strategy-and-policy/policies/justice-and-fundamental-rights/upholding-rule-law/eu-justice-scoreboard_en
6 Decision of the Judicial Council of the Republic of Latvia (2021). Par darba grupas tiesu efektivitātei stiprināšanu izveidi. [On the establishment of a working group for strengthening the efficiency of courts.] No. 55 [online]. https://www.tieslietupadome.lv/lv/lemumi; Tieslietu padomes darbības stratēģija 2021.–2025. gadam (2020) [The Strategy of the Judicial Council 2021–2025] [online]. https://www.tieslietupadome.lv/lv/strategija. See also materials and recommendations prepared by the working group on strengthening the efficiency of court, covering the duties of court presidents, the development of the assistant judge system, judicial workload issues and qualitative indicators of judicial decisions [online] https://www.tieslietupadome.lv/lv/tiesu-efektivitates-stiprinasanas-darba-grupa.
7 E.g., CEPEJ (2024). Evaluation report. General analyses. 2024 Evaluation cycle (2022 data), 145 [online]. https://www.coe.int/en/web/cepej/special-file; Tieslietu padomes darbības stratēģija 2021.–2025. gadam (2020) [The Strategy of the Judicial Council 2021–2025], 3.5 [online]. https://www.tieslietupadome.lv/lv/strategija; CEPEJ (2025). 1st AIAB Report on the use of artificial intelligence in the judiciary based on the information contained in the resource centre on cyberjustice and AI, 3–4 [online]. https://rm.coe.int/cepej-aiab-2024-4rev5-en-first-aiab-report-2788-0938-9324-v-1/1680b49def
8 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/855, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). Official Journal of the European Union. 12 July, 2024.
9 Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. Council of Europe Treaty Series – No. 225. 5 September 2024 [online]. https://rm.coe.int/1680afae3c.
10 E.g., the OECD has pointed to a rapid increase in the number of scientific publications on generative AI: from 1,041 such publications in 2000 to 37,058 in 2022, identifying 2018 as a turning point. OECD (2021). State of implementation of the OECD AI Principles. Insights from national AI policies, 37–38 [online]. https://doi.org/10.1787/1cd40c44-en. See also: Stanford University Human-Centered Artificial Intelligence (2025). Artificial Intelligence Index Report, 12 [online]. https://hai-production.s3.amazonaws.com/files/hai_ai_index_report_2025.pdf; ZHANG, Limao; PAN, Yue; WU, Xianguo; SKIBNIEWSKI, Miroslav (2021). Artificial Intelligence in Construction Engineering and Management. Singapore: Springer Singapore, p. 14; MARTINEZ, Rex (2019). Artificial Intelligence: Distinguishing between types & definitions. Nevada Law Journal, 19(3), 1016–1018.
11 E.g., in the Republic of Latvia, AI has been addressed in national policy documents, such as the 2020 report on the development of AI solutions and the 2021 Digital Transformation Guidelines, both issued by the Cabinet of Ministers of the Republic of Latvia. These documents include definitions of AI systems and emphasize the growing importance of AI. Par Digitālās transformācijas pamatnostādnēm 2021.–2027. gadam [On the Guidelines for Digital Transformation 2021–2027]. The Cabinet of Ministers of the Republic of Latvia. No. 490. July 7, 2021 [online]. https://likumi.lv/ta/id/324715; Informatīvais ziņojums “Par mākslīgā intelekta risinājumu attīstību” [Informative report “On the development of artificial intelligence solutions”]. The Cabinet of Ministers of the Republic of Latvia. February 4, 2020 [online]. https://likumi.lv/ta/id/342405.
12 See, e.g., OECD (2024). Recommendation of the Council on Artificial Intelligence [online]. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449; UNESCO (2021). Recommendation on the Ethics of Artificial Intelligence [online]. https://unesdoc.unesco.org/ark:/48223/pf0000381137.
13 OECD (2024) Explanatory memorandum on the updated OECD definition of an AI System, 4 [online]. https://www.oecd.org/content/dam/oecd/en/publications/reports/2024/03/explanatory-memorandum-on-the-updated-oecd-definition-of-an-ai-system_3c815e51/623da898-en.pdf.
14 The 2019 OECD AI Principles set out – “An AI system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy”.
15 By clarifying the definition, in 2024, an AI system was defined as a “machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment”.
16 Article 3 para 1. AI Act.
17 Article 2. Framework Convention on AI.
18 Article 1. OECD (2024). Recommendation of the Council on Artificial Intelligence [online]. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
19 Article 6. European Convention on Human Rights as amended by Protocols Nos. 11, 14 and 15, supplemented by Protocols Nos. 1, 4, 6, 7, 12, 13 and 16. Council of Europe, in force from August 1, 2021 [online]. https://www.echr.coe.int/documents/d/echr/convention_eng; Article 47. Charter of Fundamental Rights of the European Union. Official Journal of the European Union, C 202, 7 June 2016, 389–405.
20 The European Court of Human Rights has established this in several of its judgments. E.g. – Delcourt v. Belgium [ECHR], No. 2689/65, [17.01.1970.] ECLI:CE:ECHR:1970:0117JUD000268965, paras 22–26; Hornsby v. Greece [ECHR], No. 18357/91, [19.03.1997.] ECLI:CE:ECHR:1997:0319JUD001835791, paras 39–41.
21 E.g., H. v. France [ECHR], No. 10073/82, [24.10.1989], ECLI:CE:ECHR:1989:1024JUD001007382, para 58; Kate Klitsche de la Grange v. Italy [ECHR], No. 12539/86, [27.10.1994], ECLI:CE:ECHR:1994:1027JUD001253986, para 61; Ezeoke v. The United Kingdom [ECHR], No. 61280/21, [25.02.2025], ECLI:CE:ECHR:2025:0225JUD006128021, paras 43–56.
22 At the seminar “Judiciary and Artificial Intelligence – A Philosophical Perspective”, organized by the European Judicial Training Network in 2023. See VĪDUŠA, Rudīte (2023). Tiesāšanās un mākslīgais intelekts – filozofisks skatījums. [Judiciary and the Artificial Intelligence – A Philosophical Perspective]. Augstākās Tiesas Biļetens [The Bulletin of the Supreme Court of the Republic of Latvia], 27, 151 [online]. https://www.at.gov.lv/bulletin-publication-files/AT_BILETENS27_WEB.pdf#page=151.
23 E.g., The Parliament of the Republic of Latvia adopted the Law on the Management of the Spread of COVID-19 infection, which expired on January 1, 2024. In addition to what was stipulated in the procedural laws, the Law strengthened the possibilities of considering cases in written proceedings and in videoconference format. Law on the Management of the Spread of COVID-19 Infection. The Law of the Republic of Latvia. June 10, 2020 [online]. https://likumi.lv/ta/id/315278/redakcijas-datums/2022/03/23.
24 Communication from the Commission to the European Parliament, the Council, the European Central Bank, the European Economic and Social Committee and the Committee of the Regions COM(2024)950 (2024). The 2024 EU Justice Scoreboard, 7 [online]. https://commission.europa.eu/strategy-and-policy/policies/justice-and-fundamental-rights/upholding-rule-law/eu-justice-scoreboard_en.
25 Although, e.g., the State Audit Office of the Republic of Latvia’s report of May 6, 2025 on the implementation and use of AI identifies several significant systemic shortcomings in ensuring the use and further development of AI in public administration. See Latvijas Republikas Valsts Kontrole [State Audit Office of the Republic of Latvia] (2025). Mākslīgā intelekta ieviešana un izmantošana Latvijā [Introduction and use of artificial intelligence in Latvia] [online]. https://lrvk.gov.lv/lv/getrevisionfile/29802-kdDeizr20XEB2uoEZrVd5HxISMQ1zFcr.pdf. This is also evidenced by the legislative process of the recently adopted AI Centre Law. Mākslīgā intelekta centra likums [Artificial Intelligence Centre Law]. The materials on the law of the Republic of Latvia. No. 811/Lp14 [online]. https://titania.saeima.lv/LIVS14/SaeimaLIVS14.nsf/webSasaiste?OpenView&restricttocategory=811/Lp14.
26 See: The 2024 EU Justice Scoreboard, 35 [online]. https://commission.europa.eu/strategy-and-policy/policies/justice-and-fundamental-rights/upholding-rule-law/eu-justice-scoreboard_en and The 2025 EU Justice Scoreboard, 33 [online] https://commission.europa.eu/strategy-and-policy/policies/justice-and-fundamental-rights/upholding-rule-law/eu-justice-scoreboard_en.
27 The 2025 EU Justice Scoreboard, 33 [online]. https://commission.europa.eu/strategy-and-policy/policies/justice-and-fundamental-rights/upholding-rule-law/eu-justice-scoreboard_en.
28 See Figure 6.20. CEPEJ (2024). Evaluation Report. General Analyses. 2024 Evaluation Cycle (2022 data) [online]. https://www.coe.int/en/web/cepej/special-file
29 See Figure 6.20. CEPEJ (2024). Evaluation Report. General Analyses. 2024 Evaluation Cycle (2022 data) [online]. https://www.coe.int/en/web/cepej/special-file
30 VIDAKI, Anastasia; PAPAKONSTANTINOU, Vagelis (2025). Democratic legitimacy of AI in judicial making. AI and Society [online]. https://doi-org.datubazes.lanet.lv/10.1007/s00146-025-02411-w.
31 See: The 2024 EU Justice Scoreboard, 35 [online]. https://commission.europa.eu/strategy-and-policy/policies/justice-and-fundamental-rights/upholding-rule-law/eu-justice-scoreboard_en and The 2025 EU Justice Scoreboard, 33 [online]. https://commission.europa.eu/strategy-and-policy/policies/justice-and-fundamental-rights/upholding-rule-law/eu-justice-scoreboard_en.
32 SKDS (2024). Attieksme pret tiesām un uzskati par tiesvedības procesiem. Latvijas iedzīvotāju aptauja [Attitudes towards courts and views on judicial processes. Survey of Latvian residents], 13, 17 and 22 [online]. https://www.ta.gov.lv/lv/media/4317/download?attachment.
33 It is a platform/catalogue that aggregates various AI tools according to their use, functions and tasks. For judges, the “Job Impact Index” indicates a rather low figure – only 5% of work tasks can be replaced with the help of AI tools (the figure is higher for administrative judges, at 10%). Data last accessed on 27 September 2025. See There’s an AI for that. Job Impact Index [online]. https://theresanaiforthat.com/job-impact/page/5/.
34 CEPEJ (2025). 1st AIAB Report on the use of artificial intelligence in the judiciary based on the information contained in the resource centre on cyberjustice and AI, 3 [online]. https://rm.coe.int/cepej-aiab-2024-4rev5-en-first-aiab-report-2788-0938-9324-v-1/1680b49def
35 See Resource centre cyberjustice and AI by CEPEJ [online]. https://public.tableau.com/app/profile/cepej/viz/ResourceCentreCyberjusticeandAI/AITOOLSINITIATIVESREPORT?publish=yes.
36 See Resource centre cyberjustice and AI by CEPEJ [online]. https://public.tableau.com/app/profile/cepej/viz/ResourceCentreCyberjusticeandAI/AITOOLSINITIATIVESREPORT?publish=yes.
37 See Recital 61 and para 8(a) of Annex III of the AI Act.
38 See para 8(a) of Annex III of the AI Act.
39 See Recital 61 of the AI Act.
40 See Recital 61 of the AI Act.
41 Speech by President-elect Ursula von der Leyen in the European Parliament Plenary on the occasion of the presentation of her College of Commissioners and their programme, 27 November 2019 [online]. https://ec.europa.eu/commission/presscorner/detail/hr/speech_19_6408.
42 See Articles 7–13. Framework Convention on AI.
43 UNESCO (2021). Recommendation on the Ethics of Artificial Intelligence [online]. https://unesdoc.unesco.org/ark:/48223/pf0000381137.
44 OECD (2024). Recommendation of the Council on Artificial Intelligence [online]. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.
45 CEPEJ (2018). European ethical Charter on the use of Artificial Intelligence in judicial systems and their environment [online]. https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c.
46 CEPEJ (2024). Use of Generative Artificial intelligence (AI) by judicial professionals in a work-related context [online]. https://rm.coe.int/cepej-gt-cyberjust-2023-5final-en-note-on-generative-ai/1680ae8e01.
47 Court of Justice of the European Union. Artificial Intelligence Strategy [online]. https://curia.europa.eu/jcms/upload/docs/application/pdf/2023-11/cjeu_ai_strategy.pdf.
48 See paras 3 and 4. Court of Justice of the European Union. Artificial Intelligence Strategy [online]. https://curia.europa.eu/jcms/upload/docs/application/pdf/2023-11/cjeu_ai_strategy.pdf.
49 See, e.g., State courts of the Republic of Singapore (2024). In the State Courts of the Republic of Singapore, Registrar’s circular No. 9 of 2024. Issue of the Guide on the use of generative artificial intelligence tools by court users [online]. https://www.judiciary.gov.sg/docs/default-source/circulars/2024/registrar’s_circular_no_9_2024_state_courts.pdf?sfvrsn=d038ec05_1; The Canadian Judicial Council (2024). Guidelines for the Use of Artificial Intelligence in Canadian Courts [online]. https://cjc-ccm.ca/sites/default/files/documents/2024/AI%20Guidelines%20-%20FINAL%20-%202024-09%20-%20EN.pdf; The Courts and Tribunals Judiciary (2025). Artificial Intelligence. Guidance for Judicial Office Holders [online]. https://www.judiciary.uk/wp-content/uploads/2025/04/Refreshed-AI-Guidance-published-version-website-version.pdf.
50 The author particularly highlights the “Guidelines for the Use of Artificial Intelligence in Canadian Courts” published by the Canadian Judicial Council.
51 KRŪKLE, Ginta (2022). Judikatūras maiņa, tiesiskā noteiktība un tiesiskās paļāvības princips. [Overruling, legal certainty and the principle of legitimate expectations]. In: Latvijas Republikas Satversmei – 100. Latvijas Universitātes 80. starptautiskās zinātniskās konferences rakstu krājums. The Constitution of the Republic of Latvia – 100 [-Year Anniversary]. Collection of research papers of the 80th International Scientific Conference of the University of Latvia. Riga: LU Akadēmiskais apgāds, p. 318.
52 One of the most notable examples of evolving case law in the Republic of Latvia, in the author’s view, concerns the legal profession of advocates, or the bar. Specifically, in 2003 and 2014, the Constitutional Court of the Republic of Latvia assessed the number of attorneys, the population size, ethical standards and other factors. In 2003 it found the contested norm unconstitutional, while in 2014 it reached the opposite conclusion upon reviewing the same criteria. See: The Constitutional Court of the Republic of Latvia. Judgment of 6 October 2003, 01 August 2003. Latvijas Vēstnesis, 138; and The Constitutional Court of the Republic of Latvia. Judgment of 7 February 2014, 01 April 2013. Latvijas Vēstnesis, 30.