The Supreme Court of the Philippines has formally adopted a framework to guide the ethical and responsible use of artificial intelligence (AI) in the judicial system, aiming to enhance efficiency and expand access to justice while recognizing the technology’s limitations and risks.
Issued as a full court resolution on February 18 and made public on March 19, the framework emphasizes “human-centered augmented intelligence,” ensuring AI supports—but does not replace—human judgment and reason in the courts.
“The use of human-centered augmented intelligence should be centered on human values, such as the promotion of the rule of law and fundamental freedoms, dignity and autonomy, privacy and data protection, fairness, nondiscrimination, and social justice,” the SC resolution stated.
The framework provides a comprehensive guide for courts, grounded in three core ethical principles: fairness, accountability, and transparency. These principles aim to reinforce public confidence in the independence and impartiality of the judiciary.
The working group behind the framework was chaired by Senior Associate Justice Marvic Leonen, with Associate Justices Ramon Paul Hernando and Rodil Zalameda as vice chairs.
It was developed with input from judiciary members, legal experts, the academe, and subject matter specialists, and refined through consultations with the SC full court, the SC Management Information Systems Office, and the Office of the Chief Attorney.
The framework draws on global best practices, including the Council of ASEAN Chief Justices Governance Framework on AI for ASEAN Judiciaries and UNESCO Guidelines for the Use of AI Systems in Courts and Tribunals, and extends its ethical scope to include environmental responsibility and sustainability.
A permanent Committee on Human-Centered Augmented Intelligence will be established to advise on the design, development, and ethical deployment of AI tools in the judiciary. Its members will include stakeholders from the legal and technology sectors, focusing on judicial leadership, technical expertise, and technology ethics.
The framework applies to all levels of the judiciary, including justices, judges, court officials, employees, users, and vendors or third-party contractors. AI tools may only be used with authorization from the full court, and implementation will proceed in phases, starting with pilot testing. Mandatory disclosure is required whenever AI tools are used in court work.
Users must indicate the AI tool, version, purpose, level of AI involvement, and human oversight, while taking responsibility for the outputs. This applies to functions such as transcription, translation, legal research, document summarization, automated processing, proofreading, and data redaction.
Key provisions include:
- AI tools assist human cognitive skills without replacing judgment.
- Tools must not harm stakeholders, violate rights, or undermine the rule of law.
- AI outputs cannot serve as the sole basis for adjudicatory decisions; humans remain responsible for legal reasoning and final judgments.
- AI development and use must avoid bias and discrimination; training programs will address algorithmic or automation biases.
- Privacy and data protection must be maintained at all stages; confidential or sensitive information may not be processed without express authority.
- Comprehensive risk assessments are required before use to prevent threats like data poisoning.
- Transparency is encouraged through stakeholder consultations, evaluation, and monitoring of AI tools.
- The SC will strengthen auditing, monitoring, and cybersecurity to reduce reliance on external parties and protect against attacks.
The initiative aligns with the SC’s Strategic Plan for Judicial Innovations 2022–2027 (SPJI), which seeks to build a technology-driven judiciary that is transparent, accountable, and accessible.
The framework is intended to ensure that AI is used ethically by judges, court staff, and anyone interacting with the judicial system.
