Trusted AI emphasises the importance of governance frameworks that aim to mitigate risks while maximising benefits for humanity. The concept advocates a collaborative approach between the public and private sectors, creating innovative governance solutions that evolve alongside AI technologies. Such collaboration not only fuels innovation but also establishes safeguards to protect human rights and social values.
The AI Act enshrines in EU law a definition of AI systems aligned with the revised definition agreed by the OECD:
'An AI system is a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments'.
Understanding Trusted AI
Assurance is fundamental to Trusted AI, focusing on building confidence in AI systems through proper regulatory compliance and validation of outcomes. A comprehensive AI assurance framework, similar to auditing practices in other industries, is essential for verifying AI processes and ensuring decisions are transparent, explainable, fair, and accountable. This builds trust, prevents bias, and enhances the use of AI across various sectors.
Trusted AI has the potential to transform both public and private sector services by making them more connected, personalised, and human-centred in design, thereby improving government-citizen interactions. On a global scale, initiatives such as the EU's AI Act pioneer legal frameworks for AI, promoting a strategic, people-first approach. This approach ensures that AI advancements foster innovation, trust, and safety, and that fundamental rights are protected through means of redress. Through such frameworks and measures, Trusted AI can lead the way to a future in which AI is developed and used in ways that benefit society.
GRC Implications and Best Practices
The governance, risk management, and compliance (GRC) implications of Trusted AI are significant. Organisations must implement robust governance frameworks to oversee AI deployments, ensuring they comply with legal and regulatory standards and align with ethical guidelines. Risk management practices must evolve to address the unique risks posed by AI, including algorithmic bias, ensuring traceability in data pipelines and oversight of AI usage in operational business flows. Compliance efforts must be proactive, with continuous monitoring and auditing of AI systems to ensure they adhere to evolving laws and regulations.

Best practices for integrating Trusted AI within GRC frameworks include:
- Ethical AI Guidelines: Developing and adhering to ethical guidelines for AI use that encompass fairness, accountability, transparency, explainability, and compliance with regulations such as the General Data Protection Regulation (GDPR).
- Transparency and Explainability: Ensuring AI systems are transparent in their operations and decisions and can be explained to stakeholders in understandable terms.
- Continuous Monitoring and Auditing: Implementing ongoing monitoring and auditing mechanisms to assess the performance and impact of AI systems, remediating issues so that they remain compliant and ethical over time.
- Stakeholder Engagement: Engaging with stakeholders, including customers, employees, and regulators, to foster trust and gather feedback on AI deployments.
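The monitoring and auditing practice above can be illustrated with a minimal sketch. The names here (AuditRecord, demographic_parity_gap, the 0.1 threshold, the sample data) are hypothetical choices for this example, not a mandated API or standard; real deployments would use an organisation's own logging infrastructure and policy-defined fairness metrics and thresholds.

```python
# Illustrative sketch: an audit log supporting traceability, plus a simple
# fairness check (demographic parity gap) over the logged decisions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """One logged decision, kept for traceability and later review."""
    model_version: str
    inputs: dict          # features the model saw
    output: str           # the decision or prediction made
    group: str            # protected attribute, used only for fairness auditing
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def demographic_parity_gap(records: list[AuditRecord],
                           positive_outcome: str) -> float:
    """Largest difference in positive-outcome rates between any two groups."""
    rates = {}
    for group in {r.group for r in records}:
        group_records = [r for r in records if r.group == group]
        positives = sum(1 for r in group_records if r.output == positive_outcome)
        rates[group] = positives / len(group_records)
    return max(rates.values()) - min(rates.values())

# Usage: log decisions as they are made, then audit periodically.
log = [
    AuditRecord("v1.2", {"income": 40000}, "approved", group="A"),
    AuditRecord("v1.2", {"income": 42000}, "approved", group="A"),
    AuditRecord("v1.2", {"income": 41000}, "rejected", group="B"),
    AuditRecord("v1.2", {"income": 39000}, "approved", group="B"),
]
gap = demographic_parity_gap(log, positive_outcome="approved")
if gap > 0.1:  # threshold chosen for illustration; set per organisational policy
    print(f"Fairness review needed: parity gap {gap:.2f}")
```

A gap above the chosen threshold would trigger the remediation activities described above, and the timestamped records give auditors the traceability that compliance reviews require.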
Trusted AI is evolving into a key component of GRC frameworks and represents a crucial paradigm shift in how AI technologies are developed, deployed, and governed. For the UK and the EU, embracing Trusted AI means not only adhering to high standards of ethics and compliance but also unlocking the transformative potential of AI across sectors.
As organisations move forward on their AI journey, the focus on Trusted AI will continue to shape the landscape of technology usage, ensuring that AI serves the common good, respects human rights, and operates within the bounds of law and ethics. Trusted AI is complex, but with the right frameworks and commitments it becomes a strategic imperative for both public and private sector organisations, one that can drive innovation, enhance trust, and ensure the responsible use of AI technologies while mitigating the associated risks.
Written by
Hugh Coughlan
CTO - Data and Applied Intelligence at Fujitsu