Trusted AI, Trustworthy AI, Responsible AI, and many other similar terms have emerged recently.
There seems to be a growing understanding that AI, while promising much more than in earlier booms, still has problems coming to light after:
- Discovery of biased data/algorithms/models.
- Research around adversarial attacks on AI systems.
- Recognition that there is a lack of understanding of what AI is and how ML models work.
- Investigations uncovering that some countries use AI to control population behavior and suppress civil rights.
To address these problems, companies and governance bodies are working to establish frameworks around AI principles.
Let us go through some examples.
AWS
- Fairness as a Process, Fairness and Explainability by Design in the ML Lifecycle — this is the strongest definition I have found among all the frameworks.
Image is courtesy of AWS:
- Data protection, Privacy and Security — recommendations based on the AWS Well-Architected Framework applied to AI.
- Explainability, Transparency — Amazon SageMaker Clarify provides tools to help explain how machine learning (ML) models make predictions.
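Explainability tools like Clarify typically report how much each input feature contributes to a model's predictions. As a rough illustration of the idea only (not Clarify's actual API), here is a pure-Python sketch of permutation importance on a toy model; the model, data, and names are all made up:

```python
import random

# A toy "model": approves a loan based only on income; age is ignored.
def model(row):
    income, age = row
    return 1 if income > 50 else 0

# Small synthetic dataset of (income, age) pairs.
random.seed(0)
data = [(random.uniform(20, 100), random.uniform(18, 70)) for _ in range(200)]
baseline = [model(r) for r in data]

def permutation_importance(feature_idx):
    """Shuffle one feature column and measure how often predictions flip."""
    column = [r[feature_idx] for r in data]
    random.shuffle(column)
    flips = 0
    for row, shuffled_value, base in zip(data, column, baseline):
        perturbed = list(row)
        perturbed[feature_idx] = shuffled_value
        if model(tuple(perturbed)) != base:
            flips += 1
    return flips / len(data)

print("income importance:", permutation_importance(0))  # large: predictions depend on income
print("age importance:", permutation_importance(1))     # 0.0: the model ignores age
```

A real tool computes more principled attributions (e.g., SHAP values), but the underlying question is the same: which inputs actually drive the model's decisions?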
The Linux Foundation AI & Data Trusted AI Committee
The Linux Foundation AI & Data Trusted AI Committee gathers many companies, such as IBM, Huawei, and Tencent.
Image is courtesy of https://lfaidata.foundation
The LF AI & Data Foundation Trusted AI principles are: Reproducibility, Robustness, Equitability, Privacy, Explainability, Accountability, Transparency, and Security.
Microsoft
Microsoft describes fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.
The United States House of Representatives
The United States House of Representatives, in Resolution 2231, establishes requirements for algorithmic accountability, addressing bias and discrimination, risk-benefit analysis and impact assessment, and issues of security and privacy.
The European Commission
The European Commission sets 7 key requirements: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability.
The Government of Canada
The government of Canada’s Algorithmic Impact Assessment tool defines similar areas of assessment.
Google
Google's AI principles require AI to be socially beneficial, avoid creating or reinforcing unfair bias, be built and tested for safety, be accountable to people, incorporate privacy design principles, uphold high standards of scientific excellence, and be made available for uses that accord with these principles.
Summary, in simple words
In my opinion, the most all-inclusive definition is provided by The Linux Foundation AI & Data Trusted AI Committee; a short explanation is below:
- Reproducibility — the ability of an independent team to replicate results in an equivalent AI environment.
- Robustness — the stability, resilience, and performance of the system throughout its lifecycle.
- Equitability — avoiding intended or unintended bias and unfairness.
- Privacy — AI systems should guarantee privacy and data protection throughout a system’s entire lifecycle.
- Explainability — the ability to describe how AI works, i.e., how it makes decisions.
- Accountability — the AI and the people behind it should explain, justify, and take responsibility for any decision and action made by the AI.
- Transparency — disclosure around AI systems to ensure that people understand AI-based outcomes, especially in high-risk AI domains.
- Security and safety — AI should be tested and assured across the entire lifecycle within an explicit and well-defined domain of use. In addition, any AI should be designed to safeguard the people it impacts.
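To make the equitability point concrete, here is a minimal sketch of one common fairness check, the demographic parity difference: the gap in positive-prediction rates between two groups. The predictions and group labels below are made up for illustration:

```python
def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        selected = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    values = list(rates.values())
    return abs(values[0] - values[1])

# Hypothetical model outputs (1 = approved) and group membership.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups      = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(predictions, groups)
print(f"demographic parity gap: {gap:.2f}")  # 0.00 would mean equal rates
```

A gap near zero does not prove a model is fair, but a large gap is a signal that the intended-or-unintended bias the Equitability principle warns about may be present.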
I expect that in a few years we will see an internationally adopted framework on the Principles of AI.
I hope this is a useful piece of information :)