John Gakhokidze
3 min read · Nov 3, 2021

Trusted AI, Trustworthy AI, Responsible AI, and other similar terms have been emerging recently.

There seems to be a growing understanding that AI, while promising much more than in earlier booms, still has problems coming to light, following:

  • Discovery of biased data/algorithms/models.
  • Research around adversarial attacks on AI systems.
  • Recognition that there is a lack of understanding of what AI is and how ML models work.
  • Investigations uncovering that some countries are using AI to control population behavior and to suppress civil rights.

To overcome and fight these problems, companies and governance bodies are working to establish frameworks around AI principles.

Let us go through some examples.

AWS

AWS defines its principles of AI in several documents, such as the Machine Learning Lens of the AWS Well-Architected Framework, the Amazon SageMaker Developer Guide, and other AI services documentation.

Image courtesy of AWS.

The Linux Foundation AI & Data Trusted AI Committee

The Linux Foundation AI & Data Trusted AI Committee gathers many companies, such as IBM, Huawei, and Tencent.

Image courtesy of https://lfaidata.foundation

The LF AI & Data Foundation's Trusted AI principles are: Reproducibility, Robustness, Equitability, Privacy, Explainability, Accountability, Transparency, and Security.

Microsoft

Microsoft describes six responsible AI principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

The United States House of Representatives

The United States House of Representatives, in H.R. 2231 (the Algorithmic Accountability Act of 2019), proposes requirements for algorithmic accountability, addressing bias and discrimination, risk-benefit analysis and impact assessment, and issues of security and privacy.

The European Commission

The European Commission sets out seven key requirements: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability.

The Government of Canada

The Government of Canada's Algorithmic Impact Assessment tool defines similar areas of assessment.

Google

Google's AI principles require AI to be socially beneficial; avoid creating or reinforcing unfair bias; be built and tested for safety; be accountable to people; incorporate privacy design principles; uphold high standards of scientific excellence; and be made available for uses that accord with these principles.

Summary, in simple words

In my opinion, the most all-inclusive definition is provided by the Linux Foundation AI & Data Trusted AI Committee; a short explanation is below:

  1. Reproducibility: the ability of an independent team to replicate the system's results in an equivalent AI environment.
  2. Robustness: the stability, resilience, and performance of the system throughout its lifecycle.
  3. Equitability: avoiding intended or unintended bias and unfairness (a small code sketch after this list illustrates a basic check).
  4. Privacy: AI systems should guarantee privacy and data protection throughout the system's entire lifecycle.
  5. Explainability: the ability to describe how the AI works, i.e., how it makes decisions.
  6. Accountability: the AI and the people behind it should explain, justify, and take responsibility for any decision and action made by the AI.
  7. Transparency: disclosure around AI systems to ensure that people understand AI-based outcomes, especially in high-risk AI domains.
  8. Security and safety: AI should be tested and assured across the entire lifecycle within an explicit and well-defined domain of use. In addition, any AI should be designed to safeguard the people it impacts.
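
To make a couple of these principles a bit more concrete, here is a minimal Python sketch, not taken from any of the frameworks above: the data, the group attribute, and the threshold are all hypothetical. It fixes random seeds in the spirit of reproducibility and computes a simple demographic parity difference as one basic equitability check:

```python
import random

import numpy as np

# Reproducibility: fix random seeds so an independent team can
# replicate the same results in an equivalent environment.
SEED = 42
random.seed(SEED)
np.random.seed(SEED)

def demographic_parity_difference(y_pred, group):
    """Equitability: a basic fairness metric comparing the rate of
    positive predictions between two groups (labeled 0 and 1).
    A value near 0 suggests the model treats both groups similarly
    on this one axis."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical example: binary model predictions and a binary
# group attribute for 1,000 individuals.
predictions = np.random.randint(0, 2, size=1000)
groups = np.random.randint(0, 2, size=1000)

gap = demographic_parity_difference(predictions, groups)
print(f"Demographic parity difference: {gap:.3f}")
# A team might flag the model for review if the gap exceeds an
# agreed threshold, e.g., 0.1; the threshold is a policy choice.
```

Of course, a single metric like this does not make a system equitable; real frameworks call for assessing multiple metrics, the data, and the deployment context together.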

I expect that in a few years we will see an internationally adopted framework on the principles of AI.

Hope this is a useful piece of information :)
