and many other similar terms have been emerging recently.

It seems that an understanding is forming: AI, while promising much more than in earlier booms, still has problems, which have come to light after:

  • The discovery of biased data, algorithms, and models.
  • Research on adversarial attacks against AI systems.
  • Recognition that there is a lack of understanding of what AI is and how ML models work.
  • Investigations uncovering that some countries use AI to control population behavior and to suppress civil rights.
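To make the first bullet a bit more concrete, here is a minimal, illustrative sketch of one common bias check, the demographic parity gap: the difference in positive-outcome rates between two groups. The function name, data, and group labels are all hypothetical; real audits use dedicated toolkits and domain-specific fairness definitions.

```python
# Toy demographic-parity check (illustrative only).

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-outcome rates between groups A and B.

    predictions: list of 0/1 model decisions
    groups:      list of group labels ("A" or "B"), one per prediction
    """
    rate = {}
    for g in ("A", "B"):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rate[g] = sum(outcomes) / len(outcomes)
    return abs(rate["A"] - rate["B"])

# Hypothetical decisions: group A is approved 75% of the time, group B only 25%.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)
print(f"demographic parity gap: {gap:.2f}")  # prints 0.50; a large gap flags potential bias
```

A single metric like this never proves a model is fair, but a large gap is exactly the kind of signal that triggered the discoveries mentioned above.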

To overcome these problems, companies and governance bodies are working to establish frameworks around AI principles.

Let us go through some examples.

AWS

defines its principles of AI in several documents, such as the Machine Learning Lens of the AWS Well-Architected Framework, the AWS SageMaker Developer Guide, and the documentation of its other AI services.

(Image courtesy of AWS.)

The Linux Foundation AI & Data Trusted AI Committee

brings together many member companies and organizations.

(Image courtesy of https://lfaidata.foundation.)

Its Trusted AI principles are: Reproducibility, Robustness, Equitability, Privacy, Explainability, Accountability, Transparency, and Security.

Microsoft

describes fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

The United States House of Representatives

in Resolution 2231 establishes requirements for algorithmic accountability, addressing bias and discrimination, risk-benefit analysis and impact assessment, and issues of security and privacy.

The European Commission

sets out 7 key requirements: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability.

The Government of Canada

Algorithmic Impact Assessment tool defines similar areas of assessment.

Google

AI principles define the following requirements: AI should be socially beneficial; avoid creating or reinforcing unfair bias; be built and tested for safety; be accountable to people; incorporate privacy design principles; uphold high standards of scientific excellence; and be made available for uses that accord with these principles.

Summary - in simple words

The most all-inclusive definition is provided, in my opinion, by the Linux Foundation AI & Data Trusted AI Committee; a short explanation is below:

  1. Reproducibility: the ability of an independent team to replicate the results in an equivalent AI environment.
  2. Robustness: the stability, resilience, and performance of the system throughout its lifecycle.
  3. Equitability: avoiding intended or unintended bias and unfairness.
  4. Privacy: AI systems must guarantee privacy and data protection throughout the system's entire lifecycle.
  5. Explainability: the ability to describe how the AI works, i.e., how it makes decisions.
  6. Accountability: the AI, and the people behind it, must explain, justify, and take responsibility for any decision and action made by the AI.
  7. Transparency: disclosure around AI systems to ensure that people understand AI-based outcomes, especially in high-risk AI domains.
  8. Security: the security and safety of AI should be tested and assured across the entire lifecycle within an explicit and well-defined domain of use. In addition, any AI should be designed to safeguard the people it impacts.
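As a toy illustration of the first principle, reproducibility: one small but necessary ingredient is pinning the random seeds, so that an independent team re-running the same code on the same data gets identical results. The function, dataset, and seed value below are hypothetical, just a sketch of the idea.

```python
import random

# Fixing and recording the seed is one prerequisite of reproducibility:
# the same seed, code, and data should yield identical results on every run.
SEED = 42  # arbitrary; what matters is that it is recorded and reused

def sample_training_batch(data, batch_size, seed=SEED):
    """Draw a deterministic 'random' batch from the dataset."""
    rng = random.Random(seed)  # local RNG, so other code is unaffected
    return rng.sample(data, batch_size)

data = list(range(100))
first_run  = sample_training_batch(data, 5)
second_run = sample_training_batch(data, 5)
assert first_run == second_run  # same seed -> same batch, run after run
print(first_run)
```

Of course, full reproducibility also requires versioned data, pinned library versions, and documented hardware; seeding alone only removes one source of nondeterminism.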

I expect that in a few years we will see an internationally adopted framework on the Principles of AI.

Hope this is a useful piece of information :)
