Introducing the AI Impact Assessment Scale
As artificial intelligence (AI) continues to revolutionise our lives, we must embrace its potential while recognising the need for proper governance and oversight. AI technologies present novel challenges, and stakeholders must proactively address the risks and ethical questions they raise.
While Large Language Models (LLMs) race ahead and societies debate their impact, a practical gap has opened between regulatory aspiration and industry adoption. Working with customers, partners and industry specialists, we propose the AI Impact Assessment Scale (AIIAS) to bring consistency and transparency to the AI industry.
The Importance of Governance
AI governance ensures applications are designed and deployed responsibly, fairly and securely. Clear guidelines and frameworks promote accountability, transparency and ethical behaviour, preventing harm and building trust.
The Power of Self‑Regulation
Governments will play a crucial role, but industry‑led self‑regulation can be faster and more adaptive. By adopting ethical standards voluntarily, developers demonstrate commitment and maintain public confidence while policy catches up.
Why We Need a Standardised AI Assessment & Classification System
The AIIAS is a practical tool for self‑regulation. By providing a standardised way to assess and classify AI applications based on impact, exposure and risk of miscalculation, it empowers organisations to take responsibility for their creations.
Our proposal: organisations should openly communicate where their applications sit on the AIIAS, and users should be clearly informed of that classification.
The AI Impact Assessment Scale — Levels
| Level | Impact | Typical Examples |
| --- | --- | --- |
| AI‑1 | Low | Chatbots, weather apps, shopping/streaming recommenders |
| AI‑2 | Moderate | Content‑moderation systems, educational tutors |
| AI‑3 | Significant | Surveillance, facial recognition, credit‑scoring systems |
| AI‑4 | High | Autonomous weapons, deepfakes, data‑exploitation tools |
| AI‑5 | Extreme | AI‑driven cyber‑warfare or force‑directed weapons |
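To make the scale concrete, here is one possible way an organisation might encode the five levels in software. This is an illustrative sketch, not part of the AIIAS proposal itself; the `requires_extra_oversight` rule and its AI‑3 threshold are assumptions chosen purely for the example.

```python
from enum import Enum


class AIIASLevel(Enum):
    """Hypothetical encoding of the five AIIAS impact levels."""

    AI_1 = (1, "Low")
    AI_2 = (2, "Moderate")
    AI_3 = (3, "Significant")
    AI_4 = (4, "High")
    AI_5 = (5, "Extreme")

    @property
    def rank(self) -> int:
        # Numeric rank, useful for ordering and threshold checks.
        return self.value[0]

    @property
    def impact(self) -> str:
        # Human-readable impact label from the scale.
        return self.value[1]


def requires_extra_oversight(level: AIIASLevel,
                             threshold: AIIASLevel = AIIASLevel.AI_3) -> bool:
    """Illustrative policy: levels at or above the threshold trigger review."""
    return level.rank >= threshold.rank
```

An organisation adopting the scale could then write, for example, `requires_extra_oversight(AIIASLevel.AI_4)` to decide whether a deployment needs a formal review before launch.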
Non‑Negotiable Foundations for All AI Applications
- Transparency — communicate purpose, capabilities and limitations.
- Accountability — operators are responsible for performance and unintended consequences.
- Privacy & Security — protect user data with robust controls and monitoring.
- Fairness & Non‑discrimination — minimise bias and audit regularly.
- Human‑centric Design — prioritise user well‑being and allow human override.
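Because these foundations apply to every application regardless of level, an assessment record could require all five to be affirmed before a classification is published. The sketch below is one hypothetical implementation; the field names and the `ready_to_publish` rule are assumptions for illustration only.

```python
from dataclasses import dataclass, field

# The five non-negotiable foundations, as machine-readable keys (illustrative).
FOUNDATIONS = (
    "transparency",
    "accountability",
    "privacy_security",
    "fairness",
    "human_centric_design",
)


@dataclass
class AIIASAssessment:
    """Hypothetical record pairing an application's level with its foundations."""

    application: str
    level: str  # e.g. "AI-2"
    foundations: dict = field(default_factory=dict)

    def affirm(self, foundation: str) -> None:
        # Record that a foundation has been reviewed and satisfied.
        if foundation not in FOUNDATIONS:
            raise ValueError(f"Unknown foundation: {foundation}")
        self.foundations[foundation] = True

    def ready_to_publish(self) -> bool:
        """All five foundations must be affirmed, whatever the level."""
        return all(self.foundations.get(f) for f in FOUNDATIONS)
```

A team might create `AIIASAssessment("support-chatbot", "AI-1")`, affirm each foundation as it is reviewed, and only publish the classification once `ready_to_publish()` returns `True`.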
A Call to Action
To safeguard the future of AI for the greater good, AI engineers, business leaders and influencers must unite around governance and self‑regulation. Adopting the AIIAS and the principles above will help create a responsible AI ecosystem that benefits everyone.
Relying on government regulation alone will not accelerate AI innovation and adoption; industry must act in parallel.
I welcome feedback and discussion on this complex topic.