MIT AI Risk Repository: Latest Update
The MIT AI Risk Repository catalogs over 3,000 real-world AI incidents and organizes key risks into two taxonomies—causal and domain-specific. It highlights major concerns including AI safety failures, socioeconomic harms, discrimination, privacy breaches, malicious misuse, misinformation, and unsafe human interactions with AI.
Summary of https://airisk.mit.edu
This research paper and its accompanying materials establish the AI Risk Repository, a comprehensive resource for understanding and addressing risks from artificial intelligence.
The repository includes a database of over 3,000 real-world AI incidents, along with two taxonomies classifying AI risks: a causal taxonomy (by entity, intent, and timing) and a domain taxonomy (by seven broad domains and 23 subdomains).
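To illustrate how the two taxonomies jointly classify a single risk, here is a minimal Python sketch. The field names and example values are hypothetical and are not the repository's actual schema; they simply mirror the causal dimensions (entity, intent, timing) and the domain/subdomain structure described above.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical representation of one repository entry -- field names are assumptions,
# not the repository's official schema.
@dataclass
class RiskEntry:
    description: str
    # Causal taxonomy: who caused the risk, whether it was intentional, and when it arose.
    entity: str        # e.g. "Human" or "AI"
    intent: str        # e.g. "Intentional" or "Unintentional"
    timing: str        # e.g. "Pre-deployment" or "Post-deployment"
    # Domain taxonomy: one of the seven broad domains and 23 subdomains.
    domain: str        # e.g. "Privacy & Security"
    subdomain: Optional[str] = None

# Illustrative example only.
example = RiskEntry(
    description="Model leaks personally identifiable information from its training data.",
    entity="AI",
    intent="Unintentional",
    timing="Post-deployment",
    domain="Privacy & Security",
    subdomain="Compromise of privacy",
)
print(example)
```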
Based on the AI Risk Repository, here are the most frequently discussed AI risks, grouped by domain, with the share of source documents that mention each; a short sketch ranking them by coverage follows the list and the note below:
- AI System Safety, Failures & Limitations: the most frequently discussed domain, including these top risks:
  - AI pursuing its own goals in conflict with human goals or values: mentioned in 46% of the documents.
  - Lack of capability or robustness: mentioned in 59% of the documents.
- Socioeconomic & Environmental Harms: also frequently discussed, including:
  - Power centralization and unfair distribution of benefits: mentioned in 37% of the documents.
  - Increased inequality and decline in employment quality: mentioned in 34% of the documents.
- Discrimination & Toxicity: a frequently discussed domain, including:
  - Unfair discrimination and misrepresentation: mentioned in 63% of the documents.
- Privacy & Security: this domain includes:
  - Compromise of privacy by obtaining, leaking, or correctly inferring sensitive information: mentioned in 61% of the documents.
- Malicious Actors & Misuse: this domain includes:
  - Cyberattacks, weapon development or use, and mass harm: mentioned in 54% of the documents.
- Misinformation: this domain includes:
  - False or misleading information: mentioned in 39% of the documents.
- Human-Computer Interaction: this domain includes:
  - Overreliance and unsafe use: mentioned in 24% of the documents.
It is important to note that while these risks are the most frequently discussed in the source documents, less frequently discussed risks, such as AI welfare and rights or pollution of the information ecosystem and loss of consensus reality, may also be of significant importance.
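For a quick side-by-side view of the coverage figures quoted above, the minimal Python sketch below ranks the risks by the percentage of source documents that mention them. The labels are abbreviated from the subdomain names used in this summary, and the percentages are simply the figures cited in the list.

```python
# Document-coverage percentages quoted in this summary; labels are abbreviated.
risk_coverage = {
    "Unfair discrimination and misrepresentation": 63,
    "Compromise of privacy": 61,
    "Lack of capability or robustness": 59,
    "Cyberattacks, weapon development or use, and mass harm": 54,
    "AI pursuing its own goals in conflict with human goals or values": 46,
    "False or misleading information": 39,
    "Power centralization and unfair distribution of benefits": 37,
    "Increased inequality and decline in employment quality": 34,
    "Overreliance and unsafe use": 24,
}

# Rank the risks by how often they appear in the surveyed documents.
ranked = sorted(risk_coverage.items(), key=lambda kv: kv[1], reverse=True)
for rank, (risk, pct) in enumerate(ranked, start=1):
    print(f"{rank:>2}. {risk}: {pct}% of documents")
```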