MIT: The AI Agent Index
The MIT AI Agent Index is a public database that catalogs agentic AI systems—tools capable of planning and executing tasks with minimal human oversight—by detailing their technical components, applications, and risk management practices. It shows that most systems are developed in the USA, chiefly by companies working on software engineering, and that while many projects release open code and documentation, information on safety policies and external evaluations remains limited.
Summary of https://arxiv.org/pdf/2502.01635
The AI Agent Index is a newly created public database documenting agentic AI systems. These systems, which plan and execute complex tasks with limited human oversight, are increasingly being deployed in various domains.
The index details each system’s technical components, applications, and risk management practices, based on public data and developer input. An analysis of the entries shows that developers provide ample information about agentic systems’ capabilities and applications. However, the authors found limited transparency regarding safety and risk mitigation.
The authors aim to provide a structured framework for documenting agentic AI systems and to improve public awareness. The index sheds light on the geographical spread, the balance of academic versus industry development, the openness, and the risk management practices of agentic systems.
The five most important takeaways from the AI Agent Index are:
- The AI Agent Index is a public database designed to document key information about deployed agentic AI systems. It covers the system’s components, application domains, and risk management practices. The index aims to fill a gap by providing a structured framework for documenting the technical, safety, and policy-relevant features of agentic AI systems. The AI Agent Index is available at https://aiagentindex.mit.edu/.
- Agentic AI systems are being deployed at an increasing rate. The earliest deployments among systems meeting the inclusion criteria date back to early 2023, and approximately half of the indexed systems were deployed in the second half of 2024.
- Most indexed systems are developed by US-based companies specializing in software engineering and/or computer use. Out of the 67 agents, 45 were created by developers in the USA, and 74.6% specialize in either software engineering or computer use. While most agentic systems come from companies, a significant fraction come from academia: 18 (26.9%) are academic projects, while 49 (73.1%) are from companies (these percentages follow directly from the counts; see the sketch after this list).
- Developers are relatively forthcoming about usage and capabilities. The majority of indexed systems have released code and/or documentation: 49.3% release code, and 70.1% release documentation. Systems developed as academic projects are especially open, with 88.8% releasing code.
- There is limited publicly available information about safety testing and risk management practices. Only 19.4% of indexed agentic systems disclose a formal safety policy, and fewer than 10% report external safety evaluations. Most of the systems that have undergone formal, publicly reported safety testing come from a small number of large companies.
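For readers who want to check the arithmetic, here is a minimal Python sketch that reproduces the reported percentages from the underlying counts. The total of 67 agents, 45 US-based developers, and the 18/49 academic-versus-company split are stated above; the code, documentation, and safety-policy counts are back-calculated from the reported percentages, so treat those as assumptions rather than figures taken directly from the index.

```python
# Recompute the percentages reported in the AI Agent Index summary
# from the underlying counts (total: 67 indexed agents).

TOTAL_AGENTS = 67

counts = {
    "US-based developers": 45,        # stated above
    "academic projects": 18,          # stated above
    "company projects": 49,           # stated above
    "release code": 33,               # assumed: 49.3% of 67 rounds to 33
    "release documentation": 47,      # assumed: 70.1% of 67 rounds to 47
    "disclose a safety policy": 13,   # assumed: 19.4% of 67 rounds to 13
}

for label, n in counts.items():
    # e.g. "US-based developers: 45/67 = 67.2%"
    print(f"{label}: {n}/{TOTAL_AGENTS} = {n / TOTAL_AGENTS:.1%}")
```

Running this prints 26.9% for academic projects, 73.1% for company projects, 49.3% for code release, 70.1% for documentation, and 19.4% for safety policies, matching the figures in the list above.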