U.S. Copyright Office: Copyright and Artificial Intelligence
The report explains that only works with sufficient human creative input are eligible for copyright protection. Purely AI-generated content lacks the human authorship copyright requires, but works that use AI as a tool, or that modify AI output, can be protected where human expression is perceptible. The Office maintains that existing copyright law is adequate to address these issues, emphasizing the central role of human creativity.
This report from the U.S. Copyright Office examines the intersection of copyright law and artificial intelligence (AI), specifically focusing on the copyrightability of AI-generated works. The report analyzes different levels of human involvement in AI-generated content, considering factors such as prompts, expressive inputs, and modifications.
It concludes that existing copyright law is sufficient to address these issues, emphasizing the crucial role of human authorship.
- Copyright law does not protect purely AI-generated works; protection requires sufficient human input, and the test is creative contribution, not mere effort.
- The "black box" nature of AI systems is a core issue. Even developers often don't know how AI models generate their outputs, making it difficult to claim human authorship over those outputs.
- Prompts, even detailed ones, usually do not provide enough control for copyright because the AI interprets and executes them in unpredictable ways. The system fills in the gaps, and the user's control is indirect, making it difficult to demonstrate sufficient closeness between "conception and execution."
- Iterative prompting (revising prompts and re-submitting) does not equate to copyrightable authorship. It is like "re-rolling the dice" and does not demonstrate control over the process.
- "Authorship by adoption," where someone claims ownership of an AI output just because they chose it, is generally not recognized. The act of selecting an AI-generated output from many options is not considered a creative act.
- Expressive inputs, like a user's own artwork used as a starting point for AI generation, can be protected. The copyright would cover the human expression that is perceptible in the final output.
- Modifying AI-generated content can create copyrightable material. This includes creative selection and arrangement, or making sufficient changes to the AI output.
- AI is a tool, and using it does not negate copyright if there is a sufficient human creative contribution.
- There is a concern that a flood of AI-generated outputs could undermine the incentive for humans to create.
- Many countries agree that copyright requires human authorship, but there is ongoing discussion about how to apply this principle to AI-generated works.
- There is debate over whether a sui generis right for AI-generated works is necessary. Most commenters opposed one, noting that AI systems do not need incentives to create.
- The Copyright Office is monitoring technological and legal developments to determine if conclusions need revisiting.
The report also explores international approaches to AI and copyright, noting a general consensus on the need for human authorship. Finally, it evaluates policy arguments for legal changes, ultimately recommending against legislative alterations.