Harvard Business School: Why Most Resist AI Companions
Research indicates that despite AI companions offering benefits like constant availability and non-judgment, people resist forming genuine relationships with them because they believe AI lacks the core emotional depth and mutual caring required for true interpersonal connections.
Summary
This working paper by De Freitas et al. investigates why people resist forming relationships with AI companions despite the companions' potential to alleviate loneliness. The authors find that although individuals acknowledge AI's superior availability and non-judgmental nature relative to humans, they do not consider relationships with AI to be "true" relationships, because they perceive AI as lacking essential qualities such as mutual caring and emotional understanding. Across several studies, the research shows that this resistance stems from the belief that AI cannot genuinely understand or feel emotions, which leads people to view AI relationships as one-sided.
Even direct interaction with AI companions only marginally increases acceptance by improving perceptions of superficial features, failing to alter deeply held beliefs about AI's inability to fulfill core relational values. Ultimately, the paper highlights significant psychological barriers hindering the widespread adoption of AI companions for social connection.
- People resist adopting AI companions even while acknowledging their superiority in certain relationship-relevant respects, such as constant availability and non-judgment. This resistance stems from the belief that AI companions cannot realize the essential values of relationships, such as mutual caring and emotional understanding.
- This resistance is rooted in a dual character concept of relationships, where people differentiate between superficial features and essential values. Even if AI companions possess the superficial features (e.g., constant availability), they are perceived as lacking the essential values (e.g., mutual caring), leading to the judgment that relationships with them are not "true" relationships.
- The belief that AI companions cannot realize essential relationship values is linked to perceptions of AI's deficiencies in mental capabilities, specifically the ability to understand and feel emotions, which are seen as crucial for mutual caring and thus for a relationship to be considered mutual and "true". Physical intimacy was not found to be a significant mediator in this belief.
- Interacting with an AI companion can increase willingness to engage with it for friendship and romance, primarily by improving perceptions of its advertised, more superficial capabilities (like being non-judgmental and available). However, such interaction does not significantly alter the fundamental belief that AI is incapable of realizing the essential values of relationships. The mere belief that one is interacting with a human (even when it's an AI) enhances the effectiveness of the interaction in increasing acceptance.
- The strong, persistent belief about AI's inability to fulfill the essential values of relationships represents a significant psychological barrier to the widespread adoption of AI companions for reducing loneliness. This suggests that the potential loneliness-reducing benefits of AI companions may be difficult to achieve in practice unless these fundamental beliefs can be addressed. The resistance observed in the relationship domain, where values are considered essential, might be stronger than in task-based domains where performance is the primary concern.