Harvard Business School: Why Most Resist AI Companions
Research indicates that despite AI companions offering benefits like constant availability and non-judgment, people resist forming genuine relationships with them because they believe AI lacks the core emotional depth and mutual caring required for true interpersonal connections.
Summary of https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5097445
This working paper by De Freitas et al. investigates why people resist forming relationships with AI companions despite their potential to alleviate loneliness. The authors find that although individuals acknowledge AI's superior availability and non-judgmental nature relative to humans, they do not consider relationships with AI to be "true" relationships, because AI is perceived to lack essential qualities such as mutual caring and emotional understanding. Across several studies, the research traces this resistance to the belief that AI cannot genuinely understand or feel emotions, which renders any relationship with it one-sided.
Even direct interaction with an AI companion increases acceptance only marginally, by improving perceptions of superficial features; it fails to shift the deeply held belief that AI cannot fulfill core relational values. The paper thus highlights significant psychological barriers to the widespread adoption of AI companions for social connection.
- People resist adopting AI companions even while acknowledging their superior capabilities in certain relationship-relevant respects, such as availability and non-judgment. This resistance stems from the belief that AI companions cannot realize the essential values of relationships, such as mutual caring and emotional understanding.
- This resistance is rooted in a dual-character concept of relationships, in which people distinguish superficial features from essential values. Even if AI companions possess the superficial features (e.g., constant availability), they are perceived to lack the essential values (e.g., mutual caring), leading to the judgment that relationships with them are not "true" relationships.
- The belief that AI companions cannot realize essential relationship values is linked to perceived deficiencies in AI's mental capabilities, specifically the ability to understand and feel emotions, which are seen as prerequisites for mutual caring and thus for a relationship to count as mutual and "true". Physical intimacy, by contrast, was not a significant mediator of this belief (the sketch after this list illustrates what such a mediation analysis looks like).
- Interacting with an AI companion can increase willingness to engage with it for friendship and romance, primarily by improving perceptions of its advertised, more superficial capabilities (such as being non-judgmental and available). However, such interaction does not significantly shift the fundamental belief that AI cannot realize the essential values of relationships. Merely believing one is interacting with a human (even when it is actually an AI) makes the interaction more effective at increasing acceptance.
- This strong, persistent belief about AI's inability to fulfill the essential values of relationships is a significant psychological barrier to the widespread adoption of AI companions for reducing loneliness. It suggests that the loneliness-reducing benefits of AI companions may be difficult to realize in practice unless these fundamental beliefs are addressed. Resistance in the relationship domain, where values are considered essential, may also be stronger than in task-based domains, where performance is the primary concern.
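To make the mediation finding above concrete, here is a minimal, hypothetical Python sketch of that kind of analysis on simulated data. The variable names (partner_is_ai, perceived_emotion, relationship_true) and effect sizes are illustrative assumptions, not the paper's actual measures; the sketch only shows how an indirect (a × b) effect through perceived emotional capability would be estimated, with a simple percentile bootstrap for its confidence interval.

```python
# Hypothetical sketch of a mediation analysis:
#   partner type (X) -> perceived emotional capability (M) -> "true" relationship judgment (Y)
# All data are simulated; variable names and effect sizes are illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Simulate: AI partners are rated lower on emotional capability, and that
# rating in turn drives how "true" the relationship is judged to be.
partner_is_ai = rng.integers(0, 2, n).astype(float)   # 0 = human, 1 = AI
perceived_emotion = 5.0 - 2.0 * partner_is_ai + rng.normal(0, 1, n)
relationship_true = (1.0 + 0.8 * perceived_emotion
                     - 0.2 * partner_is_ai + rng.normal(0, 1, n))

def indirect_effect(x, m, y):
    """a*b indirect effect: path a is X -> M; path b is M -> Y controlling for X."""
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]
    return a * b

# Point estimate plus a percentile bootstrap 95% confidence interval.
est = indirect_effect(partner_is_ai, perceived_emotion, relationship_true)
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(partner_is_ai[idx],
                                perceived_emotion[idx],
                                relationship_true[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {est:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

On these simulated numbers the bootstrap interval excludes zero, the pattern reported for emotional understanding as a mediator; a non-significant mediator such as physical intimacy would instead show an interval straddling zero.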