Baruch College: Not all AI is Created Equal – A Meta-Analysis Revealing Drivers of AI Resistance Across Markets, Methods, and Time
The meta-analysis finds that while consumers show a slight overall aversion to AI (Cohen's d = -0.21), resistance is context-dependent, stronger for embodied forms such as robots and for high-risk domains. Negative evaluations have also decreased over time, and they are smaller in studies with greater ecological validity.
Summary of https://www.sciencedirect.com/science/article/pii/S0167811625000114
The paper presents a meta-analysis of two decades of studies examining consumer resistance to artificial intelligence (AI). The authors synthesize findings from hundreds of studies with over 76,000 participants, revealing that AI aversion is context-dependent and varies with the AI's label, application domain, and perceived characteristics.
Interestingly, the study finds that negative consumer responses have decreased over time, particularly for cognitive evaluations of AI. Furthermore, the meta-analysis indicates that research design choices influence observed AI resistance, with studies using more ecologically valid methods showing less aversion.
- Consumers exhibit an overall small but statistically significant aversion to AI (average Cohen's d = -0.21). On average, people respond more negatively to outputs or decisions labeled as coming from AI than to those labeled as coming from humans (see the Cohen's d refresher after this list).
- Consumer aversion to AI is strongly context-dependent, varying significantly with the AI label and the application domain. Embodied forms of AI, such as robots, elicit the most negative responses (d = -0.83), compared with AI assistants or mere algorithms. Domains involving higher stakes and risk, such as transportation and public safety, also trigger more negative responses than domains focused on productivity and performance, such as business and management.
- Consumer responses to AI are not static: they have become less negative over time, particularly for cognitive evaluations (e.g., judgments of performance or competence). While the excitement around generative AI in 2021 produced a near-null effect for cognitive evaluations, affective and behavioral responses remain significantly negative overall.
- The characteristics ascribed to AI significantly influence consumer responses. Negative responses are stronger when the AI is described as highly autonomous (d = -0.28), inferior in performance (d = -0.53), lacking human-like cues, i.e., anthropomorphism (d = -0.23), or failing to recognize the user's uniqueness (d = -0.24). Conversely, limiting AI autonomy, highlighting superior performance, incorporating anthropomorphic cues, and emphasizing uniqueness recognition can all alleviate AI aversion.
- The methodology used to study AI aversion shapes the findings. Studies with greater ecological validity, such as field studies and those using incentive-compatible designs, perceptually rich stimuli, clear explanations of the AI, and behavioral (rather than self-report) measures, document significantly smaller aversion to AI. This suggests that resistance documented in purely hypothetical lab settings may overestimate real-world aversion (a sketch of the meta-analytic pooling behind such comparisons follows this list).
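For readers less familiar with the metric: Cohen's d is the standardized mean difference between two conditions. As a refresher, the standard two-sample definition (not a formula quoted from the paper) is:

$$
d = \frac{\bar{x}_{\mathrm{AI}} - \bar{x}_{\mathrm{human}}}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
$$

A pooled d of -0.21 therefore means that AI-labeled outputs are rated about a fifth of a standard deviation lower than human-labeled ones; by Cohen's conventional benchmarks, a |d| near 0.2 counts as a small effect.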
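To make the meta-analytic averaging concrete, the sketch below pools per-study Cohen's d values with a standard DerSimonian-Laird random-effects estimator, the usual choice when effects vary across contexts as they do here. The study effect sizes and variances are hypothetical placeholders, not data from the paper.

```python
import math

# Hypothetical (d, variance) pairs for a handful of studies;
# NOT the effect sizes reported in the meta-analysis.
studies = [(-0.35, 0.020), (-0.10, 0.015), (-0.25, 0.030), (-0.05, 0.010)]

# Fixed-effect (inverse-variance) pooled estimate.
w = [1.0 / v for _, v in studies]
d_fixed = sum(wi * di for wi, (di, _) in zip(w, studies)) / sum(w)

# DerSimonian-Laird estimate of between-study heterogeneity (tau^2).
q = sum(wi * (di - d_fixed) ** 2 for wi, (di, _) in zip(w, studies))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

# Random-effects weights add tau^2 to each study's sampling variance.
w_re = [1.0 / (v + tau2) for _, v in studies]
d_re = sum(wi * di for wi, (di, _) in zip(w_re, studies)) / sum(w_re)
se_re = math.sqrt(1.0 / sum(w_re))

print(f"pooled d = {d_re:.2f}, 95% CI +/- {1.96 * se_re:.2f}")
```

Moderator contrasts such as robots vs. algorithms, or field vs. lab studies, then amount to running this pooling within each subgroup and comparing the resulting estimates.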