Lingqing Wang presented research titled “Explainable AI for Daily Scenarios from End-Users’ Perspective: Non-Use, Concerns, and Ideal Design” at the CHI ’25 conference, held in April 2025 in Yokohama, Japan. The study examines how everyday users perceive and interact with explainable AI (XAI), emphasizing adoption barriers and design preferences.
Abstract Overview
According to the paper, Wang and colleagues investigated authentic end-user attitudes toward XAI in daily contexts. The study tested how 87 participants responded to AI explanations across everyday scenarios and found that comprehensibility was the property users valued most; by contrast, commonly recommended features such as contrastivity often had negative effects. Together, these findings indicate that end users do not readily accept XAI as it is currently designed.
Key Findings and Implications
The study argues that AI systems should be designed with human values at the center, noting that explanations affect individuals and society in non-neutral ways. It recommends a “reverse engineering” approach that starts from user goals and works back to technical design, offering guidance for building AI tools that people can understand and trust in daily life.
Significance at CHI ’25
Presented as part of the CHI ’25 proceedings, this work underscores a shift in XAI research toward human-centered design and contributes to ongoing efforts to bridge the gap between algorithmic explainability and practical, everyday usability.