Authors
Upol Ehsan, Q Vera Liao, Michael Muller, Mark O Riedl, Justin D Weisz
Publication date
2021/5/6
Book
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems
Pages
1-19
Description
As AI-powered systems increasingly mediate consequential decision-making, their explainability is critical for end-users to take informed and accountable actions. Explanations in human-human interactions are socially-situated. AI systems are often socio-organizationally embedded. However, Explainable AI (XAI) approaches have been predominantly algorithm-centered. We take a developmental step towards socially-situated XAI by introducing and exploring Social Transparency (ST), a sociotechnically informed perspective that incorporates the socio-organizational context into explaining AI-mediated decision-making. To explore ST conceptually, we conducted interviews with 29 AI users and practitioners grounded in a speculative design scenario. We suggested constitutive design elements of ST and developed a conceptual framework to unpack ST’s effect and implications at the technical, decision-making …
Total citations
2021: 24 · 2022: 77 · 2023: 140 · 2024: 52 (per-year citation histogram)
Scholar articles
U Ehsan, QV Liao, M Muller, MO Riedl, JD Weisz - Proceedings of the 2021 CHI conference on human …, 2021