
Meet Qiaosi Wang, also known as Chelsea, a Ph.D. candidate specializing in Human-Centered Computing at Georgia Tech, where she works under AI-ALOE Director Ashok Goel in the Design & Intelligence Lab (DILab) within the GVU Center. Chelsea’s research sits at the intersection of human-AI interaction, cognitive science, and Computer-Supported Cooperative Work (CSCW). Her work with AI-ALOE centers on the Mutual Theory of Mind framework, which draws inspiration from humans’ innate ability to infer what is happening in others’ minds (known as “Theory of Mind”). The framework aims to improve mutual understanding between humans and AI during their interactions.
The Mutual Theory of Mind Framework in Human-AI Interaction
By Qiaosi Wang
I’m a researcher at AI-ALOE, where my work sits at the intersection of human-AI interaction, cognitive science, and computer-supported cooperative work. My research aims to develop the Mutual Theory of Mind framework, which helps humans and AI better understand each other during their conversations.
Many researchers at AI-ALOE are dedicated to creating personalized and adaptive AI agents for online education. These AI agents play various roles, such as helping students understand the agent’s abilities, assisting confused students, and reaching out to socially isolated students.
To do this effectively, AI agents need to understand the complexities of human thought, similar to how humans use Theory of Mind to make guesses about each other’s thoughts based on words and actions. My work on the Mutual Theory of Mind framework guides the design of more personalized, ethical, and human-centered AI in online education.
My journey into this research began in 2019, when we deployed an AI agent named Jill Watson to answer questions about courses at Georgia Tech. We noticed that students had varying perceptions of Jill’s capabilities, which led us to the idea of having Jill detect and clarify these perceptions. That idea became our CHI 2021 paper, which showed that we could predict how students perceived Jill based on the length and sentiment of their questions. This work laid the foundation for the Mutual Theory of Mind framework, and we’ve been exploring human-AI interactions through this lens ever since.
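To give a flavor of the idea, here is a minimal, purely illustrative sketch of predicting a student’s perception of an agent from the length and sentiment of a question. The word lists, feature choices, and decision rule are all hypothetical stand-ins, not the actual model from the CHI 2021 paper.

```python
# Hypothetical sketch, NOT the CHI 2021 model: estimate whether a student
# perceives the agent as "human-like" from two simple question features --
# word count and a naive lexicon-based sentiment score.

# Toy sentiment lexicons (illustrative, not from the paper).
POSITIVE = {"thanks", "great", "please", "appreciate", "helpful"}
NEGATIVE = {"confused", "wrong", "unclear", "stuck", "frustrated"}

def features(question: str) -> tuple[int, int]:
    """Return (word_count, naive_sentiment) for a question."""
    words = [w.strip("?.!,").lower() for w in question.split()]
    length = len(words)
    sentiment = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return length, sentiment

def perceived_as_humanlike(question: str) -> bool:
    """Toy decision rule: longer, more positively worded questions are
    treated as a sign the student perceives the agent as human-like."""
    length, sentiment = features(question)
    return length > 8 and sentiment >= 0
```

A real system would learn such a mapping from labeled data rather than hand-set thresholds, but the sketch shows how even shallow signals in students’ messages can carry information about their perceptions.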
Outside of my research, I enjoy outdoor activities like hiking and bouldering to clear my mind, but sometimes my cat, Gouda, keeps me at home, which I don’t mind at all!