Zhining Zhang*, Chuanyang Jin*, Mung Yao Jia*, Tianmin Shu (* equal contribution)
2025
We introduce AutoToM, an automated Bayesian Theory of Mind method for achieving open-ended machine ToM. Leveraging an LLM as the backend, AutoToM combines the robustness of Bayesian models with the open-endedness of language models, offering a scalable and interpretable approach to machine ToM.
Wentao Zhu, Zhining Zhang, Yizhou Wang
International Conference on Machine Learning (ICML), 2024
We investigate belief representations in LMs: we discover that the belief status of characters in a story is linearly decodable from LM activations. We further propose a way to manipulate these activations to enhance LMs' Theory of Mind performance.