Hi! I am a third-year undergraduate in the School of EECS at Peking University.
I am advised by Prof. Yizhou Wang and Wentao Zhu. I spent a wonderful summer at Johns Hopkins University, where I was fortunate to be mentored by Prof. Tianmin Shu.
I am a member of the experimental Zhi Class.
I aspire to do interesting, insightful, and fundamental research. My current research interests lie in developing socially intelligent agents and exploring the science of language models (LMs).
") does not match the recommended repository name for your site ("
").
", so that your site can be accessed directly at "http://
".
However, if the current repository name is intended, you can ignore this message by removing "{% include widgets/debug_repo_name.html %}
" in index.html
.
",
which does not match the baseurl
("
") configured in _config.yml
.
baseurl
in _config.yml
to "
".
‘Language Models Represent Belief of Self and Others’ was presented at ICML 2024.
Zhining Zhang*, Chuanyang Jin*, Mung Yao Jia*, Tianmin Shu (* equal contribution)
2025
We introduce AutoToM, an automated Bayesian Theory of Mind (ToM) method for achieving open-ended machine ToM. Leveraging an LLM as the backend, AutoToM combines the robustness of Bayesian models with the open-endedness of language models, offering a scalable and interpretable approach to machine ToM.
Wentao Zhu, Zhining Zhang, Yizhou Wang
International Conference on Machine Learning (ICML), 2024
We investigate belief representations in LMs and find that the belief status of characters in a story is linearly decodable from LM activations. We further propose a way to manipulate these activations to enhance LMs' Theory of Mind performance.