Hi! I am a fourth-year undergraduate student in the School of EECS, Peking University.
I work at the intersection of cognitive science and language models. I hope to transform AI from a tool of interaction into a partner in symbiosis with humans.
I am fortunate to have worked with Professors Heng Ji, Tianmin Shu, and Yizhou Wang. I am also deeply grateful to my amazing mentors, Wentao Zhu and Chi Han, for their invaluable guidance and support along the way.
") does not match the recommended repository name for your site ("
").
", so that your site can be accessed directly at "http://
".
However, if the current repository name is intended, you can ignore this message by removing "{% include widgets/debug_repo_name.html %}
" in index.html
.
",
which does not match the baseurl
("
") configured in _config.yml
.
baseurl
in _config.yml
to "
".
‘AutoToM: Scaling Model-based Mental Inference via Automated Agent Modeling’ will be presented at NeurIPS 2025 as a Spotlight!
‘Language Models Represent Belief of Self and Others’ was presented at ICML 2024.
Zhining Zhang*, Chuanyang Jin*, Mung Yao Jia*, Shunchi Zhang*, Tianmin Shu (* equal contribution)
Spotlight, Annual Conference on Neural Information Processing Systems (NeurIPS), 2025
We introduce AutoToM, an automated agent modeling method for scalable, robust, and interpretable mental inference. Leveraging an LLM as the backend, AutoToM combines the robustness of Bayesian models with the open-endedness of language models, offering a scalable and interpretable approach to machine Theory of Mind (ToM).
Wentao Zhu, Zhining Zhang, Yizhou Wang
International Conference on Machine Learning (ICML), 2024
We investigate belief representations in LMs: we discover that the belief status of characters in a story is linearly decodable from LM activations. We further propose a way to manipulate these activations to enhance LMs' Theory of Mind performance.