Zhining Zhang | 张芷宁

Hi! I am a fourth-year undergraduate in the School of EECS at Peking University.

I work at the intersection of cognitive science and language models. I hope to transform AI from a tool of interaction into a partner in symbiosis with humans.

I am fortunate to have worked with Professors Heng Ji, Tianmin Shu, and Yizhou Wang. I am also deeply grateful to my amazing mentors, Wentao Zhu and Chi Han, for their invaluable guidance and support along the way.


Experience
  • Peking University
    B.S. in Computer Science
    Sep. 2022 - Present
  • Johns Hopkins University
    Visiting Student
    Jun. 2024 - Sep. 2024
Honors & Awards
  • Zhi-Class Scholarship
    2024
  • Peking University Scholarship
    2023
  • Award for Scientific Research, Peking University
    2023
News
  • Sep 21, 2025: 'AutoToM: Scaling Model-based Mental Inference via Automated Agent Modeling' will be presented at NeurIPS 2025 as a Spotlight!
  • Jul 24, 2024: 'Language Models Represent Beliefs of Self and Others' was presented at ICML 2024.
Publications - Not able to set up "Selected Pubs" yet :-)
AutoToM: Scaling Model-based Mental Inference via Automated Agent Modeling

Zhining Zhang*, Chuanyang Jin*, Mung Yao Jia*, Shunchi Zhang*, Tianmin Shu (* equal contribution)

Spotlight, Annual Conference on Neural Information Processing Systems (NeurIPS), 2025

We introduce AutoToM, an automated agent modeling method for scalable, robust, and interpretable mental inference. Leveraging an LLM as the backend, AutoToM combines the robustness of Bayesian models with the open-endedness of language models, offering a scalable and interpretable approach to machine Theory of Mind (ToM).

Language models represent beliefs of self and others

Wentao Zhu, Zhining Zhang, Yizhou Wang

International Conference on Machine Learning (ICML), 2024

We investigate belief representations in language models: we find that the belief status of characters in a story is linearly decodable from LM activations. We further propose a way to steer LMs through these activations to enhance their Theory of Mind performance.
