Zhining Zhang | 张芷宁

Hi! I am a third-year undergraduate in the School of EECS, Peking University.

I am advised by Prof. Yizhou Wang and Wentao Zhu. I spent a wonderful summer at Johns Hopkins University, where I was fortunate to be mentored by Prof. Tianmin Shu.

I am a member of the experimental Zhi Class.

I aspire to do interesting, insightful, and fundamental research. My current research interests lie in developing socially intelligent agents and exploring the science of LMs.


Experience
  • Peking University
    B.S. in Computer Science
    Sep. 2022 - Present
  • Johns Hopkins University
    Visiting Student
Jun. 2024 - Sep. 2024
Honors & Awards
  • Zhi-Class Scholarship
    2024
  • Peking University Scholarship
    2023
  • Award for Scientific Research, Peking University
    2023
News
2024

Jul 24: "Language Models Represent Belief of Self and Others" is presented at ICML 2024.
Publications - Not able to set up "Selected Pubs" yet :-)
AutoToM: Automated Bayesian Inverse Planning and Model Discovery for Open-ended Theory of Mind

Zhining Zhang*, Chuanyang Jin*, Mung Yao Jia*, Tianmin Shu (* equal contribution)

2025

We introduce AutoToM, an automated Bayesian Theory of Mind method for achieving open-ended machine ToM. Leveraging an LLM as the backend, AutoToM combines the robustness of Bayesian models with the open-endedness of language models, offering a scalable and interpretable approach to machine ToM.

Language models represent beliefs of self and others

Wentao Zhu, Zhining Zhang, Yizhou Wang

International Conference on Machine Learning (ICML), 2024

We investigate belief representations in LMs: we discover that the belief status of characters in a story is linearly decodable from LM activations. We further propose a way to manipulate LMs through the activations to enhance their Theory of Mind performance.
