While humans are inherently social creatures, identifying when and how to assist and collaborate
with others - particularly when pursuing independent goals - is a challenge that can hinder cooperation. To
address this challenge, we aim to develop an AI system that provides useful feedback to promote prosocial
behaviour - actions that benefit others, even when not directly aligned with one's own goals. We introduce
ProToM, a Theory of Mind-informed facilitator that promotes prosocial actions in multi-agent systems by
providing targeted, context-sensitive feedback to individual agents. ProToM first infers agents' goals
using Bayesian inverse planning, then selects feedback to communicate by maximising expected utility,
conditioned on the inferred goal distribution. We evaluate our approach against baselines in two
multi-agent environments: Doors, Keys, and Gems, as well as Overcooked. Our results suggest that
state-of-the-art large language and reasoning models fall short of communicating feedback that is both
contextually grounded and well-timed - leading to higher communication overhead and lower success rates.
In contrast, ProToM provides targeted and helpful feedback, achieving a higher success rate and
shorter task completion times, and is consistently preferred by human users.
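To make the two-stage procedure above concrete, the following is a minimal Python sketch of goal inference via Bayesian inverse planning followed by expected-utility feedback selection. All interfaces here (`likelihood`, `utility`, the candidate messages) are hypothetical stand-ins for the paper's planners and environments, not the authors' implementation.

```python
# Minimal sketch of ProToM-style feedback selection.
# All interfaces below are hypothetical stand-ins, not the paper's code.

def infer_goal_posterior(trajectory, goals, prior, likelihood):
    """Bayesian inverse planning: P(g | traj) is proportional to
    P(traj | g) * P(g). `likelihood(trajectory, g)` stands in for a
    planner that scores how well the observed actions fit goal g."""
    unnorm = {g: prior[g] * likelihood(trajectory, g) for g in goals}
    z = sum(unnorm.values())
    return {g: w / z for g, w in unnorm.items()}


def select_feedback(posterior, candidates, utility):
    """Choose the candidate feedback message (with 'say nothing'
    modelled as None) that maximises expected utility under the
    inferred goal distribution."""
    def expected_utility(f):
        return sum(p * utility(f, g) for g, p in posterior.items())
    return max(candidates, key=expected_utility)


# Toy usage: two possible goals, uniform prior, a likelihood that
# favours goal "key" given the observed moves, and a utility table
# over messages (silence costs nothing; a misdirected hint is costly).
goals = ["key", "gem"]
prior = {"key": 0.5, "gem": 0.5}
likelihood = lambda traj, g: 0.8 if g == "key" else 0.2
posterior = infer_goal_posterior(["move_left"], goals, prior, likelihood)

candidates = [None, "the key is behind the red door"]
utility = lambda f, g: {
    (None, "key"): 0.0, (None, "gem"): 0.0,
    ("the key is behind the red door", "key"): 1.0,
    ("the key is behind the red door", "gem"): -0.3,
}[(f, g)]
print(select_feedback(posterior, candidates, utility))
```

Under these toy numbers the posterior puts 0.8 on "key", so the hint's expected utility (0.8 * 1.0 + 0.2 * -0.3 = 0.74) exceeds that of staying silent, and the message is communicated; with a flatter posterior, silence would win.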