Moderation plays an indispensable role in governing user-generated content and shaping our online experience. Most moderation systems follow a punitive logic, penalizing users through measures such as content removal and account suspension. However, these systems share a common drawback: they do little to help the users they punish. Punished users, who sometimes receive only a brief explanation from the platform, are left on their own to cope with the punishment, for example to understand which rules they violated and to figure out how to improve their future behavior. This creates substantial challenges for punished users, especially newcomers who do not fully understand platform policies, in becoming socialized into online communities. This proposal aims to design a social learning mechanism that supports punished users. The mechanism will encourage dialogue between punished users and community members, giving the former opportunities to evaluate their past behavior against community norms and platform policies and to learn from old-timers. The proposed social learning approach could strengthen the existing moderation paradigm by incorporating restorative values and, in turn, contribute to the well-being of online communities.