Federated Learning (FL) is a popular decentralized machine learning paradigm in which local clients collaboratively train a global model without sharing their original data. Recently, distillation-based FL methods have attracted considerable attention for their superior performance over traditional parameter-averaging-based FL methods. Intuitively, they are believed to enjoy better robustness against backdoor attacks, as the distillation procedure could potentially disable the backdoor trigger. Yet few studies have formally examined the vulnerability of this new class of FL methods under backdoor attacks. In this proposal, the PI aims first to rigorously study the potential risks of distillation-based FL and propose more advanced backdoor attacks for evaluating these security risks, and then to study principled approaches to mitigating such risks through backdoor unlearning on the central server. This project will create new fundamental understanding of the vulnerabilities of distillation-based federated learning and help design and build more robust FL methods for security-critical applications.
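To make the contrast between the two aggregation families concrete, the following is a minimal NumPy sketch (not the proposal's actual method; the toy linear models and the "public" distillation set are illustrative assumptions). It shows parameter averaging, where the server averages client weights directly, versus distillation-based aggregation, where the server fits a student model to the clients' ensembled predictions on shared unlabeled data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting (illustrative, not the proposal's method): each client holds
# a linear "model", a weight matrix mapping 4 features to 3 class logits.
n_clients, d_in, d_out = 3, 4, 3
client_weights = [rng.normal(size=(d_in, d_out)) for _ in range(n_clients)]

# Parameter-averaging FL (FedAvg-style): the server averages raw weights.
avg_weights = np.mean(client_weights, axis=0)

# Distillation-based FL: the server never averages parameters. Instead,
# clients predict on a shared unlabeled "public" set, and the server
# trains a global student model to match the averaged (ensemble) logits.
public_x = rng.normal(size=(32, d_in))
ensemble_logits = np.mean([public_x @ w for w in client_weights], axis=0)

# For a linear student with squared loss, matching the ensemble logits
# has a closed form: solve public_x @ W ~= ensemble_logits for W.
student_w, *_ = np.linalg.lstsq(public_x, ensemble_logits, rcond=None)

# With linear models the two aggregations coincide; with nonlinear models
# they diverge, which is where differing backdoor robustness can arise.
print(np.allclose(student_w, avg_weights, atol=1e-6))
```

The key difference for backdoor analysis is that the server only ever sees client *predictions* on the public set, so a trigger pattern embedded in client weights must survive the distillation step to affect the global model.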