Anomaly detection is a common task in real-world cybersecurity. Neural network-based anomaly detection methods achieve state-of-the-art performance by exploiting non-linear relationships in the data. However, the adversarial robustness of these algorithms, i.e., whether a small perturbation of a test sample can cause the anomaly detector to fail, has not yet received sufficient attention. This project proposes to understand adversarial robustness in anomaly detection by devising algorithms that exploit this weakness and by developing principled ways to mitigate the resulting risk. It also proposes solutions for handling discrete or categorical data, which are common in real-world applications. The proposed research could lead to substantially more reliable anomaly detection algorithms for cybersecurity in practice.
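To make the threat model concrete, the sketch below shows one simple way such an evasion attack can work. It is an illustrative assumption, not the project's proposed algorithm: the detector is a linear (PCA-subspace) reconstruction-error scorer fit on normal data, and the attack is an FGSM-style signed-gradient step that nudges an anomalous point toward a low anomaly score.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "normal" training data lying near a 5-dimensional subspace.
Z = rng.normal(size=(500, 5))
A = rng.normal(size=(5, 10))
X = Z @ A + 0.05 * rng.normal(size=(500, 10))

# Fit the detector: top-5 principal directions of the normal data.
W = np.linalg.svd(X.T @ X)[0][:, :5]          # (10, 5), orthonormal columns

def score(x):
    """Anomaly score: squared reconstruction error off the PCA subspace."""
    r = x - W @ (W.T @ x)                     # residual (I - W W^T) x
    return float(r @ r)

# An anomalous test point with a large component outside the subspace.
x_anom = 3.0 * rng.normal(size=10)

# FGSM-style evasion: the gradient of ||(I - W W^T) x||^2 w.r.t. x is
# 2 (I - W W^T) x, so step against its sign to shrink the score.
eps = 0.5
grad = 2 * (x_anom - W @ (W.T @ x_anom))
x_adv = x_anom - eps * np.sign(grad)

print(score(x_anom), score(x_adv))            # adversarial score is lower
```

The same idea carries over to neural detectors by replacing the closed-form gradient with backpropagation through the network's anomaly score; handling discrete or categorical inputs, where signed-gradient steps are not directly applicable, is exactly one of the challenges the project targets.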