Automated machine learning (AutoML) is emerging as a new ML paradigm that automates the pipeline from raw data to deployable ML models, enabling ordinary users to readily develop, deploy, and use ML techniques. Yet, despite its surging popularity, the security implications of AutoML remain largely unexplored. The goals of this proposal are to thoroughly investigate the potential security risks of AutoML and to develop rigorous yet easy-to-use mitigations that curb such risks without compromising the benefits of AutoML. To achieve these objectives, we will first explore, empirically and analytically, the risks of AutoML in terms of being vulnerable to malicious attacks (Type-I), being exploited as new attack vectors (Type-II), and being used to augment existing attacks (Type-III). We will then uncover the key factors contributing to such risks by investigating the fundamental design principles and practices of AutoML techniques. Leveraging these insights, we will design new principles, methodologies, and tools to mitigate the identified risks. Finally, we will implement all of the proposed techniques and system designs in a prototype testbed, which will provide a unique research facility for investigating the security and usability of AutoML.
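To make the "raw data to deployable model" claim concrete, the following is a minimal sketch of an AutoML workflow. The proposal does not name a specific framework; auto-sklearn is used here purely as one representative example, and the dataset and time budgets are illustrative.

```python
# Minimal AutoML sketch (assumes the auto-sklearn package is installed;
# any framework with a similar fit/predict interface illustrates the same point).
import autosklearn.classification
from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Raw data in, deployable model out: the search over preprocessing steps,
# model families, and hyperparameters is fully automated.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=120,  # total search budget in seconds (illustrative)
    per_run_time_limit=30,        # cap on any single candidate pipeline
)
automl.fit(X_train, y_train)

# The fitted object is an ensemble of the best pipelines found, ready to
# deploy; the user never specifies a model family or hyperparameters.
print(accuracy_score(y_test, automl.predict(X_test)))
```

Because the user delegates every design decision to the automated search, any weakness in that search (the focus of the Type-I through Type-III risks above) is inherited by the deployed model without the user's awareness.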