US companies are increasingly adopting AI hiring systems to automate talent acquisition tasks such as job advertising, resume filtering and evaluation, candidate selection and tracking, and interviewing. However, while new AI tools are disrupting the HR function, US business executives voice concern about the risks of (un)intended bias and the drastic impacts of such biases on job seekers, particularly those from minoritized populations (McKinsey, 2020). The purpose of this study is to use Gilliland’s (1993) model of procedural and distributive justice in selection systems to examine minoritized technology job seekers’ perspectives on fairness and equity in AI hiring systems. Procedural justice rules are used to measure an applicant’s fairness judgments about each interaction with a hiring company that occurs before, during, and after a personnel selection procedure. Distributive justice involves the applicant’s considerations about equity in the outcomes of the hiring decision. Undergraduate students in computer and information science majors who are members of a minoritized group (women, racial and ethnic minorities, and international students) and actively seeking an internship or career opportunity will participate in focus groups and weekly journaling. Understanding diversity imbalances in the technology workforce and the potential harms caused by AI hiring systems is critical for directing equitable policies for the design and use of these systems, determining potential types and sources of injustice embedded in the predictions made by machine learning algorithms, and identifying the impacts of these systems on minoritized technology job seekers’ hiring experiences (Yarger, Payton, & Neupane, 2020).