Important decisions are increasingly automated, delegated to algorithms that are susceptible to bias and other flaws. To guard against the problems associated with automated decisions, it is often suggested that there should be a "human-in-the-loop" (some form of human review), at least for high-stakes decisions. But human decision-making is susceptible to its own biases and flaws, as well as to external influence, which today includes a range of automated influences: targeted advertising, recommender systems, AI assistants, and digital nudges. Existing discussions tend to frame questions about these decision processes in binary terms (automated or not); this project instead aims to understand the normative implications of increasingly blended forms of human-machine decision-making. What ethics and policy questions are raised by decision-making systems that benefit from the strengths of both human and machine deciders, but are also subject to the weaknesses of each? How can ethics and policy guide a world in which individually and socially important decisions are reached by blended human-machine deciders?