Obedience to Authority: The Legacy of Milgram’s Research
In 1961, as the world watched the trial of Nazi officer Adolf Eichmann, Yale psychologist Stanley Milgram designed an experiment to answer a profound question.
Would ordinary people hurt someone else simply because an authority figure told them to?
His research cut to the heart of human nature. In the aftermath of World War II, Milgram wanted to understand how institutional authority could override individual conscience. The resulting study would become one of psychology’s most influential and debated works.
The answer, it turned out, was yes.
Milgram’s findings shook the scientific world. Given the right conditions, most ordinary people would harm someone else if a person in authority told them to do so. This wasn’t just a one-time discovery. His team ran the experiment again and again, changing different factors each time. The results kept pointing to the same disturbing truth: when an authority figure gives an order, we listen.
The Experiment’s Elegant Design
Milgram’s experimental design showed exceptional methodological precision in isolating the variable of authority compliance. The study recruited participants through ostensibly neutral newspaper advertisements for a memory research project. A rigged drawing always assigned the real participant the role of ‘teacher’, while a confederate played the ‘learner’. This design was notable for three key innovations: the incremental nature of ethical transgression, the clear operational definition of obedience, and the standardization of authority presence.
The experimental apparatus employed calibrated psychological anchors through its voltage labeling system. The shock generator’s 30 switches progressed from 15 to 450 volts, with systematic qualitative designations rising from ‘Slight Shock’ (15–60 volts) through ‘Danger: Severe Shock’ (375–420 volts) to a stark ‘XXX’ (435–450 volts). This progression created a quantified moral gradient that participants could navigate incrementally.
The experimenter, dressed in a gray lab coat, had four specific prods to use if participants hesitated.
“Please continue.”
“The experiment requires that you continue.”
“It is absolutely essential that you continue.”
“You have no other choice. You must go on.”
Most people know the headline: 65% of people in Milgram’s experiment went all the way to the maximum shock. But that single number doesn’t tell the full story. Milgram ran 18 different versions of his experiment, each revealing something new about why people obey.
The most striking findings came when he changed how close people were to the victim and to the authority giving the orders. When participants had to physically force someone’s hand onto the shock plate, only 30% complied. When orders came over the telephone instead of from someone in the room, compliance dropped to 20.5%. But here’s the most revealing variation: when two other participants (actually actors) refused to continue, 90% of people joined their rebellion.
The further removed people were from seeing the impact of their choices, the more likely they were to obey.
Critics raised important questions about the study. Did participants see through the deception? Was the laboratory setting too artificial? And did the experimenter’s reassurances do more than press people to continue? Did they also strip away their sense of responsibility?
But real-world evidence supports Milgram’s insights. Hospitals found nurses following questionable orders from doctors. Companies discovered employees enforcing harmful policies. Modern research in medical ethics and corporate behavior keeps confirming what Milgram showed: when authority speaks, personal ethics often go quiet.
Modern Implications and Ethical Challenges
Modern authority has developed far beyond Milgram’s lab-coated experimenter, and in ways that make his findings even more relevant. Today’s authority isn’t just more complex; it’s fundamentally different in three crucial ways:
First, it’s invisible. Milgram’s participants could see their authority figure and read his body language. Today’s workers face an invisible mesh of algorithms, policies, and distributed decisions. A content moderator doesn’t argue with an AI system that flags content, or question the dataset it was trained on. They click ‘approve’ or ‘deny’ and move on.
Second, it’s automated. Traditional authority required human intermediaries who might show doubt or hesitation. Digital systems don’t hesitate. When an Amazon warehouse system sets a quota, it doesn’t care if a worker is tired or the weather is bad. The authority is often absolute and unquestionable, making it harder to resist than any human authority figure.
Third, it’s abstracted. Milgram’s participants heard screams through a wall. Today’s tech workers see spreadsheets, metrics, and KPIs. When Facebook engineers tweak the news feed algorithm, they don’t see the radicalization of users. Instead, they see ‘engagement metrics.’ This abstraction creates a psychological distance far more effective than Milgram’s wall.
These changes have created what psychologists call ‘distributed authority’: so many decision points scatter responsibility that no single person feels accountable. At Volkswagen, no single engineer fully owned the emissions cheat. Each one just wrote their piece of code, following specifications handed down through layers of management and automated testing systems.
This distribution of responsibility creates a new psychological mechanism: not just obedience to authority, but the dissolution of authority itself into a system where everyone and no one is responsible. Everyone is exercising authority and submitting to it at the same time.
Milgram showed how authority could make people act against their conscience. Modern systems have gone further — they’ve created structures where conscience struggles to find purchase at all. There’s no clear moment of moral choice, no single order to resist or obey. Just a continuous flow of small decisions, each seemingly insignificant, but collectively powerful.
Where We Are Today
Milgram showed us something specific. The more distance we put between people and the consequences of their actions, the easier it becomes for them to do harm. This matters because modern organizations, especially tech companies, are masters at creating this distance. We build systems that separate decisions from their effects, often without realizing it.
The next step isn’t to write better policies or give more ethics training. People at every level must see the impact of their choices. When a programmer writes code that affects millions of users, they should see how it changes real lives. When a CEO approves a thinking model, they should see how it can be misused. When a content moderator makes a decision, they should understand its ripple effects.
Milgram’s research wasn’t just about obedience — it was about how systems can be designed to either reveal or hide their true impact. That’s the lesson we still haven’t learned.