FIVE MOST SURPRISING FINDS
Ranked by how hard they are to explain away
5
Black journalists make up 7% of newsroom staff — half their share of the U.S. population. At the editorial decision-making level, the percentage is lower still. The people who would pitch Black achievement stories are not in the room. American Society of News Editors, Annual Newsroom Employment Survey
4
When Safiya Umoja Noble searched for “Black girls” on Google in 2011, the top results were pornographic. When she searched for “Black men,” the results emphasized criminality. The algorithm did not create racism. It reflected it, amplified it, and made it infrastructure. Noble, Algorithms of Oppression, NYU Press, 2018
3
Viewers who saw a Black crime suspect in a news segment were more likely to support harsher sentencing, express negative racial views, and misremember the suspect’s race. The distortion is not passive. It actively reshapes belief. Gilliam & Iyengar, American Journal of Political Science, 2000
2
Americans overestimate the proportion of crime committed by Black people by 20 to 30 percentage points. This is not individual ignorance. It is the predictable output of a media system that overrepresents Black criminality at every stage of the pipeline. Surveys of news consumers; Dixon & Linz, Journal of Communication, 2000
1
Black crime stories generate 6X the engagement of Black achievement stories — and the algorithm learns from that ratio. No engineer wrote the code to amplify Black criminality. The engagement-maximization objective function produces the same result automatically, billions of times per day. Diakopoulos, Automating the News, Harvard University Press, 2019

Nobody at Google, Facebook, Apple News, or any of the other platforms that now control the distribution of journalism in America sat down at a conference table and said: when a Black person appears in a news story, show the crime stories more than the achievement stories. No engineer wrote that line of code. No product manager approved that specification. No executive signed off on a policy memo titled “Amplify Black Criminality.”

And yet the system produces exactly this result, every hour of every day, across billions of news impressions served to hundreds of millions of users, with a consistency that would be impressive if it were intentional and is terrifying precisely because it is not.

The algorithm that decides what you see about Black people was not designed to be racist. It was designed to maximize engagement. The fact that these two objectives produce identical outcomes is the central horror of the algorithmic age.

How the Machine Learns to Distort

To understand how this works, you must first understand what a recommendation algorithm is and what it optimizes for. When you open Google News, Apple News, Facebook’s news feed, or any algorithmically curated news platform, you are not seeing “the news.” You are seeing a personalized selection of stories. A machine learning model chose those stories based on a single goal: engagement — the probability that you will click, read, share, or comment on a given story (Diakopoulos, Automating the News, Harvard University Press, 2019).

The model learned from billions of data points. It learned with ruthless efficiency that negative, threatening content gets more clicks than positive or complex content. This is not a new observation. The old newsroom saying “if it bleeds, it leads” predates the internet by decades. But the algorithmic era turned a human bias into a machine-optimized loop. It operates at a scale and speed no human process could match.

A newspaper editor who leads with a crime story is making a single decision that affects a single edition. An algorithm that prioritizes crime stories is making millions of decisions per second, each one reinforcing the pattern that will inform the next million decisions. The bias does not diminish over time. It compounds.
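The compounding dynamic described above can be made concrete with a toy simulation. Everything here is illustrative, not a real platform's code: the click rates are hypothetical, chosen only so that crime-framed stories click roughly six times more often than achievement stories, mirroring the engagement ratio cited in this article.

```python
import random

random.seed(42)

# Toy click model: crime-framed stories click ~6x more often than
# achievement stories (illustrative numbers, not measured rates).
TRUE_CTR = {"crime": 0.12, "achievement": 0.02}

# The ranker's learned weights start neutral.
scores = {"crime": 1.0, "achievement": 1.0}

def serve(n_impressions: int) -> None:
    """Serve impressions in proportion to learned weights; reward clicks."""
    for _ in range(n_impressions):
        total = sum(scores.values())
        # Probability of showing each story type tracks its learned weight.
        story = "crime" if random.random() < scores["crime"] / total else "achievement"
        if random.random() < TRUE_CTR[story]:
            scores[story] += 1.0  # every click is a training signal

for day in range(1, 6):
    serve(10_000)
    share = scores["crime"] / sum(scores.values())
    print(f"day {day}: crime stories' share of ranking weight = {share:.0%}")
```

Even starting from a neutral 50/50 split, the crime category's share of ranking weight climbs day over day: each click shifts the serving distribution, which generates more clicks in the same category, which shifts the distribution further. The bias is nowhere in the code; it emerges from the loop.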

[Chart] Engagement Disparity: Black Crime vs. Black Achievement Stories. Crime stories: 6X engagement; achievement stories: 1X baseline. (Engagement data synthesis; Diakopoulos, Harvard University Press, 2019)

The Overrepresentation Machine

In 2000, Travis Dixon and Daniel Linz published a study that should have changed the way every newsroom in America operates and instead changed nothing. Their content analysis of local television news in Los Angeles found that Black people were significantly overrepresented as perpetrators of crime relative to their actual share of arrests, and significantly underrepresented as victims. White people, conversely, were underrepresented as perpetrators and overrepresented as victims (Dixon & Linz, Journal of Communication, 50(2), 2000).

The distortion was not subtle. It was systematic, consistent across stations and time periods, and measurable with statistical precision.

“Americans overestimate the proportion of crime committed by Black people by 20 to 30 percentage points. This is not a failure of individual perception. It is the predictable result of a media system that shows a reality where Black crime is wildly overrepresented.”
— Surveys of news consumers; Dixon & Linz, 2000

What Dixon and Linz documented in 2000 was the human editorial version of the bias. Editors and producers, driven by the same engagement logic that algorithms would later automate, chose which crimes to cover and how to cover them. Their choices consistently overrepresented Black criminality. But they were constrained by human limits. They could only produce so many broadcasts per day. They could only cover so many stories. And they were at least theoretically subject to professional norms, audience feedback, and regulatory oversight.

The algorithm has no such limits. It processes millions of stories daily. It runs nonstop, without fatigue, conscience, or any mechanism for self-correction. It does not know what a Black person is. It does not know what crime is. It knows that stories with certain words and images get more clicks, so it spreads those stories wider. The result: the human editorial bias that Dixon and Linz documented has been automated, amplified, and distributed at a scale that makes local television news bias look quaint by comparison.

“The algorithm was not designed to be racist. It was designed to maximize engagement. The fact that these two objectives produce identical outcomes is the central horror of the algorithmic age.”

The Perception Distortion

Franklin Gilliam and Shanto Iyengar demonstrated experimentally what every Black person already knew: crime coverage does not just inform viewers, it changes them. In their studies, viewers who saw a Black suspect in a news segment were more likely to support punitive crime policies, express negative racial attitudes, and misremember the suspect’s race (Gilliam & Iyengar, American Journal of Political Science, 44(3), 2000).

[Chart] Perception vs. Reality: Crime Attribution by Race. Perceived Black share of crime: ~50%; actual share: ~27%; overestimation gap: +23 points. (Survey data; Dixon & Linz, 2000; FBI UCR)

The magnitude of the distortion is staggering. Surveys of news consumers consistently show that Americans overestimate the proportion of crime committed by Black people by 20 to 30 percentage points. This is not a failure of individual perception. It is the predictable result of a media system that shows a reality where Black crime is wildly overrepresented — and in the algorithmic era, this distortion has been industrialized.

Safiya Umoja Noble, in her foundational work Algorithms of Oppression, documented how search engines and recommendation systems reproduce and amplify racial stereotypes (Noble, Algorithms of Oppression, NYU Press, 2018). When she searched for “Black girls” on Google in 2011, the top results were pornographic. When she searched for “Black men,” the results emphasized criminality. These were not editorial choices. They came from a system that learned from millions of users what people wanted to see.

The algorithm did not create racism. It reflected racism, amplified it, and spread it so widely that it became part of the infrastructure of information itself.

The Feedback Loop That Shapes Policy

The consequences of algorithmic news bias extend far beyond individual perception. They shape policy. They shape elections. They shape the allocation of public resources.

“If you’re not careful, the newspapers will have you hating the people who are being oppressed, and loving the people who are doing the oppressing.”
— Malcolm X

This is the dangerous feedback loop: the system does not just reflect reality. It shapes reality, then reflects that shaped reality, which shapes the next round. Biased coverage makes biased algorithms. Biased algorithms make biased perceptions. Biased perceptions make biased policy. Biased policy makes biased outcomes. Outcomes make more biased coverage.

The loop has no natural termination point. In systems theory, this is called a positive feedback loop — a cycle that amplifies its own signal until the distortion becomes indistinguishable from reality.
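The loop's dynamics can be sketched as a one-line recurrence. The gain values below are hypothetical, chosen only to contrast an amplifying loop with a corrective one:

```python
def run_loop(initial_skew: float, gain: float, rounds: int) -> list[float]:
    """Iterate coverage -> perception -> policy -> coverage as one gain factor.

    gain > 1 models a positive feedback loop: distortion amplifies each round.
    gain < 1 models a corrective mechanism: the same distortion decays.
    """
    skew = initial_skew
    history = [skew]
    for _ in range(rounds):
        skew = min(1.0, skew * gain)  # cap at total distortion
        history.append(skew)
    return history

amplified = run_loop(initial_skew=0.05, gain=1.5, rounds=10)
corrected = run_loop(initial_skew=0.05, gain=0.7, rounds=10)
print(f"gain 1.5: {amplified[0]:.2f} -> {amplified[-1]:.2f}")
print(f"gain 0.7: {corrected[0]:.2f} -> {corrected[-1]:.2f}")
```

With any gain above 1, a 5% initial skew saturates at the cap within a handful of rounds; with a gain below 1, the identical starting bias shrinks toward zero. The point of the sketch is structural: without a correction term somewhere in the cycle, the loop has no termination point short of saturation.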

The Strongest Counterargument — and Why the Data Defeats It

“The algorithm is neutral. It simply reflects what users want. If people click on crime stories more than achievement stories, that is a demand problem, not a supply problem. You cannot blame the mirror for the face.”

Three problems.

First: The algorithm is not a mirror. A mirror shows you what is in front of it. An engagement-maximizing algorithm shows you what will make you click — and then reshapes its universe of content to produce more of it. It does not passively reflect demand; it actively manufactures it (Diakopoulos, Automating the News, Harvard University Press, 2019).

Second: The “users want it” defense ignores how the initial training data was generated. Human newsrooms were already biased toward Black crime stories before the algorithm existed (Dixon & Linz, 2000). The algorithm did not learn from neutral data. It learned from biased data and optimized the bias.

Third: By the same logic, casinos are “neutral” because gamblers choose to gamble. The entire field of behavioral economics exists because humans are predictably irrational — and systems designed to exploit that irrationality bear responsibility for the exploitation.

The Newsroom Desert

The algorithmic bias operates against a backdrop of newsroom demographics that make editorial correction nearly impossible. According to the most recent data from the American Society of News Editors, Black journalists make up approximately 7% of newsroom staff at major outlets — a number that has barely moved in two decades. At the editorial decision-making level — the editors, producers, and executives who decide which stories to pursue and how to frame them — the percentage is lower still.


[Chart] Newsroom Demographics vs. U.S. Population. Black journalists: 7%; Black share of U.S. population: 13.6%; non-Black newsroom staff: 93%. (American Society of News Editors, Annual Survey)

This matters because the human editorial decisions that feed the algorithm are made by newsrooms that lack the perspectives to recognize the bias. A newsroom that is 93% non-Black is less likely to question why a Black crime story is being covered while a white crime story of equal severity is not. It is less likely to pursue stories about Black achievement, innovation, community building, or policy success. The people who would pitch those stories, who would recognize their newsworthiness, who would fight for them in editorial meetings, are not in the room.

The algorithm then amplifies the already-biased output of these already-unrepresentative newsrooms, creating a distribution system that compounds the original bias at every stage of the pipeline.

Nicholas Diakopoulos documents how the shift from human editorial judgment to algorithmic curation — letting software decide what stories you see — created a system where platform profits override journalistic values (Diakopoulos, Automating the News, Harvard University Press, 2019). A newspaper editor who consistently overrepresented Black criminality could be challenged by colleagues and criticized by readers. An algorithm doing the same thing is protected as a trade secret. It is hidden by complexity and defended by companies that dismiss criticism as a misunderstanding of technology.

“A newspaper editor who overrepresents Black criminality can be held accountable. An algorithm doing the same is protected as a trade secret, hidden by complexity, and defended by companies that dismiss criticism as a misunderstanding of technology.”

The Puzzle and the Solution

The Puzzle

How does a system with no conscious racial intent produce outcomes indistinguishable from a system designed to amplify Black criminality and suppress Black achievement — and how do you dismantle a bias that has no author?

A puzzle master looks at that system and identifies the variable that creates the distortion. The algorithm is not racist. The objective function is. “Maximize engagement” is an instruction that, when applied to a society with pre-existing racial biases, produces a machine that automates, amplifies, and scales those biases beyond any human capacity to correct them manually.

The Solution

Change the objective function. Mandate that engagement optimization be constrained by representational accuracy — and put the audit mechanism in public hands, not corporate ones.
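One way to formalize that constraint, as a sketch: subtract a penalty proportional to how far a story mix drifts from a real-world reference rate. The penalty weight and the example numbers below are hypothetical; in practice the reference rate would come from the kind of independent audit proposed above.

```python
def constrained_score(engagement: float, crime_share: float,
                      baseline_share: float, penalty: float = 12.0) -> float:
    """Score a candidate story mix: raw engagement minus a
    representational-accuracy penalty.

    `crime_share` is the fraction of crime-framed stories the mix serves;
    `baseline_share` is the audited real-world reference rate it must track.
    The penalty weight is illustrative; it only needs to be large enough
    that overrepresentation is expensive to the optimizer.
    """
    distortion = abs(crime_share - baseline_share)
    return engagement - penalty * distortion

# Under the constrained objective, a high-engagement but distorted mix
# (6X engagement, 75% crime framing vs. a ~27% reference rate) loses
# to an accurate, lower-engagement one.
distorted = constrained_score(engagement=6.0, crime_share=0.75, baseline_share=0.27)
accurate = constrained_score(engagement=1.0, crime_share=0.27, baseline_share=0.27)
print(f"distorted mix: {distorted:.2f}, accurate mix: {accurate:.2f}")
```

The design point is that the constraint lives in the objective function itself, not in after-the-fact moderation: the optimizer is never asked to be fair, only to discover that distortion no longer pays.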

Five Solutions That Match the Scale of the Problem

1. The Personal Algorithm Audit. For one month, manually track every news story about Black individuals that your feeds serve you. Categorize each one: crime/perpetrator, crime/victim, politics, culture, science/achievement, or ordinary life. Calculate the percentage. If crime-centric stories exceed 30% — which they will — you have quantified the poison in your own information stream.

2. The Deliberate Engagement Counter-Offensive. Weaponize your own clicks with strategic intent. For every crime-based headline you see, actively seek out and deliberately engage with — click, read in full, share — three stories of Black achievement, governance, or cultural contribution.

3. Financial Defunding of the Distortion Engine. Identify the three news sources most frequently served to you that rely on crime-centric headlines about Black people. Cancel any subscriptions, remove them from your news apps, and block them via browser extensions.

4. Demand Transparent, Auditable Objective Functions. The call for “algorithmic transparency” is vague and useless. The specific demand: Congress must legislate that any platform distributing news at scale must publicly declare the weight given to “negative valence” in its engagement model and submit to quarterly, independent audits of its output by racial category.

5. Build and Use Curation-Free Zones. Create one hour of your media consumption that is entirely algorithm-free. This means directly navigating to the websites of specific, vetted publications — The 19th, Capital B, local Black newspapers — and reading section-by-section, not story-by-story recommendations.
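The audit in solution 1 reduces to a simple tally. A minimal sketch, using a hypothetical month of tracked stories tagged with the categories defined in that step:

```python
from collections import Counter

# Hypothetical tracked log: one entry per story your feeds served,
# tagged by category as in solution 1.
log = [
    "crime/perpetrator", "crime/perpetrator", "politics", "crime/victim",
    "culture", "crime/perpetrator", "science/achievement", "ordinary life",
    "crime/perpetrator", "crime/victim",
]

counts = Counter(log)
crime_total = sum(n for category, n in counts.items()
                  if category.startswith("crime"))
crime_pct = 100 * crime_total / len(log)

for category, n in counts.most_common():
    print(f"{category:22s} {n:3d}  ({100 * n / len(log):.0f}%)")
print(f"crime-centric share: {crime_pct:.0f}%  (audit threshold: 30%)")
```

In this invented log, 6 of 10 stories are crime-framed, so the crime-centric share is 60%, double the 30% threshold. A real audit works the same way, just with your own month of data.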

The Bottom Line

The numbers tell a story that no corporate deflection can override:

The algorithm was not designed to be racist. It was designed to maximize engagement — and it learned, with inhuman efficiency, that the most engaging story about a Black person is the story that confirms the worst stereotypes about Black people. Every click on a crime headline is a vote for more crime headlines. Every share is a training signal. Every second of attention is a data point that teaches the machine to produce more of what it has already decided you want.

The system is not broken. It is working perfectly. And that is the most dangerous sentence in this article — because a broken system can be fixed, but a system working as designed must be redesigned. The question is not whether the algorithm is biased. The question is whether you are willing to stop feeding it.