Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor is a groundbreaking work by political scientist and technologist Virginia Eubanks. Published in 2018, the book examines the rise of automated decision-making systems in public service sectors such as welfare, housing, and child protection. Eubanks argues that these technologies disproportionately target and penalize poor and marginalized communities, reinforcing systemic inequality rather than alleviating it. The book is a cornerstone in the emerging field of algorithmic justice and is highly relevant to contemporary debates in digital governance, surveillance, and critical criminology.
Key Points
Automating Inequality by Virginia Eubanks (photo: Sebastiaan ter Burg, Utrecht, The Netherlands, CC BY 2.0, via Wikimedia Commons)
Main Author: Virginia Eubanks
First Published: 2018
Country: United States
Discipline: Digital Justice, Welfare Policy, Criminology
Key Ideas: Data-driven discrimination, digital welfare state, algorithmic bias
Foundation for: Debates on predictive algorithms, social inequality, and automated surveillance
Main Arguments
1. Digital Poorhouses: Eubanks introduces the concept of the "digital poorhouse" to describe how new technologies replicate and intensify longstanding forms of institutional discrimination against the poor. By automating welfare eligibility, housing applications, and risk assessments in child protective services, the state increasingly treats vulnerable populations as data problems to be managed rather than as people with rights and needs.
2. Case Studies of Structural Violence: The book is structured around three detailed case studies: Indiana’s failed attempt to automate welfare eligibility, Los Angeles’ coordinated entry system for allocating housing resources to the unhoused, and Allegheny County’s predictive risk model for screening child maltreatment referrals. These examples show how digital systems can be inaccurate, opaque, and discriminatory—leading to wrongful denials of benefits or unwarranted state intervention in families (see the sketch following this list).
3. Algorithmic Bias and Social Sorting: Eubanks critiques the assumption that automated systems are neutral or objective. She argues that they encode historical bias, reflect unequal power structures, and exacerbate existing inequalities by using predictive models based on flawed or incomplete data. This leads to a form of automated profiling that disproportionately affects the poor, especially women and people of color.
4. Techno-Solutionism and Political Accountability: The book criticizes the belief that complex social problems like poverty can be solved through technical means alone. Eubanks emphasizes that these systems often shield decision-makers from public accountability, eroding democratic oversight and public trust. Rather than solving inequality, technology often obscures and amplifies it.
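The Indiana case illustrates a general failure mode: once eligibility is reduced to rigid machine rules, any gap in the data becomes grounds for denial. The following minimal Python sketch shows how a "failure to cooperate" rule can turn a single missing document into an automatic, unexplained denial. All names, rules, and thresholds here are invented for illustration; this is not the actual Indiana system Eubanks describes.

```python
# Hypothetical sketch of a rigid automated eligibility check.
# All names, rules, and thresholds are invented for illustration;
# this is not the actual Indiana system Eubanks describes.
from dataclasses import dataclass, field

REQUIRED_DOCUMENTS = {"id_proof", "income_statement", "residence_proof"}

@dataclass
class Application:
    applicant_id: str
    monthly_income: float
    documents: set = field(default_factory=set)

def decide(app: Application, income_limit: float = 1200.0) -> str:
    # Any missing document is collapsed into "failure to cooperate":
    # the rule cannot distinguish a lost fax from an actual refusal,
    # and no human step exists to ask why a document is absent.
    missing = REQUIRED_DOCUMENTS - app.documents
    if missing:
        return f"DENIED: failure to cooperate (missing: {sorted(missing)})"
    if app.monthly_income > income_limit:
        return "DENIED: income above threshold"
    return "APPROVED"

# One misplaced document ends the case, with no explanation the
# applicant can act on and no negotiation or appeal built in.
print(decide(Application("A-001", 900.0, {"id_proof", "income_statement"})))
```

The point of the sketch is structural: the denial is formally correct by the system's own rules, which is exactly what makes it so difficult for applicants to contest.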
Techno-Solutionism vs. Algorithmic Justice: Eubanks’ Critique of Silicon Valley Ideology
Virginia Eubanks’ Automating Inequality fundamentally challenges the prevailing narrative promoted by major technology corporations such as Google, Meta, and OpenAI—namely, that algorithms and artificial intelligence can solve complex social problems more efficiently and fairly than human decision-makers. This belief, often referred to as techno-solutionism, assumes that technological innovation is inherently beneficial and politically neutral. In sharp contrast, Eubanks offers a techno-realist and justice-oriented critique that exposes the dangers of automating public services without addressing the underlying social inequalities these systems are meant to manage.
The Promises of Techno-Solutionism
Big Tech companies often present digital technologies as powerful tools to improve government services, including welfare administration, predictive policing, or educational planning. Their promises include:
- Efficiency: Algorithms are said to process cases faster and more accurately than human bureaucrats.
- Neutrality: Machines are assumed to be free of personal bias, offering “objective” decisions based purely on data.
- Cost-effectiveness: Automated systems can reduce administrative overhead and target public funds more precisely.
- Scalability: Algorithms can be deployed across large populations, enabling personalized, data-driven governance.
These narratives reinforce a view of technology as apolitical and impartial—an instrument that can optimize society while transcending the messy limitations of human judgment.
Tech Companies and the Automation of Public Services
1. Infrastructure and Cloud Providers
Amazon Web Services (AWS): Hosts cloud infrastructure for state governments, including health and human services. AWS supports data warehousing and identity verification tools used in welfare automation.
Microsoft: Provides platforms like Dynamics 365 for digital case management and eligibility tracking, e.g., in US child protective services and UK welfare reform initiatives.
Google / Alphabet: Through the Alphabet subsidiary Sidewalk Labs, the company has promoted “smart city” infrastructure in which public service management is automated through sensors, analytics, and behavioral modeling.
2. Predictive Analytics and Risk Scoring
Palantir: Supplies predictive analytics software (e.g., Gotham) to agencies such as the U.S. Immigration and Customs Enforcement (ICE) and child welfare departments. Used for risk assessment and fraud detection in public assistance.
IBM: Developed tools for predictive policing and automated eligibility assessments in unemployment and benefits systems (e.g., in Indiana’s failed automated welfare eligibility project highlighted by Eubanks).
Northpointe (now Equivant): Known for the COMPAS algorithm, originally used in criminal justice but conceptually similar to risk-scoring tools adopted in welfare settings.
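A common thread in these tools is that the "risk" label a model learns from is typically a record of past system contact (reports, investigations, arrests) rather than of behavior itself. The following minimal Python sketch, using entirely invented data and neighborhood labels, illustrates this proxy-label problem: the score ends up measuring surveillance intensity, not risk.

```python
# Hypothetical sketch of how a risk score learned from proxy labels
# reproduces unequal surveillance. All data are invented.
from collections import defaultdict

# Historical records: (neighborhood, was_investigated).
# Neighborhood "A" is heavily surveilled and so generates far more
# investigation records, regardless of underlying behavior.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 10 + [("B", False)] * 90)

def fit_rates(records):
    """Learn P(investigated | neighborhood) from past records."""
    counts = defaultdict(lambda: [0, 0])  # hood -> [investigated, total]
    for hood, investigated in records:
        counts[hood][0] += int(investigated)
        counts[hood][1] += 1
    return {hood: inv / total for hood, (inv, total) in counts.items()}

# The label the model learns from is itself a product of surveillance,
# so identical families receive very different scores.
rates = fit_rates(history)
for hood in ("A", "B"):
    print(f"neighborhood {hood}: predicted risk {rates[hood]:.2f}")
# neighborhood A: predicted risk 0.80
# neighborhood B: predicted risk 0.10
```

This is the feedback loop Eubanks and other critics describe: more scrutiny produces more records, which produce higher scores, which in turn justify more scrutiny.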
3. Surveillance and Profiling Tools
Thomson Reuters: Offers CLEAR, a powerful investigative platform used for background checks by welfare fraud units and law enforcement. Criticized for enabling surveillance of marginalized populations.
LexisNexis Risk Solutions: Provides social service agencies with identity verification and fraud detection services, including behavioral analysis based on credit and location data.
4. Data Management and Case Systems
Oracle: Delivers software solutions for Medicaid eligibility, benefits disbursement, and child support enforcement. States like Texas and Florida rely on Oracle infrastructure to manage high-volume caseloads.
SAS Institute: Specializes in analytics tools for detecting irregularities in public benefits systems, often used in overpayment investigations and performance auditing.
5. General-Purpose AI
OpenAI: While not yet a core actor in welfare automation, models like ChatGPT and Codex are being tested for administrative assistance, eligibility pre-screening, and chatbot-based guidance in public agencies. This raises concerns about bias, opacity, and unregulated use in high-stakes domains.
Eubanks’ Counter-Position: Algorithms Reproduce and Obscure Inequality
Eubanks directly counters this optimistic narrative by showing that automated decision-making in public services often deepens structural inequalities rather than remedying them. Her ethnographic research reveals that:
- Automated systems are not neutral, but reflect the biases and assumptions of the institutions and designers that create them.
- Digital tools tend to target the poor and marginalized, turning welfare systems into mechanisms of surveillance and exclusion.
- Rather than eliminating bias, algorithms often mask discrimination behind a façade of objectivity, making it harder to contest unjust outcomes.
- Automated systems typically disempower vulnerable populations, stripping them of the ability to negotiate, explain, or appeal decisions that impact their lives.
In this view, automation is not a technological fix for poverty, but a political instrument that reinforces social hierarchies under the guise of innovation. Eubanks argues that we are witnessing not a democratization of welfare through technology, but its bureaucratic hardening through predictive analytics and risk scoring.
Two Visions of the Future
At its core, the disagreement between Silicon Valley and Eubanks is not just about the capabilities of machines—it is about competing visions of society:
| Silicon Valley (Techno-Solutionism) | Virginia Eubanks (Techno-Realism) |
|---|---|
| Social problems are primarily technical in nature and can be solved through better algorithms. | Social problems are political and structural, requiring democratic engagement and redistribution—not automation. |
| Data and AI produce neutral, rational outcomes free from human bias. | Data systems encode and reproduce historical injustices, often invisibly. |
| Efficiency and scalability are the primary goals of modern governance. | Equity, accountability, and dignity should be the core values of social programs. |
| Technology empowers citizens by improving service delivery. | Technology often disciplines and punishes the poor by imposing rigid, opaque rules. |
Implications for Policy and Democracy
Eubanks’ intervention reminds us that technology is never neutral: it is always embedded in political, economic, and cultural contexts. When used uncritically, algorithms risk amplifying the very problems they claim to solve—entrenching poverty, racial discrimination, and bureaucratic opacity under a digital guise. Her work urges policymakers to resist technocratic fixes and instead invest in deliberative, inclusive, and human-centered approaches to public administration.
In this light, the book serves as both a warning and a call to action: a warning against the seductive promises of algorithmic governance, and a call for a justice-based technological future where the needs of marginalized communities come before the imperatives of efficiency and control.
Theoretical Relevance
Surveillance Studies: Eubanks’ work contributes to surveillance scholarship by expanding the analysis of digital control beyond national security or policing into the realm of everyday bureaucratic practices. Her notion of the digital poorhouse builds on earlier concepts such as Oscar Gandy’s “panoptic sort.”
Oscar Gandy – The Panoptic Sort (1993)
In The Panoptic Sort: A Political Economy of Personal Information, communication scholar Oscar H. Gandy Jr. examines the systematic collection, classification, and use of personal data by governmental and corporate institutions. The “Panoptic Sort” refers to an algorithmic sorting system that evaluates individuals based on their data profiles to determine access to resources, control, and surveillance. Gandy argues that this form of informational discrimination reinforces existing social inequalities and produces new forms of exclusion. His work is considered foundational in the field of Surveillance Studies and has significantly influenced debates around algorithmic justice, digital power structures, and automated social control.
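To make Gandy's mechanism concrete, here is a small Python sketch of panoptic sorting: individuals are scored on routine data traces, and the score determines how an institution treats them. All fields, weights, and thresholds are hypothetical, chosen only to show the sorting logic.

```python
# Hypothetical sketch of Gandy's "panoptic sort": ranking people by a
# composite data-profile score and gating access accordingly.
# All fields, weights, and thresholds are invented for illustration.
profiles = [
    {"name": "P1", "credit": 720, "address_changes": 1, "prior_flags": 0},
    {"name": "P2", "credit": 580, "address_changes": 5, "prior_flags": 2},
]

def profile_score(p: dict) -> int:
    # Arbitrary weights: the point is that mundane data traces
    # (credit, residential mobility, past flags) become a gate to services.
    return p["credit"] - 20 * p["address_changes"] - 50 * p["prior_flags"]

# Sorting determines treatment: the same bureaucracy offers smooth
# service to one profile and extra scrutiny to the other.
for p in sorted(profiles, key=profile_score, reverse=True):
    tier = "standard service" if profile_score(p) >= 650 else "extra screening"
    print(f"{p['name']}: score {profile_score(p)} -> {tier}")
```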
Critical Criminology: The book aligns with critical criminological perspectives by exposing how social control and punishment are embedded in institutional routines. It shows how digital tools reframe social welfare as a system of risk management, control, and exclusion.
Social Inequality and Technocracy: Eubanks links the rise of automated decision-making to broader neoliberal transformations. These systems depersonalize and depoliticize poverty, shifting responsibility from structural reform to individualized risk prediction and punishment.
Impact and Reception
Automating Inequality was widely praised for its empirical depth and moral clarity. It contributed to a growing awareness of algorithmic injustice and inspired both academic research and policy debates. Eubanks’ work has been influential in social work, public administration, data ethics, and criminology. The book also helped to popularize a critical vocabulary—such as “automated bias,” “digital welfare state,” and “techno-determinism”—now central to discussions of digital governance.
Some critics have pointed out that the book is more journalistic than theoretical, and that its case studies are US-centric. However, its warnings about algorithmic harm and the erosion of rights in digital welfare regimes are increasingly echoed in European and global contexts.
Connections to Other Theories
- Oscar Gandy – The Panoptic Sort: A foundational work on algorithmic sorting and discrimination, which Eubanks builds upon.
- Ruha Benjamin – Race After Technology: Extends the discussion of algorithmic bias by linking it explicitly to racism and structural inequality.
- Jonathan Simon – Governing Through Crime: Eubanks’ work complements Simon’s thesis by showing how crime control logics extend into welfare systems via data technologies.
References
- Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin’s Press.
- Gandy, O. H. (1993). The Panoptic Sort: A Political Economy of Personal Information. Boulder: Westview Press.
- Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Cambridge: Polity Press.
- Simon, J. (2007). Governing Through Crime. Oxford: Oxford University Press.


