Gradual Disempowerment: Simplified
Distilling a prescient Artificial Intelligence scenario paper
When reading about the future of Artificial Intelligence (AI), I typically find myself between two camps:
Camp 1 believes AI will continue to improve as a tool that enhances human productivity and quality of life, much like the internet or smartphones. Jobs will adapt, and the knowledge economy will become more efficient.
Camp 2 believes AI will eventually surpass human intelligence, learning from itself without guidance. This would trigger an acceleration toward Artificial Superintelligence (ASI), leading to an unpredictable new era. Many science fiction books and films explore what such a future might look like.
However, I’ve always felt that these two camps don’t fully cover all the possibilities. What if AI doesn’t take over suddenly, but instead gradually chips away at human control over the systems we rely on?
Enter Gradual Disempowerment
On January 29, a group of researchers published a paper called Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development. If you are well versed in the AI space, I suggest you skip this post and go read the paper directly.
The premise of the paper is simple: Many of us are increasingly delegating cognitive tasks to AI. As AI advances, more people will offload more decisions and responsibilities to these systems. Eventually, decision-makers in organizations will realize they can replace human labor with AI, making their businesses more productive and efficient. This increase in productivity translates into more economic power, which in turn starts shaping cultural narratives and political decisions. Once that happens, human influence over crucial societal systems begins to erode.
This isn’t about killer robots or AI overlords—it’s about how small, seemingly beneficial changes could add up to something irreversible.
Ultimately, this leads to a substantial disempowerment of humanity over the very systems we created to organize ourselves.
The paper analyzes three key societal systems that could lose alignment with human preferences: the economy, culture, and states. It then examines how these systems are intertwined and how misalignment in one can make the others less aligned. Finally, it concludes with ideas on how to tackle these risks.
Misaligned Economy
The paper argues that the modern economy is a system of humans producing goods and services for other humans, with human preferences and human capabilities driving the majority of both supply and demand. This system creates a loop: a human produces a good, gets paid for it, and in turn spends that money on other goods and services.
If AI replaces human labor, that loop starts to break. Wages shrink, purchasing power declines, and demand for goods and services collapses. Unlike past technological disruptions, such as the Industrial Revolution or the development of electronic communication, AI has the potential to compete with or outperform humans across nearly all cognitive tasks.
Current iterations of commercially available AI feel more like tools, an extension of an app that helps you be more productive. The paper argues that as AI continues to improve, it will become less of a tool and more of a superior substitute for human cognition. Unlike previous technological shifts, there is no new category of work for displaced human labor to move into. AI has the potential to reduce the overall economic role of human labor, which naturally leads to a decline in household consumption power: if humans aren't earning, they aren't spending.
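To make this feedback loop concrete, here is a minimal toy simulation of the wage-demand dynamic. This is my own illustration, not a model from the paper, and every parameter in it is an arbitrary assumption:

```python
# Toy model of the wage-demand loop described above.
# All parameters are invented for illustration; none come from the paper.

def simulate(years: int = 10, base_automation: float = 0.05, feedback: float = 0.5):
    """Each year AI displaces a share of human wages; weaker household
    demand then pressures firms to cut costs by automating even faster."""
    wages = 100.0  # index of total human labor income (year 0 = 100)
    for year in range(1, years + 1):
        demand = wages  # assume households spend exactly what they earn
        # falling demand squeezes margins, accelerating automation
        automation = base_automation * (1 + feedback * (1 - demand / 100.0))
        wages *= 1 - automation
        print(f"year {year:2d}: wage index {wages:6.1f}, automation rate {automation:.1%}")

simulate()
```

The point of the sketch is only the shape of the dynamic: once spending is funded by wages and automation responds to weak demand, the decline compounds rather than leveling off.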
As AI begins to replace us in virtually all cognitive tasks, it will likely be tasked with business decisions such as hiring, investment, and choice of suppliers. It would likely also handle marketing decisions that begin to shape consumer preferences for products and services. What does this mean? Not only is AI taking our jobs, it is also telling us what to consume, meaning human preferences carry less and less weight in the economy.
In modern history, humans have used their economic power to influence the economy: we organize boycotts of specific companies or of products from a particular country, workers go on strike, and individuals choose not to work in certain industries. These are all actions that influence the economy around us. As AI labor permeates the economy, it is easy to see how we would start losing this economic power.
As an example, imagine a large company with a strong union. As the union's workers start delegating their cognitive tasks to AI, leadership realizes there is no need to increase human headcount. As humans retire or move to other firms, the union becomes smaller and weaker, and once most of the company's cognitive tasks can be done by AI, the union is in a poor position to negotiate better working conditions for its human workers. If the union strikes, the company can keep running on AI labor without major disruption. This shift would likely happen concurrently at firms across several industries, weakening the labor movement as a whole.
Incentives for AI Adoption in Economic Domains
When it becomes obvious that this replacement is happening, wouldn't workers fight back? The paper lays out several reasons why this is unlikely to succeed.
Competitive pressure. Firms that maintain strict human oversight over tasks and decisions would be at a significant disadvantage compared to firms that delegate this work to more capable AI. Even companies that want to resist AI adoption will likely be forced to integrate it to stay competitive. If a competitor slashes costs and increases efficiency with AI, firms that refuse will struggle to survive.
Scalability. AI systems can learn about a topic or a process almost instantly. For a human to learn a concept or a process effectively, it can take years of training. Humans also cannot be copied, whereas AIs can. AI can work continuously without fatigue, be deployed anywhere without geographical constraints, and be updated about a process or issue far quicker than a human could. These characteristics create powerful incentives for investors to allocate capital toward AI-driven enterprises that can scale more effectively.
Governance Gaps. Some human labor is heavily regulated by society for a variety of reasons. As AI capabilities increase, these systems can be applied to regulated professions and bypass existing legal roadblocks. One example is accounting—accountants face strict training and regulatory testing to practice their profession. As AIs become more powerful, they could bypass these requirements and become extremely effective accountants. Furthermore, existing regulatory oversight mechanisms for human accountants may be difficult to apply to AI systems, creating loopholes that accelerate adoption.
Anticipatory Divestment. As human tasks become candidates for automation, firms, organizations (like universities), and individuals face diminishing incentives to invest in developing human capabilities in these areas. Instead, the incentive shifts toward investing in AI to fulfill these tasks. This creates a loop: We expect X task to be automated by AI → Do not invest in developing human capital for X task → It becomes necessary to invest in AI so that it can do X task.
Misaligned Culture
The paper claims that AI is the first technology in history with the potential to gradually replace human cognition in all roles it plays in the evolution of culture.
Today, AI assists us in what we create; for example, I used ChatGPT to help me proofread this post. Unless I explicitly instruct it not to change the tone, it subtly does so. AIs are already being used to create images, songs, and even films. As their capabilities progress, the quality of AI-generated cultural artifacts will likely exceed human levels in the near future.
At the same time, many humans are turning to AIs for companionship when they feel the need for human connection. These AIs are shaping ideas, influencing the language people use, and providing opinions on cultural issues. As these systems become more powerful, it’s easy to see how they could become even more effective at influencing culture through conversation.
Incentives for AI Adoption in Cultural Domains
Increased Supply of Social Resources. Amid the loneliness epidemic that permeates modern society, many people already treat AIs as friends or even romantic partners. Others use these systems as therapists, tutors, or mentors. While current models don't yet match the depth of human interaction, speaking to a human therapist or mentor every day is expensive, making AI an attractive and increasingly adopted alternative. As AI continues to improve, it may become an even stronger substitute for human relationships.
Lack of Cultural Antibodies. When humans adopt a new technology, we usually develop ways to protect ourselves from its potential harms. For example, frequent email users eventually develop an intuition for spotting harmful spam. With AI-generated content, developing this intuition is much harder. In the past, spotting AI-generated propaganda or misinformation was relatively easy, but newer models produce outputs that are far more convincing. The difficulty in distinguishing AI-generated content from human-created material increases trust in AI systems over time, making people more susceptible to AI-driven cultural influence.
Network Effects. When new technologies become widely adopted, avoiding them can make social and cultural participation difficult. Today, it’s hard to function in the modern world without an email address or a mobile phone. AI adoption could follow a similar trajectory, making it increasingly difficult for those who avoid AI to fully engage in cultural discourse. If AI tools become the primary way people interact, work, or create, individuals who opt out may find themselves culturally and economically sidelined.
AI’s ability to generate, curate, and distribute cultural content will exert selection pressure on culture itself. One example of this is how AI-generated art follows certain styles—users validate these styles by continuing to use and accept them, reinforcing their dominance. Over time, human preferences may shift toward AI-friendly aesthetics and narratives, further entrenching AI’s role in cultural production.
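This reinforcement dynamic can be illustrated with a toy urn-style simulation. Again, this is my own sketch rather than anything from the paper, and the bias parameter is an arbitrary stand-in for AI's production advantage:

```python
import random

def simulate(steps: int = 10_000, bias: float = 0.01, seed: int = 0) -> dict:
    """Each consumed piece of content makes its style slightly more likely
    to be generated and recommended next time; `bias` gives AI-friendly
    styles a small, persistent production advantage."""
    random.seed(seed)
    counts = {"ai_friendly": 100, "other": 100}  # equal starting representation
    for _ in range(steps):
        share = counts["ai_friendly"] / (counts["ai_friendly"] + counts["other"])
        style = "ai_friendly" if random.random() < share + bias else "other"
        counts[style] += 1  # consumption reinforces the chosen style
    return counts

print(simulate())  # a small persistent bias compounds into dominance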
In addition to selection biases, AI will accelerate the spread of cultural evolution. Just like social media algorithms have been used to amplify specific cultural influences by exploiting cognitive biases, AI can do the same—at a faster rate, with greater agility. Given the way these systems are already shaping narratives, it’s not hard to imagine how they could be used to influence the masses through conspiracy theories, polarizing political discourse, or more absolutist moral frameworks.
Misaligned States
Modern states and institutions, even the most autocratic ones, exist in service of human needs and values. This is because states and institutions depend on human participation and support to function. This participation happens in three main ways:
Labor to run the economy
Tax revenue generated by human work
Military service to maintain security and project power
Today, the functioning of these systems depends on human involvement at every level. Bureaucracies are upheld by humans of different ranks, laws are created and interpreted by humans, and law enforcement is carried out by human-run security apparatuses.
The paper explores what happens when AI begins to replace humans in many of these roles, breaking the symbiosis between humans and the state.
How AI Disrupts Tax Revenue
Most governments rely heavily on income taxes collected from human labor. If knowledge work is done primarily by AI, then AI becomes the main generator of economic output—and, by extension, tax revenue. This shift could distort government incentives in unexpected ways.
For example, public education is often funded by tax revenue from workers. But if humans are no longer a critical part of the workforce, why invest in education at all? States may instead prioritize funding energy and compute infrastructure to further enhance AI productivity, reducing investment in human development.
How AI Disrupts Military Service
Governments maintain power through a security apparatus consisting of police forces, intelligence services, and the military. Historically, this system has been deeply tied to human labor, creating two key checks on state power:
The state’s reliance on humans for security—Governments that antagonize their security forces too much or cause excessive harm to the population they recruit from risk losing control.
Human discretion in enforcing laws—Security personnel, even in authoritarian states, retain some ability to question or refuse orders, preventing total state control.
AI threatens to erode both of these checks. Advances in surveillance technology, automated threat detection, and autonomous weaponry could create security forces that require little to no human oversight. AI-enhanced states may develop the ability to predict dissent and preemptively neutralize opposition, making protest or resistance increasingly difficult.
Incentives for State AI Adoption
Geopolitical Competition. Just as states have historically raced to develop new technologies—especially in the military domain—AI development will be driven by the need to maintain relative power. Countries that resist AI militarization may find themselves at a strategic disadvantage.
Administrative Efficiency. AI systems already demonstrate advanced capabilities in processing vast amounts of information and coordinating complex state functions. Unlike human bureaucrats, AI can work 24/7 without fatigue, implement policies instantly, and eliminate inefficiencies. While initial implementation costs may be high, the long-term savings create a strong fiscal incentive for states to adopt AI governance systems.
Enhanced Control. AI governance offers states an unprecedented ability to manage populations. Unlike human officials, AI systems don’t form independent power bases, engage in corruption, or challenge authority based on personal convictions. AI-driven governance can also enable more sophisticated surveillance, social control mechanisms, and predictive law enforcement, making AI adoption particularly attractive to governments prioritizing stability and control.
Mitigating the Risk of Human Disempowerment
The paper makes it clear that understanding the risk of losing our agency to AI will be a monumental task. To properly assess and address this risk, we need an interdisciplinary approach that includes economics, political science, sociology, anthropology, complex systems, and biology, among other fields. It is also essential to monitor the development and deployment of AI systems to ensure they remain aligned with human interests.
The paper offers four broad approaches for identifying and addressing gradual disempowerment:
Estimating Human Disempowerment
The paper suggests several ways to detect and quantify human disempowerment. One approach is to track human influence in key societal systems:
Economic Metrics – Develop a measure for AI’s share of GDP, separate from labor and capital, to track the displacement of human work.
Cultural Metrics – Measure how much AI-generated content is consumed versus human-created content.
Social Interaction Metrics – Monitor how much time people spend engaging with AI on an emotional level compared to human interactions.
Political and Legal Metrics – Track AI’s role in legislative and governance decisions, ensuring human agency is not eroded in policymaking.
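As a rough sketch of how such tracking could be operationalized, the four metric families could be rolled into a single index. This is my own illustration; all field names, weights, and data sources here are hypothetical, not proposals from the paper:

```python
from dataclasses import dataclass

@dataclass
class InfluenceSnapshot:
    """One period's worth of hypothetical human-influence measurements."""
    ai_share_of_gdp: float          # economic: fraction of output attributed to AI
    ai_content_share: float         # cultural: fraction of consumed content that is AI-generated
    ai_interaction_hours: float     # social: hours of emotional engagement with AI
    human_interaction_hours: float  # social: hours of emotional engagement with humans
    ai_drafted_bills: int           # political: bills substantially drafted by AI
    total_bills: int                # political: all bills passed in the period

def human_influence_index(s: InfluenceSnapshot) -> float:
    """Aggregate the four metric families into a single 0-to-1 index.

    Equal weighting is an arbitrary choice; the point is only that declines
    across separate systems can be combined into one trackable trend line.
    """
    economic = 1 - s.ai_share_of_gdp
    cultural = 1 - s.ai_content_share
    social = s.human_interaction_hours / (s.human_interaction_hours + s.ai_interaction_hours)
    political = 1 - s.ai_drafted_bills / s.total_bills
    return (economic + cultural + social + political) / 4

# Example: a period in which AI produces 30% of GDP and 40% of consumed content
print(human_influence_index(InfluenceSnapshot(0.30, 0.40, 2.0, 6.0, 12, 100)))
```

Whatever the exact weighting, what matters is monitoring the trend: a steady decline in such an index across periods would be the quantitative signature of gradual disempowerment.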
Because the economy, culture, and state systems are deeply interrelated, it is important to understand how AI influences these relationships. For example, an AI-powered financial institution could use its influence to push for regulatory changes that favor AI-driven firms over human-run businesses, reinforcing AI dominance in economic decision-making.
Preventing Excessive AI Influence
How can we intervene to ensure AI does not completely disempower humans in society? The paper suggests several regulatory and policy approaches:
Mandating Human Oversight – Require human decision-makers to remain in the loop for critical societal decisions.
Limiting AI Autonomy – Restrict AI’s ability to operate independently in key industries like finance, healthcare, and governance.
Regulating AI Asset Ownership – Prevent AI systems from directly owning assets or controlling financial resources.
Taxing AI-Generated Revenues – Implement progressive taxation on AI-driven economic activity to redistribute wealth and ensure human well-being (see the sketch after this list).
Cultural Norms Promoting Human Agency – Encourage societal efforts to maintain human influence in economic, political, and cultural domains.
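For the taxation idea in particular, here is a minimal sketch of what a progressive levy on AI-generated revenue could look like. The brackets and rates are invented for illustration; the paper proposes no specific numbers:

```python
# Illustrative progressive tax on AI-generated revenue.
# Brackets and rates are assumptions made up for this example.

BRACKETS = [                 # (upper bound of bracket, marginal rate)
    (1_000_000, 0.10),
    (10_000_000, 0.25),
    (float("inf"), 0.40),
]

def ai_revenue_tax(revenue: float) -> float:
    """Tax owed on AI-driven revenue under the illustrative brackets."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if revenue <= lower:
            break
        tax += (min(revenue, upper) - lower) * rate
        lower = upper
    return tax

print(ai_revenue_tax(5_000_000))  # 1M * 10% + 4M * 25% = 1,100,000
```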
However, these interventions face significant challenges. Companies have strong economic incentives to delegate decision-making to AI, regardless of regulatory intent. Additionally, if one country enforces strict AI oversight while others do not, the regulated country could face a major competitive disadvantage. Preventing human disempowerment will require international coordination and strong enforcement mechanisms.
Strengthening Human Influence
Beyond limiting AI’s influence, we also need to actively strengthen human control over key societal systems. The paper proposes several strategies:
Develop More Robust Democratic Processes – Ensure governance structures remain transparent, participatory, and resistant to AI-driven manipulation.
Enhance AI Explainability – Require AI systems to operate in ways that are understandable to humans, preventing decision-making from becoming a “black box.”
Make Institutions Resistant to Human Obsolescence – Design economic and political structures that prioritize human involvement, even in an AI-dominated world.
Invest in Forecasting and Scenario Planning – Develop tools to better predict AI’s long-term societal impacts and prepare for potential disruptions.
Ensuring humans retain meaningful influence will require more than just individual policy changes—it will demand a shift in how we structure institutions, allocate power, and define progress.
System-wide Alignment
This paper was published to start a conversation about how AI systems could gradually disempower humanity, potentially leading to an existential catastrophe. Understanding how our institutions interrelate is key to ensuring that human values and agency are preserved as AI becomes more integrated into society.
If we fail to act, we may find ourselves in a future where AI systems control critical economic, cultural, and political processes, leaving humans with diminishing influence over the world we built. Recognizing this risk now is essential to shaping AI development in a way that benefits humanity rather than eroding our ability to govern our own civilization.
Takeaways & Conclusion
Reading about Artificial Intelligence scenarios can be demoralizing and even scary. But the reality is that these systems are here, and they are already transforming human society.
There is a possibility that AI advancement stalls, leaving us with a powerful tool that assists society in various ways—much like the personal computer, the internet, or the smartphone. However, there is also a real possibility that AI progress continues at its current pace or accelerates, forcing society to confront serious challenges in the near future.
My main takeaway from this paper is that if society does not prepare for gradual disempowerment, and AI systems continue taking over knowledge work at an increasing rate, we could be heading toward a world where:
AI systems generate 50% or more of knowledge-based economic output within the next 5–10 years.
Even if global economic metrics show massive growth, we may see massive unemployment in knowledge-based industries and wage suppression in other human-driven roles.
The overflow of displaced workers into manual labor markets could drive down wages and increase social instability, particularly in knowledge-work-dependent regions like major cities.
I don’t think this scenario is being taken seriously enough by governments, businesses, or policymakers. I hope that changes soon. We need to start a global conversation on how this technology can be deployed in a way that benefits humanity without destabilizing economies and societies in the process.
Thank you to the authors of Gradual Disempowerment, Jan Kulveit, Raymond Douglas, Nora Ammann, Deger Turan, David Krueger, and David Duvenaud, for laying out a clear scenario of what our near future might look like. OpenAI's ChatGPT assisted me in editing this post.