A movement is needed to ensure AI serves humanity and democracy

Five years ago, I witnessed Beijing’s crackdowns devastate my hometown of Hong Kong. We lost free speech, political rights and freedom from fear. Friends were jailed for years for peaceful advocacy, and I fled to Britain as a political refugee.

I learned a valuable lesson: When power is concentrated and unchecked, democracy dies. Power takes different forms – some exercised by governments within national remits, others by corporations whose influence expands across jurisdictions and scales infinitely. We should be concerned about both, especially the latter, as corporate power is more subtle, yet equally influential.

That’s why artificial intelligence’s development and its evolving narratives put me on edge. Arms-race rhetoric has dominated nearly all related discussions, and a race-to-the-bottom mentality treats concerns about power concentration as mere noise interfering with economic growth. We are stepping into a future where the safety of the most consequential technologies – artificial general intelligence (AGI) and artificial superintelligence – relies on the goodwill and self-regulation of a few leading companies. These companies will only amass more power as AI development accelerates dramatically once their models can automate AI research and development.

Democracy dies when power becomes over-concentrated. Before AI can outperform most humans at remote work, we need to raise concerns and push for political changes that enact protections and guardrails to safeguard democratic systems.

That’s why I believe AI should serve not only humanity but also democracy. With great power comes great responsibility – we need a Democracy AI movement to ensure democracy thrives in the age of AGI and that frontier AI companies are held accountable.

Artificial intelligence should serve not only humanity but also democracy

We need to act fast. Anthropic’s chief executive, Dario Amodei, has predicted that AI models may surpass human capabilities “in almost everything” within two to three years. Even the less optimistic experts mostly agree that AGI is inevitable.

For now, we are still operating under a patchwork of voluntary commitments and nascent laws, driven by growth-centric and arms-race narratives. AI’s remarkable capabilities can indeed unlock unimaginable economic growth and scientific discoveries, leading to a far more resource-abundant society. AI-driven breakthroughs in military technology have enabled Ukraine to fend off a much larger Russian army. In medicine, the latest technology has also helped develop diagnostic tools that save many lives.

But what if this growth and revenue, at the end of the day, serve only a minority of tech giants? What if companies are incentivized to engage in predatory behavior when governments fail to intervene because they lack access to advanced models? What if the AI systems deployed to billions are coded to serve their creators and stand ready to stage coups?

The sheer amount of influence amassed will be unimaginable: from hoarding resources and controlling media to manipulating public opinion and the information we receive. Continuing to let top AI companies release their products to the public without independent regulatory testing and safety certification is tantamount to willfully endangering democracy. The lack of transparency, accountability and public oversight regarding the most influential AI systems is the antithesis of what democratic societies stand for.

Public oversight and citizen participation are urgently needed

AI should not be a topic that only a small group of skilled technicians can discuss. Advanced AI development should not be sealed in a black box that few or no independent auditors can examine. Without public oversight and citizen participation in these leading companies’ decision-making processes, we risk allowing these tools to serve only commercial interests.

We need to build a movement to defend democracy against an accelerating concentration of power. As the most powerful stakeholders in the age of AGI, companies must respond to people’s concerns and hopes, while governments should establish greater statutory powers for broader oversight. A report produced by a coalition of AI research institutes found that over 80 percent of respondents agreed on leveraging AI to safeguard democracies and on strengthening legal frameworks to better regulate AI usage. Another study shows that most of the public believes governments or regulators should have a suite of safety powers.

Power needs to reach the people where they are. That’s how democracy thrives in the face of the biggest technological revolutions.

Trusted citizen representatives should be included on corporate boards to balance incentive structures. Frontier AI companies should install mechanisms to address people’s concerns, which can be collected through public petitions or partnerships with credible polling centers, and remain responsive to them. It should be mandatory to share model capabilities and specifications with multiple independent and governmental stakeholders for testing, to prevent potential power grabs and loss of control over these systems.

We also need to establish parameters and guiding principles to ensure that AI development serves democratic values. We should enact chip export controls for countries where censorship is mandated by state power, invest more in narrow and localized models to mitigate environmental impacts, pass regulations requiring safety testing before product rollout and actively require companies to work on minimizing system biases.

Values-based and rights-based language should be front and center when discussing AI safety and governance. We should increase data portability and interoperability on AI platforms to facilitate market competition, constantly align models’ behavior with the Universal Declaration of Human Rights and build remedial mechanisms for AI misuse. We should also ensure stringent data protection, compensate people for works extracted from them and invest more in machine interpretability research to better implement all necessary safety measures.

Democratic systems constantly face challenges. Remaining dormant does not lead to survival – engaging in vigorous discussion and learning through errors does. A New York Times op-ed by Yuval Harari, Tristan Harris and Aza Raskin on how humanity lost to the first AI – social media – has inspired me to raise alarms about our next battle. While social media made society more depressed, addicted and divided, the consequences of mismanaging AGI can be far worse.

The consequences of mismanaging artificial general intelligence can be dramatic

I am not a technology sceptic. I believe in technological advancement and the benefits it brings to humanity. But defending democracy is even more critical: We need to change how power is structured.

A more cautious approach does not mean losing to rivals like the People’s Republic of China. As Professor Victor Shih aptly points out, Beijing’s leadership is even more concerned about AGI’s emancipating power. China began devising AI “brakes” from the start because concerns about AI threatening regime stability loom large, and Beijing will likely drag its feet as the next AI breakthroughs approach. Coupled with chip export controls and other measures, this convinces me that holding leading AI companies accountable will not compromise our advantages.

Ample research has shown the dangers of letting our future depend on the goodwill of a few companies – it’s time for advocacy. Democratic activists, free speech advocates, campaigners for environmental and racial justice, AI safety researchers and many others should unite and raise our concerns with one voice.

Democracy AI can be the project and alliance that generates enough momentum for change. Progress will come from new narratives, cross-sector consensus and organized political pressure. When legislators are under pressure to act, we will have a safer and more democratic future.

This article was first published at asia.nikkei.com. Republished here with kind permission. The copyright remains with the author and/or the original publisher.

Nathan Law
Nathan Law is a former Hong Kong legislator and democracy activist, living in exile.