Person with a subtle AI circuitry overlay, symbolizing AI's pervasive influence on human life and perception. This blog explores the fight for control in the age of AI.

You Are Already Living in an AI-Controlled World Without Realizing It

AI Wars: A Lesson from Science Fiction

Watching Battlestar Galactica led me to a profound realization: a potential war between humans and machines wouldn’t be a straightforward “us vs. them” battle. The series portrays humanity’s conflict with the Cylons, AI-driven machines with divergent goals. While some Cylons aim to destroy humans, others develop independent thought and even ally with them. This complexity underscores a critical truth: AI, like humanity, will never be a unified entity.

As Kate Crawford explores in Atlas of AI, these systems are not neutral. They are shaped by human biases, interests, and power dynamics. Since AI is created by people—who are themselves divided by ideology, politics, and self-interest—it’s unlikely machines would ever unite under a single cause, let alone one as extreme as humanity’s eradication. If humans struggle to find common ground, why would the AI we build be any different?

The Hidden Hand Guiding AI: Who Decides Its Limits?

AI is no longer a futuristic concept—it’s actively shaping global conversations, policies, and how we interact with information. Every time we engage with AI-driven tools, they subtly influence our perceptions, steering discussions in specific directions while restricting others. But who sets these boundaries, and to what end?

The rules governing AI’s capabilities determine the flow of information consumed by millions. This isn’t just a technical issue—it’s a matter of power, control, and the future of free expression.

For instance, DeepSeek, a Chinese-developed chatbot, adheres strictly to government guidelines, avoiding topics like China’s political system and offering scripted responses on Taiwan’s sovereignty. Similarly, U.S.-based chatbots like Meta AI and Google’s Gemini avoid sensitive subjects such as conspiracy theories, hate speech, and violent extremism.

While these restrictions are often justified as safeguards against harm, they raise a pressing question: who gets to decide what is false, dangerous, or unacceptable? And what happens when those in power use AI’s boundaries to manipulate public opinion, filter reality, or advance specific agendas?

The Fight for AI Neutrality: A Clash of Ideologies

AI is not an impartial tool. Governments, corporations, and research institutions mold it to align with their values and priorities, turning it into a reflection of human division rather than an objective force.

OpenAI’s evolution illustrates this tension. Founded to democratize AI and prevent its concentration in a few hands, the organization later restricted access to its most advanced models, citing concerns over misuse and competition. What began as a mission to serve the public interest became a tightly controlled, profit-driven enterprise.

This shift has sparked debate within the AI community. Critics argue that AI should remain accessible to all, while others prioritize safeguards against misuse. Even Elon Musk, a co-founder of OpenAI, has accused the organization of abandoning its original vision. This ideological struggle mirrors the central conflict in Battlestar Galactica: should AI serve centralized authority, or should it be shaped by diverse voices?

The Reality of AI Control: Shaping the Future

The control of AI is a pivotal issue. The boundaries defining acceptable AI behavior today will determine the future of free speech, knowledge access, and digital autonomy.

As a tool of power, AI cannot be outsourced to foreign systems. Nations must develop their own capabilities to secure digital sovereignty and broaden access. In the next global conflict, those without AI won’t just lag behind—they’ll be at the mercy of those who control it.

Democratizing AI is gaining momentum. The AI Action Summit, held in Paris in February 2025, brought experts and nations together to address this urgent need. While progress is being made, it’s still in its early stages.

Initiatives like decentralized AI development, AI literacy programs, inclusive AI design, and transparent AI governance are underway. These efforts aim to empower individuals, promote critical thinking, and ensure AI aligns with human values.

Conclusion

AI is more than technology—it’s a product of human biases and power structures. As Atlas of AI highlights, these systems reinforce existing inequalities, influencing how information is controlled and consumed. This aligns with findings from the NSF report on algorithmic bias, which reveals how biases in data and societal structures seep into AI, leading to unjust outcomes.

The challenge isn’t merely technical; it’s about who controls AI and whose interests it serves. As AI becomes central to global power, the fight for ethical governance, transparency, and digital autonomy will define not just the future of technology, but the very fabric of free thought and societal equity.

Further reading:

Can AI Be Taught Morality, or Will It Write Its Own Rules?

