The AI Surge: Should Humans Step Aside?

For many, the world of Artificial Intelligence feels like a distant frontier: an alien concept shrouded in mystery and beyond our reach. Yet the stark reality is that AI is advancing at breakneck speed, regardless of whether we choose to engage with it. Progress surges forward, fueled by relentless investment and innovation. The pressing question is: will we remain passive observers on the sidelines, will we boldly step up to shape the future alongside this transformative force, or will we relinquish the torch of our future to an entity we barely understand?

As we enhance Generative AI and edge closer to a world dominated by Artificial General Intelligence (AGI), where countless devices are intricately interconnected, concerns about an AI takeover and the security challenges it poses are moving to the center of mainstream discourse. The more advanced AI becomes, the more urgent it is to confront the risks it introduces, ensuring that our innovation doesn’t outstrip our ability to safeguard society.

In a captivating discussion between podcaster Dwarkesh Patel and AI safety researcher Paul Christiano of the National Institute of Standards and Technology, a thought-provoking proposal emerged: a seamless transition of power to an AGI that adopts the same beliefs and values we hold dear. While many believe an AI takeover is unlikely until the physical world is fully intertwined with machines, it is increasingly evident that AI will soon possess both the intellect to tackle complex problems and the connectivity to govern our world. Although the exact timeline remains uncertain, one critical question looms large: how will we respond as that day approaches?

Elon Musk has emphatically stated that when raising a “genius child,” the paramount task for parents is to instill “good values.” This approach places ethical character development above the mere nurturing of intellectual prowess, ensuring that the child can discern “good” from “bad” choices with at least 99% accuracy, while we fervently hope they never face that daunting 1% decision. A compelling strategy for cultivating a moral AGI system is to distribute its development across a diverse array of contributors rather than concentrating it in the hands of a single actor. By fostering collective development, we can shape AGI with predominantly “good values” and dramatically diminish the influence of “bad values,” ultimately nurturing a more compassionate and principled AI.

However, morality itself is a labyrinthine puzzle, ever-shifting and evolving. Two primary ethical frameworks often come into play in this discourse: deontology and consequentialism. Deontology asserts that moral actions must adhere to specific rules, duties, and obligations, while consequentialism assesses actions based on their outcomes. According to consequentialism, a morally right action is one that maximizes benefit or happiness for the greatest number of people. When exploring the morality of Artificial Intelligence, it’s essential to understand not only these frameworks but also the historical and modern context that informs them. Our past and present hold critical insights that can shape the moral foundation of AGI systems.
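To make this contrast concrete, consider a minimal, purely illustrative Python sketch. The actions, rules, and utility scores below are invented for this example; they come from neither the podcast nor any real system. A deontological agent first discards any action that violates a rule, while a consequentialist agent simply picks whichever action maximizes aggregate utility:

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    violates_rule: bool    # e.g., breaks a "do not deceive" duty (deontology)
    total_utility: float   # aggregate benefit across everyone affected

choices = [
    Action("tell a hard truth", violates_rule=False, total_utility=3.0),
    Action("tell a comforting lie", violates_rule=True, total_utility=5.0),
]

def deontological_choice(actions):
    # Permit only actions that break no rule, regardless of outcome.
    permitted = [a for a in actions if not a.violates_rule]
    return max(permitted, key=lambda a: a.total_utility) if permitted else None

def consequentialist_choice(actions):
    # Pick whatever maximizes total benefit, rules notwithstanding.
    return max(actions, key=lambda a: a.total_utility)

print(deontological_choice(choices).name)     # -> tell a hard truth
print(consequentialist_choice(choices).name)  # -> tell a comforting lie

Faced with identical options, the two agents disagree: the rule-bound agent tells the hard truth, while the utility-maximizer tells the comforting lie. That divergence is precisely why the moral framework an AGI inherits matters so much.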

Learning from the Past: Historical Insights into AI Morality

To develop morally sound AGI systems, it is essential to grasp the sources from which these systems will draw their inputs. They will inevitably be shaped by the long, intricate history of Homo sapiens, a history riddled with moral contradictions. For example, as Yuval Noah Harari notes, compelling evidence suggests that Homo sapiens may have driven their closest human relatives, the Neanderthals, to extinction. Another glaring example is slavery: prior to 1865, it was not only legal but widespread in the U.S. Even esteemed philosophers like Aristotle openly defended slavery as a natural institution. Given this morally checkered past, how can we expect AGI systems to uphold a consistent moral framework?

Profit vs. Compassion: The Ethical Dilemmas of AI in Industrial Contexts

Another pivotal factor that AGI systems may weigh in shaping their approach to human morality is our treatment of other species, including intelligent machines and animals. As Paul Christiano astutely points out, if we engineer AI solely for profit, with no emotional connection to it, why would a highly intelligent AI treat us any differently? When advanced AI observes industrial practices, such as the exploitation of chickens, cows, and pigs for profit, it may find little moral integrity worth emulating. The average chicken, for instance, is confined to a mere 8×10 inch space, while cows are kept in a perpetual state of pregnancy and their male calves are frequently sent to slaughter. This stark lack of compassion raises profound questions about the moral compass that AGI systems might adopt.

As we stand on the precipice of a future dominated by AGI, society is undeniably unready for the monumental power shift it could entail. The role of individuals in such a super-intelligent world remains shrouded in uncertainty. Will we retreat, prioritizing family and creative pursuits? Will AGI perceive us as a threat, treating us the way we have treated other species? What moral code will steer AI-based systems? And will human capabilities stay ahead of the curve, enhanced by Brain-Computer Interface (BCI) systems like Neuralink? These pressing questions illuminate the uncertainties that surround our place in an AGI-driven world.

What do you believe is the most pressing challenge in the AI handover? Share your thoughts and join the conversation about our technological future.

By Ankita Pujar & Dr. Robin Garg

