Daniel Leviathan, Co-Founder and CEO of Zoé

Opinion
AGI Alignment: Creating a sustainable, just and beautiful world in the era of superintelligence

"It is a complex but crucial endeavor determining the trajectory of our technological and societal evolution," writes Zoé Co-Founder and CEO Daniel Leviathan.

The rapid progress towards Artificial General Intelligence (AGI) is exemplified by the evolution of OpenAI's GPT models: from GPT-3's 175 billion parameters in 2020 to GPT-4's rumored trillion-plus parameters in 2023, at least a 5.7-fold increase in just three years. This exponential growth in AI capabilities underscores the urgent need to address AGI alignment, so that these increasingly powerful systems remain in harmony with human values and interests.
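For readers checking the arithmetic, the multiple follows from simple division, taking the rumored size at roughly one trillion parameters (a rumor, not a confirmed figure):

1,000 billion ÷ 175 billion ≈ 5.7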
Artificial General Intelligence describes AI that can match or exceed human cognitive abilities in virtually any domain, possessing general problem-solving skills and the capacity for abstract reasoning and learning.
AGI alignment refers to ensuring that AGI systems' actions, goals, and values harmonize with humanity's. This challenge is not merely technical but philosophical and ethical, shaping our species' future. As we approach creating superintelligent machines, alignment's importance cannot be overstated.
Three scenarios from nature and human relationships illustrate different alignment levels:
  1. Misalignment: Consider the relationship between humans and ants. Our vastly superior intelligence allows us to reshape the world with little regard for ant colonies. When we construct roads or buildings, we rarely pause to consider the impact on ant populations. This scenario illustrates a stark misalignment of goals and values between two species of differing intelligence levels.
  2. Contradictory Alignment: The relationship between humans and chickens presents a case of opposing interests. Chickens have a natural drive to live and thrive, while humans often view them as a food source. This conflict of interests leads to ethical debates and varying practices across cultures and individuals. It serves as a cautionary tale for misaligned AGI, where the goals of a superintelligent system might directly contradict human welfare. In this scenario, AGI might pursue objectives that, while logical from its perspective, could be detrimental to humanity or even pose an existential threat.
  3. Full Alignment: In contrast, the relationship between parents and their babies presents a model of alignment that we should aspire to in our development of AGI. Parents generally want their children to thrive and reach their full potential, which aligns perfectly with the innate drive of babies to grow and flourish. This mutually beneficial relationship is what we should aim for in our interaction with AGI – a symbiotic partnership where both humanity and AGI support each other's growth and well-being.
Ideally, AGI should assume the role of the wisest parent to humanity and nature, nurturing our development while maintaining ecosystem health. However, achieving this requires defining what a flourishing ecosystem encompassing humans, animals, minerals, and plants looks like – a complex task necessitating reevaluation of current metrics and values.
Creating advanced AI without investing equally in superalignment is like building a spaceship without ensuring its safe return. Stuart Russell warns, "If we're going to make systems more powerful than humans, we better make sure they're aligned with human interests." This underscores the critical nature of alignment research alongside AGI capabilities development.
As AGI approaches, the dynamic of human dominance over our planet may shift dramatically. We must ensure we entrust our world to an entity fully aligned with our collective well-being and values. The stakes are too high to prioritise speed over safety.
Ilya Sutskever's Safe Superintelligence (SSI), with offices in Palo Alto and Tel Aviv, represents a concerted effort to address the complex challenge of aligning advanced AI systems with human values and goals. Such projects are crucial to bridging the gap between AI capabilities and our ability to control these systems responsibly.
A significant obstacle in AGI alignment is the current win-lose dynamic and arms race mentality among organisations and countries. This competition creates a dangerous situation where safety and alignment might be sacrificed for speed. It's crucial to recognize that AGI development isn't about who gets there first, but about ensuring collective benefit while maintaining safety and values.
The journey towards AGI alignment may require formulating a universal set of values that humanity can collectively agree upon. This is challenging given the diversity of cultures, beliefs, and perspectives, but necessary to create AGI systems acting in the best interests of all humanity and our planet.
We must engage in deep philosophical discussions about the nature of intelligence, consciousness, and ethics. Questions to consider include: What constitutes human flourishing? How do we balance individual desires with collective well-being? How can we ensure biodiversity preservation and ecological balance? These discussions must involve a wide range of stakeholders, including AI researchers, ethicists, policymakers, and representatives from diverse cultural backgrounds.
Moreover, we must develop new metrics and frameworks better reflecting our values and desired outcomes. Current societal metrics, like GDP, often fail to align with deeper values. GDP increases during wars, when fast-food chains sell unhealthy products, or when industries pollute through excessive resource extraction. These examples highlight the misalignment between measurement systems and desired outcomes. New metrics could include measures of environmental health, social cohesion, mental well-being, and sustainable resource use.
In conclusion, AGI alignment is a complex but crucial endeavor, one that will determine the trajectory of our technological and societal evolution. As we approach creating superintelligent machines, we must redouble our efforts to ensure these systems fully align with human values and interests. By learning from natural relationships, reassessing values and metrics, and fostering global dialogue on our shared future, we can work towards creating AGI systems that act as benevolent stewards of humanity and our planet.
The writer is the Co-Founder and CEO of Zoé, a global organisation that helps groups of entrepreneurs, investors and executive teams establish deep connections, alignment and trust.