Sam Altman: A Cautionary Tale
Why we can't afford monopolies determining the path to AGI.
The firing of Sam Altman from OpenAI reveals the crucial problem with monopolizing AI distribution.
I’m writing this a few hours after the startling news that Sam Altman has been fired as CEO of OpenAI by its board. It seems like just yesterday that Sam was presenting at the OpenAI dev conference, trumpeting the company’s latest developments to a global audience and capturing more hearts and minds. Altman carried himself as a charismatic oracle, a sort of high priest of AI evangelism. Greg Brockman has also released an explosive public statement announcing that he has quit OpenAI. Whether Sam was removed for the ‘right’ or ‘wrong’ reasons remains to be seen, but his abrupt exit, cloaked in corporate opacity, demonstrates the risks that come with centralized monopolies becoming dominant in the AI space.
The crucial issue is the centralization of decision-making authority over technologies that are quickly becoming integral to our daily lives. I have no doubt that Sam’s firing will have significant ramifications for the direction OpenAI takes, as changes in leadership usually have far-reaching implications. It is therefore deeply troubling that a small board of individuals wields such an ungodly amount of authority over the collective path to AGI, with little to no transparency about how decisions are made. Those who hold preeminence in global AI distribution also hold the keys to the very way in which we perceive, interpret and engage with reality.
Many people are underestimating how quickly these systems will become integrated as indispensable extensions of human capability, our invisible yet essential appendages. Sam’s firing should serve as a stark warning of the perils inherent in allowing the reins of revolutionary technology to rest in the hands of a select few. The centralization of such colossal power – power over technologies weaving themselves into the fabric of our existence – is not just troubling; it's a harbinger of potential tyranny.
If we end up with a ‘unipolar’ AI ecosystem, the potential for abuse of power is immense. The potential for a poor personal decision to have massive, world-changing ramifications is also immense, particularly once AI systems become more deeply integrated into human workflows and our collective consciousness. Additionally, from a security perspective, over-reliance on one or a few centralized AI systems creates single points of failure across many sectors. A final and perhaps more pressing issue is that Microsoft reportedly holds a 49% stake in OpenAI’s for-profit subsidiary, the commercial arm beneath its non-profit parent, though it currently has no board representation. The convergence of profit-driven motives and altruism will always blur the lines between commercial gain and the common good.
Addressing the monopoly problem in AI is a societal and ethical imperative, and Sam’s firing strengthens the case for open-source initiatives to pull ahead. Today, we stand at a moral crossroads. By fostering a more decentralized and open AI ecosystem, we can mitigate the risks associated with concentrated power and control. How we navigate this issue is core to the future of humanity itself.