In 1967, the novel Lord of Light by Roger Zelazny was published. In it, a crew of space travelers arrive at a new planet. They are from Earth.
Earth has long been destroyed—perhaps from war or perhaps from being completely depleted of its resources. It isn’t totally clear. This new planet is now their home.
On it, they begin a new society. Over time, as the planet becomes more populated, the original colonists devise a caste system of upper- and lower-class individuals, putting themselves at the top. In order to maintain their power and roles, the original colonists use and hoard advanced technologies, preventing the rest of the civilization from knowing about or accessing them.
Technological advancement is completely restricted within the rest of the population. As a result, from the perspective of the lower-class citizens, the original colonists take on the appearances and roles of deities. Their powers and abilities appear God-like.
Of course, however, this is not the truth. It is not divinity. It is technology.
Eventually, one individual sets out to rebel against the godlike regime and reveal the truth, so that everyone can have access to what the so-called gods have and live better lives. In the book, this individual is part of a group known as the Accelerationists. *** Right now, in the real world, technology is advancing at rates that cannot be fathomed by the human mind.
Generations so close to each other have never lived in realities so far apart. What used to be the concern of centuries or decades is now the concern of years. Consider that it took humankind about 400 years to go from the invention of glasses to the invention of the microscope, but only about 60 years (roughly seven times fewer) to go from the discovery of the structure of DNA to the ability to literally edit human DNA.
Consider the fact that the first electronic programmable computer was invented in the mid-1940s, and, around that time, the number of transistors in computing devices (which directly affects their processing power and speed) was around one to five. By the 1970s, that number had risen to roughly 4,000 transistors on a microchip. By the 90s, roughly a million.
By the 2000s, roughly 40 million. By the 2020s, well over 40 billion—soon approaching hundreds of billions and beyond. This all might just sound like numbers, but it is in fact the power of a god-like force building at a compounding rate that has long since departed from human comprehension and control.
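The compounding growth behind those numbers can be sketched as a simple doubling model, a rough Moore's-law approximation. The specific starting point (about 2,300 transistors on the Intel 4004 in 1971) and the two-year doubling period are illustrative assumptions, not figures from the text:

```python
# A rough sketch of compounding (Moore's-law-style) growth.
# Assumptions, for illustration only: ~2,300 transistors on a chip
# in 1971, doubling roughly every two years.

def transistor_estimate(year, base_year=1971, base_count=2300, doubling_years=2):
    """Estimate transistors per chip under a fixed doubling period."""
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

for year in (1971, 1990, 2000, 2020):
    print(year, f"{transistor_estimate(year):,.0f}")
```

Even from that small starting count, a fixed doubling period lands in the tens of billions within five decades, which is why the growth so quickly outruns intuition.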
We all seemingly now sit in the technocapitalist vehicle that we have built over the course of modern history. And it is rapidly speeding up. There appears to be no obvious way out of this vehicle.
Far more unsettling, there appears to be no obvious consensus about its direction. Where is it going? Where does it end?
Does it end? Will it be good for us? Do we even have a say?
In 2008, Benjamin Noys, professor of critical theory at the University of Chichester, wrote a blogpost. In it, he defined a term that would go on to refer to an incredibly dark, strange, and complex political philosophy that has become increasingly popular throughout the 2010s and 2020s. The term was accelerationism (which was inspired by the fictional group in Zelazny’s novel, Lord of Light).
Fundamentally, accelerationism is the belief that it is necessary, and even good, to increase the speed of capitalistic and technological advancement, and to intensify the problems that result, up to the point of some form of societal collapse or significant transfiguration. For an accelerationist, the dizzying, uncontrollable speed of technology and the societal challenges which we are increasingly experiencing in the modern world should be leaned into and further accelerated. The current system is doomed to fail, so we should simply cause it to fail sooner.
Why we should do this and why this is a good thing largely depends on who you ask and which variant of accelerationism they subscribe to. The origins of accelerationism are widely attributed to the English philosopher and professor Nick Land. In 1995, at the University of Warwick, Land, along with several other individuals, formed what became known as the Cybernetic Culture Research Unit, or the CCRU.
The CCRU was an experimental, rebel academic group. One of its main goals, according to one of its key members, Sadie Plant, was "to undermine the cheery utopianism of the 90s … We wanted a more open, convoluted, complicated world, not a shiny new order." And on that goal, they largely delivered. Land and the other members of the CCRU wove together ideas from philosophy, science fiction, cybernetics, and the occult to create theories surrounding culture, technology, psychology, reality, and, most importantly, how these domains interact in subversive feedback loops.
Over time, the CCRU would devolve more and more into occult and even satanic tendencies, which can likely be attributed, at least in part, to the heavy substance use and mental breakdowns amongst its members, particularly Land. Soon, the group, as an organized unit, would dwindle to an end. But the ideas it developed and disseminated would survive and go on to evolve into and inspire various forms of accelerationism.
Accelerationism is unique for several reasons. One particularly notable reason is that it can be found on both the extreme right and the extreme left of the political spectrum. Two worldviews that could hardly be further apart can seemingly meet, with some level of agreement, on this idea.
Of course, as one might imagine, there are substantial differences as to why these opposing groups embrace the philosophy. The version of accelerationism typically associated with the left, which was largely inspired by the CCRU member Mark Fisher, is the view that accelerating technology and capitalism to near or complete breaking points is a good thing because there are better alternative versions of the world that are only possible on the other side of this world’s collapse. The term collapse here does not necessarily need to mean a total, apocalyptic-like destruction—though for some individuals, it might—but rather, an end to the current capitalist world as we know it, resulting in a post-capitalist paradigm.
Some individuals on this side of the philosophy hold the belief that technologies like AI and automation will ultimately take tedious work off the human plate, expand equality and freedom for all, and increase the overall quality of the human experience. Technology, in this view, is essentially the capitalist vehicle needed to reach a post-capitalist society. We are on a long, hard road but we are heading for a favorable destination.
We should therefore accelerate toward it, even if this means we might violently sideswipe some things along the way. The sooner we arrive, the sooner things get better. Individuals on the right side of the philosophy, like Land, don’t believe in this post-capitalist utopian vision.
For them, things are much, much bleaker. “The notion that self-propelling technology is separable from capitalism is a deep theoretical error,” said Land. Instead, these individuals believe that through the rapid development of capitalistic and technological forces, the entire global system will overcome its limitations and achieve its full potential, which will ultimately result in a technological consumption (or destruction) of humanity and perhaps the world as a whole—a dystopian-looking scenario with transhumanist or post-humanist rulers, massive inequality, and corporatized control.
All of this, however, is important and good for the natural order and development of existence. At the very least, it is inevitable. For some of these individuals, the point of accelerating it all is more or less just to get the whole thing over with.
That is, to cause the inevitable self-annihilating outcome to occur as soon as possible. The vehicle we are in is not headed for a favorable destination, but rather, a wall. We are traveling at fatal speeds, and we cannot stop, get out, or change directions.
We can see the wall, and so, by accelerating, we cause the crash to happen sooner and spare ourselves the dread and chaos that would otherwise form, persist, and increase in anticipation of the crash. For some, perhaps the anticipation of pain is worse than the pain itself. For all practical purposes, accelerationism is extremely obscure, with no meaningful, singular vision for what the world will be, should be, or why.
The entire concept is somewhere between philosophy, gospel, science fiction, occult theory, and a potentially self-fulfilling doomsday prophecy. But perhaps what's equally if not more interesting is what accelerationism seems to point to about humanity's current state and sensibilities in general. In many ways, at this point in history, it's much easier to imagine a world without humanity at all than to imagine a world that isn't fully intertwined with capitalism and technological advancement.
“[There is a] widespread sense that not only is capitalism the only viable political and economic system, but also that it is now impossible even to imagine a coherent alternative to it,” wrote Mark Fisher. We are seemingly locked in this technocapitalist vehicle, destined to go wherever it goes. But that’s just it.
Where is it going? And why is it going? There appears to be no one in the driver’s seat.
There is no clear navigation system or central orientation. There is just this ceaseless, aimless, and inescapable forward motion. A popular sentiment amongst many powerful modern individuals, like tech-company founders, CEOs, and investors as well as technocrats and futurists more broadly, is the belief that progress is ultimately a natural and good thing, but it is not really something anyone can control.
According to the mathematical physicist, quantum computing researcher, and tech company founder Guillaume Verdon, “The goal is for the human technocapital mimetic machine to become self-aware and to hyperstitiously engineer its own growth … To lean into the natural tendencies of the system to adapt for its own growth.” If this is true, it raises the following questions: If it is not us controlling it, what is? What is determining its hyperstitions, success, direction, and growth?
Is it truly purely in line with the metaphysical and natural order of the world? How do we know, and how do we know this is a good thing? With this line of thinking increasingly gaining popularity and control over the modern world, technology is becoming our new God.
It is all knowing, all powerful, and all good. It is the Abrahamic God that must be worshiped despite consequence or reason. It will create heaven and hell; it will know the wickedness and goodness of the world; it will cause the floods; and it will usher in the apocalypse to make way for new order and reorientation.
But all will be good; all will be part of its divine plan. Of course, there are very good, rational reasons to welcome some degree of significant technological and capitalistic advancement. In many cases, the results of new technologies can be incredible, fully deserving of the awe and reverence they receive.
Technologies have saved and will continue to save and improve the lives of many people: in health and medicine, connectivity, communication, accessibility, creative expression, safety, and so on. But the problem remains: particularly at the scope and scale of the modern world, technology cannot be narrowly controlled or focused on any particular area or application. An AI that can easily diagnose ailments and prescribe medical treatments can also easily diagnose an individual’s psychology and execute effective manipulation strategies.
AI and automation that can execute real-time stock trading, assist with artistic and entrepreneurial creation, or carry out shipping and fulfillment tasks can also be used in warfare, create and distribute art and services indiscernible from what is human-made, or maliciously alter economic and political systems. Technology is not a hose used to put out specific fires; it is always the opening of a floodgate. It will put out fires, but it will also destroy many things.
The truth is, no one knows whether something terrible or great is going to happen, or how terrible or how great it will be. But our goal, obviously, should not be to accelerate bad things, or to knowingly permit them wherever we can prevent them. We should not hinge our hopes on the romantic ideal that letting things unfold haphazardly will ultimately lead to a net good.
Part of what makes humanity as powerful as it is, is the ability to consider what is happening, to self-reflect, and to self-correct. Perhaps we are not as helpless in the vehicle of technology and capitalism as we might think. Perhaps we can apply the brakes in certain moments.
Perhaps we can steer in different directions on occasion. And perhaps we can consult each other in increasingly meaningful, effective ways to develop a more synergetic orientation and navigation system. Of course, this is all much, much easier said than done, and perhaps, on some level, it is naïve.
But if we have the power to create what we create, why not also the power to wield what we create in more careful ways? Our hope for our relationship and future with technology, like all things, starts with a better understanding and relationship with ourselves. We must deeply recognize our fallibility and ignorance—even with, and perhaps especially with, the augmentation of technology.
Technology merely inherits our fallibility and distributes it at scale. We must move through this world with humility, thoughtfulness, and care, recognizing that we often don’t know what the world will be like or should be like; that we often don’t know what the world needs or what will make it a so-called better place. We don’t yet even properly know what we need as individuals.
Perhaps we should strive, as individuals and as humanity, at least at times, to slow down in our assessments and aspirations, to look in and down instead of always up and forward. Our problems are not merely technological but psychological.