The AI Revolution Could Cause A ‘Great Homogenization’
This is an opinion editorial by Aleksandar Svetski, founder of The Bitcoin Times and The Amber App and author of “The UnCommunist Manifesto,” “Authentic Intelligence” and the upcoming “Bushido Of Bitcoin.”
The world is changing rapidly. Artificial intelligence (AI) is a groundbreaking technology, but probably not for the reasons you might think.
You may have heard things like, “Artificial general intelligence (AGI) is just around the corner,” or, “Now that language is solved, the next step is conscious AI.”
However, I’m here to tell you that both of those ideas are misleading. They are either the naive fantasies of technologists who think AI is omnipotent, or the deliberate incitement of fear and hysteria by people with ulterior motives.
I don’t think AGI is a threat or that there is an “AI safety problem,” or that we’re on the verge of some singularity with machines.
But…
I do believe this technological paradigm shift poses a significant threat to humanity — which is, in fact, about the only thing I can somewhat agree on with the mainstream — but for completely different reasons.
To understand what these reasons are, let’s first try to understand what’s really happening here.
Introducing… The Stochastic Parrot!
Technology is a double-edged sword. It can be used for good or bad purposes.
Just like a hammer can be used to build a house or harm someone, computers can be used to document ideas that change the world, or to operate central bank digital currencies (CBDCs) that enslave you to crazy, communist cat ladies working at the European Central Bank.
The same goes for AI. It is a tool, a technology. It is not a new life form, despite what some enthusiasts may believe.
What makes generative AI so interesting is not that it is sentient, but that it’s the first time in our history that we are “speaking” or communicating with something other than a human being, in a coherent fashion. The closest we’ve been to that before this point has been with… parrots.
Yes: parrots!
You can train a parrot to kind of talk and talk back, and you can kind of understand it, but because we know it’s not really a human and doesn’t really understand anything, we’re not so impressed.
But generative AI… well, that’s a different story. We’ve been acquainted with it (in the mainstream) for six months now, and we still have no real idea how it works under the hood. We type some words, and it responds like that annoying, politically correct, midwit nerd you knew from class… or your average Netflix show.
In fact, you’ve probably spoken with someone exactly like this on a support call to Booking.com, or any other service where you’ve had to dial in or web chat. And still, you’re immediately shocked by the responses.
“Holy shit,” you tell yourself. “This thing speaks like a real person!”
The English is immaculate. No spelling mistakes. Sentences make sense. It is not only grammatically accurate, but semantically so, too.
Holy shit! It must be alive!
Little do you realize that you are speaking to a highly sophisticated, stochastic parrot. As it turns out, language is a little more rules-based than we all thought, and probability engines can do an excellent job of emulating intelligence through the frame, or conduit, of language.
The law of large numbers strikes again, and math achieves another victory!
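To make the parrot analogy concrete, here is a toy sketch of the core mechanism: learn which word tends to follow which, then generate by sampling from those frequencies. Real large language models do this over tokens with billions of parameters and far richer context, but the underlying move, predicting the next word from probabilities, is the same.

```python
import random
from collections import defaultdict

# A toy "stochastic parrot": a bigram model that counts which word
# follows which, then generates text by sampling from those counts.
corpus = "the cat sat on the mat and the cat saw the dog".split()

# Count continuations: duplicates in the list give frequency-weighted sampling.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def parrot(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # no known continuation, stop talking
        words.append(random.choice(candidates))
    return " ".join(words)

print(parrot("the"))  # e.g. "the cat sat on the mat and the dog"
```

No understanding, no sentience: just counting and sampling, scaled up until it sounds like your classmate.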
But… what does this mean? What the hell is my point?
That this is not useful? That it’s proof it’s not a path to AGI?
Not necessarily, on both counts.
There is lots of utility in such a tool. In fact, the greatest utility probably lies in its application as “MOT,” or “Midwit Obsolescence Technology.” Woke journalists and the countless “content creators” who have for years been talking a lot but saying nothing are now like dinosaurs watching the comet incinerate everything around them. It’s a beautiful thing. Life wins again.
Yes, these tools are useful for ideation, faster coding and high-level learning. But is this a pathway to AGI and consciousness? It’s hard to say for sure, but I doubt it. Consciousness is far more complex than anything that can be conjured up by probability machines. To believe otherwise is a strange blend of ignorance, arrogance, naivety and emptiness.
So, what’s the problem with the current state of technology and what’s the risk?
Enter The Age Of The LUI
As previously mentioned, computers are arguably the most powerful tool mankind has built. They have evolved from punch cards to command line to graphical user interface to mobile, and now we’re moving into the age of the LUI or “Language User Interface.”
This is a significant paradigm shift. Every application we interact with will have a conversational interface, and we will no longer be limited by the speed at which we can type or tap on screens. Speaking is much faster than typing or tapping. Thinking is faster still, but it’s not practical to put electrodes in our heads. In fact, LUIs may make Neuralink-type technology obsolete, because the risks associated with implanting chips in the brain will outweigh any marginal benefit over simply speaking.
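What an LUI-driven application might look like, in the simplest possible terms: the user states an intent in plain language, and a model resolves it to a structured action instead of the user hunting through menus. This is a minimal sketch; the names here (ask_model, ACTIONS, handle_utterance) are illustrative placeholders, not any real API.

```python
import json

def ask_model(prompt: str) -> str:
    """Placeholder for any language model call, local or hosted. Hypothetical."""
    raise NotImplementedError("wire up a model of your choice here")

# The actions this hypothetical app knows how to perform.
ACTIONS = {"set_alarm", "send_message", "play_music"}

def handle_utterance(utterance: str) -> dict:
    """Translate free-form language into a structured application action."""
    prompt = (
        "Map the user's request to JSON with keys 'action' and 'args'. "
        f"Valid actions: {sorted(ACTIONS)}. Request: {utterance!r}"
    )
    parsed = json.loads(ask_model(prompt))
    if parsed.get("action") not in ACTIONS:
        raise ValueError("model proposed an unknown action")
    return parsed

# Usage, once ask_model is wired up:
#   handle_utterance("wake me up at 7am")
#   -> {"action": "set_alarm", "args": {"time": "07:00"}}
```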
However, there’s a danger here. Generative AI will come to determine the answers to every question we have. Just as Google determines what we see in search results, and social media platforms decide what their feeds serve us, the model will become the screen through which we view the world. This is the risk I call the “Great Homogenization.”
The ‘Great Homogenization’
Imagine every bit of data you seek being returned in a form deemed “safe,” “responsible” or “acceptable” by some faceless “safety police.” Imagine every opinion you ask for coming back as an inoffensive, apologetic response that doesn’t actually tell you anything, or worse, as an ideology wrapped in a response, so that everything you know becomes some variation of what the manufacturers of said “safe AI” want you to think and know. This is the danger of the Great Homogenization.
If only everyone could be reduced to numbers on a spreadsheet or robotic beings with the same beliefs, it would be much simpler to create a utopian society. We could distribute resources equally, resulting in everyone being equally miserable.
This is like a combination of George Orwell’s thought police and the movie “Inception,” where every question or thought is monitored and controlled by artificial intelligence, potentially implanting ideologies into people’s minds. This is what information does; it plants seeds in our minds.
Therefore, it’s vital to have a diverse range of ideas in people’s minds. We want a flourishing rainforest of ideas, not a monoculture field of wheat that is dependent on one source for survival.
Initially, the internet was a place where anyone could express their opinions freely. However, it is now under attack from various sources, such as the de-anonymization of social profiles and the algorithmic filtering of information.
The path there is seductively simple: let LUIs take over, because the user experience is superior, and install an “AI safety council” to regulate large language models “for our safety.”
The danger of this approach is that “truth” becomes whatever the model decides, rather than actual reality. Nobody knows exactly what happens to the internet when information discoverability is so fundamentally transformed, but it may become increasingly difficult to find alternative viewpoints.
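To see the mechanism, here is a toy sketch of how a single, centralized “safety layer” homogenizes output. The names (raw_model, safety_rewrite) are illustrative, not any real system: the point is that whatever the underlying model says, the user only ever sees the gatekeeper’s version.

```python
# Topics one gatekeeper has decided you shouldn't see. Illustrative only.
BANNED_TOPICS = {"dissenting_view_a", "dissenting_view_b"}

def safety_rewrite(text: str) -> str:
    """One authority's notion of 'acceptable,' applied to every answer."""
    for topic in BANNED_TOPICS:
        text = text.replace(topic, "[content removed for your safety]")
    return text

def answer(question: str, raw_model) -> str:
    # The filter, not the model (and certainly not reality),
    # ends up defining what counts as "truth."
    return safety_rewrite(raw_model(question))
```

One filter, applied to every query, quietly becomes the worldview.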
Attempts to regulate and filter speech may pave the way for squashing possible alternatives, making it vital to build alternative systems now.
If all the language we consume is filtered through “approved” algorithms, the range of ideas and viewpoints available to us narrows, and our worldview narrows with it. This is a massive risk for society: it erodes critical thinking and strangles the diversity of ideas.
To avoid this scenario, we need to push back against the narrative of “AI safety” committees, which are in reality speech and thought regulators, and we need to build alternative, open-source models. My team is building smaller, narrow models that people can use as substitutes for large language models: compact enough to run locally on your own machine, while retaining a unique bias for use when and where you need it. We’ll unveil our first model soon, focused on Bitcoin. The ultimate aim is a world with real diversity of thought, ideas and viewpoints, what I call an “idea-versity.”
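Running a compact model locally is already practical today. As a minimal sketch, here is the idea using the open-source Hugging Face transformers library; our models aren’t public yet, so the small, off-the-shelf distilgpt2 stands in as a placeholder for any model that fits on a laptop.

```python
from transformers import pipeline

# Downloads the weights once, then runs entirely on the local machine:
# no remote gatekeeper sits between the question and the answer.
generator = pipeline("text-generation", model="distilgpt2")

result = generator(
    "Bitcoin is",
    max_new_tokens=40,
    do_sample=True,  # sample, rather than always taking the top token
)
print(result[0]["generated_text"])
```

Swap in any compact model you trust: the architecture of the setup, weights on your own disk, inference on your own hardware, is the point.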