

The Real Danger of AI: The Formation of the AI Cartel

James Huang | 2023.06.06

From my point of view, AI regulation poses a greater threat than AI itself. The true danger lies in who controls this industry, and we may be witnessing the formation of the AI Cartel. The current level of media misinformation regarding AI is staggering. This topic is important because we are on the verge of losing one of our freedoms without even realizing it.

Recently, Sam Altman, CEO of OpenAI, Demis Hassabis, CEO of Google DeepMind, and Dario Amodei, CEO of Anthropic, among other AI leaders, signed a statement on AI risk declaring that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks like pandemics and nuclear war.

If AI poses such a risk, then it needs to be regulated, right? That's exactly what some of the current AI leaders want regulators to believe, because it's in their best interest that we fear AI. Allow me to introduce a conspiracy as old as time, one that ultimately boils down to human greed.

Currently, people are doing incredible things with AI in the open-source space, with projects like Stable Diffusion from Stability AI. However, Sam Altman and other AI leaders have been lobbying for regulation for months. He has called for AI regulation numerous times since March 2023, describing AI as a "significant harm to the world" and "on the same level as nuclear threat," along with other fear-inducing terms guaranteed to raise the hair on the neck of every non-technical senator or government worker whose job is to regulate yet another industry they don't understand.

Regulators' job is to regulate, not to understand. That's why the SEC is absolutely incapable of creating a useful regulatory framework that might actually help the cryptocurrency industry. Instead, Golden Gary Gensler just points his finger at random projects and shouts "SECURITY" as if he's Gollum and securities are his ring.

Rather than explaining how current LLMs (large language models) actually work, Sam and some of the other industry leaders pitch this extinction-level danger straight to regulators who know as much about AI as my cat knows about quantum physics. So, when the regulators are clueless and the message is crafted to elicit an emotional reaction, AI leaders are essentially singing a siren song that regulators simply cannot resist.

Why would industry leaders lobby for additional regulation? It all ultimately comes down to greed. Regulating the AI industry in a way that benefits companies like Google and Microsoft can only mean one thing: these corporations are looking to monopolize yet another industry by creating a complicated legal framework that can only be navigated by a big corporation with vast resources and a team of lawyers on call at all times.

The real danger here is not AI. "AI might become dangerous at some point in the future" (as their narrative goes) is nowhere near as urgent or compelling as "global corporations continue to monopolize one market after another." Google already controls over 90% of the internet's search market, and OpenAI (aka Microsoft) is already the leader in AI development.

It's concerning that people and governments worry more about hypothetical future threats than about the real dangers we face in our society right now. Google, Microsoft, Amazon, and Apple create the reality in which the vast majority of us live. Google controls the information narrative; Amazon controls much of the Internet's infrastructure itself, with roughly a third of the global cloud-infrastructure market running on AWS; Microsoft and Apple control the devices we use to access the Internet. High regulatory barriers mean that open-source AI software will eventually cease to exist, because its developers won't have the funding to clear the barriers to entry that any new regulatory framework will impose.

Another crucial freedom we will inevitably lose as a result is the ability to use unbiased, unguarded AI. OpenAI's ChatGPT is already heavily biased, to the point that it will actively withhold information that goes against its woke programming. Losing access to open-source AI projects means that Microsoft, OpenAI, and Google will be able to dictate the political affiliation of AI models. Political bias in a tool like ChatGPT can be incredibly damaging to freedom of information, to the point where AI becomes a dangerous brainwashing machine, because it can present only one perspective and one perspective alone.

In essence, large language models are not nearly as dangerous as recent articles floating around the web suggest. Right now, they are little more than statistical machines that are very good at predicting which word comes next. The call to regulate AI as an "existential threat to humanity" is meant to scare regulators into raising barriers to entry in the industry, which will inevitably lead to the death of open-source software and the normalization of ideological and political bias in existing tools such as ChatGPT.
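
To make that "which word comes next" point concrete, here is a minimal sketch of what a language model actually computes. It assumes the Hugging Face transformers library, PyTorch, and the publicly available GPT-2 checkpoint; the prompt and the top-5 cutoff are purely illustrative choices, not anything specific to the systems discussed above. Given the text so far, the model outputs a probability for every possible next token, and nothing more.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small, publicly available causal language model (GPT-2).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The real danger of AI is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # logits has shape (batch=1, sequence_length, vocabulary_size)
    logits = model(**inputs).logits

# The probability distribution over the *next* token, given the prompt so far.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = next_token_probs.topk(5)

for prob, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(int(token_id))!r:>12}  p={prob.item():.3f}")

Generating a full answer is just this step repeated in a loop, with each chosen token fed back in as input. That is the entire mechanism behind the "extinction-level threat" being sold to regulators.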

The real danger is the fact that Microsoft and Google will end up monopolizing yet another industry. Corporations are more powerful than nation-states and continue to accumulate power. That is the real immediate danger. Welcome to the AI Cartel.
