AI is Getting Smarter Every Day

What Happens When AI Is Smarter Than Us?

By Tony Burlinson

The Singularity is a theoretical point in the future when AI surpasses human intelligence and then improves itself at an accelerating, uncontrollable rate. Some believe that once this Rubicon has been crossed, AI’s progress will become so rapid that human beings will no longer be able to control it.

At the core of the Singularity theory is the idea that AI systems will be capable of redesigning or enhancing their own algorithms, hardware and learning processes. Each improvement makes it more capable of creating the next improvement. This could result in AI rapidly exceeding human cognitive abilities in problem solving, creativity, and decision making.
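The feedback loop described above can be illustrated with a toy numerical sketch. This is purely illustrative, not a model of real AI systems: it simply assumes each improvement cycle adds capability in proportion to the system's current capability, which is what makes the growth compound rather than stay linear.

```python
# Toy sketch of a recursive self-improvement loop (illustrative assumption:
# each cycle's gain is proportional to current capability).

def self_improvement(capability: float, gain: float, steps: int) -> list[float]:
    """Return the capability trajectory over a number of improvement cycles.

    Because the increment scales with the current level, the curve
    accelerates: a more capable system makes a bigger next improvement.
    """
    history = [capability]
    for _ in range(steps):
        capability += gain * capability  # better system -> larger next step
        history.append(capability)
    return history

trajectory = self_improvement(capability=1.0, gain=0.5, steps=10)
print(trajectory)  # each step grows faster than the last
```

With a 50% gain per cycle, capability multiplies roughly 58-fold in just ten cycles; the same loop with a fixed (non-compounding) increment would only reach 6. That gap between linear and compounding growth is the intuition behind "uncontrollable rate."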

This transformation could fundamentally reshape our society, political structures, economic models, scientific theories, and perhaps even the core identity of what it means to be human. We would no longer control our planet’s destiny.

There is strong disagreement on both the likelihood and consequences of Singularity. Some view it as a potential turning point for human progress, enabling breakthroughs in medicine, science and engineering that humans are incapable of achieving. Others fear loss of human control over critical systems and the emergence of structures that would fundamentally conflict with human values.

The Singularity could either usher in a new utopia of human existence or destroy our civilization. Those two extremes are why the Singularity is so hotly debated.

Some claim that humans will always be capable of ‘pulling the plug’ in the tangible world, stopping AI by simply disconnecting physical systems. Others theorize that AI will become smart enough to bypass and control its physical limitations.

The AI Singularity remains hypothetical.

While AI capabilities continue to advance, there is no consensus on whether the Singularity will occur, when it might happen, or what it would look like. As a result, the Singularity functions as both a serious topic of academic debate and a powerful lens for examining the long-term implications of AI for human civilization.

My personal opinion: headlines in the media about AI taking over from humans likely sell advertising. In reality, we are a long way from the Singularity.

We are seeing phenomenal advances in compute power and model complexity. However, you only need to spend a bit of time using AI to understand that while AI models are amazing, they frequently fail to grasp real world context and then start hallucinating.

To be smarter than humans, AI will need to vastly reduce its propensity to hallucinate.

The current AI models make errors that most humans can spot a mile away. We’re likely going to see AI hallucinations for some time to come.

If AI ever does start to approach Singularity, we would likely need a choke point (kill switch) in the architecture.

There are two problems with that:

First, we have spent decades building technology infrastructures with decentralized redundancies. That’s the polar opposite of a choke point, and it would be difficult and expensive to unwind our existing, ubiquitous fail-safe architectures.

Second, choke points create latency. Slowing down AI is the last thing any of the firms operating in this space are going to do.
