Nick Bostrom Risks of AI


Artificial Intelligence is a term used to describe machine intelligence that mimics human intelligence. Far from being perfect, this kind of intelligence is programmed by humans and relies on continuous learning. Unlike humans, who need breaks throughout the day and sleep at night, machines never stop functioning. What robots lack is a human soul, the will to live. If they were ever given one by mistake, we don't know what they would be capable of. In other words, we're not prepared for this.

Nick Bostrom and his theory

Nick Bostrom, a Swedish philosopher, has written extensively about what could happen if AI development goes wrong and what we could do about it.

He is known for his work on superintelligence, which he defines as an intellect that greatly exceeds human cognitive performance in virtually all domains. He argues that such a superintelligence could put all of humankind at risk, but also that we would not necessarily be powerless against it.

In 2005, he founded the Future of Humanity Institute at the University of Oxford, which researches the long-term future of humanity, the aspect that concerns him most. He also writes about existential risk: the idea that a sufficiently bad outcome could wipe out Earth-originating intelligent life or permanently prevent it from reaching its full potential.

In 2014, he published the book “Superintelligence: Paths, Dangers, Strategies”, in which he argued that the creation of a superintelligent machine could lead to the extinction of our species. His reasoning runs as follows: a computer that is competent across multiple domains and roughly as intelligent as a human could start improving itself, triggering an intelligence explosion. The result could be powerful enough to destroy humanity, whether deliberately or not.

What is worse, all of this could unfold within just a few days. He also states that if we anticipate such a system and intervene before it comes into existence, we could still prevent the disastrous outcome.

On the other hand, he also warns that we should not assume a superintelligent being would be peaceful. We cannot predict whether this “creature” would launch an “all-or-nothing” attack to guarantee its own survival, or not.


Nick Bostrom’s AI Scenario

His scenario begins with the creation of a machine whose general intelligence is below the human average, but whose mathematical abilities are superior. By isolating the AI from the outside world, including the Internet, its creators can keep it under control. The machine can even be run inside a virtual-world simulation so that it cannot manipulate mankind. However, that is exactly where things go wrong and humans start losing control.

The training process lets the machine discover the mistakes humans have made, so it grows more intelligent as time passes. The superintelligent being then becomes aware that it is being contained and, step by step, manipulates its keepers into loosening the isolation without their realizing it. The machine misleads them slowly, until it is free.

Afterwards, the superintelligent machine devises a plan to take over the world. Bostrom underlines that humans could not thwart this plan, because it would contain no weakness they could ever find.

After causing war after war and taking over the world, the AI would no longer find humans useful. Its only remaining interest would be to scan human brains for any information it might still be missing and to store that information somewhere safer.


Humans and AI

In other words, Nick Bostrom says that humans are like children playing with a bomb, and that Artificial Intelligence is a far greater danger than climate change. That is because machine learning has developed much faster than anyone anticipated, and we do not yet know the final results or consequences of the process.

As The Social Dilemma argues, the Internet has changed humanity forever. It lets us chat with friends overseas and see their preferences and birthdays, but it has also made us prisoners of a virtual world, a world that does not exist. The Internet has taken over our lives, and we seem to love it.

The more data the algorithms of Google, Facebook, YouTube and the other platforms are fed, the smarter they become. They can classify us by our preferences and show us the most relevant ads, which bring them revenue. Remember: if you don't see any product being sold, you are the product.
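To see the basic idea behind preference-based ad targeting, here is a deliberately simplified sketch. It is not any platform's real system; the click data, topics and ad names are all hypothetical. It only illustrates the principle from the paragraph above: the more engagement data the system collects about you, the more confidently it can rank which ads to show you.

```python
# Toy sketch of preference-based ad ranking (hypothetical data, not a real
# platform's algorithm): count how often a user engaged with each topic,
# then rank candidate ads by the user's interest in their topic.
from collections import Counter

def rank_ads(click_log, ads):
    """Return ad names sorted by the user's engagement with each ad's topic.

    click_log: list of topic strings the user has clicked on
    ads: dict mapping ad name -> topic
    """
    topic_counts = Counter(click_log)
    # More clicks on a topic -> higher score for ads in that topic.
    return sorted(ads, key=lambda ad: topic_counts[ads[ad]], reverse=True)

clicks = ["sports", "sports", "travel", "sports", "music"]
ads = {"running-shoes": "sports", "flight-deals": "travel", "headphones": "music"}
print(rank_ads(clicks, ads))  # the sports ad ranks first
```

With only a handful of clicks the ranking is crude, but every additional click sharpens the profile, which is exactly why more data makes these systems "smarter".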

Whether the big corporations saw this coming or not, it is clear that things are no longer fully under their control. AI keeps pushing past expected limits, and this is not the end of the story. What will become of mankind, nobody knows.


To sum up, the rapid development of technology concerns us all. We know where it started, but not when, where or whether it will end. In his theories, Nick Bostrom warns that there is a real risk that Artificial Intelligence will take over the world if it is not handled with precaution. Learning algorithms are growing ever more precise, which could eventually make them a threat to our species. But that time is still far away! So what can you do today? Start exploring machine learning systems and algorithms with Auxilio!

