Elon Musk Thinks: The End Is Near, But Why?

Should we be afraid of artificial intelligence? Ask AI experts, and they will say you are getting carried away by thoughts of what AI could become. They believe we can one day have sentient machines capable of communicating and reacting like humans, or even better, but they think that level of sentience is still far off in the future.

Elon Musk thinks otherwise; he believes that artificial intelligence represents a great danger and that we might be facing a threat to our very existence as humans. Many AI experts think the outcry over the negative side of artificial intelligence is overblown.

To the people developing AI, it is all just computation: systems taking inputs and using them to predict outputs. They don’t see killer robots taking over the world and enslaving us all. But Elon Musk has an urgent warning for us all!

Musk is terrified that artificial intelligence will surpass human knowledge, leaving us behind by a long stretch. He has tried to warn and convince people about the dangers artificial intelligence poses and the need to slow down its development, but he thinks it’s already too late.

The richest man in the world has warned us on several occasions; a quick search turns up plenty of videos of his warnings. For years, Musk has depicted scary outcomes like those in The Terminator.

We doubt anyone can build a Skynet, but who knows; a group of scientists or military personnel somewhere might be building something close to it. Either way, it seems the business magnate is taking matters into his own hands where AI is concerned.

Musk is deeply involved in the research and development of artificial intelligence; Neuralink and OpenAI are proof that he’s not entirely opposed to the idea of having artificial intelligence run things. Tesla cars, which he is most famous for, have artificial intelligence at the heart of their Full Self-Driving feature.

It is what makes them so loved and cherished around the world. But Musk is very concerned about how fast AI is developing and growing. In 2014, he told CNBC’s Closing Bell, “I like to just keep an eye on what’s going on with artificial intelligence.”

He went on to say, “There have been movies about this, you know, like The Terminator, there are scary outcomes.” In 2017, he referred to the Terminator films again, this time citing his neurotechnology startup Neuralink, which aims to develop brain implants that let humans interface with machines.

He wrote on Twitter, “That is the aspiration: to avoid AI becoming the other.” Musk thinks Neuralink is needed as a preventative tool against a possible AI threat like Skynet. He emphasized his warning again in 2017 while speaking at that year’s annual summer meeting of the National Governors Association, a Washington, D.C.-based nonpartisan political organization.

During his speech, he said, “Robots will be able to do everything better than us. I have exposure to the most cutting edge AI, and I think people should be really concerned by it.” Musk also considers DeepMind a top concern when it comes to artificial intelligence.

The tech billionaire profited from an early investment in DeepMind, but that didn’t stop him from warning us about the company. Google acquired DeepMind in 2014 for an amount estimated at $600 million.

The research lab is led by Demis Hassabis, best known for developing artificial intelligence systems that can play games better than humans. Musk thinks the nature of the AI being built by DeepMind is not ideal.

In an interview, he told the New York Times that “Just the nature of the AI that they are building is one that crushes all humans at all games. It is basically the plotline in War Games.”

Musk believes AI will become smarter than humans in less than five years. He told the New York Times that, based on his experience working with AI at Tesla, he could confidently say the technological singularity will happen.

He says that doesn’t mean everything will go to hell in five years; it just means things will get unstable or weird. AI pioneer Yoshua Bengio contradicts Elon Musk; he told the BBC, “We are very far from superintelligent AI systems, and there may even be fundamental obstacles to get much beyond human intelligence.”

But Musk is convinced that artificial intelligence constitutes a bigger threat than nuclear weapons. Nuclear weapons are regulated, so people cannot go around building them without expecting sanctions. But there is no such regulatory oversight of artificial intelligence.

Musk knows that artificial intelligence is not all bad, but it needs to be kept out of the wrong hands so we don’t end up with undesirable outcomes. Speaking about the need to regulate AI, Musk has said that the usual approach to regulation will not work for AI.

He thinks the process of implementing regulations is slow and linear, while artificial intelligence grows at an exponential rate. We can’t mount a linear response to an exponential threat; doing so sets us up to fail.

Musk is scared that if the development of AI continues unchecked, someone might intentionally or accidentally create a fully sentient AI that decides to send us packing from our planet. And the destruction of our planet is not the only threat we face by allowing AI to develop unchecked.

Another threat posed by AI is the manipulation of data; equipped with the right programs, an AI could have eyes and ears everywhere by accessing every camera and microphone connected to the internet.

It could use that data to manipulate people, spread propaganda, or sway a pivotal presidential election, sparking global disputes and disrupting world peace. Propaganda can prove more threatening than being hunted by a killer drone.

Have you heard the phrase “if you can’t beat them, join them”? Musk thinks the answer to keeping artificial intelligence in check is to merge with it. But how can we be sure that such a future preserves the best parts of humanity?

With a layer of artificial intelligence in our brains, can we even call ourselves human? Is this the next step of evolution? For Musk, a layer of artificial intelligence in our heads might be the best possible solution.

Instead of having such a high level of intelligence exclusively controlled by governments and large corporations in purely digital form, it could be democratized as part of our own minds. But some people are taking a different approach to curbing the potential dangers of artificial intelligence.

In 2017, the Future of Life Institute published an open letter outlining a set of principles that it deems necessary to avoid an out-of-control artificial intelligence or a doomsday scenario involving autonomous weapons.

The open letter has been endorsed by Stephen Hawking, Elon Musk, Ilya Sutskever, and Demis Hassabis. It lays out research principles: the goal of AI research should be to create beneficial intelligence, not undirected intelligence.

It also states that all investments in AI should be accompanied by funding for research to ensure its beneficial use.

It also calls for a constructive and healthy exchange between AI researchers and policymakers, for teams developing artificial intelligence to cooperate actively rather than cut corners on safety standards, and for a culture of cooperation, trust, and transparency among AI researchers and developers.

The letter also addresses ethics and values, the issues everyone is most concerned about. An arms race in lethal autonomous weapons should be avoided, and AI technologies should benefit and empower as many people as possible.

If an AI system causes harm, it should be possible to ascertain why. It also talks about personal privacy: people should have the right to access, manage, and control the data they generate, given AI systems’ power to analyze and utilize that data.

On human values, the letter says AI systems should be designed and operated to be compatible with ideals of human dignity, freedom, rights, and cultural diversity.

All these principles are great, and following them would help ensure that AI does not someday spell doom for us, as Elon Musk predicts. But principles are not rules, and no one is obligated to follow them. Musk thinks that if we humans decide an AI extension of ourselves is the right way to deal with artificial intelligence, then laying the groundwork is the most important thing we have to do.

Musk has embarked on the ambitious goal of syncing the human brain with artificial intelligence, demonstrating a neural implant that can transmit brain signals without external hardware. He describes the implant as a Fitbit in our skulls with tiny wires.

The implant, a brain-computer interface (BCI), would allow us humans to communicate with computers without pushing buttons or even lifting a finger. It would translate our thoughts into commands a computer can act on and translate the computer’s responses back into signals our brains can understand.
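To make that idea concrete, here is a minimal, purely illustrative sketch of such a decode-and-encode loop in Python. Everything in it is an assumption made for illustration: the channel count, the command set, and the simple linear decoder are invented here and say nothing about Neuralink’s actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

def read_neural_features(n_channels=8):
    # Stand-in for sampling activity from implanted electrodes.
    return rng.normal(size=n_channels)

def decode_command(features, weights):
    # Toy linear decoder: map neural features to one of a few computer commands.
    commands = ["move_cursor_left", "move_cursor_right", "click", "idle"]
    scores = weights @ features  # one score per command
    return commands[int(np.argmax(scores))]

def encode_feedback(success, n_channels=8):
    # Turn the computer's response back into a (toy) stimulation pattern.
    amplitude = 1.0 if success else 0.2
    return amplitude * np.ones(n_channels)

weights = rng.normal(size=(4, 8))  # a real decoder would be learned from calibration data
features = read_neural_features()
command = decode_command(features, weights)
stimulation = encode_feedback(command != "idle")
print(command, stimulation)
```

In a real system, the thought-to-command step is a statistical decoder trained on recorded neural data, and the feedback path is far more involved than a single amplitude value, but this round trip is the basic shape of the idea.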

———————–

Thanks for reading to the end. Let us know your opinion on “Elon Musk Thinks: The End Is Near, But Why?” in the comments.


Information Source: Elon Musk Live
