What If Elon Musk Is Right About AI?

Elon Musk is at or near the top of every AI influencer list I’ve ever come across, despite the fact that he has no degree in AI and appears to have only one academic journal article in the field, a well-received commentary.

There is nothing wrong with that; Yann LeCun trained in physics (the same field as one of Musk’s two undergraduate degrees) but is better known for his pioneering work in machine learning.

I, too, am known for my AI work, but I trained in cognitive science; the most important paper I’ve written for AI appeared in a psychology journal. And Musk’s work on driverless cars has undoubtedly influenced the development of AI.

But much of what he says about AI is wrong. Most notoriously, none of his predictions about the timeline for self-driving cars have come true.

In October 2016, he predicted that a Tesla would drive itself from California to New York by 2017. (It didn’t happen.) Tesla has deployed a technology called “Autopilot,” but everyone in the industry knows the name is a fib, more marketing than reality.

Teslas are no closer to being able to drive themselves; seven years after Tesla introduced Autopilot, the software is still buggy enough that a human driver must pay attention at all times.

Musk also consistently misunderstands the relationship between natural (human) intelligence and artificial intelligence. He has repeatedly argued that Tesla doesn’t need lidar, a sensing system that virtually every other autonomous vehicle company relies on, based on a misleading comparison between human vision and the cameras in driverless cars.

While it’s true that humans don’t need lidar to drive, current AI is nowhere near good enough to understand and handle the full complexity of road situations without it. Driverless cars need lidar as a crutch precisely because they lack human-like intelligence.

Teslas also keep colliding with stationary emergency vehicles, a problem the company has failed to solve for more than five years. For reasons that have never been fully disclosed publicly, the cars’ perception and decision-making systems still cannot drive with sufficient reliability without human intervention.

Musk’s claim is like saying that humans don’t need to walk because cars don’t have legs. If my grandmother had wheels, she would be a car.

Despite this spotty track record, Musk continues to make pronouncements about AI, and when he does, people take them seriously.

His latest, first reported by CNBC and then widely picked up, took place a few weeks ago at the World Government Summit in Dubai. Some of what Musk said was, in my professional judgment, spot-on, and some of it was way off.

Most erroneous was his implication that we are close to solving AI, to reaching so-called artificial general intelligence (AGI) with the flexibility of human intelligence, on the grounds that ChatGPT has shown people just how advanced AI has become.

That’s just silly. To some people, especially those who haven’t been following the AI field, the extent to which ChatGPT can mimic human prose seems pretty amazing.

But it is also deeply flawed. A truly superintelligent AI would be able to tell truth from falsehood, to reason about people, objects, and science, and to be just as versatile and quick at learning new things as humans; current chatbots can do none of this.

All ChatGPT can do is predict text that is plausible in a given context, based on the vast body of written work it was trained on; whether that text is true is another matter entirely.

Source: Vox
