Elon Musk Unveils Tesla’s Dojo Supercomputer

Tesla has made cryptic references to a project called Dojo, a “super-powerful training computer.” Elon Musk tweeted that “Tesla is developing a [neural network] training computer called Dojo that will process extremely large amounts of video data.”

He added, “It’s a beast! … A truly useful exaflop at de facto FP32.” So, what is it? Does it have something to do with the metaverse? Or is it just an ordinary computer with a unique name?

Quantum computing is often billed as the next big thing: a technology that promises computers dramatically faster than today’s machines, aimed at huge, complex problems.

These are the kinds of problems that would take a conventional computer days, weeks, or even months to solve; a quantum computer could, in principle, do the work in a fraction of the time.

Quantum computing is an area of computing focused on technology built on the principles of quantum theory, which describes the behavior of energy and matter at the atomic and subatomic level. Today’s computers encode information in bits, each holding a value of 1 or 0.

Quantum computing instead uses quantum bits, or qubits, which can exist in more than one state at once: a 1 and a 0 at the same time. It is an extremely complicated field, and the technology is not yet a practical alternative for everyday workloads.
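To make the superposition idea concrete, here is a toy simulation in plain NumPy (not a quantum SDK). The state vector and measurement rule are standard textbook quantum mechanics; everything else is purely illustrative:

```python
import numpy as np

# A classical bit is 0 or 1; a qubit is a unit vector of two complex
# amplitudes over the basis states |0> and |1>.
ket0 = np.array([1, 0], dtype=complex)   # |0>
ket1 = np.array([0, 1], dtype=complex)   # |1>
plus = (ket0 + ket1) / np.sqrt(2)        # equal superposition of both

# Measurement probabilities are the squared magnitudes of the amplitudes.
probs = np.abs(plus) ** 2
print(probs)                             # [0.5 0.5]: a 50/50 chance of 0 or 1

# Sampling a measurement collapses the superposition to a definite bit.
outcome = np.random.choice([0, 1], p=probs)
print(f"measured: {outcome}")
```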

At this point, quantum computers are mostly inferior to today’s machines, and most experts say practical quantum computing is still a decade or so away, though some have predicted that quantum computers could show certain advantages as early as 2023. Dojo, to be clear, is not a quantum computer: it is a classical supercomputer purpose-built for training neural networks, and that is the machine Musk’s company has been building.

Ganesh Venkataramanan, Tesla’s senior director of Autopilot hardware and the leader of the Dojo project, gave the presentation unveiling Dojo’s 7-nanometer D1 chip, which Tesla billed as a breakthrough in bandwidth and performance.

This is the second chip the Tesla team has designed in-house, after the FSD chip that powers the Hardware 3 FSD computer. Introducing the new D1 chip, Venkataramanan said the entire project had been designed internally by the Tesla team.

Everything from the architecture to the package was designed in-house. The result, Tesla claims, is GPU-level compute with CPU-level flexibility and twice the I/O bandwidth of state-of-the-art networking chips. The chips are designed to “seamlessly connect without glue,” and the automaker took advantage of that by connecting 500,000 training nodes together.

Tesla adds the interface, power delivery, and thermal management to make what it calls a training tile: a 9-PFLOPS unit with 36 TB/s of bandwidth in less than a cubic foot. Venkataramanan had an actual Dojo training tile on stage.

According to the engineers, “It’s never happened before. It’s amazing.” Beyond that single tile, however, Tesla did not unveil any fully assembled Dojo hardware at the event.

On stage, Venkataramanan even appeared to surprise Andrej Karpathy, Tesla’s head of AI, by revealing that the Dojo training tile had run one of his neural networks for the first time. But Tesla still needs to assemble those training tiles into a compute cluster before it can claim to have built the first Dojo supercomputer.

Tesla says it can combine tiles two-by-three in a tray and fit two trays in a computer cabinet, for a total of over 100 PFLOPS per cabinet. And thanks to the design’s extremely high bandwidth, Tesla claims it can link cabinets together to create the ExaPod.

The Tesla ExaPod, a 10-cabinet system, is meant to break the exaflop-of-compute barrier (at the reduced numerical precisions used for neural-network training), a milestone supercomputer makers have been chasing for a very long time.
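The quoted figures check out with simple arithmetic. A short sketch, using the tile, tray, and cabinet counts from the presentation:

```python
# Back-of-the-envelope check of the figures Tesla quoted on stage.
TILE_PFLOPS = 9                 # one training tile, at training precisions
TILES_PER_TRAY = 2 * 3          # tiles are combined 2 x 3 in a tray
TRAYS_PER_CABINET = 2
CABINETS_PER_EXAPOD = 10

cabinet_pflops = TILE_PFLOPS * TILES_PER_TRAY * TRAYS_PER_CABINET
exapod_eflops = cabinet_pflops * CABINETS_PER_EXAPOD / 1000

print(f"{cabinet_pflops} PFLOPS per cabinet")      # 108, i.e. "over 100 PFLOPS"
print(f"{exapod_eflops:.2f} EFLOPS per ExaPod")    # 1.08, past the exaflop mark
```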

Tesla has not put this system together yet, but Musk has stated that it will be in operation by the end of the year. Power-efficient and relatively small for a supercomputer, it would nevertheless be the largest AI training computer in the world.

Tesla plans to use the new supercomputer to train the neural networks behind its self-driving technology, but it also plans to make it available to other AI developers in the future.

And because this was Tesla’s first attempt at developing a supercomputer from scratch, the company believes there is plenty of room for improvement; it is already teasing a tenfold performance increase at some levels in the next generation of Dojo.

Before Dojo’s unveiling, Andrej Karpathy had spoken at the 2021 Conference on Computer Vision and Pattern Recognition (CVPR 2021) Workshop on Autonomous Driving, saying, “I wanted to briefly give a plug to this insane supercomputer that we are building and using now.”

Karpathy explained that the cluster has 720 nodes, each powered by eight Nvidia A100 GPUs (the 80GB model), totaling 5,760 A100s. The accelerator firepower is complemented by ten petabytes of “hot tier” storage, which can transfer 1.6 terabytes per second.

Karpathy described this “incredibly fast storage” as “one of the world’s fastest filesystems.” He said, “So this is a massive supercomputer. I actually believe that in terms of flops this is roughly the number five supercomputer in the world, so it’s actually a fairly significant computer here.”

Karpathy’s remarkable claim holds up under some back-of-the-envelope math. Nvidia’s marketing materials rate each A100 at 9.7 teraflops of FP64 compute, but in benchmarks of systems like Nvidia’s Selene supercomputer, each eight-A100 node delivers around 113.3 teraflops.

Multiply that across 720 eight-A100 nodes and you get roughly 81.6 Linpack petaflops, which would put the Tesla system well ahead of Selene itself, which delivered 63.5 Linpack petaflops and ranked fifth on the most recent Top500 list.
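That math is easy to reproduce, with the caveat that the per-node number is borrowed from Selene’s measured Linpack runs, not from any published Tesla benchmark:

```python
# Rough Linpack estimate for Tesla's pre-Dojo A100 cluster. The per-node
# figure is borrowed from Selene's measured runs, not a Tesla benchmark.
NODES = 720
TFLOPS_PER_NODE = 113.3          # one eight-A100 node on Selene (Linpack)
SELENE_PFLOPS = 63.5             # Selene's Top500 Linpack score

cluster_pflops = NODES * TFLOPS_PER_NODE / 1000
print(f"Tesla cluster: ~{cluster_pflops:.1f} PFLOPS")   # ~81.6
print(f"Selene:         {SELENE_PFLOPS} PFLOPS")        # ranked fifth on Top500
```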

Pretty cool, right? But that’s not all: Karpathy also showed a slide with a poorly photoshopped brain in a zooming car, captioned with statistics comparing humans to “meat computers” with a “250 ms reaction latency” in a “tight control loop with one-ton objects at 80 miles per hour.”
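To put that 250 ms figure in perspective, a quick calculation of how far a car travels during one human reaction time (the unit conversion is standard; the speed and latency come from Karpathy’s slide):

```python
# Distance a car covers during one human reaction time, per Karpathy's slide.
MPH_TO_MS = 0.44704              # miles per hour to metres per second
speed_ms = 80 * MPH_TO_MS        # ~35.8 m/s
latency_s = 0.250                # "250 ms reaction latency"

print(f"{speed_ms * latency_s:.1f} m travelled before a human even reacts")  # ~8.9 m
```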

FSD is about replacing that sluggish computer (which, Karpathy noted, can write poetry but often can’t stay within the lines on the road) with a faster, safer one. In truth, though, training computers to understand roads is tough, whether with Tesla’s camera-only approach or with lidar on board as well.

There are innumerable contingencies and wacky scenarios that make it hard for a vehicle to process its surroundings the way a human brain would. In one example, Karpathy showed a truck kicking up dust and debris that blinded the cameras for a few seconds.

So, to train systems that can deal with these obstacles, Tesla first collects enormous amounts of data. Karpathy said, “For us, computer vision is the bread and butter of what we do and what enables the autopilot. And for that to work really well, you need a massive dataset – we get that from the fleet.”

And the datasets are indeed huge: one million ten-second videos from each of the eight cameras on the sampled Teslas, recorded at 36 frames per second across “highly diverse scenarios.” In total, the dataset carries six billion object labels (including depth and velocity data) and weighs in at 1.5 petabytes.
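The scale is easier to appreciate with a little arithmetic. A sketch of the frame counts, with one labeled assumption: the talk leaves it ambiguous whether the one million clips are counted per camera or in total, so this treats them as one million eight-camera clips:

```python
# Scale of the training set as Karpathy described it. Assumption: one
# million clips, each captured by all eight cameras simultaneously.
CLIPS = 1_000_000
SECONDS_PER_CLIP = 10
FPS = 36
CAMERAS = 8

frames = CLIPS * SECONDS_PER_CLIP * FPS * CAMERAS
print(f"{frames / 1e9:.2f} billion camera frames")     # 2.88 billion
print("6 billion object labels, ~1.5 PB total (figures from the talk)")
```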

Karpathy said, “You … need to train massive neural nets and experiment a lot. Training this neural network – like I mentioned, this is a 1.5 petabyte dataset – requires a huge amount of compute.” In other words, Tesla has “invested a lot” in this capability.

According to Karpathy, the newly unveiled cluster is optimized for rapid video transfer and processing thanks to its “incredibly fast storage” and “a very efficient fabric” that enables distributed training across all nodes.
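Tesla has not published its training stack, so as a purely illustrative sketch, this is the general shape of distributed data-parallel training on a multi-node GPU cluster using PyTorch’s DistributedDataParallel. The model and data are stand-ins; nothing here is Tesla’s actual code:

```python
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Illustrative only: each process drives one GPU, and gradients are
# synchronized over the cluster fabric on every backward pass.
# Launch with torchrun, which sets RANK, LOCAL_RANK and WORLD_SIZE.

def main():
    dist.init_process_group(backend="nccl")            # NCCL rides the fast fabric
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(512, 10).cuda(local_rank)  # stand-in for a vision net
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for _ in range(100):
        x = torch.randn(32, 512, device=local_rank)          # stand-in frames
        y = torch.randint(0, 10, (32,), device=local_rank)   # stand-in labels
        loss = torch.nn.functional.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()      # gradients are all-reduced across every node here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```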

Meanwhile, Dojo continues to be teased. Karpathy said the team is working on Project Dojo, which will take all of this to the next level, but that he was not ready to reveal more details. For now, only a few tweets from Musk fill in the picture: that “Dojo uses our own chips [and] a computer architecture optimized for neural net training, not a GPU cluster,” and that Dojo will be offered as a web service for model training “once we work out the bugs.”

Another of his tweets read, “Could be wrong, but I think it will be best in the world.” For now, Tesla is content to tell the world that it is investing heavily in HPC, and that the investment is only growing.

Karpathy said the HPC team is expanding rapidly, and he encouraged audience members interested in self-driving cars to contact the company. Whenever Dojo does arrive, it should bring a new and very smart way to operate a car.

With people like this behind it, things are going to get far more impressive than you might have imagined. Anyway, what do you think? Tell us in the comments.
