To build a better artificial intelligence supercomputer, we need light

GlobalFoundries, a company that makes chips for companies including AMD and General Motors, previously announced a partnership with Lightmatter. Harris said his company is “working with the largest semiconductor companies in the world as well as hyperscalers,” referring to the largest cloud computing companies like Microsoft, Amazon and Google.

If Lightmatter or other companies can reinvent the wiring of large-scale artificial intelligence projects, a key bottleneck in developing smarter algorithms may disappear. The use of more computing power was fundamental to the advances that produced ChatGPT, and many AI researchers see further scaling of hardware as critical to future progress in the field — and to the hope of reaching the vaguely specified goal of artificial general intelligence, or AGI, meaning a program that can match or exceed biological intelligence in every respect.

Lightmatter CEO Nick Harris said that connecting a million chips with light could enable algorithms several generations beyond today’s cutting edge. “Passage is going to enable AGI algorithms,” he said confidently.

The massive data centers needed to train giant artificial intelligence algorithms typically consist of racks filled with tens of thousands of computers running specialized silicon chips, with a spaghetti of mostly electrical connections between them. Maintaining those wires and switches is a huge engineering undertaking in itself, and the conversion between electrical and optical signals also places fundamental limits on the chips’ collective ability to perform computations.

Lightmatter’s approach is designed to simplify the tricky flow of traffic within AI data centers. “Usually you have a bunch of GPUs, and then a layer of switches, and a layer of switches, and a layer of switches, and you have to traverse that tree” to communicate between two GPUs, Harris said. In a data center connected through Passage, Harris said, every GPU can establish a high-speed connection to every other chip.

Lightmatter’s work on Passage is an example of how the recent boom in artificial intelligence has inspired companies large and small to try to reinvent the key hardware behind advanced technologies like OpenAI’s ChatGPT. Nvidia, the leading supplier of GPUs for artificial intelligence projects, held its annual conference last month, at which CEO Jensen Huang unveiled the company’s latest chip for training artificial intelligence, a GPU called Blackwell. Nvidia will sell the GPU in a “superchip” consisting of two Blackwell GPUs and a conventional CPU processor, all connected using the company’s new high-speed communications technology, NVLink-C2C.

The chip industry is known for finding ways to squeeze more computing power out of chips without making them larger, but Nvidia is going against the flow. The Blackwell GPUs inside the company’s superchips are twice as powerful as their predecessors, but they are made by joining two dies together, meaning they consume considerably more power. That trade-off, along with Nvidia’s efforts to glue its chips together with high-speed links, suggests that upgrades to the other key components of AI supercomputers, such as the one Lightmatter proposes, could become more important.
