Intel connects 64 Loihi 'brain chips' together and hands them off to researchers

Intel's current brain network isn't quite as smart as a frog. Next up: mole rats.

Intel said Monday that its “Pohoiki Beach” neuromorphic system, which links 64 of its Loihi chips to simulate 8 million neurons, is being made available to the broader research community. A 100 million-neuron system, known as Pohoiki Springs, remains on track to debut later this year, as originally planned.

In 2017, Intel began describing Loihi, the first of its neuromorphic “brain chips” designed to mimic how the human brain works, in terms of the synapses connecting its neurons. Now the company has returned to talking simply about simulated neurons.

If Wikipedia’s summaries of the neuron counts in animal brains are assumed to be correct, the Pohoiki Beach system sits at about half the level of an average frog, or slightly below that of an adult zebrafish. One hundred million simulated neurons would be on par with the Ansell’s mole rat, a rodent native to Zambia noted for digging tunnels that can stretch more than two kilometers.


One of Intel’s Nahuku boards, each of which contains 8 to 32 Intel Loihi neuromorphic chips, shown here interfaced to an Intel Arria 10 FPGA development kit. Intel’s latest neuromorphic system, Pohoiki Beach, is made up of multiple Nahuku boards and contains 64 Loihi chips. (Credit: Intel Corporation/Tim Herman)

Intel is looking at neuromorphic or probabilistic computing as an alternative to conventional silicon architectures. It’s not quite clear what role Intel sees for a neuromorphic chip in a world governed by x86 PC processors, but the company is making the chips available to researchers to test out neural-inspired algorithms such as sparse coding, simultaneous localization and mapping (SLAM), and path planning.
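To give a rough sense of what one of those workloads looks like, here is a minimal sparse-coding sketch written in plain NumPy. The dictionary size, the synthesized signal, and the solver settings are illustrative assumptions, and real research code would run this kind of algorithm as a spiking network on Loihi rather than as dense matrix math on a CPU.

```python
# Minimal sparse-coding sketch (one of the algorithm classes named above),
# written as ordinary NumPy rather than Loihi code. All sizes and parameter
# values here are illustrative assumptions, not Intel's actual workload.
import numpy as np

rng = np.random.default_rng(0)

# Random overcomplete dictionary: 64-dimensional signals, 256 atoms.
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)           # normalize each atom

# Synthesize a signal from a handful of atoms so a sparse code exists.
true_code = np.zeros(256)
true_code[rng.choice(256, size=5, replace=False)] = rng.standard_normal(5)
x = D @ true_code

# ISTA: iterative soft-thresholding, a standard sparse-coding solver.
lam = 0.1                                # sparsity penalty
step = 1.0 / np.linalg.norm(D, 2) ** 2   # safe gradient step size
z = np.zeros(256)
for _ in range(500):
    grad = D.T @ (D @ z - x)             # gradient of the reconstruction error
    z = z - step * grad
    z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

print("nonzero coefficients:", np.count_nonzero(np.abs(z) > 1e-3))
print("reconstruction error:", np.linalg.norm(D @ z - x))
```

The appeal of neuromorphic hardware for problems like this is that the sparse, event-driven activity maps naturally onto spiking neurons, rather than onto the dense matrix multiplications shown here.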

Researchers at the University of Waterloo said they ran a real-time learning benchmark on a Loihi chip using 109 times less power than a GPU, the hardware that has typically been adapted to run such dedicated algorithms in recent years. Scaling the network up fifty-fold increased power consumption only thirty-fold, they said, while still running the algorithm in real time.
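As a quick sanity check on that scaling claim, the arithmetic below plugs in the fifty-fold and thirty-fold figures from the article; the baseline power value is an arbitrary placeholder, not a measured number.

```python
# Back-of-the-envelope check of the scaling claim above. The scale and power
# factors come from the article; the baseline power is a made-up placeholder.
base_power = 1.0          # arbitrary units for the original network's power draw
scale_factor = 50         # network made fifty times larger
power_factor = 30         # reported power increase at that scale

scaled_power = base_power * power_factor
power_per_unit_network = scaled_power / scale_factor
print(f"Power per unit of network after scaling: {power_per_unit_network:.2f}x")
# -> 0.60x: the fifty-fold larger network draws only about 60 percent as much
#    power per unit of network as the small one, the sub-linear scaling the
#    researchers highlight.
```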