Major advance in computing solves complex math problems up to 1 million times faster

Reservoir computing is already one of the most advanced and powerful types of artificial intelligence available to scientists – and now a new study shows how to make it up to a million times faster on certain tasks.

This is an exciting development for tackling some of the most complex computing challenges, from predicting how the weather will evolve to modeling fluid flow through a particular space.

It was for these kinds of problems that this resource-intensive type of computing was developed; now, the latest innovation should make it even more useful. The team behind the new study is calling their approach the next generation of reservoir computing.

“We can perform very complex information processing tasks in a fraction of the time using much less computing resources compared to what reservoir computing can currently do,” says physicist Daniel Gauthier of Ohio State University.

“And reservoir computing was already a significant improvement over what was previously possible.”

Reservoir computing is based on the idea of neural networks – machine learning systems inspired by the functioning of living brains – which are trained to spot patterns in large amounts of data. Show a neural network a thousand images of a dog, for example, and it should become accurate enough to recognize a dog the next time one appears.

The details of the additional power that reservoir computing provides are quite technical. Essentially, the process feeds information into a “reservoir”, where the data points are connected in a variety of ways. The information is then sent out of the reservoir, analyzed, and fed back into the learning process.

This makes the whole process faster in certain ways and better suited to learning sequences. But it also relies heavily on random processing, which means that what is going on inside the reservoir isn’t clear. To use an engineering term, it’s a “black box” – it usually works, but nobody is quite sure how or why.
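The loop described above – random reservoir, linear readout, warm-up period – can be sketched as a minimal echo state network, a common form of reservoir computer. Everything here (the toy sine-prediction task, the reservoir size, the weight scaling, the washout length) is an illustrative assumption, not the setup used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 60, 3000)
u = np.sin(t)                      # input sequence
y_target = np.roll(u, -1)          # target: the next value

n_res = 200                        # reservoir size (assumed hyperparameter)
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))        # random input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))       # random recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius below 1

# Drive the reservoir and collect its states; the first `washout` steps
# are the "warm-up" period discarded before training.
washout = 200
x = np.zeros(n_res)
states = []
for step in range(len(u)):
    x = np.tanh(W @ x + W_in @ u[step:step + 1])
    states.append(x.copy())
states = np.array(states)

# Only the linear readout is trained (ridge regression);
# the random reservoir itself is never modified.
X = states[washout:-1]
Y = y_target[washout:-1]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)

pred = X @ W_out
print("RMSE:", np.sqrt(np.mean((pred - Y) ** 2)))
```

The randomly generated `W` is the “black box”: the network works once the readout is fitted, but the internal random connectivity is not interpretable.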

In the newly published research, reservoir computers are made more efficient by removing that randomization. Mathematical analysis was used to determine which parts of a reservoir computer are actually critical to its operation and which are not; getting rid of the redundant parts speeds up processing.

One of the parts examined was the “warm-up” period: the phase in which the neural network is fed data to prepare it for the task it is meant to perform. Here the research team made significant improvements.

“For our next-generation reservoir computing, there is almost no warm-up time required,” said Gauthier.

“Currently, scientists have to put in 1,000 or 10,000 data points or more to warm it up. And that’s all data that’s lost, that isn’t needed for the actual work. We only have to put in one or two or three data points.”
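The near-zero warm-up Gauthier describes can be illustrated with a sketch in the spirit of the next-generation approach, which replaces the random reservoir with explicit features built from a few time-delayed samples and their products. The task, the number of delays, and the feature choices below are assumptions for illustration, not the paper’s exact configuration:

```python
import numpy as np

# Same toy task as a conventional reservoir computer might face:
# one-step-ahead prediction of a sine wave.
t = np.linspace(0, 60, 3000)
u = np.sin(t)

k = 2  # number of past samples per feature vector -- the only "warm-up" needed

def features(u, i, k):
    """Linear part: the last k samples; nonlinear part: their pairwise products."""
    lin = u[i - k + 1:i + 1]
    quad = np.outer(lin, lin)[np.triu_indices(k)]
    return np.concatenate(([1.0], lin, quad))

# Build the design matrix: each row needs only k data points of history,
# instead of a long reservoir warm-up.
X = np.array([features(u, i, k) for i in range(k - 1, len(u) - 1)])
Y = u[k:]  # next-step targets

# As before, the readout is fitted with ridge regression --
# but there is no random reservoir left in the model at all.
ridge = 1e-8
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y)

pred = X @ W_out
print("RMSE:", np.sqrt(np.mean((pred - Y) ** 2)))
```

Because every quantity in the model is an explicit function of a handful of recent data points, there is no black box: each fitted weight can be read off directly.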

A particularly difficult forecasting task was completed in less than a second on a standard desktop computer using the new system. With current reservoir computing technology, the same task takes much longer, even on a supercomputer.

Depending on the data, the new system turned out to be between 33 and 163 times faster. When the focus of the task was shifted to prioritize accuracy, the updated model was a million times faster.

This is just the beginning for this kind of super-efficient neural network, and the researchers behind it hope to test it on more difficult tasks in the future.

“What’s exciting is that this next generation of reservoir computing takes what was already very good and makes it significantly more efficient,” said Gauthier.

The research was published in Nature Communications.
