Researchers Present Machine Learning-Based Algorithm for Efficient Approximation of Matrix Multiplications

Matrix multiplication is one of the most important and computationally demanding operations in machine learning. As a result, there has been extensive research on approximating matrix multiplications efficiently. In this study, the researchers develop a learning-based algorithm that substantially outperforms current approaches. Based on experiments using hundreds of matrices from various domains, it runs 100 times faster than exact matrix products and ten times faster than the most popular approximate algorithms. The approach also has the intriguing property of requiring zero multiply-add operations when one matrix is known in advance, which is a common scenario.

These results suggest a more promising building block for machine learning than the sparse, factorized, and scalar-quantized matrix products that have recently been the subject of intensive research and hardware investment: a combination of hashing, averaging, and byte shuffling. The central difficulty is minimizing the computation time needed to approximate linear operations at a given level of accuracy. This situation arises naturally in machine learning and data mining when we have a data matrix A whose rows are samples and want to apply a linear operator B to those samples. B can be, among other things, a linear classifier, a linear regressor, or an embedding matrix.

As a practical illustration, consider approximating a softmax classifier trained to predict image labels from embeddings produced by a neural network. Here, the rows of A are the embeddings for each image, while the columns of B are the weight vectors for each class. Classification is accomplished by computing the product AB and taking the argmax within each row of the result. On the CIFAR-10 and CIFAR-100 datasets, the researchers' technique outperformed its best competitors. The results are shown in the figure below.
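The exact operation being approximated can be sketched in a few lines of NumPy. The shapes below (N images, D-dimensional embeddings, C classes) are illustrative placeholders, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: N image embeddings of dimension D, C classes.
N, D, C = 10_000, 512, 10
A = rng.standard_normal((N, D))   # rows of A: one embedding per image
B = rng.standard_normal((D, C))   # columns of B: one weight vector per class

# Exact classification: compute the product AB, then take the argmax
# within each row of the result.
logits = A @ B                    # shape (N, C)
labels = logits.argmax(axis=1)    # predicted class index per image
```

It is this `A @ B` step that approximate matrix multiplication (AMM) methods replace with something cheaper.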


Instead, MADDNESS applies a nonlinear preprocessing function and reduces the problem to table lookups. Moreover, when B is known in advance, such as when applying a trained linear model to new data, MADDNESS requires no multiply-add operations at all. The approach is more closely related to vector similarity search methods; but instead of using an expensive quantization function involving many multiply-adds, the researchers introduce a family of quantization functions that require no multiply-adds.
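To see how table lookups can stand in for a matrix product, here is a minimal product-quantization-style sketch. This is the general lookup idea only, not the MADDNESS algorithm itself: MADDNESS learns a hash tree so that even the encoding step needs no multiplies, whereas this toy version encodes with nearest-prototype search. All names and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: N rows of A, dimension D split into M subspaces of width S.
N, D, M, K = 256, 32, 4, 16   # K prototypes per subspace
S = D // M
A = rng.standard_normal((N, D))
B = rng.standard_normal((D, 8))

# "Training": pick K prototypes per subspace (here, sampled rows of A).
protos = A[rng.choice(N, K, replace=False)]          # (K, D)
protos = protos.reshape(K, M, S).transpose(1, 0, 2)  # (M, K, S)

# Encode: map each subvector of each row to its nearest prototype id.
codes = np.empty((N, M), dtype=np.uint8)
for m in range(M):
    sub = A[:, m * S:(m + 1) * S]                    # (N, S)
    d2 = ((sub[:, None, :] - protos[m][None]) ** 2).sum(-1)
    codes[:, m] = d2.argmin(1)

# Once B is known, precompute lookup tables:
# tables[m, k, j] = protos[m, k] . B[subspace m, column j]
tables = np.einsum('mks,msj->mkj', protos, B.reshape(M, S, -1))

# Approximate AB by summing table entries -- no multiplies at query time.
approx = sum(tables[m, codes[:, m]] for m in range(M))
exact = A @ B
```

The quality of the approximation depends entirely on how well the prototypes cover the data, which is what the learned quantization functions in the paper are designed to do cheaply.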

The contributions of the document can be summarized as follows:

  1. A family of learned vector quantization functions fast enough to encode over 100 GB of data per second in a single CPU thread.
  2. A low-bitwidth integer summation technique that avoids upcasting, saturation, and overflow.
  3. An approximate matrix multiplication (AMM) method built on these functions. Experiments on hundreds of diverse matrices show that the approach significantly outperforms existing alternatives, and it comes with theoretical quality guarantees.
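As an aside on point 2, one standard bit trick illustrates how two 8-bit integers can be averaged without upcasting to a wider type or overflowing. This is a sketch of the general idea, not the paper's exact implementation, and the function name is hypothetical:

```python
import numpy as np

def avg_u8(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Overflow-free elementwise average of two uint8 arrays.

    (x & y) + ((x ^ y) >> 1) equals floor((x + y) / 2) but never
    exceeds 8 bits at any intermediate step, unlike (x + y) // 2,
    which would wrap around in uint8 arithmetic.
    """
    return (x & y) + ((x ^ y) >> 1)
```

Naively summing many low-bitwidth lookup results would require upcasting; tricks in this spirit let the summation stay in narrow integer registers, which is where much of the speed comes from.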

The key empirical finding is that the proposed MADDNESS approach offers speedups of one order of magnitude over existing AMM methods and of up to two orders of magnitude over the exact-multiplication baseline. It can also compress matrices by up to three orders of magnitude. These results were measured on a CPU and apply only when a training set exists for one of the matrices. The researchers claim superior performance only when one matrix is larger than the other and both matrices are tall; this is the regime in which the fast (but less accurate) encoding function is advantageous.

When the larger matrix is known in advance, the approach loses its advantage; this assumption is common in similarity search, and it eliminates the need for a fast encoding function. Approximate integer summation and fused table lookups are probably usable even without these assumptions, but establishing that is left for future work. The MADDNESS source code is freely available on GitHub.

This article is a summary written by Marktechpost staff based on the paper 'Multiplying Matrices Without Multiplying'. All credit for this research goes to the researchers on this project. Check out the paper and the GitHub repository.

