CPUs may replace GPUs for training Deep Learning Models
CPUs can be four to fifteen times faster than GPUs at training Deep Learning models using new methods being developed at Rice University. This is almost the exact opposite of the current situation.

In rather startling news, researchers at Rice University have shown that deep learning models can be trained four to fifteen times faster on CPUs than on GPUs. Conventional deep learning training relies on high-performance dense matrix multiplication, a workload at which GPUs excel. The new approach instead replaces most of that matrix multiplication with hashing and search algorithms, which map well onto CPU strengths such as large caches and fast branching.
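To make the idea concrete, here is a minimal sketch of how locality-sensitive hashing can avoid a full matrix multiplication: instead of computing every neuron's activation, the input is hashed and only the neurons whose weight vectors land in the same hash bucket are evaluated. This is an illustrative toy using a SimHash-style scheme, not the researchers' actual implementation; all names and parameters here are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layer: 10,000 output neurons, 128-dimensional input.
n_neurons, dim = 10_000, 128
W = rng.standard_normal((n_neurons, dim))

# SimHash: a set of random hyperplanes; a vector's bucket id is the
# pattern of signs of its projections onto those hyperplanes.
n_bits = 12
planes = rng.standard_normal((n_bits, dim))

def simhash(vectors):
    """Hash rows of `vectors` (shape (k, dim)) to integer bucket ids."""
    bits = (vectors @ planes.T) > 0            # (k, n_bits) sign pattern
    return bits @ (1 << np.arange(n_bits))     # pack bits into an integer

# Build the hash table once: bucket id -> list of neuron indices.
# (In training this would be rebuilt periodically as weights change.)
table = {}
for idx, bucket in enumerate(simhash(W)):
    table.setdefault(int(bucket), []).append(idx)

def active_neurons(x):
    """Return only the neurons that collide with the input's bucket."""
    return np.array(table.get(int(simhash(x[None, :])[0]), []), dtype=int)

x = rng.standard_normal(dim)
active = active_neurons(x)

# Compute activations for the small active set only,
# instead of a dense product against all 10,000 rows of W.
sparse_out = W[active] @ x
```

The lookup touches only a handful of the 10,000 neurons per input, which is the source of the speedup: hash-table lookups and sparse updates are cheap on CPUs, while the dense alternative would multiply the input against every weight row.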

One important implication is that lower-cost commodity cloud computing (e.g., regular AWS EC2 instances) could be used to train deep learning models faster and more cheaply. A large language model that cost $60,000 to train last year might cost on the order of $500 to train next year, once the speed improvements and cheaper hardware are factored in. We will be working with this technology at East Agile as it becomes ready for production.

This innovation is demonstrated by Anshumali Shrivastava and Shabnam Daghaghi of Rice University. See https://arxiv.org/pdf/2103.10891.pdf

Tags
AI