Deep Learning

PLASTER: How To Measure Deep Learning Performance

September 12, 2018
5 min read

PLASTER: Addressing the Seven Major Challenges for Enabling AI-Based Services

Over the past decade, we have witnessed a major shift from the Age of Information to the Age of Artificial Intelligence. Artificial Intelligence (AI) was long confined within the boundaries of academia, but innovation in the space (while still driven by academic research) is now being pushed forward by corporate R&D labs and bold, quick-moving startups. For AI to reach mainstream adoption in the marketplace, organizations of all sizes need clear direction on how to use it to make better decisions.

Motivated by this push to make AI practical, NVIDIA has formulated a framework for deep learning: PLASTER. The framework serves as a checklist for organizations that want to adopt AI successfully. The criteria mandated by PLASTER ensure the sustainability of a product throughout its life-cycle, and the life-cycle itself inspired the following seven criteria:

What is PLASTER?

1- Programmability: The pipeline for developing deep learning models starts with code that automates training. Hyperparameter optimization should also be automated, for example with grid search or Bayesian optimization (see the sketch after this list). A framework should therefore be flexible enough both to design models and to automate the training, testing, and optimization cycle.
2- Latency: The system should return inference results within a handful of milliseconds, since response time must meet user expectations for a good user experience. According to the whitepaper (1), Google has stated that 7 milliseconds is an optimal latency target for image- and video-based uses; the second sketch below shows one way to measure against such a budget.
3- Accuracy: For some applications, such as medical diagnosis, even a small margin of error is unacceptable. Other applications, such as self-driving cars and drones, should in theory be error-free, even though this may not be technically feasible. Depending on the application, then, the accuracy of the model is critical.
4- Size of model: Most modern deep learning models contain millions (or even billions) of parameters, and their size is determined by factors such as the number of layers, the nodes and computation per layer, and the number of connections per layer. Model size is roughly proportional to the compute and physical networking resources needed to run inference.
5- Throughput: As a business scales out, its user base grows rapidly, and the number of concurrent users demands a system that can serve as many inference requests as possible.
6- Energy efficiency: Power consumption is a factor that cannot be ignored. Economic considerations, such as the price per watt of power consumed, and broader concerns, such as curbing global warming, should also be taken into account.
7- Rate of learning: Most deep learning models are trained offline. However, some applications, such as algorithmic trading in finance, require the model to learn from real-time data streams and react to changes in the global market.
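To make the programmability point concrete, here is a minimal sketch in Python of automating a grid search over training hyperparameters. The train_and_evaluate function and the search-space values are hypothetical placeholders for your own training code and settings, not part of any particular framework.

# Minimal grid-search sketch; train_and_evaluate is a hypothetical stand-in
# for code that trains a model with the given settings and returns a
# validation score.
from itertools import product

search_space = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "batch_size": [32, 64, 128],
    "dropout": [0.1, 0.3, 0.5],
}

def grid_search(train_and_evaluate):
    best_score, best_params = float("-inf"), None
    keys = list(search_space)
    for values in product(*(search_space[k] for k in keys)):
        params = dict(zip(keys, values))
        score = train_and_evaluate(**params)  # e.g. validation accuracy
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score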

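For the latency and throughput points, the sketch below times repeated inference calls against a latency budget and derives throughput from the same measurement. The model and batch arguments are hypothetical placeholders for your own model and input data; the 7 ms budget echoes the Google figure cited above.

# Minimal latency/throughput measurement sketch with hypothetical
# `model` and `batch` placeholders.
import time

def measure(model, batch, budget_ms=7.0, runs=100):
    model(batch)  # warm-up so one-time costs don't skew the numbers
    start = time.perf_counter()
    for _ in range(runs):
        model(batch)
    elapsed = time.perf_counter() - start
    latency_ms = elapsed / runs * 1000.0
    throughput = runs * len(batch) / elapsed  # samples per second
    print(f"latency: {latency_ms:.2f} ms (budget {budget_ms} ms), "
          f"throughput: {throughput:.0f} samples/s")
    return latency_ms <= budget_ms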


Scalability of Deep Learning

Extreme learning is a term coined for scaling deep learning to such demanding edge cases. For AI to make its way into real-world applications, these issues must be handled appropriately; otherwise, AI-based systems will never gain traction in the market.

While some deep learning models classify among millions of categories, as in NLP and e-commerce image classification, others depend on an ensemble of models to produce their inference results. In both cases, the system demands high computational power and storage resources, which PLASTER explicitly addresses.
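As an illustration of the ensemble case, here is a minimal sketch that averages class probabilities from several models and picks the highest-scoring class. It assumes each model exposes a scikit-learn-style predict_proba method; the models and inputs themselves are hypothetical.

# Minimal ensemble-inference sketch: average probabilities, then argmax.
import numpy as np

def ensemble_predict(models, x):
    probs = np.mean([m.predict_proba(x) for m in models], axis=0)
    return probs.argmax(axis=-1)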

In Conclusion

The PLASTER framework, coined by NVIDIA, lays out the seven pillars necessary to implement AI successfully. While NVIDIA has excelled in manufacturing hardware specialized for deep learning, Google and Microsoft have also recently invested heavily in developing their own deep learning chips, and small startups are appearing by the dozens; the race for AI superiority has, in a way, created a "hardware revival". PLASTER may become the holy grail of deep learning development over the coming months and years.

Any company interested in delivering AI-based services should consider every point of PLASTER; many have failed to scale their products because they fell short on one or more of the metrics listed above. It is also extremely important to consult domain experts on the problem at hand, since some metrics can be relaxed for specialized applications. For example, a high-profile private security company may tolerate false positives, while false negatives would be disastrous: triggering a false alarm on an innocent individual is acceptable, but allowing a blacklisted individual to walk in is not.
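To illustrate that trade-off, the sketch below shifts a decision threshold on hypothetical match scores: lowering the threshold flags more people, which reduces missed detections (false negatives) at the cost of more false alarms (false positives). The scores and threshold values are made up for illustration.

# Minimal decision-threshold sketch with made-up match scores.
import numpy as np

def flag_blacklisted(match_scores, threshold=0.5):
    # match_scores: model confidence that each person is on the blacklist
    return np.asarray(match_scores) >= threshold

scores = [0.2, 0.45, 0.7, 0.95]
print(flag_blacklisted(scores, threshold=0.5))  # fewer false alarms, more misses
print(flag_blacklisted(scores, threshold=0.3))  # more false alarms, fewer misses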

Exxact Deep Learning Systems Featuring NVIDIA GPUs


Exxact Deep Learning workstations and servers deliver optimal performance for today's deep learning models. All of our GPU solutions are fully turnkey, so you can start working quickly. The Exxact Deep Learning Development Box (DevBox), powered by state-of-the-art NVIDIA GPUs, is one of our best-selling deep learning offerings.

Have any questions about deep learning performance or our systems? Contact us directly here.

References:

(1) "PLASTER: A Framework for Deep Learning Performance" - Tirias Research 2018
