Researchers accelerate sparse inference on XNNPACK and TensorFlow Lite for real-time apps
Mar 10, 2021 01:20 EST
As the Universal Approximation Theorem shows, neural networks can approximate virtually any function. This allows them to capture hidden patterns in data and produce more accurate and robust models for a wide variety of tasks. However, a big caveat is that neural networks tend to grow quickly as the complexity of the task at hand increases, and these large networks naturally require substantial computational power.
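To make that scaling concrete, here is a minimal Python sketch (the helper and layer sizes are our own illustration, not from the article) that counts the parameters of a small fully connected network, showing how quickly the count grows with layer width:

```python
# Hypothetical helper: count weights + biases of a dense MLP to illustrate
# how model size grows with task complexity (layer widths are examples).
def mlp_param_count(layer_sizes):
    """Total parameters for a fully connected network with the given widths."""
    return sum(n_in * n_out + n_out                 # weights + biases per layer
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

small = mlp_param_count([784, 256, 10])  # a modest MNIST-sized classifier
large = mlp_param_count([784, 512, 10])  # doubling the hidden width

print(small)  # 203530
print(large)  # 407050 -- roughly double, dominated by the first weight matrix
```

Even this toy two-layer network already has hundreds of thousands of parameters, which is why running larger models on phones calls for the kind of optimization the article describes.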
To this end, researchers have been working to optimize large neural networks so that they can run on smaller devices such as smartphones and less powerful computers. Inference frameworks like TensorFlow Lite, together with the XNNPACK ML acceleration library, specialize in this task: they optimize machine learning models to run on a variety of devices by finding a sweet spot between model size,