Optimization for faster edge inference

Faster inference for edge devices without compromising the model accuracy

Smart but slow

Neural networks not only need to solve complex problems; most of the time they also need to run fast. In this project the customer already had a large neural network that worked well for their use case, but it required powerful hardware, and they needed our help to improve its inference speed on edge devices.

Optimized for speed

We helped the customer leverage the latest hardware acceleration available on the target edge devices, together with quantization, to greatly improve the inference speed.
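To illustrate the kind of technique involved, here is a minimal sketch of post-training affine (asymmetric) 8-bit quantization: float weights are mapped to integer levels plus a scale and zero point, shrinking the model and enabling fast integer arithmetic. This is an illustrative example, not the customer's actual implementation; all names are hypothetical.

```python
def quantize(weights, num_bits=8):
    """Map float weights to unsigned integer levels plus (scale, zero_point)."""
    qmin, qmax = 0, 2 ** num_bits - 1
    # Include 0.0 in the range so it is exactly representable after quantization.
    w_min = min(min(weights), 0.0)
    w_max = max(max(weights), 0.0)
    scale = (w_max - w_min) / (qmax - qmin) or 1.0  # avoid zero scale
    zero_point = round(qmin - w_min / scale)
    q = [max(qmin, min(qmax, round(w / scale + zero_point))) for w in weights]
    return q, scale, zero_point


def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the integer representation."""
    return [(qi - zero_point) * scale for qi in q]


weights = [-0.62, 0.10, 0.37, 1.28]
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)
```

Each weight is stored in one byte instead of four, and the reconstruction error per weight is bounded by half the quantization scale, which is why accuracy can be preserved.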

Blazingly fast

The project resulted in much faster inference and a smaller model, without compromising accuracy. The customer can now run the model on the target edge devices and deliver the same experience as when the model ran on much more powerful hardware.
