New AI-driven optimisation tech for deep neural networks
Tue, 15th Mar 2022

ENOT, a developer of neural network tools, has released AI-driven optimisation technology for deep neural networks.

The technology has been developed for AI developers and edge AI companies. The company says integrating its framework makes deep neural networks faster, smaller and more energy-efficient, from cloud to edge computing. It says the technology can deliver acceleration of up to 20x and compression of up to 25x, reducing total computing resource (hardware) costs by as much as 70%.

"ENOT applies a unique neural architecture selection technology that outperforms all known methods for neural network compression and acceleration," says ENOT founder and CEO, Sergey Aliamkin.

"ENOT's framework has a simple-to-use Python API that can be quickly and easily integrated within various neural network training pipelines to accelerate and compress neural networks."

The framework automates the search for the optimal neural network architecture, taking latency, RAM and model size constraints for different hardware and software platforms into account. Its architecture search engine automatically selects the best possible architecture from millions of options, varying parameters such as input resolution, network depth, operation type, activation type, number of neurons at each layer, and bit-width for the target inference hardware.
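
To illustrate the idea, here is a minimal, hypothetical sketch of what a hardware-aware architecture search over those parameters could look like. The module structure, names and cost formulas below are illustrative assumptions only; they are not ENOT's actual Python API.

```python
# Hypothetical sketch of a hardware-aware neural architecture search.
# All names, numbers and cost formulas are illustrative assumptions,
# not ENOT's actual API or methodology.
import random

# Search space over the parameters mentioned in the article.
SEARCH_SPACE = {
    "input_resolution": [96, 128, 160, 224],          # square input size in pixels
    "depth": list(range(8, 17)),                       # number of layers
    "operation": ["conv3x3", "conv5x5", "depthwise_conv"],
    "activation": ["relu", "hswish"],
    "width": [16, 32, 64, 128],                        # neurons/channels per layer
    "bit_width": [4, 8, 16],                           # quantisation for target hardware
}

# Deployment constraints for a hypothetical edge target.
CONSTRAINTS = {"latency_ms": 20.0, "ram_mb": 64.0, "model_size_mb": 4.0}

def sample_candidate():
    """Draw one random architecture from the search space."""
    return {key: random.choice(values) for key, values in SEARCH_SPACE.items()}

def estimate_cost(arch):
    """Very rough proxy costs; a real system would measure on the device."""
    pixels = arch["input_resolution"] ** 2
    params = arch["depth"] * arch["width"] ** 2
    return {
        "latency_ms": pixels * arch["depth"] * 1e-5,
        "ram_mb": pixels * arch["width"] * 4e-6,
        "model_size_mb": params * arch["bit_width"] / 8 / 1e6,
    }

def satisfies(costs):
    return all(costs[key] <= CONSTRAINTS[key] for key in CONSTRAINTS)

# Trivial random search: keep the highest-capacity architecture that fits the budget.
best = None
for _ in range(10_000):
    arch = sample_candidate()
    if satisfies(estimate_cost(arch)):
        score = arch["depth"] * arch["width"]           # crude capacity proxy
        if best is None or score > best[0]:
            best = (score, arch)

print("Best candidate under constraints:", best[1] if best else "none found")
```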

The ENOT framework is aimed at companies that run neural networks on edge devices, in sectors such as:

  • Electronics
  • Healthcare
  • Oil and gas
  • Autonomous driving
  • Cloud computing
  • Telecom
  • Mobile apps
  • Internet of Things (IoT)
  • Robotics

The company has completed more than 20 pilot projects in which it optimised neural networks for several global tech giants, as well as a dozen medium-sized OEMs and AI companies.

"ENOT delivered 13.3x acceleration of a neural network used by a top-three smartphone manufacturer as part of the image enhancement process," says Aliamkin.

"The optimisation reduced the neural network depth from 16 to 11 and reduced the input resolution from 224x224 pixels to 96x96 pixels, yet there was practically zero loss of accuracy."

He says another project with the same company delivered 5.1x acceleration for a photo denoising neural network, with an almost imperceptible change in image quality, even though the network had already been manually optimised. He says this translates to faster processing and significantly extended battery life for end-users.

"Today, when neural networks are widely used in production and applications. Neural networks should be more effective in the consumption of computational resources and affordable. Their implementation should be faster, better and cheaper for all," says Aliamkin.

"ENOT is at the forefront of next-level AI optimisation, helping bring fast, real-time levels of AI advancement through high throughput data into reality as science fact, and our journey has only just begun with examples such as the Weedbot laser weeding machine that gained a 2.7 times thanks to the ENOT framework."