HCL 'Optimized AI on the Edge' Solution
Running predictions on a live video feed using a Deep Neural Network (DNN) model can be time-consuming. It involves processing the incoming stream, image segmentation, and object detection. Each prediction takes additional compute time, which impacts the real-time execution of the application. Using the Intel® Distribution of OpenVINO™ toolkit, the pre-trained DNN model is optimized for maximum performance on edge devices powered by Intel processors (CPU/VPU/FPGA). The substantial improvement in inference time accelerates the various steps of the video-processing pipeline. An object detection model based on SSD InceptionV2 is optimized and run on edge devices, speeding up inference while retaining the model's mAP. The optimized model can be deployed on low-power hardware such as IoT devices and cameras running on Intel hardware. Industrial use cases such as ADAS and surveillance can benefit from this.
*Please note that member solutions are often customizable to meet the needs of individual enterprise end users.
- Optimized Deep Learning model using OpenVINO
Convert the native neural network representation into an inference-ready Intermediate Representation (IR) optimized for the underlying compute hardware.
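As a rough sketch of this conversion step, the OpenVINO Model Optimizer (`mo`) can convert a TensorFlow Object Detection API model such as SSD InceptionV2 into IR form. The file names and output directory below are placeholders, not artifacts named in this brief:

```shell
# Sketch: convert a TensorFlow SSD InceptionV2 frozen graph to OpenVINO IR.
# Paths are placeholders; flags follow the OpenVINO Model Optimizer CLI for
# TensorFlow Object Detection API models.
mo \
  --input_model frozen_inference_graph.pb \
  --tensorflow_object_detection_api_pipeline_config pipeline.config \
  --transformations_config ssd_v2_support.json \
  --data_type FP16 \
  --output_dir ir_model/
```

The result is an `.xml` topology file plus a `.bin` weights file that the OpenVINO runtime loads on the target device; `--data_type FP16` is a common choice when targeting VPUs.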
- Deployment on processor-specific hardware
Deploy the DL model onto edge devices powered by Intel hardware such as CPUs, FPGAs, and Movidius VPUs.
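One way to illustrate device targeting (the IR path below is a placeholder): OpenVINO ships a `benchmark_app` tool whose `-d` flag selects the device plugin, so the same IR can be exercised on different Intel hardware:

```shell
# Sketch: run the same converted IR on different Intel targets.
# The -d flag picks the device plugin; the model path is a placeholder.
benchmark_app -m ir_model/frozen_inference_graph.xml -d CPU     # Intel CPU
benchmark_app -m ir_model/frozen_inference_graph.xml -d MYRIAD  # Movidius VPU
```

Because the IR is hardware-agnostic, switching targets is a deployment-time choice rather than a model change.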
- Video process improvement
Improve the latency and throughput of the prediction step, reducing the end-to-end execution time of the processing pipeline.
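To make the latency/throughput claim concrete, here is a minimal, self-contained sketch of how per-frame latency and throughput of a prediction step could be measured; `predict` is a hypothetical stand-in for the optimized inference call, and the frames are synthetic:

```python
import time
from statistics import mean

def predict(frame):
    """Hypothetical stand-in for the optimized DNN inference call."""
    return sum(frame) % 2  # dummy "detection" result

def benchmark(frames):
    """Return (mean per-frame latency in seconds, throughput in frames/sec)."""
    latencies = []
    start = time.perf_counter()
    for frame in frames:
        t0 = time.perf_counter()
        predict(frame)
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return mean(latencies), len(frames) / elapsed

frames = [[i, i + 1, i + 2] for i in range(100)]  # synthetic "video frames"
latency, fps = benchmark(frames)
print(f"mean latency: {latency * 1e6:.1f} us, throughput: {fps:.0f} fps")
```

Running the same harness before and after OpenVINO optimization is a simple way to quantify the inference-time improvement on a given device.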