OpenVINO™ Model Server Boosts AI Inference Operations

This post illustrates how the OpenVINO™ Model Server extends workloads across Intel® hardware and maximizes inference performance across computer vision accelerators: CPUs, integrated GPUs, Intel® Movidius™ VPUs, and Intel® FPGAs.
Categories: Compute - Intel® Xeon® Processors, Intel® Stratix® 10 FPGA, Intel® Movidius™ Myriad™ X VPU | Framework Optimizations - MXNet, TensorFlow, Caffe | Tools - Intel® OpenVINO™ Toolkit | Topology - ResNet-50 | Workload - Inference, Batch Learning