A high barrier to software development, performance bottlenecks, and power-management issues are slowing the deployment of FPGA technology in large-scale AI business applications. Inspur aims to meet these challenges with its TF2 Compute Acceleration Engine, built on the Intel® Arria® 10 FPGA. TF2 is an FPGA computing acceleration engine that supports TensorFlow, helping AI customers quickly deploy deep neural network (DNN) inference on FPGAs from models built with mainstream AI training software. The engine delivers high performance and low latency for AI applications through what Inspur describes as the world's first DNN shift-computing technology on FPGAs.

The solution is powered by the Inspur F10A FPGA card with the Intel® Arria® 10 FPGA, running the TF2 computing acceleration engine. The F10A is the world's first half-height, half-length FPGA accelerator card to support that Intel® chip.

Running the SqueezeNet model on the Inspur card demonstrates the TF2 engine's computational performance. SqueezeNet is a streamlined convolutional neural network architecture whose accuracy is comparable to AlexNet's, making it especially well suited to image-based AI applications with strict real-time requirements.

See the full Inspur TF2 demo in person at One Intel Station (OIS) at SC18 in Dallas, Texas, from November 11-14, 2018, or at Intel® AI DevCon (AIDC) in Beijing, China, from November 14-15, 2018.
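The details of Inspur's shift-computing technology are not described here, but the general idea behind shift-based DNN inference is to quantize weights to signed powers of two so that each multiply-accumulate reduces to a bit shift, which maps cheaply onto FPGA logic. The Python sketch below illustrates only that general principle; the function names and the quantization scheme are illustrative assumptions, not Inspur's actual implementation.

```python
# Illustrative sketch of shift-based multiplication (an assumption about
# how shift computing works in general, NOT Inspur's TF2 implementation).
import math

def quantize_to_power_of_two(w: float) -> tuple:
    """Return (sign, exponent) such that w is approximated by sign * 2**exponent."""
    if w == 0.0:
        return (0, 0)
    sign = 1 if w > 0 else -1
    exponent = round(math.log2(abs(w)))  # nearest power-of-two exponent
    return (sign, exponent)

def shift_multiply(x: int, sign: int, exponent: int) -> int:
    """Multiply an integer activation x by sign * 2**exponent using only shifts."""
    if sign == 0:
        return 0
    shifted = x << exponent if exponent >= 0 else x >> -exponent
    return sign * shifted

# Example: the weight 0.25 quantizes to (1, -2), so multiplying the
# activation 12 by it becomes a right shift by 2: 12 >> 2 == 3.
sign, exp = quantize_to_power_of_two(0.25)
print(shift_multiply(12, sign, exp))  # prints 3
```

On an FPGA, replacing multipliers with shifts like this frees up DSP resources and reduces power, which is consistent with the performance and latency benefits the post attributes to the technique.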
Inspur FPGA Acceleration Solution: Real-time Object Detection at SC18 and AIDC Beijing