GIBots Accounts Payable Automation solution uses advanced RPA, OCR, NLP and ML methods for image, text and video analysis. This paper discusses how GIBots realizes 1.75X inference performance gains by leveraging XGBoost optimized for Intel Architecture to help customers achieve better ROI.

Course5 Discovery: An Augmented Analytics Platform Optimized using OpenVINO™ Execution Provider for ONNX Runtime with Cnvrg.io
In this paper, we cover the optimization of Course5 Discovery's Natural Language Understanding (NLU) model, deployed on Intel-powered CPU devices, for faster inference. Using the OpenVINO Execution Provider for ONNX Runtime with Cnvrg.io, a BERT-based NLU model was optimized for inference, delivering faster performance without compromising accuracy.
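As an illustration of the deployment pattern described above, the sketch below shows how an application might request the OpenVINO Execution Provider from ONNX Runtime and fall back to the default CPU provider when it is unavailable. `select_providers` is a hypothetical helper, not part of Course5's pipeline; the only ONNX Runtime APIs assumed are `get_available_providers()` and the `providers` argument to `InferenceSession`.

```python
# Sketch: prefer the OpenVINO Execution Provider when ONNX Runtime
# exposes it, falling back to the default CPU provider otherwise.
# select_providers() is a hypothetical helper for this illustration.

def select_providers(preferred, available):
    """Return the preferred providers that are actually available, in
    order, always keeping the plain CPU provider as a last resort."""
    chosen = [p for p in preferred if p in available]
    if "CPUExecutionProvider" not in chosen:
        chosen.append("CPUExecutionProvider")
    return chosen

if __name__ == "__main__":
    try:
        import onnxruntime as ort
        providers = select_providers(
            ["OpenVINOExecutionProvider"], ort.get_available_providers()
        )
        # A session for a BERT-style NLU model would then be created as:
        # session = ort.InferenceSession("model.onnx", providers=providers)
        print(providers)
    except ImportError:
        # onnxruntime not installed; the selection logic itself is pure.
        print(select_providers(["OpenVINOExecutionProvider"],
                               ["CPUExecutionProvider"]))
```

Listing the CPU provider last means inference still runs (unaccelerated) on machines without the OpenVINO package installed.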

Huiying Medical AI-Enabled Bone Fracture Detection Runs Faster on 3rd Generation Intel® Xeon® Scalable Processor
The Huiying Medical Bone Fracture Detection Solution uses deep learning algorithms to examine patients' X-ray images, helping doctors quickly identify and locate fractures and apply medical treatment. Running on Intel® Xeon® processors and using the Intel oneAPI Deep Neural Network Library (oneDNN), Huiying Medical's bone fracture detection solution delivers fast inference performance with high throughput.
Sayint, Tech Mahindra’s Conversational AI platform, uses NLP to conversationally engage with customers to automate business processes and functions. Working with Intel, Tech Mahindra developed text-to-speech (TTS) models that perform faster and scale well across the range of Intel solutions, from entry-level desktop processors to Intel® Xeon® Scalable processors.
Intel is developing the industry’s first Ethernet-based, intelligent network switch that can help eliminate network communication bottlenecks, speeding up machine learning model training by up to 2.27x.
KFBIO’s AI-based digital imaging analysis uses convolutional neural networks (CNNs) to train its thyroid nodule detection algorithms; the trained algorithm then inferences a patient’s scan to provide intelligent diagnostic assistance to clinicians. With Intel AI software optimization on Intel architecture, KFBIO improved inference performance by 21.38X.
The Silo OS Visual Quality Control Solution monitors industrial manufacturing processes in real time with computer vision to ensure product quality. It is ideally suited for detecting product defects, analyzing dimensions, and supporting quality audits, product grading, and traceability. Optimizing with the Intel Distribution of OpenVINO toolkit on 3rd Gen Intel Xeon Scalable processors produced excellent performance results.
Chest-rAi analyzes chest radiographs using machine learning techniques to identify and highlight various pulmonary and other thoracic diseases. Chest-rAi is a deep learning system blended with a traditional radiologist’s approach of systematically examining a chest radiograph. The inference pipeline for Chest-rAi is optimized for 3rd Gen Intel Xeon Scalable processors and the Intel OpenVINO toolkit.
Neural Magic enables GPU-class performance on readily available 3rd Gen Intel Xeon Scalable processors. The Neural Magic Inference Engine is runtime software that takes advantage of readily available model optimization techniques, such as pruning and sparse quantization, and the latest advances in these processors.
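Magnitude pruning, one of the sparsity techniques named above, can be sketched in a few lines: the smallest-magnitude weights are zeroed until the layer reaches a target sparsity. This is a generic illustration, not Neural Magic's implementation; `magnitude_prune` and the example weights are hypothetical.

```python
# Generic sketch of unstructured magnitude pruning: zero out the
# smallest-magnitude weights until the requested fraction is zero.

def magnitude_prune(weights, sparsity):
    """Return a copy of weights with the smallest-magnitude entries
    set to 0.0 so that roughly `sparsity` of the entries are zero."""
    n_prune = int(len(weights) * sparsity)
    # Indices of the n_prune smallest-magnitude entries.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned = list(weights)
    for i in order[:n_prune]:
        pruned[i] = 0.0
    return pruned

def sparsity_of(weights):
    """Fraction of entries that are exactly zero."""
    return sum(1 for w in weights if w == 0.0) / len(weights)

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.002, 0.3, -0.1]
pruned = magnitude_prune(w, 0.5)
print(pruned)               # [0.9, 0.0, 0.4, 0.0, -0.7, 0.0, 0.3, 0.0]
print(sparsity_of(pruned))  # 0.5
```

A sparse inference engine exploits those zeros by skipping the corresponding multiply-accumulates, which is where the CPU speedups come from.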
Natural Language Processing advancements have accelerated with the availability of powerful computing at lower costs, but training on GPUs is expensive. Lilt, an AI-powered enterprise translation software and services company, wanted to optimize training tasks using CPUs instead of GPUs. By optimizing TensorFlow on Intel Xeon 8380 processors, Lilt was able to increase inference performance by nearly 4X and deploy its workloads on Google Cloud N2 high-memory instances with Intel Optimizations for TensorFlow 2.4.
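For readers curious what "optimizing TensorFlow" on CPUs can look like in practice, one common first step is sizing the intra-op, inter-op, and OpenMP thread pools before TensorFlow is imported. The sketch below sets the environment variables that TensorFlow and the oneDNN/OpenMP runtime read; the thread counts are placeholders, not Lilt's actual configuration.

```python
# Sketch: tune TensorFlow CPU threading via environment variables.
# These must be set before `import tensorflow` for them to take effect.
import os

cores = os.cpu_count() or 1  # logical cores; a real tuning pass would
                             # typically use physical core counts

# Intra-op parallelism: threads used inside a single op (e.g. a matmul).
os.environ["TF_NUM_INTRAOP_THREADS"] = str(cores)
# Inter-op parallelism: how many independent ops may run concurrently.
os.environ["TF_NUM_INTEROP_THREADS"] = "2"
# Threads used by the OpenMP runtime backing oneDNN kernels.
os.environ["OMP_NUM_THREADS"] = str(cores)

print(os.environ["TF_NUM_INTRAOP_THREADS"],
      os.environ["TF_NUM_INTEROP_THREADS"],
      os.environ["OMP_NUM_THREADS"])
```

The right values are workload- and instance-dependent; the point is that thread-pool sizing is an environment-level knob, separate from any model changes.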