Solution Brief
Intel® AI Builders - Solution Brief
GIBots Accounts Payable Automation solution uses advanced RPA, OCR, NLP and ML methods for image, text and video analysis. This paper discusses how GIBots realizes 1.75X inference performance gains by leveraging XGBoost optimized for Intel Architecture to help customers achieve better ROI.
Categories: 
Application Type - Deep Learning | Compute - Intel® Xeon® Scalable processor | Deployment Channel - CSP - Amazon Web Services, CSP - Microsoft Azure, On-premise (Private Cloud, Other) | Industry - Finance and Insurance, Manufacturing | Intel® AI Analytics Toolkit powered by oneAPI - Intel® Distribution for Python* | Model Training - Models can't be re-trained - Inference only | Operating System - Linux | Solution Country/Region Availability - Worldwide | Solution Type - AI Software/SaaS | Topology - Other | Use Case - Document Management


White Paper
Intel® AI Builders - White Paper
In this paper, we cover optimization of Course5 Discovery’s Natural Language Understanding (NLU) model deployed on Intel-powered CPU devices for faster inference. By using the OpenVINO Execution Provider for ONNX Runtime with Cnvrg.io, the model was optimized to decrease the inference time. In this use case, a BERT Topology-based Natural Language Understanding (NLU) model was optimized for inference which gave faster performance without compromising accuracy.
Categories: 
Application Type - Machine Learning, Deep Learning | Compute - Intel® Xeon® Scalable processor, Intel® Core™ processor | Deployment Channel - CSP - Amazon Web Services, CSP - Google Cloud, CSP - Microsoft Azure, On-premise (Private Cloud, Other) | Industry - Software | Intel® AI Analytics Toolkit powered by oneAPI - Intel® Optimization for TensorFlow*, Intel® Distribution for Python* | Intel® Distribution of OpenVINO™ Toolkit powered by oneAPI - Intel® Distribution of OpenVINO™ Toolkit powered by oneAPI | Model Training - Models can be trained - data input only required | Operating System - Linux | Solution Country/Region Availability - Worldwide | Solution Type - AI Software/SaaS | Topology - RNN, BERT | Use Case - Anomaly Detection, Conversational Bots and Voice Agents, Data Analytics


Solutions Brief
Intel® AI Builders - Solutions Brief
Huiying Medical Bone Fracture Detection Solution uses DL algorithms to examine patients' X-ray images to help doctors quickly identify and locate fractures and apply medical treatment. Running on Intel® Xeon® processors and using the Intel oneAPI Deep Neural Network Library (oneDNN), Huiying Medical's bone fracture detection solution delivers fast inference performance with high throughput.
Categories: 
Application Type - Deep Learning | Compute - Intel® Xeon® Scalable processor | Deployment Channel - CSP - Other, On-premise (Private Cloud, Other) | Industry - Healthcare | Intel® AI Analytics Toolkit powered by oneAPI - Intel® Optimization for PyTorch* | Model Training - Models can't be re-trained - Inference only | Operating System - Linux | Solution Country/Region Availability - Mainland China | Solution Type - AI Software/SaaS | Topology - ResNet50, Faster RCNN | Use Case - Medical imaging, analysis and diagnostics


Solutions Brief
Intel® AI Builders - Solutions Brief
Sayint, Tech Mahindra’s Conversational AI platform, uses NLP to conversationally engage with customers to automate business processes and functions. Working with Intel, Tech Mahindra developed TTS models that perform faster and scale well across the range of Intel solutions, from entry-level desktop processors to Intel® Xeon® Scalable processors.
Categories: 
Application Type - Machine Learning, Deep Learning | Compute - Intel® Xeon® Scalable processor | Deployment Channel - CSP - Amazon Web Services, CSP - Google Cloud, CSP - Microsoft Azure, CSP - Other, On-premise (Private Cloud, Other), Hybrid Cloud | Industry - Automotive, Cross-Industry, Finance and Insurance | Intel® AI Analytics Toolkit powered by oneAPI - Intel® Optimization for PyTorch* | Model Training - Models can be trained - data input only required, Models can be trained - online learning, Models can be trained - requires labeled data | Operating System - Windows, Linux, Other (pls specify) | Solution Country/Region Availability - Worldwide | Solution Type - AI Software/SaaS | Use Case - Conversational Bots and Voice Agents, Robotic Process Automation, Data Analytics


White Paper
Intel® AI Builders - White Paper
Intel is developing the industry’s first Ethernet-based, intelligent network switch that can help eliminate network communication bottlenecks, speeding up machine learning model training by up to 2.27X.
Categories: 
Application Type - Machine Learning, Deep Learning | Compute - Intel® Xeon® Scalable processor | Deployment Channel - On-premise (Private Cloud, Other) | Industry - Manufacturing, Not for profit, Other, Professional and Business Services, Real Estate, Rental and Leasing, Retail, Software, Communications, Transportation and Warehousing, Agriculture, Arts and Entertainment, Automotive, Cross-Industry, Defense and Space, Education, Energy and Utilities, Finance and Insurance, Government, Healthcare | Model Training - Models can be trained - data input only required, Models can be trained - online learning, Models can be trained - requires labeled data | Operating System - Linux | Solution Country/Region Availability - Worldwide | Topology - ResNet50, InceptionV3, SSD-VGG16, GNMT, SSD, Deep Speech 2, NMT, LSTM, Proprietary, RNN, VGG-19, BERT, Faster RCNN, MobileNet, Other, Unet, Yolo | Use Case - Factory Automation, Image / Object Detection / Recognition / Classification, Anomaly Detection, Conversational Bots and Voice Agents, Drug Discovery, Facial Detection / Recognition / Classification, Medical imaging, analysis and diagnostics, Other (pls specify), Predictive maintenance and analytics, Robotic Process Automation, Smart City, Video Surveillance and Analytics, Content generation, Data Preparations and Management, Document Management, Data Analytics


Solutions Snapshot
Intel® AI Builders - Solutions Snapshot
KFBIO’s AI-based digital imaging analysis uses convolutional neural networks (CNNs) to train its thyroid nodule detection algorithms. A patient’s scan is then inferenced with the trained algorithm to give clinicians intelligent diagnostic assistance. With Intel AI software optimization on Intel architecture, KFBIO improved inference performance by 21.38X.
Categories: 
Application Type - Deep Learning | Compute - Intel® Xeon® Scalable processor | Deployment Channel - CSP - Other, On-premise (Private Cloud, Other) | Industry - Healthcare | Intel® AI Analytics Toolkit powered by oneAPI - Intel® Optimization for PyTorch* | Model Training - Models can be trained - requires labeled data | Operating System - Linux | Solution Country/Region Availability - Mainland China | Solution Type - AI Software/SaaS | Topology - Proprietary, Other | Use Case - Medical imaging, analysis and diagnostics


Solutions Snapshot
Intel® AI Builders - Solutions Snapshot
Silo OS Visual Quality Control Solution monitors industrial manufacturing processes in real time with computer vision to ensure product quality. It is ideally suited for detecting product defects, analyzing dimensions, and for quality audits, product grading, and traceability. Optimizing with the Intel Distribution of OpenVINO toolkit on 3rd Gen Intel Xeon Scalable processors produced excellent performance results.
Categories: 
Application Type - Deep Learning | Compute - Intel® Xeon® Scalable processor | Deployment Channel - CSP - Amazon Web Services, CSP - Google Cloud, CSP - Microsoft Azure, On-premise (Private Cloud, Other) | Industry - Cross-Industry, Manufacturing | Intel® Distribution of OpenVINO™ Toolkit powered by oneAPI - Intel® Distribution of OpenVINO™ Toolkit powered by oneAPI | Model Training - Models can't be re-trained - Inference only | Solution Country/Region Availability - Worldwide | Solution Type - AI Software/SaaS | Topology - Unet | Use Case - Image / Object Detection / Recognition / Classification, Anomaly Detection, Predictive maintenance and analytics


Solutions Snapshot
Intel® AI Builders - Solutions Snapshot
Chest-rAi analyses chest radiographs using machine learning techniques to identify and highlight various pulmonary and other thoracic diseases. Chest-rAi is a deep learning system blended with a radiologist’s traditional approach of systematically examining a chest radiograph. The inference pipeline for Chest-rAi is optimized for 3rd Gen Intel Xeon Scalable processors and the Intel Distribution of OpenVINO Toolkit.
Categories: 
Application Type - Deep Learning | Combatting Covid-19 - Combatting Covid-19 | Compute - Intel® Xeon® Scalable processor | Deployment Channel - CSP - Microsoft Azure | Industry - Healthcare | Intel® Distribution of OpenVINO™ Toolkit powered by oneAPI - Intel® Distribution of OpenVINO™ Toolkit powered by oneAPI | Model Training - Models can be trained - requires labeled data | Operating System - Windows, Linux | Solution Country/Region Availability - India | Solution Type - AI Software/SaaS | Topology - Proprietary, Other | Use Case - Medical imaging, analysis and diagnostics


Solutions Snapshot
Intel® AI Builders - Solutions Snapshot
Neural Magic enables GPU-class performance on readily available 3rd Gen Intel Xeon Scalable processors. The Neural Magic Inference Engine is runtime software that takes advantage of readily available model optimization techniques, such as pruning and sparse-quantization, and the latest advances in these processors.
Categories: 
Application Type - Deep Learning | Compute - Intel® Xeon® Scalable processor | Deployment Channel - CSP - Other, On-premise (Private Cloud, Other) | Industry - Cross-Industry, Software | Model Training - Models can't be re-trained - Inference only | Operating System - Other (pls specify) | Solution Country/Region Availability - Worldwide | Solution Type - AI Software/SaaS | Topology - ResNet50 | Use Case - Image / Object Detection / Recognition / Classification, Other (pls specify)

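Pruning and sparse-quantization, the techniques named above, can be illustrated in a few lines of NumPy. This is a conceptual sketch of the two transforms, not Neural Magic's engine: unstructured magnitude pruning zeroes the smallest-magnitude weights, and symmetric linear quantization maps the survivors to int8 with a single scale factor.

```python
import numpy as np

def magnitude_prune(w, sparsity=0.9):
    """Zero out the smallest-magnitude weights (unstructured pruning)."""
    k = int(w.size * sparsity)
    thresh = np.sort(np.abs(w), axis=None)[k]
    return np.where(np.abs(w) < thresh, 0.0, w)

def quantize_int8(w):
    """Symmetric linear quantization to int8 with one per-tensor scale."""
    max_abs = np.abs(w).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

# Hypothetical dense layer weights standing in for a real model
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))

w_sparse = magnitude_prune(w, sparsity=0.9)     # ~90% of entries become zero
q, scale = quantize_int8(w_sparse)              # int8 codes + float scale

print(round(float(np.mean(w_sparse == 0)), 2))
```

A sparse int8 tensor like this is what lets a CPU runtime skip zero multiplications and use 8-bit vector instructions, which is the general idea behind the GPU-class CPU performance claimed above.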

Solutions Snapshot
Intel® AI Builders - Solutions Snapshot
Natural Language Processing advancements have accelerated with the availability of powerful computing at lower costs, but training on GPUs is expensive. Lilt, an AI-powered enterprise translation software and services company, wanted to optimize training tasks using CPUs instead of GPUs. By optimizing TensorFlow on Intel Xeon 8380 processors, Lilt was able to increase inference performance by nearly 4X and deploy its workloads on Google Cloud N2 high-memory instances with Intel Optimizations for TensorFlow 2.4.
Categories: 
Application Type - Deep Learning | Compute - Intel® Xeon® Scalable processor | Deployment Channel - CSP - Google Cloud, Hybrid Cloud | Industry - Cross-Industry, Government, Professional and Business Services | Intel® AI Analytics Toolkit powered by oneAPI - Intel® Optimization for TensorFlow* | Model Training - Models can't be re-trained - Inference only, Models can be trained - online learning | Operating System - Linux | Solution Country/Region Availability - Worldwide | Solution Type - AI Software/SaaS | Topology - GNMT | Use Case - Other (pls specify)

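The Intel Optimizations for TensorFlow that Lilt used are delivered through oneDNN. In stock TensorFlow on Linux they can be toggled with an environment variable — a configuration sketch, where the training script name is a hypothetical stand-in for any TensorFlow job:

```shell
# Enable oneDNN primitives in stock TensorFlow (default-on since TF 2.9 on x86 Linux)
export TF_ENABLE_ONEDNN_OPTS=1

# Run the workload as usual; train.py is a hypothetical entry point
python train.py
```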