Solution Brief
Intel® AI Builders - Solution Brief
This brief discusses how optimizing Matroid’s Similarity Search object detection model for Intel® Xeon® Scalable processors increased performance and could open the door to more flexible and potentially lower-cost services and deployments.
Categories: 
Application Type - Deep Learning, Other | Compute - Intel® Xeon® Scalable processor | Deployment Channel - CSP - Amazon Web Services, CSP - Google Cloud, CSP - Microsoft Azure, CSP - Other, On-premise (Private Cloud, Other), Hybrid Cloud | Industry - Arts and Entertainment, Cross-Industry, Retail, Communications | Intel® AI Analytics Toolkit powered by oneAPI - Intel® Optimization for TensorFlow*, Intel® Distribution for Python* | Model Training - Models can't be re-trained - Inference only | Operating System - Linux | Solution Geographic Availability - Worldwide | Solution Type - AI Software/SaaS | Topology - Other | Use Case - Image / Object Detection / Recognition / Classification, Facial Detection / Recognition / Classification, Video Surveillance and Analytics
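
As an illustration only (not Matroid’s code), the sketch below shows the typical way CPU inference is tuned with Intel® Optimization for TensorFlow*: enabling oneDNN graph optimizations and mapping TensorFlow’s thread pools to the Xeon host. The MobileNetV2 network, thread counts, and random input are placeholders.

# Illustrative sketch only: oneDNN optimizations plus CPU thread-pool tuning for
# TensorFlow inference on a Xeon host. The placeholder network and random input
# stand in for Matroid's Similarity Search detector.
import os
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"   # oneDNN graph optimizations (on by default in newer TF)

import numpy as np
import tensorflow as tf

# Match the thread pools to the host topology (placeholder values).
tf.config.threading.set_intra_op_parallelism_threads(16)  # threads within an op, e.g. physical cores
tf.config.threading.set_inter_op_parallelism_threads(2)   # ops run concurrently, e.g. sockets

model = tf.keras.applications.MobileNetV2(weights=None)    # placeholder network, random weights
batch = np.random.rand(1, 224, 224, 3).astype(np.float32)  # placeholder image batch
outputs = model.predict(batch)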


Solution Brief
Intel® AI Builders - Solution Brief
This brief discusses Seassoon’s text detection solution, which runs inference on incoming data to support cognitive decision-making. It highlights how utilizing Intel® Optimization for PyTorch* for image detection enabled Seassoon to achieve 3x faster inferencing and avoid the need for a more costly and complex GPU solution.
Categories: 
Application Type - Machine Learning | Compute - Intel® Xeon® Scalable processor | Deployment Channel - On-premise (Private Cloud, Other) | Industry - Cross-Industry, Energy and Utilities, Finance and Insurance, Government | Intel® AI Analytics Toolkit powered by oneAPI - Intel® Optimization for PyTorch* | Intel® Distribution of OpenVINO™ Toolkit powered by oneAPI - Intel® Distribution of OpenVINO™ Toolkit powered by oneAPI | Operating System - Linux | Solution Geographic Availability - China (PRC) | Solution Type - AI Platform as a Service (AI PaaS) | Topology - Proprietary, Other | Use Case - Data Preparations and Management, Document Management, Image / Object Detection / Recognition / Classification
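
As a hedged sketch (not Seassoon’s code), the pattern below shows how an inference-only PyTorch model is commonly accelerated on Xeon CPUs with Intel® Extension for PyTorch*; the torchvision network and input tensor stand in for Seassoon’s proprietary text-detection model.

# Hedged sketch: applying Intel Extension for PyTorch to an inference-only model.
# The torchvision ResNet-50 is a placeholder for Seassoon's text-detection network.
import torch
import intel_extension_for_pytorch as ipex
from torchvision.models import resnet50

model = resnet50(weights=None).eval()               # placeholder model, random weights
model = ipex.optimize(model, dtype=torch.float32)   # operator fusion / weight prepacking for Xeon

example = torch.randn(1, 3, 224, 224)               # placeholder input tensor
with torch.no_grad():
    output = model(example)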


Solution Brief
Intel® AI Builders - Solution Brief
This brief discusses ICETech’s vision computing-based systems that automatically identify vehicles and license plates in unattended smart parking operations, allowing them to run more efficiently. Optimizing for OpenVINO™, quantizing to INT8 using the Post-Training Optimization Tool (POT), and inferencing with Intel® DL Boost (VNNI) improved ICETech’s inferencing performance with minimal impact on accuracy.
Categories: 
Application Type - Deep Learning | Compute - Intel® Xeon® Scalable processor | Deployment Channel - On-premise (Private Cloud, Other) | Industry - Transportation and Warehousing | Intel® AI Analytics Toolkit powered by oneAPI - Intel® Distribution for Python* | Intel® Distribution of OpenVINO™ Toolkit powered by oneAPI - Intel® Distribution of OpenVINO™ Toolkit powered by oneAPI | Model Training - Models can't be re-trained - Inference only | Operating System - Linux | Solution Geographic Availability - Brazil, China (PRC), India, Korea, Taiwan, Other - Asia Pacific, Other - Europe and Africa, Other - North and South America | Topology - MobileNet, SSD-VGG16 | Use Case - Smart City, Image / Object Detection / Recognition / Classification
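
The sketch below is an illustrative, not authoritative, outline of INT8 post-training quantization with the OpenVINO POT Python API (circa OpenVINO 2022.x); the IR paths, calibration loader, and input shape are placeholders, and the exact DataLoader return convention varies between POT releases. On a VNNI-capable Xeon, the resulting INT8 model uses Intel® DL Boost at inference time.

# Hedged sketch of INT8 post-training quantization with the OpenVINO Post-Training
# Optimization Tool (POT) Python API. Paths, shapes, and the calibration data are
# placeholders, not ICETech's pipeline.
import numpy as np
from openvino.tools.pot import DataLoader, IEEngine, load_model, save_model, create_pipeline

class CalibrationLoader(DataLoader):
    """Feeds ~300 unlabeled frames for calibration statistics (placeholder data)."""
    def __len__(self):
        return 300
    def __getitem__(self, index):
        image = np.random.rand(1, 3, 300, 300).astype(np.float32)  # placeholder frame
        return image, None  # (data, annotation); DefaultQuantization ignores annotations

model = load_model({"model_name": "detector",
                    "model": "detector_fp32.xml",       # placeholder FP32 IR files
                    "weights": "detector_fp32.bin"})
engine = IEEngine(config={"device": "CPU"}, data_loader=CalibrationLoader())
algorithms = [{"name": "DefaultQuantization",
               "params": {"target_device": "CPU", "preset": "performance",
                          "stat_subset_size": 300}}]
pipeline = create_pipeline(algorithms, engine)
quantized = pipeline.run(model)
save_model(quantized, save_path="int8_model")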


Solution Brief
Intel® AI Builders - Solution Brief
Knowledge Lens assists manufacturers and industries with Artificial Intelligence (AI), Industrial IoT, Big Data, and other technologies that help transform enterprises into Industry 4.0-grade operations. This brief highlights how Knowledge Lens worked with the AI Builders team to apply OpenVINO™ optimization across multiple use cases, achieving substantial performance improvements without compromising accuracy.
Categories: 
Application Type - Machine Learning | Compute - Intel® Xeon® Scalable processor, Intel® Core™ processor | Deployment Channel - CSP - Amazon Web Services, CSP - Microsoft Azure, On-premise (Private Cloud, Other) | Industry - Transportation and Warehousing | Operating System - Linux | Solution Geographic Availability - Worldwide | Solution Type - AI Platform as a Service (AI PaaS) | Topology - ResNet50, Yolo | Use Case - Video Surveillance and Analytics
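
For context only (not Knowledge Lens code), the sketch below shows the basic OpenVINO™ Runtime pattern for running a video frame through an optimized CPU model such as ResNet50; the IR path, input size, and synthetic frame are placeholders.

# Hedged sketch: synchronous OpenVINO Runtime inference on a CPU.
# The IR file and random frame are placeholders for a real video-analytics model.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("resnet50_fp32.xml")                # placeholder IR
compiled = core.compile_model(model, device_name="CPU")
output_layer = compiled.output(0)

frame = np.random.rand(1, 3, 224, 224).astype(np.float32)   # placeholder video frame
result = compiled([frame])[output_layer]                    # run one synchronous inference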


White Paper
Intel® AI Builders - White Paper
MaxQ AI uses the phrase ‘Data Industrialization’ to represent the end-to-end life cycle for data mining and treatment in support of software-based medical devices. This white paper explores the methods MaxQ uses to help ensure the utmost security during Data Industrialization throughout the use of its products.
Categories: 
Application Type - Deep Learning | Compute - Intel® Xeon® Scalable processor, Intel® Core™ processor | Industry - Healthcare | Intel® Distribution of OpenVINO™ Toolkit powered by oneAPI - Intel® Distribution of OpenVINO™ Toolkit powered by oneAPI | Model Training - Models can be trained - requires labeled data | Operating System - Windows | Solution Geographic Availability - Other - Europe and Africa, Other - North and South America | Topology - Proprietary, VGG-19, Unet | Use Case - Medical imaging, analysis and diagnostics


Solution Brief
Intel® AI Builders - Solution Brief
This solution brief highlights how Intel optimizations of Winning Health’s Bone Age Assessment (BAA) model greatly reduced image analysis time, enabling the SaaS solution to scale to hospitals and clinicians on cloud platforms.
Categories: 
Application Type - Deep Learning | Compute - Intel® Xeon® Scalable processor | Deployment Channel - CSP - Other, On-premise (Private Cloud, Other) | Industry - Healthcare | Intel® AI Analytics Toolkit powered by oneAPI - Intel® Optimization for PyTorch* | Intel® Distribution of OpenVINO™ Toolkit powered by oneAPI - Intel® Distribution of OpenVINO™ Toolkit powered by oneAPI | Model Training - Models can't be re-trained - Inference only | Operating System - Linux | Solution Geographic Availability - China (PRC) | Solution Type - AI Software/SaaS | Topology - Proprietary, Other | Use Case - Medical imaging, analysis and diagnostics
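
As an assumption-laden sketch (not Winning Health’s pipeline), the lines below show a common PyTorch-to-OpenVINO deployment path: export a trained model to ONNX, then convert it to OpenVINO IR with the Model Optimizer CLI. The network, input size, and file paths are placeholders.

# Hedged sketch of the usual PyTorch -> ONNX -> OpenVINO IR path for an inference-only
# model; the ResNet-18, shapes, and paths are placeholders, not the BAA model.
import subprocess
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()                      # placeholder model
dummy = torch.randn(1, 3, 512, 512)                        # placeholder image-sized input
torch.onnx.export(model, dummy, "baa_model.onnx", opset_version=13)

# Convert ONNX to OpenVINO IR with the Model Optimizer CLI (assumes OpenVINO dev
# tools are installed and `mo` is on PATH).
subprocess.run(["mo", "--input_model", "baa_model.onnx", "--output_dir", "ir"], check=True)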


Solution Brief
Intel® AI Builders - Solution Brief
This solution snapshot illustrates how Yellow Messenger’s virtual assistant needed to run inference on an intent classification model in under 100 ms to provide customers with optimal experiences. Optimizing the intent classification model on 3rd Gen Intel® Xeon® Scalable processors reduced inferencing time to less than 100 ms, cutting latency and increasing throughput to deliver real-time, intelligent responses.
Categories: 
Application Type - Machine Learning | Compute - Intel® Xeon® Scalable processor | Deployment Channel - CSP - Amazon Web Services, CSP - Google Cloud, CSP - Microsoft Azure | Industry - Cross-Industry, Finance and Insurance, Healthcare | Operating System - Linux | Solution Geographic Availability - India, Other - Asia Pacific | Solution Type - AI Platform as a Service (AI PaaS) | Topology - Other | Use Case - Conversational Bots and Voice Agents
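
The snippet below is a generic latency check, not Yellow Messenger’s code: it measures mean and p95 latency of a stand-in classify() call against the 100 ms budget described above.

# Hedged sketch: timing an intent-classification call against a 100 ms latency budget.
# classify() is a placeholder for the real model inference.
import time
import statistics

def classify(text):
    time.sleep(0.005)          # placeholder for real inference work
    return "check_balance"

latencies_ms = []
for _ in range(200):
    start = time.perf_counter()
    classify("what is my account balance?")
    latencies_ms.append((time.perf_counter() - start) * 1000.0)

latencies_ms.sort()
p95 = latencies_ms[int(0.95 * len(latencies_ms)) - 1]
print(f"mean {statistics.mean(latencies_ms):.1f} ms, p95 {p95:.1f} ms (budget: 100 ms)")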


Solution Brief
Intel® AI Builders - Solution Brief
This solution snapshot illustrates how Nordigen needed to reduce the hyperparameter tuning time for their XGBoost model (part of the Scoring Insights product suite) in order to streamline their model search efforts. Accelerating tuning on 2nd Gen Intel® Xeon® Scalable processors allowed Nordigen to expand the parameter space, and the workload runs even faster on 3rd Gen Intel Xeon Scalable processors.
Categories: 
Application Type - Machine Learning | Compute - Intel® Xeon® Scalable processor | Deployment Channel - CSP - Amazon Web Services, On-premise (Private Cloud, Other) | Industry - Finance and Insurance | Model Training - Models can be trained - data input only required | Operating System - Linux | Solution Geographic Availability - Brazil, Germany, India, Mexico, United Kingdom, United States, Other - Asia Pacific, Other - Europe and Africa | Solution Type - AI Software/SaaS | Use Case - Data Analytics
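
As a rough sketch only (not Nordigen’s Scoring Insights code), the lines below show a CPU-parallel hyperparameter search over an XGBoost classifier, the kind of workload the snapshot describes accelerating; the synthetic data and parameter grid are placeholders.

# Hedged sketch: randomized hyperparameter search for XGBoost, parallelized across
# CPU cores. Data and search space are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from xgboost import XGBClassifier

X, y = make_classification(n_samples=20000, n_features=40, random_state=0)

search = RandomizedSearchCV(
    XGBClassifier(tree_method="hist", n_jobs=4),   # histogram method is fast on CPUs
    param_distributions={
        "max_depth": [4, 6, 8, 10],
        "learning_rate": [0.05, 0.1, 0.2],
        "n_estimators": [200, 400, 800],
        "subsample": [0.7, 0.85, 1.0],
    },
    n_iter=20,
    cv=3,
    n_jobs=4,            # fit candidate models in parallel across cores
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)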


White Paper
Intel® AI Builders - White Paper
This white paper discusses ACCIPIO, a software device designed to be installed within healthcare facilities’ radiology networks to identify and prioritize NCCT scans based on algorithmically identified findings of acute intracranial hemorrhage (aICH).
Categories: 
Application Type - Deep Learning | Compute - Intel® Xeon® Scalable processor, Intel® Core™ processor | Industry - Healthcare | Intel® Distribution of OpenVINO™ Toolkit powered by oneAPI - Intel® Distribution of OpenVINO™ Toolkit powered by oneAPI | Model Training - Models can be trained - requires labeled data | Operating System - Windows | Solution Geographic Availability - Other - Europe and Africa, Other - North and South America | Topology - Proprietary, VGG-19, Unet | Use Case - Medical imaging, analysis and diagnostics


Solution Brief
Intel® AI Builders - Solution Brief
This solution brief highlights how NimbleBox’s platform utilizes the Intel® Distribution of OpenVINO™ toolkit and Intel optimizations for machine learning frameworks and languages to boost inferencing on popular AI models running on Intel CPUs.
Categories: 
Application Type - Deep Learning | Compute - Intel® Xeon® Scalable processor | Deployment Channel - CSP - Amazon Web Services | Industry - Education, Other | Intel® AI Analytics Toolkit powered by oneAPI - Intel® Distribution for Python*, Intel® Optimization for TensorFlow* | Intel® Distribution of OpenVINO™ Toolkit powered by oneAPI - Intel® Distribution of OpenVINO™ Toolkit powered by oneAPI | Model Training - Models can't be re-trained - Inference only | Operating System - Linux, Other | Solution Geographic Availability - India | Solution Type - AI Platform as a Service (AI PaaS) | Topology - Yolo | Use Case - Other
