This brief discusses how optimizing Matroid’s Similarity Search object detection model for Intel Xeon Scalable processors increased performance and could open new doors for more flexible and potentially lower-cost services and deployments.
This brief discusses Seassoon’s text detection solution, which runs inference on input data for cognitive decision-making. It highlights how utilizing Intel Optimizations for PyTorch in its image detection pipeline enabled Seassoon to achieve 3X faster inferencing and avoid the need for a more costly and complex GPU solution.
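As a rough illustration of the kind of optimization involved, here is a minimal sketch applying the Intel Extension for PyTorch (the open-source package behind Intel's PyTorch optimizations) to a generic detection model for CPU inference. The model and input shape are placeholders, not Seassoon's actual pipeline.

```python
import torch
import intel_extension_for_pytorch as ipex  # pip install intel-extension-for-pytorch

# Placeholder model: stands in for a text/image detection network.
model = torch.hub.load("pytorch/vision", "resnet50", weights=None)
model.eval()

# Apply Intel's graph and operator optimizations for CPU inference.
# bfloat16 is optional and benefits CPUs with AVX-512 BF16 or AMX support.
model = ipex.optimize(model, dtype=torch.bfloat16)

# Run inference under autocast so the bfloat16 kernels are actually used.
sample = torch.randn(1, 3, 224, 224)  # hypothetical input shape
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    output = model(sample)
```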
This brief discusses ICETech’s vision computing-based systems that automatically identify vehicles and license plates in unattended smart parking operations, allowing them to run more efficiently. Optimizing with OpenVINO, quantizing to INT8 using the Post-Training Optimization Tool (POT), and inferencing with Intel DL Boost (VNNI) improved ICETech’s inferencing performance with minimal impact on accuracy.
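For context, the sketch below shows the general shape of an INT8 post-training quantization flow with POT's Python API (OpenVINO 2022.x era). The model paths, calibration loader, and subset size are illustrative assumptions, not ICETech's configuration.

```python
from openvino.tools.pot import IEEngine, load_model, save_model, create_pipeline

# Illustrative model paths; a real flow points these at the exported IR files.
model_config = {"model_name": "plate_detector",
                "model": "plate_detector.xml",
                "weights": "plate_detector.bin"}
engine_config = {"device": "CPU"}

# DefaultQuantization collects activation statistics on a calibration subset
# and produces an INT8 model that Intel DL Boost (VNNI) can accelerate.
algorithms = [{
    "name": "DefaultQuantization",
    "params": {"target_device": "CPU", "preset": "performance",
               "stat_subset_size": 300},
}]

model = load_model(model_config)
# `calibration_loader` is a hypothetical openvino.tools.pot.DataLoader
# subclass yielding calibration images; its definition is omitted here.
engine = IEEngine(config=engine_config, data_loader=calibration_loader)
pipeline = create_pipeline(algorithms, engine)
int8_model = pipeline.run(model)
save_model(int8_model, save_path="./int8")
```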
Knowledge Lens assists manufacturers and industries with Artificial Intelligence (AI), Industrial IoT, Big Data, and other technologies that help transform enterprises into Industry 4.0-grade operations. This brief highlights how Knowledge Lens worked with the AI Builders team to apply OpenVINO optimizations across multiple use cases, achieving substantial performance improvements without compromising accuracy.
MaxQ AI uses the phrase ‘Data Industrialization’ to represent the end-to-end life cycle for data mining and treatment in support of software-based medical devices. This white paper explores the methods MaxQ uses to help ensure the utmost security during Data Industrialization in the use of its products.
This solution brief highlights how Intel optimizations of Winning Health’s Bone Age Assessment (BAA) model helped greatly reduce image analysis time, enabling the SaaS solution to scale broadly for hospitals and clinicians on cloud platforms.
This solution snapshot illustrates how Yellow Messenger’s virtual assistant needed to run inference on an intent classification model in under 100 ms to provide customers with optimal experiences. Optimizing Yellow Messenger’s intent classification model on 3rd Gen Intel® Xeon® Scalable processors reduced inferencing time to less than 100 ms, cutting latency and increasing throughput to deliver real-time, intelligent responses.
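A latency target like this is typically verified by measuring tail latency over many single-request inferences. The sketch below is a generic benchmarking pattern under assumed names (`model`, `encode`), not Yellow Messenger's actual harness.

```python
import time
import statistics

def measure_latency_ms(model, encode, utterances, warmup=10, runs=200):
    """Measure per-request intent-classification latency in milliseconds.

    `model` and `encode` are hypothetical stand-ins for the classifier
    and its text preprocessing step.
    """
    sample = encode(utterances[0])
    for _ in range(warmup):          # warm caches and any JIT/graph compilers
        model(sample)

    timings = []
    for i in range(runs):
        x = encode(utterances[i % len(utterances)])
        start = time.perf_counter()
        model(x)
        timings.append((time.perf_counter() - start) * 1000.0)

    timings.sort()
    return {
        "p50_ms": statistics.median(timings),
        "p95_ms": timings[int(0.95 * len(timings)) - 1],
    }
```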
Nordigen needed to reduce the hyperparameter tuning time for their XGBoost model (part of the Scoring Insights product suite) in order to streamline their model search efforts. This solution snapshot illustrates how accelerating tuning on 2nd Gen Intel® Xeon® Scalable processors allowed Nordigen to expand the parameter space, and how the workload ran even faster on 3rd Gen Intel Xeon Scalable processors.
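As a generic illustration of the kind of search being accelerated, the sketch below tunes an XGBoost classifier with scikit-learn's randomized search. The parameter grid and data are placeholders, not Nordigen's Scoring Insights configuration.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from xgboost import XGBClassifier

# Placeholder data standing in for credit-scoring features.
X, y = make_classification(n_samples=5000, n_features=40, random_state=0)

# Hypothetical search space; a wider space is what faster CPUs make affordable.
param_distributions = {
    "max_depth": [3, 4, 5, 6, 8],
    "learning_rate": [0.01, 0.05, 0.1, 0.2],
    "n_estimators": [100, 200, 400],
    "subsample": [0.6, 0.8, 1.0],
}

search = RandomizedSearchCV(
    XGBClassifier(tree_method="hist", n_jobs=-1),  # "hist" is CPU-friendly
    param_distributions,
    n_iter=40,          # number of sampled configurations
    cv=3,
    scoring="roc_auc",
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```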
This white paper discusses ACCIPIO, a software device designed to be installed within healthcare facilities’ radiology networks to identify and prioritize non-contrast CT (NCCT) scans based on algorithmically identified findings of acute intracranial hemorrhage (aICH).
This solution brief highlights how NimbleBox’s platform utilizes the Intel® Distribution of OpenVINO™ toolkit and Intel optimizations for machine learning frameworks and languages to boost inferencing on popular AI models running on Intel CPUs.
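For reference, the sketch below shows the basic OpenVINO runtime pattern for CPU inference that a platform like this builds on. The model path and input are illustrative placeholders.

```python
import numpy as np
from openvino.runtime import Core  # OpenVINO 2022.x+ Python API

core = Core()
# "model.xml" is a placeholder IR file; an ONNX model path also works here.
model = core.read_model("model.xml")
compiled = core.compile_model(model, device_name="CPU")

# Build a dummy input matching the compiled model's expected shape.
input_port = compiled.input(0)
dummy = np.random.rand(*input_port.shape).astype(np.float32)

# Run a single synchronous inference and fetch the first output.
result = compiled(dummy)[compiled.output(0)]
print(result.shape)
```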