This solution snapshot illustrates how Yellow Messenger's virtual assistant needed to run inference on an intent classification model in under 100 ms to provide customers with optimal experiences. Optimizing the model on 3rd Gen Intel® Xeon® Scalable processors reduced inferencing time to less than 100 ms, cutting latency and increasing throughput to deliver real-time, intelligent responses.
This solution snapshot illustrates how Nordigen needed to reduce the hyperparameter tuning time for the XGBoost model in its Scoring Insights product suite in order to streamline its model search efforts. Accelerating tuning on 2nd Gen Intel® Xeon® Scalable processors allowed Nordigen to expand the parameter space, and tuning ran even faster on 3rd Gen Intel Xeon Scalable processors.
This white paper discusses ACCIPIO, a software device designed to be installed within healthcare facilities' radiology networks to identify and prioritize non-contrast CT (NCCT) scans based on algorithmically identified findings of acute intracranial hemorrhage (aICH).
This solution brief highlights how NimbleBox's platform utilizes the Intel® Distribution of OpenVINO™ toolkit and Intel optimizations for machine learning frameworks and languages to boost inferencing on popular AI models running on Intel CPUs.
This solution brief highlights how GIGABYTE was able to improve their AI workloads by implementing a TensorFlow framework with Intel Distribution of OpenVINO toolkit and utilizing 2nd Gen Intel Xeon Scalable processors with Intel DL Boost in a GIGABYTE system.
This solution brief highlights how Intel helped optimize the performance of HYHY's full-cycle AI medical imaging solution by offering technologies such as 2nd Gen Intel® Xeon® Scalable processors with Intel® Deep Learning Boost (Intel® DL Boost) as the solution's core processing engine, along with software optimization tools such as the OpenVINO™ toolkit and Intel® Distribution for Python. As a result, HYHY saw significant improvements in inference speed in image analysis scenarios such as COVID-19 screening and breast cancer detection.
This snapshot shares the success achieved by Baosight and Intel in building an unsupervised time series anomaly detection solution using long short-term memory (LSTM) models on Analytics Zoo.
This solution brief highlights how the Paperspace Gradient platform optimizes machine learning pipelines and delivers faster inferencing and lower query latency on 2nd Gen Intel® Xeon® processors using the Intel® Distribution of OpenVINO™ toolkit and the OpenVINO™ Model Server.
This solution brief highlights how Embold's AI-powered intelligent static code analysis tool, already optimized with Intel® Optimizations for TensorFlow, gains additional performance through fine-tuning of its GPT-2 model.
This white paper highlights the HCL Social Distance Monitoring solution, an application optimized with the Intel® Distribution of OpenVINO™ toolkit that can run inference algorithms across a variety of Intel-enabled edge and data center devices.