Imagine a world where you have the flexibility to infuse intelligence into every application, from edge to cloud. Learn how leading businesses are getting AI done, and how yours can too, with the new 3rd Gen Intel® Xeon® Scalable platform.
Whether you’re sequencing genomes, making product recommendations, or optimizing your supply chain, enabling smarter apps is good for your bottom line. With a broad choice of smart solutions and tools – optimized for our general-purpose processors with built-in AI acceleration and for domain-specific accelerators, all built on the scalable, open oneAPI standard – everyone can now unleash limitless insight, from edge to cloud.
Nordigen needed to reduce the hyperparameter tuning time for their XGBoost model (part of the Scoring Insights product suite) in order to streamline their model search efforts. Accelerating tuning on 2nd Gen Intel® Xeon® Scalable processors allowed Nordigen to expand the parameter space; tuning ran faster still on 3rd Gen Intel Xeon Scalable processors.
The Yellow Messenger virtual assistant needed to run inference on an intent classification model in under 100 ms to provide customers with optimal experiences. Optimizing the model on 3rd Gen Intel® Xeon® Scalable processors brought inference time under 100 ms, cutting latency and raising throughput to deliver real-time, intelligent responses.
Most businesses want to benefit from AI without investing a lot of time and untold resources to build it from scratch. For developers, creating an AI application from nothing can take months. That’s where Intel’s vast ecosystem of partner solutions comes into play. Get started today by choosing from Intel’s rich catalog of smart enterprise apps and turnkey solutions, which make it possible to unleash more valuable insights with faster time-to-solution and lower cost than ever before.
Harness tools that streamline end-to-end data science on Intel® Xeon® Scalable processors. Then, plug and play with the domain-specific accelerators and innovative technologies you need from Intel’s comprehensive lineup – built on a common open standard (oneAPI) to minimize switching costs – making it easier and faster to build and deploy smarter models into every application.
How Wonderful Gets Done 2021 will be your chance to hear directly from our leaders as they outline a bold vision for Intel technologies from the edge to the cloud. Join us for a launch event that will offer expert insights from Intel and our ecosystem partners, as well as a deep look at our re-imagined solutions.
We at Matroid pride ourselves on being able to help enterprises deploy computer vision detectors quickly and easily, on our customers' premises or in the cloud. We are happy to see that the Intel AI Builder team was able to leverage Xeon Scalable processors and the AVX-512 instructions embedded in TensorFlow to deliver the performance we needed for inference on one of the main detectors used by Similarity Search. This result gives our customers a viable alternative hardware infrastructure for deploying this detector. Further, with the latest 3rd Gen Intel Xeon Scalable processors, the Intel team has delivered a new level of performance that we are eager to take to our customers who have Xeon processors deployed en masse.
Reza Zadeh, CEO of Matroid & Adjunct Professor at Stanford University
John Snow Labs is excited to put the AI acceleration features of the 3rd Gen Intel Xeon Scalable processors to good use. Using Intel's optimized hardware and software as part of the Spark NLP library with current-generation Xeon processors provides the open-source community and healthcare industry with turnkey acceleration of common text analytics scenarios – and the upgraded speed and cost-effectiveness of the 3rd Gen Intel Xeon processors are set to have a larger impact on a broader set of use cases.
David Talby, CTO, John Snow Labs
The incredible flexibility of the new 3rd Gen Intel® Xeon® Scalable processors empowers us to select higher throughput or lower latency by varying the number of streams, making Intel technology ideal for both batch processing and transactional workloads. Using the Intel Distribution of OpenVINO™ Toolkit enabled us to provide greater performance and value for our customers.
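The throughput-versus-latency trade-off described above can be sketched as a configuration choice in the OpenVINO Inference Engine Python API of that era. This is a hedged sketch, not the quoted customer's code: the model paths are placeholders, and the stream counts are illustrative assumptions.

```python
# Configuration sketch (not runnable as-is: model paths are placeholders).
# "CPU_THROUGHPUT_STREAMS" is the Inference Engine config key that sets how
# many parallel inference streams run on the CPU.
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")  # placeholders

# More streams -> higher aggregate throughput (suits batch processing).
exec_net_throughput = ie.load_network(
    net, "CPU", config={"CPU_THROUGHPUT_STREAMS": "4"}, num_requests=4
)

# A single stream -> lowest per-request latency (suits transactional work).
exec_net_latency = ie.load_network(
    net, "CPU", config={"CPU_THROUGHPUT_STREAMS": "1"}, num_requests=1
)
```

The design point is that the same compiled model serves both workload profiles; only the stream configuration changes, so no retraining or model conversion is needed to switch between batch and real-time serving.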