Supermicro AI/Machine Learning Ready Platform


The Supermicro AI/Machine Learning Ready Platform pushes the boundaries of deep learning and scales to hundreds of accelerators in a single AI cluster. The Platform delivers a ready-to-go solution that offers the fastest path to scaling up AI and enables customers to build their own enterprise-grade AI cloud. Striking a balance between compute, networking, and storage, the Platform is an optimized solution for certified Intel AI ecosystem partners. It enables faster AI model training and inference on image and speech workloads, more efficient oil and gas exploration, more accurate medical image analytics, and faster autonomous-driving model generation. With Supermicro's first-to-market, industry-leading portfolio of server and storage building blocks, Supermicro works with customers to define and configure the best infrastructure for their AI application needs. Our accelerated deployment model enables customers to spend more time driving insights and less time building infrastructure.

*Please note that member solutions are often customizable to meet the needs of individual enterprise end users.



  • Ready to deploy end-to-end solution
    - Configuration rich and can be custom developed to meet any AI application needs
    - Scale out from one to many racks
    - Industry-leading Performance / Watt / $ / ft²
  • Intel AI Ecosystem Support
    - Optimized solution for certified Intel AI partners
    - Intel OpenVINO and Intel MKL support
  • Container and open source orchestration software support
    - Option 1: Red Hat OpenShift Container/OpenStack Platform and Ceph Storage
    - Option 2: Canonical Kubernetes container platform / Kubeflow and Ceph Storage
  • Optional consulting and service support
    - Rack level solution design, configuration, testing and deployment
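
As an illustration of the Option 2 software stack (Canonical Kubernetes with Kubeflow), a distributed training job on the platform could be described with a Kubeflow TFJob manifest along the lines of the sketch below. The job name, container image, replica count, and resource limits are illustrative assumptions, not a validated reference configuration:

```yaml
# Hypothetical Kubeflow TFJob for distributed TensorFlow training.
# Names, image, replica counts, and resource limits are examples only.
apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  name: resnet50-training            # example job name
spec:
  tfReplicaSpecs:
    Worker:
      replicas: 4                    # e.g., one worker per compute node
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: tensorflow
              # MKL-DNN-optimized TensorFlow build for Intel CPUs
              image: intel/intel-optimized-tensorflow:latest
              command: ["python", "/workspace/train.py"]   # hypothetical training script
              resources:
                limits:
                  cpu: "32"
                  memory: 64Gi
```

Submitted with `kubectl apply`, the Kubeflow training operator schedules the worker replicas across the cluster, which is how the platform scales a single training job out across racks.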

