The Intel® AI DevCloud for Builders is an exclusive cloud for Intel® AI Builders members. The DevCloud:

  • Is provided at no charge.
  • Includes a cluster of high-end Intel® Xeon® Scalable processors to support machine learning and deep learning training and inference compute needs.
  • Provides access to precompiled software optimized for Intel® architecture on Intel® Xeon® Scalable processors to simplify the deep learning process and accelerate time-to-solution.
  • Provides enterprise customers with a controlled environment where everything "just works".
  • May be used for limited proofs of concept (POCs) lasting 3–6 months.
  • Starts at 200 GB of storage.

To learn more about how to join Intel® AI Builders please visit https://builders.intel.com/ai/howtojoin.

Already an Intel® AI Builders member? Log in to the Member Portal for access instructions.

Intel and Amazon Web Services (AWS)

Want to take advantage of the incredible performance of Intel technology on AWS? Learn more about the new compute-intensive C5 instances for Amazon EC2 and the longstanding collaboration between Intel and AWS:

C5 instances for Amazon EC2:

  • Are ideal for compute-intensive scientific modeling, financial operations, machine learning (ML) inference, high-performance computing (HPC), and distributed analytics that require high performance for floating-point calculations.
  • Include an Intel® custom cloud solution based on next-generation Intel® Xeon® Scalable processors and Intel® AVX-512 (a quick verification sketch follows this list).
  • Offer up to 72 vCPUs (twice that of the previous generation of compute-optimized instances).
  • Support 144 GiB of memory.
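
To confirm that a newly launched C5 instance actually exposes Intel AVX-512 to your workload, the minimal Python sketch below reads the CPU feature flags. It assumes a Linux guest where /proc/cpuinfo is readable; the avx512 flag-prefix check is illustrative rather than exhaustive.

    # Minimal sketch: list the AVX-512 feature flags visible inside the instance.
    # Assumes a Linux guest where /proc/cpuinfo is readable.
    def avx512_flags(cpuinfo_path="/proc/cpuinfo"):
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags = set(line.split(":", 1)[1].split())
                    return sorted(flag for flag in flags if flag.startswith("avx512"))
        return []

    if __name__ == "__main__":
        found = avx512_flags()
        if found:
            print("AVX-512 support detected:", ", ".join(found))
        else:
            print("No AVX-512 flags found; check the instance type.")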

Intel and Google Cloud Platform (GCP)

Google Cloud now offers the latest Intel® Xeon® processor family, codenamed “Skylake,” which can be specified through the CPU selector tool and is available in all North America, Europe, and Asia-Pacific GCP regions.
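
Besides the Cloud Console's CPU selector tool, the minimum CPU platform can also be requested from scripts. The sketch below is a minimal Python wrapper around the gcloud CLI's --min-cpu-platform flag, assuming the Cloud SDK is installed and authenticated; the instance name, zone, and machine type are placeholders.

    # Minimal sketch: create a Compute Engine VM pinned to the Skylake CPU
    # platform via the gcloud CLI. Assumes the Cloud SDK is installed and
    # authenticated; the name, zone, and machine type are placeholders.
    import subprocess

    def create_skylake_vm(name="skylake-demo", zone="us-central1-a",
                          machine_type="n1-standard-32"):
        subprocess.run([
            "gcloud", "compute", "instances", "create", name,
            "--zone", zone,
            "--machine-type", machine_type,
            "--min-cpu-platform", "Intel Skylake",
        ], check=True)

    if __name__ == "__main__":
        create_skylake_vm()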

Intel Skylake-based instances in Google Compute Engine (GCE) or Google Kubernetes Engine (GKE) deliver record-breaking performance at no additional cost and can improve application performance with:

  • Up to 20% faster business application compute performance**
  • Up to 82% faster HPC performance**
  • Almost 2X more memory bandwidth**

Get started with $300 in free credits. To secure additional credits, please contact your Intel® AI Builders account manager.

Latest Intel® Optimizations for TensorFlow* Now Available

Find out how you can take full advantage of Intel® architecture and extract maximum performance from your deep learning applications with TensorFlow* optimized using the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN). For download links and instructions on installing via Python* packages or Docker containers, see the Intel® Optimizations for TensorFlow* Installation Guide.
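
As a rough illustration of the kind of tuning the installation guide walks through, the Python sketch below applies the threading settings commonly recommended for MKL-DNN builds of TensorFlow (OpenMP environment variables plus TensorFlow's own thread pools). It assumes a TensorFlow 1.x-style API and an Intel-optimized build such as the intel-tensorflow pip package; the specific thread counts are example values, not prescriptions.

    # Minimal sketch (TensorFlow 1.x-style API): apply the threading settings
    # commonly recommended for Intel MKL-DNN builds of TensorFlow. Thread
    # counts are illustrative; tune them to your core count. An Intel-optimized
    # build is assumed (for example: pip install intel-tensorflow).
    import os
    import tensorflow as tf

    # OpenMP / MKL-DNN runtime settings (example values).
    os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"
    os.environ["KMP_BLOCKTIME"] = "1"
    os.environ["OMP_NUM_THREADS"] = "8"

    # TensorFlow-level thread pools (example values).
    config = tf.ConfigProto(
        intra_op_parallelism_threads=8,  # threads used within a single op
        inter_op_parallelism_threads=2,  # independent ops run concurrently
    )

    with tf.Session(config=config) as sess:
        # Trivial op, just to show a session created with the tuned config.
        print(sess.run(tf.constant("MKL-DNN threading settings applied")))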

** All measurements based on comparing GCP 32 vCPU machine type instances between the Intel Xeon processor codenamed "Skylake" and the previous-generation Intel Xeon processor codenamed "Broadwell." 20% based on SPECint. 82% based on High Performance Linpack for a 4-node cluster with AVX-512. Performance improvements include gains from use of the Intel® Math Kernel Library and Intel® AVX-512. Performance tests are measured using specific computer systems, components, software, operations and functions, and may have been optimized for performance only on Intel microprocessors. Any change to any of those factors may cause the results to vary. You should consult other information to assist you in fully evaluating your contemplated purchases. For more information go to https://www.intel.com/benchmarks.