Developers face challenges when integrating solid-state drives, faster network infrastructure, and the latest processor technology. This program outlines performance and cost optimization, grounded in careful consideration and an understanding of the underlying hardware and software.
In this course, Petar Torre, technical lead for the Intel Service Provider Group, explains OpenStack* Enhanced Platform Awareness (EPA). The session looks at the impact of EPA on an NFV system and reviews results from early proof of concept trials.
In this course, Mirantis* technical instructor and writer Devin Parrish introduces viewers to OpenStack* by discussing the history and components of the software, the OpenStack Foundation, the benefits of OpenStack, and how the software has evolved.
This course provides a brief overview of Red Hat's involvement with the open source community.
In this course, John Kariuki, Storage Applications Engineer, provides an overview of the Storage Performance Development Kit, covering the problem it addresses, licensing, and hardware and operating system support.
This course provides an overview of traditional storage architecture, including key concepts and features of a storage system. In addition, it examines the services driving storage transformation and explores the core offerings of software-defined storage.
In this course, you will learn how to use Intel® Cache Acceleration Software to improve business by providing solutions to typical issues faced in the Data Center.
In this course, you will learn how to use Intel® Cache Acceleration Software to improve business by providing solutions to typical issues faced in the Data Center. We will provide a deep dive into Intel® CAS use cases and industry feedback, and explore how Intel® CAS and Intel® Optane™ SSDs combine to achieve high-performance results.
This course explains the ways organizations use SPDK to accelerate their application access to local and remote storage and serve optimized block storage to Virtual Machines. You will gain a deeper understanding of the capabilities provided by the SPDK stack. Lastly, key concepts central to building an asynchronous polled I/O application are discussed.
In this video, Intel Research Scientist and Architect Andrew Herdrich describes Intel® Resource Director Technology (RDT), its uses and benefits, and how to deploy it in the data center.
This video demonstrates using Intel® VTune™ Amplifier to profile the SPDK I/O APIs, analyze PCIe traffic, and identify SPDK device-related I/O performance issues.
In this course, Petar Torre, Intel's Lead Architect, Service Provider Group, provides an introduction to virtualization and cloud principles, and key open source projects for building NFV platforms.
In this course, James Chapman, platform applications engineer at Intel, provides an overview of OpenStack* Enhanced Platform Awareness (EPA) features, including CPU pinning, huge pages, NUMA, PCIe pass-through, and SR-IOV.
This course provides an overview of ISA-L, including the functions in the library, key use cases, and a direct comparison to Intel® QuickAssist Technology (QAT).
In this course, Eric Adams, Intel Software Engineer, provides insight into Kata Containers, covering its features and roadmap, technical details, and how to customize it for your workload.
In this course, Karl Vietmeier, Intel Senior Cloud Solutions Architect, provides insight into the core components of a Ceph cluster, including the RADOS daemons, and shares a wealth of related materials.
This course introduces OpenVINO™ and provides instruction on how to use the toolkit to develop unique visual solutions and experiences, along with a variety of tools that can help data center solutions developers maximize the performance of their solutions. The Intel® Distribution of OpenVINO™ toolkit is particularly relevant to data center solutions developers who seek to emulate human vision with convolutional neural networks (CNNs). The toolkit extends workloads across Intel hardware (including accelerators) and maximizes performance. Other features and benefits: it enables deep learning at the edge; supports heterogeneous execution across computer vision accelerators (CPU, GPU, Intel® Movidius™ Compute Stick, and FPGA) using a common API; speeds time to market via a library of functions and pre-optimized kernels; and includes optimized calls for OpenCV and OpenVX.