Today, we took another step toward the ambitious Cloud for All vision with the launch of a new data center telemetry framework into the open source community. We unveiled this framework, called snap, at the Tectonic Summit in New York, a two-day event for innovators in container infrastructure.
Snap enables better data center scheduling and workload management through access to underlying telemetry data and platform metrics. The snap framework gives system administrators more intelligent control of data center infrastructure in cloud environments by:
- Empowering systems to expose a consistent set of telemetry data
- Simplifying telemetry ingestion across ubiquitous storage systems
- Improving the deployment model, packaging and flexibility for collecting telemetry
- Allowing flexible processing of telemetry data on the agent (e.g., machine learning)
- Providing powerful clustered control of telemetry workflows across small or large clusters
So why do we need a framework like snap? As workload scheduling and management becomes more advanced with the advent of software-defined infrastructure, access to the underlying platform capabilities and work states is critical. Data is the key to optimizing workload deployment based on performance and capability requirements. And this is where snap enters the picture.
Snap-enabled software tools will give system integrators, operators, solutions providers, and the data center analytics ecosystem a much more comprehensive view of infrastructure capabilities, utilization, and events in real time—making full automation and orchestration of workloads across server, storage, and network resources a reality.
There are three essential pieces to how the snap framework empowers all cloud platforms. First is the plugin architecture: snap has a simple and smart modular design. The three types of plugins (collectors, processors, and publishers) allow snap to mix and match functionality based on user need. All plugins are designed with versioning, signing and deployment at scale in mind. The open plugin model allows for loading built-in, community, or proprietary plugins into snap.
Second, snap is designed to update dynamically. Each scheduled workflow automatically uses the most mature version of each plugin for that step, unless the collection is pinned to a specific version, so loading a new plugin automatically upgrades running workflows. Plugins load dynamically, without restarting the service or server, and each newly loaded collector extends the metric catalog, giving immediate access to new measurements. A newer plugin version can be swapped in for an older one in a safe transaction. Together, these behaviors allow for simple and secure bug fixes, security patching, and accuracy improvements in production.
Third, snap administration scales through snap tribe. With tribe, nodes work in groups (aka tribes): requests are made through agreement- or task-based node groups, built on a scalable gossip-based node-to-node communication protocol. Administrators can control every snap node in a tribe agreement by messaging just one of them, and new nodes are auto-discovered and import tasks and plugins from the other nodes in their tribe. It is cluster configuration management made simple.
The Intel news from the Tectonic Summit didn’t stop with snap. We are also using this forum to highlight the publication of a new reference architecture that will help accelerate the arrival of easy-to-deploy cloud solutions. This reference architecture features CoreOS Tectonic running on Intel® Xeon® processor-based Supermicro platforms, with full solution integration by Redapt. Intel is squarely focused on bringing to market a choice of easy-to-deploy cloud solutions that are fully optimized for Intel platforms, and this reference architecture will help accelerate Tectonic-based deployments.
These are just a few of the many steps we are taking with our technology partners and the broader ecosystem to help businesses realize the vision of Cloud for All—and tens of thousands of new clouds. For a closer look at the work we are doing on this front, visit Intel Cloud for All.