About Conversations in the Cloud
IT leaders driving the future of software-defined infrastructure share their knowledge and thoughts on current market trends. The podcast series features members of the Intel Builders programs, as well as Intel experts and industry analysts. They provide valuable information on delivering, deploying, and managing cloud computing technology and services in your data center or enterprise.
Did you miss the podcast? Here is a recap of the most important topics covered in the discussion “Is Latency the Key to Storage?”.
Capacity storage vs. latency storage
Many storage experts note that capacity vs. latency is a key dimension along which storage systems differ.
Capacity storage is vast in, well, capacity, but less demanding in terms of performance. It is usually in the petabyte range and supports systems with low performance requirements – systems which are fine with latencies of 1+ millisecond, sometimes 3-10 ms. Capacity storage systems can be found in archive, data retention, and time-series database use cases, for Internet of Things, video, photos, and other similar applications. These systems are sometimes built across different geographic regions, as they can tolerate the high latency that is inevitable when data must travel vast distances – that is simply the physics of it.
Latency storage, in contrast, usually lives within the borders of a single data center. It supports active applications which demand fast performance, such as databases, virtual machines, VDI (desktop virtualization), OLTP (online transaction processing) systems, and others. All these applications demand latencies ranging from less than 1 millisecond down to a few tens of microseconds.
The lower the latency, the faster the application
A fact which is not well understood is that latency is probably the most important metric of a storage system – more important than IOPS for the majority of use cases. Since many applications run their operations sequentially, the lower the latency of the storage system, the faster the application is.
For example, a database will issue a read or write operation to the storage system, wait (latency) to get the data, and then use it – join, merge, etc. – to produce a result. Only then will this result be used for the next query. Therefore a system delivering 200 microseconds of latency will be 10 times faster than a traditional SAN with 2 ms latency.
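The arithmetic behind this claim can be sketched in a few lines. This is an illustrative model only (the function name and numbers are ours, not StorPool's): it assumes a strictly sequential workload where each storage operation must complete before the next one starts, so latency alone caps the operation rate.

```python
def max_sequential_ops_per_sec(latency_seconds: float) -> float:
    """For a strictly sequential workload, where each operation waits for
    the previous one to complete, storage latency caps the operation rate."""
    return 1.0 / latency_seconds

# Traditional SAN at ~2 ms per operation:
san_rate = max_sequential_ops_per_sec(0.002)       # 500 ops/s
# Low-latency system at ~200 microseconds per operation:
fast_rate = max_sequential_ops_per_sec(0.000200)   # 5,000 ops/s

print(f"SAN: {san_rate:.0f} ops/s")
print(f"Low-latency: {fast_rate:.0f} ops/s")
print(f"Speedup: {fast_rate / san_rate:.0f}x")     # 10x
```

Real workloads mix sequential and parallel I/O, so the benefit in practice lands somewhere between this 10x ceiling and no change at all – but for latency-bound applications like transactional databases, the sequential model is close to reality.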
In the episode of Conversations in the Cloud, Boyan Krosnov talked about the storage landscape. He clarified the difference between capacity-driven and latency-driven storage systems.
Software-defined storage and software-defined data centers
Boyan started with a short retrospective of StorPool’s history, dating back to 2011. At that time the idea for developing StorPool came to his team, based on market feedback: existing storage systems were slow, expensive, and not scalable enough. Initially, the main mission of StorPool was to serve companies that wanted to build a public cloud and had storage challenges around the service they were building. In the next stage of the solution’s development, private clouds were also covered. Using StorPool’s solution you can now build high-performance public and private clouds.
StorPool developed a new type of storage software from the ground up – one that is scale-out, high-performance, and extremely low in latency. It is also extremely efficient in terms of server resources, so it can run on a standard server, alongside applications (hyper-converged).
In the following years the market matured, and terms like “software-defined storage” and “software-defined data center” were born and became established.
Most software-defined storage users nowadays are companies which build public or private clouds. They search for a flexible, scalable storage system suited to their needs – one that also provides low latency and high performance.
If a company is running hundreds of VMs, a storage system which delivers hundreds of thousands of IOPS and microsecond-level latency is a must. A best-of-breed software-defined storage solution delivers just that, at a much lower price point than traditional SANs or all-flash arrays.
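A rough back-of-envelope calculation shows where the "hundreds of thousands of IOPS" figure comes from. All the numbers here are illustrative assumptions, not StorPool measurements – per-VM demand varies widely by workload.

```python
# Capacity-planning sketch: estimate the aggregate IOPS a cloud's
# storage backend must sustain. Illustrative assumptions throughout.
vm_count = 500       # "hundreds of VMs"
iops_per_vm = 500    # assumed average per-VM demand (workload-dependent)
peak_factor = 1.5    # assumed headroom for bursts

required_iops = vm_count * iops_per_vm * peak_factor
print(f"Aggregate IOPS target: {required_iops:,.0f}")  # 375,000
```

Even with modest per-VM assumptions, the aggregate quickly lands in the hundreds of thousands of IOPS – which is why a shared storage system serving a cloud must be sized very differently from one serving a single application.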
Is Latency the Key to Storage?
Boyan Krosnov explained that the IOPS numbers advertised by many storage vendors actually have little to do with the application performance companies will get. It is the latency of storage operations that is tightly related to the application performance you are going to see.
Therefore, if you design for low latency, you can achieve latency levels that are a fraction of what a system not designed for it can do. To summarize, Boyan noted that a standard StorPool system, built with standard hardware, can deliver an impressive 200 microseconds of latency under load – for a shared storage system, this is amazing.
The full podcast can be found below:
In conclusion: when you are buying a storage system, focus on a low-latency architecture. It will deliver qualitatively better results for your users and applications.