We used a 4-node cluster, with each node configured with the following hardware:
- Intel® Server System R2208WF
- 2x Intel® Xeon® Platinum 8168 CPU (24 cores @ 2.7 GHz)
- 128GiB DDR4 DRAM
- 2x 375GB Intel® Optane SSD DC P4800X (NVMe SSD)
- 4x 1.2TB Intel® SSD DC S3610 SATA SSD
- Intel® Ethernet Connection X722 with 4x 10GbE iWARP RDMA
- BIOS configuration:
  - C-states disabled
  - BIOS performance plan
  - Turbo Boost on
  - Hyper-Threading on
We deployed Windows Server 2016 Storage Spaces Direct and stood up VMFleet with:
- 4x 3-copy mirror CSV volumes
- 24 VMs per node
- Each VM rate-limited to 7,500 IOPS (similar to an Azure P40 disk)
Each VM runs DISKSPD with a 4K IO size, a 90% read / 10% write mix, rate-limited to 7,500 IOPS. This produces a total of ~720K IOPS (4 nodes × 24 VMs × 7,500 IOPS).
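The aggregate numbers can be sanity-checked with a little arithmetic. All figures below come from the configuration above; the script itself is just an illustrative sketch:

```python
# Sanity-check the aggregate VMFleet workload numbers.
nodes = 4
vms_per_node = 24
iops_per_vm = 7_500          # per-VM rate limit (similar to Azure P40)
io_size_bytes = 4 * 1024     # 4K IO size

total_iops = nodes * vms_per_node * iops_per_vm
read_iops = int(total_iops * 0.90)    # 90% read
write_iops = total_iops - read_iops   # 10% write
bandwidth_gb_s = total_iops * io_size_bytes / 1e9

print(f"total IOPS: {total_iops:,}")    # 720,000
print(f"read IOPS:  {read_iops:,}")     # 648,000
print(f"write IOPS: {write_iops:,}")    # 72,000
print(f"bandwidth:  ~{bandwidth_gb_s:.1f} GB/s")
```

At 4K per IO, ~720K IOPS works out to roughly 2.9 GB/s of aggregate storage bandwidth across the cluster.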
Read IO is served in about 80 microseconds! This is significantly lower than anything we have seen before. Write IO is served in about 300 microseconds. Write latency is higher than read latency primarily because of network latency: each write is mirrored in 3 copies to peer nodes in the cluster.
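As a rough plausibility check on those latencies, Little's Law (mean concurrency = throughput × mean latency) gives the implied number of IOs in flight across the cluster. The IOPS and latency figures are from the measurements above; the modelling itself is our own back-of-the-envelope sketch:

```python
# Little's Law: L = X * W (concurrency = throughput * mean latency).
# Throughput split is 90/10 read/write out of ~720K total IOPS.
read_iops, write_iops = 648_000, 72_000
read_latency_s = 80e-6     # ~80 us per read
write_latency_s = 300e-6   # ~300 us per 3-copy mirrored write

outstanding_reads = read_iops * read_latency_s     # ~52 reads in flight
outstanding_writes = write_iops * write_latency_s  # ~22 writes in flight

print(f"outstanding reads:  ~{outstanding_reads:.0f}")
print(f"outstanding writes: ~{outstanding_writes:.0f}")
```

Only ~70-odd IOs in flight across 96 VMs suggests the per-VM rate limit, not queueing inside the storage stack, is what bounds this run.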
In addition, CPU consumption is below 25%, leaving plenty of headroom for applications to consume this leap in storage performance.
We are very excited about these numbers and the value that the new generation of Intel Xeon Scalable processors, Intel Optane DC SSDs, and the Intel Ethernet Connection X722 with 4x 10GbE iWARP RDMA can deliver to our joint customers. Let me know what you think.
Until next time