Storage Spaces Direct Practical Scalability

When I was attending Ignite, I was eager to hear about what was then called Storage Spaces Shared Nothing and is now called Storage Spaces Direct.

One of the main features of Storage Spaces Direct is the ability to leverage NVMe flash drives. Let’s look at some practical sizing calculations for this use case.

Maximum Storage Spaces Direct cluster nodes: 12 at this point in time, possibly more in the future
Maximum throughput per NVMe drive: 3 GB/s for a Samsung XS1715
Maximum NVMe drives per cluster node: 8 (AIC SB122 or Dell R920)
Maximum throughput per node: 24 GB/s
Total cluster throughput: 288 GB/s
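
To keep the arithmetic honest, here is a minimal back-of-the-envelope sketch in Python, using only the figures listed above, that reproduces those numbers:

```python
# Back-of-the-envelope throughput sizing using the figures above.
GB_PER_S_PER_NVME = 3      # GB/s for a Samsung XS1715
NVME_PER_NODE = 8          # drives per AIC SB122 or Dell R920
NODES = 12                 # current cluster maximum

node_gb_s = GB_PER_S_PER_NVME * NVME_PER_NODE    # 24 GB/s per node
cluster_gb_s = node_gb_s * NODES                 # 288 GB/s per cluster

node_gbps = node_gb_s * 8                        # 192 Gbps per node
cluster_gbps = cluster_gb_s * 8                  # 2304 Gbps, ~2.3 Tbps

print(f"Per node:    {node_gb_s} GB/s ({node_gbps} Gbps)")
print(f"Per cluster: {cluster_gb_s} GB/s (~{cluster_gbps / 1000:.1f} Tbps)")
```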

Those are certainly impressive numbers, but if you want a non-blocking architecture, your network design needs to keep up. Here’s what would be needed.

With each server pushing 192 Gbps, a cluster could potentially push 12 × 192 Gbps, or roughly 2.3 Tbps. To get to those numbers, from what I understand of Storage Spaces Direct so far, you would basically need to create one SOFS share per cluster node and spread your data across those 12 shares. Not too bad so far. Why? CSV block IO redirection is still around from what I gathered, which means that if your data slabs/chunks are distributed across the cluster, you have a problem; more on this later.

Let’s say you want to do something crazy like running SQL Server and leverage all that nice throughput. One does not simply partition a SQL database across 12 VHDX files. Why? That would mean all the IO of the cluster would need to converge back to the node running your SQL Server instance. 2.3 Tbps incoming! Take cover!

So what kind of hardware would you need to support this, you ask? If you take 100GbE, which to my knowledge is the fastest interconnect generally available, you would need about 23 ports per server running your workload, or more tangibly 12 Mellanox ConnectX-4 dual-port cards. You only have 10 PCIe slots in your R920? How unfortunate! Forget about the AIC 1U machine! I would also like to see the guy doing the wiring job on those servers!

Another small detail: at 24 ports per machine, you could only hook up 3 servers to a set of 2 switches with 36 ports each if you’re trying to do dual pathing for link redundancy. Hmm… don’t we need to connect 12 of those servers? Looks like an incoming sizing fail to me. Perhaps I’m wrong and I missed something; if that’s the case, let me know! Really, what’s needed TODAY to make this setup practical is a 1 Tbps interconnect, so you can fully leverage your investment and future-proof your infrastructure a bit.
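
For the network side, here is a rough sketch of the same port math, assuming 100GbE links, dual-port NICs, and a pair of 36-port switches as described above:

```python
# Port count for a single node that must absorb the whole cluster's throughput
# (assumptions from the text: 2.3 Tbps total, 100GbE links, dual-port NICs,
# two 36-port switches for dual pathing).
import math

cluster_gbps = 12 * 192                                   # 2304 Gbps
ports = math.ceil(cluster_gbps / 100)                     # 24 x 100GbE ports (23.04 rounded up)
dual_port_nics = math.ceil(ports / 2)                     # 12 ConnectX-4 dual-port cards

switch_ports_total = 2 * 36                               # a pair of 36-port switches
servers_per_switch_pair = switch_ports_total // ports     # only 3 servers fit

print(f"{ports} ports -> {dual_port_nics} dual-port NICs per server")
print(f"A 2x 36-port switch pair can host only {servers_per_switch_pair} such servers")
```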

So the only viable option to use that kind of throughput would be to shard your database across multiple SQL Server instances. You could use AlwaysOn Availability Groups and create read-only replicas (up to 8 in SQL Server 2014, getting close to 12 but not quite there; I’m not sure whether that’s bumped up in 2016), but then you are potentially wasting quite a bit of storage by duplicating your data around. Azure SQL on-premises, running on Azure Stack/Azure Service Fabric, could start to look very nice. Perhaps a NoSQL database or Hadoop would be more suitable, but for the vast majority of enterprises that’s a radical change for sure!
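
To illustrate the storage overhead of the read-replica route, here is a tiny sketch; the 10 TB database size is purely a hypothetical example, while the 8-replica limit comes from SQL Server 2014:

```python
# Storage cost of fanning reads out with AlwaysOn readable secondaries.
# The 10 TB database size is a hypothetical example; the 8-replica limit is from SQL Server 2014.
db_size_tb = 10
readable_secondaries = 8
copies = 1 + readable_secondaries            # primary + secondaries
total_tb = db_size_tb * copies
print(f"{total_tb} TB of raw capacity to serve one {db_size_tb} TB database")   # 90 TB
```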

Food for thought! I’ll let that one stir for a bit…
