vSAN on HPE Synergy Composable Infrastructure – Part 2

Firstly, apologies for the delay in getting the follow-up to this series posted. I am pulling these together now to post in quicker succession, and hopefully they will all be published around VMworld US!

This blog post dives into the configuration and components of our vSAN setup on HPE Synergy, building on the overview in Part 1, where I mentioned this environment would be supporting VDI workloads.

First, a little terminology to help you follow along with the rest of the blog articles:
Frame – Enclosure chassis which can hold up to 12 compute modules and 8 interconnect modules.
Interconnect – Module linking the compute modules to the datacentre fabric, such as Fibre Channel, Ethernet, or SAS.
Compute Module – Blade server containing CPU, memory, PCI expansion cards, and optionally disk.
Storage Module – Storage chassis within the above frame which can hold up to 40 SFF drives (SAS / SATA / SSD). Each storage module occupies 2 compute slots.
Stack – Between 1 and 3 Synergy frames combined to form a single logical enclosure, allowing Ethernet uplinks to the datacentre to be shared via Virtual Connect.
Virtual Connect – HPE technology that allows multiple compute nodes to share a smaller set of uplinks to the datacentre networking infrastructure. It acts much like a switch internal to the Synergy stack.

All of the vSAN nodes are contained within a single Synergy frame. The main reason for this is that, today, HPE do not support SAS connectivity across multiple frames within a stack, so the compute nodes accessing the storage must sit in the same frame as the storage module. You can mix the density of storage vs. compute within a frame however you like; using a single storage module leaves 10 bays for compute.
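To make the density trade-off concrete, here is a minimal Python sketch of the bay arithmetic using the figures quoted above (12 compute bays per frame, 2 bays per D3940 storage module, up to 40 SFF drives each); the function name and structure are purely illustrative and not taken from any HPE tooling.

```python
# Illustrative bay arithmetic for a single Synergy frame, based on the
# figures quoted in this post (not an HPE tool or API).
FRAME_BAYS = 12               # compute module bays per frame
BAYS_PER_STORAGE_MODULE = 2   # each D3940 occupies 2 compute slots
DRIVES_PER_STORAGE_MODULE = 40

def frame_density(storage_modules: int) -> dict:
    """Return the compute bays remaining and drive bays gained."""
    compute_bays = FRAME_BAYS - storage_modules * BAYS_PER_STORAGE_MODULE
    if compute_bays < 0:
        raise ValueError("Too many storage modules for a single frame")
    return {
        "compute_bays": compute_bays,
        "drive_bays": storage_modules * DRIVES_PER_STORAGE_MODULE,
    }

# One storage module, as in this design: 10 compute bays, 40 drive bays.
print(frame_density(1))
```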

Our vSAN configuration is set out as follows (with a quick capacity breakdown after the list):
1 x 12000 Synergy Frame with:

2 x SAS Interconnect Modules
2 x Synergy 20Gb Interconnect Link Modules
1 x D3940 Storage module with:

10 x 800GB SAS SSDs for Cache
30 x 3.84TB SATA SSDs for Capacity
2 x I/O Modules

10 x SY480 Gen10 compute modules with:

768GB RAM (24 x 32GB DDR4 Sticks)
2 x Intel Xeon Gold 6140 CPUs (2.3GHz x 18 cores)
2 x 300GB SSDs (for ESXi Installation)
Synergy 3820C CNA / Network Adapter
P416ie-m Smart Array Controller
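As a rough illustration of how those drives break down per node, the short Python sketch below assumes the D3940 drives are zoned evenly across the 10 compute modules, giving each node one disk group of 1 cache and 3 capacity devices; that even split is my assumption rather than something dictated by the hardware.

```python
# Back-of-the-envelope disk layout for the configuration above, assuming the
# D3940 drives are zoned evenly across the 10 nodes (my assumption).
NODES = 10
CACHE_SSDS, CACHE_SIZE_TB = 10, 0.8         # 800GB SAS SSDs for cache
CAPACITY_SSDS, CAPACITY_SIZE_TB = 30, 3.84  # 3.84TB SATA SSDs for capacity

cache_per_node = CACHE_SSDS // NODES            # 1 cache device per node
capacity_per_node = CAPACITY_SSDS // NODES      # 3 capacity devices per node
raw_per_node_tb = capacity_per_node * CAPACITY_SIZE_TB
raw_cluster_tb = raw_per_node_tb * NODES

print(f"{cache_per_node} cache + {capacity_per_node} capacity drives per node")
print(f"Raw capacity: {raw_per_node_tb:.2f} TB per node, {raw_cluster_tb:.1f} TB for the cluster")
```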

The above frame is actually housed within a logical enclosure, or stack, containing 3 frames. This means the entire stack shares 2 redundant Virtual Connect interconnects out to the physical switching infrastructure – but in our configuration these sit in a different frame from the one containing the vSAN nodes. The stack is interconnected via 20Gb Interconnect Link Modules to a pair of master Virtual Connect modules. For our environment, we have 4 x 40GbE uplinks to the physical switching infrastructure per stack (2 per redundant interconnect).

We keep our datacentre networking relatively simple, so all VLANs are trunked through the Virtual Connect modules directly to ESXi. We decided not to configure any network separation or internal networks within Virtual Connect. As a result, vSAN replication traffic and vMotion traffic traverse out to the physical switching infrastructure and hairpin back in; however, this is of little concern given the bandwidth available to the stack.
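For a sense of scale, here is a rough, illustrative Python sketch of why the hairpin is tolerable. It assumes each node's 3820C presents 20Gb of converged bandwidth (an assumption on my part) and that hairpinned east-west traffic crosses the 4 x 40GbE uplinks twice; the numbers are back-of-the-envelope only.

```python
# Rough estimate of hairpin headroom through the stack uplinks.
# Per-node bandwidth of 20Gb is assumed; uplink figures are from this post.
NODES = 10
NODE_BANDWIDTH_GBPS = 20         # assumed converged bandwidth per node
UPLINKS, UPLINK_SPEED_GBPS = 4, 40

uplink_capacity = UPLINKS * UPLINK_SPEED_GBPS      # 160 Gb/s per stack
aggregate_node_bw = NODES * NODE_BANDWIDTH_GBPS    # 200 Gb/s theoretical maximum

# A hairpinned east-west flow consumes uplink bandwidth on the way out and
# again on the way back in, so roughly half the uplink capacity is usable
# for that traffic.
east_west_headroom = uplink_capacity / 2           # ~80 Gb/s

print(f"Uplink capacity: {uplink_capacity} Gb/s, node aggregate: {aggregate_node_bw} Gb/s")
print(f"Approximate east-west headroom via the hairpin: {east_west_headroom:.0f} Gb/s")
```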

That’s all for the overview of the hardware, but do let me know if there is any other detail you would like to see on this topic! The next post will detail how a blade is ‘cabled’ to use the storage and network in a profile.
