vSAN on HPE Synergy Composable Infrastructure – Part 3

Today’s blog post covers how we configure a Synergy blade from a OneView perspective, ready for installing ESXi and vSAN.

Synergy is centrally managed using HPE OneView. The management appliance is actually a hardware module which slots into the front of the Synergy frames, deployed as a highly available active/passive pair spread across Synergy frames/chassis.
A single OneView appliance pair can manage multiple stacks, so you only need one pair of Synergy appliances per datacentre. The frames are networked together using a management ring topology, with secondary uplink ports providing connectivity to your datacentre switching infrastructure.

In our environment, we have 2 stacks (logical enclosures) of 3 frames each, a total of 6 enclosures, with 2 uplinks from the Synergy management ring to the datacentre switching.

Getting back to OneView, once the initial setup of Synergy is done (which I will not go into as part of this blog series), the first thing to look at is the Server Hardware types.
OneView automatically discovers components, so once it has found your compute modules it will categorise them into different Server Hardware types.
The benefit of this is that you can instantly identify the different compute blades for your various workloads. As our frames contain vSAN blades, FC blades, and dual-height blades for HPC, OneView separates those out, and the categorisation is granular right down to the Mezz cards installed.


So, we can see in the above image that I have a server hardware type which I have renamed to make it easy to identify as our vSAN configuration. OneView has identified which adapters we have installed, and the SmartArray P416ie adapter is the important one, as this is the SAS controller which connects out to the D3940 storage module across the frame’s backplane. It has also identified that there are 10 compute modules of this type, so we can now build a profile template to provision them.
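
If you prefer to see this from the CLI, the same information is available through the OneView PowerShell module (more on the module later in this post). A minimal sketch, assuming the hardware type has already been renamed to 'SY480-Gen10 vSAN' as in the screenshot:

# List all server hardware types OneView has discovered
# (assumes you are already connected with Connect-HPOVMgmt)
Get-HPOVServerHardwareType | Sort-Object Name | Format-Table Name

# Show the compute modules that match the vSAN hardware type
Get-HPOVServerHardwareType -Name 'SY480-Gen10 vSAN' | Get-HPOVServer | Format-Table Name, Model, Status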

Now onto creating a server profile template. This combines the server hardware type we identified above with an enclosure group to define a set of parameters that are applied whenever a server profile is created against a specific compute module.


The great thing about using a profile template is that it ensures a standard configuration across all the compute modules using that profile, and if you change a setting in the template you can pull those settings down into the server profiles. The template allows you to define the following:
• Firmware baseline
• Connections (such as networking, FC)
• Local storage
• SAN storage (for 3PAR integration)
• Boot settings
• BIOS settings

You can see in the above picture how we have configured our server profile template. For the sake of this blog article I have narrowed it down to the network and storage configuration.
For network connectivity we have a tunnel for each ‘side’ of the frame to ensure redundancy across uplinks to the physical networking. These trunk all the VLANs presented from the physical switching infrastructure up to the compute blade, where a DVS handles the networking configuration.
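
For reference, the same pair of tunnel connections can be described with the OneView PowerShell module when scripting a template. This is only a sketch: the network names 'Tunnel-A' and 'Tunnel-B' are illustrative, and I am assuming the tunnel networks already exist in OneView.

# Tunnel network for each 'side' of the frame, looked up in OneView
$TunnelA = Get-HPOVNetwork -Name 'Tunnel-A'
$TunnelB = Get-HPOVNetwork -Name 'Tunnel-B'

# One connection per tunnel gives redundancy across the frame interconnects
$Conn1 = New-HPOVServerProfileConnection -ConnectionID 1 -Network $TunnelA
$Conn2 = New-HPOVServerProfileConnection -ConnectionID 2 -Network $TunnelB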

We have configured the local storage controller to create a RAID 1 array from the 2 drives internal to the compute module. As per the last blog post, these are 300GB SSDs which we install ESXi on. We use this method because we have still found limitations around SD card installs with large memory sizes, particularly for core dumps and vSAN traces, and we wanted to avoid setting up a central log point just for this. SATADOMs are an option for Synergy, but the complexity of how they are utilised by the SmartArray controllers was confusing, so we decided to keep things simple.
The local storage configuration also incorporates the storage you assign from the D3940 storage module; because it is SAS-attached it is considered local storage, whereas storage external to Synergy is treated as SAN storage.

This is the cool bit. OneView has already discovered which drives are in the storage module and shows you these when creating the drive mappings. For the vSAN capacity tier you can see I have requested 3 x 3.84TB SATA SSDs, and for the cache tier I am requesting a single 800GB SAS SSD. With this configured, when you create a server profile it will automatically assign disks from the storage module in ascending numerical order. Below is a screenshot of the storage module in OneView for reference.
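
The storage side of the template can also be expressed with the HPOneView cmdlets. Treat the sketch below as indicative only: parameter names change between module releases, and the D3940 drive requests have additional options (drive sizing, erase-on-delete and so on) that I am leaving out here.

# Internal RAID 1 boot pair on the embedded controller (2 x 300GB SSD for ESXi)
$BootDisk = New-HPOVServerProfileLogicalDisk -Name 'ESXi-Boot' -RAID RAID1 -NumberofDrives 2
$Embedded = New-HPOVServerProfileLogicalDiskController -ControllerID 'Embedded' -Mode RAID -Initialize -LogicalDisk $BootDisk

# D3940 drive requests through the SmartArray P416ie mezzanine controller:
# 3 x SATA SSD for the vSAN capacity tier, 1 x SAS SSD for the cache tier
$Capacity = New-HPOVServerProfileLogicalDisk -Name 'vSAN-Capacity' -NumberofDrives 3 -DriveType SATASSD
$Cache    = New-HPOVServerProfileLogicalDisk -Name 'vSAN-Cache' -NumberofDrives 1 -DriveType SASSSD
$Mezz1    = New-HPOVServerProfileLogicalDiskController -ControllerID 'Mezz 1' -Mode HBA -LogicalDisk $Capacity, $Cache

# These controller objects are then attached to the template's local storage
# configuration (the exact parameter varies by module version)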

A few more settings for the BIOS and boot order, and we are finished with the template. Now we can use the template to do the heavy lifting of creating server profiles against the compute modules. No cabling of SAS adapters or ports, installing drives, or patching network connections in sight!

I will briefly show a screenshot of how you can create a server profile in OneView, but it’s not my style to create 10 server profiles by hand; it has to be done in PowerShell!

So how do you do this magic in PowerShell? The OneView PowerShell module is actually pretty powerful and works fantastically for automating Synergy. With a few lines of code I am able to automatically create all our server profiles ready for installing ESXi. And of course, that’s all going to be automated too.

First of all, you are going to want to obtain the OneView PowerShell Module. You can find it over at the PowerShell Gallery.
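
The module is versioned to roughly match the appliance release, so grab the one that corresponds to your OneView version; HPOneView.420 below is just an example.

# Install the OneView PowerShell module from the PowerShell Gallery
Install-Module -Name HPOneView.420 -Scope CurrentUser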

Creating the profile is actually pretty straightforward. Below is how I bulk-created the profiles for the vSAN compute nodes:

# Connect to the OneView appliance
Connect-HPOVMgmt -Hostname myoneviewappliance.local

# Get all compute modules of the vSAN server hardware type, sorted by bay position
$vSANBlades = Get-HPOVServerHardwareType -Name 'SY480-Gen10 vSAN' | Get-HPOVServer | sort position

# Create Profiles
$count = 0
$ProfileTemplate = Get-HPOVServerProfileTemplate -Name 'SYN01-SY480-10-ESXi-vSAN'
foreach($Blade in $vSANBlades) {
  $BladeName = "myesxhost0$($count).local"
  Write-Host $BladeName
  New-HPOVServerProfile -Name $BladeName -Server $Blade -ServerProfileTemplate $ProfileTemplate -AssignmentType Server
  $count++
}
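
New-HPOVServerProfile kicks off an asynchronous task for each profile, so it is worth checking afterwards that they have all applied cleanly. A quick sketch (the property names are from memory, so adjust to taste):

# Confirm the new profiles exist and have applied without errors
Get-HPOVServerProfile | Where-Object { $_.Name -like 'myesxhost*' } | Format-Table Name, Status, State

Alternatively, the task objects returned by New-HPOVServerProfile can be piped to Wait-HPOVTaskComplete if you want the script to block until each profile has finished applying.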

That’s it! I named the hosts in an ascending numerical sequence. Each profile automatically pulls in the settings from the template, then assigns the SSDs in the D3940 storage module based on the allocation config.

Unfortunately, there is no iLO configuration in OneView, which is a shame; OneView already has full rights into the iLO for orchestration, so it would be relatively simple for it to put a base configuration down. So I also automate the base iLO config. For this you will need the HPREST PowerShell module. The script below sets the Administrator password and the iLO hostname, then reboots the iLO. We use DHCP with Dynamic DNS on our iLO management network, which helps keep things simple. Here’s the code:

# Form names
$ServerNames = (0..9) | % {"myesxhost0$($_)"}

# Initial iLO Config
foreach($Server in $ServerNames) {
  # Request a single sign-on iLO REST session via the server profile in OneView
  Write-Host "-> Getting iLO SSO session for $($Server)" -ForegroundColor Green
  $iLOSSO = Get-HPOVServerProfile -Name "$($Server).local" | Get-HPOVIloSso -IloRestSession

  if($iLOSSO) {
    Write-Host '--> Successful!' -ForegroundColor Green
    
    Write-Host '-> Getting iLO Accounts' -ForegroundColor Green
    $UserAccounts = (Get-HPRESTDir -Href '/redfish/v1/AccountService/Accounts' -Session $iLOSSO).Members.'@odata.id'

    foreach($Account in $UserAccounts) {
      $AccountInfo = Get-HPRESTDataRaw -Session $iLOSSO -Href $Account
      if($AccountInfo.UserName -eq 'Administrator') {
        Write-Host '--> Setting Administrator Password' -ForegroundColor Green
        $PWDSetting = @{}
        $PWDSetting.Add('Password','myPass')
      
        $PWDResult = Set-HPRESTData -Href $Account -Session $iLOSSO -Setting $PWDSetting
      }
    }

    Write-Host '-> Setting Hostname' -ForegroundColor Green

    $iLOHostSet = @{}
    $iLOHostSet.Add('HostName',"$($Server)-ilo")

    $NetResult = Set-HPRESTData -Session $iLOSSO -Href '/redfish/v1/Managers/1/NetworkService/' -Setting $iLOHostSet
    if($NetResult.error.'@Message.ExtendedInfo'[0].MessageId -like '*Reset*') {
      Write-Host '--> Set Successfully!' -ForegroundColor Green
    }

    Write-Host '-> Resetting iLO' -ForegroundColor Green
    $ResetResult = Invoke-HPRESTAction -Session $iLOSSO -Href '/redfish/v1/Managers/1/Actions/Manager.Reset/'

  }
}
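
Because the iLOs re-register via Dynamic DNS once they come back from the reset, a simple follow-up check is to make sure each new hostname resolves and responds. This assumes you are running the script from a machine that can reach the iLO management network:

# Quick check that each iLO has re-registered in DNS and responds after the reset
foreach($Server in $ServerNames) {
  $Alive = Test-Connection -ComputerName "$($Server)-ilo" -Count 1 -Quiet
  Write-Host "$($Server)-ilo reachable: $Alive"
}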

After this we are ready to start getting ESXi installed and vSAN configured!

That’s it for the blog article today. The next blog post will detail automating the ESXi installation and getting vSAN configured.

2 thoughts on “vSAN on HPE Synergy Composable Infrastructure – Part 3”

  1. Have you tested the SATA drives with the SAS module’s firmware update?
    If I read correctly, SATA drives have a single controller, so the drives will be down during firmware upgrades on the frame.
