Playing around with Azure Stack HCI

Decided to have some fun with (nested) Microsoft Azure Stack HCI in my lab.

If you want to do the same, I’ve scripted most of the stuff you need, so… maybe it will be useful.

Steps to prepare a brand new, shiny, nested Azure Stack HCI lab are (roughly):

  • prepare your (parent) Windows Server 2022 Hyper-V host (ensure enough resources are available)
    • it already hosts my Active Directory, DNS, DHCP, router, … VMs
    • everything will be saved locally to D:\AzureStackHCI
  • (optional) install Windows Admin Center (WAC) for easier management
    • download it and install it with a simple command:

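    • for example, something like this should do it (a minimal sketch – the download path, log path and port here are my choices, adjust as needed):

      # download the Windows Admin Center installer (official aka.ms shortcut)
      Invoke-WebRequest -Uri "https://aka.ms/WACDownload" -OutFile "D:\AzureStackHCI\WAC.msi" -UseBasicParsing

      # silent install, listening on port 443 with a self-signed certificate
      Start-Process msiexec.exe -Wait -ArgumentList "/i D:\AzureStackHCI\WAC.msi /qn /L*v D:\AzureStackHCI\WAC.log SME_PORT=443 SSL_CERTIFICATE_OPTION=generate"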
  • obtain the Azure Stack HCI 60-day trial ISO image from here
  • make VHD(X) from the obtained ISO image:

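    • roughly like this (a sketch – the file names are mine, and the exact parameter set depends on the version of the script you grab):

      # dot-source the helper script first
      . "D:\AzureStackHCI\Convert-WindowsImage.ps1"

      # turn the Azure Stack HCI ISO into a generalized, UEFI-ready VHDX
      Convert-WindowsImage -SourcePath "D:\AzureStackHCI\AzureStackHCI.iso" `
                           -VHDPath "D:\AzureStackHCI\AzureStackHCI.vhdx" `
                           -VHDFormat VHDX `
                           -DiskLayout UEFI `
                           -SizeBytes 60GB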
    • note that I’m using Convert-WindowsImage.ps1 available here
    • which gives me a nice, generalized Azure Stack HCI VHD(X), which we will “upgrade with things” and later use for VM creation
  • install prerequisites into VHD(X)
    • this one is fairly easy – install Windows roles and features directly to the VHD(X) itself:

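    • something along these lines (a sketch – the feature list below is just an example, adjust it to your needs):

      # install the required roles and features offline, directly into the VHDX
      Install-WindowsFeature -Vhd "D:\AzureStackHCI\AzureStackHCI.vhdx" `
                             -Name Hyper-V, Failover-Clustering, FS-FileServer, Data-Center-Bridging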
    • NOTE: If you try to install the Hyper-V role later, it may fail because we’re running Azure Stack HCI on a “normal” Windows Server, so it gets confused about nested virtualization availability. By preinstalling it, we make sure it just works.
  • update VHD(X) with latest patches:

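    • for example, by mounting the VHD(X) and applying the packages offline (the mount folder here is an arbitrary choice):

      # mount the VHDX to a folder and apply all downloaded updates to it
      New-Item -ItemType Directory -Path "D:\AzureStackHCI\Mount" -Force | Out-Null
      Mount-WindowsImage -ImagePath "D:\AzureStackHCI\AzureStackHCI.vhdx" -Index 1 -Path "D:\AzureStackHCI\Mount" | Out-Null

      Get-ChildItem -Path "D:\AzureStackHCI\Updates" -Filter *.msu |
          ForEach-Object { Add-WindowsPackage -Path "D:\AzureStackHCI\Mount" -PackagePath $_.FullName }

      # commit the changes and unmount
      Dismount-WindowsImage -Path "D:\AzureStackHCI\Mount" -Save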
    • I have previously downloaded all the Azure Stack HCI patches available to D:\AzureStackHCI\Updates
  • add Unattend.xml to handle the “set password at first login” issue
    • it annoys me that I need to set the initial password, so… a simple Unattend.xml file, injected into the VHD(X), should take care of this:

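    • a sketch of the injection part – it assumes you have already saved an Unattend.xml (with the oobeSystem pass / Microsoft-Windows-Shell-Setup component setting the AdministratorPassword) to D:\AzureStackHCI\Unattend.xml:

      # mount the VHDX and drop the answer file where Windows Setup will pick it up
      Mount-WindowsImage -ImagePath "D:\AzureStackHCI\AzureStackHCI.vhdx" -Index 1 -Path "D:\AzureStackHCI\Mount" | Out-Null
      New-Item -ItemType Directory -Path "D:\AzureStackHCI\Mount\Windows\Panther" -Force | Out-Null
      Copy-Item -Path "D:\AzureStackHCI\Unattend.xml" -Destination "D:\AzureStackHCI\Mount\Windows\Panther\Unattend.xml"
      Dismount-WindowsImage -Path "D:\AzureStackHCI\Mount" -Save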
    • NOTE: Make sure you don’t use clear-text passwords in Unattend.xml file!
  • create Azure Stack HCI VMs
    • I’m creating two VMs from our prepared VHD(X), with a couple of additional data disks, a few network adapters for different purposes, nested virtualization enabled, etc.:

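    • a sketch of the creation loop (the VM names, sizes and the “LAN” switch name are my assumptions – adjust them to your environment):

      # create two nested Azure Stack HCI nodes from the prepared VHD(X)
      foreach ($name in "hci-node-01", "hci-node-02") {

          $vmPath = "D:\AzureStackHCI\$name"
          $osVhd  = "$vmPath\$name-OS.vhdx"

          # each node gets its own copy of the prepared, generalized VHD(X)
          New-Item -ItemType Directory -Path $vmPath -Force | Out-Null
          Copy-Item -Path "D:\AzureStackHCI\AzureStackHCI.vhdx" -Destination $osVhd

          New-VM -Name $name -Generation 2 -MemoryStartupBytes 16GB -VHDPath $osVhd -Path "D:\AzureStackHCI" -SwitchName "LAN" | Out-Null
          Set-VM -Name $name -ProcessorCount 4

          # a few extra data disks for Storage Spaces Direct
          1..4 | ForEach-Object {
              New-VHD -Path "$vmPath\$name-Data$_.vhdx" -SizeBytes 100GB -Dynamic | Out-Null
              Add-VMHardDiskDrive -VMName $name -Path "$vmPath\$name-Data$_.vhdx"
          }

          # additional network adapters for storage/cluster traffic
          Add-VMNetworkAdapter -VMName $name -Name "SMB1" -SwitchName "LAN"
          Add-VMNetworkAdapter -VMName $name -Name "SMB2" -SwitchName "LAN"

          # nested virtualization + MAC spoofing, so child VMs can run and reach the network
          Set-VMProcessor -VMName $name -ExposeVirtualizationExtensions $true
          Get-VMNetworkAdapter -VMName $name | Set-VMNetworkAdapter -MacAddressSpoofing On

          Start-VM -Name $name
      }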
  • set up node networking, join the nodes to the domain, and prepare them for the cluster (by using PowerShell Direct):

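    • roughly like this (the IP addresses, interface alias, domain name and credentials below are all lab-specific assumptions):

      $localCred  = Get-Credential -Message "Local Administrator of the HCI nodes"
      $domainCred = Get-Credential -Message "Account allowed to join machines to the domain"

      $i = 0
      foreach ($name in "hci-node-01", "hci-node-02") {
          $i++
          $ip = "192.168.1.2$i"

          # PowerShell Direct - no guest network connectivity needed yet
          Invoke-Command -VMName $name -Credential $localCred -ScriptBlock {

              # static management IP and DNS
              New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress $using:ip -PrefixLength 24 -DefaultGateway "192.168.1.1"
              Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses "192.168.1.10"

              # rename the node and join it to the lab domain
              Add-Computer -NewName $using:name -DomainName "lab.local" -Credential $using:domainCred -Restart
          }
      }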
  • create the Azure Stack HCI cluster:

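    • for example (the cluster name and IP address are assumptions; run this from a domain-joined machine with the failover clustering tools installed):

      # validate the configuration first, then create the two-node cluster without storage
      Test-Cluster -Node "hci-node-01", "hci-node-02" -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"
      New-Cluster -Name "hci-cluster" -Node "hci-node-01", "hci-node-02" -StaticAddress "192.168.1.30" -NoStorage

      # enable Storage Spaces Direct on the new cluster
      Enable-ClusterStorageSpacesDirect -CimSession "hci-cluster"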
  • (optional) register the Azure Stack HCI cluster:

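    • a sketch with the Az.StackHCI module (the subscription ID, region and resource name below are placeholders):

      Install-Module -Name Az.StackHCI -Force

      # registers the whole cluster with Azure; you'll be prompted to sign in
      Register-AzStackHCI -SubscriptionId "<your-subscription-id>" `
                          -ComputerName "hci-node-01" `
                          -Region "westeurope" `
                          -ResourceName "hci-cluster"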
  • create CSV(s) and virtual switches for child workloads (but add the nodes/cluster to WAC first, if you’re not using PowerShell)

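    • for example (the volume size, names and the adapter used for the switch are assumptions):

      # create a volume on the S2D storage pool - it becomes a Cluster Shared Volume automatically
      New-Volume -CimSession "hci-cluster" -FriendlyName "Volume01" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName "S2D*" -Size 200GB

      # create a virtual switch for child workloads on each node
      Invoke-Command -ComputerName "hci-node-01", "hci-node-02" -ScriptBlock {
          New-VMSwitch -Name "ChildWorkloads" -NetAdapterName "Ethernet" -AllowManagementOS $true
      }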
  • play around with your new cluster
  • (optional) clean all/redeploy if needed:

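    • a sketch of the teardown – it simply removes the nested VMs and their files from the parent host (VM names as assumed above):

      foreach ($name in "hci-node-01", "hci-node-02") {
          Stop-VM -Name $name -TurnOff -Force -ErrorAction SilentlyContinue
          Remove-VM -Name $name -Force
          Remove-Item -Path "D:\AzureStackHCI\$name" -Recurse -Force
      }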
And now you have a fully functional, nested, 2-node Azure Stack HCI cluster – nothing too fancy, but you can extend it however you wish! 😊

You can begin exploring Azure Stack HCI itself, use it with Azure Arc, or perhaps install AKS on Azure Stack HCI and play around with it. Or something else.

Cheers!

P.S. You can also use these scripts for stuff other than Azure Stack HCI, of course! 😉
P.P.S. Code is also available on my GitHub page.

Basic SharePoint load balancing

I’ve recently created a simple lab which gave me some answers around load balancing a SharePoint 2016 farm with SSL offloading.

To start, I’ve created a couple of virtual servers (on top of my “supercool home Windows Server 2016 Hyper-V PC” 🙂) – a domain controller, a SQL server and two SharePoint servers. I’ve also downloaded a KEMP LoadMaster appliance (there is also a free one here, which would have been just enough for this lab) and prepared my DigiCert wildcard certificate (there is no need for the wildcard option, but this is the one I already have, so I’ve decided to use it).

So… I’ve prepared a domain controller, joined all the other servers to the domain and then installed SQL Server 2016. After that, on the SharePoint servers, I ran the preparation wizard and created a new SharePoint farm from the first node… with the second node joining it later. At the end, I ran the “Farm configuration” wizard and was all set to do the load balancing part. (And yes – I know that clicking “Next” is lame, but… it works. 🙂)

The networking configuration for this lab is pretty simple. I have two VLANs – VLAN 111 (backend, where all the servers reside) and VLAN 101 (frontend, where my load balancer’s virtual servers are).

I’ve created a new virtual machine for the load balancer, attached it to the two mentioned networks and also added the virtual disk downloaded from KEMP’s website.

[screenshot: the LoadMaster VM configuration]

After that, I went through the initial LoadMaster configuration wizard, which is actually straightforward (setting the password and IP addresses, and importing a certificate afterwards).

With this done, we can create our virtual service(s) – there is actually a great guide for configuring the SharePoint load balancing virtual servers with KEMP LoadMaster.

I’ve used the following basic (manual) settings for my virtual service:

[screenshots: the virtual service settings in KEMP LoadMaster]

HINT: When troubleshooting load balancing, make sure you have only one node behind the balancer… it makes things so much easier to troubleshoot! 🙂

One last thing that wasn’t working with this “Next, Next, Next…” configuration was the Alternate Access Mappings (AAM) part – to be able to access the SharePoint farm over HTTPS and via a public name, AAM should “know about it”. There is a great guide about AAM available – make sure you read it.

Default AAM settings for my farm were:

[screenshot: default AAM settings]

After (a lot of) troubleshooting and research, they were changed to this:

[screenshot: updated AAM settings]
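A sketch of what such a mapping can look like from the SharePoint Management Shell (the URLs below are hypothetical – the idea is that the public, Default-zone URL becomes the HTTPS name users hit on the load balancer, while the plain HTTP URL arriving from the load balancer is registered as an internal URL of the same zone):

    Add-PSSnapin Microsoft.SharePoint.PowerShell

    # make the HTTPS (public) name the Default zone public URL
    Set-SPAlternateURL -Identity "http://sp2016" -Url "https://portal.mydomain.com"

    # register the HTTP URL coming from the load balancer (SSL is offloaded there) as an internal URL
    New-SPAlternateURL -WebApplication "https://portal.mydomain.com" -Url "http://portal.mydomain.com" -Zone Default -Internal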

And… that’s it – it works! 🙂

My totally awesome SharePoint 2016 site, located behind a load balancer and published with a trusted certificate (with SSL session terminating on my virtual KEMP load balancer), was alive:

[screenshot: the SharePoint 2016 site loaded over HTTPS through the load balancer]

To conclude – of all the configuration that was done, getting the AAM right was what gave me the most headache (load balancing/redirections not working right, troubleshooting what’s happening, etc.). Pay special attention to it! Once you figure it out, you’re done. 🙂

Cheers!