Using a self-hosted runner with GitHub Actions

As I was going through the excellent short course called Azure Infrastructure as Code with GitHub (by fellow MVP, Barbara Forbes), a thought appeared – what would I need to do to use my own, custom runner machine inside a pipeline… for, I don’t know… security/privacy concerns, isolation, special requirements, a different OS, control, price… or just to complicate things a bit?

Of course, GitHub supports this and it’s called a self-hosted runner.

So, what do I need to do to use this self-hosted runner with my GitHub Actions?

It’s relatively simple – there is an application package that gets installed on your runner machine, which then listens for jobs and eventually does all the work defined in your workflow!

But first, let’s introduce my environment.

I have a simple GitHub Action (workflow), which creates a simple storage account in my Azure environment (there is actually no need to convert Bicep to ARM before deployment, but it seemed cool 😀). It’s currently using the “ubuntu-latest” runner, provided by GitHub… which also has all the needed components inside (like Azure CLI, Azure PowerShell, …).

And it works fine. When there is a push to my GitHub repository, GitHub Actions starts and does what is needed on my Azure environment via this workflow:
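It’s essentially a checkout, an Azure login, the Bicep-to-ARM build and a deployment – something like this (the Bicep file name, resource group and the AZURE_CREDENTIALS secret are placeholders, so yours will differ a bit):

```yaml
name: deploy-bicep

on: [push]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      # get the repository content onto the runner
      - name: Checkout repository
        uses: actions/checkout@v3

      # log in to Azure with a service principal stored as a repository secret
      - name: Azure login
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

      # the (unnecessary, but cool) Bicep-to-ARM conversion
      - name: Build Bicep to ARM
        run: az bicep build --file main.bicep

      # deploy the generated ARM template to a resource group
      - name: Deploy template
        run: |
          az deployment group create \
            --resource-group my-resource-group \
            --template-file main.json
```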

And the mighty Bicep file (😀) it’s using for the deployment is:
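In essence, it’s just a storage account resource – something along these lines (the parameter defaults and API version are only an example):

```bicep
// a simple storage account (the name must be globally unique)
param location string = resourceGroup().location
param storageAccountName string = 'st${uniqueString(resourceGroup().id)}'

resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: storageAccountName
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}
```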

Of course, this runs just fine on a standard (hosted) runner:

Not that much is needed to run this workflow (successfully) on a self-hosted runner.

First, I’ve created a new virtual machine (I’ll use a simple Ubuntu Hyper-V VM, no autoscaling, no… nothing) called hermes (god of speed 😀), with freshly installed Ubuntu 22.04.1-LTS (minimized).

After that, I went to the Settings of my GitHub repository and got the download and install scripts for the x64 Linux runner:
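Those scripts boil down to downloading the runner package, configuring it against your repository and starting it – something like this (the exact version, checksum and registration token come from your repository’s Settings page):

```bash
# create a folder for the runner and download the x64 Linux package
mkdir actions-runner && cd actions-runner
curl -o actions-runner-linux-x64-<version>.tar.gz -L \
  https://github.com/actions/runner/releases/download/v<version>/actions-runner-linux-x64-<version>.tar.gz
tar xzf ./actions-runner-linux-x64-<version>.tar.gz

# register the runner with your repository (the token is shown on the Settings page)
./config.sh --url https://github.com/<owner>/<repository> --token <registration-token>

# start listening for jobs
./run.sh
```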

As you can see, I’ll be using crontab later to automatically (re)start my self-hosted runner.
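In my case that’s just a @reboot entry (the runner path below is an assumption – use wherever you’ve extracted the runner):

```bash
# crontab -e
@reboot cd /home/<user>/actions-runner && ./run.sh
```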

If everything went well, you should see your runner “up and running” (😀) in the GitHub portal:

Next, I’ll use the following script to install all prerequisites for my workflow (like Azure CLI, Azure PowerShell, etc. – it really depends on your workflow and things you use):
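Something along these lines should do (official install methods for the Azure CLI, Bicep and Azure PowerShell – trim it down to what your workflow actually uses):

```bash
# Azure CLI (Microsoft's official install script for Debian/Ubuntu)
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

# Bicep CLI, used by 'az bicep build'
az bicep install

# PowerShell and the Azure PowerShell (Az) module
sudo snap install powershell --classic
pwsh -Command "Install-Module -Name Az -Repository PSGallery -Force"
```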

Once this is done, my self-hosted runner hermes should be ready to run the workflow.

To try this, I need to make a slight update to my workflow file – line 12 inside the job configuration should be updated from “runs-on: ubuntu-latest” to “runs-on: self-hosted”.

So, my workflow YAML file now looks like this:
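The only real difference from the workflow shown earlier is the runner label in the job definition:

```yaml
jobs:
  deploy:
    runs-on: self-hosted   # was: ubuntu-latest
```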

And once I push the configuration to my GitHub, my workflow automatically starts and runs on hermes, my self-hosted runner:

If we prepared our runner right, all is good! 😊

Of course, our resources are deployed successfully:

So, this is how you can use your own self-hosted runner to execute your GitHub Actions (workflows).

Cheers!

UniFi Network Application on Ubuntu Server on Raspberry Pi 3 (arm64)

Another catchy title, right? 😀

Decided that I want to move my UniFi network controller to an Ubuntu-based installation, running on top of a Raspberry Pi (3) device which was collecting dust.

UPDATE: If you’re having issues with Java 11 or the path not being set, take a look at the comment by G. Faria below.

This can be easily achieved by following these steps:

  • take a backup of your current configuration (my controller was offline, so I’ve just copied the last automatic backup from /var/lib/unifi/backup/autobackup)
  • prepare the SD card with OS installation (detailed info here) – I’ve selected Ubuntu Server 21.10:

  • with the prepared SD card, boot Ubuntu Server on your Raspberry Pi device
  • first login is ubuntu/ubuntu, and you’ll need to change the password immediately after
  • next, you’ll probably want to set your Raspberry Pi to use a static IP configuration – I’m using netplan to set it up:
    • just in case, I removed all *.yaml files inside /etc/netplan/
    • create a new netplan template (YAML file) called 00-eth0.yaml in /etc/netplan/ (watch those white spaces!)
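For example (the addresses, gateway and DNS servers below are placeholders – replace them with your own):

```yaml
# /etc/netplan/00-eth0.yaml
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      addresses:
        - 192.168.1.20/24
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1, 1.1.1.1]
```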

    • apply the configuration (your IP address will be reconfigured, so you’ll also lose the current SSH connection, if connected remotely(!))
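In other words:

```bash
# apply the new netplan configuration
sudo netplan apply
```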

  • install the UniFi Network Application:
    • there is a nice official guide here, once you take care of prerequisites, but basically:
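At the time of writing, the official APT-based installation boils down to something like this (do check the linked guide for the current repository/key URLs and for the Java/MongoDB prerequisites, especially on arm64):

```bash
# add the UniFi APT repository and its GPG key
echo 'deb https://www.ui.com/downloads/unifi/debian stable ubiquiti' | \
  sudo tee /etc/apt/sources.list.d/100-ubnt-unifi.list
sudo wget -O /etc/apt/trusted.gpg.d/unifi-repo.gpg https://dl.ui.com/unifi/unifi-repo.gpg

# install the UniFi Network Application
sudo apt update
sudo apt install unifi
```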

  • restore from backup and start using it:

Note: If you check your unifi service status, you may see “WARN Unable to load properties from ‘/usr/lib/unifi/data/system.properties’”:

This can easily be resolved by enabling the built-in “uncomplicated firewall” (ufw) – don’t forget to open ports you’ll need when it’s active! (such as SSH (22/tcp), inform endpoint for your devices (8080/tcp), UniFi Network Application web (8443/tcp), etc.):
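For example (these match the ports mentioned above – add others as needed):

```bash
# allow the ports you need, then turn the firewall on
sudo ufw allow 22/tcp      # SSH
sudo ufw allow 8080/tcp    # device inform endpoint
sudo ufw allow 8443/tcp    # UniFi Network Application web UI
sudo ufw enable
sudo ufw status verbose
```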

* can be done more restricted, if needed
** full list of ports is available at https://help.ui.com/hc/en-us/articles/218506997-UniFi-Ports-Used – add them if needed (for instance, 6789/tcp is used when testing upload/download with mobile app)
*** be careful – it’s a firewall!

Note: If your access point is shown as offline/timing out, maybe you forgot to open up the “inform port” 8080/tcp in UFW (been there, of course 😀):

And that’s it – you now have a fully functioning UniFi controller/UniFi Network Application running on top of your Ubuntu Server powered Raspberry Pi device!

Cheers!

P.S. Enable the autobackup feature… it’s useful (sometimes)! 😀

Bad Request for url (error 400) in AKS

I’ve decided to go through the awesome AKS Workshop on Microsoft Learn and had some issues (with my setup), which I wanted to share, in case someone else hits them.

It was all good until I got to the part of creating the AKS cluster with the Azure CLI – I was using Windows Terminal with WSL (Ubuntu 20.04) instead of Azure Cloud Shell, as suggested. I’ve gone through the steps of preparing the variables needed for creating the cluster, as it says, and when I finally tried to create the cluster by using the “az aks create” command, I got an error:
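For context, the command itself was something along these lines (the variable names and values follow the workshop’s earlier steps and aren’t important for the story):

```bash
az aks create \
    --resource-group $RESOURCE_GROUP_NAME \
    --name $AKS_CLUSTER_NAME \
    --node-count 2 \
    --kubernetes-version $VERSION \
    --network-plugin azure \
    --vnet-subnet-id $SUBNET_ID \
    --service-cidr 10.2.0.0/24 \
    --dns-service-ip 10.2.0.10 \
    --generate-ssh-keys
```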

The error states that something is wrong with our request, and neither the --verbose nor the --debug option was giving me any useful details (actually, it was in front of me all the time, but I didn’t see it 😊). I’ve rechecked/reset the variables, tried once more and once more… it was all the same. As Google was conveniently down at the time (who would say, right?!), I had to try and figure it out by myself. So, I looked at the error once again:

Operation failed with status: ‘Bad Request’. Details: 400 Client Error: Bad Request for url: https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/aks-workshop/providers/Microsoft.Network/virtualNetworks/aks-vnet/subnets/aks-subnet%0D/providers/Microsoft.Authorization/roleAssignments?$filter=atScope%28%29&api-version=2018-09-01-preview

… and then it struck me!

There’s some trash in the URL (more precisely – my AKS subnet ID had “%0D” appended to its end)!

And if we check what “%0D” exactly stands for, it says “carriage return” (which I obviously didn’t want to be a part of my subnet ID) – so, even though it all seemed fine when looking at the variable content, now I know it wasn’t.

Easy-peasy – we can fix the part where we’re extracting this subnet ID, or we can just replace the variable’s value with the right one (without the %0D at its end, that is).
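For example, stripping the carriage return while capturing the ID does the trick (the vnet/subnet names are taken from the error above, the rest are placeholders):

```bash
SUBNET_ID=$(az network vnet subnet show \
    --resource-group aks-workshop \
    --vnet-name aks-vnet \
    --name aks-subnet \
    --query id --output tsv | tr -d '\r')
```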

That got me going… towards the next error. This one was actually more descriptive (yes, the first one is descriptive enough too, if you read it carefully 😊) – it said that there was additional content inside my Kubernetes version variable:

Operation failed with status: ‘Bad Request’. Details: Error to parse agent pool version “1.19.3\r”: Invalid character(s) found in patch number “3\r”

You can see the extra “\r”, which, again, is there because of a bad value assigned to the $VERSION variable.

Which can also be easily fixed, in the same way – by stripping the stray carriage return when assigning the value.

One other funny thing I’ve observed: when getting my Kubernetes cluster credentials, as you can see below, they were actually merged into C:\Users\tomica\.kube\config:

This was funny because I’m inside WSL… which doesn’t actually have C:\Users\tomica\.kube\config, right? (and no, the credentials weren’t merged into /home/tomica/.kube/config, which kubectl inside WSL uses by default – they actually ended up in /mnt/c/Users/tomica/.kube/config… funny, will check with the MS folks) 😊

Fair enough – we can merge them manually or just point kubectl at the right file and we’re good to go:
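For example (the Windows-side path is the one from above; the second option is the standard KUBECONFIG merge trick):

```bash
# option 1: just use the file the credentials were merged into
export KUBECONFIG=/mnt/c/Users/tomica/.kube/config
kubectl get nodes

# option 2: merge it into the default WSL kubeconfig
KUBECONFIG=~/.kube/config:/mnt/c/Users/tomica/.kube/config \
  kubectl config view --flatten > ~/.kube/config.merged
mv ~/.kube/config.merged ~/.kube/config
```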

There you go – if you get stuck on similar things, maybe this can help you. 😊

Cheers!

Creating some virtual machines in Azure with PowerShell

The other day I was creating some Linux virtual machines (I know, I know…) and, with Azure being my preferred hosting platform, I’ve decided to create these machines by using a simple PowerShell script. Not because I’m so good at PowerShell, but because I like it… and sometimes I really don’t like clicking through the wizard to create multiple machines.

I wanted to create multiple machines with ease, each with a “static” IP address from the provided subnet, accessible via the Internet (SSH, HTTP) and running the latest Ubuntu Linux, of course.

So, I was browsing through the official documentation (a.k.a. Microsoft Docs, more specifically https://docs.microsoft.com/en-us/azure/virtual-machines/linux/quick-create-powershell), and I’ve come up with this (my version of the official docs):
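In short, it’s a resource group, a virtual network with an NSG (SSH/HTTP allowed), and then a loop creating a NIC with a static private IP, a public IP and the VM itself for each machine – a condensed sketch of the idea (names, region, VM size, image SKU and IP ranges are just examples, adjust before use):

```powershell
$location     = "westeurope"
$rgName       = "linux-vms-rg"
$vmNames      = @("web1", "web2", "web3")
$cred         = Get-Credential   # local admin user for the VMs

# resource group, virtual network and subnet
New-AzResourceGroup -Name $rgName -Location $location
$subnet = New-AzVirtualNetworkSubnetConfig -Name "default" -AddressPrefix "10.0.1.0/24"
$vnet = New-AzVirtualNetwork -Name "linux-vnet" -ResourceGroupName $rgName `
    -Location $location -AddressPrefix "10.0.0.0/16" -Subnet $subnet

# NSG allowing SSH and HTTP from the Internet
$ruleSsh = New-AzNetworkSecurityRuleConfig -Name "Allow-SSH" -Protocol Tcp -Direction Inbound `
    -Priority 1000 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * `
    -DestinationPortRange 22 -Access Allow
$ruleHttp = New-AzNetworkSecurityRuleConfig -Name "Allow-HTTP" -Protocol Tcp -Direction Inbound `
    -Priority 1010 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * `
    -DestinationPortRange 80 -Access Allow
$nsg = New-AzNetworkSecurityGroup -Name "linux-nsg" -ResourceGroupName $rgName `
    -Location $location -SecurityRules $ruleSsh, $ruleHttp

# create the VMs, each with a "static" private IP from the subnet
$i = 10
foreach ($vmName in $vmNames) {
    $pip = New-AzPublicIpAddress -Name "$vmName-pip" -ResourceGroupName $rgName `
        -Location $location -AllocationMethod Static -Sku Standard
    $nic = New-AzNetworkInterface -Name "$vmName-nic" -ResourceGroupName $rgName `
        -Location $location -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id `
        -NetworkSecurityGroupId $nsg.Id -PrivateIpAddress "10.0.1.$i"
    $vmConfig = New-AzVMConfig -VMName $vmName -VMSize "Standard_B2s" |
        Set-AzVMOperatingSystem -Linux -ComputerName $vmName -Credential $cred |
        Set-AzVMSourceImage -PublisherName "Canonical" -Offer "0001-com-ubuntu-server-focal" `
            -Skus "20_04-lts-gen2" -Version "latest" |
        Add-AzVMNetworkInterface -Id $nic.Id
    New-AzVM -ResourceGroupName $rgName -Location $location -VM $vmConfig
    $i++
}
```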

If this helps you with a similar task – you’re welcome.

Cheers!

Deploying Linux machines by using System Center 2016: Virtual Machine Manager templates

In light of the “Microsoft loves Linux” initiative, you can now deploy your Linux virtual machines by using templates in System Center 2016: Virtual Machine Manager. As I was searching on how to do this (successfully), there were a couple of articles that helped, so I’ve decided to put together a short list of all the necessary steps (in one place).

Steps to make your Linux VM template deployments work:

  • create a new (Generation 2) virtual machine (as you would normally do)
  • install the Linux operating system in that virtual machine (as you would normally do)
    • HINT: A list of supported Linux distributions and versions on Hyper-V is available here.
  • install the Linux Integration Services (LIS) (as per this post – the commands are sketched after this list):
    • open the “modules” file
    • add the following to the end of this file:
    • save it (Ctrl+X and Y)
    • install LIS and reboot the machine by using the following commands:
    • check if the services are running by using the command:
  • install the Virtual Machine Manager agent (as per this post – also sketched after this list):
    • share the folder C:\Program Files\Microsoft System Center 2016\Agents\Linux on your VMM machine
    • copy the VMM agent files to Linux virtual machine
      • as a real Windows admin, I did it through the GUI
    • install the agent:
  • fix the boot for the Generation 2 virtual machine (boot information is, by default, stored in the VM configuration file, not on the disk – Ben wrote a great article on this “issue”; see the sketch after this list)
    • Ben’s way (didn’t work for me):
      • change directory to the boot EFI directory
      • copy the ubuntu directory into a new directory named boot
      • change directory to the newly created boot directory
      • rename the shimx64.efi file
    • TriJetScud’s way in the comments (worked for me with Ubuntu 16.04 Generation 2 VM):
  • shut down the virtual machine and copy its VHDX to the VMM Library
    • HINT: Don’t forget to refresh the VMM Library.
  • go to the VMM Library, right-click the copied VHDX and select the Create VM template option
  • proceed with creating the template as you normally would, up to the Configure Operating System part
    • HINT: If you are using Secure boot, don’t forget to select the MicrosoftUEFICertificateAuthority template in hardware settings.
  • there, under Guest OS profile, select the option to create new Linux operating system customization settings
  • next you specify your guest OS settings and finish creating the template
  • now you can create a new Linux virtual machine from the template you’ve configured!
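And, as promised, a rough sketch of the commands behind the LIS, VMM agent and boot-fix steps above (an Ubuntu guest is assumed; exact file names, package names and versions may differ – the linked posts are the authoritative source):

```bash
# --- Linux Integration Services (LIS) ---
sudo nano /etc/initramfs-tools/modules   # the "modules" file
# add the Hyper-V modules to the end of the file:
#   hv_vmbus
#   hv_storvsc
#   hv_blkvsc
#   hv_netvsc
sudo apt-get update
sudo apt-get install -y linux-virtual linux-cloud-tools-virtual linux-tools-virtual
sudo reboot
# check that the Hyper-V modules/daemons are running:
lsmod | grep hv_

# --- VMM guest agent ---
# copy the 'install' script and the scvmmguestagent.*.x64.tar archive
# from the VMM server share to the Linux VM, then:
sudo ./install scvmmguestagent.<version>.x64.tar

# --- Generation 2 boot fix ---
# (this is Ben's way, which didn't work for me - TriJetScud's way is in the linked comments)
cd /boot/efi/EFI
sudo cp -r ubuntu/ boot
cd boot
sudo mv shimx64.efi bootx64.efi
```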

Hope it helps!

Cheers!