Bad Request for url (error 400) in AKS

I’ve decided to go through the **awesome** AKS Workshop on Microsoft Learn and had some issues (with my setup), which I wanted to share, in case someone else hits them.

It was all good until I got to the part about creating the AKS cluster with the Azure CLI – I was using Windows Terminal with WSL (Ubuntu 20.04) instead of the suggested Azure Cloud Shell. I went through the steps of preparing the variables needed for creating the cluster, and when I finally tried to create it with the az aks create command, I got an error.
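For reference, the command looked roughly like this (a sketch, assuming the variables – $RESOURCE_GROUP, $AKS_CLUSTER_NAME, $VERSION, $SUBNET_ID – were prepared in the earlier workshop steps):

```bash
# sketch of the workshop-style cluster creation (flags trimmed to the essentials)
az aks create \
    --resource-group $RESOURCE_GROUP \
    --name $AKS_CLUSTER_NAME \
    --vm-set-type VirtualMachineScaleSets \
    --load-balancer-sku standard \
    --node-count 2 \
    --kubernetes-version $VERSION \
    --network-plugin azure \
    --vnet-subnet-id $SUBNET_ID \
    --generate-ssh-keys
```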

The error said that something was wrong with my request, and neither the --verbose nor the --debug option gave me any useful details (actually, the answer was in front of me all the time, but I didn't see it 😊). I rechecked and reset the variables, tried once more and once more… always the same. As Google was conveniently down at the time (who would have thought, right?!), I had to figure it out by myself. So, I looked at the error once again:

Operation failed with status: ‘Bad Request’. Details: 400 Client Error: Bad Request for url: https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/aks-workshop/providers/Microsoft.Network/virtualNetworks/aks-vnet/subnets/aks-subnet%0D/providers/Microsoft.Authorization/roleAssignments?$filter=atScope%28%29&api-version=2018-09-01-preview

… and then it struck me!

There's some trash in the URL – more precisely, my AKS subnet ID had "%0D" appended to its end!

And if we check what "%0D" stands for, it's the "carriage return" character (which I obviously didn't want as part of my subnet ID) – so, even though it all seemed fine when looking at the variable's content, now I know it wasn't.

Easy-peasy – we can fix the part where we're extracting the subnet ID, or just replace the variable's value with the right one (without the %0D at its end, that is).
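For example, in bash, the carriage return can be stripped right when extracting the ID (the tr at the end is the actual fix; the rest assumes the workshop's resource names):

```bash
# re-extract the subnet ID, dropping any stray carriage return (\r)
SUBNET_ID=$(az network vnet subnet show \
    --resource-group $RESOURCE_GROUP \
    --vnet-name aks-vnet \
    --name aks-subnet \
    --query id --output tsv | tr -d '\r')
```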

That got me going… towards the next error. This one was actually more descriptive (yes, the first one is descriptive enough too, if you read it carefully 😊) – it said that I had extra content inside my Kubernetes version variable:

Operation failed with status: ‘Bad Request’. Details: Error to parse agent pool version “1.19.3\r”: Invalid character(s) found in patch number “3\r”

You can see the extra "\r", which, again, is there because of a bad value assigned to the $VERSION variable.

This can be fixed just as easily.
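Same trick, different variable (the JMESPath query here is the workshop-style one for picking the latest non-preview version):

```bash
# pick the latest non-preview Kubernetes version, minus the stray \r
VERSION=$(az aks get-versions \
    --location $REGION_NAME \
    --query 'orchestrators[?!isPreview] | [-1].orchestratorVersion' \
    --output tsv | tr -d '\r')
```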

One other funny thing I observed: when getting my Kubernetes cluster credentials, they were actually merged into C:\Users\tomica\.kube\config:

This was funny because I'm inside WSL… which doesn't actually have C:\Users\tomica\.kube\config, right? (And no, the credentials weren't merged into /home/tomica/.kube/config, which kubectl there uses by default – they actually ended up in /mnt/c/Users/tomica/.kube/config. Funny; I'll check with the MS folks.) 😊

Fair enough – we can merge them manually or just point kubectl at the right file and we're good to go:
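A minimal sketch of both options from inside WSL (paths as in my case):

```bash
# option 1: just point kubectl at the file az aks get-credentials wrote
export KUBECONFIG=/mnt/c/Users/tomica/.kube/config

# option 2: merge it into the default ~/.kube/config
KUBECONFIG=~/.kube/config:/mnt/c/Users/tomica/.kube/config \
    kubectl config view --flatten > /tmp/config.merged
mv /tmp/config.merged ~/.kube/config
```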

There you go – if you get stuck on similar things, maybe this can help you. 😊

Cheers!

Yet another “Kubernetes with Raspberry Pi” post

There's a ton of tutorials on how to get Kubernetes installed onto your Raspberry Pi, so… let's write another one. 😊

As mentioned in my last post, I’ve found my forgotten Raspberry Pi, and played around with installing and configuring Raspbian Buster on it.

Today, I wanted to check if it's possible to install Kubernetes onto such a small machine – there are many articles on the "widest of the world's webs" saying "Yes, it can be done!", so I decided to give it a try! And I chose to follow one of them (it seemed like a nice reference).

As you remember, I’m starting with a cleanly installed (and just slightly customized) Raspbian Buster and building it from there.

And I’ll be using kubeadm for installing my cluster.

So, once I had at least two machines (my Raspberry Pi for the "control plane" and an Ubuntu 20.04 LTS Hyper-V virtual machine as the "node" – you can read more about it here), I prepared them like this:

  • install Docker (in my case)
  • change the default cgroups driver for Docker to systemd
  • add cgroups limit support (for my Raspberry Pi 3)
  • configure iptables
  • disable swap (this one was a bit challenging)
  • prepare for Kubernetes installation (source, keys, kubeadm)
  • install Kubernetes “control plane”
  • add flannel
  • add a node to the cluster
  • test with some workload

One thing that bothered me (on Buster) was disabling swap so that it also stays disabled after a reboot (I know, it's the details that eventually get you) – after a while, I stumbled upon this forum post, and the solution provided by powerpete did the trick! Thank you, @powerpete! 😊
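If I remember correctly, the gist of it (on Raspbian) is taking care of the dphys-swapfile service as well, roughly like this:

```bash
# turn swap off now...
sudo dphys-swapfile swapoff
sudo dphys-swapfile uninstall
# ...and make sure the service doesn't bring it back after a reboot
sudo systemctl disable dphys-swapfile.service
```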

And finally, details about each step are below (outputs are commented and somewhat redacted/condensed):
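Here's a condensed, hedged sketch of those steps (commands and repository URLs were valid at the time of writing – adjust as needed):

```bash
# 1) install Docker
curl -fsSL https://get.docker.com | sudo sh

# 2) switch Docker's cgroups driver to systemd
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker

# 3) cgroups limit support on the Raspberry Pi - append (to the same line!) in /boot/cmdline.txt:
#    cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1

# 4) iptables - make sure bridged traffic is visible to it
cat <<'EOF' | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system

# 5) disable swap (see the snippet above)

# 6) Kubernetes source, keys and tools
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | \
    sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update && sudo apt-get install -y kubelet kubeadm kubectl

# 7) control plane (pod CIDR matches flannel's default)
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# 8) add flannel
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# 9) on the node, join the cluster with the command kubeadm init printed:
#    sudo kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash <hash>
```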

Seems to be working (😊):
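A quick way to check is a look at the nodes and a throwaway test deployment, something like:

```bash
kubectl get nodes -o wide
kubectl create deployment nginx --image=nginx
kubectl get pods -o wide
```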

Cheers!

P.S. I've read about people having issues with flannel and using other network options instead (I didn't hit this one). Also, if you have issues with iptables (v1.8+), you may need to switch to the legacy version (didn't hit that one either).

Found my forgotten Raspberry Pi

And, naturally, decided to put it to use (although, for exactly what… is currently unclear). 😊

So… how?

As there was already a micro SD card inside my Raspberry Pi, I was all set!

Basically, what I had to do:

  • download the OS image (Raspberry Pi OS Lite)
  • download imaging software (Etcher)
  • extract the OS onto micro SD card
  • enable SSH by adding an empty file called “ssh” (yes, without any extension) to the boot volume
  • boot it up
  • set it up as I like

Extracting the OS image onto the micro SD card is a "breeze" with the right tools – select the OS image, select where you want to put it, and click Flash:

After it's finished, don't forget to enable SSH access for yourself (it's easier that way):
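Assuming the card's boot partition shows up as, say, D: on Windows, it's a one-liner:

```powershell
# create the empty "ssh" file on the boot volume (drive letter is illustrative)
New-Item -Path D:\ssh -ItemType File
```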

Done.

Let’s put the card back into Raspberry Pi and boot it up.

A few seconds later, you can use (e.g.) Windows Terminal and the included SSH client to access your Raspberry Pi (the default networking option is DHCP, with the default username pi and password raspberry):

I wanted to "tweak" my installation a bit (with the provided raspi-config script), so I used the following for disabling unnecessary devices, custom network settings, etc.:
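Here's an illustrative sketch (hostname, interface and addresses are made up – use your own):

```bash
# connect first: ssh pi@<raspberry_pi_ip> (default credentials pi/raspberry)

# non-interactive raspi-config tweaks
sudo raspi-config nonint do_hostname mypi   # set a custom hostname
sudo raspi-config nonint do_ssh 0           # keep SSH enabled (0 = enabled)

# custom network settings the Raspbian way - a static IP via /etc/dhcpcd.conf
cat <<'EOF' | sudo tee -a /etc/dhcpcd.conf
interface eth0
static ip_address=192.168.1.20/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1
EOF
sudo reboot
```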

And after a while, my Raspberry Pi is finally ready:

Cheers!

Utilman.exe to cmd.exe and back

Let’s say you have a Windows (virtual) machine, for which you’ve forgotten your login info, but you want to enter it anyway, because of… reasons. 😀

How can you do it?

Note – if the disk/VM is encrypted, you’ll need the decryption key, of course (if you don’t have it, well… I’m sorry, the following won’t really help you).

Ok, if it’s a virtual machine and you only need to grab some data from it, it’s relatively easy – you’ll just mount the virtual disk, extract the data needed and done.

If you need access to the OS instead, you can maybe use the old trick of replacing Utilman.exe with cmd.exe, which essentially gives you a command prompt with local SYSTEM permissions – which then gives you… well, everything you need.

One minor obstacle with this "hack" is the fact that the owner of Utilman.exe is actually TrustedInstaller, so your workflow would be like this:

  • (e.g. turn off the VM, mount the disk, …)
  • replace the owner of Utilman.exe
  • add yourself the needed permissions
  • replace the Utilman.exe with cmd.exe
  • do what you need (e.g. change the local Administrator’s password, set this account as active, …)
  • cleanup (replace the replaced Utilman.exe with the original one)

And we can do this with PowerShell:
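Something like this, with the VM's system disk mounted as X: (drive letter and account are illustrative):

```powershell
$utilman = 'X:\Windows\System32\Utilman.exe'

# take ownership (it belongs to TrustedInstaller) and grant ourselves full control
takeown /F $utilman
icacls $utilman /grant "$($env:USERNAME):F"

# keep the original aside and replace it with cmd.exe
Copy-Item $utilman "$utilman.bak"
Copy-Item 'X:\Windows\System32\cmd.exe' $utilman -Force
```

On the login screen, the Ease of Access button now opens a SYSTEM command prompt, where something like net user Administrator * (and net user Administrator /active:yes) does the job.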

And now you can login as local Administrator again and do the work you wanted to do in the first place. 😊

To leave things (somewhat) the way we found them, we can use the following PowerShell:
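Roughly the reverse of the above (again, paths are illustrative):

```powershell
# put the original Utilman.exe back...
Copy-Item "$utilman.bak" $utilman -Force
Remove-Item "$utilman.bak"

# ...and return ownership to TrustedInstaller
icacls $utilman /setowner 'NT SERVICE\TrustedInstaller'
```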

Cheers!

Network share feeds in WAC

You know about (and actively using) the Windows Admin Center (WAC), right?! 😉

While it's great for managing your Microsoft infrastructure, it can also be extended with various extensions. You can even write and use your own internal, custom extensions, which do… well, whatever you make them do. And you can read all about that here.

But let's go back to the subject of today's post – extensions can be installed from different feeds, official or unofficial, provided by Microsoft or third parties. You can easily add new feeds or remove existing ones by providing the feed location, which can be either a NuGet feed URL or a file share location, as stated in the official docs.

Using a file share location is easy:

  • you choose/create a folder:

  • share it (\\<my_server_name>\WACExtensions in my case):

  • and add it to your feeds – I’ll use the “PowerShell way”:
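A sketch of it, using the ExtensionTools module that ships with WAC (the gateway URL is made up – use your own):

```powershell
# share the folder
New-SmbShare -Name WACExtensions -Path C:\WACExtensions -ReadAccess Everyone

# add the share as an extension feed
Import-Module "$env:ProgramFiles\Windows Admin Center\PowerShell\Modules\ExtensionTools"
Add-Feed -GatewayEndpoint 'https://wac.mydomain.local' -Feed '\\<my_server_name>\WACExtensions'

# check the configured feeds
Get-Feed -GatewayEndpoint 'https://wac.mydomain.local'
```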

But no!

My feed seems to be added successfully, but it’s not showing in the list!

You can try the same through the web interface – it's almost the same (OK, there you'll at least get errors):

And permissions are fine, don’t worry. 😉

Why’s that?!

The catch here is that we added an empty folder/share – when adding it, WAC intelligently looked into the folder, found nothing, and (successfully) didn't add our share to the feed list, as it was empty. And yes, it also forgot to mention any of that when using PowerShell.

So, what can be done?

The workaround/solution is rather simple – just make sure you don’t add an empty feed/folder.

Just for fun – I’ve downloaded the HPE Extension for WAC, moved it into the WACExtensions shared folder and tried to add the feed again:

And – it worked! 😊

Cheers!

Having fun with Helm and file encoding

Had some spare time, so I’ve tried to learn a bit more about Helm, the package manager for Kubernetes.

I’ve decided to follow the relatively new Pluralsight course called – Kubernetes Package Administration with Helm, done by my MVP colleague Andrew Pruski. And it was great – not too long, clear and easy to follow, with only a handful of prerequisites if you want to follow along! Great job!

Of course, there is also the nice, official documentation.

But why am I writing this post?

I was following this course on my Windows 10 laptop, using Visual Studio Code, as suggested, and also using the PowerShell terminal, with Helm v3.3.1.

It all went well until the part where we create our Helm chart – more specifically, when we're filling up the deployment.yaml and service.yaml files. The suggested (and simplest) method is to use simple output redirection (with ">"), like this:
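The pattern looks something like this (the exact objects differ in the course; it's the redirection that matters here):

```powershell
# generate a template and redirect it into the chart
kubectl create deployment nginx --image=nginx --dry-run=client -o yaml > .\templates\deployment.yaml
```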

But, this gave me the following error when trying to deploy the chart:

It's quite obvious – Helm works with UTF-8, and my .yaml files seem to be encoded differently. A quick look at the bottom of my VSCode window confirms it:

How can I fix it?

As I'm using PowerShell, it's pretty easy – instead of the simple output redirection (">"), I pipe the output to the Out-File cmdlet with the -Encoding UTF8 option, in all cases, which takes care of the encoding (and sets it to UTF-8 with BOM, which is just fine for Helm):
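So the line above becomes, for example:

```powershell
# same output, but explicitly written as UTF-8 (with BOM)
kubectl create deployment nginx --image=nginx --dry-run=client -o yaml |
    Out-File -Encoding UTF8 .\templates\deployment.yaml
```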

So, long story short – if you run into the error above (Error: unable to build kubernetes objects from release manifest: error parsing : error converting YAML to JSON: yaml: invalid leading UTF-8 octet), remember to check your file's encoding (and change it to UTF-8, if needed)! 🙂

Cheers!

P.S. Thanks to good people at Pluralsight for providing me a complimentary subscription!

Fixing Hyper-V virtual machine import with Compare-VM

Well, I was rearranging some stuff the other day and came across an interesting "lesson learned", which I'll share. 🙂

In my lab, I’ve had a Hyper-V server running Windows 2012 R2, which I finally wanted to upgrade to something newer. I’ve decided to go with the latest Windows Server Insider Preview (SA 20180), just for fun.

When trying to do an in-place upgrade, I was presented with the message “it can’t be done“, which is fine – my existing installation is with GUI, the new one will be Core.

So, evacuate everything and reinstall.

In the process, I also reorganized some stuff (machines were moved to another disk, not all files were in the same place, etc.).

Installed Windows, installed Hyper-V, created VM switches, but when I tried to import it all back (from PowerShell… because I had no GUI anymore), I was presented with an error.

The error during the virtual machine import was as follows (I know – I could've used a more specific Import-VM command, which selects all the right folders and required options, but… I learned something new by doing it this way!):
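The attempt itself was a plain Import-VM against the machine's configuration file (the path is illustrative):

```powershell
# this failed, telling me to "use Compare-VM to repair the virtual machine"
Import-VM -Path 'D:\VMs\MyVM\Virtual Machines\<guid>.vmcx'
```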

So, the error says it all – “Please use Compare-VM to repair the virtual machine.” 🙂

But how?! 🙂

If you go to the docs page of Compare-VM, you can see how it’s used.

And, in my case, the whole process of repairing this virtual machine looks like this:
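A sketch of the flow, following the pattern from the Compare-VM docs (the switch name and the index are illustrative – check what your own report says):

```powershell
# generate a compatibility report for the machine being imported
$report = Compare-VM -Path 'D:\VMs\MyVM\Virtual Machines\<guid>.vmcx'

# inspect what's broken (a typical case: a network adapter pointing to a
# switch that no longer exists)
$report.Incompatibilities | Format-Table -AutoSize

# repair it - e.g. connect the adapter to one of the newly created switches...
$report.Incompatibilities[0].Source | Connect-VMNetworkAdapter -SwitchName 'External'

# ...and import the VM using the repaired report
Import-VM -CompatibilityReport $report
```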

Hope this helps you as well!

Cheers!

Fixing things with… terraform destroy

I like Terraform, because it’s so clean, fast and elegant; OK, I also suck at it, but hey – I’m trying! 🙂

The long story

Usually, Terraform and its providers are very good at doing things in the order they should be done. But sometimes people come up with silly ideas, and mine was one of those (of course) – I decided to rename something, and it broke things. Just a little.

I have a simple lab in Azure, with a couple of virtual machines behind the Azure Load Balancer, no big deal. All this is being deployed (and redeployed) via Terraform, using the official azurerm provider. It’s actually a Standard SKU Azure Load Balancer (don’t ask why), with a single backend pool and a few probes and rules. Nothing special.

I deployed this lab a few days ago (thanks to good people at Microsoft, I have some free credits to spare) and everything worked just fine, but today I got a wild idea – I decided to rename my backend pool.

With all the automation in place, this shouldn’t be a problem… one would think. 🙂

So, I updated my code, and during the make it so phase (terraform apply), I got some errors (truncated, with only the useful stuff):

Issue

After going through these errors, I realized that my resources were indeed in the same region, but the existing rules were referencing the current backend pool by name, which was actually blocking Terraform from renaming it.

There are a couple of options at this stage – you can destroy your deployment and run it again (as I normally would), and it should all be fine. Or you can try to fix only the dependent resources and make it work as part of the existing deployment.

With some spare time on my hands, I tried the second one, and it actually worked.

Resolution

Terraform has a nice option for destroying just some parts of the deployment.

If you look at the help for the terraform destroy command, you can see the -target option:

And if you run it to fix your issues, you'll get a nice red warning saying that this is only for exceptional situations (and they mean it!):

So… BE CAREFUL! (can't stress this enough!)

Anyhow, I destroyed the rules which were preventing my rename operation:
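In my case, that meant something like this (the resource addresses are illustrative – use the ones from your own state):

```bash
# remove just the rules that reference the old backend pool by name
terraform destroy -target=azurerm_lb_rule.http -target=azurerm_lb_rule.https
```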

And then another terraform apply --auto-approve recreated everything that was needed, and finally – my backend pool got renamed:

Another idea I had was to taint the resources (terraform taint -help), which would probably be a lot nicer. Oh, well… maybe next time. 🙂

As things are constantly improving, it shouldn’t be long until this is fixed (should it even be fixed?!). Until then, hope this will help you with similar issues!

Cheers!

CRC – OpenShift 4 cluster in your pocket

… but only if you have large pockets, that is! 🙂

I suppose that, by now, you've already heard of minishift – a tool which helps you run a single-node OpenShift 3.x cluster inside a virtual machine (on your preferred hypervisor), which is extremely convenient if you need to do some local development, testing, etc. Of course, you can always deploy what you need on top of Azure or any other cloud provider, but having an OpenShift cluster locally has its benefits.

And what if you need, let’s say, an OpenShift 4.x cluster “to go”?

Don't worry, there's a solution for you as well! It's called CodeReady Containers (CRC) (version 1.9 at the time of writing), and it's basically a single-node OpenShift 4.x cluster inside a virtual machine (sounds a lot like minishift, doesn't it?).

So, how can you get your hands on CRC and make it work?

There are a couple of steps involved:

  • download the OpenShift tools (I'm using Windows, so I'll use this one):

  • unzip all to a location you like (mine is C:\PocketOpenShift):

  • (optional) add this location to your PATH for easier usage (as already mentioned, I'm using Windows & PowerShell – see the sketch after this list):

  • run “crc setup” which will prepare the environment:

  • run "crc start" (with one or more options, if needed – as you can see below, I'll be using a custom number of vCPUs, a custom amount of RAM and a custom nameserver):

  • at any time, you can check the status of your cluster by using “crc status” command:

  • once it's up, you can use "oc login" or the console (bring it up with "crc console") to connect to it, either as (kube)admin or as a developer, and continue working as you would with any other OpenShift cluster:

  • one other thing I like to do is enable monitoring and other disabled stuff (note though – your VM should have 12+ GB of RAM). You can do it with two commands: the first one lists everything that is disabled, and the second one, with the index at the end, enables it (also note that the single and double quotes in the official documentation are switched, which matters if you're working in PowerShell – see the sketch after this list):

  • monitoring should now be working as well:
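Put together, a rough sketch of the whole flow in PowerShell (the resource numbers, nameserver and the override index are illustrative – use your own values):

```powershell
# make the unzipped tools available in this session
$env:Path += ';C:\PocketOpenShift'

# prepare the environment, then start with custom resources and a custom
# nameserver (it will ask for your image pull secret on the first start)
crc setup
crc start --cpus 6 --memory 16384 --nameserver 1.1.1.1

# check how the cluster is doing
crc status

# log in ('crc console --credentials' prints the exact login commands)
oc login -u kubeadmin -p <password> https://api.crc.testing:6443

# list what's disabled, then enable one item by removing its override
# (mind the single/double quotes if you're in PowerShell!)
oc get clusterversion version -o jsonpath='{range .spec.overrides[*]}{.name}{"\n"}{end}'
oc patch clusterversion/version --type='json' -p '[{"op":"remove", "path":"/spec/overrides/0"}]'
```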

And that’s it – you’re ready to work on your own “pocket OpenShift cluster”! 🙂

Of course, don’t forget that there is also the official documentation, including the Getting Started guide and Release Notes and Known Issues document. So… take a look!

Cheers!