Remote Desktop connection to an Entra ID-joined Windows Server with Entra ID credentials… quick and dirty

Connecting via Remote Desktop to an Entra ID-joined Windows machine, by using the Entra ID credentials, should be easy, right? It usually is… if you have covered all the prerequisites.

Multiple such guides are around, but none of them lists all the steps needed (or I just haven’t found the right one) – I chose to follow this one, from my MVP colleague Tom Wechsler, available here.

In my case, all the required steps that allowed me to finally access my Entra ID-joined Windows (Server) machine in Azure via (publicly accessible) Remote Desktop (FOR DEMO PURPOSES) are the following:

Create an Entra ID-joined Windows virtual machine in Azure:

  • first, you will need to select a supported operating system (Windows 10/11, Windows Server 2019/2022 Datacenter):
    • it worked for me with both Windows Server 2019 Datacenter and Windows Server 2022 Datacenter

  • another thing needed during the creation of the machine itself is to select Login with Microsoft Entra ID under the Management settings, so that your machine is automatically joined to Entra ID and gets the extension installed (note that this option automatically creates a system-assigned managed identity as well; if not selected at creation time, the extension can also be added later, under the virtual machine’s Extensions):
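If the machine already exists without this option, the extension can also be added from the command line – a minimal Azure CLI sketch (the resource group name is illustrative; temp-1 is my VM):

```powershell
az vm extension set --publisher Microsoft.Azure.ActiveDirectory --name AADLoginForWindows --resource-group temp-rg --vm-name temp-1
```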

Create a user that will connect to the virtual machine:

  • the easiest thing would be to create a fresh, new Entra ID user that will be used for connecting to this virtual machine (you can also use an existing user, but make sure it is not using MFA, which will prevent you from connecting) – my user will be called vmuser
  • try signing in to the Azure portal with this new user, to change its initial password or just to skip the MFA setup:

  • as the machine is Entra ID-joined, under the Access Control (IAM) settings of the virtual machine (in my case, temp-1), assign this user (vmuser) either the Virtual Machine User Login or the Virtual Machine Administrator Login role:
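If you prefer the command line, the same role assignment can be done with Azure CLI – a minimal sketch (the resource group name and UPN are illustrative):

```powershell
$vmId = az vm show --resource-group temp-rg --name temp-1 --query id --output tsv
az role assignment create --assignee "vmuser@yourtenant.onmicrosoft.com" --role "Virtual Machine User Login" --scope $vmId
```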

Connect to the virtual machine with a local admin account (created with the machine):

  • if you run the dsregcmd /status command, the following should be configured already:
    • AzureAdJoined: YES
    • AzureAdPrt: NO
    • IsDeviceJoined: YES
  • next, go to the System Properties – Remote Desktop settings and disable the Network Level Authentication option (enabled by default):
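The same can be scripted – a minimal sketch that flips the NLA setting via the registry (standard RDP-Tcp listener assumed):

```powershell
# 0 = Network Level Authentication disabled, 1 = enabled
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' `
    -Name 'UserAuthentication' -Value 0
```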

On the computer you will be connecting from (not Entra ID-joined):

  • next, download the RDP connection file from the portal:

  • and then edit the downloaded RDP file (with Notepad) – it should look like the below (more on the available options):
    • remove the line with the username (as it will be provided on connection)
    • add the following two lines:
      • enablecredsspsupport:i:0
      • authentication level:i:2
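A sketch of how the relevant part of the edited RDP file could look (the address is a placeholder; the other lines from the downloaded file stay as they are):

```
full address:s:<public-ip-of-your-vm>:3389
prompt for credentials:i:1
enablecredsspsupport:i:0
authentication level:i:2
```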

  • try connecting to the virtual machine now, by using the edited RDP file and the username and password of your Entra ID account:
    • for the username, make sure you are using the AzureAD\upn-or-email-address format (in my case, AzureAD\ followed by the vmuser account’s UPN)

  • your connection should be working, and you will have either user or admin permissions on the system (depending on the assigned Entra ID role):

Hope this helps.

Cheers!

Challenge with FM radio signal… Raspberry Pi to the rescue!

Not so long ago (actually, a weekend or two ago), I was presented with a real-life issue – an issue that needs to be taken care of… ASAP!

Production was suffering. Production of high-quality foods in my mom’s kitchen, that is! 🙂

So, what was the issue?

To better help you understand the issue, I need to introduce you to the environment first – there’s my mom’s kitchen, from which many amazing dishes come out on a daily basis.

And there’s a small FM radio in this kitchen, keeping her company when she’s cooking alone – nothing special, but it’s an essential part of the kitchen (and the overall cooking process)!

About two weekends ago, the user (mom) started complaining that the radio was having issues with the reception of her favorite FM station. It’s not good when users start complaining, of course. Especially if they are the important ones!

If this isn’t taken care of, production (of food) may suffer! 🙂

So, let’s solve the issue.

As nothing had changed from the FM radio’s perspective, the issue had to be somewhere else. After a bit of research, it seems that adding a new frequency to the user’s favorite radio station somehow impacted the remaining two (one of which we were using)… and now we’re having bad reception.

I tried switching to the other two frequencies… didn’t help. The station is transmitted on at least three frequencies, but none of them gives us good reception anymore.

I even tried another antenna… no luck.

Switching to another radio station… is not an option. 🙂

When I was thinking about other options, I remembered that this radio station also streams over the Internet (like the example I’m using below)!

Splendid!

As I had this spare Raspberry Pi just standing there, collecting dust, an idea was born – turn it into the “Internet radio”!

The initial solution needs to be as basic as possible: headless, working as soon as it’s connected, wireless (as much as possible), and streaming the radio station in question. Rather than ditching the FM radio, I’ll use it for the output part – the Raspberry Pi’s 3.5 mm output goes into the AUX IN of the FM radio, using its amplifier and speakers (switching to the AUX input is just one click away, which is fine).

I started by preparing my Raspberry Pi:

  • downloaded Raspberry Pi Imager
  • used it to download and customize the Raspbian image (Raspberry Pi OS (32-bit)):
    • set hostname
    • enabled SSH
    • set username and password
    • configured wireless LAN
    • configured locale settings
  • booted my Raspberry Pi and did additional configuration via the included raspi-config utility:
    • configured System Options – Boot/Auto Login – Console Autologin
    • configured some other tiny things (like extending the storage, etc.)

Now it seems that I’m prepared to bring up the “software part”.

After some reading and trying things out, I decided to go with VLC Player.

Now I just need to make it play what I want, start playing on power-on, and require no other interaction.

Luckily, it’s not thaaat hard! 🙂
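For reference, a minimal sketch of the manual test (the stream URL is a placeholder – use your station’s real stream link):

```bash
sudo apt install -y vlc
# play the stream with no GUI and no video decoding
cvlc -I dummy --no-video https://stream.example.com/favorite-station.mp3
```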

Great! That works if I manually start it… and there are no issues.

For the autostart part, I’m choosing to run it as a service, so:
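A minimal sketch of such a systemd unit (the username, stream URL, and the 30-second Wi-Fi grace period are my choices – adjust to your setup):

```ini
# /etc/systemd/system/internet-radio.service
[Unit]
Description=Internet radio (VLC)
Wants=network-online.target
After=network-online.target

[Service]
User=pi
ExecStartPre=/bin/sleep 30
ExecStart=/usr/bin/cvlc -I dummy --no-video https://stream.example.com/favorite-station.mp3
Restart=always

[Install]
WantedBy=multi-user.target
```

After a `sudo systemctl daemon-reload && sudo systemctl enable --now internet-radio.service`, it starts on every boot.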

And… that’s it!

With a few hits and misses, there’s finally a simple wireless Internet radio, which starts playing once the Raspberry Pi powers on (and connects to WiFi, and waits for 30 seconds, of course)! No more bad FM reception, and the user is satisfied! 🙂

Cheers!

Using a self-hosted runner with GitHub Actions

As I was going through the excellent short course called Azure Infrastructure as Code with GitHub (by fellow MVP, Barbara Forbes), a thought appeared – what do I need to do to use my custom runner machine inside a pipeline for… I don’t know… security/privacy concerns, isolation, special requirements, different OS, control, price… or just to complicate things a bit?

Of course, GitHub supports this and it’s called a self-hosted runner.

So, what do I need to do to use this self-hosted runner with my GitHub Actions?

It’s relatively simple – there is an application package, which you install on your runner machine, and which will listen for jobs and eventually do all the work defined in your workflow!

But first, let’s introduce my environment.

I have a simple GitHub Action (workflow), which creates a simple storage account in my Azure environment (there is actually no need to convert Bicep to ARM before deployment, but it seemed cool 😀). It’s currently using the “ubuntu-latest” runner, provided by GitHub… which also has all the needed components inside (like Azure CLI, Azure PowerShell, …).

And it works fine. When there is a push to my GitHub repository, GitHub Actions starts and does what is needed on my Azure environment via this workflow:
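The original YAML isn’t reproduced here, but a sketch of such a workflow could look like this (the names, the AZURE_CREDENTIALS secret, and the resource group are illustrative):

```yaml
name: deploy-bicep
on: [push]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

      - name: Convert Bicep to ARM
        run: az bicep build --file main.bicep

      - name: Deploy ARM template
        run: az deployment group create --resource-group my-rg --template-file main.json
```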

And the mighty Bicep file (😀) it’s using for the deployment is:
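Again as a sketch – a minimal Bicep file for a storage account (the name prefix is illustrative; storage account names must be globally unique):

```bicep
param location string = resourceGroup().location

resource storage 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: 'demo${uniqueString(resourceGroup().id)}'
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}
```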

Of course, this runs just fine on a standard (hosted) runner:

Not that much is needed to run this workflow (successfully) on your own runner.

First, I’ve created a new virtual machine (I’ll use a simple Ubuntu Hyper-V VM, no autoscaling, no… nothing) called hermes (god of speed 😀), with freshly installed Ubuntu 22.04.1 LTS (minimized).

After that, I went to the Settings of my GitHub repository and got the download and install scripts for the x64 Linux runner:
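They boil down to something like this (the version number and registration token below are placeholders – always copy the current ones from your repository’s Settings > Actions > Runners page):

```bash
mkdir actions-runner && cd actions-runner
curl -o actions-runner-linux-x64-2.300.0.tar.gz -L \
  https://github.com/actions/runner/releases/download/v2.300.0/actions-runner-linux-x64-2.300.0.tar.gz
tar xzf ./actions-runner-linux-x64-2.300.0.tar.gz
./config.sh --url https://github.com/<owner>/<repo> --token <REGISTRATION_TOKEN>
./run.sh
```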

As you can see, I’ll be using crontab later to automatically (re)start my self-hosted runner.
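A sketch of the crontab entry (the path depends on where you unpacked the runner):

```bash
# crontab -e
@reboot /home/<user>/actions-runner/run.sh
```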

If everything went well, you should see your runner “up and running” (😀) in the GitHub portal:

Next, I’ll use the following script to install all prerequisites for my workflow (like Azure CLI, Azure PowerShell, etc. – it really depends on your workflow and things you use):
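As a sketch, something along these lines (what you install depends entirely on your workflow):

```bash
#!/bin/bash
# Azure CLI, via Microsoft's install script
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

# PowerShell (snap package) and the Az module
sudo snap install powershell --classic
pwsh -Command 'Install-Module -Name Az -Scope AllUsers -Force'
```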

Once this is done, my self-hosted runner hermes should be ready to run the workflow.

To try this, I need to make a slight update to my workflow file – line 12 inside the job configuration should be updated from “runs-on: ubuntu-latest” to “runs-on: self-hosted”.

So, my workflow YAML file now looks like this:
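(shortened to the relevant part – only the runner selection changes:)

```yaml
    runs-on: self-hosted
```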

And once I push the configuration to my GitHub, my workflow automatically starts and runs on hermes, my self-hosted runner:

If we prepared our runner right, all is good! 😊

Of course, our resources are deployed successfully:

So, this is how you can use your own self-hosted runner to execute your GitHub Actions (workflows).

Cheers!

Installing Azure ATP Sensor… failed with 0x80070643

Another “short and sweet” one! 😀

I was installing a couple of Microsoft Defender for Identity (a.k.a. Azure Advanced Threat Protection, or Azure ATP) sensors, on Domain Controllers behind corporate proxies.

Everything went well on all of them… but the last one, of course!

Every other sensor picked up the system proxy settings and the installation went fine, but the last one failed… with a highly descriptive error saying “Installation failed. Error code: 0x80070643”:

It seems that this installation hadn’t picked up the system’s proxy settings (for whatever reason), and this information needed to be provided manually during the installation.

So, after downloading and unpacking the Azure ATP Sensor installation, open Command Prompt (as Administrator, of course) and, from its folder, run this command (make sure you enter your proxy information and the right access key):
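It should look something like this, following the documented silent-install syntax (the proxy URL and access key below are placeholders):

```
"Azure ATP sensor Setup.exe" /quiet NetFrameworkCommandLineArguments="/q" ProxyUrl="http://your-proxy:8080" AccessKey="<your-access-key>"
```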

Hope it helps (helped me)! If it doesn’t, try reading this thread.

Cheers!

Fixing permissions for EC2 private key file

This time, I was playing around with AWS and created some EC2 instances.

When you create and work with your instances, you need to take care of authentication – you would usually import or create a new key pair and use the private key on your machine to connect to the EC2 instance in AWS via SSH. The whole process of creating a key pair and downloading the private key is pretty simple – on the page below, you select the name, type, and format of your key pair and, once created, the private key automatically downloads to your PC:

Now you can create your instance and select the created key pair for authentication:

If you have your private key ready and the instance is up and accessible to you, you can use (for example) SSH to connect to it:
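For example (the key file name and address are placeholders) – on a freshly downloaded key, OpenSSH will typically refuse to use it and print a “WARNING: UNPROTECTED PRIVATE KEY FILE!” message, because the file is readable by other accounts:

```
ssh -i .\my-ec2-key.pem ec2-user@<instance-public-ip>
```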

So… we have a challenge! Looks like our private key is not secured enough and others may have access to it!

If we look at the permissions, we can see that all of them are actually inherited… so, we’ll need to remove the inheritance/inherited permissions and grant access only to the account that needs it:

And after some “tweaking”:

If we retry the connection, this happens:

Excellent!

And if you’re not a fan of clicking through the permissions dialog, here are scripts that can help you with this – they basically remove the inheritance and add full access permissions for the owner of the file (they take the path to your private key file as a parameter!):

  • the “PowerShell” flavour:
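A sketch of the idea (the full script lives in the GitHub repo mentioned below):

```powershell
param(
    [Parameter(Mandatory = $true)]
    [string]$Path
)

$acl = Get-Acl -Path $Path

# disable inheritance and drop all inherited permissions
$acl.SetAccessRuleProtection($true, $false)

# grant full control to the file's owner only
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule($acl.Owner, 'FullControl', 'Allow')
$acl.AddAccessRule($rule)

Set-Acl -Path $Path -AclObject $acl
```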

  • the “CMD” flavour:
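And the same idea with icacls (the script name is illustrative):

```bat
@echo off
rem usage: fix-key-permissions.cmd "C:\path\to\private-key.pem"
icacls "%~1" /inheritance:r
icacls "%~1" /grant:r "%username%":F
```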

Hope it helps!

Cheers!

P.S. Scripts are also available at my GitHub (https://github.com/TomicaKaniski/toms-notes-code/).

P.P.S. There’s also a script that restores inheritance and inherited permissions… in case you… mess something up. 😀

Playing around with Azure Stack HCI

Decided to have some fun with (nested) Microsoft Azure Stack HCI in my lab.

If you want to do the same, I’ve scripted most of the stuff you need, so… maybe it will be useful.

Steps to prepare a brand new, shiny, nested Azure Stack HCI lab are (roughly):

  • prepare your (parent) Windows Server 2022 Hyper-V host (ensure enough resources are available)
    • it already hosts my Active Directory, DNS, DHCP, router, … VMs
    • everything will be saved locally to D:\AzureStackHCI
  • (optional) install Windows Admin Center (WAC) for easier management
    • download it and install it with a simple command:
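A sketch of the silent install (the installer file name is whatever you downloaded; SME_PORT is the port WAC will listen on):

```powershell
msiexec /i WindowsAdminCenter.msi /qn /L*v wac-install.log SME_PORT=443 SSL_CERTIFICATE_OPTION=generate
```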

  • obtain the Azure Stack HCI 60-day trial ISO image from here
  • make VHD(X) from the obtained ISO image:
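A sketch of the conversion (exact parameter names can differ between versions of the script – check its help):

```powershell
. .\Convert-WindowsImage.ps1
Convert-WindowsImage -SourcePath 'D:\AzureStackHCI\AzureStackHCI.iso' `
    -VHDPath 'D:\AzureStackHCI\AzureStackHCI.vhdx' `
    -VHDFormat VHDX -DiskLayout UEFI -SizeBytes 60GB
```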

    • note that I’m using Convert-WindowsImage.ps1 available here
    • which gives me a nice, generalized Azure Stack HCI VHD(X), which we will “upgrade with things” and later use for VM creation
  • install prerequisites into VHD(X)
    • this one is fairly easy – install Windows roles and features directly to the VHD(X) itself:
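A sketch (the feature list is illustrative – note that Hyper-V is preinstalled on purpose, see the note below):

```powershell
Install-WindowsFeature -Vhd 'D:\AzureStackHCI\AzureStackHCI.vhdx' `
    -Name Hyper-V, Failover-Clustering, FS-Data-Deduplication
```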

    • NOTE: If you try to install the Hyper-V role later, it may fail – as we’re running Azure Stack HCI on a “normal” Windows Server, it gets confused about nested virtualization availability. By preinstalling it, we make sure it just works.
  • update VHD(X) with latest patches:
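A sketch with the DISM cmdlets (the mount directory is illustrative):

```powershell
New-Item -ItemType Directory -Path 'D:\Mount' -Force | Out-Null
Mount-WindowsImage -ImagePath 'D:\AzureStackHCI\AzureStackHCI.vhdx' -Path 'D:\Mount' -Index 1
Get-ChildItem 'D:\AzureStackHCI\Updates' -Filter *.msu |
    ForEach-Object { Add-WindowsPackage -Path 'D:\Mount' -PackagePath $_.FullName }
Dismount-WindowsImage -Path 'D:\Mount' -Save
```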

    • I have previously downloaded all the Azure Stack HCI patches available to D:\AzureStackHCI\Updates
  • add Unattend.xml to handle the “set password at first login” issue
    • it annoys me that I need to set up the initial password, so… a simple Unattend.xml file, injected into the VHD(X), should take care of this:
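A minimal sketch of such a file (placed into \Windows\Panther inside the VHD(X)):

```xml
<unattend xmlns="urn:schemas-microsoft-com:unattend">
  <settings pass="oobeSystem">
    <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64"
               publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
      <UserAccounts>
        <AdministratorPassword>
          <!-- see the note below – don't keep a clear-text password here -->
          <Value>your-encoded-password</Value>
          <PlainText>false</PlainText>
        </AdministratorPassword>
      </UserAccounts>
    </component>
  </settings>
</unattend>
```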

    • NOTE: Make sure you don’t use clear-text passwords in Unattend.xml file!
  • create Azure Stack HCI VMs
    • I’m creating two VMs from our prepared VHD(X), with a couple of additional data disks, a few network adapters for different purposes, nested virtualization enabled, etc.:
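A sketch of the VM creation loop (names, sizes, and the switch name are illustrative):

```powershell
1..2 | ForEach-Object {
    $name = "hci0$_"
    $vmPath = "D:\AzureStackHCI\$name"
    New-Item -ItemType Directory -Path $vmPath -Force | Out-Null
    Copy-Item 'D:\AzureStackHCI\AzureStackHCI.vhdx' "$vmPath\$name-os.vhdx"

    New-VM -Name $name -Generation 2 -MemoryStartupBytes 16GB `
        -VHDPath "$vmPath\$name-os.vhdx" -Path $vmPath -SwitchName 'LAN'
    Set-VMProcessor -VMName $name -Count 4 -ExposeVirtualizationExtensions $true
    Add-VMNetworkAdapter -VMName $name -SwitchName 'LAN'   # additional NIC for storage/cluster traffic

    # data disks for Storage Spaces Direct
    1..2 | ForEach-Object {
        New-VHD -Path "$vmPath\$name-data$_.vhdx" -SizeBytes 100GB -Dynamic | Out-Null
        Add-VMHardDiskDrive -VMName $name -Path "$vmPath\$name-data$_.vhdx"
    }
    Start-VM -Name $name
}
```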

  • set node networking, join them to the domain, prepare for cluster (by using PowerShell Direct):
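A sketch for one node (IP addresses, names, and the domain are illustrative – repeat for the second node):

```powershell
$localCred = Get-Credential   # local administrator of the node
$domainCred = Get-Credential  # account allowed to join machines to the domain

Invoke-Command -VMName 'hci01' -Credential $localCred -ScriptBlock {
    New-NetIPAddress -InterfaceAlias 'Ethernet' -IPAddress 192.168.1.11 -PrefixLength 24 -DefaultGateway 192.168.1.1
    Set-DnsClientServerAddress -InterfaceAlias 'Ethernet' -ServerAddresses 192.168.1.2
    Add-Computer -DomainName 'corp.local' -Credential $using:domainCred -Restart
}
```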

  • create the Azure Stack HCI cluster:
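A sketch (the cluster name and IP are illustrative):

```powershell
Test-Cluster -Node hci01, hci02 -Include 'Storage Spaces Direct', 'Inventory', 'Network', 'System Configuration'
New-Cluster -Name hcicluster -Node hci01, hci02 -StaticAddress 192.168.1.20 -NoStorage
Enable-ClusterStorageSpacesDirect -CimSession hcicluster
```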

  • register (optional) the Azure Stack HCI cluster:
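A sketch, assuming the Az.StackHCI module (subscription ID and region are placeholders):

```powershell
Install-Module -Name Az.StackHCI
Register-AzStackHCI -SubscriptionId '<subscription-id>' -Region 'westeurope' -ComputerName hci01
```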

  • create CSV(s) and virtual switches for child workloads (but add nodes/cluster to WAC before, if not using PowerShell)
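A sketch (volume size, switch, and adapter names are illustrative):

```powershell
New-Volume -CimSession hcicluster -FriendlyName 'CSV01' -StoragePoolFriendlyName 'S2D*' `
    -FileSystem CSVFS_ReFS -Size 100GB
Invoke-Command -ComputerName hci01, hci02 -ScriptBlock {
    New-VMSwitch -Name 'child-workloads' -NetAdapterName 'Ethernet 2' -AllowManagementOS $false
}
```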

  • play around with your new cluster
  • (optional) clean all/redeploy if needed:
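A sketch of the teardown (careful – this deletes the VMs and their disks!):

```powershell
'hci01','hci02' | ForEach-Object {
    Stop-VM -Name $_ -TurnOff -Force -ErrorAction SilentlyContinue
    Remove-VM -Name $_ -Force
    Remove-Item "D:\AzureStackHCI\$_" -Recurse -Force
}
```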

And now you have a fully functional, nested, 2-node Azure Stack HCI cluster – nothing too fancy, but you can extend it however you wish! 😊

You can begin exploring the Azure Stack HCI itself, use it with Azure Arc, or perhaps install AKS on Azure Stack HCI and play around with it. Or something else.

Cheers!

P.S. You can use these scripts also for stuff other than Azure Stack HCI, of course! 😉
P.P.S. Code is also available on my GitHub page.

Create a self-signed certificate for your web server with PowerShell

Sometimes you may need an SSL certificate just for testing your (local) web application. Of course, for public and trusted purposes, you’ll probably use a free Let’s Encrypt certificate or something similar (or, of course, any of the paid options).

And this is OK as long as you have a publicly resolvable domain name.

But what if you need a certificate for, let’s say, “localhost” or “webserver.local”?

Then you’ll probably use your internal PKI infrastructure or a simple self-signed certificate.

The second one can easily be achieved with PowerShell, by using the New-SelfSignedCertificate cmdlet (or with OpenSSL, yes 🙂).

So, let me show you how.

We have a simple IIS setup hosting a single (default) website, responding to http://localhost/:

We’ll issue a new self-signed certificate, make it trusted (important!) and then attach it to our test website, with the following:
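A sketch of the whole thing (the site name and export path are the defaults on my test box):

```powershell
# create the certificate in the machine's personal store
$cert = New-SelfSignedCertificate -DnsName 'localhost' -CertStoreLocation 'Cert:\LocalMachine\My'

# make it trusted: export it, then import into Trusted Root Certification Authorities
Export-Certificate -Cert $cert -FilePath "$env:TEMP\localhost.cer" | Out-Null
Import-Certificate -FilePath "$env:TEMP\localhost.cer" -CertStoreLocation 'Cert:\LocalMachine\Root' | Out-Null

# add an HTTPS binding to the site and attach the certificate to it
Import-Module WebAdministration
New-WebBinding -Name 'Default Web Site' -Protocol https -Port 443
New-Item -Path 'IIS:\SslBindings\0.0.0.0!443' -Value $cert | Out-Null
```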

If everything goes well, we will see another binding created in our IIS console:

And if we open https://localhost/, all should be good as well:

Cheers!

Add a route to your VPN connection via PowerShell

I’m sure that you’re using some VPN somewhere, and you’re having “trouble” with split tunneling and routing, right?

Well, I had. 😀

As I’m “here and there” most of the time, I’ve set up an “anchor” location (no, it’s not in the cloud… yet) which is always available via VPN, and which has a few machines that I use, more or less, regularly. When I’m not there, I connect via my precious Windows 10/11 laptop and work as if I were there locally. I know – you know what VPNs are used for… bear with me a bit longer. 😀

So, all good – I have a VPN client (Windows built-in), a VPN server and Internet connection, and I can work.

One thing that I like to have is Internet access that is not routed via my “anchor” location, so that “the work stuff” goes through the VPN and “the fun stuff” doesn’t.

It’s really easy to set this up – in properties of your VPN connection, just untick the “Use default gateway on remote network” checkbox:

But then you’ll have an issue with connecting to “the work stuff” – your current default gateway doesn’t know where “the work stuff” network is and how to get there.

It needs a route.

No problem, it’s easy to add a route in Windows (my “the work stuff” network is 192.168.13.0/24 and my VPN gateway is 192.168.14.1, or publicly 141.138.55.154):
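From an elevated prompt, something like:

```
route add 192.168.13.0 mask 255.255.255.0 192.168.14.1
```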

And now you have access to “the work stuff” network again! And Internet access works as it should (not via the “anchor” location)!

Great.

But then you disconnect. And reconnect. And the route you’ve added is gone. So, you repeat the procedure. Or script it. Or…

What if I tell you there is actually a better way?

I’m not really sure in which release this came out, but there is now an updated set of VPN PowerShell cmdlets in Windows 10/11 (which is cool!). For this story, the one we’re most interested in is Add-VpnConnectionRoute.

“So, does that mean that, with it, I can configure my VPN connection to always have the route I need, whenever I connect to the VPN? No more adding routes manually?!”

Exactly.

If I use the discussed Add-VpnConnectionRoute on my existing VPN connection, I can add the route I need – it will be written into the connection configuration and activated whenever the tunnel comes up, while still using split tunneling.

Let’s see:

  • connected to “the work stuff” VPN, this is (part of) the routing table prior to the route configuration:

  • adding route configuration:
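(the connection name is whatever your VPN connection is called:)

```powershell
Add-VpnConnectionRoute -ConnectionName 'Work VPN' -DestinationPrefix '192.168.13.0/24'
```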

  • checking routes again:

As you can see, I’ve got new routes in my route table (it would be the same as with the route add command above) and now I can access “the work stuff” without any issue:

And if I disconnect and connect again – it still works! 😊

Hope it helps someone!

Cheers!

Checking certificate expiration with PowerShell

Had an idea to write a (PowerShell) script which will check and maybe notify me of certificates that are nearing expiration, for a bunch of (public) sites that… somewhat matter to me. 😊

As it turns out, someone already had this idea and wrote a very nice PowerShell script that does just that, available here – thank you!

While testing it, there were sites on which the script worked just fine, and there were sites on which I got errors like this one (Error: “String was not recognized as a valid DateTime.”):

Seems to be connected to my regional settings (I know… who would ever use hr-HR instead of en-US, but… 😊) and date/time formatting:

I’ve tried to fix it in a couple of ways, but the one that finally did it (for me) was explained on Dan Sheehan’s blog (thanks!), implemented at the top of the adapted script below.

So, my adapted script looks like this (and works with my hr-HR culture):
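The full adapted script isn’t reproduced here, but a simplified sketch of the core idea (the site list is illustrative; the culture fix mentioned above is the important part):

```powershell
# force a fixed culture while parsing certificate dates, so
# "String was not recognized as a valid DateTime" goes away on hr-HR systems
[System.Threading.Thread]::CurrentThread.CurrentCulture = 'en-US'

foreach ($site in 'www.example.com', 'www.github.com') {
    $tcp = [System.Net.Sockets.TcpClient]::new($site, 443)
    $ssl = [System.Net.Security.SslStream]::new($tcp.GetStream(), $false, { $true })
    $ssl.AuthenticateAsClient($site)
    $cert = [System.Security.Cryptography.X509Certificates.X509Certificate2]::new($ssl.RemoteCertificate)

    $expires = [DateTime]::Parse($cert.GetExpirationDateString())   # or simply $cert.NotAfter
    '{0} expires on {1} ({2} days left)' -f $site, $expires.ToString('dd.MM.yyyy.'), ($expires - (Get-Date)).Days

    $ssl.Dispose(); $tcp.Dispose()
}
```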

It provides the following output (which can be further customized per your needs, of course… and I know – I need to insert some line breaks, convert the output to HTML, send it via e-mail, … it’s a start! 😊):

Note that I’m returning the expiration date “the Croatian way”, by using the following formatting:
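(a one-liner, $expires being the parsed expiration date:)

```powershell
$expires.ToString('dd.MM.yyyy.')   # day.month.year, e.g. 01.02.2023.
```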

Hope it helps someone (and #kudos to original authors)!

Cheers!

UniFi Network Application on Ubuntu Server on Raspberry Pi 3 (arm64)

Another catchy title, right? 😀

I decided that I want to move my UniFi network controller to an Ubuntu-based installation, running on top of a Raspberry Pi 3 device which was collecting dust.

UPDATE: If you’re having issues with Java 11 or path not being set, take a look at comment by G. Faria below.

This can be easily achieved by following these steps:

  • take a backup of your current configuration (my controller was offline, so I’ve just copied the last automatic backup from /var/lib/unifi/backup/autobackup)
  • prepare the SD card with OS installation (detailed info here) – I’ve selected Ubuntu Server 21.10:

  • with the prepared SD card, boot Ubuntu Server on your Raspberry Pi device
  • the first login is ubuntu/ubuntu, and you’ll need to change the password immediately after
  • next, you’ll probably want to set your Raspberry Pi to use a static IP configuration – I’m using netplan to set it up:
    • just in case, I removed all *.yaml files inside /etc/netplan/
    • create a new netplan template (YAML file) called 00-eth0.yaml in /etc/netplan/ (watch those white spaces!) – for example:
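A sketch of such a file (the addresses are illustrative):

```yaml
# /etc/netplan/00-eth0.yaml
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      addresses: [192.168.1.50/24]
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
```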

    • apply the configuration (your IP address will be reconfigured, so you’ll also lose the current SSH connection, if connected remotely(!))
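(netplan try applies the configuration and rolls it back if you don’t confirm – handy when working remotely:)

```bash
sudo netplan try   # or: sudo netplan apply
```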

  • install the UniFi Network Application:
    • there is a nice official guide here, once you take care of the prerequisites, but basically:
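A sketch following the official guide (repository and key locations may change over time, and MongoDB is among the prerequisites):

```bash
echo 'deb https://www.ui.com/downloads/unifi/debian stable ubiquiti' | \
  sudo tee /etc/apt/sources.list.d/100-ubnt-unifi.list
sudo wget -O /etc/apt/trusted.gpg.d/unifi-repo.gpg https://dl.ui.com/unifi/unifi-repo.gpg
sudo apt update && sudo apt install -y unifi
```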

  • restore from backup and start using it:

Note: If you check your unifi service status, you may see “WARN Unable to load properties from ‘/usr/lib/unifi/data/system.properties’”:

This can easily be resolved by enabling the built-in “uncomplicated firewall” (ufw) – don’t forget to open the ports you’ll need when it’s active! (such as SSH (22/tcp), the inform endpoint for your devices (8080/tcp), the UniFi Network Application web UI (8443/tcp), etc.):
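For example:

```bash
sudo ufw allow 22/tcp      # SSH *
sudo ufw allow 8080/tcp    # device inform endpoint **
sudo ufw allow 8443/tcp    # UniFi Network Application web UI
sudo ufw enable            # ***
```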

* can be done more restricted, if needed
** full list of ports is available at https://help.ui.com/hc/en-us/articles/218506997-UniFi-Ports-Used – add them if needed (for instance, 6789/tcp is used when testing upload/download with mobile app)
*** be careful – it’s a firewall!

Note: If your access point is shown as offline/timeout, maybe you forgot to open up the “inform port” 8080/tcp in UFW (been there, of course 😀):

And that’s it – you now have a fully functioning UniFi controller/UniFi Network Application running on top of your Ubuntu Server-powered Raspberry Pi device!

Cheers!

P.S. Enable the autobackup feature… it’s useful (sometimes)! 😀