Remote Desktop connection to an Entra ID-joined Windows Server with Entra ID credentials… quick and dirty

Connecting via Remote Desktop to an Entra ID-joined Windows machine, by using the Entra ID credentials, should be easy, right? It usually is… if you have covered all the prerequisites.

Multiple such guides are around, but none of them listed all the steps I needed (or I just haven’t found the right one) – I chose to follow this one, from my MVP colleague Tom Wechsler, available here.

Some points before we start:

In my case, the steps that finally allowed me to access my Entra ID-joined Windows (Server) machine in Azure via (publicly accessible, FOR DEMO PURPOSES only) Remote Desktop are the following:

Create an Entra ID-joined Windows virtual machine in Azure:

  • first, you will need to select a supported operating system (Windows 10/11, Windows Server Datacenter 2019/2022):
    • worked for me with both Windows Server 2019 Datacenter and Windows Server 2022 Datacenter

  • another thing needed during the creation of the machine itself is to select Login with Microsoft Entra ID under the Management settings, so that your machine is automatically joined to Entra ID and gets the login extension installed (note that this option also creates a system-assigned managed identity; if it wasn't selected at creation time, the extension can be added later, under the virtual machine's Extensions – see the sketch below):
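If the option wasn't selected at creation time, a quick way to add the extension afterwards is the Azure CLI – a minimal sketch, with the resource group name as a placeholder (temp-1 is my VM):

```bash
# Add the Entra ID login extension to an existing Windows VM
az vm extension set \
  --publisher Microsoft.Azure.ActiveDirectory \
  --name AADLoginForWindows \
  --resource-group <resource-group> \
  --vm-name temp-1
```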

Create a user that will connect to the virtual machine:

  • the easiest thing is to create a fresh Entra ID user that will be used for connecting to this virtual machine (you can also use an existing user, but make sure it is not using MFA, which would prevent you from connecting) – my user will be called vmuser
  • try signing in to the Azure portal with this new user, in case you need to change its password, or just to skip the MFA setup:

  • as the machine is Entra ID-joined, under the Access Control (IAM) settings of the virtual machine (in my case, temp-1), assign this user (vmuser) either the Virtual Machine User Login or the Virtual Machine Administrator Login role:
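For reference, the same role assignment can be done from the Azure CLI – a sketch, with the UPN and resource group as placeholders:

```bash
# Scope the role assignment to the VM itself
vmid=$(az vm show --resource-group <resource-group> --name temp-1 --query id --output tsv)

az role assignment create \
  --assignee "vmuser@<your-tenant>.onmicrosoft.com" \
  --role "Virtual Machine Administrator Login" \
  --scope "$vmid"
```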

Connect to the virtual machine with a local admin account (created with the machine):

  • if you run dsregcmd /status command, the following should be configured already:
    • AzureAdJoined: YES
    • AzureAdPrt: NO
    • IsDeviceJoined: YES
  • next, go to the System Properties – Remote Desktop settings and disable the Network Level Authentication option (enabled by default):
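If you prefer not to click through the GUI, the same setting can be flipped from an elevated PowerShell prompt (this is the registry value behind the Network Level Authentication checkbox):

```powershell
# Disable Network Level Authentication for the default RDP listener
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' `
                 -Name 'UserAuthentication' -Value 0
```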

On the computer you will be connecting from (not Entra ID-joined):

  • next, download the RDP connection file from the portal:

  • and then edit the downloaded RDP file (with Notepad) – it should look like the sample shown after this list (more on the available options):
    • remove the line with the username (as it will be provided on connection)
    • add the following two lines:
      • enablecredsspsupport:i:0
      • authentication level:i:2

  • try connecting to the virtual machine now, by using the edited RDP file, username, and password of your Entra ID account:
    • for the username, make sure you are using the AzureAD\upn-or-email-address format (in my case, AzureAD\[email protected])

  • your connection should be working, and you will have either the user or admin permissions on the system (depending on the assigned Entra ID role):
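For reference, the edited RDP file ends up looking roughly like this – the address is a placeholder, and the last two lines are the ones we added:

```
full address:s:<public-ip-or-dns-name-of-the-vm>:3389
prompt for credentials:i:1
enablecredsspsupport:i:0
authentication level:i:2
```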

Hope this helps.

Cheers!

Using a self-hosted runner with GitHub Actions

As I was going through the excellent short course called Azure Infrastructure as Code with GitHub (by fellow MVP, Barbara Forbes), a thought appeared – what do I need to do to use my custom runner machine inside a pipeline for… I don’t know… security/privacy concerns, isolation, special requirements, different OS, control, price… or just to complicate things a bit?

Of course, GitHub supports this and it’s called a self-hosted runner.

So, what do I need to do to use this self-hosted runner with my GitHub Actions?

It’s relatively simple – there is an application package, which will be installed on your runner machine, and which will listen for and eventually do all the work defined in your workflow!

But first, let’s introduce my environment.

I have a simple GitHub Action (workflow), which creates a simple storage account in my Azure environment (there is actually no need to convert Bicep to ARM before deployment, but it seemed cool 😀). It's currently using the "ubuntu-latest" runner provided by GitHub… which also has all the needed components inside (like Azure CLI, Azure PowerShell, …).

And it works fine. When there is a push to my GitHub repository, GitHub Actions starts and does what is needed on my Azure environment via this workflow:
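A simplified sketch of such a workflow (the secret name, resource group and file names below are placeholders, not the exact values from my repo):

```yaml
name: deploy-storage-account

on:
  push:
    branches: [ main ]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout the repository
        uses: actions/checkout@v3

      - name: Log in to Azure
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

      - name: Build Bicep into an ARM template
        run: az bicep build --file main.bicep

      - name: Deploy the (converted) ARM template
        run: |
          az deployment group create \
            --resource-group my-resource-group \
            --template-file main.json
```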

And the mighty Bicep file (😀) it’s using for the deployment is:
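In essence, it's just a storage account – a minimal sketch (the account name is derived from the resource group, purely illustrative):

```bicep
param location string = resourceGroup().location
param storageAccountName string = 'st${uniqueString(resourceGroup().id)}'

resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: storageAccountName
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}
```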

Of course, this runs just fine on a standard (hosted) runner:

To run this workflow (successfully) on your own runner, not that much is needed.

First, I’ve created a new virtual machine (I’ll use a simple Ubuntu Hyper-V VM, no autoscaling, no… nothing) called hermes (god of speed 😀), with freshly installed Ubuntu 22.04.1-LTS (minimized).

After that, I went to the Settings of my GitHub repository and got the download and install scripts for the x64 Linux runner:
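Those snippets boil down to something like this (the version, hash and registration token are placeholders – copy the exact commands GitHub generates for you):

```bash
# Download and unpack the runner
mkdir actions-runner && cd actions-runner
curl -o actions-runner-linux-x64.tar.gz -L \
  https://github.com/actions/runner/releases/download/v<VERSION>/actions-runner-linux-x64-<VERSION>.tar.gz
tar xzf ./actions-runner-linux-x64.tar.gz

# Register the runner against the repository (GitHub provides the URL and token)
./config.sh --url https://github.com/<OWNER>/<REPO> --token <TOKEN>

# Start listening for jobs
./run.sh
```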

As you can see, I’ll be using crontab later to automatically (re)start my self-hosted runner.
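A possible crontab entry for that (crontab -e), assuming the runner was unpacked to ~/actions-runner:

```bash
@reboot cd /home/<user>/actions-runner && ./run.sh >> runner.log 2>&1
```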

If everything went well, you should see your runner “up and running” (😀) in the GitHub portal:

Next, I’ll use the following script to install all prerequisites for my workflow (like Azure CLI, Azure PowerShell, etc. – it really depends on your workflow and things you use):
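In my case, the script looks roughly like this (a sketch – adjust it to whatever your workflow actually uses):

```bash
#!/bin/bash

# Azure CLI (official install script for Debian/Ubuntu)
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

# Bicep support for the Azure CLI
az bicep install

# PowerShell and the Azure PowerShell (Az) module
sudo snap install powershell --classic
pwsh -Command "Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force"
```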

Once this is done, my self-hosted runner hermes should be ready to run the workflow.

To try this, I need to make a slight update to my workflow file – line 12 inside the job configuration should be updated from "runs-on: ubuntu-latest" to "runs-on: self-hosted".

So, my workflow YAML file now looks like this:
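(Only the changed part is shown here, with the job name from the sketch above – the rest of the file stays the same:)

```yaml
jobs:
  deploy:
    # was: runs-on: ubuntu-latest
    runs-on: self-hosted
```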

And once I push the configuration to my GitHub, my workflow automatically starts and runs on hermes, my self-hosted runner:

If we prepared our runner right, all is good! 😊

Of course, our resources are deployed successfully:

So, this is how you can use your own, self-hosted runner, to execute your GitHub Actions (workflows).

Cheers!

Installing Azure ATP Sensor… failed with 0x80070643

Another “short and sweet” one! 😀

I was installing a couple of Microsoft Defender for Identity (a.k.a. Azure Advanced Threat Protection, or Azure ATP) sensors, on Domain Controllers behind corporate proxies.

Everything went well on all of them… but the last one, of course!

Every other sensor picked up the system proxy settings and the installation went fine, but the last one failed… with a highly descriptive error saying “Installation failed. Error code: 0x80070643”:

It seems that this installation didn't pick up the system's proxy settings (for whatever reason), so the proxy information needed to be provided manually during the installation.

So, after downloading and unpacking the Azure ATP Sensor installation, open Command Prompt (as Administrator, of course) and, from its folder, run this command (make sure you enter your proxy information and the right access key):
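The command is along these lines (the proxy URL and access key are placeholders; ProxyUserName and ProxyUserPassword parameters also exist, should your proxy require authentication):

```
"Azure ATP sensor Setup.exe" /quiet NetFrameworkCommandLineArguments="/q" ProxyUrl="http://<your-proxy>:8080" AccessKey="<your-workspace-access-key>"
```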

Hope it helps (helped me)! If it doesn’t, try reading this thread.

Cheers!

Playing around with Azure Stack HCI

Decided to have some fun with (nested) Microsoft Azure Stack HCI in my lab.

If you want to do the same, I’ve scripted most of the stuff you need, so… maybe it will be useful.

Steps to prepare a brand new, shiny, nested Azure Stack HCI lab are (roughly):

  • prepare your (parent) Windows Server 2022 Hyper-V host (ensure enough resources are available)
    • it already hosts my Active Directory, DNS, DHCP, router, … VMs
    • everything will be saved locally to D:\AzureStackHCI
  • (optional) install Windows Admin Center (WAC) for easier management
    • download it and install it with a simple command:
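A sketch of that command, assuming the MSI was downloaded to D:\AzureStackHCI (port and certificate options are my choices):

```powershell
msiexec /i D:\AzureStackHCI\WindowsAdminCenter.msi /qn /L*v D:\AzureStackHCI\wac-install.log SME_PORT=443 SSL_CERTIFICATE_OPTION=generate
```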

  • obtain the Azure Stack HCI 60-day trial ISO image from here
  • make VHD(X) from the obtained ISO image:
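Roughly like this (paths are my lab layout; the edition name is the one reported for the Azure Stack HCI ISO):

```powershell
# Dot-source the community script, then convert the ISO into a generalized VHDX
. .\Convert-WindowsImage.ps1

Convert-WindowsImage -SourcePath 'D:\AzureStackHCI\AzureStackHCI.iso' `
                     -Edition 'Azure Stack HCI' `
                     -VHDPath 'D:\AzureStackHCI\AzureStackHCI.vhdx' `
                     -SizeBytes 60GB `
                     -VHDFormat VHDX `
                     -DiskLayout UEFI
```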

    • note that I’m using Convert-WindowsImage.ps1 available here
    • which gives me a nice, generalized Azure Stack HCI VHD(X), which we will “upgrade with things” and later use for VM creation
  • install prerequisites into VHD(X)
    • this one is fairly easy – install Windows roles and features directly to the VHD(X) itself:
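For example (the feature list is what I typically want on an HCI node – adjust as needed):

```powershell
Install-WindowsFeature -Vhd 'D:\AzureStackHCI\AzureStackHCI.vhdx' `
    -Name Hyper-V, Failover-Clustering, FS-FileServer, Data-Center-Bridging `
    -IncludeManagementTools
```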

    • NOTE: If you try to install the Hyper-V role later, it may fail because we're running Azure Stack HCI nested on a “normal” Windows Server, which gets confused about nested virtualization availability. By preinstalling it, we make sure it just works.
  • update VHD(X) with latest patches:
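A sketch of applying them offline (mount the VHDX, add all packages from the folder, dismount):

```powershell
$disk  = Mount-VHD -Path 'D:\AzureStackHCI\AzureStackHCI.vhdx' -Passthru | Get-Disk
$drive = ($disk | Get-Partition | Where-Object DriveLetter | Select-Object -First 1).DriveLetter

Add-WindowsPackage -Path "$($drive):\" -PackagePath 'D:\AzureStackHCI\Updates'

Dismount-VHD -Path 'D:\AzureStackHCI\AzureStackHCI.vhdx'
```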

    • I have previously downloaded all the Azure Stack HCI patches available to D:\AzureStackHCI\Updates
  • add Unattend.xml to handle the “set password at first login” issue
    • it annoys me that I need to set up the initial password, so… a simple Unattend.xml file, injected into the VHD(X), should take care of this:
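A sketch of the injection itself (the Unattend.xml is prepared beforehand and, as noted below, shouldn't contain clear-text passwords):

```powershell
$disk  = Mount-VHD -Path 'D:\AzureStackHCI\AzureStackHCI.vhdx' -Passthru | Get-Disk
$drive = ($disk | Get-Partition | Where-Object DriveLetter | Select-Object -First 1).DriveLetter

New-Item -ItemType Directory -Path "$($drive):\Windows\Panther" -Force | Out-Null
Copy-Item -Path 'D:\AzureStackHCI\Unattend.xml' -Destination "$($drive):\Windows\Panther\Unattend.xml"

Dismount-VHD -Path 'D:\AzureStackHCI\AzureStackHCI.vhdx'
```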

    • NOTE: Make sure you don’t use clear-text passwords in Unattend.xml file!
  • create Azure Stack HCI VMs
    • I'm creating two VMs from our prepared VHD(X), with a couple of additional data disks, a few network adapters for different purposes, nested virtualization enabled, etc.:
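A sketch of the VM creation loop (node names, sizes, disk counts and the 'LAN' switch name are my lab choices):

```powershell
foreach ($name in 'hci-node1', 'hci-node2') {
    $vmPath = "D:\AzureStackHCI\$name"
    $osDisk = "$vmPath\$name-os.vhdx"

    New-Item -ItemType Directory -Path $vmPath -Force | Out-Null
    Copy-Item -Path 'D:\AzureStackHCI\AzureStackHCI.vhdx' -Destination $osDisk

    New-VM -Name $name -Generation 2 -MemoryStartupBytes 16GB `
           -VHDPath $osDisk -Path $vmPath -SwitchName 'LAN' | Out-Null

    # nested virtualization and a few extras the HCI nodes will need
    Set-VMProcessor -VMName $name -Count 4 -ExposeVirtualizationExtensions $true
    Set-VMMemory -VMName $name -DynamicMemoryEnabled $false
    Get-VMNetworkAdapter -VMName $name | Set-VMNetworkAdapter -MacAddressSpoofing On

    # additional network adapters (e.g. for storage traffic) and data disks for S2D
    1..2 | ForEach-Object { Add-VMNetworkAdapter -VMName $name -SwitchName 'LAN' }
    1..4 | ForEach-Object {
        $dataDisk = "$vmPath\$name-data$_.vhdx"
        New-VHD -Path $dataDisk -SizeBytes 100GB -Dynamic | Out-Null
        Add-VMHardDiskDrive -VMName $name -Path $dataDisk
    }

    Start-VM -Name $name
}
```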

  • set up node networking, join the nodes to the domain, and prepare them for clustering (by using PowerShell Direct):
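Something along these lines, repeated per node (IP addresses, interface alias, domain and node names are illustrative):

```powershell
$localCred  = Get-Credential -Message 'Local Administrator of the HCI node'
$domainCred = Get-Credential -Message 'Domain account allowed to join machines'

Invoke-Command -VMName 'hci-node1' -Credential $localCred -ScriptBlock {
    New-NetIPAddress -InterfaceAlias 'Ethernet' -IPAddress 192.168.1.11 -PrefixLength 24 -DefaultGateway 192.168.1.1
    Set-DnsClientServerAddress -InterfaceAlias 'Ethernet' -ServerAddresses 192.168.1.2
    Add-Computer -DomainName 'corp.local' -NewName 'hci-node1' -Credential $using:domainCred -Restart
}
```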

  • create the Azure Stack HCI cluster:
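Validate first, then create (the cluster name and IP are illustrative):

```powershell
Test-Cluster -Node 'hci-node1', 'hci-node2' `
             -Include 'Storage Spaces Direct', 'Inventory', 'Network', 'System Configuration'

New-Cluster -Name 'hci-cluster' -Node 'hci-node1', 'hci-node2' `
            -StaticAddress 192.168.1.20 -NoStorage
```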

  • (optional) register the Azure Stack HCI cluster:
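A sketch, using the Az.StackHCI module (subscription ID and region are placeholders):

```powershell
Install-Module -Name Az.StackHCI -Force

Register-AzStackHCI -SubscriptionId '<subscription-id>' -Region 'westeurope' -ComputerName 'hci-node1'
```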

  • create CSV(s) and virtual switches for the child workloads (but add the nodes/cluster to WAC first, if you're not using PowerShell)
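Roughly (volume size, names and the adapter used for the compute switch are my own choices):

```powershell
# Enable Storage Spaces Direct and carve out a CSV volume
Enable-ClusterStorageSpacesDirect -CimSession 'hci-cluster'

New-Volume -CimSession 'hci-cluster' -StoragePoolFriendlyName 'S2D*' `
           -FriendlyName 'Volume01' -FileSystem CSVFS_ReFS -Size 200GB

# A virtual switch for the child workloads, on each node
Invoke-Command -ComputerName 'hci-node1', 'hci-node2' -ScriptBlock {
    New-VMSwitch -Name 'Compute' -NetAdapterName 'Ethernet 2' -AllowManagementOS $false
}
```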

  • play around with your new cluster
  • (optional) clean all/redeploy if needed:
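And a small sketch of the teardown (careful – this removes the node VMs and their disks):

```powershell
'hci-node1', 'hci-node2' | ForEach-Object {
    Stop-VM -Name $_ -TurnOff -Force
    Remove-VM -Name $_ -Force
    Remove-Item -Path "D:\AzureStackHCI\$_" -Recurse -Force
}
```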

And now you have a fully functional, nested, 2-node Azure Stack HCI cluster – nothing too fancy, but you can extend it however you wish! 😊

You can begin exploring Azure Stack HCI itself, use it with Azure Arc, or perhaps install AKS on Azure Stack HCI and play around with it. Or something else.

Cheers!

P.S. You can use these scripts also for stuff other than Azure Stack HCI, of course! 😉
P.P.S. Code is also available on my GitHub page.

What about this Bicep?

You’ve probably heard about Azure Resource Manager (ARM) – the deployment and management service/layer of Azure, which enables you to manage (create, configure, delete) your Azure resources. You are probably also aware that ARM uses so-called ARM templates – basically, JSON files that define the infrastructure and configuration you want to deploy to Azure (think Infrastructure as Code, IaC).

So, if you have dealt with ARM/JSON in the past, you may have found it difficult to start with, and somewhat complex.

Bicep is here to help.

Here is a short overview of Bicep – basically, it's a language that makes deploying Azure resources easier, without messing around (too much) with JSON. To be frank, it somewhat resembles Terraform, but it's also different. It has many cool features, immediately supports all new Azure features and APIs, can be built (converted) into .json and deployed as such or deployed straight away as .bicep, doesn't require a state file, it's open and free, has great support in Visual Studio Code, and much more. And it's still in active development!

If you’re dealing with IaC and Azure, try it.

To show you the power (and simplicity) of Bicep, here is a short example of deploying a Linux virtual machine in Azure (together with a resource group, virtual network, virtual network subnet, virtual NIC and network security group), done “the old way” (in JSON, which was actually converted from Bicep… it's easier than writing JSON from scratch) and then done via Bicep (“the right way”? 😀).

Additionally, you'll see that I've tried to break stuff into modules – with more or less success. 😀

The ARM/JSON way (could be done nicer/shorter, with parameters inside .parameters.json… if you know what you’re doing – this is converted from Bicep and serves just for illustrative purposes):

 

The Bicep way:
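A condensed, single-file sketch of the Bicep version (kept at resource-group scope, so it leaves out the resource group module and some of the splitting from my actual files – names, sizes, image and API versions are illustrative):

```bicep
param location string = resourceGroup().location
param adminUsername string
@secure()
param adminPassword string

resource nsg 'Microsoft.Network/networkSecurityGroups@2022-07-01' = {
  name: 'demo-nsg'
  location: location
}

resource vnet 'Microsoft.Network/virtualNetworks@2022-07-01' = {
  name: 'demo-vnet'
  location: location
  properties: {
    addressSpace: {
      addressPrefixes: [
        '10.0.0.0/16'
      ]
    }
    subnets: [
      {
        name: 'default'
        properties: {
          addressPrefix: '10.0.0.0/24'
          networkSecurityGroup: {
            id: nsg.id
          }
        }
      }
    ]
  }
}

resource nic 'Microsoft.Network/networkInterfaces@2022-07-01' = {
  name: 'demo-nic'
  location: location
  properties: {
    ipConfigurations: [
      {
        name: 'ipconfig1'
        properties: {
          privateIPAllocationMethod: 'Dynamic'
          subnet: {
            id: vnet.properties.subnets[0].id
          }
        }
      }
    ]
  }
}

resource vm 'Microsoft.Compute/virtualMachines@2022-11-01' = {
  name: 'demo-vm'
  location: location
  properties: {
    hardwareProfile: {
      vmSize: 'Standard_B2s'
    }
    osProfile: {
      computerName: 'demo-vm'
      adminUsername: adminUsername
      adminPassword: adminPassword
    }
    storageProfile: {
      imageReference: {
        publisher: 'Canonical'
        offer: 'UbuntuServer'
        sku: '18.04-LTS'
        version: 'latest'
      }
      osDisk: {
        createOption: 'FromImage'
      }
    }
    networkProfile: {
      networkInterfaces: [
        {
          id: nic.id
        }
      ]
    }
  }
}
```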



Bicep seems a bit easier to read and shorter, right (while still doing basically the same thing)? 😀

If we deploy the .bicep files above (note that I’m deploying the “raw” .bicep file directly – which is cool!):
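The commands themselves look roughly like this (resource group, location and file names are illustrative):

```bash
# Resource-group scope (when the resource group already exists):
az deployment group create --resource-group bicep-demo-rg --template-file ./main.bicep

# Subscription scope (when the template itself also creates the resource group):
az deployment sub create --location westeurope --template-file ./main.bicep
```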


We finally get our resources:

So, where should you start if you’re new to Bicep?

I would certainly recommend starting with free and official Deploy and manage resources in Azure by using Bicep learning path on Microsoft Learn.

After that, you can probably pick up Freek Berson’s book Getting started with Bicep: Infrastructure as Code on Azure (the first and only book on Bicep that I know of – I really liked it because of the simple (yet effective) examples with storage accounts; it connects everything and flows naturally, building up “brick by brick” instead of “jumping around” just to show off what Bicep can do).

The Bicep examples are another great resource – there’s plenty to learn from them too!

Of course, you’ll also need to practice – install the Azure CLI or Azure PowerShell module, add Bicep and use Visual Studio Code for your first steps with creating, deleting, configuring and breaking stuff… powered by Bicep! 😀

Cheers!

Capturing network trace in Windows

Do you need to capture some network traffic on a Windows box for further analysis, but don’t want to install additional software just… everywhere?

I usually do.

If you didn’t know, Windows has a built-in tool with which you can do just that – (among other things) capture a network trace to a file for further analysis. The tool is called netsh.

So, how do you capture traffic with netsh?

It’s fairly easy (for more options, filters and such, you can always check the accompanying help content – netsh trace start ?):
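The basic start/stop sequence looks like this (the path and file name are just my example; add maxsize=, filters and so on as needed):

```
netsh trace start capture=yes tracefile=C:\Temp\MyTrace.etl

(... reproduce the issue you are troubleshooting ...)

netsh trace stop
```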

If you look at the location where you’ve saved your trace, you’ll see two files – of those two files, MyTrace.etl is the one you want:

OK, but what do you do with it?

If you try to open it with, for example, WireShark, you’ll see it doesn’t work:

So… we have a trace file with which we can’t really do anything?!?

Not exactly!

If you have Microsoft Network Monitor (now archived, but can be found… on the Internet) or Microsoft Message Analyzer (now retired), you can open up and analyze your trace as you normally would:

If you already have WireShark on, let's say, your workstation, and want to continue using it for the analysis, the trace needs to be converted into a format that WireShark understands (hopefully, one day WireShark will open such .etl files natively).

You can convert it by using the free tool called etl2pcapng.

It doesn’t require installation, and if you want to use the pre-compiled binaries, they are available under etl2pcapng releases.

So, convert your (netsh) MyTrace.etl to (WireShark’s) MyTrace.pcapng with this command:
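(Assuming etl2pcapng.exe is in the current folder or on the PATH:)

```
etl2pcapng.exe MyTrace.etl MyTrace.pcapng
```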

Once converted, you can open the new file (MyTrace.pcapng) in WireShark, and do what you would usually do to analyze it:

Hope this helps!

Cheers!

Fixing Hyper-V virtual machine import with Compare-VM

Well, I was rearranging some stuff the other day and came across an interesting “lesson learned”, which I’ll share. 🙂

In my lab, I had a Hyper-V server running Windows Server 2012 R2, which I finally wanted to upgrade to something newer. I decided to go with the latest Windows Server Insider Preview (SA 20180), just for fun.

When trying to do an in-place upgrade, I was presented with the message “it can’t be done“, which is fine – my existing installation has a GUI, and the new one will be Core.

So, evacuate everything and reinstall.

In the process, I also reorganized some stuff (machines were moved to another disk, not all files were in the same place, etc.).

Installed Windows, installed Hyper-V, created VM switches, but when I tried to import it all back (from PowerShell… because I had no GUI anymore), I was presented with an error.

The error during the virtual machine import was (I know – I could’ve used a more specific Import-VM command, which would select all the right folders and required options, but… I learned something new by doing it this way!):

So, the error says it all – “Please use Compare-VM to repair the virtual machine.” 🙂

But how?! 🙂

If you go to the docs page of Compare-VM, you can see how it’s used.

And, in my case, the whole process of repairing this virtual machine looks like this:
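Roughly like this (the configuration file path and the switch name below are illustrative – a typical incompatibility is a network adapter pointing to a virtual switch that no longer exists, which is what this sketch fixes):

```powershell
# 1) Generate a compatibility report for the VM being imported
$report = Compare-VM -Path 'D:\Hyper-V\MyVM\Virtual Machines\<VM-GUID>.vmcx' `
                     -VirtualMachinePath 'D:\Hyper-V\MyVM' `
                     -VhdDestinationPath 'D:\Hyper-V\MyVM\Virtual Hard Disks' -Copy

# 2) See what Hyper-V is complaining about
$report.Incompatibilities | Format-Table -AutoSize

# 3) Fix it – e.g. reconnect a network adapter to a switch that actually exists
$report.Incompatibilities[0].Source | Connect-VMNetworkAdapter -SwitchName 'External'

# 4) Import the (now repaired) virtual machine using the compatibility report
Import-VM -CompatibilityReport $report
```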

Hope this helps you as well!

Cheers!

Veeam Best Practices

My last post in 2019 was about Veeam Backup for Office 365 – I think it’s only fair to continue the story. 🙂

If you haven’t noticed this short post by Niels Engelen, you may be unaware that the good people at Veeam have put together a Best Practice Guide for Veeam Backup for Office 365!

The great thing about this guide is that it’s really a “live document” – it covers design, configuration and operations for VBO, and it will be updated regularly, so make sure to bookmark it and check it from time to time!

Also, there is a Best Practice Guide for Veeam Backup & Replication, which should be bookmarked and checked regularly as well, in case you forgot about it! 🙂

Cheers!

Backing up Office 365 to S3 storage (Exoscale SOS) with Veeam

Are you backing up your Office 365? And… why not? 🙂

I’m not going into the lengthy and exhausting discussion of why you should take care of your data, even if it’s stored in something unbreakable like “the cloud”, at least not in this post. I would like to focus on one of the features of the new Veeam Backup for Office 365 v4, which was released just the other day. This feature is “object storage support“, as you may have guessed already from the title of this fine post!

So, this means that you can take Amazon S3, Microsoft Azure Blob Storage or even IBM Cloud Object Storage and use it for your Veeam Backup for Office 365. And even better – you can use any S3-compatible storage to do the same! How cool is that?!

To test this, I decided to use the Exoscale SOS (also S3-compatible) storage for backups of my personal Office 365 via Veeam Backup for Office 365.

I’ve created a small environment to support this test (and later production, if it works as it should) and basically did the following:

  • created a standard Windows Server 2019 VM on top of Microsoft Azure, to hold my Veeam Backup for Office 365 installation
    (good people at Microsoft provided me Azure credits, so… why not?!)
  • downloaded Veeam Backup for Office 365
    (good people at Veeam provided me NFR license for it, so I’ve used it instead of Community Edition)
  • created an Exoscale SOS bucket for my backups
    (good people at Exoscale/A1TAG/A1.digital/A1HR provided me credits, so… why not?!)
  • installed Veeam Backup for Office 365
    (it’s a “Next-Next-Finish” type of installation, hard to get it wrong)
  • configured Veeam Backup for Office 365 (not so hard, if you know what you are doing and you’ve read the official docs)
    • added a new Object Storage Repository
    • added a new Backup Repository which offloads the backup data to the previously created Object Storage Repository
    • configured a custom AAD app (with the right permissions)
    • added a new Office 365 organization with AAD app and Global Admin account credentials (docs)
    • created a backup job for this Office 365 organization
    • started backing it all up

Now, a few tips on the “configuration part”:

  • Microsoft Azure:
    • no real prerequisites and tips here – a simple Windows VM, on which I'm installing the downloaded software (there is a list of system requirements if you want to make sure it's all “by the book”)
  • Exoscale:
    • creating the Exoscale SOS bucket is relatively easy, once you have your account (you can request a trial here) – you choose the bucket name and zone in which data will be stored and… voilà:

    • if you need to make adjustments to the ACL of the bucket, you can (quick ACL with private setting is just fine for this one):

    • to access your bucket from Veeam, you’ll need your API keys, which you can find in the Account – Profile – API keys section:

    • one other thing you’ll need from this section is the Storage API Endpoint, which depends on the zone you’ve created your bucket in (mine was created inside AT-VIE-1 zone, so my endpoint is https://sos-at-vie-1.exo.io):

  • Office 365:
    • note: I’m using the Modern authentication option because of MFA on my tenant and… it’s the right way to do it!
    • for this, I created a custom application in Azure Active Directory (AAD) (under App registrations – New registration) (take a note of the Application (client) ID, as you will need it when configuring Veeam):

    • I’ve added a secret (which you should also take a note of, because you’ll need it later) to this app:

    • then, I've added the minimal required API permissions to this app (as per the official docs) – but note that the official docs have an error (at this time), which I reported to Veeam – you'll need the SharePoint Online API access permissions even if you don't use certificate-based authentication(!) – so, the permissions that work for me are:

    • UPDATE: Got word back from Veeam development – the additional SharePoint permissions may not be necessary after all; maybe I just needed to wait a bit longer… I'll retry without those permissions next time. 🙂
    • after that, I’ve enabled the “legacy authentication protocols”, which is still a requirement (you can do it in Office 365 admin center – SharePoint admin center – Access Control – Apps that don’t use modern authentication – Allow access or via PowerShell command “Set-SPOTenant -LegacyAuthProtocolsEnabled $True”):

    • lastly, I’ve created an app password for my (global admin) account (which will also be required for Veeam configuration):

  • Veeam Backup for Office 365:
    • add a new Object Storage Repository:

    • add a new Backup Repository (connected to the created Object Storage Repository; this local repository will only store metadata – backup data will be offloaded to the object storage and can be encrypted, if needed):

    • add a new Office 365 organization:

    • create a backup job:

    • start backing up your Office 365 data:

Any questions/difficulties with your setup?
Leave them in the comments section, I’ll be happy to help (if I can).

Cheers!

Microsoft AZ-500 down, more to go

Another month, another Azure cert! 🙂

So, for the last couple of weeks, I was reading about, learning and playing around with Azure security technologies, mainly as a preparation for AZ-500 (Microsoft Azure Security Technologies) exam.

And then… today I took the exam and… PASSED!

I must say, with a few certificates under my belt, this exam was not the easiest I’ve taken. I felt prepared, and still – passing it demanded concentration on the details and a bit of thinking! Nonetheless, it’s over now – one down, more to go!

Note that… by passing this exam, I’m not automatically an Azure security guru (!) – it just means that I know a thing or two about what Azure offers in terms of security and how it works. 🙂

What did I use to prepare?

There is a great book about Azure governance called Pro Azure Governance and Security, written by my MVP colleagues Peter De Tender, David Rendon and Samuel Erskine. Its purpose is not to be an exam prep guide, but to dive into the world of governance and security features available within Microsoft Azure (which are part of the exam, who would have known).

There is also a great post containing a bunch of helpful AZ-500 material from Stanislas Quastana, located here. Thomas also provided some useful links in his post here, and even did a webinar on Azure Security Center (hosted by Altaro) the other day – you can find the recording here.

Of course, there is also the official exam page with the skills measured, and Microsoft Docs.

And… don’t forget to try things out yourself! There is also a free Azure subscription, you know?! 🙂

If you’ll be taking this exam – good luck, and I hope these resources help you!

Cheers!