Exporting the Managed Favorites from Edge

It’s time for another short “vibe story”.
Although the “vibe coding” part was not that short… oh, well. 😅

So, there is a feature in Microsoft Edge called Managed Favorites – basically, it’s a policy which adds some predefined (let’s say, work-related) favorites to your Edge browser. Which is pretty cool, if you’re using Microsoft Edge. If not, you would maybe also want to have the same favorites in your other browser(s). With all other favorites, Export/Import would do the trick. But not with these – if you tried exporting them, you may have noticed that they don’t get exported with the rest. 🤷‍♂️

What can be done then?

You could go to your admin and nicely ask them to provide these favorites so that you can create/import them yourself.

Or, you could go to your registry and read the list (it’s actually JSON… but saved as a single string) from HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Edge\ManagedFavorites and create these favorites/bookmarks in your other browser(s):
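If you want to peek at the raw data yourself, a minimal PowerShell sketch (just an illustration – the actual export script does quite a bit more) could look something like this:

    # Read the policy value and parse the JSON string (nested folders/"children" not handled here)
    $raw = Get-ItemPropertyValue -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Edge' -Name 'ManagedFavorites'
    $favorites = $raw | ConvertFrom-Json
    $favorites | Where-Object { $_.url } | Select-Object name, url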

Or, and this is the option I was vibe coding with my AI companion, you can get the Export-ManagedFavorites.ps1 script and run it to export them (as HTML) for you! 🙂
With managed favorites exported by the script, the toughest part is done, and you can easily import them into your favorite (pun intended) browser(s):

Maybe there are other options (probably there are), but I wanted to have this little PowerShell script for when I need it.

For more info and the script itself, go and check out my GitHub.

Cheers!

P.S. If you think I forgot about the “not that short vibe coding part” – my AI companion didn’t seem to understand my desire to have the final HTML formatted in a certain way… so, it took some time (and a few trips down the rabbit hole) to persuade it to do it the right way. 😅😀

Git vibes?

Another day, another project nobody really needs… but here it is! 😁

As I was reorganizing some stuff, mostly across multiple Git repositories (both internal and on GitHub), I noticed that sometimes I forget to save my work. So, to remember (and potentially stop this from happening), I came up with a small and simple PowerShell script to help me remember. Of course, it was vibe-coded with ChatGPT’s help.

The idea behind it – I have a local folder with multiple projects/repositories I’m working on. As I mostly switch from one thing to another (or from computer to computer, or…), I sometimes do something and forget to save it to the remote Git instance. And as things with computers tend to happen… 🙂

Smart people of the Internet say (borrowed from https://mastodon.social/@nixCraft/111489234007874526):

So, to potentially stop forgetting (and losing my work), a simple PowerShell script (Check-GitRepos.ps1) goes through the local “projects” folder, pulls the latest updates, and asks about and commits any local changes (if there are some). For now, it doesn’t create additional branches, commit to them, etc. – that may be a feature of the next version.
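The core idea, stripped down to a sketch (the actual script on GitHub adds logging, nicer output and more checks – and the folder path below is just a placeholder):

    # Rough sketch: pull each repo under the projects folder, then offer to commit & push any local changes
    $projectsRoot = 'C:\Projects'   # hypothetical "projects" folder
    Get-ChildItem -Path $projectsRoot -Directory |
        Where-Object { Test-Path (Join-Path $_.FullName '.git') } |
        ForEach-Object {
            Push-Location $_.FullName
            git pull --quiet
            if (git status --porcelain) {
                $answer = Read-Host "Uncommitted changes in $($_.Name) - commit and push? (y/n)"
                if ($answer -eq 'y') {
                    git add -A
                    git commit -m "WIP $(Get-Date -Format 'yyyy-MM-dd HH:mm')"
                    git push
                }
            }
            Pop-Location
        }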

The current version is just fine for my personal “use case” – smart, simple and quick.

Examples of running it on a local folder:

So, not much else to add – it does what it’s supposed to do. And, as always, it’s available on my GitHub.

Cheers!

P.S. Yeah, I also thought about the question that presents itself – and who will remind me to run the script?! Oh, well… 🤷‍♂️😅

P.P.S. There is also git-fire, which may help in emergencies.

The Need for (Internet) Speed… graph

Another one of the more or less useful “projects” that got my attention was enabling Zabbix (while it monitors everything else) to also keep track of my Internet speed. There are sources on the Internet that tell you how (easy it is) to do it, but I was not that lucky – I had issues trying to set it up. 🤷‍♂️🙂


What I’m using:
– Zabbix 7 (installed on Ubuntu 24.04)
(https://www.zabbix.com/download?os_distribution=ubuntu)
– Speedtest CLI by Ookla
(https://www.speedtest.net/apps/cli)
– some scripts


So, Zabbix is installed and working, no big surprise there.


Next thing we need to install is the Speedtest CLI by Ookla – it’s a simple installation… sort of.
I’m installing it directly on the Zabbix server.

(copy/paste from Ookla’s site – https://www.speedtest.net/apps/cli):
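From memory, the commands on that page look roughly like this (check Ookla’s site for the current version before copy/pasting):

    # Add Ookla's packagecloud repository and install the speedtest package
    curl -s https://packagecloud.io/install/repositories/ookla/speedtest-cli/script.deb.sh | sudo bash
    sudo apt-get install speedtest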

This usually works, but didn’t work for me. 🙂

Something cannot be found (apt is complaining about the repository’s “noble Release”)… hmmm… maybe if we look for “jammy Release” instead? 🤔

That (= changing “noble” to “jammy” in the sources list file) fixed it! Yaay! And now we can install speedtest:
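In other words, something along these lines (the repository file name below is an assumption – adjust it to whatever the install script created on your system):

    # Point the Ookla repository from "noble" to "jammy", refresh and install
    sudo sed -i 's/noble/jammy/g' /etc/apt/sources.list.d/ookla_speedtest-cli.list
    sudo apt-get update
    sudo apt-get install speedtest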

If all goes well, we can check the version:

And we can test it a couple of times manually, to see if it’s working:
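For example (on the first run, the CLI will ask you to accept the license – the flags below skip that):

    speedtest --version
    speedtest --accept-license --accept-gdpr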


Cool! This works! 🙂
Now the tough part – how can I make Zabbix pick it up?!


Let’s just say that the “normal and direct way” didn’t work that well (the Speedtest tool was probably hitting timeouts on the Zabbix side and had issues delivering its results to Zabbix – I was seeing timeout errors).


So, another solution was necessary – to work around the timeout errors, I’m actually running the Speedtest tool separately, storing/caching its data temporarily, and then Zabbix picks the values up when it’s in the mood to do so. As I’m taking measurements every 15 minutes or so, there is plenty of time to pick the numbers up without running into timeouts.


The solution is (relatively) simple:

– script that runs and stores/caches measurements (speedtest-cache.sh)
– systemd service and timer that run the script in 15-minute intervals (zabbix-speedtest.service, zabbix-speedtest.timer)
– script that reads measurements from the Zabbix side (speedtest-read.sh)
– Zabbix items representing measured values (template-speedtest-cache.xml)
– widget/graph that shows data in Zabbix


And here it is:
/usr/lib/zabbix/externalscripts/speedtest-cache.sh:
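A minimal sketch of such a cache script (the cache file location and flags are my assumptions):

    #!/bin/bash
    # Run speedtest and write the JSON result to a temporary cache file for Zabbix to read later
    CACHE_FILE=/tmp/speedtest-cache.json
    /usr/bin/speedtest -f json --accept-license --accept-gdpr > "${CACHE_FILE}.tmp" && mv "${CACHE_FILE}.tmp" "$CACHE_FILE"
    chmod 644 "$CACHE_FILE"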

/etc/systemd/system/zabbix-speedtest.service:
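A sketch of what the oneshot service could look like (user and paths are assumptions):

    [Unit]
    Description=Run speedtest and cache results for Zabbix

    [Service]
    Type=oneshot
    User=zabbix
    ExecStart=/usr/lib/zabbix/externalscripts/speedtest-cache.sh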

/etc/systemd/system/zabbix-speedtest.timer:
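And a matching timer sketch, firing every 15 minutes (don’t forget systemctl daemon-reload and enabling the timer afterwards):

    [Unit]
    Description=Run the speedtest cache script every 15 minutes

    [Timer]
    OnBootSec=2min
    OnUnitActiveSec=15min

    [Install]
    WantedBy=timers.target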

/usr/lib/zabbix/externalscripts/speedtest-read.sh:
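The read side can be as simple as pulling a single value out of the cached JSON with jq – a sketch, assuming the Zabbix items pass a jq filter (e.g. .download.bandwidth) as the first parameter:

    #!/bin/bash
    # Return one value from the cached speedtest JSON; $1 is the jq filter coming from the Zabbix item
    CACHE_FILE=/tmp/speedtest-cache.json
    jq -r "$1" "$CACHE_FILE"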

(manually imported) template-speedtest-cache.xml:

Added all the scripts, timers and services, fixed all the permissions, checked that I have data, and now my Internet Speed Monitor widget looks like this (it’s not much, but hey! 🙂):

Cheers!

P.S. I’m not a Zabbix expert, and I’m not responsible if this “solution” causes you any troubles!

P.P.S. Don’t just copy everything from the Internet! It’s full of bad people (and code)! 😉

Installing the new Veeam Software Appliance v13

It’s finally here – the Veeam Software Appliance, v13!


What is it?

A simple-to-deploy, hardened Veeam instance, which is not installed on Windows anymore, but comes with its own (Rocky) Linux – everything packed into a nice software appliance!

Very nice!


Naturally, it should be installed ASAP! 🙂


Installation

So, without actually reading the manual, I went and installed it in my lab (that’s how easy it is to get started with it!). There will be plenty of time to read the manual once issues start… right?!

Installation has a few steps:

  • obtain the installation ISO image from here (or from your account page):
    • be careful to select the Linux appliance, and not the Windows installation ISO

  • prepare hardware to install it to – for me, it’s a Windows Server 2025 Hyper-V VM (4 vCPU, 18 GB RAM, 2×240 GB HDD, SecureBoot (MS UEFI CA) enabled):
    • check the requirements here

  • once you’ve selected and prepared your hardware, you can start the installation – it looks like this:

  • in case you didn’t read the hardware requirements, you may face this error (so, go back, re-read the hardware requirements and update your hardware):

  • and after this question, the (automatic) installation proceeds, with no more inputs required from your side:
    • while waiting for it to install, I recommend you take a look at the nice, shiny, new What’s new document!

  • installation took ~35 minutes on my machine


Initial configuration

After the automatic installation finished, there are a couple of initial settings that have to be taken care of:

  • accepting the necessary agreements:

  • assigning a hostname
  • setting up networking
  • configuring time source
  • setting up passwords for the admin accounts (Host Administrator and (optionally) Security Officer):
    • I really liked the process of setting up MFA for the Host Administrator here (as the Security Officer is optional, MFA for that account will be set up later, inside the web interface)!
    • don’t use passwords that are too short… or the same! 😁

  • summary:

And… that’s it! Veeam is installed and initially configured, and now you can access it via browser:

  • host management at https://<vsa-ip-address>:10443/

  • Veeam Backup & Replication web interface at https://<vsa-ip-address>/
    • (or just use the Windows console for full experience)

What a nice installation experience! Well done, Veeam people! 👏

Of course, next I’ll install my license, add the rest of the infrastructure, set up my backup jobs, and connect it to (Veeam ONE) monitoring.

After all, it’s a “normal” Veeam solution we already know and work with.

Cheers!

Vibe adding the Cloudflare records

One of the summer night “lab sessions” produced this one – I was testing something and needed to add a couple of new DNS records in my Cloudflare account. Of course, “the normal way” would be to log in to the (beautiful) web page and do just that, but I wanted to do it differently. Of course – with PowerShell. And of course – with the help of my vibe coding AI companion, ChatGPT. 🙂

So, the idea was born – let’s vibe-produce a PowerShell script which will check and add some (A, CNAME and TXT) records to my Cloudflare-hosted domain, back up the state before and after for… purposes, and report to me what was done in a nice, readable way.

I could have done the same thing via the web page even faster, but… 🤷‍♂️

The result is a script called Add-CfDNSRecord.ps1, which is available on my GitHub, and does just that! 🙂
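Under the hood it’s just the Cloudflare v4 API – a stripped-down sketch of the kind of call the script makes (zone ID, token and record values below are placeholders):

    # Hypothetical values - replace with your own zone ID, API token and record data
    $zoneId  = '<your-zone-id>'
    $headers = @{ Authorization = 'Bearer <your-api-token>'; 'Content-Type' = 'application/json' }

    # Check whether the record already exists...
    $existing = Invoke-RestMethod -Method Get -Headers $headers `
        -Uri "https://api.cloudflare.com/client/v4/zones/$zoneId/dns_records?type=A&name=test.example.com"

    # ...and add it if it's not there yet
    if (-not $existing.result) {
        $body = @{ type = 'A'; name = 'test.example.com'; content = '203.0.113.10'; ttl = 3600; proxied = $false } | ConvertTo-Json
        Invoke-RestMethod -Method Post -Headers $headers `
            -Uri "https://api.cloudflare.com/client/v4/zones/$zoneId/dns_records" -Body $body
    }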

And for the script to work, you will need an API token, which you can create as follows:

  1. Login to your account at https://dash.cloudflare.com/
  2. Go to Profile (top right “user” menu)
  3. Go to API Tokens (left menu)
  4. Create Token
  5. You can use the Edit zone DNS template
  6. Configure the settings of your token (as constrained as possible in terms of zone, duration, etc.)
  7. Use this token with the script

This can be improved, but it does the job (for me).

Use at your own risk (like with everything you find online)!

Cheers!

Organize pictures and videos… the “vibe” way

The idea for this one came to mind one summer evening, when I was searching for something on my disks, and realized – it’s a mess.

So, I started sorting out this mess by first organizing images and videos backed up from my phone(s) into folders. Phone backups are a nice thing… and usually it’s all in a single folder.

OK, there are options… but it is what it is – now I have a folder called something like “Mobile-Backup-XXX”, with all the files in it… no subfolders. 🤷‍♂️

Of course, when I opened this folder with thousands of files and started moving them manually to their respective subfolders, it soon became clear that I needed help (OK, maybe that was obvious from the start 😁).

And whom do you call for help these days? Ghostbusters? 🤔

Well, no – the answer is always “AI”. More precisely, I called (free) ChatGPT.

Long story short, it helped me write a nice PowerShell script which takes my folder with thousands of files and organizes it a bit by moving those files into (sub)folders named by the date they were taken or created.
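The gist of it, boiled down to a few lines (the real script also reads the “date taken” metadata, logs what it did, and handles edge cases – the folder path is a placeholder):

    # Simplified sketch: group files by date and move them into per-date subfolders
    $source = 'D:\Mobile-Backup-XXX'   # hypothetical backup folder
    Get-ChildItem -Path $source -File | ForEach-Object {
        $dateFolder = Join-Path $source ($_.LastWriteTime.ToString('yyyy-MM-dd'))
        if (-not (Test-Path $dateFolder)) { New-Item -ItemType Directory -Path $dateFolder | Out-Null }
        Move-Item -Path $_.FullName -Destination $dateFolder
    }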

After some time, we got the script working, some logging was added, and it was ready for testing – I tested it on a few folders, and then realized that it sometimes has issues with reading the “right” metadata, so we re-engineered that part.

Some time later, after some other tiny things were polished, the script was ready and doing its work just as I expected! Nice!

Now, instead of a folder with thousands of images and videos, I have a folder with hundreds of subfolders… 😅

And if you put stuff into folders, you don’t have to look at it, and it doesn’t bother you anymore, right?! 😁

But OK – it’s a first step in organizing stuff! 😊

Could it be improved?! Of course! But… 🤷‍♂️

The script (Organize-PicsAndVids.ps1) is, as always, available on my GitHub.

Cheers!

P.S. This was also somewhat inspired by an episode from “Scott and Mark Learn To…” series of podcasts by Scott Hanselman and Mark Russinovich – make sure you subscribe and watch them regularly! They rock! 😊

Vibe coding while waiting on my packages

Recently, I was waiting for some packages to arrive via DHL Germany. As I also had some spare time, and it was raining outside, I was playing around with PowerShell and DHL’s API to see if I could get the status of my packages in a nice (to me), formatted way.

TL;DR: I can. 🙂

The idea was to build a small script which would take a list of tracking numbers for my incoming packages, check their latest statuses and whatever history it can find, and display it all in a nice, formatted way.

So, it all started with exploring the API possibilities on DHL’s Developer website.

To be able to use their APIs, you must register (for a free account):

Next, you have to request API access, so that you get your API key and secret – you do this by creating an app and selecting the APIs it will use (I selected Shipment Tracking – Unified, as it sounded right):

Then you wait a bit for someone/something to approve your request (a few minutes), and you are ready to go:

Now you have all the building blocks, and it’s time to code… finally. 🙂

I won’t explain the code line by line (it’s not that interesting, and it could be written more nicely), but I will just say that I used the help of (the free) ChatGPT (or rather – it used my ideas?! Who knows… 😁) and “vibe-coded” the whole thing. There were errors, misunderstandings, etc., but we managed to get to something that does the job:
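To give an idea of what’s behind it, the Shipment Tracking – Unified API essentially boils down to one GET call with your API key (the key and tracking number below are placeholders):

    # Hypothetical values - replace with your own API key and tracking number
    $apiKey = '<your-dhl-api-key>'
    $trackingNumber = '<your-tracking-number>'

    $response = Invoke-RestMethod -Method Get `
        -Uri "https://api-eu.dhl.com/track/shipments?trackingNumber=$trackingNumber" `
        -Headers @{ 'DHL-API-Key' = $apiKey }

    # Latest status plus whatever history is available, in a readable form
    $response.shipments[0].status
    $response.shipments[0].events | Select-Object timestamp, statusCode, description | Format-Table -AutoSize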

Just one thing – as I was looking at the outputs, I saw these “detailed info” codes and decided to explore what they mean – for this, I found a CSV file with the explanations, which I then decided to incorporate into my script… just for fun.

The script (Track-DHLShipment.ps1) is available on my GitHub.

What’s next? Don’t know… probably my packages will indeed arrive. 🙂

Cheers!

P.S. This was somewhat inspired by an episode from “Scott and Mark Learn To…” series of podcasts by Scott Hanselman and Mark Russinovich – make sure you subscribe and watch them regularly! They rock! 😊

Remote Desktop connection to an Entra ID-joined Windows Server with Entra ID credentials… quick and dirty

Connecting via Remote Desktop to an Entra ID-joined Windows machine, by using the Entra ID credentials, should be easy, right? It usually is… if you have covered all the prerequisites.

Multiple such guides are around, but none of them lists all the steps needed (or I just haven’t found the right one) – I chose to follow this one, from my MVP colleague Tom Wechsler, available here.

Some points before we start:

In my case, all the required steps that allowed me to finally access my Entra ID-joined Windows (Server) machine in Azure via (publicly accessible) Remote Desktop (FOR DEMO PURPOSES) are the following:

Create an Entra ID-joined Windows virtual machine in Azure:

  • first, you will need to select a supported operating system (Windows 10/11, Windows Server Datacenter 2019/2022):
    • worked for me with both Windows Server 2019 Datacenter and Windows Server 2022 Datacenter

  • another thing needed during the creation of the machine itself is to select Login with Microsoft Entra ID under the Management settings, so that your machine is automatically joined to Entra ID and gets the extension installed (note that this option automatically creates a system-assigned managed identity as well; if not selected at creation time, it can also be added later, under the virtual machine’s Extensions):

Create a user that will connect to the virtual machine:

  • the easiest thing would be to create a fresh new Entra ID user that will be used for connecting to this virtual machine (you can also use an existing user, but make sure it is not using MFA, which would prevent you from connecting) – my user will be called vmuser
  • try signing in to the Azure portal with this new user if you need to change its password, or just to skip the MFA setup:

  • as the machine is Entra ID-joined, under the Access Control (IAM) settings of the virtual machine (in my case, temp-1), assign this user (vmuser) either the Virtual Machine User Login or the Virtual Machine Administrator Login role:

Connect to the virtual machine with a local admin account (created with the machine):

  • if you run dsregcmd /status command, the following should be configured already:
    • AzureAdJoined: YES
    • AzureAdPrt: NO
    • IsDeviceJoined: YES
  • next, go to the System Properties – Remote Desktop settings and disable the Network Level Authentication option (enabled by default):

On the computer you will be connecting from (not Entra ID-joined):

  • next, download the RDP connection file from the portal:

  • and then edit the downloaded RDP file (with Notepad) – see the sketch after this list for how it could end up looking (more on the available options):
    • remove the line with the username (as it will be provided on connection)
    • add the following two lines:
      • enablecredsspsupport:i:0
      • authentication level:i:2

  • try connecting to the virtual machine now, by using the edited RDP file, username, and password of your Entra ID account:
    • for the username, make sure you are using the AzureAD\upn-or-email-address format (in my case, AzureAD\[email protected])

  • your connection should be working, and you will have either the user or admin permissions on the system (depending on the assigned Entra ID role):
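For reference, the edited RDP file (mentioned above) could end up looking roughly like this – the address is a placeholder, and the last two lines are the ones added by hand:

    full address:s:<vm-public-ip>:3389
    prompt for credentials:i:1
    administrative session:i:0
    enablecredsspsupport:i:0
    authentication level:i:2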

Hope this helps.

Cheers!

Challenge with FM radio signal… Raspberry Pi to the rescue!

Not so long ago (actually, a weekend or two ago), I was presented with a real-life issue – an issue that needs to be taken care of… ASAP!

Production was suffering. Production of high-quality foods in my mom’s kitchen, that is! 🙂

So, what was the issue?

To better help you understand the issue, I need to introduce the environment first – there’s my mom’s kitchen, from which many amazing dishes come out on a daily basis.

And there’s a small FM radio in this kitchen, keeping her company when cooking alone – nothing special, but it’s an essential part of the kitchen (and the overall cooking process)!

About two weekends ago, the user (mom) started complaining that the radio is having issues with the reception of her favorite FM station. It’s not good when users start complaining, of course. Especially if they are the important ones!

If this isn’t taken care of, production (of food) may suffer! 🙂

So, let’s solve the issue.

As nothing has changed from the FM radio’s perspective, it seems that the issue is somewhere else. After a bit of research, it seems that adding a new frequency for the user’s favorite radio station somehow impacted the remaining two (one of which we were using)… and now we have bad reception.

Tried switching to the other two frequencies… didn’t help. This station is transmitted on at least three frequencies, but none of them gives us good reception anymore.

Even tried with another antenna… no luck.

Switching to another radio station… is not an option. 🙂

When I was thinking about other options, I remembered that this radio station also streams over the Internet (like the example I’m using below)!

Splendid!

As I had this spare Raspberry Pi just standing there, collecting dust, an idea was born – turn it into the “Internet radio”!

The initial solution needs to be as basic as possible, headless, work as soon as it’s connected, wireless (as much as possible), and stream the radio station in question. Rather than ditching the FM radio, I’ll use it for the output part – so, the Raspberry Pi’s 3.5 mm output goes into the AUX IN of the FM radio, using its amplifier and speakers (switching to the AUX input is just one click away, which is fine).

I started by preparing my Raspberry Pi:

  • downloaded Raspberry Pi Imager
  • used it to download and customize the Raspbian image (Raspberry Pi OS (32-bit)):
    • set hostname
    • enabled SSH
    • set username and password
    • configured wireless LAN
    • configured locale settings
  • booted my Raspberry Pi and did additional configuration via the included raspi-config utility:
    • configured System Options – Boot/Auto Login – Console Autologin
    • configured some other tiny things (like extending the storage, etc.)

Now it seems that I’m prepared for bringing up the “software part”.

After some reading and trying things out, I decided to go with VLC Player.

Now I just need to make it play what I want, start playing on power-on, and do it without any other interaction.

Luckily, it’s not thaaat hard! 🙂
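Installing VLC and testing the stream manually goes something like this (the stream URL is just a placeholder – use your station’s actual stream):

    # Install VLC and play the stream from the console (cvlc = VLC without the GUI)
    sudo apt install -y vlc
    cvlc --aout=alsa http://stream.example.com/radio.mp3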

Great! That works if I manually start it… and there are no issues.

For the autostart part, I’m choosing to run it as a service, so:
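A minimal unit file along these lines does the job – the file name, user and stream URL are assumptions, and the 30-second delay gives the WiFi time to come up:

    # /etc/systemd/system/internet-radio.service (sketch)
    [Unit]
    Description=Internet radio stream via VLC
    After=network-online.target
    Wants=network-online.target

    [Service]
    Type=simple
    User=pi
    ExecStartPre=/bin/sleep 30
    ExecStart=/usr/bin/cvlc --aout=alsa http://stream.example.com/radio.mp3
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

Then just reload systemd and enable it, e.g. sudo systemctl daemon-reload && sudo systemctl enable --now internet-radio.service.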

And… that’s it!

With a few hits and misses, there’s finally a simple wireless Internet radio, which starts playing once the Raspberry Pi powers on (and connects to WiFi, and waits for 30 seconds, of course)! No more bad FM reception, and the user is satisfied! 🙂

Cheers!

Using a self-hosted runner with GitHub Actions

As I was going through the excellent short course called Azure Infrastructure as Code with GitHub (by fellow MVP, Barbara Forbes), a thought appeared – what do I need to do to use my custom runner machine inside a pipeline for… I don’t know… security/privacy concerns, isolation, special requirements, different OS, control, price… or just to complicate things a bit?

Of course, GitHub supports this and it’s called a self-hosted runner.

So, what do I need to do to use this self-hosted runner with my GitHub Actions?

It’s relatively simple – there is an application package, which will be installed on your runner machine, and which will listen for and eventually do all the work defined in your workflow!

But first, let’s introduce my environment.

I have a simple GitHub Action (workflow), which creates a simple storage account in my Azure environment (there is actually no need to convert Bicep to ARM before deployment, but it seemed cool 😀). It’s currently using the “ubuntu-latest” runner, provided by GitHub… which also has all the needed components inside (like Azure CLI, Azure PowerShell, …).

And it works fine. When there is a push to my GitHub repository, GitHub Actions starts and does what is needed in my Azure environment via this workflow:
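A sketch of what such a workflow can look like (this is not my exact file – the resource group, secret and file names below are placeholders):

    # .github/workflows/deploy.yml (sketch)
    name: deploy-storage

    on: [push]

    jobs:
      deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v3

          - uses: azure/login@v1
            with:
              creds: ${{ secrets.AZURE_CREDENTIALS }}

          - name: Convert Bicep to ARM and deploy
            run: |
              az bicep build --file main.bicep
              az deployment group create \
                --resource-group my-rg \
                --template-file main.json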

And the mighty Bicep file (😀) it’s using for the deployment is:
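A minimal storage-account Bicep file looks something like this (names and SKU are example values):

    // main.bicep (sketch) - deploys a single storage account
    param location string = resourceGroup().location
    param storageAccountName string = 'st${uniqueString(resourceGroup().id)}'

    resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
      name: storageAccountName
      location: location
      sku: {
        name: 'Standard_LRS'
      }
      kind: 'StorageV2'
    }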

Of course, this runs just fine on a standard (hosted) runner:

To run this workflow (successfully) on my own runner, not that much is needed.

First, I created a new virtual machine (a simple Ubuntu Hyper-V VM, no autoscaling, no… nothing) called hermes (god of speed 😀), with a freshly installed Ubuntu 22.04.1 LTS (minimized).

After that, I went to the Settings of my GitHub repository and got the download and install scripts for the x64 Linux runner:
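Those commands follow the usual self-hosted runner pattern – roughly like this, with the version, URL and registration token coming from your own repository’s settings page:

    # Download, extract and configure the runner (values below are placeholders)
    mkdir actions-runner && cd actions-runner
    curl -o actions-runner-linux-x64.tar.gz -L https://github.com/actions/runner/releases/download/v2.300.0/actions-runner-linux-x64-2.300.0.tar.gz
    tar xzf ./actions-runner-linux-x64.tar.gz
    ./config.sh --url https://github.com/<your-user>/<your-repo> --token <registration-token>
    ./run.sh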

As you can see, I’ll be using crontab later to automatically (re)start my self-hosted runner.
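Something like this in the runner user’s crontab does the trick (the path is an assumption):

    # crontab -e: start the runner at boot
    @reboot cd /home/ubuntu/actions-runner && ./run.sh >> /home/ubuntu/actions-runner/runner.log 2>&1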

If everything went well, you should see your runner “up and running” (😀) in the GitHub portal:

Next, I’ll use the following script to install all prerequisites for my workflow (like Azure CLI, Azure PowerShell, etc. – it really depends on your workflow and things you use):
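A sketch of such a prerequisites script (what exactly goes in here depends on your workflow – mine needed Azure CLI and Azure PowerShell):

    #!/bin/bash
    # Azure CLI (Microsoft's install script for Debian/Ubuntu)
    curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

    # PowerShell (via snap, for simplicity) and the Az module for the runner user
    sudo snap install powershell --classic
    pwsh -Command "Install-Module -Name Az -Force -Scope CurrentUser"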

Once this is done, my self-hosted runner hermes should be ready to run the workflow.

To try this, I need to make a slight update to my workflow file – the runs-on line inside the job configuration (line 12 in my file) should be changed from “runs-on: ubuntu-latest” to “runs-on: self-hosted”.

So, my workflow YAML file now looks like this:
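Only the relevant fragment is shown here – the rest of the file stays the same as in the sketch above:

    jobs:
      deploy:
        runs-on: self-hosted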

And once I push the configuration to my GitHub, my workflow automatically starts and runs on hermes, my self-hosted runner:

If we prepared our runner right, all is good! 😊

Of course, our resources are deployed successfully:

So, this is how you can use your own, self-hosted runner, to execute your GitHub Actions (workflows).

Cheers!