Network share feeds in WAC

You know about (and are actively using) the Windows Admin Center (WAC), right?! 😉

While it’s great for managing your Microsoft infrastructure, it can also be extended with various extensions. You can even write and use your own custom, internal extensions, which do… well, whatever you make them do. And you can read all about that here.

But let’s go back to the subject of today’s post – extensions can be installed via different feeds, official or unofficial, provided by Microsoft or third parties. You can easily add new feeds or remove existing ones by providing the feed location, which can be either a NuGet feed URL or a file share location, as stated in the official docs.

Using a file share location is easy:

  • you choose/create a folder:

  • share it (\\<my_server_name>\WACExtensions in my case):

  • and add it to your feeds – I’ll use the “PowerShell way”:
# import the WAC Extensions PowerShell module
Import-Module "C:\Program Files\Windows Admin Center\PowerShell\Modules\ExtensionTools"

# list current feeds
Get-Feed -GatewayEndpoint ""

# add new extension feed (network share)
Add-Feed -GatewayEndpoint "" -Feed "\\<my_server_name>\WACExtensions"

# check if added successfully
Get-Feed -GatewayEndpoint ""

But no!

My feed seems to be added successfully, but it’s not showing in the list!

You can try the same through the web interface – the outcome is almost the same (OK, there you’ll at least get errors):

And permissions are fine, don’t worry. 😉

Why’s that?!

The catch here is that we added an empty folder/share – when adding it, WAC looked into the folder, found nothing and (successfully!) didn’t add our share to the feed list, as it’s empty. And yes, when used from PowerShell, it also forgot to mention any of that.

So, what can be done?

The workaround/solution is rather simple – just make sure you don’t add an empty feed/folder.

Just for fun – I’ve downloaded the HPE Extension for WAC, moved it into the WACExtensions shared folder and tried to add the feed again:

# add new extension feed (network share)
Add-Feed -GatewayEndpoint "" -Feed "\\<my_server_name>\WACExtensions"

# check if added successfully
Get-Feed -GatewayEndpoint ""
### \\<my_server_name>\WACExtensions

And – it worked! 😊


Having fun with Helm and file encoding

Had some spare time, so I’ve tried to learn a bit more about Helm, the package manager for Kubernetes.

I’ve decided to follow the relatively new Pluralsight course called Kubernetes Package Administration with Helm, by my MVP colleague Andrew Pruski. And it was great – not too long, clear and easy to follow, with only a handful of prerequisites if you want to follow along! Great job!

Of course, there is also the nice, official documentation.

But why am I writing this post?

I was following this course on my Windows 10 laptop, using Visual Studio Code, as suggested, together with a PowerShell terminal and Helm v3.3.1.

It all went well until the part where we create our Helm chart – more specifically, when we fill up our deployment.yaml and service.yaml files. The suggested (and simplest) method is plain output redirection (with “>“), like this:

kubectl create deployment nginx --image=nginx --dry-run=client --output=yaml > .\ourchart\templates\deployment.yaml

But, this gave me the following error when trying to deploy the chart:

helm install ourchart .\ourchart
# Error: unable to build kubernetes objects from release manifest: error parsing : error converting YAML to JSON: yaml: invalid leading UTF-8 octet

It’s quite obvious – Helm works with UTF-8, and my .yaml files were encoded differently (Windows PowerShell’s “>” redirection writes UTF-16 LE by default). A quick look at the bottom of my VSCode window confirms it:

How can I fix it?

As I’m using PowerShell, the fix is pretty easy – instead of the plain output redirection (“>“), I pipe the output to the Out-File cmdlet with the -Encoding UTF8 option, in all cases, which takes care of the encoding (and sets it to UTF-8 with BOM, which is just fine for Helm):

kubectl create deployment nginx --image=nginx --dry-run=client --output=yaml | Out-File .\ourchart\templates\deployment.yaml -Encoding UTF8

So, long story short – if you run into the error above (“Error: unable to build kubernetes objects from release manifest: error parsing : error converting YAML to JSON: yaml: invalid leading UTF-8 octet”), remember to check your file’s encoding (and change it to UTF-8, if needed)! 🙂


P.S. Thanks to good people at Pluralsight for providing me a complimentary subscription!

Fixing Hyper-V virtual machine import with Compare-VM

Well, I was rearranging some stuff the other day and came across an interesting “lesson learned”, which I’ll share. 🙂

In my lab, I’ve had a Hyper-V server running Windows Server 2012 R2, which I finally wanted to upgrade to something newer. I’ve decided to go with the latest Windows Server Insider Preview (SA 20180), just for fun.

When trying to do an in-place upgrade, I was presented with the message “it can’t be done“, which is fine – my existing installation is with GUI, the new one will be Core.

So, evacuate everything and reinstall.

In the process, I’ve also reorganized some stuff (machines were moved to another disk, not all files were in the same place, etc.).

Installed Windows, installed Hyper-V, created VM switches – but when I tried to import everything back (from PowerShell… because I had no GUI anymore), I was presented with an error.

The error during the virtual machine import was the following (I know – I could’ve used a more specific Import-VM command, which would select all the right folders and required options, but… I learned something new by doing it this way!):

Import-VM -Path 'D:\VMs\azshci\azshci1\Virtual Machines\5381F80D-C752-42E0-AE26-6402B019B785.vmcx'
# Import-VM : Unable to import virtual machine due to configuration errors.  Please use Compare-VM to repair the virtual machine.
# At line:1 char:1
# + Import-VM -Path 'D:\VMs\azshci\azshci1\Virtual Machines\5381F80D-C752-42E0-AE26-6402B019B785.vmcx'
# + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#     + CategoryInfo          : InvalidOperation: (:) [Import-VM], VirtualizationException
#     + FullyQualifiedErrorId : OperationFailed,Microsoft.HyperV.PowerShell.Commands.ImportVM

So, the error says it all – “Please use Compare-VM to repair the virtual machine.” 🙂

But how?! 🙂

If you go to the docs page of Compare-VM, you can see how it’s used.

And, in my case, the whole process of repairing this virtual machine looks like this:

# create an incompatibility report for a virtual machine
$report = Compare-VM -Path 'D:\VMs\azshci\azshci1\Virtual Machines\5381F80D-C752-42E0-AE26-6402B019B785.vmcx'

# check the created report as some incompatibilities were found
# VM                 : VirtualMachine (Name = 'azshci1') [Id = '5381F80D-C752-42E0-AE26-6402B019B785']
# OperationType      : ImportVirtualMachine
# ...
# Incompatibilities  : {40010, 40010, 40010, 40010}
# ...

# check what exactly is incompatible (VHD locations, in this case)
$report.Incompatibilities.Source | Format-Table
# VMName  ControllerType ControllerNumber ControllerLocation DiskNumber Path
# ------  -------------- ---------------- ------------------ ---------- ----
# azshci1 SCSI           0                0                             C:\VMs\azshci\azshci1\azshci1_0_0.vhdx
# azshci1 SCSI           1                0                             C:\VMs\azshci\azshci1\azshci1_1_0.vhdx
# azshci1 SCSI           1                1                             C:\VMs\azshci\azshci1\azshci1_1_1.vhdx
# azshci1 SCSI           1                2                             C:\VMs\azshci\azshci1\azshci1_1_2.vhdx

# fixing the VHD locations (by replacing C: with D: for all VHDs)
$report.Incompatibilities[0].Source | Set-VMHardDiskDrive -Path 'D:\VMs\azshci\azshci1\azshci1_0_0.vhdx'
$report.Incompatibilities[1].Source | Set-VMHardDiskDrive -Path 'D:\VMs\azshci\azshci1\azshci1_1_0.vhdx'
$report.Incompatibilities[2].Source | Set-VMHardDiskDrive -Path 'D:\VMs\azshci\azshci1\azshci1_1_1.vhdx'
$report.Incompatibilities[3].Source | Set-VMHardDiskDrive -Path 'D:\VMs\azshci\azshci1\azshci1_1_2.vhdx'

# recheck if all looks fine
$report.Incompatibilities.Source | Format-Table
# VMName  ControllerType ControllerNumber ControllerLocation DiskNumber Path
# ------  -------------- ---------------- ------------------ ---------- ----
# azshci1 SCSI           0                0                             D:\VMs\azshci\azshci1\azshci1_0_0.vhdx
# azshci1 SCSI           1                0                             D:\VMs\azshci\azshci1\azshci1_1_0.vhdx
# azshci1 SCSI           1                1                             D:\VMs\azshci\azshci1\azshci1_1_1.vhdx
# azshci1 SCSI           1                2                             D:\VMs\azshci\azshci1\azshci1_1_2.vhdx

# import the fixed virtual machine (success!)
Import-VM -CompatibilityReport $report
# Name    State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
# ----    ----- ----------- ----------------- ------   ------             -------
# azshci1 Off   0           0                 00:00:00 Operating normally 9.0
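By the way, the four Set-VMHardDiskDrive lines above do nothing smarter than swapping the drive letter in each VHD path. With more disks, you’d want to generate the new paths instead of typing them – the rewrite rule itself is trivial, sketched here in Python just to illustrate (the drive letters are the ones from my move):

```python
import ntpath

def remap_drive(path, old_drive="C:", new_drive="D:"):
    """Replace the drive letter of a Windows path, keeping the rest intact."""
    drive, rest = ntpath.splitdrive(path)
    if drive.upper() == old_drive.upper():
        return new_drive + rest
    return path

# example:
# remap_drive(r"C:\VMs\azshci\azshci1\azshci1_0_0.vhdx")
# -> 'D:\\VMs\\azshci\\azshci1\\azshci1_0_0.vhdx'
```

In PowerShell, the equivalent would be looping over $report.Incompatibilities and piping each Source to Set-VMHardDiskDrive with the remapped path.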

Hope this helps you as well!


Fixing things with… terraform destroy

I like Terraform, because it’s so clean, fast and elegant; OK, I also suck at it, but hey – I’m trying! 🙂

The long story

Usually, Terraform and its providers are very good at doing things in the order they should be done. But sometimes people come up with silly ideas, and mine (of course) was one of those – I’ve decided to rename something, and it broke things. Just a little.

I have a simple lab in Azure, with a couple of virtual machines behind the Azure Load Balancer, no big deal. All this is being deployed (and redeployed) via Terraform, using the official azurerm provider. It’s actually a Standard SKU Azure Load Balancer (don’t ask why), with a single backend pool and a few probes and rules. Nothing special.

I’ve deployed this lab a few days ago (thanks to the good people at Microsoft, I have some free credits to spare), and everything worked just fine – but today I got a wild idea: I’ve decided to rename my backend pool.

With all the automation in place, this shouldn’t be a problem… one would think. 🙂

So, I’ve updated my code, and during the make it so phase (terraform apply) I got some errors (truncated, with only the useful stuff):

Error Creating/Updating LoadBalancer: network.LoadBalancersClient
Failure sending request:
Original Error:
Code="InvalidResourceReference... .../k8sthw-lb/backendAddressPools/k8sthw-lb-pool referenced by resource
/subscriptions/fake-azure-subscription-id/resourceGroups/k8sthw-rg/providers/Microsoft.Network/... ... or verify that both resources are in the same region." Details=[]


After going through these errors, I realized that my resources are indeed in the same region, but the existing rules reference the current backend pool by name, which was actually blocking Terraform from renaming it.

There are a couple of options at this stage – you can destroy your deployment and run it again (as I normally would) and it should all be fine. Or you can try to fix only the dependent resources and make it work as part of the existing deployment.

With some spare time on my hands, I tried the second option, and it actually worked.


Terraform has a nice option for destroying just some parts of the deployment.

If you look at help for the terraform destroy command, you can see the target option:

terraform destroy -help
# ...
# -target=resource Resource to target. Operation will be limited to this
#                  resource and its dependencies. This flag can be used
#                  multiple times.
# ...

And if you run it to fix your issues, you’ll get a nice red warning saying that this is only for exceptional situations (and they mean it!):

terraform destroy -target azurerm_lb_outbound_rule.k8sthw-lb-out
# Warning: Resource targeting is in effect
# You are creating a plan with the -target option, which means that the result
# of this plan may not represent all of the changes requested by the current
# configuration.
# The -target option is not for routine use, and is provided only for
# exceptional situations such as recovering from errors or mistakes, or when
# Terraform specifically suggests to use it as part of an error message.
# Do you really want to destroy all resources?
# Terraform will destroy all your managed infrastructure, as shown above.
# There is no undo. Only 'yes' will be accepted to confirm.
# Enter a value: yes
# ...


So… BE CAREFUL! (can’t stress this enough!)


Anyhow, I’ve destroyed rules which were preventing my rename operation:

terraform destroy -target azurerm_lb_outbound_rule.k8sthw-lb-out -target azurerm_lb_rule.k8sthw-lb-80-rule -target azurerm_lb_rule.k8sthw-lb-443-rule --auto-approve
# ...
# Destroy complete! Resources: 3 destroyed.

And then another terraform apply --auto-approve recreated everything that was needed, and finally – my backend pool got renamed:

# ...
# azurerm_lb_backend_address_pool.k8sthw-controllers-lb-pool: Refreshing state... [id=/subscriptions/fake-azure-subscription-id/resourceGroups/k8sthw-rg/providers/Micro...
# ...
# Apply complete! Resources: 2 added, 0 changed, 3 destroyed.

Another idea I’ve had was to taint the resources (terraform taint -help), which would probably be a lot nicer. Oh, well… maybe next time. 🙂

As things are constantly improving, it shouldn’t be long until this is fixed (should it even be fixed?!). Until then, hope this will help you with similar issues!


CRC – OpenShift 4 cluster in your pocket

… but only if you have large pockets, that is! 🙂

I suppose that, by now, you’ve already heard of minishift – a tool which helps you run a single-node OpenShift 3.x cluster inside a virtual machine (on your preferred hypervisor). This is extremely convenient if you need to do some local development, testing, etc. Of course, you can always deploy what you need on top of Azure or any other cloud provider, but having an OpenShift cluster locally can have its benefits.

And what if you need, let’s say, an OpenShift 4.x cluster “to go”?

Don’t worry, there is a solution for you as well! It’s called CodeReady Containers (CRC) (version 1.9 at the time of writing), and is basically a single node OpenShift 4.x cluster, inside a virtual machine (sounds a lot like minishift, doesn’t it?).

So, how can you get your hands on CRC and make it work?

There are a couple of steps involved:

  • download the OpenShift tools (as I’m using Windows, I’ll use this one):

  • unzip all to a location you like (mine is C:\PocketOpenShift):

  • (optional) add this location to PATH for easier usage (as already mentioned, I’m using Windows & PowerShell):
# note: machine-level PATH changes apply only to newly started sessions
[System.Environment]::SetEnvironmentVariable("PATH", $Env:Path + ";C:\PocketOpenShift", "Machine")

  • run “crc setup” which will prepare the environment:
crc setup
# INFO Checking if oc binary is cached
# INFO Checking if podman remote binary is cached
# INFO Checking if CRC bundle is cached in '$HOME/.crc'
# INFO Checking if running as normal user
# INFO Checking Windows 10 release
# INFO Checking if Hyper-V is installed and operational
# INFO Checking if user is a member of the Hyper-V Administrators group
# INFO Checking if Hyper-V service is enabled
# INFO Checking if the Hyper-V virtual switch exist
# INFO Found Virtual Switch to use: Default Switch
# Setup is complete, you can now run 'crc start' to start the OpenShift cluster
  • run “crc start” (with one or more options, if needed – as you can see, I’ll be using a custom number of vCPUs, a custom amount of RAM and a custom nameserver):
crc start --cpus 4 --memory 8192 --nameserver <dns_server_ip> --pull-secret-file ".\pull-secret.txt"
# INFO Checking if oc binary is cached
# INFO Checking if podman remote binary is cached
# INFO Checking if running as normal user
# INFO Checking Windows 10 release
# INFO Checking if Hyper-V is installed and operational
# INFO Checking if user is a member of the Hyper-V Administrators group
# INFO Checking if Hyper-V service is enabled
# INFO Checking if the Hyper-V virtual switch exist
# INFO Found Virtual Switch to use: Default Switch
# INFO Loading bundle: crc_hyperv_4.3.10.crcbundle ...
# INFO Checking size of the disk image C:\Users\tomica\.crc\cache\crc_hyperv_4.3.10\crc.vhdx ...
# INFO Creating CodeReady Containers VM for OpenShift 4.3.10...
# INFO Verifying validity of the cluster certificates ...
# INFO Adding <dns_server_ip> as nameserver to the instance ...
# INFO Will run as admin: add dns server address to interface vEthernet (Default Switch)
# INFO Check internal and public DNS query ...
# INFO Check DNS query from host ...
# INFO Copying kubeconfig file to instance dir ...
# INFO Adding user's pull secret ...
# INFO Updating cluster ID ...
# INFO Starting OpenShift cluster ... [waiting 3m]
# INFO To access the cluster, first set up your environment by following 'crc oc-env' instructions
# INFO Then you can access it by running 'oc login -u developer -p developer https://api.crc.testing:6443'
# INFO To login as an admin, run 'oc login -u kubeadmin -p <a_secret_password> https://api.crc.testing:6443'
# INFO You can now run 'crc console' and use these credentials to access the OpenShift web console
# Started the OpenShift cluster
# WARN The cluster might report a degraded or error state. This is expected since several operators have been disabled to lower the resource usage. For more information, please consult the documentation
  • at any time, you can check the status of your cluster by using “crc status” command:
crc status
# CRC VM:          Running
# OpenShift:       Running (v4.3.10)
# Disk Usage:      10.67GB of 32.72GB (Inside the CRC VM)
# Cache Usage:     14.27GB
# Cache Directory: C:\Users\tomica\.crc\cache
  • once it’s up, you can use “oc login” or the console (bring it up with “crc console“) to connect to it, either as (kube)admin or as developer, and continue working as you normally would with any other OpenShift cluster:
oc login -u kubeadmin -p <a_secret_password> https://api.crc.testing:6443
# The server uses a certificate signed by an unknown authority.
# You can bypass the certificate check, but any data you send to the server could be intercepted by others.
# Use insecure connections? (y/n): y
# Login successful.
# You have access to 53 projects, the list has been suppressed. You can list all projects with 'oc projects'
# Using project "default".
# Welcome! See 'oc help' to get started.

  • one other thing I like to do is enable monitoring and the other disabled stuff (note, though – your VM should have 12+ GB of RAM for that). You can do it with two commands – the first one lists everything that is disabled, and the second one, with the index at the end, enables it (also note that, at the time of writing, the official documentation has the double and single quotes switched, which doesn’t work in PowerShell):
oc get clusterversion version -ojsonpath="{range .spec.overrides[*]}{.name}{'\n'}{end}" | % {$nl++;"`t$($nl-1) `t $_"};$nl=0
# 0 cluster-monitoring-operator
# 1 machine-config-operator
# 2 etcd-quorum-guard
# 3 machine-api-operator
# 4 cluster-autoscaler-operator
# 5 insights-operator
# 6 prometheus-k8s

oc patch clusterversion/version --type="json" -p "[{'op':'remove', 'path':'/spec/overrides/0'}]" -oyaml
# ...
# state: Completed
# ...
  • monitoring should now be working as well:
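One caveat with the patch command above: JSON Patch’s remove on an array index shifts all the following elements down, so after enabling entry 0, the former entry 1 becomes the new entry 0 – re-run the listing command before each patch instead of reusing stale indices. A tiny Python illustration of the semantics (a plain list standing in for .spec.overrides):

```python
# the overrides, in the order the jsonpath query listed them
overrides = [
    "cluster-monitoring-operator",  # index 0
    "machine-config-operator",      # index 1
    "etcd-quorum-guard",            # index 2
]

# JSON Patch {'op': 'remove', 'path': '/spec/overrides/0'} behaves like pop(0):
overrides.pop(0)

# the remaining entries have shifted down by one index
print(overrides[0])  # -> machine-config-operator
```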

And that’s it – you’re ready to work on your own “pocket OpenShift cluster”! 🙂

Of course, don’t forget that there is also the official documentation, including the Getting Started guide and Release Notes and Known Issues document. So… take a look!


Something new – Google Cloud Certified: Associate Cloud Engineer (ACE)

So, I’ve decided to end last year in style – as my Christmas gift to myself, I’ve purchased Linux Academy (monthly full-access plan) and taken their course to prepare for the Google Cloud Certified: Associate Cloud Engineer (ACE) exam.

And yesterday, I’ve taken and passed the certification exam (OK, still waiting on the official e-mail from Google; it takes 7-10 days, as it said on the exam). 🙂

UPDATE: It’s here!

This brought me another nice badge (as this is “a thing” now):


Well, I’ve been curious.

I know – “curiosity killed the cat”… except, this time, it didn’t! 🙂

Let’s get serious – I wanted to better understand how and why someone would use Google Cloud (GCP) instead of (or even in conjunction with), I don’t know, Microsoft Azure, AWS or even Exoscale.

The only fair way to do this is to learn about the platforms, spend some time using them and then judge their strengths and weaknesses (for your specific scenarios)… while keeping in mind that things move and change pretty fast these days (something you’re missing today in, for example, Microsoft Azure will perhaps be there in a matter of months, so… nothing is really final)! And this is exactly what I did/do/will keep doing – trying to stay on top of things.


I’ve already mentioned that I’ve taken the course, but here is a short list of what I’ve used, with links:

  • Linux Academy course (they say it adds about +30% on top of your GCP knowledge – sessions are nice, updated regularly, practice tests and hands-on labs certainly help a lot(!)… and you can take a monthly subscription)
  • hands-on experience in using GCP (open up a free account, add custom domain, set up identity provider, bring up some Kubernetes clusters, store some files, …, just play around a bit and try the things out by yourself!)
  • exam guide (it’s nice to read what they’ll test you on)
  • official documentation (yes, you’ll need to read something as well… sorry)
  • practice exam (it’s there, it’s free, try it out… more than once!)

What’s next?

Considering Google Cloud – I’ll probably take a peek at Professional Cloud Architect certification as well.

Considering other learning/certifications – as I’ve done all of the Microsoft Azure exams (that were available at the time), maybe I’ll continue with deepening my knowledge about AWS, who knows… it all depends on my next adventures.

Considering everything else – we’ll see where I’ll end up in 2020 (I’m pretty sure I won’t be doing what/where I was doing in 2019, but we’ll see ;))!

Happy 2020!

Veeam Best Practices

My last post in 2019 was about Veeam Backup for Office 365 – I think it’s only fair to continue the story. 🙂

If you haven’t noticed this short post by Niels Engelen, you may be unaware that good people at Veeam put together a Best Practice Guide for Veeam Backup for Office 365!

The great thing about this guide is that it’s really a “live document” – it covers design, configuration and operations for VBO, and it will be updated regularly, so make sure to bookmark it and check it from time to time!

Also, there is a Best Practice Guide for Veeam Backup & Replication, which should be bookmarked and checked regularly as well, in case you forgot about it! 🙂


Backing up Office 365 to S3 storage (Exoscale SOS) with Veeam

Are you backing up your Office 365? And… why not? 🙂

I’m not going into the lengthy and exhausting discussion of why you should take care of your data even if it’s stored in something unbreakable like “the cloud” – at least not in this post. I’d like to focus on one of the features of the new Veeam Backup for Office 365 v4, which was released just the other day. This feature is “object storage support“, as you may have guessed already from the title of this fine post!

So, this means that you can take Amazon S3, Microsoft Azure Blob Storage or even IBM Cloud Object Storage and use it for your Veeam Backup for Office 365. And even better – you can use any S3-compatible storage to do the same! How cool is that?!

To test this, I decided to use the Exoscale SOS (also S3-compatible) storage for backups of my personal Office 365 via Veeam Backup for Office 365.

I’ve created a small environment to support this test (and later production, if it works as it should) and basically done the following:

  • created a standard Windows Server 2019 VM on top of Microsoft Azure, to hold my Veeam Backup for Office 365 installation
    (good people at Microsoft provided me Azure credits, so… why not?!)
  • downloaded Veeam Backup for Office 365
    (good people at Veeam provided me NFR license for it, so I’ve used it instead of Community Edition)
  • created an Exoscale SOS bucket for my backups
    (good people at Exoscale/A1TAG/ provided me credits, so… why not?!)
  • installed Veeam Backup for Office 365
    (it’s a “Next-Next-Finish” type of installation, hard to get it wrong)
  • configured Veeam Backup for Office 365 (not so hard, if you know what you are doing and you’ve read the official docs)
    • added a new Object Storage Repository
    • added a new Backup Repository which offloads the backup data to the previously created Object Storage Repository
    • configured a custom AAD app (with the right permissions)
    • added a new Office 365 organization with AAD app and Global Admin account credentials (docs)
    • created a backup job for this Office 365 organization
    • started backing it all up

Now, a few tips on the “configuration part”:

  • Microsoft Azure:
    • no real prerequisites and tips here – it’s a simple Windows VM, on which I’m installing the downloaded software (there is a list of system requirements if you want to make sure it’s all “by the book”)
  • Exoscale:
    • creating the Exoscale SOS bucket is relatively easy, once you have your account (you can request a trial here) – you choose the bucket name and zone in which data will be stored and… voilà:

    • if you need to make adjustments to the ACL of the bucket, you can (quick ACL with private setting is just fine for this one):

    • to access your bucket from Veeam, you’ll need your API keys, which you can find in the Account – Profile – API keys section:

    • one other thing you’ll need from this section is the Storage API Endpoint, which depends on the zone you’ve created your bucket in (mine was created inside the AT-VIE-1 zone, so my endpoint is the one listed for that zone):

  • Office 365:
    • note: I’m using the Modern authentication option because of MFA on my tenant and… it’s the right way to do it!
    • for this, I created a custom application in Azure Active Directory (AAD) (under App registrations – New registration) (take a note of the Application (client) ID, as you will need it when configuring Veeam):

    • I’ve added a secret (which you should also take a note of, because you’ll need it later) to this app:

    • then, I’ve added the minimal required API permissions to this app (as per the official docs) – but note that the official docs have an error (at this time), which I reported to Veeam – you’ll need the SharePoint Online API access permissions even if you don’t use certificate-based authentication(!) – so, the permissions which work for me are:

    • UPDATE: Got word back from Veeam development – the additional SharePoint permissions may not be necessary after all; maybe I just needed to wait a bit longer… I’ll retry without those permissions next time. 🙂
    • after that, I’ve enabled the “legacy authentication protocols”, which is still a requirement (you can do it in Office 365 admin center – SharePoint admin center – Access Control – Apps that don’t use modern authentication – Allow access or via PowerShell command “Set-SPOTenant -LegacyAuthProtocolsEnabled $True”):

    • lastly, I’ve created an app password for my (global admin) account (which will also be required for Veeam configuration):

  • Veeam Backup for Office 365:
    • add a new Object Storage Repository:

    • add a new Backup Repository (connected to the created Object Storage Repository; this local repository will only store metadata – backup data will be offloaded to the object storage and can be encrypted, if needed):

    • add a new Office 365 organization:

    • create a backup job:

    • start backing up your Office 365 data:

Any questions/difficulties with your setup?
Leave them in the comments section, I’ll be happy to help (if I can).


Deploying Kubernetes on top of Azure Stack (Development Kit)

If you had a chance to deploy Azure Stack or Azure Stack Development Kit (ASDK) in your environment, maybe you’ve asked yourself “OK, but what should I do with it now?“.

Well, one of many things you can do with it is offer your users the option to deploy Kubernetes clusters on top of it (at least, that’s what I did the other day… on my ASDK deployment) – in short, the official documentation has you pretty much covered. I know, Azure enables it as well… and the process here is similar – or the same.

The main thing you have to decide at the beginning is whether you’ll use Azure AD or ADFS for identity management (the same as with an Azure Stack deployment, if you remember from my previous posts). Why? Because the installation steps differ a bit.

Once you decide it (or you ask your Azure Stack administrator how it’s done in your case), you can proceed with the installation – I assume you have your Azure Stack/ASDK up and running.

Next, in the admin portal (https://adminportal.local.azurestack.external/), you’ll need to add the prerequisites from Azure Marketplace (for this, if you remember, your Azure Stack/ASDK has to be registered):

Once done, you’re ready to set up the service principal, to which you’ll then assign the required permissions on both sides – Azure and Azure Stack! (don’t forget this detail… it is well documented, but easy to overlook)

In case you don’t give your service principal the required permissions on both “sides”, you’ll probably get the “error 12” and your deployment will fail:

And you can see details in the log:

So… be careful with service principal and permissions! 🙂

The next thing you’ll need to make sure of is that you create a plan and an offer – but set your quotas right! It depends on your Kubernetes cluster deployment settings, but if you go with the defaults, the default quotas (disk, in particular) need to be expanded!

If not, you’ll probably get this error:

If you were careful while reading the official docs (with a few “lessons learned” from this post), and you’ve made it here… you’re probably all set to deploy your first Kubernetes cluster on top of your Azure Stack/ASDK.

In the user portal (https://portal.local.azurestack.external/), you now have the option to deploy something called Kubernetes Cluster (preview):

Here you really can’t miss much – you’ll give your deployment a brand new (or empty) resource group, user details (together with your public SSH key, of course), DNS prefix, number and size of nodes and service principal details:

After that, your deployment starts and runs for some time (it, again, depends on your hardware and settings you’ve chosen for your cluster). Hopefully, it will end with this message:

If all is good, you can SSH into one of your master nodes and see the details of your cluster:

One other thing that would be nice to have is the Kubernetes dashboard – the process of enabling it is well documented here:

And – you’re done!

You now have your own Kubernetes cluster deployment on top of your Azure Stack/ASDK! How cool is that?! 🙂

One last thing to note – currently, this is in preview (as it says on the template), but… it works. 🙂