blog.kaniski.eu | I just wanna learn!

12 Jan 2020

Something new – Google Cloud Certified: Associate Cloud Engineer (ACE)

So, I've decided to end last year in style - as my Christmas gift to me, I've purchased Linux Academy (monthly full-access plan) and taken their course to prepare for the Google Cloud Certified: Associate Cloud Engineer (ACE).

And yesterday, I took and passed the certification exam (OK, still waiting on the official e-mail from Google; it takes 7-10 days, as it said on the exam). 🙂

UPDATE: It's here!

This brought me another nice badge (as this is "a thing" now):

Why?

Well, I've been curious.

I know - "curiosity killed the cat"... except, this time, it didn't! 🙂

Let's get serious - I wanted to better understand how and why someone would use Google Cloud (GCP) instead of (or even in conjunction with), I don't know, Microsoft Azure, AWS, or even Exoscale.

The only fair way to do this would be to learn about the platforms, spend some time using them and decide about their strengths and weaknesses (for your specific scenarios)... while keeping in mind that things move and change pretty fast now (something you'll be missing today in, for example, Microsoft Azure, will perhaps be there in a matter of months, so... nothing is really final)! And this is exactly what I did/do/will keep doing - trying to stay on top of things.

How?

I've already mentioned that I've taken the course, but here is a short list of what I've used, with links:

  • Linux Academy course (they say it adds about +30% on top of your GCP knowledge - sessions are nice, updated regularly, practice tests and hands-on labs certainly help a lot(!)... and you can take a monthly subscription)
  • hands-on experience in using GCP (open up a free account, add custom domain, set up identity provider, bring up some Kubernetes clusters, store some files, ..., just play around a bit and try the things out by yourself!)
  • exam guide (it's nice to read what they'll test you on)
  • official documentation (yes, you'll need to read something as well... sorry)
  • practice exam (it's there, it's free, try it out... more than once!)

What's next?

Considering Google Cloud - I'll probably take a peek at the Professional Cloud Architect certification as well.

Considering other learning/certifications - as I've done all of the Microsoft Azure exams (that were available at the time), maybe I'll continue with deepening my knowledge about AWS, who knows... it all depends on my next adventures.

Considering everything else - we'll see where I'll end up in 2020 (I'm pretty sure I won't be doing what/where I was doing in 2019, but we'll see ;))!

Happy 2020!

7 Jan 2020

Veeam Best Practices

My last post in 2019 was about Veeam Backup for Office 365 - I think it's only fair to continue the story. 🙂

If you haven't noticed this short post by Niels Engelen, you may be unaware that good people at Veeam put together a Best Practice Guide for Veeam Backup for Office 365!

The great thing about this guide is that it's really a "live document" - it covers design, configuration and operations for VBO and will be updated regularly, so make sure to bookmark it and check it from time to time!

Also, there is a Best Practice Guide for Veeam Backup & Replication, which should be bookmarked and checked regularly as well, in case you forgot about it! ūüôā

Cheers!

27 Nov 2019

Backing up Office 365 to S3 storage (Exoscale SOS) with Veeam

Are you backing up your Office 365? And... why not? 🙂

I'm not going into the lengthy and exhausting discussion of why you should take care of your data, even if it's stored in something unbreakable like "the cloud" - at least not in this post. I would like to focus on one of the features of the new Veeam Backup for Office 365 v4, which was released just the other day. This feature is "object storage support", as you may have guessed already from the title of this fine post!

So, this means that you can take Amazon S3, Microsoft Azure Blob Storage or even IBM Cloud Object Storage and use it for your Veeam Backup for Office 365. And even better - you can use any S3-compatible storage to do the same! How cool is that?!

To test this, I decided to use the Exoscale SOS (also S3-compatible) storage for backups of my personal Office 365 via Veeam Backup for Office 365.

I've created a small environment to support this test (and later production, if it works as it should) and basically did the following:

  • created a standard Windows Server 2019 VM on top of Microsoft Azure, to hold my Veeam Backup for Office 365 installation
    (good people at Microsoft provided me Azure credits, so... why not?!)
  • downloaded Veeam Backup for Office 365
    (good people at Veeam provided me an NFR license for it, so I've used it instead of the Community Edition)
  • created an Exoscale SOS bucket for my backups
    (good people at Exoscale/A1TAG/A1.digital/A1HR provided me credits, so... why not?!)
  • installed Veeam Backup for Office 365
    (it's a "Next-Next-Finish" type of installation, hard to get it wrong)
  • configured Veeam Backup for Office 365 (not so hard, if you know what you are doing and you've read the official docs)
    • added a new Object Storage Repository
    • added a new Backup Repository which offloads the backup data to the previously created Object Storage Repository
    • configured a custom AAD app (with the right permissions)
    • added a new Office 365 organization with AAD app and Global Admin account credentials (docs)
    • created a backup job for this Office 365 organization
    • started backing it all up

Now, a few tips on the "configuration part":

  • Microsoft Azure:
    • no real prerequisites and tips here - a simple Windows VM, on which I'm installing the downloaded software (there is a list of system requirements if you want to make sure it's all "by the book")
  • Exoscale:
    • creating the Exoscale SOS bucket is relatively easy, once you have your account (you can request a trial here) - you choose the bucket name and zone in which data will be stored and... voilà:

    • if you need to make adjustments to the ACL of the bucket, you can (quick ACL with private setting is just fine for this one):

    • to access your bucket from Veeam, you'll need your API keys, which you can find in the Account - Profile - API keys section:

    • one other thing you'll need from this section is the Storage API Endpoint, which depends on the zone you've created your bucket in (mine was created inside AT-VIE-1 zone, so my endpoint is https://sos-at-vie-1.exo.io):

  • Office 365:
    • note: I'm using the Modern authentication option because of MFA on my tenant and... it's the right way to do it!
    • for this, I created a custom application in Azure Active Directory (AAD) (under App registrations - New registration) (take a note of the Application (client) ID, as you will need it when configuring Veeam):

    • I've added a secret (which you should also take a note of, because you'll need it later) to this app:

    • then, I've added the minimal required API permissions to this app (as per the official docs) - but note that the official docs have an error (at this time), which I reported to Veeam - you'll need the SharePoint Online API access permissions even if you don't use the certificate based authentication(!) - so, the permissions which work for me are:

    • UPDATE: Got back the word from Veeam development - additional SharePoint permissions may not be necessary after all, maybe I needed to wait a bit longer... will retry next time without those permissions. 🙂
    • after that, I've enabled the "legacy authentication protocols", which is still a requirement (you can do it in Office 365 admin center - SharePoint admin center - Access Control - Apps that don't use modern authentication - Allow access, or via the PowerShell command "Set-SPOTenant -LegacyAuthProtocolsEnabled $True" - see the sketch after this list):

    • lastly, I've created an app password for my (global admin) account (which will also be required for Veeam configuration):

  • Veeam Backup for Office 365:
    • add a new Object Storage Repository:

    • add a new Backup Repository (connected to the created Object Storage Repository; this local repository will only store metadata - backup data will be offloaded to the object storage and can be encrypted, if needed):

    • add a new Office 365 organization:

    • create a backup job:

    • start backing up your Office 365 data:
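The screenshots that accompanied these steps aren't included here, but the legacy-authentication change mentioned above boils down to a short PowerShell snippet - a minimal sketch, assuming the SharePoint Online Management Shell is installed and "yourtenant" is a placeholder for your own tenant name:

```powershell
# Sketch only - the admin URL is a placeholder, replace "yourtenant" with your tenant name
Connect-SPOService -Url "https://yourtenant-admin.sharepoint.com"

# Allow apps that don't use modern authentication (still required by VBO at this time)
Set-SPOTenant -LegacyAuthProtocolsEnabled $True
```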

Any questions/difficulties with your setup?
Leave them in the comments section, I'll be happy to help (if I can).

Cheers!

7 Aug 2019

Deploying Kubernetes on top of Azure Stack (Development Kit)

If you had a chance to deploy Azure Stack or Azure Stack Development Kit (ASDK) in your environment, maybe you've asked yourself "OK, but what should I do with it now?".

Well, one of many things you "can do with it" is offer your users to deploy Kubernetes clusters on top of it (at least, that was what I did the other day... on my ASDK deployment) - in short, official documentation has you pretty much covered. I know, Azure enables it as well... and the process here is similar, or - the same.

The main thing you have to decide at the beginning is whether you'll use Azure AD or ADFS for identity management (the same as with the Azure Stack deployment, if you remember from my previous posts). Why? Because the installation steps differ a bit.

Once you've decided (or asked your Azure Stack administrator how it's done in your case), you can proceed with the installation - I assume you have your Azure Stack/ASDK up and running.

Next, in the admin portal (https://adminportal.local.azurestack.external/), you'll need to add the prerequisites from Azure Marketplace (for this, if you remember, your Azure Stack/ASDK has to be registered):

Once done, you're ready to set up the service principal, to which you'll then assign the required permissions on both sides - the Azure side and the Azure Stack side! (don't forget this detail... it is well documented, but easy to overlook)

In case you don't give your service principal the required permissions on both "sides", you'll probably get the "error 12" and your deployment will fail:

And you can see details in the log:

So... be careful with service principal and permissions! 🙂

The next thing you'll need to make sure of is that you create a plan and an offer - but set your quotas right! It depends on your Kubernetes cluster deployment settings, but if you go with the defaults, the default quotas (disk, in particular) need to be expanded!

If not, you'll probably get this error:

If you were careful while reading the official docs (with a few "lessons learned" in this post), and you've made it to here... you're probably set to deploy your first Kubernetes cluster on top of your Azure Stack/ASDK.

In the user portal (https://portal.local.azurestack.external/), you now have the option to deploy something called Kubernetes Cluster (preview):

Here you really can't miss much - you'll give your deployment a brand new (or empty) resource group, user details (together with your public SSH key, of course), DNS prefix, number and size of nodes and service principal details:

After that, your deployment starts and runs for some time (it, again, depends on your hardware and settings you've chosen for your cluster). Hopefully, it will end with this message:

If all is good, you can SSH into one of your master nodes and see the details of your cluster:
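The screenshot with the cluster details isn't reproduced here, but checking the cluster from a master node usually comes down to a few standard kubectl commands - a sketch, not the exact commands from the original post:

```bash
# Run on one of the Kubernetes master nodes after SSH-ing in
kubectl cluster-info                  # API server and add-on endpoints
kubectl get nodes -o wide             # master and agent nodes, versions and IPs
kubectl get pods --all-namespaces     # system pods should all be Running
```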

One other thing that would be nice to have is the Kubernetes dashboard - the process of enabling it is well documented here:
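If you follow that documentation, reaching the dashboard typically involves proxying the cluster API to your workstation - a rough sketch, assuming kubectl is already configured against the cluster:

```bash
# Tunnel the Kubernetes API to localhost; the dashboard is then reachable through the
# proxied service URL given in the linked documentation (the exact path depends on the version)
kubectl proxy
```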

And - you're done!

You now have your own Kubernetes cluster deployment on top of your Azure Stack/ASDK! How cool is that?! 🙂

One last thing to note - currently, this is in preview (as it says on the template), but... it works. 🙂

Cheers!

1 Aug 2019

Microsoft AZ-500 down, more to go

Another month, another Azure cert! 🙂

So, for the last couple of weeks, I was reading about, learning and playing around with Azure security technologies, mainly as a preparation for AZ-500 (Microsoft Azure Security Technologies) exam.

And then... today I took the exam and... PASSED!

I must say, with a few certificates under my belt, this exam was not the easiest I've taken. I was feeling prepared and still - passing it demanded concentration on the details and a bit of thinking! Nonetheless, it's over now - one down, more to go!

Note that... by passing this exam, I'm not automatically an Azure security guru (!) - it just means that I know a thing or two about what Azure offers in terms of security and how it works. 🙂

What did I use to prepare?

There is a great book about Azure governance called Pro Azure Governance and Security, written by my MVP colleagues Peter De Tender, David Rendon and Samuel Erskine. Its purpose is not to be an exam prep guide, but to dive into the world of governance and security features available within Microsoft Azure (which are part of the exam, who would have known).

There is also a great post, containing a bunch of helpful AZ-500 material from Stanislas Quastana, located here, and Thomas provided some useful links in his post here and even did a webinar on Azure Security Center (hosted by Altaro) the other day - you can find the recording here.

Of course, there is also the official exam page with skills measured and docs.com.

And... don't forget to try things out yourself! There is also a free Azure subscription, you know?! 🙂

If you'll be taking this exam - good luck, and I hope these resources help you!

Cheers!

23 Jul 2019

Figuring out your public IP address with PowerShell

Sometimes, you need to know your public IP address because of... reasons. My particular reason was creating a firewall rule to allow SSH access to a machine on the Internet only from my current public IP address. And how to do it?

You can always use free services like What Is My IP?, which shows you your public IP address in a nice form:

But there are also other ways - if you're running Linux (or WSL) and do a Google search for the command that can help you, you'll probably get this (https://askubuntu.com/questions/95910/command-for-determining-my-public-ip?noredirect=1&lq=1):
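The screenshot with the answer isn't included here, but the accepted answers on that thread are typically along these lines (which external service you query is up to you - the two below are just common choices):

```bash
# Ask an external web service to echo back the address it sees
curl ifconfig.me

# Or DNS-based, without HTTP at all
dig +short myip.opendns.com @resolver1.opendns.com
```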

And if you're using Windows, PowerShell is here to help you! I like "oneliners", even if they are not always easy to read:
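The one-liner from the original screenshot isn't reproduced here, but something like the following does the job - a sketch relying on the public ipify service (an assumption on my part; any similar service works):

```powershell
# Query a public "what is my IP" API and keep just the address
(Invoke-RestMethod -Uri "https://api.ipify.org?format=json").ip
```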

I'm sure that my friend Aleksandar (PowerShell guru & Microsoft MVP) has a better way, but for me, this works just fine. 🙂

Hope it helps!

Cheers!

15 Mar 2019

Creating some virtual machines in Azure with PowerShell

The other day I was creating some Linux virtual machines (I know, I know...) and, with Azure being my preferred hosting platform, I decided to create these machines by using a simple PowerShell script. Not because I'm so good at PowerShell, but because I like it... and sometimes I really don't like clicking through the wizard to create multiple machines.

I wanted to create multiple machines with ease, each with "static" IP address from the provided subnet, accessible via the Internet (SSH, HTTP) and running the latest Ubuntu Linux, of course.

So, I was browsing through the official documentation (a.k.a. docs.com, more specifically https://docs.microsoft.com/en-us/azure/virtual-machines/linux/quick-create-powershell), and I've come up with this (my version of the official docs):
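The script itself was embedded in the original post and isn't reproduced here; below is a minimal sketch of the same idea using the Az PowerShell module (run Connect-AzAccount first). Resource names, region, VM count and size are my own illustrative assumptions, and the simplified New-AzVM parameter set is used instead of building every network resource by hand - so the "static private IP per VM" detail is left out of this sketch:

```powershell
# Sketch only - names, region, count and sizes below are illustrative, not the original script
$resourceGroup = "rg-linux-lab"
$location      = "westeurope"
$cred          = Get-Credential -Message "Admin credentials for the new VMs"

# Resource group that will hold everything
New-AzResourceGroup -Name $resourceGroup -Location $location

# Create three Ubuntu VMs, each with a public IP and SSH/HTTP reachable from the Internet
1..3 | ForEach-Object {
    $vmName = "ubuntu-vm-$_"
    New-AzVM -ResourceGroupName $resourceGroup `
             -Location $location `
             -Name $vmName `
             -Image "UbuntuLTS" `
             -Size "Standard_B1s" `
             -VirtualNetworkName "vnet-lab" `
             -SubnetName "default" `
             -PublicIpAddressName "$vmName-pip" `
             -OpenPorts 22, 80 `
             -Credential $cred
}
```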

If this helps you with a similar task - you're welcome.

Cheers!

17 Dec 2018

Renewing the expired Office Online/Web Apps Server farm certificate

Certificates sometimes expire... it happens! 🙂

But what happens if the certificate for your Office Online Server (OOS) or Office Web Apps Server (OWAS) farm expires and your farm is not available anymore?

Obviously, the OOS farm and your Skype for Business, Exchange & SharePoint integrations stop working. The next thing to do will be to renew the expired certificate.

But how?

My MVP colleague Andi Krüger did a nice blog post on updating the farm certificate, and it's fairly simple - Set-OfficeWebAppsFarm -CertificateName "RenewedOOSInternalCertificate" should do the trick... if your farm is running.

If things got out of hand, your farm is not running anymore and you cannot use the Set-OfficeWebAppsFarm cmdlet (you'll see that the Office Online (WACSM) service is Stopped and cannot be brought back up with the expired certificate, and your machine shows that it's no longer part of the farm), you'll need to take a different approach, because running the above-mentioned command will only return errors (like "It does not appear this machine is part of an Office Online Server farm." or similar).

WACSM service is Stopped and your machine is showing that it's no longer part of the farm

One of the possible solutions would be:

  • make a note of the Friendly Name of your old (expired) certificate (MMC or PowerShell) (in my case it's called "OOSInternalCertificate")
  • remove the expired certificate
  • renew/request/install the new certificate
  • change the Friendly Name of a new certificate to match the previous one
  • start the Office Online (WACSM) service or restart the machine
  • (copy the certificate/do the procedure on other farm members, if needed)
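A rough PowerShell sketch of the friendly-name and service-start steps above (the thumbprint and friendly name below are placeholders - use the values from your own environment):

```powershell
# Find the newly installed certificate in the local machine's Personal store
$cert = Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Thumbprint -eq "NEW_CERT_THUMBPRINT" }

# Give it the Friendly Name the farm expects (same as the old, expired certificate)
$cert.FriendlyName = "OOSInternalCertificate"

# Bring the Office Online (WACSM) service back up (or simply restart the machine)
Start-Service -Name WACSM
```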

Everything is back to normal

Your farm operations should now be restored and you can run the Get-OfficeWebAppsFarm cmdlet normally:

Or you can open up the farm's discovery URL - if it's rendering again, everything should be OK (in my case "https://oos.myfarm.local/hosting/discovery"):

Even the discovery works

Cheers!

12 Dec 2018

Counters missing when machines accessed remotely

Not so long ago, we observed an issue with remotely accessing the PhysicalDisk counters on several machines, more specifically - there were none. 🙂

To be clear - if you opened up the Performance Monitor (perfmon.exe) on the affected machine, you could see all the counters, including the PhysicalDisk counters. But, if you opened up the Performance Monitor on a different machine and tried to access the PhysicalDisk counters of the first machine over the network, they weren't shown anymore... but others (like CPU and Memory) were still there and could be used!

Counters shown normally on local computer and in local Performance Monitor

The same counters not visible from remote machine's Performance Monitor

So... why? 🙂

At first, we thought that our monitoring software had gone berserk, but no - the PhysicalDisk counters on a remote machine were missing even when we were using the built-in Performance Monitor tool.

Next - maybe it's something on the network? Of course, the network is never the issue, but still... (it wasn't the issue here either, because other counters worked without any problems)

Next, we thought it was related to the version of Windows we were accessing from, or the version at the destination - as we found out, too many different versions were impacted to hold that theory, so... no.

One thing we're not sure about is whether it was caused by some of the "not so recent security patches".

As we found the solution for our issue, what exactly caused it in the first place is not so important right now... The solution is simple - you need to run one command to re-register the system performance libraries with WMI (winmgmt /resyncperf) and then reboot the affected machine.

So, the commands you need are:
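The screenshot with the exact commands isn't included in this copy, but based on the text above it boils down to something like this (the reboot line is simply the quickest way to do the mandatory restart):

```cmd
:: Re-register the system performance libraries with WMI
winmgmt /resyncperf

:: ...and reboot the affected machine afterwards
shutdown /r /t 0
```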

After that, we can access all the needed counters (PhysicalDisk) remotely again:

Counters shown normally from remote computer and in local Performance Monitor

Cheers!

P.S. Don't forget to reboot the affected machine! 🙂

5 Dec 2018

Software Management in Linux (Packt)

Learn software management with advanced Linux administration in this tutorial by Frederik Vos, a Linux trainer and evangelist and a senior technical trainer of virtualization technologies, such as Citrix XenServer and VMware vSphere.

-- post by Frederik Vos, provided by Packt --

Software management

In the old days, installing software was a matter of extracting an archive to a filesystem. There were several problems with this approach:

  • It was difficult to remove the software if the files were copied into directories that were also used by other software
  • It was difficult to upgrade software, maybe because the files were still in use or were renamed
  • It was difficult to handle shared libraries

That's why Linux distributions invented software managers.

The RPM software manager

In 1997, Red Hat released the first version of their package manager, RPM. Other distributions such as SUSE adopted this package manager. RPM is the name of the rpm utility, as well as the name of the format and the filename extension.

The RPM package contains the following:

  • A CPIO archive
  • Metadata with information about the software, such as a description and dependencies
  • Scriptlets for pre and post-installation scripts

In the past, Linux administrators used the rpm utility to install/update and remove software on a Linux system. If there was a dependency, the rpm command was able to tell exactly which other packages you needed to install. However, the rpm utility couldn’t fix the dependencies or possible conflicts between packages.

Nowadays, the rpm utility isn’t used any longer to install or remove software; instead, you use more advanced software installers. After the installation of software with yum (Red Hat/CentOS) or zypper (SUSE), all the metadata goes into a database. Querying this rpm database with the rpm command can be very handy.

A list of the most common rpm query parameters is as follows:

Parameter Description
-qa List all the installed packages.
-qi <software> List information.
-qc <software> List the installed configuration files.
-qd <software> List the installed documentation and examples.
-ql <software> List all the installed files.
-qf <filename> Shows the package that installed this file
-V <software> Verifies the integrity/changes after the installation of a package; use -va to do it for all installed software.
-qp Use this parameter together with other parameters if the package is not already installed. It's especially useful if you combine this parameter with --script to investigate the pre and post-installation scripts in the package.

The following screenshot is an example of getting information about the installed SSH server package:
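The screenshot isn't reproduced here; on a CentOS/RHEL system the equivalent commands would look roughly like this (openssh-server is the example package, matching the text):

```bash
# Show metadata for the installed SSH server package
rpm -qi openssh-server

# Verify the package - changed configuration files show up with flags such as S and T
rpm -V openssh-server
```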

The output of the -V parameter indicates that the modification time has changed since the installation. Now, make another change in the sshd_config file:

If you verify the installed package again, there is an S added to the output, indicating that the file size is different, and a T, indicating that the modification time has changed:

Other possible characters in the output are as follows:

S File size
M Mode (permissions)
5 Checksum
D Major/minor on devices
L Readlink mismatch
U User ownership
G Group ownership
T Modification time
P Capabilities

For text files, the diff command can help show the differences between the backup in the /tmp directory and the configuration in /etc/ssh:
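Assuming the backup copy was placed in /tmp (as the text says), the comparison could look like this:

```bash
# Compare the backup with the live configuration
diff /tmp/sshd_config /etc/ssh/sshd_config
```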

You can also restore the original file as follows:
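And restoring it is a straightforward copy back (paths as assumed above):

```bash
# Put the original configuration back in place
sudo cp /tmp/sshd_config /etc/ssh/sshd_config
```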

The DPKG software manager

The Debian distribution doesn't use the RPM format; instead, it uses the DEB format invented in 1995. The format is in use on all Debian and Ubuntu-based distributions.

A DEB package contains:

  • A file, debian-binary, with the version of the package
  • An archive file, control.tar, with metadata (package name, version, dependencies, and maintainer)
  • An archive file, data.tar, containing the actual software

Management of DEB packages can be done with the dpkg utility. Like rpm, the utility is not in use any longer to install software. Instead, the more advanced apt command is used. All the metadata goes into a database, which can be queried with dpkg or dpkg-query.

The important parameters of dpkg-query are as follows:

-l Lists all the packages without parameters, but you can use wildcards, for example, dpkg -l *ssh*
-L <package> Lists files in an installed package
-p <package> Shows information about the package
-s <package> Shows the state of the package

The first column from the output of dpkg -l also shows a status as follows:
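For example (the package version shown here is purely illustrative, and the header lines of the real output are trimmed):

```bash
$ dpkg -l openssh-server
ii  openssh-server  1:7.6p1-4ubuntu0.3  amd64  secure shell (SSH) server
```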

The first character in the first column is the desired action, the second is the actual state of the package, and a possible third character indicates an error flag (R). ii means that the package is installed.

The possible desired states are as follows:

  • (u) unknown
  • (h) hold
  • (r) remove
  • (p) purge

The important package states are as follows:

  • n(ot) installed
  • H(a)lf installed
  • Hal(F) configured

Software management with YUM

Yellowdog Updater, Modified (YUM) is a modern software management tool that was introduced by Red Hat in Enterprise Linux version 5, replacing the up2date utility. It is currently in use in all Red Hat-based distributions but will be replaced with dnf, which is already used by Fedora. The good news is that dnf is syntax-compatible with yum.

Yum is responsible for:

  • Installing software, including dependencies
  • Updating software
  • Removing software
  • Listing and searching for software

The important basic parameters are as follows:

Command Description 
yum search Search for software based on package name/summary
yum provides Search for software based on a filename in a package
yum install Install software
yum info Information and status
yum update Update all software
yum remove Remove software

You can also install patterns of software, for instance, the pattern or group File and Print Server is a convenient way to install the NFS and Samba file servers together with the Cups print server:

Command Description
yum groups list List the available groups.
yum groups install Install a group.
yum groups info Information about a group, including the group names that are in use by the Anaconda installer. This information is important for unattended installations.
yum groups update Update software within a group.
yum groups remove Remove the installed group.
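For instance, installing the group mentioned above could look like this (the group name must match what yum itself lists on your system):

```bash
# List the available groups/environments first
sudo yum groups list

# Install the "File and Print Server" group referenced above
sudo yum groups install "File and Print Server"
```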

Another nice feature of yum is working with history:

Command Description
yum history list List the tasks executed by yum
yum history info <number> List the content of a specific task
yum history undo <number> Undo the task; a redo is also available
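A typical history session could look like this (transaction number 7 is just an example):

```bash
sudo yum history list      # show recent transactions
sudo yum history info 7    # what exactly happened in transaction 7
sudo yum history undo 7    # roll that transaction back
```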

The yum command uses repositories to be able to do all the software management. To list the currently configured repositories, use:
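The command behind the missing screenshot is most likely:

```bash
# List enabled repositories (add "all" to include disabled ones)
yum repolist
```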

To add another repository, you'll need the yum-config-manager tool, which creates and modifies the configuration files in /etc/yum.repos.d. For instance, if you want to add a repository to install Microsoft SQL Server, use the following:
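Probably along these lines - the repository URL points at Microsoft's public SQL Server repo for RHEL 7 and is given here as an illustration only (check Microsoft's documentation for the current one):

```bash
# yum-config-manager ships in the yum-utils package
sudo yum install yum-utils

# Add the Microsoft SQL Server repository (URL is illustrative)
sudo yum-config-manager --add-repo https://packages.microsoft.com/config/rhel/7/mssql-server-2017.repo
```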

The yum functionality can be extended with plugins, for instance, to select the fastest mirror, enabling the filesystem / LVM snapshots and running yum as a scheduled task (cron).

Software management with Zypp

SUSE, like Red Hat, uses RPM for package management. But instead of using yum, they use another toolset with Zypp (also known as libZypp) as backend. Software management can be done with the graphical configuration software YaST or the command-line interface tool Zypper. The important basic parameters are as follows:

Command Description
zypper search Search for software
zypper install Install software
zypper remove Remove software
zypper update Update software
zypper dist-upgrade Perform a distribution upgrade
zypper info Show information

There is a search option to search for a command, what-provides, but it's very limited. If you don't know the package name, there is a utility called cnf instead. Before you can use cnf, you'll need to install scout; this way, the package properties can be searched:
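Installing it is a one-liner (package name as given in the text):

```bash
sudo zypper install scout
```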

After this, you can use cnf:
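For example, to find out which package provides a command that isn't installed yet (the command name is just an example):

```bash
cnf netstat
```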

If you want to update your system to a new distribution version, you have to modify the repositories first. For instance, if you want to update from SUSE LEAP 42.3 to version 15.0, execute the following procedure (a consolidated command sketch follows the numbered steps):

  1. First, install the available updates for your current version:

  2. Update to the latest version in the 42.3.x releases:

  3. Modify the repository configuration:

  4. Initialize the new repositories:

  5. Install the new distribution:

  6. Now, reboot after the distribution upgrade.
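A consolidated sketch of those steps - repository paths vary per setup, and the sed line assumes the version numbers appear literally in the repo files, so treat this as an outline rather than a copy-paste upgrade script:

```bash
# 1-2. Bring the current 42.3 installation fully up to date
sudo zypper refresh
sudo zypper update

# 3. Point the repository configuration at 15.0 instead of 42.3
sudo sed -i 's/42\.3/15.0/g' /etc/zypp/repos.d/*.repo

# 4. Initialize (refresh) the new repositories
sudo zypper refresh

# 5. Install the new distribution
sudo zypper dist-upgrade

# 6. Reboot after the distribution upgrade
sudo reboot
```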

Besides installing packages, you can also install the following:

  • patterns: Groups of packages, for instance, to install a complete web server including PHP and MySQL (also known as LAMP)
  • patches: Incremental updates for a package
  • products: Installation of an additional product

To list the available patterns, use:
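The command behind the missing screenshot is presumably:

```bash
zypper patterns
```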

To install them, use:
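For example, the LAMP pattern mentioned above could be installed like this (the exact pattern name comes from the zypper patterns listing on your system):

```bash
# Install a pattern (group of packages)
sudo zypper install --type pattern lamp_server
```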

The same procedure applies to patches and products. Zypper uses online repositories; to view the currently configured repositories:
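```bash
zypper repos
```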

You can add repositories with the addrepo parameter, for instance, to add a community repository for the latest PowerShell version on LEAP 15.0:
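In general form - the actual community repository URL isn't reproduced in this copy, so the one below is purely a placeholder:

```bash
# zypper addrepo <URL> <alias>
sudo zypper addrepo https://example.com/path/to/powershell.repo powershell
```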

If you add a repository, you must always refresh the repositories:
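Which is simply:

```bash
sudo zypper refresh
```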

Software management with apt

In Debian/Ubuntu-based distributions, software management is done via the apt utility, which is a more recent replacement for the apt-get and apt-cache utilities.

The most-used commands include:

Command Description
apt list List packages
apt search Search in descriptions
apt install Install a package
apt show Show package details
apt remove Remove a package
apt update Update catalog of available packages
apt upgrade Upgrade the installed software
apt edit-sources Edit the repository configuration

Repositories are configured in /etc/apt/sources.list and files in the /etc/apt/sources.list.d/ directory. Alternatively, there is a command, apt-add-repository, available:
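For example, enabling the universe component on Ubuntu - just one possible use of the command:

```bash
sudo apt-add-repository universe
sudo apt update
```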

The apt repositories have the concept of release classes:

  • Old stable, tested in the previous version of a distribution
  • Stable
  • Testing
  • Unstable

They also have the concept of components:

  • Main: Tested and provided with support and updates
  • Contrib: Tested and provided with support and updates, but there are dependencies that are not in main, but for instance, in non-free
  • Non-free: Software that isn't compliant with the Debian Social Contract Guidelines (https://www.debian.org/social_contract#guidelines)

Ubuntu adds some extra components:

  • Universe: Community provided, no support, updates possible
  • Restricted: Proprietary device drivers
  • Multiverse: Software restricted by copyright or legal issues

If you found this article interesting, you can explore Frederik Vos’ Hands-On Linux Administration on Azure to administer Linux on Azure. Hands-On Linux Administration on Azure will help you efficiently run Linux-based workloads in Azure and make the most of the important tools required for deployment.

Cheers!