blog.kaniski.eu - I just wanna learn!

29 Jun 2020

Fixing things with… terraform destroy

I like Terraform, because it's so clean, fast and elegant; OK, I also suck at it, but hey - I'm trying! 🙂

The long story

Usually, Terraform and its providers are very good at doing things in the order they should be done. But sometimes people do come up with silly ideas, and mine was such (of course) - I've decided to rename something and it broke things. Just a little.

I have a simple lab in Azure, with a couple of virtual machines behind the Azure Load Balancer, no big deal. All this is being deployed (and redeployed) via Terraform, using the official azurerm provider. It's actually a Standard SKU Azure Load Balancer (don't ask why), with a single backend pool and a few probes and rules. Nothing special.

I've deployed this lab a few days ago (thanks to good people at Microsoft, I have some free credits to spare), everything worked just fine, but today I've got the wild idea - I've decided to rename my backend pool.

With all the automation in place, this shouldn't be a problem... one would think. 🙂

So, I've updated my code and, during the make it so phase (terraform apply), got some errors (truncated, with only the useful stuff):

Issue

After going through these errors, I've realized that my resources are indeed in the same region, but the existing rules reference the current backend pool by name and are actually blocking Terraform from renaming the backend pool.

There are a couple of options at this stage - you can destroy your deployment and run it again (as I normally would) and it should all be fine. Or you can try to fix only the dependent resources and make it work as part of the existing deployment.

With some spare time on my hands, I've tried the second option and it actually worked.

Resolution

Terraform has a nice option for destroying just some parts of the deployment.

If you look at the help for the terraform destroy command, you can see the -target option:

And if you run it to fix your issues, you'll get a nice red warning saying that this is only for exceptional situations (and they mean it!):

 

So... BE CAREFUL! (can't stress this enough!)

 

Anyhow, I've destroyed rules which were preventing my rename operation:
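For illustration, a targeted destroy of just those rules might look something like this (a minimal sketch - the resource addresses are hypothetical and depend on how the rules are named in your code):

    # destroy only the load balancer rules that reference the backend pool by name
    terraform destroy -target=azurerm_lb_rule.http -target=azurerm_lb_rule.https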

And then another terraform apply --auto-approve recreated everything that was needed, and finally - my backend pool got renamed:

Another idea I had was to taint the resources (terraform taint -help), which would probably be a lot nicer. Oh, well... maybe next time. 🙂

As things are constantly improving, it shouldn't be long until this is fixed (should it even be fixed?!). Until then, hope this will help you with similar issues!

Cheers!

1 May 2020

CRC – OpenShift 4 cluster in your pocket

... but only if you have large pockets, that is! 🙂

I suppose that, by now, you've already heard of minishift - a tool which helps you run a single-node OpenShift 3.x cluster inside a virtual machine (on your preferred hypervisor), which is extremely convenient if you need to do some local development, testing, etc. Of course, you can always deploy what you need on top of Azure or any other cloud provider, but having an OpenShift cluster locally can have its benefits.

And what if you need, let's say, an OpenShift 4.x cluster "to go"?

Don't worry, there is a solution for you as well! It's called CodeReady Containers (CRC) (version 1.9 at the time of writing), and it's basically a single-node OpenShift 4.x cluster inside a virtual machine (sounds a lot like minishift, doesn't it?).

So, how can you get your hands on CRC and make it work?

There are a couple of steps involved:

  • download the OpenShift tools (as I'm using Windows, I'll use this one):

  • unzip all to a location you like (mine is C:\PocketOpenShift):

  • (optional) add this location to PATH for easier usage (as already mentioned, I'm using Windows & PowerShell):

  • run "crc setup" which will prepare the environment:

  • run "crc start" (with one or more options, if needed - as you can see, I'll be using custom number of vCPUs, amount of RAM and a custom nameserver):

  • at any time, you can check the status of your cluster by using "crc status" command:

  • once it is up, you can use "oc login" or console (bring it up with "crc console") to connect to it, either as an (kube)admin or a developer, and continue working as you would normally with any other OpenShift cluster:

  • one other thing I like to do is to enable monitoring and other disabled stuff (note though - your VM should have 12+ GB RAM) - you can do it with two commands: the first one lists everything that is disabled, and the second one, with the index at the end, enables it (also note that, at the time of writing, the official documentation has the double and single quotes switched, which matters if you're working in PowerShell):

  • monitoring should now be working as well:
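To put the above into one place, here is a rough command sketch (all values are examples - adjust the CPUs, memory and nameserver to your environment, and note that the kubeadmin password is generated for you during "crc start"):

    crc setup                                           # prepares the hypervisor, networking, etc.
    crc start --cpus 4 --memory 16384 --nameserver 1.1.1.1
    crc status                                          # check the state of the cluster VM
    crc console --credentials                           # prints the kubeadmin and developer login details
    crc console                                         # opens the web console in your default browser
    oc login -u developer -p developer https://api.crc.testing:6443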

And that's it - you're ready to work on your own "pocket OpenShift cluster"! 🙂

Of course, don't forget that there is also the official documentation, including the Getting Started guide and Release Notes and Known Issues document. So... take a look!

Cheers!

27 Nov 2019

Backing up Office 365 to S3 storage (Exoscale SOS) with Veeam

Are you backing up your Office 365? And... why not? 🙂

I'm not going into the lengthy and exhausting discussion of why you should take care of your data, even if it's stored in something unbreakable like "the cloud", at least not in this post. I would like to focus on one of the features of the new Veeam Backup for Office 365 v4, which was released just the other day. This feature is "object storage support", as you may have guessed already from the title of this fine post!

So, this means that you can take Amazon S3, Microsoft Azure Blob Storage or even IBM Cloud Object Storage and use it for your Veeam Backup for Office 365. And even better - you can use any S3-compatible storage to do the same! How cool is that?!

To test this, I decided to use the Exoscale SOS (also S3-compatible) storage for backups of my personal Office 365 via Veeam Backup for Office 365.

I've created a small environment to support this test (and later production, if it works as it should) and basically done the following:

  • created a standard Windows Server 2019 VM on top of Microsoft Azure, to hold my Veeam Backup for Office 365 installation
    (good people at Microsoft provided me Azure credits, so... why not?!)
  • downloaded Veeam Backup for Office 365
    (good people at Veeam provided me NFR license for it, so I've used it instead of Community Edition)
  • created an Exoscale SOS bucket for my backups
    (good people at Exoscale/A1TAG/A1.digital/A1HR provided me credits, so... why not?!)
  • installed Veeam Backup for Office 365
    (it's a "Next-Next-Finish" type of installation, hard to get it wrong)
  • configured Veeam Backup for Office 365 (not so hard, if you know what you are doing and you've read the official docs)
    • added a new Object Storage Repository
    • added a new Backup Repository which offloads the backup data to the previously created Object Storage Repository
    • configured a custom AAD app (with the right permissions)
    • added a new Office 365 organization with AAD app and Global Admin account credentials (docs)
    • created a backup job for this Office 365 organization
    • started backing it all up

Now, a few tips on the "configuration part":

  • Microsoft Azure:
    • no real prerequisites and tips here - a simple Windows VM, on which I'm installing the downloaded software (there is a list of system requirements if you want to make sure it's all "by the book")
  • Exoscale:
    • creating the Exoscale SOS bucket is relatively easy, once you have your account (you can request a trial here) - you choose the bucket name and zone in which data will be stored and... voilà:

    • if you need to make adjustments to the ACL of the bucket, you can (quick ACL with private setting is just fine for this one):

    • to access your bucket from Veeam, you'll need your API keys, which you can find in the Account - Profile - API keys section:

    • one other thing you'll need from this section is the Storage API Endpoint, which depends on the zone you've created your bucket in (mine was created inside AT-VIE-1 zone, so my endpoint is https://sos-at-vie-1.exo.io):

  • Office 365:
    • note: I'm using the Modern authentication option because of MFA on my tenant and... it's the right way to do it!
    • for this, I created a custom application in Azure Active Directory (AAD) (under App registrations - New registration) (take a note of the Application (client) ID, as you will need it when configuring Veeam):

    • I've added a secret (which you should also take a note of, because you'll need it later) to this app:

    • then, I've added the minimal required API permissions to this app (as per the official docs) - but note that the official docs have an error (at this time), which I reported to Veeam - you'll need the SharePoint Online API access permissions even if you don't use certificate-based authentication(!) - so, the permissions which work for me are:

    • UPDATE: Got word back from Veeam development - the additional SharePoint permissions may not be necessary after all; maybe I just needed to wait a bit longer... will retry next time without those permissions. 🙂
    • after that, I've enabled the "legacy authentication protocols", which is still a requirement (you can do it in the Office 365 admin center - SharePoint admin center - Access Control - Apps that don't use modern authentication - Allow access, or via the PowerShell command "Set-SPOTenant -LegacyAuthProtocolsEnabled $True" - a short PowerShell sketch follows this list):

    • lastly, I've created an app password for my (global admin) account (which will also be required for Veeam configuration):

  • Veeam Backup for Office 365:
    • add a new Object Storage Repository:

    • add a new Backup Repository (connected to the created Object Storage Repository; this local repository will only store metadata - backup data will be offloaded to the object storage and can be encrypted, if needed):

    • add a new Office 365 organization:

    • create a backup job:

    • start backing up your Office 365 data:
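As referenced in the list above, the "legacy authentication protocols" step can also be done purely from PowerShell - a minimal sketch, assuming the SharePoint Online Management Shell module is installed and "contoso" is your tenant name:

    # connect to the SharePoint Online admin endpoint (tenant name is an example)
    Connect-SPOService -Url https://contoso-admin.sharepoint.com
    # allow apps that don't use modern authentication
    Set-SPOTenant -LegacyAuthProtocolsEnabled $True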

Any questions/difficulties with your setup?
Leave them in the comments section, I'll be happy to help (if I can).

Cheers!

15 Mar 2019

Creating some virtual machines in Azure with PowerShell

The other day I was creating some Linux virtual machines (I know, I know...) and, with Azure being my preferred hosting platform, I've decided to create these machines by using a simple PowerShell script. Not because I'm so good at PowerShell, but because I like it... and sometimes I really don't like clicking through the wizard to create multiple machines.

I wanted to create multiple machines with ease, each with "static" IP address from the provided subnet, accessible via the Internet (SSH, HTTP) and running the latest Ubuntu Linux, of course.

So, I was browsing through the official documentation (a.k.a. docs.microsoft.com, more specifically https://docs.microsoft.com/en-us/azure/virtual-machines/linux/quick-create-powershell), and I've come up with this (my version of the official docs):
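Since the script itself didn't survive as a screenshot, here is a rough sketch of the idea with the Az PowerShell module (all names, sizes and address ranges are examples - this is not the exact script from the post):

    # basics: resource group, credentials, network (values are examples)
    Connect-AzAccount
    $rg   = New-AzResourceGroup -Name "linux-lab-rg" -Location "westeurope"
    $cred = Get-Credential -Message "Local admin account for the VMs"

    $subnet = New-AzVirtualNetworkSubnetConfig -Name "default" -AddressPrefix "10.0.0.0/24"
    $vnet   = New-AzVirtualNetwork -Name "linux-lab-vnet" -ResourceGroupName $rg.ResourceGroupName `
                -Location $rg.Location -AddressPrefix "10.0.0.0/16" -Subnet $subnet

    # NSG allowing SSH and HTTP from the Internet
    $ssh  = New-AzNetworkSecurityRuleConfig -Name "Allow-SSH" -Protocol Tcp -Direction Inbound -Priority 1000 `
              -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange 22 -Access Allow
    $http = New-AzNetworkSecurityRuleConfig -Name "Allow-HTTP" -Protocol Tcp -Direction Inbound -Priority 1010 `
              -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange 80 -Access Allow
    $nsg  = New-AzNetworkSecurityGroup -Name "linux-lab-nsg" -ResourceGroupName $rg.ResourceGroupName `
              -Location $rg.Location -SecurityRules $ssh, $http

    # a few Ubuntu VMs, each with a "static" private IP from the subnet and its own public IP
    1..3 | ForEach-Object {
        $name = "ubuntu-vm-$_"
        $pip  = New-AzPublicIpAddress -Name "$name-pip" -ResourceGroupName $rg.ResourceGroupName `
                  -Location $rg.Location -AllocationMethod Static
        $nic  = New-AzNetworkInterface -Name "$name-nic" -ResourceGroupName $rg.ResourceGroupName -Location $rg.Location `
                  -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id -NetworkSecurityGroupId $nsg.Id `
                  -PrivateIpAddress "10.0.0.1$_"
        $vm   = New-AzVMConfig -VMName $name -VMSize "Standard_B1s" |
                  Set-AzVMOperatingSystem -Linux -ComputerName $name -Credential $cred |
                  Set-AzVMSourceImage -PublisherName "Canonical" -Offer "UbuntuServer" -Skus "18.04-LTS" -Version "latest" |
                  Add-AzVMNetworkInterface -Id $nic.Id
        New-AzVM -ResourceGroupName $rg.ResourceGroupName -Location $rg.Location -VM $vm
    }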

If this helps you with a similar task - you're welcome.

Cheers!

17 Dec 2018

Renewing the expired Office Online/Web Apps Server farm certificate

Certificates sometimes expire... it happens! 🙂

But what happens if the certificate for your Office Online Server (OOS) or Office Web Apps Server (OWAS) farm expires and your farm is not available anymore?

Obviously, the OOS farm and its Skype for Business, Exchange & SharePoint integration stop working. The next thing to do will be to renew the expired certificate.

But how?

My MVP colleague Andi Krüger did a nice blog post on updating the farm certificate, and it's fairly simple - Set-OfficeWebAppsFarm -CertificateName "RenewedOOSInternalCertificate" should do the trick... if your farm is running.

If things got out of hand and your farm is not running anymore, and you cannot use the Set-OfficeWebAppsFarm cmdlet (you'll see that the Office Online (WACSM) service is Stopped, cannot be brought back up with the expired certificate, and your machine reports that it's no longer part of the farm), you'll need to take a different approach, because you'll be getting errors when running the above-mentioned command (like "It does not appear this machine is part of an Office Online Server farm." or similar).

WACSM service is Stopped and your machine is showing that it's no longer part of the farm

One of the possible solutions would be (a rough PowerShell sketch follows the list):

  • make a note of the Friendly Name of your old (expired) certificate (MMC or PowerShell) (in my case it's called "OOSInternalCertificate")
  • remove the expired certificate
  • renew/request/install the new certificate
  • change the Friendly Name of a new certificate to match the previous one
  • start the Office Online (WACSM) service or restart the machine
  • (copy the certificate/do the procedure on other farm members, if needed)
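As referenced above, the certificate part could look something like this in PowerShell (the subject filter and friendly name are examples - use your own values, and run this elevated):

    # find the new (valid) certificate and give it the friendly name the farm expects
    $cert = Get-ChildItem Cert:\LocalMachine\My |
              Where-Object { $_.Subject -like "*oos.myfarm.local*" -and $_.NotAfter -gt (Get-Date) } |
              Select-Object -First 1
    $cert.FriendlyName = "OOSInternalCertificate"

    # start the Office Online service (or simply reboot the machine) and check the farm
    Start-Service WACSM
    Get-OfficeWebAppsFarm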

Everything is back to normal

Your farm operations should now be restored and you can run the Get-OfficeWebAppsFarm cmdlet normally:

Or you can open up the farm's discovery URL - if it's rendering again, everything should be OK (in my case "https://oos.myfarm.local/hosting/discovery"):

Even the discovery works

Cheers!

12 Dec 2018

Counters missing when machines accessed remotely

Not so long ago, we observed an issue with remotely accessing the PhysicalDisk counters on several machines, more specifically - there were none. 🙂

To be clear - if you opened up the Performance Monitor (perfmon.exe) on the affected machine, you could see all the counters, including the PhysicalDisk counters. But if you opened up the Performance Monitor on a different machine and tried to access the PhysicalDisk counters of the first machine over the network, they weren't shown anymore... while others (like CPU and Memory) were still there and could be used!

Counters shown normally on local computer and in local Performance Monitor

The same counters not visible from remote machine's Performance Monitor

So... why? 🙂

At first, we thought that our monitoring software went berserk, but no - the PhysicalDisk counters on a remote machine were missing even when we were using the built-in Performance Monitor tool.

Next - maybe it's something on the network? Of course, the network is never the issue, but still... (it wasn't the issue here either, because other counters worked without any problems)

Next, we thought it was related to the version of Windows we were accessing from, or the version at the destination - as it turned out, too many different versions were affected for that theory to hold, so... no.

One thing we're not sure about is whether it was caused by some of the "not so recent" security patches.

As we found a solution for our issue, what exactly caused it in the first place is not so important right now... The solution is simple - you need to run a single command to re-register the system performance libraries with WMI (winmgmt /resyncperf) and then reboot the affected machine.

So, the commands you need are:
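Something like this (run from an elevated PowerShell prompt):

    winmgmt /resyncperf
    Restart-Computer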

After that, we can access all the needed counters (PhysicalDisk) remotely again:

Counters shown normally from remote computer and in local Performance Monitor

Cheers!

P.S. Don't forget to reboot the affected machine! 🙂

5 Dec 2018

Software Management in Linux (Packt)

Learn software management with advanced Linux administration in this tutorial by Frederik Vos, a Linux trainer and evangelist and a senior technical trainer of virtualization technologies, such as Citrix XenServer and VMware vSphere.

-- post by Frederik Vos, provided by Packt --

Software management

In the old days, installing software was a matter of extracting an archive to a filesystem. There were several problems with this approach:

  • It was difficult to remove the software if the files were copied into directories that were also used by other software
  • It was difficult to upgrade software, maybe because the files were still in use or were renamed
  • It was difficult to handle shared libraries

That's why Linux distributions invented software managers.

The RPM software manager

In 1997, Red Hat released the first version of their package manager, RPM. Other distributions such as SUSE adopted this package manager. RPM is the name of the rpm utility, as well as the name of the format and the filename extension.

The RPM package contains the following:

  • A CPIO archive
  • Metadata with information about the software, such as a description and dependencies
  • Scriptlets for pre and post-installation scripts

In the past, Linux administrators used the rpm utility to install/update and remove software on a Linux system. If there was a dependency, the rpm command was able to tell exactly which other packages you needed to install. However, the rpm utility couldn’t fix the dependencies or possible conflicts between packages.

Nowadays, the rpm utility isn’t used any longer to install or remove software; instead, you use more advanced software installers. After the installation of software with yum (Red Hat/CentOS) or zypper (SUSE), all the metadata goes into a database. Querying this rpm database with the rpm command can be very handy.

A list of the most common rpm query parameters is as follows:

Parameter Description
-qa List all the installed packages.
-qi <software> List information.
-qc <software> List the installed configuration files.
-qd <software> List the installed documentation and examples.
-ql <software> List all the installed files.
-qf <filename> Shows the package that installed this file.
-V <software> Verifies the integrity/changes after the installation of a package; use -Va to do it for all installed software.
-qp Use this parameter together with other parameters if the package is not already installed. It's especially useful if you combine this parameter with --scripts to investigate the pre- and post-installation scripts in the package.

The following screenshot is an example of getting information about the installed SSH server package:
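The commands behind that example would be something along these lines (the package name is an assumption and differs per distribution):

    rpm -qi openssh-server    # information about the installed package
    rpm -V openssh-server     # verify the package against the rpm database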

The output of the -V parameter indicates that the modification time has changed since the installation. Now, make another change in the sshd_config file:

If you verify the installed package again, there is an S added to the output, indicating that the file size is different, and a T, indicating that the modification time has changed:

Other possible characters in the output are as follows:

S File size
M Mode (permissions)
5 Checksum
D Major/minor on devices
L Readlink mismatch
U User ownership
G Group ownership
T Modification time
P Capabilities

For text files, the diff command can help show the differences between the backup in the /tmp directory and the configuration in /etc/ssh:

You can also restore the original file as follows:
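Assuming a backup copy of sshd_config was made in /tmp beforehand (the file names are examples), the two steps could look like this:

    # compare the backup with the live configuration
    diff /tmp/sshd_config /etc/ssh/sshd_config

    # restore the original file from the backup
    sudo cp /tmp/sshd_config /etc/ssh/sshd_config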

The DPKG software manager

The Debian distribution doesn't use the RPM format; instead, it uses the DEB format invented in 1995. The format is in use on all Debian and Ubuntu-based distributions.

A DEB package contains:

  • A file, debian-binary, with the version of the package
  • An archive file, control.tar, with metadata (package name, version, dependencies, and maintainer)
  • An archive file, data.tar, containing the actual software

Management of DEB packages can be done with the dpkg utility. Like rpm, the utility is not in use any longer to install software. Instead, the more advanced apt command is used. All the metadata goes into a database, which can be queried with dpkg or dpkg-query.

The important parameters of dpkg-query are as follows:

-l Lists all the packages without parameters, but you can use wildcards, for example, dpkg -l *ssh*
-L <package> Lists files in an installed package
-p <package> Shows information about the package
-s <package> Shows the state of the package

The first column from the output of dpkg -l also shows a status as follows:

The first character in the first column is the desired action, the second is the actual state of the package, and a possible third character indicates an error flag (R). ii means that the package is installed.

The possible desired states are as follows:

  • (u) unknown
  • (h) hold
  • (r) remove
  • (p) purge

The important package states are as follows:

  • (n) not installed
  • (h) half-installed
  • (f) half-configured

Software management with YUM

Yellowdog Updater, Modified (YUM) is a modern software management tool that was introduced by Red Hat in Enterprise Linux version 5, replacing the up2date utility. It is currently in use in all Red Hat-based distributions, but will be replaced with dnf, which is already used by Fedora. The good news is that dnf is syntax-compatible with yum.

Yum is responsible for:

  • Installing software, including dependencies
  • Updating software
  • Removing software
  • Listing and searching for software

The important basic parameters are as follows:

Command Description 
yum search Search for software based on package name/summary
yum provides Search for software based on a filename in a package
yum install Install software
yum info Information and status
yum update Update all software
yum remove Remove software

You can also install patterns of software, for instance, the pattern or group File and Print Server is a convenient way to install the NFS and Samba file servers together with the Cups print server:

Command Description
yum groups list List the available groups.
yum groups install Install a group.
yum groups info Information about a group, including the group names that are in use by the Anaconda installer. This information is important for unattended installations.
yum groups update Update software within a group.
yum groups remove Remove the installed group.

Another nice feature of yum is working with history:

Command Description
yum history list List the tasks executed by yum
yum history info <number> List the content of a specific task
yum history undo <number> Undo the task; a redo is also available

The yum command uses repositories to be able to do all the software management. To list the currently configured repositories, use:
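That would most likely be the repolist subcommand:

    yum repolist        # enabled repositories
    yum repolist all    # include disabled repositories as well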

To add another repository, you'll need the yum-config-manager tool, which creates and modifies the configuration files in /etc/yum.repos.d. For instance, if you want to add a repository to install Microsoft SQL Server, use the following:
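A hedged example (the repository URL is a placeholder - take the real one from Microsoft's SQL Server installation docs):

    sudo yum install yum-utils    # provides yum-config-manager
    sudo yum-config-manager --add-repo https://packages.microsoft.com/config/rhel/7/mssql-server-2019.repo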

The yum functionality can be extended with plugins, for instance, to select the fastest mirror, enabling the filesystem / LVM snapshots and running yum as a scheduled task (cron).

Software management with Zypp

SUSE, like Red Hat, uses RPM for package management. But instead of using yum, they use another toolset with Zypp (also known as libZypp) as backend. Software management can be done with the graphical configuration software YaST or the command-line interface tool Zypper. The important basic parameters are as follows:

Command Description
zypper search Search for software
zypper install Install software
zypper remove Remove software
zypper update Update software
zypper dist-upgrade Perform a distribution upgrade
zypper info Show information

There is a search option to search for a command, what-provides, but it's very limited. If you don't know the package name, there is a utility called cnf instead. Before you can use cnf, you'll need to install scout; this way, the package properties can be searched:

After this, you can use cnf:
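For example (the command looked up with cnf is just an example):

    sudo zypper install scout    # provides the cnf utility
    cnf htop                     # suggests which package provides the htop command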

If you want to update your system to a new distribution version, you have to modify the repositories first. For instance, if you want to update from SUSE LEAP 42.3 to version 15.0, execute the following procedure (a rough command sketch follows the list):

  1. First, install the available updates for your current version:

  2. Update to the latest version in the 42.3.x releases:

  3. Modify the repository configuration:

  4. Initialize the new repositories:

  5. Install the new distribution:

  6. Now, reboot after the distribution upgrade.
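As referenced above, the whole procedure roughly looks like this (a simplified sketch of the generic openSUSE Leap upgrade path - check the official upgrade notes before running it on a real system; the sed one-liner assumes the version number appears in your repository URLs):

    # steps 1-2: install all updates for the current 42.3 release
    sudo zypper refresh && sudo zypper update

    # step 3: point the repositories from 42.3 to 15.0
    sudo sed -i 's/42\.3/15.0/g' /etc/zypp/repos.d/*.repo

    # step 4: initialize the new repositories
    sudo zypper --gpg-auto-import-keys refresh

    # steps 5-6: install the new distribution, then reboot
    sudo zypper dist-upgrade
    sudo reboot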

Besides installing packages, you can also install the following:

  • patterns: Groups of packages, for instance, to install a complete web server including PHP and MySQL (also known as a LAMP stack)
  • patches: Incremental updates for a package
  • products: Installation of an additional product

To list the available patterns, use:

To install them, use:
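For example (the pattern name is an example):

    zypper search --type pattern                      # list the available patterns (or: zypper patterns)
    sudo zypper install --type pattern lamp_server    # install the LAMP pattern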

The same procedure applies to patches and products. Zypper uses online repositories; to view the currently configured repositories, use:
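That is:

    zypper repos    # short form: zypper lr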

You can add repositories with the addrepo parameter, for instance, to add a community repository for the latest PowerShell version on LEAP 15.0:

If you add a repository, you must always refresh the repositories:
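A hedged example (the repository URL and alias are placeholders - take the real ones from the repository's documentation):

    sudo zypper addrepo <repository-URL> powershell
    sudo zypper refresh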

Software management with apt

In Debian/Ubuntu-based distributions, software management is done via the apt utility, which is a recent replacement for the apt-get and apt-cache utilities.

The most-used commands include:

Command Description
apt list List packages
apt search Search in descriptions
apt install Install a package
apt show Show package details
apt remove Remove a package
apt update Update catalog of available packages
apt upgrade Upgrade the installed software
apt edit-sources Edit the repository configuration

Repositories are configured in /etc/apt/sources.list and files in the /etc/apt/sources.list.d/ directory. Alternatively, there is a command, apt-add-repository, available:
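For instance (the PPA name is just an example):

    sudo apt-add-repository ppa:ansible/ansible
    sudo apt update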

The apt repositories have the concept of release classes:

  • Old stable, tested in the previous version of a distribution
  • Stable
  • Testing
  • Unstable

They also have the concept of components:

  • Main: Tested and provided with support and updates
  • Contrib: Tested and provided with support and updates, but there are dependencies that are not in main, but for instance, in non-free
  • Non-free: Software that isn't compliant with the Debian Social Contract Guidelines (https://www.debian.org/social_contract#guidelines)

Ubuntu adds some extra components:

  • Universe: Community provided, no support, updates possible
  • Restricted: Proprietary device drivers
  • Multiverse: Software restricted by copyright or legal issues

If you found this article interesting, you can explore Frederik Vos’ Hands-On Linux Administration on Azure to administer Linux on Azure. Hands-On Linux Administration on Azure will help you efficiently run Linux-based workloads in Azure and make the most of the important tools required for deployment.

Cheers!

22 Nov 2018

How to Work with Aggregate Functions in Cosmos DB SQL (Packt)

Learn how to work with aggregate functions in this article by Gastón C. Hillar, an independent consultant, a freelance author, and a speaker who has been working with computers since he was 8 years old, and Daron Yöndem, a Microsoft Regional Director and a Microsoft MVP for 11 years.

-- post by Gastón C. Hillar and Daron Yöndem, provided by Packt --

Working with aggregate functions

Cosmos DB SQL provides support for aggregations in the SELECT clause. For example, the following query will use the SUM aggregate function to sum all the values in the expression and calculate the total number of levels in the filtered games. The query uses the ARRAY_LENGTH built-in function to calculate the length of the levels array for each game and use it as an argument for the SUM aggregate function.

The code file for this article can be found at https://bit.ly/2FB9DDg. The sample is included in the sql_queries/videogame_1_17.sql file:

The following lines show the results of the query. Notice that the element of the array includes a key named $1:

Whenever you use an expression in the SELECT clause that is not a property name and you don't specify the desired alias name, Cosmos DB generates a key that starts with the $ prefix and continues with a number that starts in 1. Hence, if you have three expressions that aren't property names and don't include their desired aliases, Cosmos DB will use $1, $2 and $3 for the properties in the output results.

If you only want to generate the value without a key in the result, you can use the VALUE keyword. The following query uses this keyword. The code file for the sample is included in sql_queries/videogame_1_18.sql file:

The following lines show the results of the query. Notice that the element of the array doesn't include the key:

It is also possible to achieve the same goal using the COUNT aggregate function combined with the IN keyword. The following query uses the COUNT aggregate function to count the number of items in the expression and calculate the total number of levels in the iterated levels for all the games. The code file for the sample is included in the sql_queries/videogame_1_19.sql file:

The following lines show the results of the query. Notice that the query specified the desired alias:

Now you want to calculate the average tower power for the levels defined in the video games. The towerPower property is not defined for all the levels and it is only available for the levels of the game whose id is equal to 1. Whenever you use the AVG aggregate function to calculate an average for an expression, only the documents that have the property will be a part of the average calculation. Hence, the levels that don't have the towerPower property won't generate an impact on the average.

The following query uses the AVG aggregate function combined with the IN keyword to iterate all the levels of the games that have the towerPower property and compute its average value. The code file for the sample is included in the sql_queries/videogame_1_20.sql file:

The following lines show the results of the query. Notice that the query specified the desired alias:

If you found this article interesting, you can explore Guide to NoSQL with Azure Cosmos DB to create scalable applications by taking advantage of NoSQL document databases on the cloud with .NET Core. Guide to NoSQL with Azure Cosmos DB will help build an application that works with a Cosmos DB NoSQL document database with C#, the .NET Core SDK, LINQ, and JSON.

Cheers!

28 Sep 2018

Creating a function in Azure (Packt)

Functions in a serverless architecture consist of logic that serves a single, well-defined purpose. They are executed using an ephemeral compute service and can be scaled automatically based on demand. Azure Functions is Microsoft’s solution for serverless functions.

-- post by Joseph Ingeno, provided by Packt --

In my book, Software Architect’s Handbook, I discuss serverless architecture and the use of functions. In this blog post, we will create a new function in Azure. Functions in Azure provide you with a choice of programming language (C#, JavaScript, Java, F#, with others coming in the future) and allow you to bring in dependencies from the NuGet and npm package managers. The runtime that powers Azure Functions can be found on GitHub.

There are different ways that a function for Azure can be created, such as using Visual Studio, Visual Studio Code, or the Azure command-line interface (Azure CLI). However, so that this demo doesn't require the installation of any tools, we will create a function through the Azure portal.

The following steps will be necessary to create a function in the Azure portal:

  • Logging in to the Azure portal
  • Creating a function app
  • Creating a function in the new function app
  • Testing the function
  • Customizing the function
  • Cleaning up resources

 

Logging in to the Azure portal

If you do not already have one, the first step is to create a free Azure account by navigating to https://azure.microsoft.com. Each account receives some free credit to allow you to try out Azure services.

Once you have an account, you can log in to the Azure portal by going to http://portal.azure.com.

 

Creating a function app

A function app in Azure is a container that hosts the execution of individual functions. Before we create a function, we must create a function app that will host it.

  1. In the top left corner, select the Create a resource option, followed by Compute.

  2. Select Function App.

  3. Create a function app by providing values for the various settings:
    • App Name: A unique name for the function app
    • Subscription: The subscription under which the function app will be created
    • Resource Group: A resource group is a container that holds resources related to the Azure solution. Create a new resource group or use an existing one for the function
    • Hosting Plan: The hosting plan for the function app; the Consumption plan is charged on a pay-per-execution basis and dynamically allocates resources based on the app's load, while the App Service plan lets you define a capacity allocation with predictable costs and scale.
    • Location: The region where your function app will execute; Select one near you or near other services that your function will need to access.
    • Storage: An Azure storage account is used to store and access your Azure Storage data objects. Create a new storage account or select an existing one to be used by the function

  4. Click the Create button to deploy the new function app. Once the application has been deployed, you will receive a notification.

 

Creating a function

Once your new function app has been created, we can create a function to be used in the container. We’ll be creating an HTTP triggered function that can be executed via an HTTP call.

  1. If Function Apps is already under your Favorites, simply select it.

  1. If it isn’t already in your favorites, select All services. Find Function Apps and click the star next to it to make it one of your favorites. After doing so, select Function Apps.

  3. You should see the function app that you created previously. Click the Create New icon next to Functions under your function app in order to create a new function.

  4. To create a new function, we can either select a premade one or create one on our own. In this case, we will use one of the premade ones.

Select Webhook + API and select one of the languages. In this example, I have elected to use C#. Click the Create this function button when you are finished, which will create the function.

If C# was selected as the language, the code for the premade function is as follows:

 

Testing the function

Now that the function has been created, we can test it to confirm that it works properly.

  1. Your function has now been created and should be visible in the Functions section underneath your function app.

  2. With your function selected, click the Get function URL link to get the URL for your function.

  3. The previous step will cause the Get function URL dialog box to be displayed. Select default (Function key) for the Key drop-down, and then click Copy to copy the URL.

The URL contains a key that is required, by default, to access the function over HTTP.

  4. Paste the URL into a browser. As we saw when looking at the code, there is a query string parameter for a name. Add &name=<yourname> to the end of the URL, replacing <yourname> with your actual name. Press the Enter key to execute the request.

If the name that you passed was "Joe," depending on the browser that you used, you would either see the string "Hello Joe" or you would see it in XML format.

Trace information is written to logs during function execution. With the focus being placed on your function in the Azure portal, the contents of the logs can be viewed at the bottom of the page.

 

Customizing the function

You may want to customize the function that you just created in terms of things like the HTTP methods that are supported by the function and the routing template. Under your specific function, select the Integrate option.

There are various options for the HTTP trigger that can be customized.

Allowed HTTP methods

You can configure your function to support all HTTP methods (in which case the Selected HTTP methods section with its checkboxes is hidden) or selected HTTP methods. In the Selected HTTP methods section, you can choose which HTTP methods you want the function to support (GET, POST, DELETE, HEAD, PATCH, PUT, OPTIONS, and TRACE).

Authorization level

You can choose between the following authorization levels:

  • Function
  • Admin
  • Anonymous

With Function selected as the authorization level, an API key is required to access the function. This key was being passed as part of the URL:

If we set the authorization level to Anonymous, our function will be accessible without an API key. This makes the URL look as follows:

Route template

You can change the route that is used in order to invoke the function. For example, if you were to enter "/HelloTrigger" in the Route template textbox, instead of using the function name in the URL as is the case by default, the entered route template value will be used instead. The resulting URL is as follows:

The "api" part of the URL is part of the base path prefix and is handled by a global setting.

Cleaning up resources

Resource groups in Azure are containers that hold resources such as function apps, functions, and storage accounts. Deleting a resource group deletes everything that it contains. Once you are done with the demo, you may want to delete the resource group that was used so you are not billed for any resources.

  1. In the Azure portal, go to the relevant resource group page. One of the ways that this can be done is by selecting Resource groups and then selecting a resource group.

  2. An alternative approach is to select our function app as we had done previously and select its resource group from there.

  3. With the resource group in focus, select the Delete resource group option.

  4. Follow the instructions and click the Delete button.

It can take a few minutes, but eventually, the resource group and its contents will be deleted. A notification will appear to let you know when it is done.

Joseph Ingeno is a software architect who has designed and developed many different software applications. During his career, he has worked on projects for a number of different business domains, using a variety of technologies, programming languages, and frameworks. His book, Software Architect’s Handbook, was published by Packt and is now available. The Software Architect’s Handbook is a comprehensive guide to help developers, architects, and senior programmers advance their career in the software architecture domain.

Cheers!

23 Aug 2018

Creating, Debugging and Deploying an Azure Function (Packt)

You can learn how to create, debug, and deploy an Azure Function by reading this tutorial by Daniel Bass, a developer who develops complex backend systems entirely on Azure, making heavy use of event-driven Azure Functions and Azure Data Lake.

Serverless programming has been a buzzword in technology for a while now, first implemented for arbitrary code by Amazon on Amazon Web Services (AWS) in 2014. The term normally refers to snippets of backend code running in environments that are wholly managed by the cloud provider, totally invisible to developers. This approach has some astounding benefits, enabling an entirely new paradigm of computing architecture.

This article will focus on Microsoft's serverless product, Azure Functions. In this article, you’ll create an Azure Function in Visual Studio, debug it locally, and deploy it to an Azure cloud instance. You can refer to https://github.com/TrainingByPackt/Serverless-Architectures-with-Azure/tree/master/Lesson%201 to access the complete code for this article.

To develop Azure Functions for production, you need a computer running Windows and Visual Studio 2015 or later; however, the smoothest experience is present in Visual Studio 2017, version 15.4 or later. If your computer can run Visual Studio, it can handle the Azure Function development.

Creating Your First Function to Receive and Process Data from an HTTP Request

Before you begin, confirm that you have Visual Studio 2017 version 15.4 installed; if not, download and install it. Visual Studio 2017 has a comprehensive suite of Azure tools, including Azure Function development. To do so, perform the following steps:

1. Open the Visual Studio Installer, which will show you the version of Visual Studio that you have installed and allow you to select the Azure development workload and install it if it is missing; then update Visual Studio, if required, to the latest version:

2. Click on Modify, select the Azure development workload, and click on Modify again:

Now, you can create a new Azure Function as a part of your serverless architecture that listens to HTTP requests to a certain address as its trigger. Begin by implementing the following steps:

1. Create a new solution. The example is called BeginningAzureServerlessArchitecture, which is a logical wrapper for several functions that will get deployed to this namespace.

2. Use the Visual C# | Cloud | Azure Functions project template. Select the Empty trigger type and leave the default options, but set storage to None. This will create a Function App, which is a logical wrapper for several functions that will get deployed and scaled together:

3. You now have a solution with two files in it: host.json and local.settings.json. The local.settings.json file is used solely for local development, where it stores all details on connections to other Azure services.

It is important to note that when uploading something to a public repository, be very careful not to commit unencrypted connection settings - by default they will be unencrypted. host.json is the only file required to configure any functions running as a part of your Function App. This file can have settings that control the function timeout, security settings for every function, and a lot more.

4. Now, right-click on the project and select Add New Item. Once again, choose the Azure Function template:

5. On the next screen, select Http trigger with parameters, and set the Access rights to Anonymous. Right-click on your solution and select Enable NuGet Package Restore:

6. You will now have a C# file called PostTransactions.cs. It consists of a single method, Run, with an awful lot in the method signature: an attribute and an annotation. Some of this will be familiar to you if you are an experienced C# developer, and it is important to understand this signature.

Configuration as code is an important modern development practice. Rather than having servers reconfigured or configured manually by developers before code is deployed to them, configuration as code dictates that the entire configuration required to deploy an application to production is included in the source code.

This allows for variable replacement by your build/release agent, as you will (understandably) want slightly different settings, depending on your environment. Azure Functions implement this principle, with a configuration split between the host.json file for app-wide configurations and app settings, and the Run method signature for individual functions. Therefore, you can deploy an Azure Function to production with only the code that you find in the GitHub repository:

Outcome

You created an Azure Function, understood the roles the different files play, and learned about configuration as code.

The FunctionName annotation defines the name of the function within the Function App. This can be used for triggering your function, or it can be kept separate. The first parameter is an HttpRequestMessage object with an HttpTrigger attribute. This is what varies when you choose different triggers (for example, a timer trigger will have an object with a TimerTrigger attribute).

This attribute has several arguments. The first is the authorization level. Do you remember setting this when you created the function? It was called Access rights in the template. This defines the level of authorization that the function will demand of HTTP requests. The five levels are shown in the following table:

Authorization level Required information
Anonymous No key required; anyone with the path can call it an unlimited number of times.
User Need a valid token, generated by a user that has AD permission to trigger the Function App. Useful for high-security environments, where each service needs to manage its own security. Generally, token-based authentication is much more desirable than key-based.
Function Need the function key - a unique key created for each function in a Function App upon deployment. Any host key will also work. The most common form of authorization for basic deployments.
System Need the master key - a key at the Function App level (called a host key) that cannot be deleted but can be renewed.
Admin Need any host key.

One thing to bear in mind is that if you set a function to be high security and use System or Admin authorization, then any client that you give that key to will also be able to access any other functions in the Function App (if they can work out the path). Make sure that you separate high-security functions into different apps.

The next parameters are GET and POST, which define the HTTP verbs that will activate the function. Generally, from a microservices architecture point of view, you should only have one verb to prevent you from having to do bug-prone switching logic inside of the function. You can simply create four separate functions if you want GET,  POST,  PUT, and  DELETE on an artifact.

Finally, there is a string assigned to the Route property. This is the only bit of routing logic that the function itself can see, and it simply defines the subpath from the Function App. It accepts WebAPI syntax, which you can see in the curly braces, /{name}. This will assign any text that appears where the curly braces are to a parameter called name.

This completes the HttpTrigger object. The three parameters left in the method signature are an HttpRequestMessage object, which allows you to access the HttpRequestMessage that triggered the function; a string parameter called name, which is what the string in the curly braces in the path will get bound to; and a TraceWriter for logging.

The current logic of the Function App can be seen in the following example, and you should see that it will take whatever name is put into it and send back an HTTP response saying Hello to that name.

Debugging an Azure Function

You now have a working Azure Function that can be deployed to Azure or run locally. You’ll first host and debug the function locally, to show the development cycle in action.

Debug an Azure Function

In this section, you'll run an Azure Function locally and debug it. You can develop new functions and test the functionality before deploying to the public cloud. And to ensure that it happens correctly, you'll require the single function created directly from the HTTP trigger with the parameters template.

Currently, your machine does not have the correct runtime to run an Azure Function, so you need to download it:

1. Click on the Play button in Visual Studio, and a dialog box should ask you if you want to download Azure Functions Core Tools - click on Yes. A Windows CMD window will open, with the lightning bolt logo of Azure Functions. It will bootstrap the environment and attach the debugger from Visual Studio. It will then list the endpoints the Function App is listening on.

2. Open the Postman app and copy and paste the endpoint into it, selecting either a POST or GET verb. You should get the response Hello {name}. Try changing the {name} in the path to your name, and you will see a different response. You can download Postman at https://www.getpostman.com/.

3. Create a debug point in the Run method by clicking in the margin to the left of the code:

4. Use Postman to send the request.

5. You are now able to use the standard Visual Studio debugging features and inspect the different objects as shown in the following screenshot:

6. Set your verb to POST, and add a message in the payload. See if you can find the verb in the HttpRequestMessage object in debug mode. It should be in the method property.

Outcome

You have debugged an Azure Function and tested it using Postman. As you can see from running the function locally, you, the developer, do not need to write any of the usual boilerplate code for message handling or routing. You don't even need to use ASP.NET controllers, or set up middleware. The Azure Functions container handles absolutely everything, leaving your code to simply do the business logic.

Activity: Improving Your Function

In this activity, you’ll add a JSON payload to the request and write code to parse that message into a C# object.

Prerequisites

You’ll require a function created from the HTTP trigger with the parameters template.

Scenario

You are creating a personal finance application that allows users to add their own transactions, integrate with other applications, and perhaps allow their credit card to directly log transactions. It will be able to scale elastically to any number of users, saving money when you don't have any users.

Aim

Parse a JSON payload into a C# object, starting your RESTful API.

Steps for Completion

1. Change the Route to transactions.

2. Remove the get verb. Remove the string parameter called name:

3. Add the Newtonsoft.json package, if it isn't already present. You can do this by right-clicking on Solution | Manage NuGet packages | Browse | Newtonsoft.json.

4. Right-click on the project and add a folder called Models, and then add a C# class called Transaction. Add two properties to this class: a DateTime property called ExecutionTime, and a Decimal property called Amount:

5. Use DeserializeObject<Transaction>(message).Result() to de-serialize the HttpRequestMessage into an instantiation of this class. To do this, you need to import the Models namespace and Newtonsoft.json. This will parse the JSON payload and use the Amount property to fill the corresponding property on the Transaction object:

6. Change the return message to use a property of the new Transaction object, for example, You entered a transaction of £47.32!. Go to Postman and open the Body tab and select raw.

7. Enter the following JSON object:
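Something along these lines works (the property names match the Transaction class created above; the values are arbitrary):

    {
        "ExecutionTime": "2018-08-23T10:00:00",
        "Amount": 47.32
    }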

8. Run locally to test. Make sure that you change the endpoint to /transactions in Postman.

Outcome

You learned how to access the HttpRequestMessage, and you will have a function that can read a JSON message and turn it into a C# object. During this subtopic, you debugged an Azure Function. Visual Studio only allows this through downloading azure-functions-core-tools. Unfortunately, it doesn't make it available on the general command line - only through command windows started in Visual Studio. If you want to use it independently, then you have to download it using npm. If you need to download azure-functions-core-tools separately, you can use npm to get it - npm install -g azure-functions-core-tools for version 1 (fully supported) and npm install -g azure-functions-core-tools@core for version 2 (beta). You can then use the debug setup to set Visual Studio to call an external program with the command func host start when you click on the Debug button.

This package is a lot more than just a debug environment, however; it actually has a CLI for everything you could possibly need in Azure Function development. Open up a command window (in Visual Studio, if you haven't downloaded it independently) and type func help; you should see a full list of everything the CLI can do. Notable commands are func host start, which starts the local debug environment, and func azure functionapp fetch-app-settings {functionappname}, which lets you download the app settings of a function deployed to Azure so that you can test integration locally, as well. These need to be run in the same folder as the host.json file.

Deploying an Azure Function

An Azure Function is obviously geared towards being hosted on the Azure cloud, rather than locally or on your own computer. Visual Studio comes with a complete suite of tools to deploy and manage Azure services, and this includes full support for Azure Functions. The azure-functions-core-tools CLI that you downloaded to provide a local debug environment also has a set of tools for interacting with Azure Functions in the cloud, if you prefer CLIs.

It is possible to run Azure Functions on your own servers, using the Azure Functions runtime. This is a good way to utilize the sunk cost that you have already spent on servers, combined with the unlimited scale that Azure offers (if demand exceeds your server capacity). It's probably only worth it in terms of cost if you have a significant amount of unused Windows server time because this solution inevitably requires more management than normal Azure Function deployments. To deploy to Azure, you’ll need an Azure login with a valid Azure subscription.

Deploying to Azure

In this section, you'll deploy your first function to the public cloud and learn how to call it. You'll go live with your Azure Function and start creating your serverless architecture. To ensure that it happens correctly, you'll need a function project and a valid Azure subscription. You can begin by implementing the following steps:

1. Right-click on your project and select Publish.... Now select Azure Function App | Create New as shown in the following screenshot:

2. Enter a memorable name and create a resource group and a consumption app service plan to match the following:

3. Click on the Publish button to publish your function.

4. Open a browser, navigate to http://portal.azure.com, and find your function. You can use the search bar and search the name of your function. Click on your Function App, and then click on the function name. Click on Get function URL in the upper-right corner, paste the address of your function in Postman, and test it. If you have a paid subscription, these executions will cost a small amount of money - you are only charged for the compute resources that you actually use. On a free account, you get a million executions for free.

Outcome

You now have a fully deployed and working Azure Function in the cloud.

This is not the recommended way to deploy to production. Azure Resource Manager (ARM) templates are the recommended way to deploy to production. ARM templates are JavaScript Object Notation (JSON) files. The resources that you want to deploy are declaratively described within JSON. An ARM template is idempotent, which means it can be run as many times as required, and the output will be the same each and every time. Azure handles the execution and targets the changes that need to be run.

If you found this article interesting, explore Daniel Bass’ Beginning Serverless Architectures with Microsoft Azure to quickly get up and running with your own serverless development on Microsoft Azure. This book will provide you with the context you need to get started on a larger project of your own, leaving you equipped with everything you need to migrate to a cloud-first serverless solution.

Cheers!

P.S. This post was provided by good people at Packt Publishing. Thank you, Ron!