CRC – OpenShift 4 cluster in your pocket

… but only if you have large pockets, that is! 🙂

I suppose that, by now, you’ve already heard of minishift – a tool which helps you run a single-node OpenShift 3.x cluster inside a virtual machine (on your preferred hypervisor), which is extremely convenient if you need to do some local development, testing, etc. Of course, you can always deploy what you need on top of Azure or any other cloud provider, but having an OpenShift cluster locally has its benefits.

And what if you need, let’s say, an OpenShift 4.x cluster “to go”?

Don’t worry, there is a solution for you as well! It’s called CodeReady Containers (CRC) (version 1.9 at the time of writing), and it’s basically a single-node OpenShift 4.x cluster inside a virtual machine (sounds a lot like minishift, doesn’t it?).

So, how can you get your hands on CRC and make it work?

There are a couple of steps involved:

  • download the OpenShift tools (as I’m using Windows, I’ll use this one):

  • unzip all to a location you like (mine is C:\PocketOpenShift):

  • (optional) add this location to PATH for easier usage (as already mentioned, I’m using Windows & PowerShell):
[System.Environment]::SetEnvironmentVariable("PATH", $Env:Path + ";C:\PocketOpenShift", "Machine")
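If you want a quick sanity check that both binaries are now reachable from your shell, their version subcommands will do:
crc version
oc version --client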

  • run “crc setup” which will prepare the environment:
crc setup
# INFO Checking if oc binary is cached
# INFO Checking if podman remote binary is cached
# INFO Checking if CRC bundle is cached in '$HOME/.crc'
# INFO Checking if running as normal user
# INFO Checking Windows 10 release
# INFO Checking if Hyper-V is installed and operational
# INFO Checking if user is a member of the Hyper-V Administrators group
# INFO Checking if Hyper-V service is enabled
# INFO Checking if the Hyper-V virtual switch exist
# INFO Found Virtual Switch to use: Default Switch
# Setup is complete, you can now run 'crc start' to start the OpenShift cluster
  • run “crc start” (with one or more options, if needed – as you can see, I’ll be using a custom number of vCPUs, a custom amount of RAM and a custom nameserver):
crc start --cpus 4 --memory 8192 --nameserver 8.8.8.8 --pull-secret-file ".\pull-secret.txt"
# INFO Checking if oc binary is cached
# INFO Checking if podman remote binary is cached
# INFO Checking if running as normal user
# INFO Checking Windows 10 release
# INFO Checking if Hyper-V is installed and operational
# INFO Checking if user is a member of the Hyper-V Administrators group
# INFO Checking if Hyper-V service is enabled
# INFO Checking if the Hyper-V virtual switch exist
# INFO Found Virtual Switch to use: Default Switch
# INFO Loading bundle: crc_hyperv_4.3.10.crcbundle ...
# INFO Checking size of the disk image C:\Users\tomica\.crc\cache\crc_hyperv_4.3.10\crc.vhdx ...
# INFO Creating CodeReady Containers VM for OpenShift 4.3.10...
# INFO Verifying validity of the cluster certificates ...
# INFO Adding 8.8.8.8 as nameserver to the instance ...
# INFO Will run as admin: add dns server address to interface vEthernet (Default Switch)
# INFO Check internal and public DNS query ...
# INFO Check DNS query from host ...
# INFO Copying kubeconfig file to instance dir ...
# INFO Adding user's pull secret ...
# INFO Updating cluster ID ...
# INFO Starting OpenShift cluster ... [waiting 3m]
# INFO
# INFO To access the cluster, first set up your environment by following 'crc oc-env' instructions
# INFO Then you can access it by running 'oc login -u developer -p developer https://api.crc.testing:6443'
# INFO To login as an admin, run 'oc login -u kubeadmin -p <a_secret_password> https://api.crc.testing:6443'
# INFO
# INFO You can now run 'crc console' and use these credentials to access the OpenShift web console
# Started the OpenShift cluster
# WARN The cluster might report a degraded or error state. This is expected since several operators have been disabled to lower the resource usage. For more information, please consult the documentation
  • at any time, you can check the status of your cluster by using the “crc status” command:
crc status
# CRC VM:          Running
# OpenShift:       Running (v4.3.10)
# Disk Usage:      10.67GB of 32.72GB (Inside the CRC VM)
# Cache Usage:     14.27GB
# Cache Directory: C:\Users\tomica\.crc\cache
  • once it is up, you can use “oc login” or the web console (bring it up with “crc console”) to connect, either as (kube)admin or as a developer, and continue working as you would with any other OpenShift cluster:
oc login -u kubeadmin -p <a_secret_password> https://api.crc.testing:6443
# The server uses a certificate signed by an unknown authority.
# You can bypass the certificate check, but any data you send to the server could be intercepted by others.
# Use insecure connections? (y/n): y
# 
# Login successful.
# 
# You have access to 53 projects, the list has been suppressed. You can list all projects with 'oc projects'
# 
# Using project "default".
# Welcome! See 'oc help' to get started.
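
From here on, it behaves like any other OpenShift cluster – for example, something like this should work (the project name and sample image below are just illustrations I picked, not part of the CRC setup):
oc get nodes
oc new-project demo
oc new-app --name hello --docker-image=openshift/hello-openshift
oc status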

  • one other thing I like to do is to enable monitoring and the other disabled components (note, though, that your VM should have 12+ GB of RAM). You can do it with two commands: the first one lists everything that is disabled, and the second one, with the index at the end, enables it (also note that, if you’re working in PowerShell, the double and single quotes in the official documentation are switched, which causes issues):
oc get clusterversion version -ojsonpath="{range .spec.overrides[*]}{.name}{'\n'}{end}" | % {$nl++;"`t$($nl-1) `t $_"};$nl=0
# 0 cluster-monitoring-operator
# 1 machine-config-operator
# 2 etcd-quorum-guard
# 3 machine-api-operator
# 4 cluster-autoscaler-operator
# 5 insights-operator
# 6 prometheus-k8s

oc patch clusterversion/version --type="json" -p "[{'op':'remove', 'path':'/spec/overrides/0'}]" -oyaml
# ...
# state: Completed
# ...
  • monitoring should now be working as well:
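If you prefer the command line over the console for checking this, the monitoring pods live in the openshift-monitoring namespace:
oc get pods -n openshift-monitoring
# prometheus-k8s, alertmanager and friends should (eventually) show up as Running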

And that’s it – you’re ready to work on your own “pocket OpenShift cluster”! 🙂

Of course, don’t forget that there is also the official documentation, including the Getting Started guide and Release Notes and Known Issues document. So… take a look!

Cheers!

Software Management in Linux (Packt)

Learn software management with advanced Linux administration in this tutorial by Frederik Vos, a Linux trainer and evangelist, and a senior technical trainer for virtualization technologies such as Citrix XenServer and VMware vSphere.

— post by Frederik Vos, provided by Packt —

Software management

In the old days, installing software was a matter of extracting an archive to a filesystem. There were several problems with this approach:

  • It was difficult to remove the software if the files were copied into directories that were also used by another software
  • It was difficult to upgrade software, maybe because the files were still in use or were renamed
  • It was difficult to handle shared libraries

That’s why Linux distributions invented software managers.

The RPM software manager

In 1997, Red Hat released the first version of their package manager, RPM. Other distributions such as SUSE adopted this package manager. RPM is the name of the rpm utility, as well as the name of the format and the filename extension.

The RPM package contains the following:

  • A CPIO archive
  • Metadata with information about the software, such as a description and dependencies
  • Scriptlets for pre and post-installation scripts

In the past, Linux administrators used the rpm utility to install/update and remove software on a Linux system. If there was a dependency, the rpm command was able to tell exactly which other packages you needed to install. However, the rpm utility couldn’t fix the dependencies or possible conflicts between packages.

Nowadays, the rpm utility isn’t used any longer to install or remove software; instead, you use more advanced software installers. After the installation of software with yum (Red Hat/CentOS) or zypper (SUSE), all the metadata goes into a database. Querying this rpm database with the rpm command can be very handy.

A list of the most common rpm query parameters is as follows:

Parameter Description
-qa List all the installed packages.
-qi <software> List information.
-qc <software> List the installed configuration files.
-qd <software> List the installed documentation and examples.
-ql <software> List all the installed files.
-qf <filename> Shows the package that installed this file.
-V <software> Verifies the integrity/changes after the installation of a package; use -Va to do it for all installed software.
-qp Use this parameter together with other parameters if the package is not already installed. It’s especially useful if you combine this parameter with --scripts to investigate the pre and post-installation scripts in the package.

The following screenshot is an example of getting information about the installed SSH server package:
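On a CentOS/RHEL system, the commands behind it would look roughly like this (assuming the SSH server package is named openssh-server):

rpm -qi openssh-server      # description, version, license and other metadata
sudo rpm -V openssh-server  # verify the installed files against the rpm database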

The output of the -V parameter indicates that the modification time has changed since the installation. Now, make another change in the sshd_config file:

sudo cp /etc/ssh/sshd_config /tmp
sudo sed -i 's/#Port 22/Port 22/' /etc/ssh/sshd_config

If you verify the installed package again, there is an S added to the output, indicating that the file size is different, and a T, indicating that the modification time has changed:
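Running the verification again, with the same assumed package name as before:

sudo rpm -V openssh-server
# the flags column for /etc/ssh/sshd_config now contains at least S (size) and T (mtime)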

Other possible characters in the output are as follows:

S File size
M Mode (permissions)
5 Checksum
D Major/minor on devices
L Readlink mismatch
U User ownership
G Group ownership
T Modification time
P Capabilities

For text files, the diff command can help show the differences between the backup in the /tmp directory and the configuration in /etc/ssh:

sudo diff /etc/ssh/sshd_config /tmp/sshd_config

You can also restore the original file as follows:

sudo cp /tmp/sshd_config /etc/ssh/sshd_config

The DPKG software manager

The Debian distribution doesn’t use the RPM format; instead, it uses the DEB format, invented in 1995. The format is in use on all Debian and Ubuntu-based distributions.

A DEB package contains:

  • A file, debian-binary, with the version of the package
  • An archive file, control.tar, with metadata (package name, version, dependencies, and maintainer)
  • An archive file, data.tar, containing the actual software

Management of DEB packages can be done with the dpkg utility. Like rpm, the utility is not in use any longer to install software. Instead, the more advanced apt command is used. All the metadata goes into a database, which can be queried with dpkg or dpkg-query.

The important parameters of dpkg-query are as follows:

-l Lists all the packages when used without further arguments; you can also use wildcards, for example, dpkg -l *ssh*
-L <package> Lists files in an installed package
-p <package> Shows information about the package
-s <package> Shows the state of the package
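
A few examples, using openssh-client purely as a sample package name:

dpkg -l '*ssh*'          # list matching packages (quotes keep the shell from expanding the wildcard)
dpkg -L openssh-client   # files installed by the package
dpkg -s openssh-client   # current state of the package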

The first column from the output of dpkg -l also shows a status as follows:

The first character in the first column is the desired action, the second is the actual state of the package, and a possible third character indicates an error flag (R). ii means that the package is installed.

The possible desired states are as follows:

  • (u) unknown
  • (h) hold
  • (r) remove
  • (p) purge

The important package states are as follows:

  • n(ot) installed
  • H(a)lf installed
  • Hal(F) configured

Software management with YUM

Yellowdog Updater, Modified (YUM) is a modern software management tool that was introduced by Red Hat in Enterprise Linux version 5, replacing the up2date utility. It is currently in use in all Red Hat-based distributions but will be replaced with dnf, which is already used by Fedora. The good news is that dnf is syntax-compatible with yum.

Yum is responsible for:

  • Installing software, including dependencies
  • Updating software
  • Removing software
  • Listing and searching for software

The important basic parameters are as follows:

Command Description 
yum search Search for software based on package name/summary
yum provides Search for software based on a filename in a package
yum install Install software
yum info Information and status
yum update Update all software
yum remove Remove software
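
For example (nginx is just a stand-in package name here; it may live in an extra repository such as EPEL on RHEL/CentOS):

yum search nginx
yum info nginx
sudo yum install nginx
yum provides /usr/sbin/nginx   # which package owns this file
sudo yum remove nginx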

You can also install patterns of software; for instance, the pattern or group File and Print Server is a convenient way to install the NFS and Samba file servers together with the CUPS print server:

Command Description
yum groups list List the available groups.
yum groups install Install a group.
yum groups info Information about a group, including the group names that are in use by the Anaconda installer. This information is important for unattended installations.
yum groups update Update software within a group.
yum groups remove Remove the installed group.
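
Using the File and Print Server group mentioned above as an example:

yum groups list
yum groups info "File and Print Server"
sudo yum groups install "File and Print Server"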

Another nice feature of yum is working with history:

Command Description
yum history list List the tasks executed by yum
yum history info <number> List the content of a specific task
yum history undo <number> Undo the task; a redo is also available
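
For instance (the transaction ID 7 below is just an example – pick one from the list output):

yum history list
yum history info 7
sudo yum history undo 7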

The yum command uses repositories to be able to do all the software management. To list the currently configured repositories, use:

yum repolist

To add another repository, you’ll need the yum-config-manager tool, which creates and modifies the configuration files in /etc/yum.repos.d. For instance, if you want to add a repository to install Microsoft SQL Server, use the following:

yum-config-manager --add-repo \
  https://packages.microsoft.com/config/rhel/7/mssql-server-2017.repo

The yum functionality can be extended with plugins, for instance, to select the fastest mirror, enable filesystem/LVM snapshots, or run yum as a scheduled task (cron).
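
Plugins are shipped as ordinary packages; on CentOS/RHEL 7, for example, the fastest-mirror plugin would be installed like this (package name assumed from that release):

sudo yum install yum-plugin-fastestmirror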

Software management with Zypp

SUSE, like Red Hat, uses RPM for package management. But instead of using yum, they use another toolset, with Zypp (also known as libZypp) as the backend. Software management can be done with the graphical configuration software YaST or the command-line interface tool Zypper. The important basic parameters are as follows:

Command Description
zypper search Search for software
zypper install Install software
zypper remove Remove software
zypper update Update software
zypper dist-upgrade Perform a distribution upgrade
zypper info Show information
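
For example (nginx once more as a sample package):

zypper search nginx
zypper info nginx
sudo zypper install nginx
sudo zypper remove nginx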

There is a search option to search for a command, what-provides, but it’s very limited. If you don’t know the package name, there is a utility called cnf instead. Before you can use cnf, you’ll need to install scout; this way, the package properties can be searched:

sudo zypper install scout

After this, you can use cnf:
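
For example, to find out which package provides a command that isn’t installed yet (ifconfig picked just as an illustration):

cnf ifconfig
# cnf reports the package that provides the command and how to install it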

If you want to update your system to a new distribution version, you have to modify the repositories first. For instance, if you want to update from SUSE LEAP 42.3 to version 15.0, execute the following procedure:

  1. First, install the available updates for your current version:
sudo zypper update
  2. Update to the latest version in the 42.3.x releases:
sudo zypper dist-upgrade
  3. Modify the repository configuration:
sudo sed -i 's/42.3/15.0/g' /etc/zypp/repos.d/repo*.repo
  4. Initialize the new repositories:
sudo zypper refresh
  5. Install the new distribution:
sudo zypper dist-upgrade
  6. Now, reboot after the distribution upgrade.

Besides installing packages, you can also install the following:

  • patterns: Groups of packages, for instance, to install a complete web server including PHP and MySQL (also known as a LAMP stack)
  • patches: Incremental updates for a package
  • products: Installation of an additional product

To list the available patterns, use:

zypper patterns

To install them, use:

sudo zypper install --type pattern <pattern>
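
For example, the LAMP pattern mentioned above is typically called lamp_server on openSUSE (check the zypper patterns output for the exact name on your version):

sudo zypper install --type pattern lamp_server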

The same procedure applies to patches and products. Zypper uses online repositories; to view the currently configured ones, use:

sudo zypper repos

You can add repositories with the addrepo parameter, for instance, to add a community repository for the latest PowerShell version on LEAP 15.0:

sudo zypper addrepo \
  https://download.opensuse.org/repositories/home:/aaptel:/powershell-stuff/openSUSE_Leap_15.0/home:aaptel:powershell-stuff.repo

If you add a repository, you must always refresh the repositories:

sudo zypper refresh

Software management with apt

In Debian/Ubuntu-based distributions, software management is done via the apt utility, which is a recent replacement for the apt-get and apt-cache utilities.

The most-used commands include:

Command Description
apt list List packages
apt search Search in descriptions
apt install Install a package
apt show Show package details
apt remove Remove a package
apt update Update catalog of available packages
apt upgrade Upgrade the installed software
apt edit-sources Edit the repository configuration
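
For example (nginx again as a stand-in package name):

sudo apt update
apt search nginx
apt show nginx
sudo apt install nginx
sudo apt remove nginx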

Repositories are configured in /etc/apt/sources.list and files in the /etc/apt/sources.list.d/ directory. Alternatively, there is a command, apt-add-repository, available:

apt-add-repository \
  'deb http://myserver/path/to/repo stable'

The apt repositories have the concept of release classes:

  • Old stable: the stable release of the previous version of the distribution
  • Stable
  • Testing
  • Unstable

They also have the concept of components:

  • Main: Tested and provided with support and updates
  • Contrib: Tested and provided with support and updates, but with dependencies that are not in main but, for instance, in non-free
  • Non-free: Software that isn’t compliant with the Debian Social Contract Guidelines (https://www.debian.org/social_contract#guidelines)

Ubuntu adds some extra components:

  • Universe: Community provided, no support, updates possible
  • Restricted: Proprietary device drivers
  • Multiverse: Software restricted by copyright or legal issues
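
Both classes and components end up in the repository definition lines; a typical Ubuntu entry in /etc/apt/sources.list looks something like this (the release codename and mirror are only illustrative):

deb http://archive.ubuntu.com/ubuntu bionic main restricted universe multiverse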

If you found this article interesting, you can explore Frederik Vos’ Hands-On Linux Administration on Azure to administer Linux on Azure. Hands-On Linux Administration on Azure will help you efficiently run Linux-based workloads in Azure and make the most of the important tools required for deployment.

Cheers!