Incus Cluster 101 for Higher Availability Containers/VMs

Clustering provides higher availability than just a single standalone incus server. An Incus Cluster consists of one bootstrap server and, ideally, at least two other cluster members. In this tutorial, we see how to create a very basic Incus Cluster.

You can create a cluster with only two members, but the loss of one member results in a lack of quorum for the distributed database: with two voting members, a majority requires both, so a two-member cluster cannot survive any failure, while a three-member cluster can lose one member and still hold a majority. Since we are trying to provide the best availability, my recommendation is to always form an incus cluster with three or more member nodes.

The servers participating in a cluster can be physical servers or virtual machines. For availability, though, you would not want cluster members that are VMs to be hosted on the same physical server.

The initial cluster member that is used to form or start a cluster is called the bootstrap server. The bootstrap server can be a new server on which you have not yet run “incus admin init”. The bootstrap server can also be an existing standalone Incus Server that you convert into a cluster bootstrap member.

For our purposes, we are going to convert my “vmscloud-incus” incus standalone server to an incus bootstrap server. A bootstrap server is special because all of the containers, profiles, and images that exist on the standalone incus server will be preserved when it becomes the bootstrap server.

Unfortunately, additional cluster members that are added after the bootstrap server will lose any configured containers, profiles, and images, since only the bootstrap data is preserved. In fact, a server cannot join a cluster while it still holds instances. That means that if you do add an existing incus server to an existing cluster, you will need to export all of its containers beforehand and import them into the cluster afterwards, as sketched below.
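As a minimal sketch of that export/import round trip, assuming a container named “web1” on the server that is about to join (the container name and file name here are hypothetical):

# On the soon-to-be member, before joining: export the container to a tarball
incus export web1 web1-backup.tar.gz

# Copy the tarball to a machine that can reach the cluster, then import it
incus import web1-backup.tar.gz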

To start with, our standalone server, which will become the bootstrap cluster member, has three containers on it.

My first member server (second server in the cluster) will be “vmsfog-incus” and it has two containers on it. Unfortunately, these containers will be lost when this server is converted from a standalone incus server to a member server in the cluster.

The third member of the cluster is incus-member and it will be a member server just like vmsfog-incus. However, while I have installed incus on the incus-member server, I have never run “incus admin init” on it, so it has no containers on it. This is the more typical way of adding member servers since, as I noted, any containers, profiles or images are not retained when a server joins an existing cluster as a member.

To convert vmscloud-incus from a standalone incus server to a cluster bootstrap server, I set my cluster address to be the address of the vmscloud-incus server on its management port (8443 is the default API port). Again, this is a simple example:

incus config set cluster.https_address 172.16.1.219:8443
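If you want to confirm that the address was stored, “incus config get” will echo it back:

incus config get cluster.https_address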

Now, I can make it into a cluster:

incus cluster enable vmscloud-incus

To verify that the server is now part of a cluster:

incus cluster list

The vmsfog-incus incus server is not yet part of the cluster.

incus cluster list

Since vmsfog-incus has a couple of containers, I need to stop them and delete them, because vmsfog-incus cannot become a member server if it has existing containers. The cleanup is sketched below.
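Assuming the two containers are named “fog1” and “fog2” (hypothetical names; yours will differ, and “incus list” will show them), the cleanup looks like this:

incus stop fog1 fog2
incus delete fog1 fog2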

I must also remove the “root” disk device from the incus “default” profile:

incus profile device remove default root

I must also remove the default “eth0” network device that is used inside containers:

incus profile device remove default eth0
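With both devices removed, the “default” profile should now show an empty devices section, which you can confirm with:

incus profile show default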

We must also remove the “bridgeprofile” profile, if you created one by following my “incus step by step” tutorial:

incus profile delete bridgeprofile

We also have to delete the incusbr0 bridge device and the “default” storage pool definition:

incus network rm incusbr0
incus storage rm default
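After these two commands, the managed incusbr0 network and the default pool should no longer appear in the standard listings (note that “incus network list” will still show the host’s unmanaged interfaces):

incus network list
incus storage list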

Now we go to the bootstrap node and initiate an add command to get a cluster join token.

incus cluster add vmsfog-incus
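This prints a single-use join token, which is a long base64 string. The output looks roughly like the following, with a placeholder standing in for the actual token:

Member vmsfog-incus join token:
<long-base64-join-token>

Copy the token; you will paste it into the join dialog on vmsfog-incus.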

Now, back on vmsfog-incus, we can join it to the cluster using the token we just acquired from the bootstrap server. Cluster joins require “sudo”:

sudo incus admin init
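The init dialog walks you through the join. Paraphrasing the prompts from memory (the exact wording varies by incus version), the answers that matter are:

Would you like to use clustering? (yes/no): yes
What IP address or DNS name should be used to reach this server?: (this member's address)
Are you joining an existing cluster? (yes/no): yes
Please provide join token: (paste the token from “incus cluster add”)
All existing data is lost when joining a cluster, continue? (yes/no): yes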

Now, if I show the cluster from either vmscloud-incus or vmsfog-incus, it shows both members of the cluster.

incus cluster list

Note in the listing above that the URL for vmscloud-incus is shown as an IP address while vmsfog-incus is shown as a DNS name. Which form appears depends on whether you used a DNS name when you performed “incus admin init”. For proper cluster communication, it’s also important that every node of the cluster lists the DNS entries of all the other cluster members.

As an example, here’s my host table:

sudo nano /etc/hosts
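As an illustration, entries like these on every node let each member resolve the others (the vmscloud-incus and incus-member addresses appear earlier in this tutorial; the vmsfog-incus address here is hypothetical):

172.16.1.219   vmscloud-incus
172.16.1.220   vmsfog-incus
172.16.1.74    incus-member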

Next, I want to join incus-member. The incus-member server has incus installed, but since I have never performed “incus admin init” on it, adding it as a member server is much easier.

First, I set the cluster address on incus-member to its own address and management port.

incus config set cluster.https_address 172.16.1.74:8443

Now I can join the cluster with an “incus admin init”, but first I log into either vmscloud-incus or vmsfog-incus and get a join token with:

incus cluster add incus-member

Now I go back to incus-member.

In my case, this join initially failed because vmscloud-incus and vmsfog-incus were running incus v6.0 while incus-member was newly installed and running incus v6.1. All incus server versions must match before a new member server can be added.
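You can check what each server is running with the version command, which reports both the client and server versions:

incus version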

After upgrading all three servers to incus v6.1, the join worked by repeating the “incus admin init” as above, and now we can see all three servers:

incus cluster list

If you watched my video entitled “LXConsole Web Interface for Incus”, you can see that the web GUI also sees the cluster.

The summary screen also shows how many cluster members are online.

The beauty of the cluster is that if one of the cluster members goes down, the incus containers and incus virtual machines are all still reachable. A container added from any cluster member is visible from the point of view of all of the cluster members.
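For example, you can create an instance from any member and pin it to a particular member with the --target flag (the image alias and instance name here are just examples):

incus launch images:debian/12 web1 --target vmsfog-incus

Running “incus list” on any other member will then show the new instance.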

The higher availability that incus clusters offer may be the right solution for those applications that you need to be online even if a single server goes down.

Hi Scott, I’m trying to understand where the higher availability comes from. The docs seem to indicate that high availability on an incus cluster requires either (a) multiple instances running simultaneously behind an external load balancer, or (b) a remote storage pool (Ceph or lvmlockd) set up separately which allows cluster healing through automatic evacuation of instances of an offline member. Am I missing something? In your example above, if you take vmscloud-incus offline, will your instances still be online?

Yes, my instances will still be online. I have tested that many times.