The inventory file may include the following blocks:

- all
- kuma
- kuma_k0s
For each host, you must specify the FQDN in the <host name>.<domain> format and, if necessary, an IPv4 or IPv6 address. The KUMA Core domain name and its subdomains may not start with a numeral.

Example:

```yaml
hosts:
  hostname.example.com:
    ip: 0.0.0.0
```
The 'all' block
In this block, you can specify the variables that apply to all hosts listed in the inventory file, including the implicitly specified localhost on which the installation is started. Variables can be overridden at the level of host groups or individual hosts.
Example of overriding variables in the inventory file
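A minimal sketch of such an override, using placeholder host and user names (the group names only mirror the sections described later in this section):

```yaml
all:
  vars:
    ansible_user: root             # default user for every host in the inventory
kuma:
  children:
    kuma_collector:
      vars:
        ansible_user: kuma-deploy  # overrides the all-level value for all collector hosts
      hosts:
        kuma-collector-1.example.com:
          ip: 0.0.0.0
          ansible_user: admin      # overrides the group-level value for this host only
```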
The table below lists all possible variables in the vars section and their descriptions; a short example of an all block follows the table.
List of possible variables in the 'vars' section
| Variable | Description |
|---|---|
| ansible_connection | Method used to connect to target machines. Possible values: ssh (connect to remote hosts over SSH) or local (no connection to remote hosts is established). |
| ansible_user | User name used to connect to target machines and install components. If root login is blocked on the target machines, choose a user that has the right to establish SSH connections and elevate privileges using su or sudo. |
| ansible_become | This variable specifies if you want to elevate the privileges of the user that is used to install KUMA components. Possible values: true or false. |
| ansible_become_method | Method for elevating the privileges of the user that is used to install KUMA components. You must specify su or sudo. |
| ansible_ssh_private_key_file | Path to the private key in the /<path>/.ssh/id_rsa format. You must specify this variable if you want to use a key file other than the default key file (~/.ssh/id_rsa). |
| deploy_to_k8s | This variable specifies whether you want to deploy KUMA components in a Kubernetes cluster. Possible values: true or false. If you do not specify this variable, it defaults to false. |
| need_transfer | This variable specifies whether you want to migrate KUMA Core to a new Kubernetes cluster. You need to specify this variable only if you are deploying KUMA components in a Kubernetes cluster. Possible values: true or false. If you do not specify this variable, it defaults to false. |
| no_firewall_actions | This variable specifies whether the installer must perform the steps to configure the firewall on the hosts. Possible values: true or false. If you do not specify this variable, it defaults to false. |
| generate_etc_hosts | This variable specifies whether the machines must be registered in the DNS zone of your organization. The installer automatically adds the IP addresses of the machines from the inventory file to the /etc/hosts files on the machines on which KUMA components are installed. The specified IP addresses must be unique. Possible values: true or false. If you do not specify this variable, it defaults to false. |
| deploy_example_services | This variable specifies whether predefined services are created during the installation of KUMA. You need to specify this variable if you want to create demo services independently of the single/distributed/k0s inventory file. Possible values: true or false. If you do not specify this variable, it defaults to false. |
| low_resources | This variable specifies whether KUMA is being installed in an environment with limited computational resources. This variable is not specified in any of the inventory file templates. Possible values: true or false. If you do not specify this variable, it defaults to false. |
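As an illustration, an all block might combine several of the variables above. The values below are placeholders; keep only the variables that apply to your deployment:

```yaml
all:
  vars:
    ansible_connection: ssh         # connect to the target machines over SSH
    ansible_user: root              # user that connects and installs the components
    deploy_to_k8s: false            # do not deploy KUMA components in a Kubernetes cluster
    deploy_example_services: false  # do not create predefined demo services
```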
The 'kuma' block
In this block, you can specify the settings of KUMA components deployed outside of the Kubernetes cluster. The kuma block can contain the following sections:
- vars contains variables that apply to all hosts specified in the kuma block.
- children contains groups of settings for components:
  - kuma_core contains settings of the KUMA Core.
    - raft_node_addr is the FQDN on which you want raft to listen for signals from other nodes. This setting must be specified if the server uses multiple FQDNs. The value must be specified in the <FQDN>:<port> format. If this setting is not specified explicitly, <FQDN> defaults to the FQDN of the host on which the KUMA Core is deployed, and <port> defaults to 7209. You can specify an address of your choosing to adapt the KUMA Core to the configuration of your infrastructure.
  - kuma_core_peers contains settings of additional KUMA Cores. The installer prepares the servers specified in the kuma_core_peers group for the subsequent installation of additional KUMA Core services. You must specify an even number of servers in this group. If the kuma_core_peers group is missing from the inventory file or is empty, the normal installation or upgrade procedure with a single KUMA Core is performed. Only one KUMA Core service can be installed on each server in the kuma_core_peers group. (See also: using the kuma_core_peers group for different installer use cases.)
    - raft_node_addr is the FQDN on which you want raft to listen for signals from other nodes. This setting must be specified if the server uses multiple FQDNs. The value must be specified in the <FQDN>:<port> format.
  - kuma_collector contains settings of KUMA collectors. You can specify multiple hosts.
  - kuma_correlator contains settings of KUMA correlators. You can specify multiple hosts.
  - kuma_storage contains settings of KUMA storage nodes. You can specify multiple hosts as well as shard, replica, and keeper IDs for hosts using the following settings (see the sketch after this list):
    - shard is the shard ID.
    - replica is the replica ID.
    - keeper is the keeper ID.

    The specified shard, replica, and keeper IDs are used only if you are deploying demo services as part of a fresh KUMA installation. In other cases, the shard, replica, and keeper IDs that you specified in the KUMA web interface when creating a resource set for the storage are used.
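A minimal sketch of a kuma block with one host per group; the FQDNs, IP addresses, and IDs are placeholders, and raft_node_addr is included only to illustrate the <FQDN>:<port> format described above:

```yaml
kuma:
  vars:
    ansible_become: true
  children:
    kuma_core:
      hosts:
        kuma-core-1.example.com:
          ip: 0.0.0.0
          # Optional: address for raft to listen on; <port> defaults to 7209
          raft_node_addr: kuma-core-1.example.com:7209
    kuma_collector:
      hosts:
        kuma-collector-1.example.com:
          ip: 0.0.0.0
    kuma_correlator:
      hosts:
        kuma-correlator-1.example.com:
          ip: 0.0.0.0
    kuma_storage:
      hosts:
        kuma-storage-1.example.com:
          ip: 0.0.0.0
          shard: 1      # shard ID
          replica: 1    # replica ID
          keeper: 1     # keeper ID
```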
The 'kuma_k0s' block
In this block, you can specify the settings of the Kubernetes cluster that ensures high availability of KUMA. This block is specified only in an inventory file based on k0s.inventory.yml.template.
For test and demo installations in environments with limited computational resources, you must also set low_resources: true in the all block. In this case, the minimum size of the KUMA Core installation directory is reduced to 4 GB and the limitations of other computational resources are ignored.
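In the inventory file, this setting can be written as follows:

```yaml
all:
  vars:
    low_resources: true   # test and demo installations with limited resources
```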
For each host in the kuma_k0s block, a unique FQDN and IP address must be specified in the ansible_host variable, except for the host in the kuma_lb section. For the host in the kuma_lb section, only the FQDN must be specified. Hosts must be unique within a group.
For a demo installation, you may combine a controller with a worker node. Such a configuration does not provide high availability of the KUMA Core and is only intended for demonstrating the functionality or for testing the software environment.
The minimal configuration that ensures high availability is 3 controllers, 2 worker nodes, and 1 nginx load balancer. In production, we recommend using dedicated worker nodes and controllers. If a cluster controller is under workload and the pod with the KUMA Core is hosted on that controller, access to the KUMA Core is completely lost when the controller goes down.
The kuma_k0s block can contain the following sections:

- vars contains variables that apply to all hosts specified in the kuma_k0s block.
- children contains settings of the Kubernetes cluster that provides high availability of KUMA.

The following table lists possible variables in the vars section and their descriptions; a sketch of the block follows the table.
List of possible variables in the vars section

| Group of variables | Description |
|---|---|
| kuma_lb | FQDN of the load balancer. You can install the nginx load balancer or a third-party TCP load balancer. If you are installing the nginx load balancer, it can be configured automatically during the KUMA installation. If you are installing a third-party TCP load balancer, you must manually configure it before installing KUMA. |
| kuma_control_plane_master | The host that acts as the primary controller of the cluster. kuma_control_plane_master and kuma_control_plane_master_worker are the groups for specifying the primary controller; you only need to specify a host for one of these groups. |
| kuma_control_plane_master_worker | A host that combines the role of the primary controller and a worker node of the cluster. For each cluster controller that is combined with a worker node, you must specify additional settings in the inventory file. |
| kuma_control_plane | Hosts that act as controllers in the cluster. kuma_control_plane and kuma_control_plane_worker are the groups for specifying secondary controllers. |
| kuma_control_plane_worker | Hosts that combine the roles of controller and worker node in the cluster. For each cluster controller that is combined with a worker node, you must specify additional settings in the inventory file. |
| kuma_worker | Worker nodes of the cluster. For each cluster controller that is combined with a worker node, you must specify additional settings in the inventory file. |
| | If multiple network interfaces are being used on the worker nodes of the cluster at the same time, a dedicated variable specifies which network interface the cluster uses for communication between worker nodes. For example, if you want to use only network interfaces named ethN (where N is the number of the network interface), you can specify this variable with a mask that matches those names; the cluster then uses a network interface whose name matches the mask. If the network interface name on each worker node is the same, for example eth0, you can specify the variable without a mask. For more information, please refer to the Calico Open Source documentation. |
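To illustrate how hosts are listed in this block, a minimal sketch using the group names described above (verify them against k0s.inventory.yml.template) and placeholder FQDNs and addresses; a production deployment needs the full set of controllers, worker nodes, and the load balancer:

```yaml
kuma_k0s:
  children:
    kuma_lb:
      hosts:
        kuma-lb.example.com:        # load balancer: FQDN only, no ansible_host
    kuma_control_plane_master:
      hosts:
        kuma-ctrl-1.example.com:
          ansible_host: 0.0.0.0     # unique IP address of the controller
    kuma_worker:
      hosts:
        kuma-worker-1.example.com:
          ansible_host: 0.0.0.0     # unique IP address of the worker node
        kuma-worker-2.example.com:
          ansible_host: 0.0.0.0
```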