Edit the inventory file settings in k0s.inventory.yml.
Example inventory file for a distributed installation in a high availability configuration with 3 controllers, 2 worker nodes, and 1 load balancer
all:
  vars:
    ansible_connection: ssh
    ansible_user: root
    deploy_to_k8s: true
    need_transfer: false
    generate_etc_hosts: false
    deploy_example_services: false
kuma:
  children:
    kuma_core:
      hosts:
        kuma-core.example.com:
          mongo_log_archives_number: 14
          mongo_log_frequency_rotation: daily
          mongo_log_file_size: 1G
    kuma_collector:
      hosts:
        kuma-collector.example.com:
    kuma_correlator:
      hosts:
        kuma-correlator.example.com:
    kuma_storage:
      hosts:
        kuma-storage-cluster1.server1.example.com:
        kuma-storage-cluster1.server2.example.com:
        kuma-storage-cluster1.server3.example.com:
        kuma-storage-cluster1.server4.example.com:
        kuma-storage-cluster1.server5.example.com:
        kuma-storage-cluster1.server6.example.com:
        kuma-storage-cluster1.server7.example.com:
kuma_k0s:
  children:
    kuma_lb:
      hosts:
        kuma-lb.example.com:
          kuma_managed_lb: true
    kuma_control_plane_master:
      hosts:
        kuma_cpm.example.com:
          ansible_host: 10.0.1.10
    kuma_control_plane_master_worker:
    kuma_control_plane:
      hosts:
        kuma_cp2.example.com:
          ansible_host: 10.0.1.11
        kuma_cp3.example.com:
          ansible_host: 10.0.1.12
    kuma_control_plane_worker:
    kuma_worker:
      hosts:
        kuma-w1.example.com:
          ansible_host: 10.0.2.11
          extra_args: "--labels=kaspersky.com/kuma-core=true,kaspersky.com/kuma-ingress=true,node.longhorn.io/create-default-disk=true"
        kuma-w2.example.com:
          ansible_host: 10.0.2.12
          extra_args: "--labels=kaspersky.com/kuma-core=true,kaspersky.com/kuma-ingress=true,node.longhorn.io/create-default-disk=true"
For this configuration, specify the parameters as follows: need_transfer: false and deploy_example_services: false; in the kuma_storage section, list the servers of the storage cluster. After the installation is complete, you can use the KUMA web interface to assign the shard, replica, and keeper roles to the servers specified in the inventory.
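The flag combination matters: a fresh high-availability installation requires deploy_to_k8s: true together with need_transfer: false and deploy_example_services: false, plus a populated kuma_storage group. The following is a minimal sketch, not part of the KUMA installer: the inventory appears as an already-parsed Python dict (a YAML loader would produce the same structure from k0s.inventory.yml), and validate_fresh_install is a hypothetical helper name.

```python
# Sketch: sanity-check the flags for a fresh HA installation.
# The dict mirrors the example inventory above, trimmed to the keys
# the check needs; validate_fresh_install is a hypothetical helper.
inventory = {
    "all": {
        "vars": {
            "ansible_connection": "ssh",
            "ansible_user": "root",
            "deploy_to_k8s": True,
            "need_transfer": False,
            "generate_etc_hosts": False,
            "deploy_example_services": False,
        }
    },
    "kuma": {
        "children": {
            "kuma_storage": {
                "hosts": {
                    f"kuma-storage-cluster1.server{i}.example.com": None
                    for i in range(1, 8)
                }
            }
        }
    },
}

def validate_fresh_install(inv: dict) -> list:
    """Return a list of problems; an empty list means the flags are consistent."""
    problems = []
    flags = inv["all"]["vars"]
    if not flags.get("deploy_to_k8s"):
        problems.append("deploy_to_k8s must be true for a k0s deployment")
    if flags.get("need_transfer"):
        problems.append("need_transfer must be false for a fresh install")
    if flags.get("deploy_example_services"):
        problems.append("deploy_example_services must be false")
    if not inv["kuma"]["children"]["kuma_storage"]["hosts"]:
        problems.append("kuma_storage must list the storage cluster servers")
    return problems

print(validate_fresh_install(inventory))  # [] when the inventory is consistent
```

Running the same check after flipping need_transfer to true would report the mismatch, which is the point: the two example inventories on this page differ only in that flag.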
Example inventory file for migrating the Core from a distributed installation to a Kubernetes cluster to ensure high availability
all:
  vars:
    ansible_connection: ssh
    ansible_user: root
    deploy_to_k8s: true
    need_transfer: true
    generate_etc_hosts: false
    deploy_example_services: false
kuma:
  children:
    kuma_core:
      hosts:
        kuma-core.example.com:
          mongo_log_archives_number: 14
          mongo_log_frequency_rotation: daily
          mongo_log_file_size: 1G
    kuma_collector:
      hosts:
        kuma-collector.example.com:
    kuma_correlator:
      hosts:
        kuma-correlator.example.com:
    kuma_storage:
      hosts:
        kuma-storage-cluster1.server1.example.com:
        kuma-storage-cluster1.server2.example.com:
        kuma-storage-cluster1.server3.example.com:
        kuma-storage-cluster1.server4.example.com:
        kuma-storage-cluster1.server5.example.com:
        kuma-storage-cluster1.server6.example.com:
        kuma-storage-cluster1.server7.example.com:
kuma_k0s:
  children:
    kuma_lb:
      hosts:
        kuma-lb.example.com:
          kuma_managed_lb: true
    kuma_control_plane_master:
      hosts:
        kuma_cpm.example.com:
          ansible_host: 10.0.1.10
    kuma_control_plane_master_worker:
    kuma_control_plane:
      hosts:
        kuma_cp2.example.com:
          ansible_host: 10.0.1.11
        kuma_cp3.example.com:
          ansible_host: 10.0.1.12
    kuma_control_plane_worker:
    kuma_worker:
      hosts:
        kuma-w1.example.com:
          ansible_host: 10.0.2.11
          extra_args: "--labels=kaspersky.com/kuma-core=true,kaspersky.com/kuma-ingress=true,node.longhorn.io/create-default-disk=true"
        kuma-w2.example.com:
          ansible_host: 10.0.2.12
          extra_args: "--labels=kaspersky.com/kuma-core=true,kaspersky.com/kuma-ingress=true,node.longhorn.io/create-default-disk=true"
The kuma_core, kuma_collector, kuma_correlator, and kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used in the distributed.inventory.yml file when KUMA was upgraded from version 2.1.3 to version 3.0.3 and then to version 3.2, or when a new installation was performed. In the k0s.inventory.yml inventory file, set deploy_to_k8s: true, need_transfer: true, and deploy_example_services: false.
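Before starting a migration it is worth confirming both requirements at once: the four carried-over host groups match between the two inventory files, and the migration flags are set. The sketch below is an illustration only, not a KUMA tool: both inventories appear as already-parsed dicts with a tiny sample_inventory builder, and hosts_of and check_migration are hypothetical helper names.

```python
# Sketch: compare host groups between distributed.inventory.yml and
# k0s.inventory.yml and verify the migration flags on the new file.
# hosts_of, check_migration, and sample_inventory are hypothetical helpers.
GROUPS = ("kuma_core", "kuma_collector", "kuma_correlator", "kuma_storage")

def hosts_of(inv: dict, group: str) -> set:
    """Host names of one group in a parsed YAML inventory."""
    return set(inv["kuma"]["children"][group]["hosts"])

def check_migration(old_inv: dict, new_inv: dict) -> list:
    """Return a list of problems; an empty list means migration can proceed."""
    problems = []
    flags = new_inv["all"]["vars"]
    for name, expected in (("deploy_to_k8s", True),
                           ("need_transfer", True),
                           ("deploy_example_services", False)):
        if flags.get(name) != expected:
            problems.append(f"{name} must be {str(expected).lower()}")
    for group in GROUPS:
        if hosts_of(old_inv, group) != hosts_of(new_inv, group):
            problems.append(f"{group} hosts differ from distributed.inventory.yml")
    return problems

def sample_inventory(need_transfer: bool) -> dict:
    """Tiny illustrative inventory; real files carry many more keys."""
    return {
        "all": {"vars": {"deploy_to_k8s": True,
                         "need_transfer": need_transfer,
                         "deploy_example_services": False}},
        "kuma": {"children": {
            g: {"hosts": {f"{g}.example.com": None}} for g in GROUPS}},
    }

old_inv = sample_inventory(need_transfer=False)  # the old distributed install
new_inv = sample_inventory(need_transfer=True)   # the k0s migration inventory
print(check_migration(old_inv, new_inv))  # [] when hosts and flags line up
```

Comparing host names as sets deliberately ignores per-host variables such as ansible_host: only group membership must be identical between the two files.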