Migrating the KUMA Core to a new Kubernetes cluster

To migrate the KUMA Core to a new Kubernetes cluster:

  1. Prepare the k0s.inventory.yml inventory file.

    The kuma_core, kuma_collector, kuma_correlator, and kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 and then to version 3.2, or when a new installation was performed. In the inventory file, set deploy_to_k8s: true and need_transfer: true, and set deploy_example_services: false (see the example after these steps).

  2. Follow the steps for distributed installation using your prepared k0s.inventory.yml inventory file.
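
For reference, the variables from step 1 might look as follows in the k0s.inventory.yml file. This is a minimal sketch of the vars section only; the host groups and their hosts must stay exactly as they were in your existing installation:

all:
  vars:
    deploy_to_k8s: true
    need_transfer: true
    deploy_example_services: false

# The kuma_core, kuma_collector, kuma_correlator, and kuma_storage
# sections keep the same hosts as in the previous upgrade or installation.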

When the installer is started with an inventory file, the installer looks for an installed KUMA Core on all hosts where you plan to deploy worker nodes of the cluster. If a Core is found, it is moved from its host to the newly created Kubernetes cluster.

Resolving the KUMA Core migration error

Migration of the KUMA Core from a host to a new Kubernetes cluster may be aborted due to a timeout at the Deploy Core transfer job step. In this case, the following error message is recorded in the log of the core-transfer migration task:

cp: can't stat '/mnt/kuma-source/core/.lic': No such file or directory

To prevent this error, before you start migrating the KUMA Core:

  1. Go to the directory with the extracted installer and open the roles/k0s_prepare/templates/core-transfer-job.yaml.j2 file for editing.
  2. In the core-transfer-job.yaml.j2 file, find the following lines:

    cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&

    cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&

  3. Edit these lines as follows, making sure you keep the indentation (number of space characters):

    cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&

    cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&

  4. Save the changes to the file.
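
The same edit can also be scripted. The following is a minimal sketch, assuming GNU sed and that your working directory is the directory with the extracted installer; it inserts the {{ core_uid }} path segment into both cp lines and preserves the indentation:

sed -i 's#/mnt/kuma-source/core/\.#/mnt/kuma-source/core/{{ core_uid }}/.#g' roles/k0s_prepare/templates/core-transfer-job.yaml.j2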

You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. The migration of the KUMA Core from the host to the new Kubernetes cluster will then succeed.

If you started migrating the KUMA Core from a host to a new Kubernetes cluster and the migration failed with an error, follow the steps below to fix the error.

To fix the error after attempting to migrate the KUMA Core from a host to a new Kubernetes cluster:

  1. On any controller of the cluster, delete the ingress DaemonSet by running the following command:

    sudo k0s kubectl delete daemonset/ingress -n ingress

  2. Check if a migration job exists in the cluster:

    sudo k0s kubectl get jobs -n kuma

  3. If a migration job exists, delete it:

    sudo k0s kubectl delete job core-transfer -n kuma

  4. Go to the console of a host from the kuma_core group.
  5. Start the KUMA Core services by running the following commands:

    sudo systemctl start kuma-mongodb

    sudo systemctl start kuma-core-00000000-0000-0000-0000-000000000000

  6. Make sure that the kuma-core-00000000-0000-0000-0000-000000000000 service has been successfully started:

    sudo systemctl status kuma-core-00000000-0000-0000-0000-000000000000

  7. Make sure that the KUMA web interface is accessible at the FQDN of the kuma_core group host (see the check after these steps).

    Other hosts do not need to be running.

  8. Go to the directory with the extracted installer and open the roles/k0s_prepare/templates/core-transfer-job.yaml.j2 file for editing.
  9. In the core-transfer-job.yaml.j2 file, find the following lines:

    cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&

    cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&

  10. Edit these lines as follows, making sure you keep the indentation (number of space characters):

    cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&

    cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&

  11. Save the changes to the file.
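
For the check in step 7, you can, for example, request the KUMA web interface from the command line. The FQDN below is a placeholder; port 7220 is the default port of the KUMA web interface, and -k skips verification of the internal certificate:

curl -k https://kuma-core.example.com:7220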

You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. The migration of the KUMA Core from the host to the new Kubernetes cluster will then succeed.

If no installed KUMA Core is detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. In that case, you must manually recreate the existing components with the new Core in the KUMA web interface.

For the collectors, correlators, and storages listed in the inventory file, certificates for communication with the Core inside the cluster are reissued. The Core URL used by these components does not change.

On the Core host, the installer does the following:

  • Removes the following systemd services from the host: kuma-core, kuma-mongodb, kuma-victoria-metrics, kuma-vmalert, and kuma-grafana.
  • Deletes the internal certificate of the Core.
  • Deletes the certificate files of all other components and deletes their records from MongoDB.
  • Deletes the following directories:
    • /opt/kaspersky/kuma/core/bin
    • /opt/kaspersky/kuma/core/certificates
    • /opt/kaspersky/kuma/core/log
    • /opt/kaspersky/kuma/core/logs
    • /opt/kaspersky/kuma/grafana/bin
    • /opt/kaspersky/kuma/mongodb/bin
    • /opt/kaspersky/kuma/mongodb/log
    • /opt/kaspersky/kuma/victoria-metrics/bin
  • Migrates data from the Core and its dependencies to a network drive within the Kubernetes cluster.
  • Moves the following directories:
    • /opt/kaspersky/kuma/core → /opt/kaspersky/kuma/core.moved
    • /opt/kaspersky/kuma/grafana → /opt/kaspersky/kuma/grafana.moved
    • /opt/kaspersky/kuma/mongodb → /opt/kaspersky/kuma/mongodb.moved
    • /opt/kaspersky/kuma/victoria-metrics → /opt/kaspersky/kuma/victoria-metrics.moved

After you have verified that the Core was correctly migrated to the cluster, you can delete these directories.
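
For example, once the Core is confirmed to be running in the cluster, the moved copies can be removed in one command:

sudo rm -rf /opt/kaspersky/kuma/*.moved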

If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
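
For example, while the task is still available, its log can be viewed from any cluster controller; the job name core-transfer is the same one used earlier in this section:

sudo k0s kubectl logs job/core-transfer -n kuma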

If you need to perform migration again, you must restore the original names of the /opt/kaspersky/kuma/*.moved directories.
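
A minimal sketch of restoring the original names, assuming all four directories listed above were moved:

for d in core grafana mongodb victoria-metrics; do
  sudo mv /opt/kaspersky/kuma/"$d".moved /opt/kaspersky/kuma/"$d"
done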

If the /etc/hosts file on the Core host contained lines that were not related to addresses in the 127.X.X.X range, the contents of the /etc/hosts file from the Core host are entered into the coredns ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.
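
To check the transferred entries, you can print the coredns ConfigMap; in a default k0s deployment, CoreDNS resides in the kube-system namespace:

sudo k0s kubectl get configmap coredns -n kube-system -o yaml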

See also:

Distributed installation in a high availability configuration
