Kaspersky Unified Monitoring and Analysis Platform

Modifying the configuration of KUMA

The KUMA configuration can be modified in the following ways.

  • Expanding an all-in-one installation to a distributed installation.

    To expand an all-in-one installation to a distributed installation:

    1. Create a backup copy of KUMA.
    2. Remove the pre-installed correlator, collector, and storage services from the server.
      1. In the KUMA web interface, under Resources → Active services, select a service and click Copy ID. On the server where the services were installed, run the service removal command:

        sudo /opt/kaspersky/kuma/kuma <collector/correlator/storage> --id <service ID copied from the KUMA web interface> --uninstall

        Repeat the removal command for each service.
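
        For example, if you have copied the IDs of the three preinstalled services, the removal commands could take the following shape (a minimal sketch; the IDs are placeholders, substitute the values copied from the web interface):

        # Placeholders: paste the IDs copied from Resources → Active services.
        COLLECTOR_ID="<collector service ID>"
        CORRELATOR_ID="<correlator service ID>"
        STORAGE_ID="<storage service ID>"

        # Uninstall each preinstalled service by its ID.
        sudo /opt/kaspersky/kuma/kuma collector --id "$COLLECTOR_ID" --uninstall
        sudo /opt/kaspersky/kuma/kuma correlator --id "$CORRELATOR_ID" --uninstall
        sudo /opt/kaspersky/kuma/kuma storage --id "$STORAGE_ID" --uninstall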

      2. Then remove the services in the KUMA web interface: in the Resources → Active services section, select the services and click Delete.

      As a result, only the KUMA Core remains on the initial installation server.

    3. Prepare the distributed.inventory.yml inventory file and, in that file, specify the initial all-in-one installation server in the kuma_core group.

      In this way, the KUMA Core remains on the original server, and you can deploy the other components on other servers. In the inventory file, specify the servers on which you want to install the KUMA components.

      Example inventory file for expanding an all-in-one installation to a distributed installation

      all:
        vars:
          deploy_to_k8s: false
          need_transfer: false
          generate_etc_hosts: false
          deploy_example_services: false
          no_firewall_actions: false
      kuma:
        vars:
          ansible_connection: ssh
          ansible_user: root
        children:
          kuma_core:
            hosts:
              kuma-core-1.example.com:
                ip: 0.0.0.0
                mongo_log_archives_number: 14
                mongo_log_frequency_rotation: daily
                mongo_log_file_size: 1G
          kuma_collector:
            hosts:
              kuma-collector-1.example.com:
                ip: 0.0.0.0
          kuma_correlator:
            hosts:
              kuma-correlator-1.example.com:
                ip: 0.0.0.0
          kuma_storage:
            hosts:
              kuma-storage-cluster1-server1.example.com:
                ip: 0.0.0.0
                shard: 1
                replica: 1
                keeper: 0
              kuma-storage-cluster1-server2.example.com:
                ip: 0.0.0.0
                shard: 1
                replica: 2
                keeper: 0
              kuma-storage-cluster1-server3.example.com:
                ip: 0.0.0.0
                shard: 2
                replica: 1
                keeper: 0
              kuma-storage-cluster1-server4.example.com:
                ip: 0.0.0.0
                shard: 2
                replica: 2
                keeper: 0
              kuma-storage-cluster1-server5.example.com:
                ip: 0.0.0.0
                shard: 0
                replica: 0
                keeper: 1
              kuma-storage-cluster1-server6.example.com:
                ip: 0.0.0.0
                shard: 0
                replica: 0
                keeper: 2
              kuma-storage-cluster1-server7.example.com:
                ip: 0.0.0.0
                shard: 0
                replica: 0
                keeper: 3

    4. Create and install the storage, collector, correlator, and agent services on other machines.
      1. After you specify the settings in all sections of the distributed.inventory.yml file, run the installer on the control machine:

        sudo ./install.sh distributed.inventory.yml

        This command creates files necessary to install the KUMA components (storage, collectors, correlators) on each target machine specified in distributed.inventory.yml.

      2. Create storage, collector, and correlator services.

    The expansion of the installation is completed.

  • Adding servers for collectors to a distributed installation.

    The following instructions describe adding one or more servers to an existing infrastructure to then install collectors on these servers to balance the load. You can use these instructions as an example and adapt them according to your needs.

    To add servers to a distributed installation:

    1. Ensure that the target machines meet hardware, software, and installation requirements.
    2. On the control machine, go to the directory with the extracted KUMA installer by running the following command:

      cd kuma-ansible-installer

    3. Create an inventory file named expand.inventory.yml by copying the expand.inventory.yml.template file:

      cp expand.inventory.yml.template expand.inventory.yml

    4. Edit the settings in the expand.inventory.yml inventory file and specify the servers that you want to add in the kuma_collector section.

      Example expand.inventory.yml inventory file for adding collector servers

      kuma:
        vars:
          ansible_connection: ssh
          ansible_user: root
        children:
          kuma_collector:
            kuma-additional-collector1.example.com
            kuma-additional-collector2.example.com
          kuma_correlator:
          kuma_storage:
            hosts:

    5. On the control machine, run the following command as root from the directory with the unpacked installer:

      ./expand.sh expand.inventory.yml

      This command creates files for creating and installing the collector on each target machine specified in the expand.inventory.yml inventory file.

    6. Create and install the collectors. A KUMA collector consists of a client part and a server part, therefore creating a collector involves two steps.
      1. Creating the client part of the collector, which includes a resource set and the collector service.

        To create a resource set for a collector, in the KUMA web interface, under Resources → Collectors, click Add collector and edit the settings. For more details, see Creating a collector.

        At the last step of the configuration wizard, after you click Create and save, a resource set for the collector is created and the collector service is automatically created. The command for installing the service on the server is also automatically generated and displayed on the screen. Copy the installation command and proceed to the next step.

      2. Creating the server part of the collector.
      1. On the target machine, run the command you copied at the previous step. The command looks as follows, but all parameters are filled in automatically.

        sudo /opt/kaspersky/kuma/kuma <collector> --core https://<KUMA Core server FQDN>:<port used by KUMA Core for internal communication (port 7210 by default)> --id <service ID copied from the KUMA web interface> --install

        The collector service is installed on the target machine. You can check the status of the service in the web interface under Resources → Active services.
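
        For reference, with the example Core FQDN from the inventory file shown earlier (kuma-core-1.example.com) and the default port 7210, the copied command has roughly the following shape (a sketch only; the web interface generates the command with the real service ID filled in):

        # Illustrative shape of the generated command; copy the actual command from the web interface.
        sudo /opt/kaspersky/kuma/kuma collector \
          --core https://kuma-core-1.example.com:7210 \
          --id <service ID copied from the KUMA web interface> \
          --install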

      2. Run the same command on each target machine specified in the expand.inventory.yml inventory file.
    7. Add the new servers to the distributed.inventory.yml inventory file so that it has up-to-date information in case you need to upgrade KUMA.

    Servers are successfully added.

  • Adding servers for correlators to a distributed installation.

    The following instructions describe adding one or more servers to an existing infrastructure to then install correlators on these servers to balance the load. You can use these instructions as an example and adapt them to your requirements.

    To add servers to a distributed installation:

    1. Ensure that the target machines meet hardware, software, and installation requirements.
    2. On the control machine, go to the directory with the extracted KUMA installer by running the following command:

      cd kuma-ansible-installer

    3. Create an inventory file named expand.inventory.yml by copying the expand.inventory.yml.template file:

      cp expand.inventory.yml.template expand.inventory.yml

    4. Edit the settings in the expand.inventory.yml inventory file and specify the servers that you want to add in the kuma_correlator section.

      Example expand.inventory.yml inventory file for adding correlator servers

      kuma:
        vars:
          ansible_connection: ssh
          ansible_user: root
        children:
          kuma_collector:
          kuma_correlator:
            kuma-additional-correlator1.example.com
            kuma-additional-correlator2.example.com
          kuma_storage:
            hosts:

    5. On the control machine, run the following command as root from the directory with the unpacked installer:

      ./expand.sh expand.inventory.yml

      This command creates files for creating and installing the correlator on each target machine specified in the expand.inventory.yml inventory file.

    6. Create and install the correlators. A KUMA correlator consists of a client part and a server part, therefore creating a correlator involves two steps.
      1. Creating the client part of the correlator, which includes a resource set and the correlator service.

        To create a resource set for a correlator, in the KUMA web interface, under Resources → Correlators, click Add correlator and edit the settings. For more details, see Creating a correlator.

        At the last step of the configuration wizard, after you click Create and save, a resource set for the correlator is created and the correlator service is automatically created. The command for installing the service on the server is also automatically generated and displayed on the screen. Copy the installation command and proceed to the next step.

      2. Creating the server part of the correlator.
      1. On the target machine, run the command you copied at the previous step. The command looks as follows, but all parameter values are assigned automatically.

        sudo /opt/kaspersky/kuma/kuma <correlator> --core https://<KUMA Core server FQDN>:<port used by KUMA Core for internal communication (port 7210 by default)> --id <service ID copied from the KUMA web interface> --install

        The correlator service is installed on the target machine. You can check the status of the service in the web interface under Resources → Active services.

      2. Run the same command on each target machine specified in the expand.inventory.yml inventory file.
    7. Add the new servers to the distributed.inventory.yml inventory file so that it has up-to-date information in case you need to upgrade KUMA.

    Servers are successfully added.

  • Adding servers to an existing storage cluster.

    The following instructions describe adding multiple servers to an existing storage cluster. You can use these instructions as an example and adapt them to your requirements.

    To add servers to an existing storage cluster:

    1. Ensure that the target machines meet hardware, software, and installation requirements.
    2. On the control machine, go to the directory with the extracted KUMA installer by running the following command:

      cd kuma-ansible-installer

    3. Create an inventory file named expand.inventory.yml by copying the expand.inventory.yml.template file:

      cp expand.inventory.yml.template expand.inventory.yml

    4. Edit the settings in the expand.inventory.yml inventory file and specify the servers that you want to add in the kuma_storage section. In the following example, the kuma_storage section specifies servers for installing two shards, each containing two replicas. In the expand.inventory.yml inventory file, you only need to specify the FQDNs; you will assign the roles of shards and replicas later in the KUMA web interface as you follow these instructions. You can adapt this example according to your needs.

      Example expand.inventory.yml inventory file for adding servers to an existing storage cluster

      kuma:
        vars:
          ansible_connection: ssh
          ansible_user: root
        children:
          kuma_collector:
          kuma_correlator:
          kuma_storage:
            hosts:
              kuma-storage-cluster1-server8.example.com:
              kuma-storage-cluster1-server9.example.com:
              kuma-storage-cluster1-server10.example.com:
              kuma-storage-cluster1-server11.example.com:

    5. On the control machine, run the following command as root from the directory with the unpacked installer:

      ./expand.sh expand.inventory.yml

      This command creates files for creating and installing the storage on each target machine specified in the expand.inventory.yml inventory file.

    6. You do not need to create a separate storage because you are adding servers to an existing storage cluster. Edit the storage settings of the existing cluster:
      1. In the Resources → Storages section, select an existing storage and open the storage for editing.
      2. In the ClickHouse cluster nodes section, click Add nodes and specify roles in the fields for the new node. The following example describes how to specify IDs to add two shards, containing two replicas each, to an existing cluster. You can adapt this example according to your needs.

        Example:

        ClickHouse cluster nodes

        <existing nodes>

        FQDN: kuma-storage-cluster1-server8.example.com
        Shard ID: 1
        Replica ID: 1
        Keeper ID: 0

        FQDN: kuma-storage-cluster1-server9.example.com
        Shard ID: 1
        Replica ID: 2
        Keeper ID: 0

        FQDN: kuma-storage-cluster1-server10.example.com
        Shard ID: 2
        Replica ID: 1
        Keeper ID: 0

        FQDN: kuma-storage-cluster1-server11.example.com
        Shard ID: 2
        Replica ID: 2
        Keeper ID: 0

      3. Save the storage settings.

        Now you can create storage services for each ClickHouse cluster node.

    7. To create a storage service, in the KUMA web interface, in the Resources → Active services section, click Add service.

      This opens the Choose a service window; in that window, select the storage you edited at the previous step and click Create service. Do the same for each ClickHouse storage node you are adding.

      As a result, the number of created services must be the same as the number of nodes being added to the ClickHouse cluster, for example, four services for four nodes. The created storage services are displayed in the KUMA web interface in the Resources → Active services section.

    8. Now storage services must be installed on each server by using the service ID.
      1. In the KUMA web interface, in the Resources → Active services section, select the storage service that you need and click Copy ID.

        The service ID is copied to the clipboard; you need it for running the service installation command.

      2. Compose and run the following command on the target machine:

        sudo /opt/kaspersky/kuma/kuma <storage> --core https://<KUMA Core server FQDN>:<port used by KUMA Core for internal communication (port 7210 by default)> --id <service ID copied from the KUMA web interface> --install

        The storage service is installed on the target machine. You can check the status of the service in the web interface under Resources → Active services.

      3. Run the storage service installation command on each target machine listed in the kuma_storage section of the expand.inventory.yml inventory file, one machine at a time. On each machine, specify that machine's service ID (unique within the cluster) in the installation command; a possible scripted form of this sequence is sketched below.
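
        A minimal sketch of such a sequence, assuming root SSH access to the storage hosts (the host names follow the example inventory above; the service IDs are placeholders):

        # Install the storage service on each new node in turn, each with its own service ID.
        declare -A SERVICE_IDS=(
          ["kuma-storage-cluster1-server8.example.com"]="<service ID for server8>"
          ["kuma-storage-cluster1-server9.example.com"]="<service ID for server9>"
          ["kuma-storage-cluster1-server10.example.com"]="<service ID for server10>"
          ["kuma-storage-cluster1-server11.example.com"]="<service ID for server11>"
        )
        for host in "${!SERVICE_IDS[@]}"; do
          ssh root@"$host" "/opt/kaspersky/kuma/kuma storage --core https://<KUMA Core server FQDN>:7210 --id '${SERVICE_IDS[$host]}' --install"
        done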
    9. To apply changes to a running cluster, in the KUMA web interface, under Resources → Active services, select the check boxes next to all storage services in the cluster that you are expanding and click Update configuration. Changes are applied without stopping services.
    10. Specify the added servers in the distributed.inventory.yml inventory file so that it has up-to-date information in case of a KUMA update.

    Servers are successfully added to a storage cluster.

  • Adding another storage cluster.

    The following instructions describe adding an extra storage cluster to an existing infrastructure. You can use these instructions as an example and adapt them to suit your needs.

    To add a storage cluster:

    1. Ensure that the target machines meet hardware, software, and installation requirements.
    2. On the control machine, go to the directory with the extracted KUMA installer by running the following command:

      cd kuma-ansible-installer

    3. Create an inventory file named expand.inventory.yml by copying the expand.inventory.yml.template file:

      cp expand.inventory.yml.template expand.inventory.yml

    4. Edit the settings in the expand.inventory.yml inventory file and specify the servers that you want to add in the kuma_storage section. In the following example, the kuma_storage section specifies servers for installing three dedicated keepers and two shards, each containing two replicas. In the expand.inventory.yml inventory file, you only need to specify the FQDNs; you will assign the roles of keepers, shards, and replicas later in the KUMA web interface by following these instructions. You can adapt this example to suit your needs.

      Example expand.inventory.yml inventory file for adding a storage cluster

      kuma:
        vars:
          ansible_connection: ssh
          ansible_user: root
        children:
          kuma_collector:
          kuma_correlator:
          kuma_storage:
            hosts:
              kuma-storage-cluster2-server1.example.com
              kuma-storage-cluster2-server2.example.com
              kuma-storage-cluster2-server3.example.com
              kuma-storage-cluster2-server4.example.com
              kuma-storage-cluster2-server5.example.com
              kuma-storage-cluster2-server6.example.com
              kuma-storage-cluster2-server7.example.com

    5. On the control machine, run the following command as root from the directory with the unpacked installer:

      ./expand.sh expand.inventory.yml

      This command creates files for creating and installing the storage on each target machine specified in the expand.inventory.yml inventory file.

    6. Create and install the storage. For each storage cluster, you must create a separate storage, for example, three storages for three storage clusters. A storage consists of a client part and a server part, therefore creating a storage involves two steps.
      1. Creating the client part of the storage, which includes a resource set and the storage service.
        1. To create a resource set for a storage, in the KUMA web interface, under Resources → Storages, click Add storage and edit the settings. In the ClickHouse cluster nodes section, specify roles for each server that you are adding: keeper, shard, replica. For more details, see Creating a resource set for a storage.

          The created resource set for the storage is displayed in the Resources → Storages section. Now you can create storage services for each ClickHouse cluster node.

        2. To create a storage service, in the KUMA web interface, in the Resources → Active services section, click Add service.

          This opens the Choose a service window; in that window, select the resource set that you created for the storage at the previous step and click Create service. Do the same for each node of the ClickHouse cluster.

          As a result, the number of created services must be the same as the number of nodes in the ClickHouse cluster, for example, fifty services for fifty nodes. The created storage services are displayed in the KUMA web interface in the Resources → Active services section. Now you need to install storage services on each node of the ClickHouse cluster by using the service ID.

      2. Creating the server part of the storage.
      1. To create the server part of the storage on a target machine, first copy the service ID: in the KUMA web interface, in the Resources → Active services section, select a storage service and click Copy ID.

        The service ID is copied to the clipboard; you will need it for the service installation command.

      2. Compose and run the following command on the target machine:

        sudo /opt/kaspersky/kuma/kuma <storage> --core https://<KUMA Core server FQDN>:<port used by KUMA Core for internal communication (port 7210 by default)> --id <service ID copied from the KUMA web interface> --install

        The storage service is installed on the target machine. You can check the status of the service in the web interface under Resources → Active services.

      3. Run the storage service installation command on each target machine listed in the kuma_storage section of the expand.inventory.yml inventory file, one machine at a time. On each machine, specify that machine's service ID (unique within the cluster) in the installation command.
      4. Dedicated keepers are automatically started immediately after installation and are displayed in the Resources → Active services section with the green status. Services on other storage nodes may not start until services are installed for all nodes in that cluster. Up to that point, services can be displayed with the red status. This is normal behavior when creating a new storage cluster or adding nodes to an existing storage cluster. As soon as the service installation command is run on all nodes of the cluster, all services get the green status.
    7. Specify the added servers in the distributed.inventory.yml inventory file so that it has up-to-date information in case of a KUMA update.

    The extra storage cluster is successfully added.

  • Removing servers from a distributed installation.

    To remove a server from a distributed installation:

    1. Remove all services from the server that you want to remove from the distributed installation.
      1. Remove the server part of the service. Copy the service ID in the KUMA web interface and run the following command on the target machine:

        sudo /opt/kaspersky/kuma/kuma <collector/correlator/storage> --id <service ID copied from the KUMA web interface> --uninstall

      2. Remove the client part of the service in the KUMA web interface: in the Resources → Active services section, select the service and click Delete.

        The service is removed.

    2. Repeat step 1 for each server that you want to remove from the infrastructure.
    3. Remove the servers from the relevant sections of the distributed.inventory.yml inventory file to make sure the inventory file has up-to-date information in case you need to upgrade KUMA.

    Servers are removed from the distributed installation.

  • Removing a storage cluster from a distributed installation.

    To remove one or more storage clusters from a distributed installation:

    1. Remove the storage service on each cluster server that you want to remove from the distributed installation.
      1. Remove the server part of the storage service. Copy the service ID in the KUMA web interface and run the following command on the target machine:

        sudo /opt/kaspersky/kuma/kuma <storage> --id <service ID> --uninstall

        Repeat for each server.

      2. Remove the client part of the service in the KUMA web interface: in the Resources → Active services section, select the service and click Delete.

        The service is removed.

    2. Remove the servers from the kuma_storage section of the distributed.inventory.yml inventory file to make sure the inventory file has up-to-date information in case you need to upgrade KUMA or modify its configuration.

    The cluster is removed from the distributed installation.

  • Migrating the KUMA Core to a new Kubernetes cluster.

    To migrate the KUMA Core to a new Kubernetes cluster:

    1. Prepare the k0s.inventory.yml inventory file.

      The kuma_core, kuma_collector, kuma_correlator, and kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 and then to version 3.2, or when a new installation was performed. In the inventory file, set deploy_to_k8s: true and need_transfer: true, and set deploy_example_services: false.

    2. Follow the steps for distributed installation using your prepared k0s.inventory.yml inventory file.

    Migrating the KUMA Core to a new Kubernetes cluster

    When the installer is started with an inventory file, the installer looks for an installed KUMA Core on all hosts where you plan to deploy worker nodes of the cluster. If a Core is found, it is moved from its host to the newly created Kubernetes cluster.

    Resolving the KUMA Core migration error

    Migration of the KUMA Core from a host to a new Kubernetes cluster may be aborted due to a timeout at the Deploy Core transfer job step. In this case, the following error message is recorded in the log of core-transfer migration tasks:

    cp: can't stat '/mnt/kuma-source/core/.lic': No such file or directory

    To prevent this error, before you start migrating the KUMA Core:

    1. Go to the directory with the extracted installer and open the roles/k0s_prepare/templates/core-transfer-job.yaml.j2 file for editing.
    2. In the core-transfer-job.yaml.j2 file, find the following lines:

      cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&

      cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&

    3. Edit these lines as follows, making sure you keep the indentation (number of space characters):

      cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&

      cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
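
      If you prefer to make this change from the command line, a sed command like the following should produce the same edit while preserving the indentation (a sketch, assuming GNU sed; the -i.bak option keeps a backup copy of the original file):

      # Run from the directory with the extracted installer.
      sed -i.bak \
        -e 's|/mnt/kuma-source/core/\.lic|/mnt/kuma-source/core/{{ core_uid }}/.lic|' \
        -e 's|/mnt/kuma-source/core/\.tenantsEPS|/mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS|' \
        roles/k0s_prepare/templates/core-transfer-job.yaml.j2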

    4. Save the changes to the file.

    You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. Migrating the KUMA Core from a host to a new Kubernetes cluster will succeed.

    If you started migrating the KUMA Core from a host to a new Kubernetes cluster and the migration failed with an error, follow the steps below to fix the error.

    To fix the error after attempting to migrate the KUMA Core from a host to a new Kubernetes cluster:

    1. On any controller of the cluster, delete the Ingress object by running the following command:

      sudo k0s kubectl delete daemonset/ingress -n ingress

    2. Check if a migration job exists in the cluster:

      sudo k0s kubectl get jobs -n kuma

    3. If a migration job exists, delete it:

      sudo k0s kubectl delete job core-transfer -n kuma

    4. Go to the console of a host from the kuma_core group.
    5. Start the KUMA Core services by running the following commands:

      sudo systemctl start kuma-mongodb

      sudo systemctl start kuma-core-00000000-0000-0000-0000-000000000000

    6. Make sure that the kuma-core-00000000-0000-0000-0000-000000000000 service has been successfully started:

      sudo systemctl status kuma-core-00000000-0000-0000-0000-000000000000

    7. Make sure that the KUMA web interface is accessible at the FQDN of the host from the kuma_core group.

      Other hosts do not need to be running.
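
      One way to check this from the command line, assuming the default KUMA web interface port 7220 (the -k option skips certificate validation for a self-signed certificate):

      curl -k https://<KUMA Core host FQDN>:7220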

    8. Go to the directory with the extracted installer and open the roles/k0s_prepare/templates/core-transfer-job.yaml.j2 file for editing.
    9. In the core-transfer-job.yaml.j2 file, find the following lines:

      cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&

      cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&

    10. Edit these lines as follows, making sure you keep the indentation (number of space characters):

      cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&

      cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&

    11. Save the changes to the file.

    You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. The migration of the KUMA Core from a host to a new Kubernetes cluster will succeed.

    If the KUMA Core is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually recreated with the new Core in the KUMA web interface.

    For collectors, correlators and storages from the inventory file, certificates for communication with the Core inside the cluster will be reissued. This does not change the URL of the Core for components.

    On the Core host, the installer does the following:

    • Removes the following systemd services from the host: kuma-core, kuma-mongodb, kuma-victoria-metrics, kuma-vmalert, and kuma-grafana.
    • Deletes the internal certificate of the Core.
    • Deletes the certificate files of all other components and deletes their records from MongoDB.
    • Deletes the following directories:
      • /opt/kaspersky/kuma/core/bin
      • /opt/kaspersky/kuma/core/certificates
      • /opt/kaspersky/kuma/core/log
      • /opt/kaspersky/kuma/core/logs
      • /opt/kaspersky/kuma/grafana/bin
      • /opt/kaspersky/kuma/mongodb/bin
      • /opt/kaspersky/kuma/mongodb/log
      • /opt/kaspersky/kuma/victoria-metrics/bin
    • Migrates data from the Core and its dependencies to a network drive within the Kubernetes cluster.
    • On the Core host, it moves the following directories:
      • /opt/kaspersky/kuma/core → /opt/kaspersky/kuma/core.moved
      • /opt/kaspersky/kuma/grafana → /opt/kaspersky/kuma/grafana.moved
      • /opt/kaspersky/kuma/mongodb → /opt/kaspersky/kuma/mongodb.moved
      • /opt/kaspersky/kuma/victoria-metrics → /opt/kaspersky/kuma/victoria-metrics.moved

    After you have verified that the Core was correctly migrated to the cluster, you can delete these directories.

    If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
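
    For example, the records of the core-transfer job can be viewed on a cluster controller with a command like the following:

      sudo k0s kubectl logs job/core-transfer -n kuma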

    If you need to perform migration again, you must restore the original names of the /opt/kaspersky/kuma/*.moved directories.
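
    For example, the original directory names can be restored on the Core host with commands like these:

      sudo mv /opt/kaspersky/kuma/core.moved /opt/kaspersky/kuma/core
      sudo mv /opt/kaspersky/kuma/grafana.moved /opt/kaspersky/kuma/grafana
      sudo mv /opt/kaspersky/kuma/mongodb.moved /opt/kaspersky/kuma/mongodb
      sudo mv /opt/kaspersky/kuma/victoria-metrics.moved /opt/kaspersky/kuma/victoria-metrics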

    If the /etc/hosts file on the Core host contained lines that were not related to addresses in the 127.X.X.X range, the contents of the /etc/hosts file from the Core host are entered into the coredns ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.
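
    To review what was written to the coredns ConfigMap, you can inspect it on a cluster controller; a sketch, assuming coredns resides in the kube-system namespace as in a default k0s deployment:

      sudo k0s kubectl get configmap coredns -n kube-system -o yaml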
