Kaspersky Unified Monitoring and Analysis Platform

Additional requirements for deploying KUMA Core in Kubernetes

If you plan to protect KUMA's network infrastructure using Kaspersky Endpoint Security for Linux, first install KUMA in the Kubernetes cluster and only then deploy Kaspersky Endpoint Security for Linux. When updating or removing KUMA, you must first stop Kaspersky Endpoint Security for Linux using the following command:

systemctl stop kesl
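
To confirm that the service has stopped before you proceed, and to start it again after the KUMA update or removal is complete, you can use the standard systemctl commands (a minimal sketch that reuses the kesl service name from the command above):

systemctl is-active kesl

systemctl start kesl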

When you install KUMA in a high availability configuration, the following requirements must be met:

  • General application installation requirements.
  • The hosts that you plan to use for Kubernetes cluster nodes must not use IP addresses from the following Kubernetes ranges:
    • serviceCIDR: 10.96.0.0/12
    • podCIDR: 10.244.0.0/16

    Traffic to proxy servers must be excluded for the IP addresses from these ranges (one way of doing this is shown in the example after this list).

  • Each host must have a unique ID (/etc/machine-id). A way to check this is shown after this list.
  • The firewalld or ufw firewall management tool must be installed and enabled on the hosts so that rules can be added to iptables.
  • The nginx load balancer must be installed and configured (for details, please refer to the nginx load balancer documentation). You can install the nginx load balancer using one of the following commands:
    • sudo yum install nginx (for Oracle Linux)
    • sudo apt install nginx-full (for Astra Linux)
    • sudo apt install nginx libnginx-mod-stream (for Ubuntu)
    • sudo yum install nginx nginx-all-modules (for RED OS)

    If you want the nginx load balancer to be configured automatically during the KUMA installation, install the nginx load balancer and allow SSH access to it in the same way as for the Kubernetes cluster hosts.

    Example of an automatically created nginx configuration

    The installer creates the /etc/nginx/kuma_nginx_lb.conf configuration file. An example of the contents of this file is provided below. The upstream sections are generated dynamically and contain the IP addresses of the Kubernetes cluster controllers (in the example, 10.0.0.2-4 in the upstream kubeAPI_backend, upstream konnectivity_backend, and upstream controllerJoinAPI_backend sections) and the IP addresses of the worker nodes (in the example, 10.0.1.2-3) for which the inventory file contains the "kaspersky.com/kuma-ingress=true" value for the extra_args variable.

    The "include /etc/nginx/kuma_nginx_lb.conf;" line must be added to the end of the /etc/nginx/nginx.conf file to apply the generated configuration file. If you have a large number of active services and users, you may need to increase the limit of open files in the nginx.conf settings.

    Configuration file example:

    # Ansible managed
    #
    # LB KUMA cluster
    #
    stream {
        server {
            listen          6443;
            proxy_pass      kubeAPI_backend;
        }
        server {
            listen          8132;
            proxy_pass      konnectivity_backend;
        }
        server {
            listen          9443;
            proxy_pass      controllerJoinAPI_backend;
        }
        server {
            listen          7209;
            proxy_pass      kuma-core-hierarchy_backend;
            proxy_timeout   86400s;
        }
        server {
            listen          7210;
            proxy_pass      kuma-core-services_backend;
            proxy_timeout   86400s;
        }
        server {
            listen          7220;
            proxy_pass      kuma-core-ui_backend;
            proxy_timeout   86400s;
        }
        server {
            listen          7222;
            proxy_pass      kuma-core-cybertrace_backend;
            proxy_timeout   86400s;
        }
        server {
            listen          7223;
            proxy_pass      kuma-core-rest_backend;
            proxy_timeout   86400s;
        }
        upstream kubeAPI_backend {
            server 10.0.0.2:6443;
            server 10.0.0.3:6443;
            server 10.0.0.4:6443;
        }
        upstream konnectivity_backend {
            server 10.0.0.2:8132;
            server 10.0.0.3:8132;
            server 10.0.0.4:8132;
        }
        upstream controllerJoinAPI_backend {
            server 10.0.0.2:9443;
            server 10.0.0.3:9443;
            server 10.0.0.4:9443;
        }
        upstream kuma-core-hierarchy_backend {
            server 10.0.1.2:7209;
            server 10.0.1.3:7209;
        }
        upstream kuma-core-services_backend {
            server 10.0.1.2:7210;
            server 10.0.1.3:7210;
        }
        upstream kuma-core-ui_backend {
            server 10.0.1.2:7220;
            server 10.0.1.3:7220;
        }
        upstream kuma-core-cybertrace_backend {
            server 10.0.1.2:7222;
            server 10.0.1.3:7222;
        }
        upstream kuma-core-rest_backend {
            server 10.0.1.2:7223;
            server 10.0.1.3:7223;
        }
    }
    worker_rlimit_nofile 1000000;
    events {
        worker_connections 20000;
    }
    # worker_rlimit_nofile is the limit on the number of open files (RLIMIT_NOFILE) for workers. This is used to raise the limit without restarting the main process.
    # worker_connections is the maximum number of connections that a worker can open simultaneously.

  • An access key from the device on which KUMA is installed must be added to the nginx load balancer server.
  • On the nginx load balancer server, the SELinux module must be disabled in the operating system (an example is shown after this list).
  • The tar and systemctl packages must be installed on the hosts.
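
If the hosts access external resources through a proxy server, one common way to exclude the ranges listed above from proxying is to add them to the no_proxy environment variable. The fragment below is only a sketch: the file location and the proxy address are assumptions that depend on how the proxy is configured in your environment, and not all applications accept CIDR notation in no_proxy.

# /etc/environment (assumed location of the proxy settings)
https_proxy=http://proxy.example.com:3128
no_proxy=localhost,127.0.0.1,10.96.0.0/12,10.244.0.0/16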
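
To verify that the machine ID is unique, compare the value of /etc/machine-id across the hosts. If hosts were cloned from the same template and the IDs coincide, the ID can be regenerated; the commands below are a sketch based on standard systemd tooling rather than a KUMA-specific procedure.

cat /etc/machine-id
# If the value is duplicated on another host, regenerate it:
sudo rm /etc/machine-id
sudo systemd-machine-id-setup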
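
After the "include /etc/nginx/kuma_nginx_lb.conf;" line is added to /etc/nginx/nginx.conf, whether by the installer or manually, the configuration can be validated and applied with the standard nginx commands; this is a generic sketch and not part of the KUMA installer.

sudo nginx -t
sudo systemctl reload nginx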
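
To check and disable SELinux on the nginx load balancer server, you can use the following commands. They are a generic sketch for RPM-based distributions such as Oracle Linux or RED OS; the change in /etc/selinux/config fully takes effect only after a reboot.

getenforce
# Disable enforcement until the next reboot:
sudo setenforce 0
# Make the setting persistent (applied after a reboot):
sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config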

During KUMA installation, the hosts are automatically checked to see if they meet the following hardware requirements:

  • CPU cores (threads): 12 or more
  • RAM: 22,528 MB or more
  • Free disk space in the /opt partition: 1000 GB or more.
  • For an installation from scratch, the /var/lib partition must have at least 32 GB of free space. If a cluster has already been installed on this node, the required amount of free space is reduced by the size of the /var/lib/k0s directory.

If these conditions are not satisfied, the installation is aborted. For a demo installation, you can disable the check of these conditions by setting low_resources: true in the inventory file, as shown in the example below.
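
For example, the check can be disabled by adding the parameter to the vars section of the inventory file. The fragment below is a sketch; the surrounding structure depends on the inventory file template that you use.

all:
  vars:
    low_resources: true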

Additional requirements when installing on the Astra Linux or Ubuntu operating systems

  • Installing KUMA in a high availability configuration is supported for Astra Linux Special Edition RUSB.10015-01 (2022-1011SE17MD, update 1.7.2.UU.1). Kernel version 5.15.0.33 or later is required.
  • The following packages must be installed on the machines intended for deploying a Kubernetes cluster:
    • open-iscsi
    • wireguard
    • wireguard-tools

    To install the packages, run the following command:

    sudo apt install open-iscsi wireguard wireguard-tools

Additional requirements when installing on the Oracle Linux, RED OS, or Red Hat Enterprise Linux operating systems

The following packages must be installed on the machines intended for deploying the Kubernetes cluster:

  • iscsi-initiator-utils
  • wireguard-tools

Before installing the packages on Oracle Linux, you must add the EPEL repository as a source of packages using one of the following commands:

  • sudo yum install oracle-epel-release-el8 (for Oracle Linux 8)
  • sudo yum install oracle-epel-release-el9 (for Oracle Linux 9)

To install the packages, run the following command:

sudo yum install iscsi-initiator-utils wireguard-tools
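
On any of the supported distributions, after the packages are installed you can optionally check that the WireGuard kernel module is available to the cluster hosts; this is a generic verification step, not a command from the KUMA installer.

sudo modprobe wireguard
lsmod | grep wireguard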
