Kaspersky Next XDR Expert
[Topic 273808]

Multi-node deployment: Preparing the administrator and target hosts

Preparing for a multi-node deployment includes configuring the administrator and target hosts. After preparing hosts and specifying the configuration file, you will be able to deploy Kaspersky Next XDR Expert on target hosts by using KDT.

Preparing the administrator host

You first need to prepare a device that will act as the administrator host, from which you will run KDT. This host may or may not be included in the Kubernetes cluster that KDT creates during the deployment. If the administrator host is not included in the cluster, it is used only to deploy and manage the Kubernetes cluster and Kaspersky Next XDR Expert. If the administrator host is included in the cluster, it also acts as a target host on which the Kaspersky Next XDR Expert components operate.

To prepare the administrator host:

  1. Make sure that the hardware and software on the administrator host meet the requirements for KDT.
  2. Allocate at least 10 GB of free space in the temporary files directory (/tmp) for KDT. If you do not have enough free space in this directory, run the following command to specify the path to another directory:

    export TMPDIR=<new_directory>/tmp
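Before running KDT, you can script the free-space check from step 2 as follows (a minimal sketch; the 10 GB threshold comes from the requirement above, and the check honors a redefined TMPDIR):

```shell
# Report the free space (in KB) in the directory used for temporary files.
# KDT requires at least 10 GB, that is, 10485760 KB.
TMP_DIR="${TMPDIR:-/tmp}"
AVAIL_KB=$(df -Pk "$TMP_DIR" | awk 'NR==2 {print $4}')
echo "Available in $TMP_DIR: ${AVAIL_KB} KB"
if [ "$AVAIL_KB" -lt 10485760 ]; then
    echo "Less than 10 GB free: set TMPDIR to a directory with more space."
fi
```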

  3. Install the package for Docker version 23 or later, and then perform the post-installation steps to configure the administrator host for proper functioning with Docker.

    Do not install unofficial distributions of Docker packages from the operating system maintainer repositories.

  4. For the administrator host that will be included in the cluster, perform additional preparatory steps.


    1. Since the device will act as both the administrator and target host, make sure that it meets the requirements for the single-node deployment (the demonstration and single-node deployments use a similar deployment scheme).
    2. Make sure that the cgroup v2 technology is supported on the administrator host.

      The cgroup v2 technology is supported for the Linux kernel version 2.6.24 or later.

    3. Install the uidmap package on the administrator host.

      Check that the /etc/subgid and /etc/subuid files contain the user account under which KDT will be launched. To do this, you can run the following command:

      getsubids USER

      If this command does not return a result, you must add the user account to the /etc/subgid and /etc/subuid files manually in the following format:

      <username>:<min_subid>:<range_length>

      where

      • <username>—Username of the account under which KDT will be launched.
      • <min_subid>—Minimum subuid value.
      • <range_length>—Number of subuids allocated for the user <username>.
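The subordinate ID entry described above can be illustrated as follows. This is a sketch only: the user name "kdtuser", the starting sub-ID 100000, and the range length 65536 are hypothetical values, and the lines are written to local example files rather than to /etc/subuid and /etc/subgid (which require root privileges to edit):

```shell
# Build an entry in the <username>:<min_subid>:<range_length> format
# for a hypothetical user "kdtuser".
USERNAME="kdtuser"
ENTRY="${USERNAME}:100000:65536"
# On a real host, append this line to /etc/subuid and /etc/subgid as root.
echo "$ENTRY" > ./subuid.example
echo "$ENTRY" > ./subgid.example
cat ./subuid.example
```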

Preparing the target hosts

The target hosts are physical or virtual machines that are used to deploy Kaspersky Next XDR Expert and included in the Kubernetes cluster. Kaspersky Next XDR Expert components work on these hosts.

One of the target hosts can be used as the administrator host. In this case, you must prepare this host as the administrator host, as described in the previous procedure, and then prepare it as a target host.

A minimum cluster configuration for the multi-node deployment includes four nodes:

  • One primary node

    The primary node is intended for managing the cluster, storing metadata, and distributing the workload.

  • Three worker nodes

    The worker nodes are intended for performing the workload of the Kaspersky Next XDR Expert components.

    For optimal workload distribution between nodes, it is recommended to use nodes with approximately the same performance.

    You can install the DBMS inside the Kubernetes cluster when you perform the demonstration deployment of Kaspersky Next XDR Expert. In this case, allocate the additional worker node for the DBMS installation. KDT will install the DBMS during the Kaspersky Next XDR Expert deployment.

    For the multi-node deployment, we recommend installing a DBMS on a separate server outside the cluster. After you deploy Kaspersky Next XDR Expert, you cannot switch from a DBMS installed inside the cluster to a DBMS installed on a separate server. To do so, you have to remove all Kaspersky Next XDR Expert components, and then install Kaspersky Next XDR Expert again. In this case, the data will be lost.

To prepare the target hosts:

  1. Make sure that the hardware and software on the target hosts meet the requirements for the multi-node deployment, and the target hosts are located in the same broadcast domain.

    For proper functioning of Kaspersky Next XDR Expert, the Linux kernel version must be 5.15.0.107 or later on target hosts running Ubuntu family operating systems.

    Docker must not be installed on the target hosts, except on the target host that will be used as the administrator host. KDT will install all necessary software and dependencies during the deployment.

  2. On each target host, install the sudo package, if this package is not already installed. For Debian family operating systems, install the UFW package on the target hosts.
  3. On each target host, configure the /etc/environment file. If your organization's infrastructure uses a proxy server to access the internet, connect the target hosts to the internet.
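For step 3, proxy settings in /etc/environment typically take the following form. This is a sketch: the proxy address proxy.example.com:3128 is a placeholder for your organization's proxy server, and the settings are written to a local example file here rather than to /etc/environment itself:

```shell
# Example proxy settings in /etc/environment format.
# proxy.example.com:3128 is a placeholder; replace it with your proxy server.
cat > ./environment.example <<'EOF'
http_proxy=http://proxy.example.com:3128
https_proxy=http://proxy.example.com:3128
no_proxy=localhost,127.0.0.1
EOF
cat ./environment.example
```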
  4. On the primary node with the UFW configuration, allow IP forwarding. In the /etc/default/ufw file, set DEFAULT_FORWARD_POLICY to ACCEPT.
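The change in step 4 can be made with a one-line sed edit. The sketch below operates on a local copy of the file so it can run anywhere; on the primary node, apply the same sed command to /etc/default/ufw as root, and then reload UFW:

```shell
# Demonstrate the DEFAULT_FORWARD_POLICY change on a local copy
# of /etc/default/ufw (the real file defaults to "DROP").
printf 'DEFAULT_FORWARD_POLICY="DROP"\n' > ./ufw.example
sed -i 's/^DEFAULT_FORWARD_POLICY=.*/DEFAULT_FORWARD_POLICY="ACCEPT"/' ./ufw.example
cat ./ufw.example
```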
  5. Provide access to the package repository. This repository stores the following packages required for Kaspersky Next XDR Expert:
    • nfs-common
    • tar
    • iscsi-package
    • wireguard
    • wireguard-tools

    KDT will try to install these packages during the deployment from the package repository. You can also install these packages manually.

  6. For the primary node, ensure that the curl package is installed.
  7. For the worker nodes, ensure that the libnfs package version 12 or later is installed.

    The curl and libnfs packages are not installed from the package repository by KDT during the deployment. You must install these packages manually if they are not already installed.
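A quick presence check for these packages might look like the following sketch. The libnfs query assumes a Debian-family package manager (dpkg); adjust it for other distributions:

```shell
# Check whether curl is available on this host.
if command -v curl >/dev/null 2>&1; then
    echo "curl: OK"
else
    echo "curl: missing"
fi
# libnfs is a library, so query the package manager (Debian-family shown).
if command -v dpkg >/dev/null 2>&1; then
    dpkg -l 2>/dev/null | grep -i libnfs || echo "libnfs: not found via dpkg"
fi
```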

  8. Reserve static IP addresses for the target hosts, for the Kubernetes cluster gateway, and for the DBMS host (if the DBMS is installed inside the cluster).

    The Kubernetes cluster gateway is intended for connecting to the Kaspersky Next XDR Expert components installed inside the Kubernetes cluster. The gateway IP address is specified in the configuration file.

    For standard usage of the solution, when you install the DBMS on a separate server, the gateway IP address is an IP address in CIDR notation that contains the subnet mask /32 (for example, 192.168.0.0/32).

    For demonstration purposes, when you install the DBMS inside the Kubernetes cluster, the gateway IP address is an IP range (for example, 192.168.0.1—192.168.0.2).

    Make sure that the target hosts, the Kubernetes cluster gateway, and the DBMS host are located in the same broadcast domain.

  9. On your DNS server, register the service FQDNs to connect to the Kaspersky Next XDR Expert services.

    By default, the Kaspersky Next XDR Expert services are available at the following addresses:

    • <console_host>.<smp_domain>—Access to the OSMP Console interface.
    • <admsrv_host>.<smp_domain>—Interaction with Administration Server.
    • <kuma_host>.<smp_domain>—Access to the KUMA Console interface.
    • <api_host>.<smp_domain>—Access to the Kaspersky Next XDR Expert API.
    • <psql_host>.<smp_domain>—Interaction with the DBMS (PostgreSQL).

      Where <console_host>, <admsrv_host>, <kuma_host>, <api_host>, and <psql_host> are service host names, and <smp_domain> is the service domain name. These parameters are parts of the service FQDNs, which you can specify in the configuration file. If you do not specify custom service host names, the default values are used: console_host—"console", admsrv_host—"admsrv", kuma_host—"kuma", api_host—"api", psql_host—"psql".

      Register the <psql_host>.<smp_domain> service FQDN if you installed the DBMS inside the Kubernetes cluster on the DBMS node and you need to connect to the DBMS.

    Depending on where you want to install the DBMS, the listed service FQDNs must be resolved to the IP address of the Kubernetes cluster as follows:

    • DBMS on a separate server (standard usage)

      In this case, the gateway IP address is the address of the Kaspersky Next XDR Expert services (excluding the DBMS IP address). For example, if the gateway IP address is 192.168.0.0/32, the service FQDNs must be resolved as follows:

      • <console_host>.<smp_domain>—192.168.0.0/32
      • <admsrv_host>.<smp_domain>—192.168.0.0/32
      • <kuma_host>.<smp_domain>—192.168.0.0/32
      • <api_host>.<smp_domain>—192.168.0.0/32
    • DBMS inside the Kubernetes cluster (demonstration deployment)

      In this case, the gateway IP address is an IP range. The first IP address of the range is the address of the Kaspersky Next XDR Expert services (excluding the DBMS IP address), and the second IP address of the range is the IP address of the DBMS. For example, if the gateway IP range is 192.168.0.1—192.168.0.2, the service FQDNs must be resolved as follows:

      • <console_host>.<smp_domain>—192.168.0.1
      • <admsrv_host>.<smp_domain>—192.168.0.1
      • <kuma_host>.<smp_domain>—192.168.0.1
      • <api_host>.<smp_domain>—192.168.0.1
      • <psql_host>.<smp_domain>—192.168.0.2
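The demonstration-case resolution above can be illustrated in hosts-file style. This is a sketch only: the domain xdr.example.com and the addresses are placeholders taken from the example range, and the records are written to a local example file; in production, create the corresponding A records on your DNS server:

```shell
# Illustrative DNS records for the demonstration case (DBMS inside
# the cluster) with the example gateway IP range 192.168.0.1-192.168.0.2.
cat > ./dns.example <<'EOF'
192.168.0.1 console.xdr.example.com
192.168.0.1 admsrv.xdr.example.com
192.168.0.1 kuma.xdr.example.com
192.168.0.1 api.xdr.example.com
192.168.0.2 psql.xdr.example.com
EOF
cat ./dns.example
```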
  10. On the target hosts, create the accounts that will be used for the Kaspersky Next XDR Expert deployment.

    These accounts are used for the SSH connection and must be able to elevate privileges (sudo) without entering a password. To do this, add the created user accounts to the /etc/sudoers file.
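A passwordless sudo rule for such an account might look like the following. This is a sketch: the account name "kdtuser" is hypothetical, and the rule is written to a local example file; on the target hosts, place the rule in a file under /etc/sudoers.d/ and validate it with visudo -c before use:

```shell
# Illustrative passwordless sudo rule for a hypothetical deployment
# account "kdtuser", written to a local example file.
echo 'kdtuser ALL=(ALL) NOPASSWD: ALL' > ./sudoers.example
cat ./sudoers.example
```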

  11. Configure the SSH connection between the administrator and target hosts:
    1. On the administrator host, generate SSH keys by using the ssh-keygen utility without a passphrase.
    2. Copy the public key to every target host (for example, to the /home/<user_name>/.ssh directory) by using the ssh-copy-id utility.

      If you use a target host as the administrator host, you must copy the public key to it, too.
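Steps 11.1 and 11.2 can be sketched as follows. The key file location and the "user@target-host" address are placeholders (use the ~/.ssh directory on the real administrator host), and the ssh-copy-id call is shown as a comment because it requires a reachable target host:

```shell
# Generate an SSH key pair without a passphrase (-N "") into the
# current directory; on the administrator host, use ~/.ssh instead.
ssh-keygen -t ed25519 -N "" -f ./kdt_deploy_key -q
ls ./kdt_deploy_key ./kdt_deploy_key.pub
# Then copy the public key to each target host, for example:
# ssh-copy-id -i ./kdt_deploy_key.pub user@target-host
```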

  12. For proper functioning of the Kaspersky Next XDR Expert components, provide network access between the target hosts and open the required ports on the firewall of the administrator and target hosts, if necessary.
  13. Configure time synchronization over Network Time Protocol (NTP) on the administrator and target hosts.
  14. If necessary, prepare custom certificates for working with Kaspersky Next XDR Expert public services.

    You can use either one intermediate certificate issued from the organization's root certificate, or leaf certificates for each of the services. The prepared custom certificates will be used instead of self-signed certificates.

[Topic 249228]

Single node deployment: Preparing the administrator and target hosts

Preparing for a single-node deployment includes configuring the administrator and target hosts. In the single-node configuration, the Kubernetes cluster and Kaspersky Next XDR Expert components are installed on one target host. After preparing the target host and specifying the configuration file, you will be able to deploy Kaspersky Next XDR Expert on the target host by using KDT.

Preparing the administrator host

You first need to prepare a device that will act as the administrator host, from which you will run KDT. This host may or may not be included in the Kubernetes cluster that KDT creates during the deployment. If the administrator host is not included in the cluster, it is used only to deploy and manage the Kubernetes cluster and Kaspersky Next XDR Expert. If the administrator host is included in the cluster, it also acts as a target host on which the Kaspersky Next XDR Expert components operate. In this case, only one host is used for the deployment and operation of the solution.

To prepare the administrator host:

  1. Make sure that the hardware and software on the administrator host meet the requirements for KDT.
  2. Allocate at least 10 GB of free space in the temporary files directory (/tmp) for KDT. If you do not have enough free space in this directory, run the following command to specify the path to another directory:

    export TMPDIR=<new_directory>/tmp

  3. Install the package for Docker version 23 or later, and then perform the post-installation steps to configure the administrator host for proper functioning with Docker.

    Do not install unofficial distributions of Docker packages from the operating system maintainer repositories.

  4. For the administrator host that will be included in the cluster, perform additional preparatory steps.


    1. Since the device will act as both the administrator and target host, make sure that it meets the requirements for the single-node deployment (the demonstration and single-node deployments use a similar deployment scheme).
    2. Make sure that the cgroup v2 technology is supported on the administrator host.

      The cgroup v2 technology is supported for the Linux kernel version 2.6.24 or later.

    3. Install the uidmap package on the administrator host.

      Check that the /etc/subgid and /etc/subuid files contain the user account under which KDT will be launched. To do this, you can run the following command:

      getsubids USER

      If this command does not return a result, you must add the user account to the /etc/subgid and /etc/subuid files manually in the following format:

      <username>:<min_subid>:<range_length>

      where

      • <username>—Username of the account under which KDT will be launched.
      • <min_subid>—Minimum subuid value.
      • <range_length>—Number of subuids allocated for the user <username>.

Preparing the target host

The target host is a physical or virtual machine that is used to deploy Kaspersky Next XDR Expert and is included in the Kubernetes cluster. The target host manages the Kubernetes cluster, stores metadata, and runs the Kaspersky Next XDR Expert components. A minimum cluster configuration for the single-node deployment includes one target host, which acts as both the primary and worker node. The Kubernetes cluster and the Kaspersky Next XDR Expert components are installed on this primary worker node.

For standard usage, you have to install the DBMS manually on the target host before the deployment. In this case, the DBMS will be installed on the target host, but not included in the Kubernetes cluster. For demonstration purposes, you can install the DBMS inside the cluster by using KDT during the deployment.

If you want to run the Kaspersky Next XDR Expert deployment from the target host, you must prepare this host as the administrator host, as described in the previous procedure, and then prepare it as a target host.

To prepare the target host:

  1. Make sure that the hardware and software on the target host meet the requirements for the single-node deployment.

    For proper functioning of Kaspersky Next XDR Expert, the Linux kernel version must be 5.15.0.107 or later on target hosts running Ubuntu family operating systems.

    Do not install Docker on the target host unless the target host will be used as the administrator host. KDT will install all necessary software and dependencies during the deployment.

  2. Install the sudo package, if this package is not already installed. For Debian family operating systems, install the UFW package.
  3. Configure the /etc/environment file. If your organization's infrastructure uses a proxy server to access the internet, you also need to connect the target host to the internet.
  4. If the primary worker node has the UFW configuration, allow IP forwarding. In the /etc/default/ufw file, set DEFAULT_FORWARD_POLICY to ACCEPT.
  5. Provide access to the package repository. This repository stores the following packages required for Kaspersky Next XDR Expert:
    • nfs-common
    • tar
    • iscsi-package
    • wireguard
    • wireguard-tools

    KDT will try to install these packages during the deployment from the package repository. You can also install these packages manually.

  6. Ensure that the curl and libnfs packages are installed on the primary worker node.

    The curl and libnfs packages are not installed from the package repository by KDT during the deployment. You must install these packages manually if they are not already installed. The libnfs package must be version 12 or later.

  7. Reserve static IP addresses for the target host and for the Kubernetes cluster gateway.

    The Kubernetes cluster gateway is intended for connecting to the Kaspersky Next XDR Expert components installed inside the Kubernetes cluster.

    For standard usage of the solution, when you install the DBMS on the target host outside the cluster, the gateway IP address is an IP address in CIDR notation that contains the subnet mask /32 (for example, 192.168.0.0/32).

    For demonstration purposes, when you install the DBMS inside the Kubernetes cluster, the gateway IP address is an IP range (for example, 192.168.0.1—192.168.0.2).

    Make sure that the target host and the Kubernetes cluster gateway are located in the same broadcast domain.

  8. On your DNS server, register the service FQDNs to connect to the Kaspersky Next XDR Expert services.

    By default, the Kaspersky Next XDR Expert services are available at the following addresses:

    • <console_host>.<smp_domain>—Access to the OSMP Console interface.
    • <admsrv_host>.<smp_domain>—Interaction with Administration Server.
    • <kuma_host>.<smp_domain>—Access to the KUMA Console interface.
    • <api_host>.<smp_domain>—Access to the Kaspersky Next XDR Expert API.
    • <psql_host>.<smp_domain>—Interaction with the DBMS (PostgreSQL).

      Where <console_host>, <admsrv_host>, <kuma_host>, <api_host>, and <psql_host> are service host names, and <smp_domain> is the service domain name. These parameters are parts of the service FQDNs, which you can specify in the configuration file. If you do not specify custom service host names, the default values are used: console_host—"console", admsrv_host—"admsrv", kuma_host—"kuma", api_host—"api", psql_host—"psql".

      Register the <psql_host>.<smp_domain> service FQDN if you installed the DBMS inside the Kubernetes cluster on the DBMS node and you need to connect to the DBMS.

    Depending on where you want to install the DBMS, the listed service FQDNs must be resolved to the IP address of the Kubernetes cluster as follows:

    • DBMS on the target host outside the Kubernetes cluster (standard usage)

      In this case, the gateway IP address is the address of the Kaspersky Next XDR Expert services (excluding the DBMS IP address). For example, if the gateway IP address is 192.168.0.0/32, the service FQDNs must be resolved as follows:

      • <console_host>.<smp_domain>—192.168.0.0/32
      • <admsrv_host>.<smp_domain>—192.168.0.0/32
      • <kuma_host>.<smp_domain>—192.168.0.0/32
      • <api_host>.<smp_domain>—192.168.0.0/32
    • DBMS inside the Kubernetes cluster (demonstration deployment)

      In this case, the gateway IP address is an IP range. The first IP address of the range is the address of the Kaspersky Next XDR Expert services (excluding the DBMS IP address), and the second IP address of the range is the IP address of the DBMS. For example, if the gateway IP range is 192.168.0.1—192.168.0.2, the service FQDNs must be resolved as follows:

      • <console_host>.<smp_domain>—192.168.0.1
      • <admsrv_host>.<smp_domain>—192.168.0.1
      • <kuma_host>.<smp_domain>—192.168.0.1
      • <api_host>.<smp_domain>—192.168.0.1
      • <psql_host>.<smp_domain>—192.168.0.2
  9. Create the user accounts that will be used for the Kaspersky Next XDR Expert deployment.

    These accounts are used for the SSH connection and must be able to elevate privileges (sudo) without entering a password. To do this, add the created user accounts to the /etc/sudoers file.

  10. Configure the SSH connection between the administrator and target hosts:
    1. On the administrator host, generate SSH keys by using the ssh-keygen utility without a passphrase.
    2. Copy the public key to the target host (for example, to the /home/<user_name>/.ssh directory) by using the ssh-copy-id utility.

      If you use the target host as the administrator host, you must copy the public key to it, too.

  11. For proper functioning of the Kaspersky Next XDR Expert components, open the required ports on the firewall of the administrator and target hosts, if necessary.
  12. Configure time synchronization over Network Time Protocol (NTP) on the administrator and target hosts.
  13. If necessary, prepare custom certificates for working with Kaspersky Next XDR Expert public services.

    You can use either one intermediate certificate issued from the organization's root certificate, or leaf certificates for each of the services. The prepared custom certificates will be used instead of self-signed certificates.

[Topic 280752]

Preparing the hosts for installation of the KUMA services

The KUMA services (collectors, correlators, and storages) are installed on the KUMA target hosts that are located outside the Kubernetes cluster.

Access to the KUMA services is performed by using the KUMA target host FQDNs. The administrator host must be able to access the KUMA target hosts by their FQDNs.

To prepare the KUMA target hosts for installation of the KUMA services:

  1. Ensure that the hardware, software, and installation requirements are met.
  2. Specify the host names.

    You must specify the FQDN, for example: kuma1.example.com.

    We do not recommend changing the KUMA host name after installation. Changing the host name makes it impossible to verify the authenticity of certificates and disrupts the network communication between the application components.

  3. Run the following commands:

    hostname -f

    hostnamectl status

    Compare the output of the hostname -f command and the value of the Static hostname field in the hostnamectl status command output. These values must match the FQDN of the device.

  4. Configure the SSH connection between the administrator host and KUMA target hosts.

    Use the SSH keys created for the target hosts. Copy the public key to the KUMA target hosts by using the ssh-copy-id utility.

  5. Register the KUMA target hosts in your organization's DNS zone, to allow host names to be translated to IP addresses.
  6. Ensure time synchronization over Network Time Protocol (NTP) is configured on all KUMA target hosts.

The hosts are ready for installation of the KUMA services.

[Topic 265298]

Installing a database management system

Kaspersky Next XDR Expert supports PostgreSQL or Postgres Pro database management systems (DBMS). For the full list of supported DBMSs, refer to the Hardware and software requirements.

Each of the following Kaspersky Next XDR Expert components requires a database:

  • Administration Server
  • Automation Platform
  • Incident Response Platform (IRP)
  • Identity and Access Manager (IAM)

Each of the components must have a separate database within the same DBMS instance. We recommend that you install the DBMS instance outside the Kubernetes cluster.

For the DBMS installation, KDT requires a privileged DBMS account that has permissions to create databases and other DBMS accounts. KDT uses this privileged DBMS account to create the databases and other DBMS accounts required for the Kaspersky Next XDR Expert components.

For information about how to install the selected DBMS, refer to its documentation.

After you install the DBMS, you need to configure the DBMS server parameters to optimize the DBMS work with Open Single Management Platform.

[Topic 166761]

Configuring the PostgreSQL or Postgres Pro server for working with Open Single Management Platform

Kaspersky Next XDR Expert supports PostgreSQL or Postgres Pro database management systems (DBMS). For the full list of supported DBMSs, refer to the Hardware and software requirements. Consider configuring the DBMS server parameters to optimize the DBMS work with Administration Server.

The default path to the configuration file is: /etc/postgresql/<VERSION>/main/postgresql.conf

Recommended parameters for PostgreSQL and Postgres Pro DBMS for work with Administration Server:

  • shared_buffers = 25% of the RAM value of the device where the DBMS is installed

    If RAM is less than 1 GB, then leave the default value.

  • max_stack_depth = If the DBMS is installed on a Linux device: the maximum stack size (run the 'ulimit -s' command to obtain this value in KB) minus a 1 MB safety margin

    If the DBMS is installed on a Windows device, leave the default value of 2 MB.

  • temp_buffers = 24MB
  • work_mem = 16MB
  • max_connections = 220

    This is the minimum recommended value; you can specify a larger one.

  • max_parallel_workers_per_gather = 0
  • maintenance_work_mem = 128MB

Reload the configuration or restart the server after updating the postgresql.conf file. Refer to the PostgreSQL documentation for details.

If you use a Postgres DBMS cluster, specify the max_connections parameter for all DBMS servers, as well as in the cluster configuration.

If you use Postgres Pro 15.7 or Postgres Pro 15.7.1, disable the enable_compound_index_stats parameter:

enable_compound_index_stats = off

For detailed information about PostgreSQL and Postgres Pro server parameters and on how to specify the parameters, refer to the corresponding DBMS documentation.
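Taken together, the recommendations above yield a postgresql.conf fragment like the following. The concrete values assume a hypothetical Linux host with 16 GB of RAM and an 8192 KB stack limit reported by 'ulimit -s'; recalculate shared_buffers and max_stack_depth for your hardware:

```
shared_buffers = 4GB              # 25% of 16 GB RAM
max_stack_depth = 7MB             # ulimit -s (8192 KB) minus the 1 MB safety margin
temp_buffers = 24MB
work_mem = 16MB
max_connections = 220             # minimum recommended value
max_parallel_workers_per_gather = 0
maintenance_work_mem = 128MB
```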

See also

Installing a database management system

[Topic 241223]

Preparing the KUMA inventory file


The KUMA inventory file is a file in the YAML format that contains installation parameters for deployment of the KUMA services that are not included in the Kubernetes cluster. The path to the KUMA inventory file is included in the configuration file that is used by Kaspersky Deployment Toolkit for the Kaspersky Next XDR Expert deployment.

The templates of the KUMA inventory file are located in the distribution package. If you want to install the KUMA services (storage, collector, and correlator) on one host, use the single.inventory.yaml file. To install the services on several hosts in the network infrastructure, use the distributed.inventory.yaml file.

We recommend backing up the KUMA inventory file that you used to install the KUMA services. You can use it to remove KUMA.

To prepare the KUMA inventory file,

Open the KUMA inventory file template located in the distribution package, and then edit the variables in the inventory file.

The KUMA inventory file contains the following blocks:

  • all block

    The all block contains the variables that are applied to all hosts specified in the inventory file. The variables are located in the vars section.

  • kuma block

    The kuma block contains the variables that are applied to hosts on which the KUMA services will be installed. These hosts are listed in the kuma block in the children section. The variables are located in the vars section.

The following list describes the possible variables, their descriptions, possible values, and the blocks of the KUMA inventory file where these variables can be located.

List of possible variables in the vars section

Variables located in the vars section of the all and kuma blocks:

  • ansible_connection (blocks: all, kuma)

    Method used to connect to the KUMA service hosts. Possible values:

    • ssh—Connection to the target hosts via SSH is established.
    • local—No connection to the target hosts is established.

    To provide the correct installation of the KUMA services, in the all block, set the ansible_connection variable to local.

    In the kuma block, you must specify the ansible_connection variable and set it to ssh, to provide the connection via SSH to the hosts on which the KUMA services are installed.

  • ansible_user (blocks: all, kuma)

    User name used to connect to the KUMA service hosts to install external KUMA services. If the root user is blocked on the target hosts, specify a user name that has the right to establish SSH connections and elevate privileges by using su or sudo.

    To provide the correct installation of the KUMA services, in the all block, set the ansible_user variable to nonroot.

    In the kuma block, you must override the ansible_user variable and set it to the user name of the account that can connect to the remote hosts via SSH, to prepare them for the installation of the KUMA services.

  • deploy_example_services (block: all)

    Variable used to indicate the creation of predefined services during installation. Possible values:

    • false—No services are needed. This is the default value in the KUMA inventory file template. Set the deploy_example_services variable to false for the standard deployment of the KUMA services.
    • true—Services must be created during installation. Set the deploy_example_services variable to true only for the demonstration deployment of the KUMA services.

  • ansible_become (block: kuma)

    Variable used to indicate the need to elevate the privileges of the user account that is used to install the KUMA components. Possible values:

    • false—If the ansible_user value is root.
    • true—If the ansible_user value is not root.

  • ansible_become_method (block: kuma)

    Method used for elevating the privileges of the user account that is used to install the KUMA components. Specify su or sudo if the ansible_user value is not root.

Groups of hosts located in the children section of the kuma block:

  • kuma_utils

    Group of hosts used for storing the service files and utilities of KUMA. A host can be included in the kuma_utils group and in the kuma_collector, kuma_correlator, or kuma_storage group at the same time. The kuma_utils group can contain multiple hosts.

    During the Kaspersky Next XDR Expert deployment, the following files are copied to the /opt/kaspersky/kuma/utils/ directory on the hosts that are included in kuma_utils:

    • kuma—An executable file with which the KUMA services are installed.
    • kuma.exe—An executable file with which the KUMA agents are installed on Windows-based hosts.
    • LEGAL_NOTICES—A file with information about third-party code.
    • maxpatrol-tool, kuma-ptvm.tar.gz—Utilities for integration with MaxPatrol.
    • ootb-content—An archive with out-of-the-box resources for the KUMA services.

    Each host in the group contains the ansible_host variable that specifies the unique host FQDN and IP address.

  • kuma_collector

    Group of KUMA collector hosts. This group can contain multiple hosts. Each host in the group contains the ansible_host variable that specifies the unique host FQDN and IP address.

  • kuma_correlator

    Group of KUMA correlator hosts. This group can contain multiple hosts. Each host in the group contains the ansible_host variable that specifies the unique host FQDN and IP address.

  • kuma_storage

    Group of KUMA storage hosts. This group can contain multiple hosts. Each host in the group contains the ansible_host variable that specifies the unique host FQDN and IP address.

    In this group, you can also specify the storage structure if you install the example services during the demonstration deployment (deploy_example_services: true). For the standard deployment (deploy_example_services: false), specify the storage structure in the KUMA Console interface.

Sample of the KUMA inventory file template for installation of the KUMA services on a single host (the single.inventory.yaml file)

all:
  vars:
    deploy_example_services: false
    ansible_connection: local
    ansible_user: nonroot
kuma:
  vars:
    ansible_connection: ssh
    ansible_user: root
  children:
    kuma_utils:
      hosts:
        kuma.example.com:
          ansible_host: 0.0.0.0
    kuma_collector:
      hosts:
        kuma.example.com:
          ansible_host: 0.0.0.0
    kuma_correlator:
      hosts:
        kuma.example.com:
          ansible_host: 0.0.0.0
    kuma_storage:
      hosts:
        kuma.example.com:
          ansible_host: 0.0.0.0
          shard: 1
          replica: 1
          keeper: 1

Sample of the KUMA inventory file template for installation of the KUMA services on several hosts (the distributed.inventory.yaml file)

all:
  vars:
    deploy_example_services: false
    ansible_connection: local
    ansible_user: nonroot
kuma:
  vars:
    ansible_connection: ssh
    ansible_user: root
  children:
    kuma_utils:
      hosts:
        kuma-utils.example.com:
          ansible_host: 0.0.0.0
    kuma_collector:
      hosts:
        kuma-collector-1.example.com:
          ansible_host: 0.0.0.0
    kuma_correlator:
      hosts:
        kuma-correlator-1.example.com:
          ansible_host: 0.0.0.0
    kuma_storage:
      hosts:
        kuma-storage-1.example.com:
          ansible_host: 0.0.0.0
          shard: 1
          replica: 1
          keeper: 1
        kuma-storage-2.example.com:
          ansible_host: 0.0.0.0
          shard: 1
          replica: 2
          keeper: 2
        kuma-storage-3.example.com:
          ansible_host: 0.0.0.0
          shard: 2
          replica: 1
          keeper: 3
        kuma-storage-4.example.com:
          ansible_host: 0.0.0.0
          shard: 2
          replica: 2

[Topic 265307]

Multi-node deployment: Specifying the installation parameters


The configuration file is a file in the YAML format and contains a set of installation parameters for the Kaspersky Next XDR Expert components.

The installation parameters listed in the tables below are required for the multi-node deployment of Kaspersky Next XDR Expert. To deploy Kaspersky Next XDR Expert on a single node, use the configuration file that contains the installation parameters specific for the single-node deployment.

The template of the configuration file (multinode.smp_param.yaml.template) is located in the distribution package in the archive with the KDT utility. You can fill out the configuration file template manually; or use the Configuration wizard to specify the installation parameters that are required for the Kaspersky Next XDR Expert deployment, and then generate the configuration file.

Not all of the parameters listed below are included in the configuration file template. This template contains only those parameters that must be specified before the Kaspersky Next XDR Expert deployment. The remaining parameters are set to default values and are not included in the template. You can manually add these parameters to the configuration file to override their default values.

For KDT to work correctly with the configuration file, add an empty line at the end of the file.

The nodes section of the configuration file contains installation parameters for each target host of the Kubernetes cluster. These parameters are listed in the table below.

Nodes section

Parameter name

Required

Description

desc

Yes

The name of the node.

The node name must comply with the following rules:

  • The node name must be 1 to 63 characters long.
  • The node name can only contain ASCII letters 'a' to 'z' (in either upper or lower-case), the digits '0' to '9', and the hyphen ('-').

type

Yes

The node type.

Possible parameter values:

  • primary
  • worker

host

Yes

The IP address of the node. All nodes must be included in the same subnet.

kind

No

The node type that specifies the Kaspersky Next XDR Expert component that will be installed on this node.

Possible parameter values:

  • admsrv—The value for the node on which Administration Server will be installed.
  • db—The value for the node on which the DBMS will be installed. It is used if you want to install the DBMS on the node inside the cluster (not for standard usage of the solution, only for demonstration purposes).

For Kaspersky Next XDR Expert to work correctly, we recommend that you select the node on which Administration Server will be installed. You can also select the node on which you want to install the DBMS. Specify the appropriate values of the kind parameter for these nodes; do not specify this parameter for other nodes.

user

Yes

The user name of the account created on the target host and used for connection to the node by KDT.

The user name must comply with the following rules:

  • The user name must be 1 to 31 characters long.
  • The user name can contain letters ('a' to 'z'), digits ('0' to '9'), underscores ('_'), and hyphens ('-').

key

Yes

The path to the private part of the SSH key located on the administrator host and used for connection to the node by KDT.

Other installation parameters are listed in the parameters section of the configuration file and are described in the table below.

Parameters section

Parameter name

Required

Description

psql_dsn

Yes

The connection string for accessing the DBMS that is installed and configured on a separate server. 

Specify this parameter as follows: psql_dsn=postgres://<dbms_username>:<password>@<fqdn>:<port>.

  • dbms_username—The user name of a privileged internal DBMS account. This account is granted permissions to create databases and other DBMS accounts. By using this privileged DBMS account, the databases and other DBMS accounts required for the Kaspersky Next XDR Expert components will be created during the deployment.
  • password—The password of the privileged internal DBMS account. The password must not contain the following symbols: " = ' % @ & ? _ #
  • fqdn:port—The FQDN and connection port of a separate server on which the DBMS is installed.

The psql_dsn parameter value must comply with the URI format. If the connection URI includes symbols with special meaning in any of its parts, it must be encoded with percent-encoding.

Symbols that must be replaced in the psql_dsn parameter value:

  • Whitespace → %20
  • % → %25
  • & → %26
  • / → %2F
  • : → %3A
  • = → %3D
  • ? → %3F
  • @ → %40
  • [ → %5B
  • ] → %5D

Refer to the PostgreSQL connection string article for details.
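As an illustration of the percent-encoding rules above, the following Python sketch assembles a psql_dsn value with an encoded user name and password. The build_psql_dsn helper is hypothetical (it is not part of KDT); the host and credentials are placeholders.

```python
from urllib.parse import quote

def build_psql_dsn(user: str, password: str, fqdn: str, port: int) -> str:
    # quote() with safe="" percent-encodes every reserved character,
    # including whitespace, ":", "/", "=", "?", "@", "[", and "]".
    return (f"postgres://{quote(user, safe='')}:"
            f"{quote(password, safe='')}@{fqdn}:{port}")

print(build_psql_dsn("postgres", "p a:s/s", "dbms.example.com", 5432))
# The space becomes %20, ":" becomes %3A, and "/" becomes %2F.
```

Note that some characters (for example, % and @) must not appear in the password at all, as stated above; encoding applies to the remaining reserved characters of the URI.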

If the psql_dsn parameter is set, the Kaspersky Next XDR Expert components use the DBMS located at the specified FQDN. Otherwise, the Kaspersky Next XDR Expert components use the DBMS inside the cluster (only for demonstration purposes).

For standard usage of the solution, install a DBMS on a separate server outside the cluster.
After you deploy Kaspersky Next XDR Expert, changing the DBMS installed inside the cluster to a DBMS installed on a separate server is not available.

nwc-language

Yes

The language of the OSMP Console interface specified by default. After installation, you can change the OSMP Console language.

Possible parameter values:

  • enUS
  • ruRu

ip_address

Yes

The reserved static IP address of the Kubernetes cluster gateway. The gateway must be included in the same subnet as all cluster nodes.

For standard usage of the solution, when you install the DBMS on a separate server, specify the gateway IP address as an IP address in CIDR notation that contains the subnet mask /32.

For demonstration purposes, when you install the DBMS inside the cluster, set the gateway IP address to an IP range in the format 0.0.0.0-0.0.0.0, where the first IP address of the range is the gateway IP address and the second IP address of the range is the DBMS IP address.
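The two accepted forms of the ip_address value can be checked with a short Python sketch. The check_ip_address helper is hypothetical and the addresses are placeholders; it only illustrates the /32 CIDR form (standard usage) and the two-address range form (demonstration deployment).

```python
import ipaddress

def check_ip_address(value: str) -> str:
    if "-" in value:
        # Range form: the first address is the gateway, the second is the DBMS.
        gateway, dbms = (ipaddress.ip_address(p) for p in value.split("-"))
        return f"gateway={gateway}, dbms={dbms}"
    # CIDR form: raises ValueError if the mask is invalid.
    net = ipaddress.ip_network(value)
    if net.prefixlen != 32:
        raise ValueError("expected a /32 mask for standard usage")
    return f"gateway={net.network_address}"

print(check_ip_address("192.0.2.10/32"))
print(check_ip_address("192.0.2.10-192.0.2.11"))
```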

ssh_pk

Yes

The path to the private part of the SSH key located on the administrator host and used for connection to the cluster nodes and nodes with the KUMA services (collectors, correlators, and storages) by using KDT.

admin_password

Yes

The admin_password parameter specifies the password of the Kaspersky Next XDR Expert user account that will be created by KDT during the installation. The default username of this account is "admin".

The Main administrator role is assigned to this user account.

The password must comply with the following rules:

  • The user password cannot have fewer than 8 or more than 256 characters.
  • The password must contain characters from at least three of the groups listed below:
    • Uppercase letters (A–Z)
    • Lowercase letters (a–z)
    • Numbers (0–9)
    • Special characters (@ # $ % ^ & * - _ ! + = [ ] { } | : ' , . ? / \ ` ~ " ( ) ;)
  • The password must not contain any whitespaces, Unicode characters, or the ".@" combination.

When you specify the admin_password parameter value manually (not by the Configuration wizard), make sure that this value meets the YAML standard requirements for values in strings:

  • The parameter value containing special characters must be enclosed in single quotes.
  • Any single quote ' inside the parameter value must be doubled to escape this single quote.

Example: the user account password Any_pass%1234'5678"90 must be specified as the value 'Any_pass%1234''5678"90' of the admin_password parameter.
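The quoting rules above can be sketched in Python. The yaml_single_quote helper is hypothetical; it reproduces the documented transformation for the example password.

```python
def yaml_single_quote(value: str) -> str:
    # Enclose the value in single quotes and double any single quote inside it,
    # as required by the YAML standard for single-quoted strings.
    return "'" + value.replace("'", "''") + "'"

print(yaml_single_quote("Any_pass%1234'5678\"90"))
# 'Any_pass%1234''5678"90'
```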

low_resources

No

The parameter indicating that Kaspersky Next XDR Expert is installed on the target host with limited computing resources.

Set the low_resources parameter to false for multi-node deployment. The default value is false.

Possible parameter values:

  • true—Installation with limited computing resources (for single-node deployment).
  • false—Standard installation.

core_disk_request

Yes

The amount of disk space allocated for the operation of KUMA Core. This parameter is used only if the low_resources parameter is set to false. If low_resources is set to true, the core_disk_request parameter is ignored and 4 GB of disk space is allocated for the operation of KUMA Core. If you do not specify core_disk_request and low_resources is set to false, the default amount of disk space (512 GB) is allocated.

inventory

Yes

The path to the KUMA inventory file located on the administrator host. The inventory file contains the installation parameters for deployment of the KUMA services that are not included in the Kubernetes cluster.

host_inventory

No

The path to the additional KUMA inventory file located on the administrator host. This file contains the installation parameters used to partially add or remove hosts with the KUMA services.

If you perform an initial deployment of Kaspersky Next XDR Expert or run a custom action that requires a configuration file, leave the default parameter value (/dev/null).

license

Yes

The path to the license key of KUMA Core.

iam-nwc_host

flow_host

hydra_host

login_host

admsrv_host

console_host

api_host

kuma_host

psql_host

monitoring_host

gateway_host

Yes

The host name that is used in the FQDNs of the public Kaspersky Next XDR Expert services. The service host name and domain name (the smp_domain parameter value) are parts of the service FQDN.

Default values of the parameters:

  • iam-nwc_host—"console"
  • flow_host—"console"
  • hydra_host—"console"
  • login_host—"console"
  • admsrv_host—"admsrv"
  • console_host—"console"
  • api_host—"api"
  • kuma_host—"kuma"
  • psql_host—"psql"
  • monitoring_host—"monitoring"
  • gateway_host—"console"

smp_domain

Yes

The domain name that is used in the FQDNs of the public Kaspersky Next XDR Expert services. The service host name and domain name are parts of the service FQDN. For example, if the value of the console_host variable is osmp_console, and the value of the smp_domain variable is smp.local, then the FQDN of the service that provides access to the OSMP Console is osmp_console.smp.local.
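The FQDN composition described above can be sketched as follows; the host and domain values are placeholders taken from the examples in this section.

```python
# Compose the public service FQDNs from the per-service host names
# and the smp_domain value.
smp_domain = "smp.local"
hosts = {
    "admsrv_host": "admsrv",
    "console_host": "console",
    "api_host": "api",
    "kuma_host": "kuma",
}

fqdns = {name: f"{host}.{smp_domain}" for name, host in hosts.items()}
print(fqdns["console_host"])  # console.smp.local
```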

pki_host_list

Yes

The list of host names of the public Kaspersky Next XDR Expert services for which a self-signed or custom certificate is to be generated.

intermediate_enabled

No

The parameter that indicates whether to use the custom intermediate certificate instead of the self-signed certificates for the public Kaspersky Next XDR Expert services. The default value is true.

Possible parameter values:

  • true—Use custom intermediate certificate.
  • false—Use self-signed certificates.

intermediate_bundle

No

The path to the custom intermediate certificate used to work with public Kaspersky Next XDR Expert services. Specify this parameter if the intermediate_enabled parameter is set to true.

admsrv_bundle

api_bundle

console_bundle

psql_bundle

No

The paths to the custom leaf certificates used to work with the public Kaspersky Next XDR Expert services: <admsrv_host>.<smp_domain>, <api_host>.<smp_domain>, <console_host>.<smp_domain>, <psql_host>.<smp_domain>. Specify the psql_bundle parameter only if you perform the demonstration deployment and install the DBMS inside the Kubernetes cluster on the DBMS node.

If you want to specify the leaf custom certificates, set the intermediate_enabled parameter to false and do not specify the intermediate_bundle parameter.

encrypt_secret

sign_secret

Yes

The names of the secret files that are stored in the Kubernetes cluster. These names contain the domain name, which must match the smp_domain parameter value.

ksc_state_size

Yes

The amount of free disk space allocated to store the Administration Server data (updates, installation packages, and other internal service data). Measured in gigabytes, specified as "<amount>Gi". The required amount of free disk space depends on the number of managed devices and other parameters, and can be calculated. The minimum recommended value is 10 GB.

prometheus_size

Yes

The amount of free disk space allocated to store metrics. Measured in gigabytes, specified as "<amount>GB". The minimum recommended value is 5 GB.

grafana_admin_user

No

The username of the account used to view OSMP metrics through the Grafana tool.

grafana_admin_password

No

The password of the account used to view OSMP metrics through the Grafana tool.

loki_size

Yes

The amount of free disk space allocated to store OSMP logs. Measured in gigabytes, specified as "<amount>Gi". The minimum recommended value is 20 GB.

loki_retention_period

Yes

The storage period of OSMP logs, after which the logs are automatically removed. Set the parameter value in the configuration file as "<time in hours>h" (for example, "72h"). The default value is 72 hours.

file_storage_cp

No

The amount of free disk space allocated to store data of the component for working with response actions. Measured in gigabytes, specified as "<amount>Gi". The minimum recommended value is 20 GB.

psql_tls_off

No

The parameter that indicates whether to encrypt the traffic between the Kaspersky Next XDR Expert components and the DBMS by using the TLS protocol.

If the DBMS is installed outside the cluster, TLS encryption is disabled by default. If the DBMS is installed inside the cluster (not for standard usage of the solution, only for demonstration purposes), TLS encryption must be disabled.

Possible parameter values:

  • true—Do not encrypt the traffic (default value).
  • false—Encrypt the traffic.

psql_trusted_cas

No

The path to the PEM file that can contain the TLS certificate of the DBMS server or a root certificate from which the TLS server certificate can be issued.

Specify the psql_trusted_cas parameter if the DBMS will be installed and configured on a separate server and the traffic encryption is enabled (psql_tls_off is set to false).

psql_client_certificate

No

The path to the PEM file that contains a certificate and a private key of the Kaspersky Next XDR Expert component. This certificate is used to establish the TLS connection between the Kaspersky Next XDR Expert components and the DBMS.

Specify the psql_client_certificate parameter if the DBMS will be installed and configured on a separate server, and traffic encryption is enabled (psql_tls_off is set to false).

proxy_enabled

No

The parameter that indicates whether to use the proxy server to connect the Kaspersky Next XDR Expert components to the internet. If the host on which Kaspersky Next XDR Expert is installed has internet access, you can also provide internet access for the operation of Kaspersky Next XDR Expert components (for example, Administration Server) and for specific integrations, both Kaspersky and third-party. To establish the proxy connection, you must also specify the proxy server parameters in the Administration Server properties. The default value is false.

Possible parameter values:

  • true—Proxy server is used.
  • false—Proxy server is not used.

proxy_addresses

No

The IP address of the proxy server. If the proxy server uses multiple IP addresses, specify these addresses separated by a space (for example, "0.0.0.0 0.0.0.1 0.0.0.2"). Specify this parameter if the proxy_enabled parameter is set to true.

proxy_port

No

The number of the port through which the proxy connection will be established. Specify this parameter if the proxy_enabled parameter is set to true.

ansible_extra_flags

No

The verbosity level of logs of the KUMA Core and KUMA services deployment that is performed by KDT.

Possible parameter values:

  • -v
  • -vv
  • -vvv
  • -vvvv

As the number of "v" letters in the flag increases, logs become more detailed. If this parameter is not specified in the configuration file, the standard component installation logs are saved.

incident_attachments_max_count_limit

No

The maximum number of files that you can attach to an incident. The default value is 100.

incident_attachments_max_size_limit

No

The maximum total size of files attached to an incident. Measured in bytes; specified without a unit of measurement. The default value is 26214400.
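The default value corresponds to 25 MB:

```python
# 26214400 bytes = 25 * 1024 * 1024, that is, 25 MB.
default_limit = 25 * 1024 * 1024
print(default_limit)  # 26214400
```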

ignore_precheck

No

The parameter indicating whether to check the hardware, software, and network configuration of the Kubernetes cluster nodes for compliance with the prerequisites for installing the solution before the deployment. The default value is false.

Possible parameter values:

  • true—Skip the pre-checks.
  • false—Perform the pre-checks.

Sample of the configuration file for the multi-node deployment of Kaspersky Next XDR Expert

schemaType: ParameterSet
schemaVersion: 1.0.1
namespace: ""
name: bootstrap
project: xdr
nodes:
  - desc: cdt-primary1
    type: primary
    host: 1.1.1.1
    access:
      ssh:
        user: root
        key: /root/.ssh/id_rsa
  - desc: cdt-w1
    type: worker
    host: 1.1.1.1
    access:
      ssh:
        user: root
        key: /root/.ssh/id_rsa
  - desc: cdt-w2
    type: worker
    host: 1.1.1.1
    access:
      ssh:
        user: root
        key: /root/.ssh/id_rsa
  - desc: cdt-w3
    type: worker
    host: 1.1.1.1
    kind: admsrv
    access:
      ssh:
        user: root
        key: /root/.ssh/id_rsa
parameters:
  - name: psql_dsn
    source:
      value: "postgres://postgres:password@dbms.example.com:1234"
  - name: ip_address
    source:
      value: 1.1.1.1/32
  - name: ssh_pk
    source:
      path: /root/.ssh/id_rsa
  - name: admin_password
    source:
      value: "password"
  - name: core_disk_request
    source:
      value: 20Gi
  - name: inventory
    source:
      value: "/root/osmp/inventory.yaml"
  - name: host_inventory
    source:
      value: "/dev/null"
  - name: license
    source:
      value: "/root/osmp/license.key"
  - name: smp_domain
    source:
      value: "smp.local"
  - name: pki_fqdn_list
    source:
      value: "admsrv api console kuma psql monitoring"

[Topic 249240]

Single-node deployment: Specifying the installation parameters


The configuration file used to deploy Kaspersky Next XDR Expert on a single node contains the installation parameters that are required for both the multi-node and single-node deployment. This configuration file also contains parameters specific to the single-node deployment (vault_replicas, vault_ha_mode, vault_standalone, and default_class_replica_count).

The template of the configuration file (singlenode.smp_param.yaml.template) is located in the distribution package in the archive with the KDT utility. You can fill out the configuration file template manually; or use the Configuration wizard to specify the installation parameters that are required for the Kaspersky Next XDR Expert deployment, and then generate the configuration file.

Not all of the parameters listed below are included in the configuration file template. This template contains only those parameters that must be specified before the Kaspersky Next XDR Expert deployment. The remaining parameters are set to default values and are not included in the template. You can manually add these parameters to the configuration file to override their default values.

For KDT to work correctly with the configuration file, add an empty line at the end of the file.

The nodes section of the configuration file contains the target host parameters that are listed in the table below.

Nodes section

Parameter name

Required

Description

desc

Yes

The name of the node.

type

Yes

The node type.

For the target host, set the type parameter to primary-worker to enable the single-node deployment. In this case, the target host will act as the primary and worker nodes.

host

Yes

The IP address of the node. All nodes must be included in the same subnet.

kind

No

The node type that specifies the Kaspersky Next XDR Expert component that will be installed on this node.

For the single-node deployment, leave this parameter empty, because all components will be installed on a single node.

user

Yes

The username of the user account created on the target host and used for connection to the node by KDT.

key

Yes

The path to the private part of the SSH key located on the administrator host and used for connection to the node by KDT.

Other installation parameters are listed in the parameters section of the configuration file and are described in the table below.

Parameters section

Parameter name

Required

Description

psql_dsn

Yes

The connection string for accessing the DBMS that is installed and configured outside the Kubernetes cluster. 

Specify this parameter as follows:

psql_dsn=postgres://<dbms_username>:<password>@<fqdn>:<port>

where:

  • dbms_username—The user name of a privileged internal DBMS account. This account is granted permissions to create databases and other DBMS accounts. By using this privileged DBMS account, the databases and other DBMS accounts required for the Kaspersky Next XDR Expert components will be created during the deployment. 
  • password—The password of the privileged internal DBMS account.

    The password must not contain the following symbols: " = ' % @ & ? _ #

  • fqdn:port—The FQDN and connection port of the target host on which the DBMS is installed.

The psql_dsn parameter value must comply with the URI format. If the connection URI includes symbols with special meaning in any of its parts, it must be encoded with percent-encoding.

Symbols that must be replaced in the psql_dsn parameter value:

  • Whitespace → %20
  • % → %25
  • & → %26
  • / → %2F
  • : → %3A
  • = → %3D
  • ? → %3F
  • @ → %40
  • [ → %5B
  • ] → %5D

Refer to the PostgreSQL connection string article for details.

If the psql_dsn parameter is set, the Kaspersky Next XDR Expert components use the DBMS located at the specified FQDN. Otherwise, the Kaspersky Next XDR Expert components use the DBMS inside the cluster (only for demonstration purposes).

For standard usage of the solution, install a DBMS on the target host outside the cluster.
After you deploy Kaspersky Next XDR Expert, changing the DBMS installed inside the cluster to a DBMS installed on a separate server is not available.

nwc-language

Yes

The language of the OSMP Console interface specified by default. After installation, you can change the OSMP Console language.

Possible parameter values: enUS, ruRu

ip_address

Yes

The reserved static IP address of the Kubernetes cluster gateway. The gateway must be included in the same subnet as all cluster nodes.

For standard usage of the solution, when you install the DBMS on the target host outside the cluster, the gateway IP address must contain the subnet mask /32.

For demonstration purposes, when you install the DBMS inside the cluster, set the gateway IP address to an IP range in the format 0.0.0.0-0.0.0.0, where the first IP address of the range is the gateway IP address itself and the second IP address of the range is the DBMS IP address.

ssh_pk

Yes

The path to the private part of the SSH key located on the administrator host and used for connection to the cluster nodes and nodes with the KUMA services (collectors, correlators, and storages) by using KDT.

admin_password

Yes

The admin_password parameter specifies the password of the Kaspersky Next XDR Expert user account that will be created by KDT during the installation. The default username of this account is "admin".

The Main administrator role is assigned to this user account.

The password must comply with the following rules:

  • The user password cannot have fewer than 8 or more than 256 characters.
  • The password must contain characters from at least three of the groups listed below:
    • Uppercase letters (A–Z)
    • Lowercase letters (a–z)
    • Numbers (0–9)
    • Special characters (@ # $ % ^ & * - _ ! + = [ ] { } | : ' , . ? / \ ` ~ " ( ) ;)
  • The password must not contain any whitespaces, Unicode characters, or the ".@" combination.

When you specify the admin_password parameter value manually (not by the Configuration wizard), make sure that this value meets the YAML standard requirements for values in strings:

  • The parameter value containing special characters must be enclosed in single quotes.
  • Any single quote ' inside the parameter value must be doubled to escape this single quote.

Example: the user account password Any_pass%1234'5678"90 must be specified as the value 'Any_pass%1234''5678"90' of the admin_password parameter.

low_resources

Yes

The parameter that indicates that Kaspersky Next XDR Expert is installed on the target host with limited computing resources.

Possible parameter values:

  • true—Installation with limited computing resources (for single-node deployment).
  • false—Standard installation.

For the single-node deployment, set the low_resources parameter to true so that Kaspersky Next XDR Expert components will require less memory and CPU resources. Also, if you enable this parameter, 4 GB of free disk space will be allocated to install KUMA Core on the target host.

vault_replicas

Yes

The number of replicas of the secret storage in the Kubernetes cluster.

For the single-node deployment, set the vault_replicas parameter to 1.

vault_ha_mode

Yes

The parameter that indicates whether to run the secret storage in the High Availability (HA) mode.

Possible parameter values:

  • true
  • false

For the single-node deployment, set the vault_ha_mode parameter to false.

vault_standalone

Yes

The parameter that indicates whether to run the secret storage in the standalone mode.

Possible parameter values:

  • true
  • false

For the single-node deployment, set the vault_standalone parameter value to true.

default_class_replica_count

Yes

The number of disk volumes that are used to store the service data of Kaspersky Next XDR Expert components and KDT. The default value is 3.

For the single-node deployment, set the default_class_replica_count parameter value to 1.

core_disk_request

Yes

The amount of disk space allocated for the operation of KUMA Core. This parameter is used only if the low_resources parameter is set to false. If low_resources is set to true, the core_disk_request parameter is ignored and 4 GB of disk space is allocated for the operation of KUMA Core. If you do not specify core_disk_request and low_resources is set to false, the default amount of disk space (512 GB) is allocated.

inventory

Yes

The path to the KUMA inventory file located on the administrator host. The inventory file contains installation parameters for deployment of the KUMA services that are not included in the Kubernetes cluster.

host_inventory

No

The path to the additional KUMA inventory file located on the administrator host. This file contains the installation parameters used to partially add or remove hosts with the KUMA services.

If you perform an initial deployment of Kaspersky Next XDR Expert or run a custom action that requires a configuration file, leave the default parameter value (/dev/null).

license

Yes

The path to the license key of KUMA Core.

iam-nwc_host

flow_host

hydra_host

login_host

admsrv_host

console_host

api_host

kuma_host

psql_host

monitoring_host

gateway_host

Yes

The host name that is used in the FQDNs of the public Kaspersky Next XDR Expert services. The service host name and domain name (the smp_domain parameter value) are parts of the service FQDN.

Default values of the parameters:

  • iam-nwc_host—"console"
  • flow_host—"console"
  • hydra_host—"console"
  • login_host—"console"
  • admsrv_host—"admsrv"
  • console_host—"console"
  • api_host—"api"
  • kuma_host—"kuma"
  • psql_host—"psql"
  • monitoring_host—"monitoring"
  • gateway_host—"console"

smp_domain

Yes

The domain name that is used in the FQDNs of the public Kaspersky Next XDR Expert services. The service host name and domain name are parts of the service FQDN. For example, if the value of the console_host variable is console, and the value of the smp_domain variable is smp.local, then the full name of the service that provides access to the OSMP Console is console.smp.local.

pki_host_list

Yes

The list of host names of the public Kaspersky Next XDR Expert services for which a self-signed or custom certificate is to be generated.

intermediate_enabled

No

The parameter that indicates whether to use the custom intermediate certificate instead of the self-signed certificates for the public Kaspersky Next XDR Expert services. The default value is true.

Possible parameter values:

  • true—Use custom intermediate certificate.
  • false—Use self-signed certificates.

intermediate_bundle

No

The path to the custom intermediate certificate used to work with public Kaspersky Next XDR Expert services. Specify this parameter if the intermediate_enabled parameter is set to true.

admsrv_bundle

api_bundle

console_bundle

psql_bundle

No

The paths to the custom leaf certificates used to work with the corresponding public Kaspersky Next XDR Expert services: <admsrv_host>.<smp_domain>, <api_host>.<smp_domain>, <console_host>.<smp_domain>, and <psql_host>.<smp_domain>. Specify the psql_bundle parameter only if you perform the demonstration deployment and install the DBMS inside the Kubernetes cluster on the DBMS node.

If you want to specify the leaf custom certificates, set the intermediate_enabled parameter to false and do not specify the intermediate_bundle parameter.

encrypt_secret

sign_secret

Yes

The names of the secret files that are stored in the Kubernetes cluster. These names contain the domain name, which must match the smp_domain parameter value.

ksc_state_size

Yes

The amount of free disk space allocated to store the Administration Server data (updates, installation packages, and other internal service data). Measured in gigabytes, specified as "<amount>Gi". The required amount of free disk space depends on the number of managed devices and other parameters, and can be calculated. The minimum recommended value is 10 GB.

prometheus_size

Yes

The amount of free disk space allocated to store metrics. Measured in gigabytes, specified as "<amount>GB". The minimum recommended value is 5 GB.

grafana_admin_user

No

The username of the account used to view OSMP metrics through the Grafana tool.

grafana_admin_password

No

The password of the account used to view OSMP metrics through the Grafana tool.

loki_size

Yes

The amount of free disk space allocated to store OSMP logs. Measured in gigabytes, specified as "<amount>Gi". The minimum recommended value is 20 GB.

loki_retention_period

Yes

The storage period of OSMP logs, after which the logs are automatically removed. Set the parameter value in the configuration file as "<time in hours>h" (for example, "72h"). The default value is 72 hours.

file_storage_cp

No

The amount of free disk space allocated to store data of the component for working with response actions. Measured in gigabytes, specified as "<amount>Gi". The minimum recommended value is 20 GB.

psql_tls_off

No

The parameter that indicates whether to encrypt the traffic between the Kaspersky Next XDR Expert components and the DBMS by using the TLS protocol.

If the DBMS is installed outside the cluster, TLS encryption is disabled by default. If the DBMS is installed inside the cluster (not for standard usage of the solution, only for demonstration purposes), TLS encryption must be disabled.

Possible parameter values:

  • true—Do not encrypt the traffic (default value).
  • false—Encrypt the traffic.

psql_trusted_cas

No

The path to the PEM file that can contain the TLS certificate of the DBMS server or a root certificate from which the TLS server certificate can be issued.

Specify the psql_trusted_cas parameter if the DBMS will be installed and configured on a separate server and traffic encryption is enabled (psql_tls_off is set to false).

psql_client_certificate

No

The path to the PEM file that contains a certificate and a private key of the Kaspersky Next XDR Expert component. This certificate is used to establish the TLS connection between the Kaspersky Next XDR Expert components and the DBMS.

Specify the psql_client_certificate parameter if the DBMS will be installed and configured on a separate server and traffic encryption is enabled (psql_tls_off is set to false).

proxy_enabled

No

The parameter that indicates whether to use the proxy server to connect the Kaspersky Next XDR Expert components to the internet. If the host on which Kaspersky Next XDR Expert is installed has internet access, you can also provide internet access for operation of Kaspersky Next XDR Expert components (for example, Administration Server) and for specific integrations, both Kaspersky and third-party. To establish the proxy connection, you must also specify the proxy server parameters in the Administration Server properties. The default value is false.

Possible parameter values:

  • true—Proxy server is used.
  • false—Proxy server is not used.

proxy_addresses

No

The IP address of the proxy server. If the proxy server uses multiple IP addresses, specify these addresses separated by a space (for example, "0.0.0.0 0.0.0.1 0.0.0.2"). Specify this parameter if the proxy_enabled parameter is set to true.

proxy_port

No

The number of the port through which the proxy connection will be established. Specify this parameter if the proxy_enabled parameter is set to true.

trace_level

No

The trace level. The default value is 0.

Possible parameter values: 0–5.

ansible_extra_flags

No

The verbosity level of logs of the KUMA Core and KUMA services deployment that is performed by KDT.

Possible parameter values:

  • -v
  • -vv
  • -vvv
  • -vvvv

As the number of "v" letters in the flag increases, logs become more detailed. If this parameter is not specified in the configuration file, the standard component installation logs are saved.

incident_attachments_max_count_limit

No

The maximum number of files that you can attach to an incident. The default value is 100.

incident_attachments_max_size_limit

No

The maximum total size of files attached to an incident. Measured in bytes; specified without units of measurement. The default value is 26214400.

ignore_precheck

No

The parameter indicating whether to check the hardware, software, and network configuration of the Kubernetes cluster nodes for compliance with the prerequisites for installing the solution before the deployment. The default value is false.

Possible parameter values:

  • true—Skip the pre-checks.
  • false—Perform the pre-checks.
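For reference, the default value of the incident_attachments_max_size_limit parameter (26214400 bytes) corresponds to 25 MB, which can be checked with shell arithmetic:

```shell
# 25 MB expressed in bytes: 25 * 1024 * 1024 = 26214400
default_limit=$((25 * 1024 * 1024))
echo "$default_limit"
```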

Sample of the configuration file for the single-node deployment of Kaspersky Next XDR Expert

schemaType: ParameterSet
schemaVersion: 1.0.1
namespace: ""
name: bootstrap
project: xdr
nodes:
- desc: cdt-1
  type: primary-worker
  host: 1.1.1.1
  proxy:
  access:
    ssh:
      user: root
      key: /root/.ssh/id_rsa
parameters:
- name: psql_dsn
  source:
    value: "postgres://postgres:password@dbms.example.com:1234"
- name: ip_address
  source:
    value: 1.1.1.1/32
- name: ssh_pk
  source:
    path: /root/.ssh/id_rsa
- name: admin_password
  source:
    value: "password"
- name: low_resources
  source:
    value: "true"
- name: default_class_replica_count
  source:
    value: "1"
- name: vault_replicas
  source:
    value: "1"
- name: vault_ha_mode
  source:
    value: "false"
- name: vault_standalone
  source:
    value: "true"
- name: inventory
  source:
    value: "/root/osmp/inventory.yaml"
- name: host_inventory
  source:
    value: "/dev/null"
- name: license
  source:
    value: "/root/osmp/license.key"
- name: smp_domain
  source:
    value: "smp.local"
- name: pki_host_list
  source:
    value: "admsrv api console kuma psql monitoring"

Page top
[Topic 271992]

Specifying the installation parameters by using the Configuration wizard

For the multi-node and single-node Kaspersky Next XDR Expert deployment, you have to prepare a configuration file that contains the installation parameters of the Kaspersky Next XDR Expert components. The Configuration wizard allows you to specify the installation parameters that are required to deploy Kaspersky Next XDR Expert, and then generates the resulting configuration file. Optional installation parameters have default values and do not need to be specified in the Configuration wizard. You can manually add these parameters to the configuration file to override their default values.

Prerequisites

Before specifying the installation parameters by using the Configuration wizard, you must install a database management system on a separate server located outside the Kubernetes cluster, and perform all preparatory steps for the administrator host, the target hosts (depending on the multi-node or single-node deployment option), and the KUMA hosts.

Process

To specify the installation parameters by using the Configuration wizard:

  1. On the administrator host where the KDT utility is located, run the Configuration wizard by using the following command:

    ./kdt wizard -k <path_to_transport_archive> -o <path_to_configuration_file>

    where:

    • <path_to_transport_archive> is the path to the transport archive.
    • <path_to_configuration_file> is the path where you want to save the configuration file and the configuration file name.

    The Configuration wizard prompts you to specify the installation parameters. The list of the installation parameters that are specific for the multi-node and single-node deployment differs.

    If you do not have the Write permissions on the specified directory or a file with the same name is located in this directory, an error occurs and the wizard terminates.

  2. Enter the IPv4 address of a primary node (or a primary worker node, if you will perform the single-node deployment). This value corresponds to the host parameter of the configuration file.
  3. Enter the username of the user account used for connection to the primary node by KDT (the user parameter of the configuration file).
  4. Enter the path to the private part of the SSH key located on the administrator host and that is used for connection to the primary node by KDT (the key parameter of the configuration file).
  5. Enter the number of worker nodes.

    Possible values:

    • 0—Single-node deployment.
    • 3 or more—Multi-node deployment.

    This step defines the option of deploying Kaspersky Next XDR Expert. If you want to perform single-node deployment, the following parameters specific for this deployment option will take the default values:

    • type—primary-worker
    • low_resources—true
    • vault_replicas—1
    • vault_ha_mode—false
    • vault_standalone—true
    • default_class_replica_count—1
  6. For each worker node, enter the IPv4 address (the host parameter of the configuration file).

    Note that the primary and worker nodes must be included in the same subnet.

    For multi-node deployment, the kind parameter of the first worker node is set to admsrv by default. That means that Administration Server will be installed on the first worker node. For single-node deployment, the kind parameter is not specified for the primary worker node.

  7. For each worker node, enter the username used for connection to the worker node by KDT (the user parameter of the configuration file).
  8. For each worker node, enter the path to the private part of the SSH key used for connection to the worker node by KDT (the key parameter of the configuration file).
  9. Enter the connection string for accessing the DBMS that is installed and configured on a separate server (the psql_dsn parameter of the configuration file).

    Specify this parameter as follows: postgres://<dbms_username>:<password>@<fqdn>:<port>.

    The Configuration wizard specifies the installation parameters only for the deployment option with the DBMS installed on a separate server that is located outside the Kubernetes cluster.

  10. Enter the IP address of the Kubernetes cluster gateway (the ip_address parameter of the configuration file).

    The gateway must be included in the same subnet as all cluster nodes. The gateway IP address must contain the subnet mask /32.

  11. Enter the password of the Kaspersky Next XDR Expert user account that will be created by KDT during the installation (the admin_password parameter of the configuration file).

    The default username of this account is "admin." The Main administrator role is assigned to this user account.

  12. Enter the path to the KUMA inventory file located on the administrator host (the inventory parameter of the configuration file).

    The KUMA inventory file contains the installation parameters for deployment of the KUMA services that are not included in the Kubernetes cluster.

  13. Enter the path to the license key file of KUMA Core (the license parameter of the configuration file).
  14. Enter the domain name that is used in the FQDNs of the public Kaspersky Next XDR Expert services (the smp_domain parameter of the configuration file).
  15. Enter the path to the custom certificates used to work with the public Kaspersky Next XDR Expert services (the intermediate_bundle parameter of the configuration file).

    If you want to use self-signed certificates, press Enter to skip this step.

  16. Skip the step to specify the extended_incident_lifecycle parameter. This is a service parameter that is disabled by default; do not change it.
  17. Check the specified parameters that are displayed in the numbered list.

    To edit the parameter, enter the parameter number, and then specify a new parameter value. Otherwise, press Enter to continue.

  18. Press Y to save a new configuration file with the specified parameters or N to stop the Configuration wizard without saving.

The configuration file with the specified parameters is saved in the YAML format.

Other installation parameters are included in the configuration file, with default values. You can edit the configuration file manually before the deployment of Kaspersky Next XDR Expert.
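The psql_dsn connection string entered in step 9 can be assembled from its parts as shown below; all values are hypothetical placeholders, to be replaced with your own DBMS details:

```shell
# Hypothetical DBMS connection details; replace with your own values.
DB_USER=postgres
DB_PASSWORD=password
DB_FQDN=dbms.example.com
DB_PORT=5432
# The resulting value of the psql_dsn parameter:
PSQL_DSN="postgres://${DB_USER}:${DB_PASSWORD}@${DB_FQDN}:${DB_PORT}"
echo "$PSQL_DSN"
```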

Page top
[Topic 271043]

Installing Kaspersky Next XDR Expert

Kaspersky Next XDR Expert is deployed by using KDT. KDT automatically deploys the Kubernetes cluster within which the Kaspersky Next XDR Expert components and other infrastructure components are installed. The steps of the Kaspersky Next XDR Expert installation process do not depend on the selected deployment option.

If you need to install multiple Kubernetes clusters with Kaspersky Next XDR Expert instances, you can use the required number of contexts.

To install Kaspersky Next XDR Expert:

  1. Unpack the downloaded distribution package with KDT on the administrator host.
  2. Read the End User License Agreement (EULA) of KDT located in the distribution package with the Kaspersky Next XDR Expert components.

    When you start using KDT, you accept the terms of the EULA of KDT.

    You can read the EULA of KDT after the deployment of Kaspersky Next XDR Expert. The file is located in the /home/kdt/ directory of the user who runs the deployment of Kaspersky Next XDR Expert.

  3. During installation, KDT downloads missing packages from the OS repositories. Before installing Kaspersky Next XDR Expert, run the following command on the target hosts to make sure that the apt/yum cache is up-to-date:

    apt update

  4. On the administrator host, run the following commands to start deployment of Kaspersky Next XDR Expert by using KDT. Specify the path to the transport archive with the Kaspersky Next XDR Expert components and the path to the configuration file that you filled out earlier (installation parameter sets for the multi-node and single-node deployment differ).

    chmod +x kdt

    ./kdt apply -k <full_path_to_transport_archive> -i <full_path_to_configuration_file>

    You can install Kaspersky Next XDR Expert without being prompted to read the terms of the EULA and the Privacy Policy of OSMP if you use the --accept-eula flag. In this case, you must read the EULA and the Privacy Policy of OSMP before the deployment of Kaspersky Next XDR Expert. The files are located in the distribution package with the Kaspersky Next XDR Expert components.

    If you want to read and accept the terms of the EULA and the Privacy Policy during the deployment, do not use the --accept-eula flag.

  5. If you do not use the --accept-eula flag in the previous step, read the EULA and the Privacy Policy of OSMP. The text is displayed in the command line window. Press the space bar to view the next text segment. Then, when prompted, enter the following values:
    1. Enter y if you understand and accept the terms of the EULA.

      Enter n if you do not accept the terms of the EULA.

    2. Enter y if you understand and accept the terms of the Privacy Policy, and if you agree that your data will be handled and transmitted (including to third countries) as described in the Privacy Policy.

      Enter n if you do not accept the terms of the Privacy Policy.

      To use Kaspersky Next XDR Expert, you must accept the terms of the EULA and the Privacy Policy.

    After you start the deployment, KDT checks whether the hardware, software, and network configuration of the Kubernetes cluster nodes meet the prerequisites for installing the solution. If all the strict pre-checks are successfully completed, KDT deploys the Kaspersky Next XDR Expert components within the Kubernetes cluster on the target hosts. Otherwise, the deployment will be interrupted. You can skip the pre-checks before the deployment, if needed (set the ignore_precheck installation parameter to true).

    During the Kaspersky Next XDR Expert deployment, a new user is created on the primary Administration Server. To start configuring OSMP Console, this user is assigned the following roles: the XDR role of the Main administrator in the Root tenant and the Kaspersky Security Center role of the Main administrator.

  6. View the installation logs of the Bootstrap component in the directory with the KDT utility and obtain diagnostic information about Kaspersky Next XDR Expert components, if needed.
  7. Sign in to OSMP Console and to KUMA Console.

    The OSMP Console address is https://<console_host>.<smp_domain>:443.

    The KUMA Console address is https://<kuma_host>.<smp_domain>:443.

    Addresses consist of the console_host, kuma_host, and smp_domain parameter values specified in the configuration file.

Kaspersky Next XDR Expert is deployed on the target hosts. Install the KUMA services to get started with the solution.
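The console addresses from the last step are derived from the configuration file values. The sketch below uses the values from the single-node sample configuration file shown earlier (smp_domain set to smp.local, with the console and kuma hosts listed in pki_host_list):

```shell
# Values taken from the sample configuration file earlier in this section.
CONSOLE_HOST=console
KUMA_HOST=kuma
SMP_DOMAIN=smp.local
OSMP_URL="https://${CONSOLE_HOST}.${SMP_DOMAIN}:443"
KUMA_URL="https://${KUMA_HOST}.${SMP_DOMAIN}:443"
echo "$OSMP_URL"
echo "$KUMA_URL"
```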

Page top
[Topic 249213]

Configuring internet access for the target hosts

If your organization's infrastructure uses a proxy server to access the internet, and you also need to connect the target hosts to the internet, you must add the IP address of each target host to the no_proxy variable in the /etc/environment file before the Kaspersky Next XDR Expert deployment. This allows the target hosts to establish a direct connection to the internet, so that Kaspersky Next XDR Expert can be deployed correctly.

To configure internet access for the target hosts:

  1. On the target host, open the /etc/environment file by using a text editor. For example, the following command opens the file by using the GNU nano text editor:

    sudo nano /etc/environment

  2. In the /etc/environment file, add the IP address of the target host to the no_proxy variable. Separate the values with commas, without spaces.

    For example, the no_proxy variable can be initially specified as follows:

    no_proxy=localhost,127.0.0.1

    You can add the IP address of the target host (192.168.0.1) to the no_proxy variable:

    no_proxy=localhost,127.0.0.1,192.168.0.1

    Alternatively, you can specify the subnet that includes the target hosts (in CIDR notation):

    no_proxy=localhost,127.0.0.1,192.168.0.0/24

  3. Save the /etc/environment file.

After you add the IP address of each target host to its /etc/environment file, you can continue preparing the target hosts and then proceed with the Kaspersky Next XDR Expert deployment.
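The edit described above can also be scripted. Below is a minimal sketch that operates on a local copy of the file; on a real target host, ENV_FILE would be /etc/environment, and TARGET_IP would be the host's own IP address:

```shell
# Work on a local copy for illustration; use /etc/environment on a real host.
ENV_FILE=./environment.sample
printf 'no_proxy=localhost,127.0.0.1\n' > "$ENV_FILE"
TARGET_IP=192.168.0.1   # hypothetical target host address
# Append the address to the existing no_proxy value, comma-separated.
sed -i "s|^no_proxy=.*|&,${TARGET_IP}|" "$ENV_FILE"
cat "$ENV_FILE"
```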

Page top
[Topic 275599]

Synchronizing time on machines

To configure time synchronization on machines:

  1. Run the following command to install chrony:

    sudo apt install chrony

  2. Configure the system time to synchronize with the NTP server:
    1. Make sure the virtual machine has internet access.

      If access is available, go to step b.

      If internet access is not available, edit the /etc/chrony.conf file. Replace 2.pool.ntp.org with the name or IP address of your organization's internal NTP server.

    2. Start the system time synchronization service by executing the following command:

      sudo systemctl enable --now chronyd

    3. Wait a few seconds, and then run the following command:

      sudo timedatectl | grep 'System clock synchronized'

      If the system time is synchronized correctly, the output contains the line System clock synchronized: yes.

Synchronization is configured.
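The check in the last step can also be scripted so that the result is machine-readable. In the sketch below, a sample line stands in for the real timedatectl output:

```shell
# check_sync reads timedatectl output on stdin and reports the clock state.
check_sync() {
  if grep -q 'System clock synchronized: yes'; then
    echo "synchronized"
  else
    echo "not synchronized"
  fi
}
# Sample input; on a real host, run: timedatectl | check_sync
result="$(printf 'System clock synchronized: yes\n' | check_sync)"
echo "$result"
```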

Page top
[Topic 265841]

Installing KUMA services

Services are the main components of KUMA that are used to manage events. Services receive events from event sources and subsequently bring them to a common form that is convenient for correlation analysis, as well as for storage and manual analysis.

Service types:

  • Storages are used to save events.
  • Collectors are used to receive events and convert them to the KUMA format.
  • Correlators are used to analyze events and search for defined patterns.
  • Agents are used to receive events on remote devices and forward them to the KUMA collectors.

You must install the KUMA services only after you deploy Kaspersky Next XDR Expert. During the Kaspersky Next XDR Expert deployment, the required infrastructure is prepared: the service directories are created on the prepared hosts, and the files that are required for the service installation are added to these directories. We recommend installing services in the following order: storage, collectors, correlators, and agents.

To install and configure the KUMA services:

  1. Sign in to KUMA Console.

    You can use one of the following methods:

    • In the main menu of OSMP Console, go to SettingsKUMA.
    • In your browser, go to https://<kuma_host>.<smp_domain>:443.

      The KUMA Console address consists of the kuma_host and smp_domain parameter values specified in the configuration file.

  2. In KUMA Console, create a resource set for each KUMA service (storages, collectors, and correlators) that you want to install on the prepared hosts in the network infrastructure.
  3. Create services for storages, collectors, and correlators in KUMA Console.
  4. Obtain the service identifiers to bind the created resource sets and the KUMA services:
    1. In the KUMA Console main menu, go to ResourcesActive services.
    2. Select the required KUMA service, and then click the Copy ID button.
  5. On the prepared hosts in the network infrastructure, run the corresponding commands to install the KUMA services. Use the service identifiers that were obtained earlier:
    • Installation command for the storage:

      sudo /opt/kaspersky/kuma/kuma storage --core https://<KUMA Core server FQDN>:7210 --id <service ID copied from the KUMA Console> --install

    • Installation command for the collector:

      sudo /opt/kaspersky/kuma/kuma collector --core https://<KUMA Core server FQDN>:7210 --id <service ID copied from the KUMA Console> --api.port <port used for communication with the collector> --install

    • Installation command for the correlator:

      sudo /opt/kaspersky/kuma/kuma correlator --core https://<KUMA Core server FQDN>:7210 --id <service ID copied from the KUMA Console> --api.port <port used for communication with the correlator> --install

    By default, the FQDN of KUMA Core is <kuma_console>.<smp_domain>.

    The port that is used for connection to KUMA Core cannot be changed. By default, port 7210 is used.

    On the server, open the ports that correspond to the installed collector and correlator (TCP 7221 and the other ports that were used during the service installation as the --api.port <port> parameter values).

  6. During the installation of the KUMA services, read the End User License Agreement (EULA) of KUMA. The text is displayed in the command line window. Press the space bar to view the next text segment. Then, when prompted, enter the following values:
    • Enter y if you understand and accept the terms of the EULA.
    • Enter n if you do not accept the terms of the EULA. To use the KUMA services, you must accept the terms of the EULA.

    You can read the EULA of KUMA after the installation of the KUMA services in one of the following ways:

    • On hosts that are included in the kuma_utils group in the KUMA inventory file: open the LICENSE file located in the /opt/kaspersky/kuma/utils directory.
    • On hosts that are included in other groups (kuma_storage, kuma_collector, or kuma_correlator) in the KUMA inventory file: open the LICENSE file located in the /opt/kaspersky/kuma directory.
    • Run the following command:

      /opt/kaspersky/kuma/kuma license --show

    After you accept the EULA, the KUMA services are installed on the prepared machines in the network infrastructure.

  7. If necessary, verify that the collector and correlator are ready to receive events.
  8. If necessary, install agents in the KUMA network infrastructure.

    The files required for the agent installation are located in the /opt/kaspersky/kuma/utils directory.

The KUMA services required for the functioning of Kaspersky Next XDR Expert are installed.
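The installation commands from step 5 share a common shape. The sketch below assembles the storage command from its parts; the KUMA Core FQDN and the service ID are hypothetical placeholders:

```shell
KUMA_CORE_FQDN=kuma.smp.local                     # hypothetical KUMA Core FQDN
SERVICE_ID=00000000-0000-0000-0000-000000000000   # ID copied from Active services
STORAGE_CMD="sudo /opt/kaspersky/kuma/kuma storage --core https://${KUMA_CORE_FQDN}:7210 --id ${SERVICE_ID} --install"
echo "$STORAGE_CMD"
```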

Page top
[Topic 265478]

Deployment of multiple Kubernetes clusters and Kaspersky Next XDR Expert instances

KDT allows you to deploy multiple Kubernetes clusters with Kaspersky Next XDR Expert instances and switch between them by using contexts. A context is a set of access parameters that defines the Kubernetes cluster that the user can select to interact with. The context also includes data for connecting to the cluster by using KDT.

Prerequisites

Before creating contexts and installing Kubernetes clusters with Kaspersky Next XDR Expert instances, you must do the following:

  1. Prepare the administrator and target hosts.

    For the installation of multiple clusters and Kaspersky Next XDR Expert instances, you need to prepare one administrator host for all clusters and a separate set of target hosts for each cluster. Kubernetes components should not be installed on the target hosts.

  2. Prepare the hosts for installation of the KUMA services.

    For installation of the KUMA services, you need to prepare separate sets of hosts for each Kaspersky Next XDR Expert instance.

  3. Prepare the KUMA inventory file.

    For installation of the KUMA services, you need to prepare separate inventory files for each Kaspersky Next XDR Expert instance.

  4. Prepare the configuration file.

    For installation of multiple clusters and Kaspersky Next XDR Expert instances, you need to prepare a configuration file for each Kaspersky Next XDR Expert instance. In these configuration files, specify the corresponding administrator and target hosts, as well as other parameters specific to the particular cluster and Kaspersky Next XDR Expert instance.

Process

To create a context with the Kubernetes cluster and Kaspersky Next XDR Expert instance:

  1. On the administrator host where the KDT utility is located, run the following command and specify the context name:

    ./kdt ctx --create <context_name>

    The context with the specified name is created.

  2. Install the Kubernetes cluster and Kaspersky Next XDR Expert.

The cluster with the Kaspersky Next XDR Expert instance is deployed in the context. The creation of the context is finished. When you obtain log files of Kaspersky Next XDR Expert components, the log files contain your current context name.

You can repeat this procedure to create the required number of contexts with installed clusters and Kaspersky Next XDR Expert instances.

To finish the context creation, you must deploy the Kubernetes cluster and the Kaspersky Next XDR Expert instance after you create the context. If you do not perform the deployment in the context and then create another context, the first context will be removed.

To view the list of created contexts and the active context name,

On the administrator host where the KDT utility is located, run the following command:

./kdt ctx

To switch to the required context,

On the administrator host where the KDT utility is located, run the following command and specify the context name:

./kdt ctx <context_name>

After you select the context, KDT connects to the corresponding Kubernetes cluster. Now, you can work with this cluster and the Kaspersky Next XDR Expert instance. KDT commands are applied to the selected cluster.

When you remove the Kaspersky Next XDR Expert components installed in the Kubernetes cluster and the cluster itself by using KDT, the corresponding contexts are also removed. Other contexts and their clusters with Kaspersky Next XDR Expert instances are not removed.

Page top
[Topic 269993]

Pre-check of infrastructure readiness for deployment

After you start the deployment of Kaspersky Next XDR Expert, KDT checks whether the hardware, software, and network configuration of the Kubernetes cluster nodes meet the prerequisites for installing the solution. A pre-check is performed for each node of the cluster.

All checks are divided into two groups:

  • Strict checks

    Checking the parameters that are critical for the operation of Kaspersky Next XDR Expert. If this check fails, the deployment is interrupted.

  • Non-strict checks

    Checking the parameters that are not critical for the operation of Kaspersky Next XDR Expert. If this check fails, the deployment continues.

The following pre-checks are performed:

  • Hardware:
    • Free space on disks is enough for deployment.
    • CPU configuration meets the requirements.
    • Free RAM space is enough for deployment.
    • CPU supports the AVX, SSE2, and BMI instructions.
  • Software:
    • Operating system and its version meet the requirements.
    • Kernel version meets the requirements.
    • Systemctl is installed and available (strict check).
    • Update of the package manager cache is available (strict check).
    • Required packages of the correct version are installed on the node (strict check).
    • Prohibited packages (docker and podman) are not installed on the node (strict check).
    • Outdated k0s binaries are missing (strict check).
    • Outdated k0s configuration files are missing (strict check).
  • Network:
    • All cluster nodes are located in the same broadcast domain.
    • DNS name resolution on the node is available (strict check).
    • Time synchronization on cluster nodes is configured (strict check).
    • Required ports are available.

If all the strict pre-checks are successfully completed, KDT deploys the Kaspersky Next XDR Expert components within the Kubernetes cluster on the target hosts. The results of all passed and failed checks are saved on each node in the file /tmp/k0s_report.txt. You can skip the pre-checks before the deployment, if needed (set the ignore_precheck installation parameter to true).

Page top
[Topic 296620]

Signing in to Kaspersky Next XDR Expert

To sign in to Kaspersky Next XDR Expert, you must know the web address of Open Single Management Platform Console. In your browser, JavaScript must be enabled.

To sign in to Open Single Management Platform Console:

  1. In your browser, go to https://<console_host>.<smp_domain>:443.

    The Open Single Management Platform Console address consists of the console_host and smp_domain parameter values specified in the configuration file.

    The sign-in page is displayed.

  2. Do one of the following:
    • To sign in to Open Single Management Platform Console by using a domain user account, enter the user name and password of the domain user.

      You can enter the user name of the domain user in one of the following formats:

      • Username@dns.domain
      • NTDOMAIN\Username

      Before you sign in with a domain user account, poll the domain controller to obtain the list of domain users.

    • Enter the user name and password of the internal user.
    • If one or more virtual Servers are created on the Server and you want to sign in to a virtual Server:
      1. Click Show virtual Server options.
      2. Type the virtual Server name that you specified while creating the virtual Server.
      3. Enter the user name and password of the internal or domain user who has rights on the virtual Server.
  3. Click the Sign in button.

After sign-in, the dashboard is displayed with the language and theme that you used the last time you signed in.

Kaspersky Next XDR Expert allows you to work with Open Single Management Platform Console and KUMA Console interfaces.

If you sign in to one of the consoles, and then open the other console on a different tab of the same browser window, you are signed in to the other console without having to re-enter the credentials. In this case, when you sign out of one console, the session also ends for the other console.

If you use different browser windows or different devices to sign in to Open Single Management Platform Console and KUMA Console, you have to re-enter the credentials. In this case, when you sign out of one console on the browser window or device where it is open, the session continues on the window or device where the other console is open.

To sign out of Open Single Management Platform Console,

In the main menu, go to your account settings, and then select Sign out.

Open Single Management Platform Console is closed and the sign-in page is displayed.

Page top
[Topic 249152]