Kaspersky Unified Monitoring and Analysis Platform

About Kaspersky Unified Monitoring and Analysis Platform

Kaspersky Unified Monitoring and Analysis Platform (hereinafter KUMA or "application") is an integrated software solution that combines the following functionality:

  • Receiving, processing, and storing information security events
  • Analyzing and correlating incoming data
  • Searching in received events
  • Creating notifications about detected indicators of information security threats

The application is built on a microservice architecture. This means that you can create and configure only those microservices (hereinafter also "services") that you need, which lets you use KUMA as a log management system or as a full-fledged SIEM system. In addition, flexible routing of data feeds lets you use third-party services for additional event processing.

The update functionality (including anti-virus signature updates and code base updates) may not be available in the application in the territory of the USA.

In this Help topic

What's new

Distribution kit

Hardware and software requirements

KUMA interface

Compatibility with other applications


What's new

Kaspersky Unified Monitoring and Analysis Platform introduces the following features and improvements:

  • Corrections and improvements in KUMA 3.4.2:
    • The issue of SQLite database maintenance taking too long has been resolved.
    • Storage of resource version history has been optimized, which prevents excessive growth of the database.
  • What's new in KUMA 3.4.2:
    • KUMA now supports the following additional operating systems:
      • Astra Linux 1.7.6
  • In KUMA 3.4.1, enrichment rules of the DNS type have the Recursion desired parameter. You can use the Recursion desired toggle switch to make a KUMA collector send recursive queries to authoritative DNS servers for the purposes of enrichment. The default value is Disabled.
  • KUMA 3.4.1 can receive and process incidents from NCIRCC. After upgrading to version 3.4.1, at each startup, KUMA Core sends a request to get new incident cards to the address specified in the URL field in KUMA's NCIRCC integration settings, and then continues sending requests every 10 minutes. If a new incident appears in the NCIRCC user account dashboard, KUMA registers the incident with the ALRT* prefix, and further interaction with NCIRCC is carried out in the context of the created incident.

    Interaction with NCIRCC is available even if the incident in KUMA has the Closed status: you can edit the value of the NCIRCC status field and chat with NCIRCC.

  • KUMA 3.4.1 introduces new predefined dashboard layouts.
  • Starting with KUMA 3.4.1 and Kaspersky Endpoint Security 12.9, EDR actions are supported when responding to threats.
  • Now you can visualize the dependencies of resources on each other and on other objects on an interactive graph. When editing resources, you can find out which linked resources the change will be applied to. You can display certain types of resources on the graph and save the resulting graph in SVG format.
  • Now you can add tags to resources, which makes it easier to search for resources that have the same tag.
  • The Access to shared resources role is retired and the following new user roles replace it:
    • Read shared resources
    • Manage shared resources

    The General administrator and users with the Manage shared resources role can now edit resources in the shared tenant.
  • Resource versioning (except dictionaries and tables) allows storing change history for resources.

    When you save changes in resource settings, a new version of the resource is created. You can restore a previous version of a resource, for example, to recover its functionality; you can also compare resource versions to keep track of the changes.

    After upgrading KUMA to version 3.4, existing resources will acquire versions only after these resources are changed and the changes are saved.

  • Now you can find resources by their content using full-text search. You can find resources in which at least one field contains a specific word; this can be useful, for example, if you need to find rules with a certain word in a condition.
  • The new type of KUMA resource, Data collection and analysis rules, allows you to schedule SQL queries to the storage and perform correlation based on the received data.
  • Now you can pass the values of unique fields to the fields of correlation events when creating correlation rules of the standard type.
  • New SQL function sets, enrich and lookup, allow using the attributes of assets and accounts, as well as data from dictionaries and tables, in search queries to filter events, generate reports and widgets (graph type: table). You can use the enrich and lookup function sets in an SQL query in data collection and analysis rules.
  • Now you can save search history. You can refer to the history of search queries and quickly find a query that you have used before.
  • Now you can organize saved queries in a folder tree for structured storage and quick lookup. You can rename previously saved queries, hierarchically arrange them in groups (folders), and find them using the search bar. You can also edit queries and create links to frequently used queries by adding them to favorites.
  • Now you can create a temporary list of exclusions (for example, you can create exclusions for false positives when managing alerts or incidents). You can create a list of exclusions for each correlation rule.
  • When creating a collector, at the Event parsing step, you now can pass the name or path of the file being processed by the collector to the KUMA event field.
  • The following settings have been added to the connector of the file type:
    • The Update timeout, sec field. If a file is not updated within this time, KUMA applies to the file the action specified in the Timeout action drop-down list.
    • The Timeout action drop-down list. In this drop-down list, you can select the action (delete, add suffix, or leave unchanged) that KUMA applies to the file after the time specified in the Update timeout, sec field expires.
  • Connectors of the file, 1c-xml, and 1c-log types get the following new settings:
    • The File/folder polling mode drop-down list. This drop-down list lets you specify the mode in which the connector rereads files in the directory.
    • Poll interval, ms. This field lets you specify the interval in milliseconds at which the connector rereads files in the directory.
  • The event cold storage period is determined in a new way: now you can specify event storage conditions in the ClickHouse cluster as an amount of disk space (an absolute value in GB or a percentage) when creating the storage or a storage space. The new Event retention time setting lets you configure the total retention time of events in KUMA, counting from the time when the event is received. This setting replaces the Cold retention period setting.

    When upgrading KUMA to version 3.4, if you have previously configured cold storage disks, the Event retention time setting is calculated as the sum of the old Retention period and Cold retention period settings.

  • Now you can make the storage more stable by flexibly configuring event storage conditions in the ClickHouse cluster using the Event storage options setting: by storage period, storage size in GB, or the ratio of the storage size to the total disk space available to it. When a specified condition is triggered, events are moved to a cold storage disk or deleted.

    You can configure storage conditions for the whole storage or for each storage space individually. The Event storage options setting replaces the Retention period setting.
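
    As an illustration only (this is not KUMA code, and all names in the sketch are hypothetical), the way the three kinds of storage conditions can combine is easy to model: eviction is triggered as soon as any configured condition is met.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class StorageConditions:
        max_age_days: Optional[int] = None      # storage period
        max_size_gb: Optional[float] = None     # storage size in GB
        max_disk_ratio: Optional[float] = None  # storage size / total disk space, 0..1

    def should_evict(oldest_event_age_days: float, space_size_gb: float,
                     disk_total_gb: float, cond: StorageConditions) -> bool:
        """Return True if any configured condition is triggered, meaning the
        oldest events must be moved to a cold storage disk or deleted."""
        if cond.max_age_days is not None and oldest_event_age_days > cond.max_age_days:
            return True
        if cond.max_size_gb is not None and space_size_gb > cond.max_size_gb:
            return True
        if (cond.max_disk_ratio is not None and disk_total_gb > 0
                and space_size_gb / disk_total_gb > cond.max_disk_ratio):
            return True
        return False

    # A space limited to 30 days or 80% of a 2 TB disk, whichever triggers first:
    print(should_evict(31, 900, 2000, StorageConditions(max_age_days=30, max_disk_ratio=0.8)))  # True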

  • Users with different rights can have granular access to events. Access to events is controlled at the level of storage spaces. After upgrading KUMA to version 3.4, the 'All spaces' space set is assigned to all existing users, that is, access to all spaces is unrestricted. To differentiate access, you must configure space sets and adjust access permissions. Also, after the update, all available storage spaces become selected in all widgets where storages had been selected. If a new space is created, it is not automatically selected in widget settings; you must select the new space manually.
  • Now you can manage extended event schema fields in the Settings → Extended event schema fields section. You can view existing extended event schema fields and the resources in which they are used, edit fields, create new fields manually or import them from a file, and export fields and information about fields.

    When upgrading KUMA to version 3.4, the previously created extended event schema fields are automatically migrated and displayed in the Settings → Extended event schema fields section, with the following special considerations:

    • If you had multiple fields of the same type with the same name, only one such field is migrated to KUMA 3.4.
    • All fields with the KL prefix in the name are migrated to KUMA 3.4 with the Enabled status. If any of these fields become service fields, you will not be able to delete, edit, disable, or export them.
    • Extended event schema fields that do not satisfy the requirements that version 3.4 imposes on fields are migrated to KUMA 3.4 with the Disabled status.

    After the upgrade, we recommend checking such fields and manually fixing any problems or changing the configurations of resources that use such fields.

  • Now you can filter and display data for a relative time range.

    This functionality is available for filtering events by period and for customizing the display of data in reports, the dashboard layout, and in widgets. You can use this functionality to display events or other data for which the selected filtering option has been updated within a time span relative to the current time.

    For data filtering, the time is specified as UTC time, and then converted in the KUMA interface to the local time zone set in the browser.
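
    As a minimal illustration (not KUMA code), the following Python snippet shows what a relative range such as "the last 24 hours" amounts to: the bounds are computed in UTC and converted to the local time zone only for display.

    from datetime import datetime, timedelta, timezone

    now_utc = datetime.now(timezone.utc)          # relative ranges are anchored to the current time
    range_start = now_utc - timedelta(hours=24)   # lower bound of the "last 24 hours" range

    # Conversion for display, analogous to what the KUMA interface does with
    # the local time zone set in the browser:
    print(range_start.astimezone())               # rendered in the system's local time zone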

  • Added support for autocomplete when typing functions of variables in correlators and correlation rules.

    Now, when describing a local or global variable, when you start typing the name of a function, a list of possible options is displayed in the input field, and to the left of it, a window is displayed with the description of the function and usage examples. You can select a function from the list and insert it together with arguments into the input field.

  • Now you can apply multiple monitoring policies to multiple event sources or disable monitoring policies for multiple sources at the same time.
  • Monitoring policies get a new Schedule setting that allows you to configure how often you want to apply monitoring policies to event sources.
  • Now you can manage connections created for an agent, which improves ease of use. You can rename connections (which lets you know from which connection and from which agent an event arrived), duplicate connections to create new ones based on existing ones, and delete connections. The functionality that allows using one agent to read multiple files has also been restored.
  • KUMA agents can now trace the event route if at least one internal destination is specified in the agent connection and a connector of the internal type is configured in the collector that receives events from the agent. After you configure the agent, information about the event route is added to the event card, the alert card, and the correlation event card in the Event tracing log section. For events with route tracing, the Event tracing log section displays information about the services through which the event passes, in converted form. Service names are displayed as clickable links; clicking a link opens the service card in a new browser tab. If you rename a service, the new name is displayed in the cards of new events as well as in the cards of already processed events. If you delete a service in the Active services section, the Event tracing log section displays Deleted instead of the link; the rest of the event route data is not deleted and continues to be displayed.
  • The Sigma rule converter converts rules to a filter selector, an SQL query for event search, or a KUMA correlation rule of the 'simple' type. The converter is available under the LGPL 2.1 license.
  • Now you can install the AI score and asset status service if your license covers the AI module.

    The AI service helps with precisely assessing the severity of correlation events generated by triggered correlation rules. The AI service gets correlation events that connect linked assets from the available storage clusters, constructs the expected sequence of events, and trains the AI model. Based on the chain of triggered correlation rules, the AI service calculates whether such a sequence of events is typical for this infrastructure. Non-typical patterns increase the score of the asset. The AI service calculates the AI score and the Status, which are displayed in the asset card. You can apply a filter by the Score AI and Status fields when searching for assets. You can also set up proactive categorization of assets by the Score AI and Status fields, which moves the asset to the category corresponding to the risk level as soon as the AI service assigns a score to the asset. You can also track asset category changes and the distribution of assets by status on the dashboard.

  • In the RU region, if you have the AI license module, you can use the Kaspersky Investigation & Response Assistant (KIRA) to analyze the command that triggered the correlation rule. This analysis helps with the investigation of alerts and incidents by offering an easy-to-understand description of the command-line options.

    You can send a query to KIRA from the card of the event or correlation event. If the command is obfuscated, KIRA deobfuscates it and displays the result: the conclusion, summary, and detailed analysis. Query results are stored in the cache for 14 days and can be viewed in the event card on the KIRA analysis tab by all users with access rights. You can also view the result in the properties of the Query in KIRA task, or restart the task and perform the analysis from scratch.

  • Now you can categorize assets by a relative time range.

    You can set up active categorization of assets to have assets moved to a category whenever a categorization condition has been satisfied for a certain period of time defined relative to the current time.

    For categorization, the time is specified as UTC time, and then converted in the KUMA interface to the local time zone set in the browser.

  • New types of custom notification templates.

    In previous versions, notification templates were available only for alert notifications. Now you can also create the following types of notification templates:

    • Report generated.
    • Task finished (only one template of this type can exist).
    • Sources monitoring alert.
    • KASAP group changed.

    All types of templates are available when creating a template for the Shared tenant. For all other tenants, the following notification template types are available: Alert created and Sources monitoring alert.

  • A new graph type: Stacked Bar chart.

    You can use the new graph type when creating Events and Assets widgets to visualize the relative quantities or percentages for selected parameters. The values of individual parameters are displayed in each bar in a different color.

  • Now you can select multiple assets using a filter and delete all selected assets. You can also select all assets in a category, link them to a category, or unlink assets from a category.
  • Now you can select multiple resources and delete them. You can delete all resources or specific types of resources.
  • New predefined widgets are available in the Assets group, as well as a new type, Custom widget, which lets you get custom analytics for assets.
  • Improved export of widgets to PDF. Now, if the data displayed in a widget continues beyond the visible area, when such a widget is exported to PDF, it is split into multiple widgets, and vertical bar charts are converted to horizontal bar charts.
  • New unified normalizer for different versions of NetFlow (NetFlow v5, NetFlow v9, IPFIX/NetFlow v10) lets you replace several normalizers with just one. The NetFlow v5, NetFlow v9, and IPFIX (NetFlow v10) normalizers remain available.

    In addition, the last NetFlow template is now saved to disk for each event source, which allows immediately parsing NetFlow events from an already known event source when the collector is restarted.

  • The End User License Agreement can now be accepted automatically when installing the KUMA agent on Linux devices and Windows devices using the --accept-eula option. Also, for the Windows agent, you can now use the command line to set the password for the agent's user account.
  • In the Resources → Active services section, a new column of the table of services, UUID, displays the unique identifier of the service.

    This column is hidden by default. Identifying KUMA services by UUID can facilitate troubleshooting at the operating system level.

  • KUMA supports the UNION operator for connections to an Oracle database as an event source.
  • To optimize asset management, the process of importing information about assets from Kaspersky Security Center has been split into two tasks:
    • Importing information about the basic parameters of assets (protection status, versions of anti-virus databases, hardware information), which takes less time and is expected to be performed more frequently.
    • Importing information about other asset parameters (vulnerabilities, software, owners), which can involve downloading a large amount of data and takes longer to complete.

    Each of the import tasks can be started independently of the other, and you can configure a separate schedule for each task when configuring the integration with Kaspersky Security Center.

  • Now you can display separate incoming events graphs for multiple event sources at the same time, as well as create an incoming events chart based on graphs for multiple event sources, which lets you compare the number of events received from multiple event sources and how this figure changes over time.
  • New filtering criteria added to the conditions for active categorization and search of assets: Software version, KSC group, CVSS (severity level of CVE vulnerability on the asset), CVE count (number of unique vulnerabilities with the CVE attribute on the asset), as well as filtering by custom fields of assets.
  • Now you can receive resource updates through a proxy server.
  • Now you can generate resource utilization reports (CPU, RAM, etc.) in the form of dumps at the request of Technical Support.
  • For resources, the table displays the number of resources from the tenants available to you: the total number, the number matching the applied filter or search, and the number of selected resources.
  • The new office365 connector lets you configure the reception of events from the Microsoft 365 (Office 365) solution using the API.
  • Certain obsolete resources are no longer supported or provided:
    • [OOTB] Linux audit and iptables syslog
    • [OOTB] Linux audit.log file
    • [OOTB] Checkpoint Syslog CEF by CheckPoint
    • [OOTB] Eltex MES Switches
    • [OOTB] PTsecurity NAD
    • [OOTB][AD] Granted TGS without TGT (Golden Ticket)
    • [OOTB][AD] Possible Kerberoasting attack
    • [OOTB][AD][Technical] 4768. TGT Requested
    • [OOTB][AD] List of requested TGT. EventID 4768

Distribution kit

The distribution kit includes the following files:

  • kuma-ansible-installer-<build number>.tar.gz is used to install KUMA components without the option of deployment in a high availability configuration
  • kuma-ansible-installer-ha-<build number>.tar.gz is used to install KUMA components with the option of deployment in a high availability configuration
  • Files containing information about the version (release notes) in Russian and English

Hardware and software requirements

Recommended hardware

This section lists the hardware requirements for processing an incoming event stream in KUMA at various Events per Second (EPS) rates.

The requirements below cover installing the KUMA components, assuming that the ClickHouse cluster only accepts INSERT queries. Hardware requirements for SELECT queries are calculated separately for the particular database usage profile of the customer.

Recommended hardware for ClickHouse cluster storage

When designing the storage cluster, the EPS, the average event size, and the storage depth are not the only things you must take into account. You must also be mindful of the special considerations presented in the following sections:

  • Disk subsystem
  • Network interface
  • RAM
  • Usage profile

Disk subsystem

Insert queries (write access) constitute the majority of the load that KUMA places on ClickHouse:

  • Events arrive in the database in an almost constant stream.
  • Inserts are small and frequent by ClickHouse standards.
  • ClickHouse has high write amplification because of the constant merging of parts of partitions in the background.
  • A partition part under 1 GB in size is stored as a single file. When the part reaches 1 GB, the wide column format is applied, wherein each column of the table is represented by two files, .dat and .mrk. In this case, overwriting a part to merge it with another part requires writing to 384 files (two files for each of 192 columns) plus an fsync for each, which is a lot of IOPS.

Of course, ClickHouse handles search queries at the same time, but there are far fewer of those than write operations. Considering the load profile, we recommend organizing the disk subsystem in the following way:

  1. Use DAS (Direct Attached Storage) with NVMe or SAS/SATA interfaces, avoiding SAN/NAS solutions.
  2. If the nodes of the ClickHouse cluster are deployed in a virtual environment, the disk array must be directly mounted to the /opt/kaspersky/kuma directory. Avoid virtual disks.
  3. Ideally, SSDs (SATA/NVMe) should be used. Any server-grade SSDs will outperform HDDs (even 15k RPM). For SSDs, use RAID-10 or RAID-0; for RAID-0, make sure that replication in ClickHouse is used.
  4. If SSDs cannot be used, use 15k-RPM SAS HDDs in RAID-10.
  5. A hybrid option is possible: storing hot data (for example, for the last month) on SSDs, and cold data on HDDs. This arrangement makes sure the SSD arrays always handle the write operations.
  6. A software RAID array (mdadm) will always outperform a hardware RAID controller and offer more flexibility. When using RAID-0 or RAID-10, it is important that the stripe size be 1 MB. In hardware RAID controllers, the default stripe size is often 64 KB, and the maximum supported value is 256–512 KB. For RAID-10, the near layout is optimal for writing, and the far layout is optimal for reading. You must keep this in mind when using a hybrid SSD + HDD configuration.
  7. The recommended file systems are EXT4 or XFS, preferably mounted with the noatime option.

Network interface

  1. On each node of the cluster, a network interface with a bandwidth of at least 10 Gbps must be installed and appropriately switched. Considering write amplification and replication, 25 Gbps is ideal.
  2. If necessary, replication processes can use a separate network interface that does not handle insert and search traffic.

RAM

ClickHouse recommends keeping the RAM-to-stored-data ratio at 1:100. But how much RAM is needed if each node is supposed to store 50 TB, given that the event storage depth is often dictated by regulatory requirements while search queries rarely span the full storage depth? The recommendation can therefore be interpreted as follows: if the average query scans partitions with a total volume of 10 TB, you need 100 GB of RAM. This is a general recommendation for broad analytical queries.
In the general case, more free RAM on the server improves the performance of search queries for the most recent events, because the contents of the partition files stay in the operating system's page cache, so file content is read from RAM rather than from disk.

Usage profile

The hardware requirements are designed for the database to ingest an event stream at a certain EPS while at rest (no search queries are being executed, only inserts). At the KUMA deployment stage, it is not always possible to accurately answer the following questions:

  1. What search and analytic queries will be performed in the system?
  2. What is the intensity, concurrency, and depth of search queries?
  3. What is the actual performance of the cluster nodes?

Thus, it is perfectly normal for a KUMA ClickHouse cluster to evolve over time. If an increase of EPS or intensity/resource consumption of search queries makes the cluster unable to cope with the load, you can add more shards (horizontal scaling) or improve the disk subsystems/CPU/RAM on the cluster nodes.

The configuration of the equipment must be chosen based on the system load profile. You can use the "All-in-one" configuration for an event stream of under 10,000 EPS and when using graphical panels supplied with the system.

KUMA supports Intel and AMD CPUs with SSE 4.2 and AVX instruction set support.

 

Up to 3,000 EPS

  • Configuration: installation on a single server. One device with the following characteristics:
    • At least 16 threads or vCPUs.
    • At least 32 GB of RAM.
    • At least 500 GB in the /opt directory.
    • Data storage type: SSD*.
    • Data transfer rate: at least 100 Mbps.

Up to 10,000 EPS

  • Configuration: installation on a single server. One device with the following characteristics:
    • At least 24 threads or vCPUs.
    • At least 64 GB of RAM.
    • At least 500 GB in the /opt directory.
    • Data storage type: SSD*.
    • Data transfer rate: at least 100 Mbps.

Up to 20,000 EPS

  • Configuration: 1 server for the Core + 1 server for the Collector + 1 server for the Correlator + 3 dedicated servers with the Keeper role + 2 servers for the Storage (recommended configuration; see the note on Storage replicas below).
  • Core component: one device. At least 10 threads or vCPUs; at least 24 GB of RAM; at least 500 GB in the /opt directory; data storage type: SSD; data transfer rate: at least 100 Mbps.
  • Collector component: one device. At least 8 threads or vCPUs; at least 16 GB of RAM; at least 500 GB in the /opt directory; data storage type: HDD allowed; data transfer rate: at least 100 Mbps.
  • Correlator component: one device. At least 8 threads or vCPUs; at least 32 GB of RAM; at least 500 GB in the /opt directory; data storage type: HDD allowed; data transfer rate: at least 100 Mbps.
  • Keeper component: three devices. Each device: at least 6 threads or vCPUs; at least 12 GB of RAM; at least 50 GB in the /opt directory; data storage type: SSD; data transfer rate: at least 100 Mbps.
  • Storage component: two devices. Each device: at least 24 threads or vCPUs; at least 64 GB of RAM; at least 500 GB in the /opt directory; data storage type: SSD*. The recommended transfer rate between ClickHouse nodes is at least 10 Gbps if the data stream is equal to or exceeds 20,000 EPS.

Up to 50,000 EPS

  • Configuration: 1 server for the Core + 2 servers for the Collector + 1 server for the Correlator + 3 dedicated servers with the Keeper role + 4 servers for the Storage (recommended configuration; see the note on Storage replicas below).
  • Core component: one device. At least 10 threads or vCPUs; at least 24 GB of RAM; at least 500 GB in the /opt directory; data storage type: SSD; data transfer rate: at least 100 Mbps.
  • Collector component: two devices. Each device: at least 8 threads or vCPUs; at least 16 GB of RAM; at least 500 GB in the /opt directory; data storage type: HDD allowed; data transfer rate: at least 100 Mbps.
  • Correlator component: one device. At least 8 threads or vCPUs; at least 32 GB of RAM; at least 500 GB in the /opt directory; data storage type: HDD allowed; data transfer rate: at least 100 Mbps.
  • Keeper component: three devices. Each device: at least 6 threads or vCPUs; at least 12 GB of RAM; at least 50 GB in the /opt directory; data storage type: SSD; data transfer rate: at least 100 Mbps.
  • Storage component: four devices. Each device: at least 24 threads or vCPUs; at least 64 GB of RAM; at least 500 GB in the /opt directory; data storage type: SSD*. The recommended transfer rate between ClickHouse nodes is at least 10 Gbps if the data stream is equal to or exceeds 20,000 EPS.

Note on Storage replicas: the recommended configurations use 2 Storage servers (up to 20,000 EPS) or 4 Storage servers (up to 50,000 EPS) because ClickHouse is configured with 2 replicas in each shard to ensure fault tolerance and high availability of events collected in the Storage. If fault tolerance requirements do not apply to the Storage, a ClickHouse configuration with 1 replica in each shard may be used; accordingly, 1 server (up to 20,000 EPS) or 2 servers (up to 50,000 EPS) may be used for the Storage.

 

Operating systems

  • Ubuntu 22.04 LTS
  • Oracle Linux 8.6, 8.7, 8.10, 9.2, 9.3, 9.4
  • Astra Linux Special Edition RUSB.10015-01 (2021-1126SE17 update 1.7.1)
  • Astra Linux Special Edition RUSB.10015-01 (2022-1011SE17MD update 1.7.2.UU.1)
  • Astra Linux Special Edition RUSB.10015-01 (2022-1110SE17 update 1.7.3). Kernel version 5.15.0.33 or higher is required.
  • Astra Linux Special Edition RUSB.10015-01 (2023-0630SE17MD, update 1.7.4.UU.1)
  • Astra Linux Special Edition RUSB.10015-01 (2024-0212SE17MD, update 1.7.5.UU.1)
  • Astra Linux Special Edition RUSB.10015-01 (2024-0830SE17, update 1.7.6)
  • RED OS 7.3.4, 8

TLS ciphersuites

TLS versions 1.2 and 1.3 are supported. Integration with a server that does not support the TLS versions and ciphersuites that KUMA requires is impossible.

Supported TLS 1.2 ciphersuites:

  • TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
  • TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  • TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
  • TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
  • TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
  • TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256

Supported TLS 1.3 ciphersuites:

  • TLS_AES_128_GCM_SHA256
  • TLS_AES_256_GCM_SHA384
  • TLS_CHACHA20_POLY1305_SHA256
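
Because integration fails when the other side cannot negotiate one of these versions and ciphersuites, it can be useful to check a third-party server in advance. The following probe is a sketch for a generic Python environment; the host name and port are placeholders, not KUMA settings.

import socket
import ssl

HOST, PORT = "third-party.example.com", 443   # placeholders

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject anything below TLS 1.2

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        # cipher() returns (ciphersuite name, protocol version, secret bits);
        # compare the name against the lists above.
        print(tls.version(), tls.cipher())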

Depending on the number and complexity of database queries made by users, reports, and dashboards, a greater amount of resources may be required.

For every 50,000 assets (above the first 50,000), you must add 2 extra threads or vCPUs and 4 GB of RAM to the resources of the Core component.

For every 100 services (above the first 100) managed by the Core component, you must add 2 additional threads or vCPUs to the resources of the Core component.
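
The two scaling rules above are simple arithmetic. As a sketch (not an official sizing tool), the extra Core resources can be computed as follows:

import math

def extra_core_resources(assets: int, services: int) -> tuple:
    """Extra threads/vCPUs and GB of RAM for the Core component, per the rules
    above: +2 vCPUs and +4 GB RAM per 50,000 assets above the first 50,000;
    +2 vCPUs per 100 services above the first 100."""
    asset_blocks = max(0, math.ceil((assets - 50_000) / 50_000))
    service_blocks = max(0, math.ceil((services - 100) / 100))
    return 2 * asset_blocks + 2 * service_blocks, 4 * asset_blocks

print(extra_core_resources(assets=150_000, services=250))  # (8, 8)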

ClickHouse must be deployed on solid-state drives (SSD). SSDs help improve data access speed.

* If the system usage profile does not involve running aggregation SQL queries to the Storage with a depth of over 24 hours, you can use HDD arrays (15,000-RPM SAS HDDs in RAID-10).

Hard drives can be used to store data using the HDFS technology.

Exported events are written to the drive of the Core component to the /opt/kaspersky/kuma/core/tmp/ temporary directory. The exported data is stored for 10 days and then automatically deleted. If you plan to export a large amount of events, you must allocate additional space.

Working in virtual environments

The following virtual environments are supported for installing KUMA:

  • VMware 6.5 or later
  • Hyper-V for Windows Server 2012 R2 or later
  • QEMU-KVM 4.2 or later
  • "Brest" virtualization software RDTSP.10001-02

Working in cloud environments

KUMA can work in a cloud infrastructure. The system can be installed on virtual machines following the IaaS (infrastructure-as-a-service) model.

For a cloud infrastructure, we recommend using the single-server configuration for 3,000 EPS or 10,000 EPS. Virtual machines must satisfy the hardware and software requirements of a regular installation.

When choosing the disk subsystem of the server, use the number of input/output operations per second (IOPS) as the reference parameter. The recommended minimum value is 1000 IOPS.

Resource recommendations for the Collector component

Consider that for event processing efficiency, the CPU core count is more important than the clock rate. For example, eight CPU cores with a medium clock rate can process events more efficiently than four CPU cores with a high clock rate.

Consider also that the amount of RAM utilized by the collector depends on the configured enrichment methods (DNS, accounts, assets, enrichment with data from Kaspersky CyberTrace) and on whether aggregation is used: RAM consumption is influenced by the data aggregation window setting, the number of fields used for aggregation, and the volume of data in the fields being aggregated. The utilization of computation resources by KUMA depends on the type of events being parsed and the efficiency of the normalizer.

For example, with an event stream of 1,000 EPS (event enrichment disabled, event aggregation disabled, 5,000 accounts, and 5,000 assets per tenant), one collector requires the following resources:

• 1 CPU core or 1 virtual CPU

• 512 MB of RAM

• 1 GB of disk space (not counting event cache)

For example, to support 5 collectors that do not perform event enrichment, you must allocate the following resources: 5 CPU cores, 2.5 GB of RAM, and 5 GB of free disk space.
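
The 5-collector figure is straight multiplication of the per-collector baseline. A back-of-the-envelope sketch (illustrative only, since enrichment, aggregation, and normalizer complexity can push consumption well above the baseline):

def collector_fleet_baseline(collectors: int) -> dict:
    """Baseline resources for N collectors at 1,000 EPS each, with event
    enrichment and aggregation disabled (figures from the example above)."""
    return {
        "cpu_cores": 1.0 * collectors,
        "ram_gb": 0.5 * collectors,    # 512 MB per collector
        "disk_gb": 1.0 * collectors,   # not counting the event cache
    }

print(collector_fleet_baseline(5))  # {'cpu_cores': 5.0, 'ram_gb': 2.5, 'disk_gb': 5.0}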

Kaspersky recommendations for storage servers

You must use high-speed protocols, such as Fibre Channel or iSCSI 10G, to connect the data storage system to storage servers. We do not recommend using application-level protocols such as NFS or SMB to connect data storage systems.

On ClickHouse cluster servers, we recommend using the ext4 file system.

If you are using RAID arrays, we recommend using RAID 0 for high performance, or RAID 10 for high performance and high availability.

To ensure high availability and performance of the data storage subsystem, we recommend making sure that all ClickHouse nodes are deployed strictly on different disk arrays.

If you are using a virtualized infrastructure to host system components, we recommend deploying ClickHouse cluster nodes on different hypervisors. You must prevent any two virtual machines with ClickHouse from running on the same hypervisor.

For high-load KUMA installations, we recommend installing ClickHouse on physical servers.

Requirements for agent devices

You must install agents on network infrastructure devices that will send data to the KUMA collector. Device requirements are listed below.

 

For both Windows and Linux devices:

  • CPU: single-core, 1.4 GHz or higher.
  • RAM: 512 MB.
  • Free disk space: 1 GB.

Operating systems

  • Windows devices:
    • Microsoft Windows Server 2012

      Microsoft Windows Server 2012 has reached end of life, therefore this operating system is supported in a limited way.

    • Microsoft Windows Server 2012 R2
    • Microsoft Windows Server 2016
    • Microsoft Windows Server 2019
    • Microsoft Windows 10 20H2, 21H1
  • Linux devices:
    • Oracle Linux 8.6, 8.7, 9.2
    • Astra Linux Special Edition RUSB.10015-01 (2021-1126SE17 update 1.7.1)
    • Astra Linux Special Edition RUSB.10015-01 (2022-1011SE17MD update 1.7.2.UU.1)
    • Astra Linux Special Edition RUSB.10015-01 (2022-1110SE17 update 1.7.3)
    • Astra Linux Special Edition RUSB.10015-01 (2023-0630SE17MD, update 1.7.4.UU.1)

Requirements for client devices for managing the KUMA web interface

CPU: Intel Core i3 8th generation

RAM: 8 GB

Supported browsers:

  • Google Chrome 110 or later
  • Mozilla Firefox 115 or later

Device requirements for installing KUMA on Kubernetes

Minimum configuration of a Kubernetes cluster for deployment of a high-availability KUMA configuration:

  • 1 load balancer node (not part of the cluster)
  • 3 controller nodes
  • 2 worker nodes

The minimum hardware requirements for devices for installing KUMA on Kubernetes are listed below.

 

  • Balancer: 1 core with 2 threads or 2 vCPUs; at least 2 GB of RAM; at least 30 GB of free disk space; network bandwidth of 10 Gbps.
  • Controller: 1 core with 2 threads or 2 vCPUs; at least 2 GB of RAM; at least 30 GB of free disk space; network bandwidth of 10 Gbps.
  • Worker node: 12 threads or 12 vCPUs; at least 24 GB of RAM; at least 1 TB in the /opt directory and at least 32 GB in the /var/lib directory; network bandwidth of 10 Gbps.


KUMA interface

The application is managed using the web interface.

The window of the application web interface contains the following:

  • Sections in the left part of the application web interface window
  • Tabs in the upper part of the application web interface window for some sections of the application
  • The workspace in the lower part of the application web interface window

The workspace displays the information that you choose to view in the sections and on the tabs of the application web interface window. It also contains controls that you can use to configure the display of the information.

While managing the application web interface, you can use shortcut keys to perform the following actions:

  • In all sections: close the window that opens in the right side pane—Esc.
  • In the Events section:
    • Switch between events in the right side pane—the ↑ and ↓ keys.
    • Start a search (when focused on the query field)—Ctrl/Command+Enter.
    • Save a search query—Ctrl/Command+S.

Compatibility with other applications

Kaspersky Endpoint Security for Linux

If KUMA components and the Kaspersky Endpoint Security for Linux application are installed on the same server, the report.db directory may grow very large and even take up the entire drive space. In addition, Kaspersky Endpoint Security for Linux scans all KUMA files by default, including service files, which may affect performance. To avoid these problems:

  • Upgrade Kaspersky Endpoint Security for Linux to version 12.0 or later.
  • We do not recommend enabling the network components of Kaspersky Endpoint Security for Linux.
  • Add the following directories to general exclusions and to on-demand scan exclusions:
    1. On the KUMA Core server:
      • /opt/kaspersky/kuma/victoria-metrics/ — directory with Victoria Metrics data.
      • /opt/kaspersky/kuma/mongodb — directory with MongoDB data.
    2. On the storage server:
      • /opt/kaspersky/kuma/clickhouse/ — the ClickHouse directory.
      • /opt/kaspersky/kuma/storage/<storage ID>/buffers/ — directory with storage buffers.
    3. On the correlator server:
      • /opt/kaspersky/kuma/correlator/<correlator ID>/data/ — directories with dictionaries.
      • /opt/kaspersky/kuma/correlator/<correlator ID>/lists — directories with active lists.
      • /opt/kaspersky/kuma/correlator/<correlator ID>/ctxtables — directories with context tables.
      • /opt/kaspersky/kuma/correlator/<correlator ID>/buffers — directory with buffers.
    4. On the collector server:
      • /opt/kaspersky/kuma/collector/<collector ID>/buffers — directory with buffers.
      • /opt/kaspersky/kuma/collector/<collector ID>/data/ — directory with dictionaries.
    5. Directories with logs for each service.

For more details on scan exclusions, please refer to the Kaspersky Endpoint Security for Linux Online Help.
