Kaspersky Next XDR Expert

Creating a collector

A collector receives raw events from event sources, performs normalization, and sends processed events to their destinations. The maximum size of an event that can be processed by the KUMA collector is 4 MB.

If you are using the SMB license, and both the hourly average EPS and the daily average EPS allowed by the license are exceeded for a collector, the collector stops receiving events and is displayed with a red status and a notification about the EPS limit being exceeded. The user with the General Administrator role gets a notification about the EPS limit being exceeded and the collector being stopped. Every hour, the hourly average EPS value is recalculated and compared with the EPS limit in the license. If the hourly average is under the limit, the restrictions on the collector are lifted, and the collector resumes receiving and processing events.

Installing a collector involves two steps:

  • Create the collector in the KUMA Console using the Installation Wizard. In this step, you specify the general collector settings to be applied when installing the collector on the server.
  • Install the collector on the network infrastructure server on which you want to receive events.

Actions in the KUMA Console

You create a collector in the KUMA Console by using the Installation Wizard. The Wizard combines the required resources into a set of resources for a collector. Upon completion of the Wizard, the service itself is automatically created based on this set of resources.

To create a collector in the KUMA Console,

Start the Collector Installation Wizard:

  • In the KUMA Console, in the Resources section, click the Add event source button.
  • In the KUMA Console, in the Resources → Collectors section, click the Add collector button.

As a result of completing the steps of the Wizard, a collector service is created in the KUMA Console.

A resource set for a collector includes the following resources: a connector, a normalizer, filters, aggregation rules, enrichment rules, and destinations.

These resources can be prepared in advance, or you can create them while the Installation Wizard is running.

Actions on the KUMA Collector Server

When installing the collector on the server that you intend to use for receiving events, run the command displayed at the last step of the Installation Wizard. When installing, you must specify the identifier automatically assigned to the service in the KUMA Console, as well as the port used for communication.

Testing the installation

After creating a collector, you are advised to make sure that it is working correctly.

In this section

Starting the Collector Installation Wizard

Installing a collector in a KUMA network infrastructure

Validating collector installation

Ensuring uninterrupted collector operation


Starting the Collector Installation Wizard

A collector consists of two parts: one part is created inside the KUMA Console, and the other part is installed on the network infrastructure server intended for receiving events. The Installation Wizard creates the first part of the collector.

To start the Collector Installation Wizard:

  • In the KUMA Console, in the Resources section, click Add event source.
  • In the KUMA Console, in the Resources → Collectors section, click Add collector.

Follow the instructions of the Wizard.

Aside from the first and last steps of the Wizard, the steps of the Wizard can be performed in any order. You can switch between steps by using the Next and Previous buttons, as well as by clicking the names of the steps on the left side of the window.

After the Wizard completes, a resource set for a collector is created in the KUMA Console under Resources → Collectors, and a collector service is added under Resources → Active services.

In this section

Step 1. Connect event sources

Step 2. Transport

Step 3. Event parsing

Step 4. Filtering events

Step 5. Event aggregation

Step 6. Event enrichment

Step 7. Routing

Step 8. Setup validation


Step 1. Connect event sources

This is a required step of the Installation Wizard. At this step, you specify the main settings of the collector: its name and the tenant that will own it.

To specify the general settings of the collector:

  1. On the Basic settings tab, fill in the following fields:
    1. In the Collector name field, enter a unique name for the service you are creating. The name must contain 1 to 128 Unicode characters.

      When certain types of collectors are created, agents named "agent: <Collector name>, auto created" are also automatically created together with the collectors. If this type of agent was previously created and has not been deleted, it will be impossible to create a collector named <Collector name>. If this is the case, you will have to either specify a different name for the collector or delete the previously created agent.

    2. In the Tenant drop-down list, select the tenant that will own the collector. The tenant selection determines what resources will be available when the collector is created.

      If you return to this window from another subsequent step of the Installation Wizard and select another tenant, you will have to manually edit all the resources that you have added to the service. Only resources from the selected tenant and shared tenant can be added to the service.

    3. If required, specify the number of processes that the service can run concurrently in the Workers field. By default, the number of worker processes is the same as the number of vCPUs on the server where the service is installed.
    4. You can optionally add up to 256 Unicode characters describing the service in the Description field.
  2. On the Advanced settings tab, fill in the following fields:
    1. If necessary, use the Debug toggle switch to enable logging of service operations. Error messages of the collector service are logged even when debug mode is disabled. The log can be viewed on the machine where the collector is installed, in the /opt/kaspersky/kuma/collector/<collector ID>/log/collector directory.
    2. You can use the Create dump periodically toggle switch at the request of Technical Support to generate resource (CPU, RAM, etc.) utilization reports in the form of dumps.
    3. In the Dump settings field, you can specify the settings to be used when creating dumps. The specifics of filling in this field must be provided by Technical Support.

General settings of the collector are specified. Proceed to the next step of the Installation Wizard.


Step 2. Transport

This is a required step of the Installation Wizard. On the Transport tab of the Installation Wizard, select or create a connector and in its settings, specify the source of events for the collector service.

To add an existing connector to a resource set,

select the name of the required connector from the Connector drop-down list.

The Transport tab of the Installation Wizard displays the settings of the selected connector. You can open the selected connector for editing in a new browser tab by clicking the edit button.

To create a new connector:

  1. Select Create new from the Connector drop-down list.
  2. In the Type drop-down list, select the connector type and specify its settings on the Basic settings and Advanced settings tabs. The available settings depend on the selected type of connector:

    When using the tcp or udp connector type at the normalization stage, IP addresses of the assets from which the events were received will be written in the DeviceAddress event field if it is empty.

    When using a wmi, wec, or etw connector, agents are automatically created for receiving Windows events.

    It is recommended to use the default encoding (UTF-8), and to apply other settings only if invalid characters appear in the fields of events.

    To make a KUMA collector listen on ports up to 1,000, you must run the service of the relevant collector with root privileges. To do this, after installing the collector, add the line AmbientCapabilities=CAP_NET_BIND_SERVICE to the [Service] section of its systemd configuration file.
    The systemd unit file is located at /usr/lib/systemd/system/kuma-collector-<collector ID>.service.

The connector is added to the resource set of the collector. The created connector is only available in this resource set and is not displayed in the Resources → Connectors section of the web interface.

Proceed to the next step of the Installation Wizard.


Step 3. Event parsing


This is a required step of the Installation Wizard. On the Event parsing tab of the Installation Wizard, click the + Add event parsing button to open the Basic event parsing window, and in that window, in the Normalizer drop-down list, select or create a normalizer. In normalizer settings, define the rules for converting raw events into normalized events. You can add multiple event parsing rules to the normalizer to implement complex event processing logic. You can test the normalizer using test events.

When you create a new normalizer in the Installation Wizard, the normalizer is saved, by default, only in the resource set for the collector, and you cannot use it in other collectors. You can select the Save normalizer check box to create the normalizer as a separate resource. This will make the normalizer available for selection in other collectors of the tenant.

If, when editing the settings of a resource set of a collector, you edit or delete conversions in a normalizer connected to the resource set of the collector, the edits will not be saved, and the normalizer itself may be corrupted. If you need to modify conversions in a normalizer that is already part of a service, the changes must be made directly in the normalizer under Resources → Normalizers in the web interface.

Adding a normalizer to a resource set

To add a normalizer to a resource set:

  1. Click the + Add event parsing button.

    This opens the Basic event parsing window with the normalizer settings. By default, the Normalization scheme tab is selected.

  2. In the Normalizer drop-down list, select a normalizer. You can select a normalizer that belongs to the tenant of the collector or to the common tenant.

    The normalizer settings are displayed.

    If you want to edit the settings of the normalizer, in the Normalizer drop-down list, click the pencil icon next to the name of the normalizer to open the Edit normalizer window, and in that window, click the dark circle. This opens the Basic event parsing window with the normalizer settings. If you want to edit additional parsing settings, in the Edit normalizer window, hover over the dark circle and click the plus symbol that appears. This opens the Additional event parsing window with the settings of the additional parsing. For more details on configuring the additional parsing of events, see the Creating the structure of event normalization rules section below.

  3. Click OK.

The normalizer is added and displayed as a dark circle on the Event parsing tab of the Installation Wizard. You can click the dark circle to view the settings of the normalizer.

To create a new normalizer in the collector:

  1. Click the + Add event parsing button.

    This opens the Basic event parsing window with the normalizer settings. By default, the Normalization scheme tab is selected.

  2. If you want to create the normalizer as a separate resource, select the Save normalizer check box. If the check box is selected, the normalizer becomes available for selection in other collectors of the tenant. This check box is cleared by default.
  3. In the Name field, enter a unique name for the normalizer. Maximum length of the name: 128 Unicode characters.
  4. In the Parsing method drop-down list, select the type of events to receive. Depending on the selected value, you can use the predefined event field matching rules or define rules manually. When you select some parsing methods, additional settings may become available that you must specify.

    Available parsing methods:

    • json

      This parsing method is used to process JSON data where each object, including its nested objects, occupies a single line in a file.

      When processing files with hierarchically structured data, you can reference the fields of nested objects using the dot notation. For example, the username parameter from the string "user": {"username": "system: node: example-01"} can be accessed by using the user.username query.

      Files are processed line by line. Multi-line objects with nested structures may be normalized incorrectly.

      In complex normalization schemes where additional normalizers are used, all nested objects are processed at the first normalization level, except for cases when the extra normalization conditions are not specified and, therefore, the event being processed is passed to the extra normalizer in its entirety.

      You can use \n and \r\n as newline characters. Strings must be UTF-8 encoded.

      If you want to send the raw event for advanced normalization, at each nesting level in the Advanced event parsing window, select Yes in the Keep raw event drop-down list.
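The dot-notation lookup described above can be sketched as follows. This is an illustrative example only, not KUMA's implementation; the helper name and the sample event are hypothetical:

```python
import json

def get_by_dot_path(obj, path):
    """Resolve a dot-notation path such as "user.username" in a parsed JSON object."""
    for key in path.split("."):
        if not isinstance(obj, dict) or key not in obj:
            return None
        obj = obj[key]
    return obj

# Each raw event occupies a single line, as the json parsing method expects.
raw_line = '{"event": "logon", "user": {"username": "system: node: example-01"}}'
event = json.loads(raw_line)
value = get_by_dot_path(event, "user.username")  # "system: node: example-01"
```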

    • cef

      This parsing method is used to process CEF data.

      If you select this parsing method, you can use the predefined rules for converting events to the KUMA format by clicking Apply default mapping.

    • regexp

      This parsing method is used to create custom rules for processing data in a format using regular expressions.

      You must add a regular expression (RE2 syntax) with named capturing groups to the field under Normalization. The name of the capturing group and its value are considered the field and value of the raw event that can be converted to an event field in KUMA format.

      To add event handling rules:

      1. If necessary, copy an example of the data you want to process to the Event examples field. We recommend completing this step.
      2. In the field under Normalization, add a RE2 regular expression with named capturing groups, for example, "(?P<name>regexp)". The regular expression added to the field under Normalization must exactly match the event. When designing the regular expression, we recommend using special characters that match the starting and ending positions of the text: ^, $.

        You can add multiple regular expressions or remove regular expressions. To add a regular expression, click Add regular expression. To remove a regular expression, click the delete icon next to it.

      3. Click the Copy field names to the mapping table button.

        Capture group names are displayed in the KUMA field column of the Mapping table. You can select the corresponding KUMA field in the column opposite each capturing group. If you followed the CEF format when naming the capturing groups, you can use automatic CEF mapping by selecting the Use CEF syntax for normalization check box.

      Event handling rules are added.
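The effect of named capturing groups can be sketched as follows. RE2 and Python's re module share the (?P<name>...) syntax, so Python's re stands in for RE2 here; the field names and the sample event are hypothetical:

```python
import re

# A regular expression with named capturing groups, anchored with ^ and $
# so that it matches the entire event, as recommended above.
pattern = re.compile(
    r"^(?P<src_ip>\d{1,3}(?:\.\d{1,3}){3}) (?P<user>\S+) (?P<action>\S+)$"
)

raw_event = "192.168.0.1 admin logon"
match = pattern.match(raw_event)
# Each capturing group name becomes a raw event field that can then be
# mapped to a KUMA event field in the Mapping table.
fields = match.groupdict()
```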

    • syslog

      This parsing method is used to process data in syslog format.

      If you select this parsing method, you can use the predefined rules for converting events to the KUMA format by clicking Apply default mapping.

      To parse events in rfc5424 format with a structured-data section, in the Keep extra fields drop-down list, select Yes. This makes the values from the structured-data section available in the Extra fields.
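A simplified sketch of how an rfc5424 header and its structured-data section break down (this is an assumption-laden illustration, not KUMA's syslog parser; the regular expression covers only well-formed single-element structured data):

```python
import re

# Minimal RFC 5424 header layout:
# <PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID STRUCTURED-DATA MSG
RFC5424 = re.compile(
    r"^<(?P<pri>\d{1,3})>(?P<version>\d) (?P<timestamp>\S+) (?P<host>\S+) "
    r"(?P<app>\S+) (?P<procid>\S+) (?P<msgid>\S+) (?P<sd>-|\[.*\]) ?(?P<msg>.*)$"
)

raw = ('<165>1 2003-10-11T22:14:15.003Z mymachine.example.com evntslog 1234 ID47 '
       '[exampleSDID@32473 iut="3" eventSource="Application"] An application event')
parsed = RFC5424.match(raw).groupdict()

# With Keep extra fields set to Yes, the structured-data parameters become
# available in the Extra fields, roughly like this:
extra = dict(re.findall(r'(\w+)="([^"]*)"', parsed["sd"]))
```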

    • csv

      This parsing method is used to create custom rules for processing CSV data.

      When choosing this parsing method, you must specify the separator of values in the string in the Delimiter field. Any single-byte ASCII character can be used as a delimiter for values in a string.

    • kv

      This parsing method is used to process data in key-value pair format. Available parsing method settings are listed in the table below.

      Available parsing method settings

      Setting

      Description

      Pair delimiter

      The character used to separate key-value pairs. You can specify any single-character (1 byte) value. The specified value must not match the value specified in the Value delimiter field.

      Value delimiter

      The character used to separate a key from its value. You can specify any single-character (1 byte) value. The specified value must not match the value specified in the Pair delimiter field.
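The pair and value delimiters from the table above work roughly like this. A minimal sketch, not KUMA's kv parser; the function name and defaults are hypothetical:

```python
def parse_kv(raw, pair_delim=" ", value_delim="="):
    """Split a raw event into key-value pairs using two distinct delimiters."""
    # The Pair delimiter and Value delimiter settings must not match,
    # as required by the table above.
    fields = {}
    for pair in raw.split(pair_delim):
        if value_delim in pair:
            key, _, value = pair.partition(value_delim)
            fields[key] = value
    return fields

event = parse_kv("src=192.168.0.1 dst=10.0.0.5 action=allow")
```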

    • xml

      This parsing method is used to process XML data in which each object, including nested objects, occupies a single line in a file. Files are processed line by line.

      If you want to send the raw event for advanced normalization, at each nesting level in the Advanced event parsing window, select Yes in the Keep raw event drop-down list.

      If you select this parsing method, under XML attributes, you can specify the key XML attributes to be extracted from tags. If an XML structure has multiple XML attributes with different values in the same tag, you can identify the necessary value by specifying the key of the value in the Source column of the Mapping table.

      To add key XML attributes:

      1. Click + Add field.
      2. This opens a window; in that window, specify the path to the XML attribute.

      You can add multiple XML attributes or remove XML attributes. To remove an individual XML attribute, click the delete icon next to it. To remove all XML attributes, click Reset.

      If XML key attributes are not specified, then in the course of field mapping the unique path to the XML value will be represented by a sequence of tags.

      Tag numbering

      Starting with KUMA 2.1.3, you can use automatic tag numbering in XML events. This lets you parse an event with identical tags or unnamed tags, such as <Data>.

      As an example, we will number the tags of the EventData attribute of the Microsoft Windows PowerShell event ID 800.

      <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
        <System>
          <Provider Name="Microsoft-Windows-ActiveDirectory_DomainService" Guid="{0e8478c5-3605-4e8c-8497-1e730c959516}" EventSourceName="NTDS" />
          <EventID Qualifiers="0000">0000</EventID>
          <Version>@</Version>
          <Level>4</Level>
          <Task>15</Task>
          <Opcode>0</Opcode>
          <Keywords>0x8080000000000000</Keywords>
          <TimeCreated SystemTime="2000-01-01T00:00:00.659495900Z" />
          <EventRecordID>55647</EventRecordID>
          <Correlation />
          <Execution ProcessID="1" ThreadID="1" />
          <Channel>service</Channel>
          <Computer>computer</Computer>
          <Security UserID="0000" />
        </System>
        <EventData>
          <Data>583</Data>
          <Data>36</Data>
          <Data>192.168.0.1:5084</Data>
          <Data>level</Data>
          <Data>name, lDAPDisplayName</Data>
          <Data />
          <Data>5545</Data>
          <Data>3</Data>
          <Data>0</Data>
          <Data>0</Data>
          <Data>0</Data>
          <Data>15</Data>
          <Data>none</Data>
        </EventData>
      </Event>

      To parse events with identical tags or unnamed tags, you need to configure tag numbering and data mapping for numbered tags with KUMA event fields.

      KUMA 3.0.x supports using XML attributes and tag numbering at the same time in the same extra normalizer. If an XML attribute contains unnamed tags or identical tags, we recommend using tag numbering. If the XML attribute contains only named tags, we recommend using XML attributes.

      To use XML attributes and tag numbering in extra normalizers, you must sequentially enable the Keep raw event setting in each extra normalizer along the path that the event follows to the target extra normalizer, and in the target extra normalizer itself.

      For an example of how tag numbering works, you can refer to the MicrosoftProducts normalizer. The Keep raw event setting is enabled sequentially in both AD FS and 424 extra normalizers.

      To set up the parsing of events with unnamed or identical tags:

      1. Open an existing normalizer or create a new normalizer.
      2. In the Basic event parsing window of the normalizer, in the Parsing method drop-down list, select xml.
      3. In the Tag numbering field, click + Add field.
      4. In the displayed field, enter the full path to the tag to whose elements you want to assign a number, for example, Event.EventData.Data. The first tag gets number 0. If the tag is empty, for example, <Data />, it is also assigned a number.
      5. To configure data mapping, under Mapping, click + Add row and do the following:
        1. In the displayed row, in the Source field, enter the full path to the tag and the index of the tag. For example, for the Microsoft Windows PowerShell event ID 800 from the example above, the full paths to tags and tag indices are as follows:
          • Event.EventData.Data.0,
          • Event.EventData.Data.1,
          • Event.EventData.Data.2 and so on.
        2. In the KUMA field drop-down list, select the field in the KUMA event that will receive the value from the numbered tag after parsing.
      6. Save changes in one of the following ways:
        • If you created a new normalizer, click Save.
        • If you edited an existing normalizer, in the collector to which the normalizer is linked, click Update configuration.

      Parsing is configured.
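The numbering scheme described in the steps above can be sketched with Python's standard XML parser. This is an illustration of the Event.EventData.Data.N addressing only, not KUMA's implementation; namespaces are omitted for brevity and the sample event is shortened:

```python
import xml.etree.ElementTree as ET

raw_event = (
    "<Event>"
    "<EventData>"
    "<Data>583</Data><Data>36</Data><Data>192.168.0.1:5084</Data><Data />"
    "</EventData>"
    "</Event>"
)

root = ET.fromstring(raw_event)
# Number every element at the Event.EventData.Data path; the first tag
# gets number 0, and an empty tag such as <Data /> is also assigned a number.
numbered = {
    f"Event.EventData.Data.{i}": (el.text or "")
    for i, el in enumerate(root.findall("./EventData/Data"))
}
# A mapping table then assigns each numbered source to a KUMA event field,
# e.g. Event.EventData.Data.2 -> DestinationAddress (hypothetical mapping).
```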

    • netflow

      This parsing method is used to process data in all supported NetFlow protocol formats: NetFlow v5, NetFlow v9, and IPFIX.

      If you select this parsing method, you can use the predefined rules for converting events to the KUMA format by clicking Apply default mapping. This takes into account the source fields of all NetFlow versions (NetFlow v5, NetFlow v9, and IPFIX).

      If the netflow parsing method is selected for the main parsing, extra normalization is not available.

      The default mapping rules for the netflow parsing method do not specify the protocol type in KUMA event fields. When parsing data in NetFlow format, on the Enrichment normalizer tab, you must create a constant data enrichment rule that adds the netflow value to the DeviceProduct target field.

    • netflow5

      This parsing method is used to process data in the NetFlow v5 format.

      If you select this parsing method, you can use the predefined rules for converting events to the KUMA format by clicking Apply default mapping. If the netflow5 parsing method is selected for the main parsing, extra normalization is not available.

      The default mapping rules for the netflow5 parsing method do not specify the protocol type in KUMA event fields. When parsing data in NetFlow format, on the Enrichment normalizer tab, you must create a constant data enrichment rule that adds the netflow value to the DeviceProduct target field.

    • netflow9

      This parsing method is used to process data in the NetFlow v9 format.

      If you select this parsing method, you can use the predefined rules for converting events to the KUMA format by clicking Apply default mapping. If the netflow9 parsing method is selected for the main parsing, extra normalization is not available.

      The default mapping rules for the netflow9 parsing method do not specify the protocol type in KUMA event fields. When parsing data in NetFlow format, on the Enrichment normalizer tab, you must create a constant data enrichment rule that adds the netflow value to the DeviceProduct target field.

    • sflow5

      This parsing method is used to process data in sflow5 format.

      If you select this parsing method, you can use the predefined rules for converting events to the KUMA format by clicking Apply default mapping. If the sflow5 parsing method is selected for the main parsing, extra normalization is not available.

    • ipfix

      This parsing method is used to process IPFIX data.

      If you select this parsing method, you can use the predefined rules for converting events to the KUMA format by clicking Apply default mapping. If the ipfix parsing method is selected for the main parsing, extra normalization is not available.

      The default mapping rules for the ipfix parsing method do not specify the protocol type in KUMA event fields. When parsing data in NetFlow format, on the Enrichment normalizer tab, you must create a constant data enrichment rule that adds the netflow value to the DeviceProduct target field.

    • sql. This parsing method becomes available only if at the Transport step of the Installation Wizard you specified a connector of the sql type.

      The normalizer uses this parsing method to process data obtained by making a selection from the database.

  5. In the Keep raw event drop-down list, specify whether you want to store the original raw event in the newly created normalized event.
    • Don't save—do not save the raw event. This value is selected by default.
    • Only errors—save the raw event in the Raw field of the normalized event if errors occurred when parsing it. You can use this value to debug the service. In this case, an event having a non-empty Raw field indicates problems.
    • Always—always save the raw event in the Raw field of the normalized event.
  6. In the Keep extra fields drop-down list, choose whether you want to store the raw event fields in the normalized event if no mapping is configured for them (see step 8 of these instructions):
    • No. This value is selected by default.
    • Yes. The original event fields are saved in the Extra field of the normalized event.
  7. If necessary, provide an example of the data you want to process to the Event examples field. We recommend completing this step.
  8. In the Mapping table, configure the mapping of raw event fields to KUMA event fields:
    1. Click + Add row.

      You can add multiple table rows or delete table rows. To delete a table row, select the check box next to it and click the Delete button.

    2. In the Source column, specify the name of the raw event field that you want to map to the KUMA event field.

      For details about the field name format, refer to the Normalized event data model article. For a description of the mapping, refer to the Mapping fields of predefined normalizers article.

      If you want to create rules for modifying the fields of the original event before writing to the KUMA event fields, click the settings icon next to the field name to open the Conversion window, and in that window, click + Add conversion. You can reorder the rules or delete the rules. To reorder rules, use the drag icons. To delete a rule, click the delete icon next to it.

      Available conversions

      Conversions are modifications that are applied to a value before it is written to the event field. You can select one of the following conversion types from the drop-down list:

      • entropy is used for converting the value of the source field using the information entropy calculation function and placing the conversion result in the target field of the float type. The result of the conversion is a number. Calculating the information entropy allows detecting DNS tunnels or compromised passwords, for example, when a user enters the password instead of the login and the password gets logged in plain text.
      • lower—is used to make all characters of the value lowercase
      • upper—is used to make all characters of the value uppercase
      • regexp—is used to convert a value using a specified RE2 regular expression. When you select this type of conversion, a field is displayed in which you must specify the RE2 regular expression.
      • substring is used to extract characters in a specified range of positions. When you select this type of conversion, the Start and End fields are displayed, in which you must specify the range of positions.
      • replace—is used to replace a specified character sequence with another character sequence. When you select this type of conversion, the following fields are displayed:
        • Replace chars specifies the sequence of characters to be replaced.
        • With chars is the character sequence to be used instead of the character sequence being replaced.
      • trim removes the specified characters from the beginning and from the end of the event field value. When you select this type of conversion, the Chars field is displayed in which you must specify the characters. For example, if a trim conversion with the Micromon value is applied to Microsoft-Windows-Sysmon, the new value is soft-Windows-Sys.
      • append appends the specified characters to the end of the event field value. When you select this type of conversion, the Constant field is displayed in which you must specify the characters.
      • prepend prepends the specified characters to the beginning of the event field value. When you select this type of conversion, the Constant field is displayed in which you must specify the characters.
      • replace with regexp is used to replace RE2 regular expression results with the specified character sequence. When you select this type of conversion, the following fields are displayed:
        • Expression is the RE2 regular expression whose results you want to replace.
        • With chars is the character sequence to be used instead of the character sequence being replaced.
      • Converting encoded strings to text:
        • decodeHexString—used to convert a HEX string to text.
        • decodeBase64String—used to convert a Base64 string to text.
        • decodeBase64URLString—used to convert a Base64url string to text.

        When converting a corrupted string, or if a conversion error occurs, corrupted data may be written to the event field.

        During event enrichment, if the length of the encoded string exceeds the size of the field of the normalized event, the string is truncated and is not decoded.

        If the length of the decoded string exceeds the size of the event field into which the decoded value is to be written, the string is truncated to fit the size of the event field.
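A few of the conversions above can be sketched with plain string operations. The entropy formula here is the standard Shannon entropy, which is an assumption about how the entropy conversion computes its float result; the trim and decodeBase64String lines mirror the examples given above:

```python
import base64
import math
from collections import Counter

def entropy(value: str) -> float:
    """Shannon entropy of a string in bits per character, as a float."""
    counts = Counter(value)
    total = len(value)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

# A high-entropy value (for example, a password logged in plain text) yields
# a noticeably larger number than ordinary field values.
e = entropy("Tr0ub4dor&3")

# trim removes the listed characters from both ends of the value, as in
# the Microsoft-Windows-Sysmon example above.
trimmed = "Microsoft-Windows-Sysmon".strip("Micromon")  # "soft-Windows-Sys"

# decodeBase64String converts a Base64 string to text.
decoded = base64.b64decode("a3VtYQ==").decode()  # "kuma"
```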

      Conversions when using the extended event schema

      Whether or not a conversion can be used depends on the type of extended event schema field being used:

      • For an additional field of the "String" type, all types of conversions are available.
      • For fields of the "Number" and "Float" types, the following types of conversions are available: regexp, substring, replace, trim, append, prepend, replaceWithRegexp, decodeHexString, decodeBase64String, and decodeBase64URLString.
      • For fields of "Array of strings", "Array of numbers", and "Array of floats" types, the following types of conversions are available: append and prepend.

      If at the Transport step of the Installation Wizard, you specified a connector of the file type, you can pass the name or path of the file being processed by the collector to the KUMA event field. To do this, in the Source column, specify one of the following values:

      • $kuma_fileSourceName to pass the name of the file being processed by the collector in the KUMA event field.
      • $kuma_fileSourcePath to pass the path to the file being processed by the collector in the KUMA event field.

      When you use a file connector, these variables in the normalizer only work with destinations of the internal type.

    3. In the KUMA field column, select a KUMA event field from the drop-down list. You can find the KUMA event field by typing its name. If the name of the KUMA event field begins with DeviceCustom* or Flex*, enter a unique custom label in the Label field, if necessary.

    If you want KUMA to enrich events with asset information, and the asset information to be available in the alert card when a correlation rule is triggered, in the Mapping table, configure a mapping of host address and host name fields depending on the purpose of the asset. For example, you can configure a mapping for SourceAddress and SourceHostName, or DestinationAddress and DestinationHostName KUMA event fields. As a result of enrichment, the event card includes a SourceAssetID or DestinationAssetID KUMA event field, and a link to the asset card. As a result of enrichment, asset information is also available in the alert card.

    If you have loaded data into the Event examples field, the table will have an Examples column containing examples of values carried over from the raw event field to the KUMA event field.

  9. Click OK.

The normalizer is created and displayed as a dark circle on the Event parsing tab of the Installation Wizard. You can click the dark circle to view the settings of the normalizer. If you want to edit additional parsing settings, hover over the dark circle and click the plus symbol that appears. This opens the Additional event parsing window with the settings of the additional parsing rule. For more details on configuring the additional parsing of events, see the Creating a structure of event normalization rules section below.

Enriching normalized events with additional data

You can create enrichment rules in the normalizer to add extra data to created normalized events. Enrichment rules are stored in the normalizer in which they were created.

To add enrichment rules to the normalizer:

  1. Select the main or an additional normalization rule to open its settings window, and in that window, select the Enrichment tab.
  2. Click the + Add enrichment button.

    The enrichment rule settings section is displayed. You can add multiple enrichment rules or delete enrichment rules. To delete an enrichment rule, click the delete cross-black icon next to it.

  3. In the Source kind drop-down list, select the type of the enrichment source. When you select some enrichment source types, additional settings may become available that you must specify.

    Available Enrichment rule source types:

    • constant

      This type of enrichment is used when a constant needs to be added to an event field. Available enrichment type settings are listed in the table below.

      Available enrichment type settings

      Setting

      Description

      Constant

      The value to be added to the event field. Maximum length of the value: 255 Unicode characters. If you leave this field blank, the existing event field value is removed.

      Target field

      The KUMA event field that you want to populate with the data.

      If you are using the event enrichment functions for extended schema fields of "String", "Number", or "Float" type with a constant, the constant is added to the field.

      If you are using the event enrichment functions for extended schema fields of "Array of strings", "Array of numbers", or "Array of floats" type with a constant, the constant is added to the elements of the array.

    • dictionary

      This type of enrichment is used if you need to add a value from the dictionary of the Dictionary type. Available enrichment type settings are listed in the table below.

      Available enrichment type settings

      Setting

      Description

      Dictionary name

      The dictionary from which the values are to be taken.

      Key fields

      Event fields whose values are to be used for selecting a dictionary entry. To add an event field, click Add field. You can add multiple event fields.

      If you are using event enrichment with the dictionary type selected as the Source kind setting, and an array field is specified in the Key fields setting, when an array is passed as the dictionary key, the array is serialized into a string in accordance with the rules of serializing a single value in the TSV format.

      Example: The Key fields setting of the enrichment uses the SA.StringArrayOne extended schema field. The SA.StringArrayOne extended schema field contains the values "a", "b", "c". The following value is passed to the dictionary as the key: ['a','b','c'].

      If the Key fields setting uses an array extended schema field and a regular event schema field, the field values are separated by the "|" character when the dictionary is queried.

      Example: The Key fields setting uses the SA.StringArrayOne extended schema field and the Code string field. The SA.StringArrayOne extended schema field contains the values "a", "b", "c", and the Code string field contains the myCode sequence of characters. The following value is passed to the dictionary as the key: ['a','b','c']|myCode.
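      The key construction described above can be sketched as follows. This is only an illustration of the documented serialization rules, not KUMA's actual code; the make_key helper is hypothetical:

```python
def make_key(fields):
    """Build a dictionary lookup key: an array field is serialized as a
    single value, and multiple key fields are joined with "|"."""
    parts = []
    for value in fields:
        if isinstance(value, list):
            parts.append("['" + "','".join(value) + "']")
        else:
            parts.append(str(value))
    return "|".join(parts)

print(make_key([["a", "b", "c"]]))            # ['a','b','c']
print(make_key([["a", "b", "c"], "myCode"]))  # ['a','b','c']|myCode
```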

    • table

      This type of enrichment is used if you need to add a value from the dictionary of the Table type.

      When this enrichment type is selected in the Dictionary name drop-down list, select the dictionary for providing the values. In the Key fields group of settings, click the Add field button to select the event fields whose values are used for dictionary entry selection.

      In the Mapping table, configure the dictionary fields to provide data and the event fields to receive data:

      • In the Dictionary field column, select the dictionary field. The available fields depend on the selected dictionary resource.
      • In the KUMA field column, select the event field to which the value is written. For some of the selected fields (*custom* and *flex*), in the Label column, you can specify a name for the data written to them.

      New table rows can be added by clicking the Add new element button. Columns can be deleted by clicking the cross button.

    • event

      This type of enrichment is used when you need to write a value from another event field to the current event field. Available enrichment type settings are listed in the table below.

      Available enrichment type settings

      Setting

      Description

      Target field

      The KUMA event field that you want to populate with the data.

      Source field

      The event field whose value is written to the target field.

      Clicking the wrench-new icon opens the Conversion window, in which you can click Add conversion to create rules for modifying the source data before writing it to the KUMA event fields. You can reorder and delete the created rules. To change the position of a rule, click the DragIcon icon next to it. To delete a rule, click the cross-black icon next to it.

      Available conversions

      Conversions are modifications that are applied to a value before it is written to the event field. You can select one of the following conversion types from the drop-down list:

      • entropy—used to convert the value of the source field using the information entropy calculation function and to place the result in a target field of the float type. The result of the conversion is a number. Calculating the information entropy makes it possible to detect DNS tunnels or compromised passwords, for example, when a user enters the password instead of the login and the password gets logged in plain text.
      • lower—used to make all characters of the value lowercase.
      • upper—used to make all characters of the value uppercase.
      • regexp—used to convert a value using a specified RE2 regular expression. When you select this type of conversion, a field is displayed in which you must specify the RE2 regular expression.
      • substring—used to extract the characters in a specified range of positions. When you select this type of conversion, the Start and End fields are displayed, in which you must specify the range of positions.
      • replace—used to replace a specified character sequence with another character sequence. When you select this type of conversion, the following fields are displayed:
        • Replace chars—the sequence of characters to be replaced.
        • With chars—the character sequence to be used instead of the character sequence being replaced.
      • trim—used to remove the specified characters from the beginning and the end of the event field value. When you select this type of conversion, the Chars field is displayed in which you must specify the characters. For example, if a trim conversion with the Micromon value is applied to Microsoft-Windows-Sysmon, the new value is soft-Windows-Sys.
      • append—used to append the specified characters to the end of the event field value. When you select this type of conversion, the Constant field is displayed in which you must specify the characters.
      • prepend—used to prepend the specified characters to the beginning of the event field value. When you select this type of conversion, the Constant field is displayed in which you must specify the characters.
      • replace with regexp—used to replace RE2 regular expression matches with the specified character sequence. When you select this type of conversion, the following fields are displayed:
        • Expression—the RE2 regular expression whose matches you want to replace.
        • With chars—the character sequence to be used instead of the matched sequence.
      • Converting encoded strings to text:
        • decodeHexString—used to convert a HEX string to text.
        • decodeBase64String—used to convert a Base64 string to text.
        • decodeBase64URLString—used to convert a Base64url string to text.

        When converting a corrupted string or if a conversion error occurs, corrupted data may be written to the event field.

        During event enrichment, if the length of the encoded string exceeds the size of the field of the normalized event, the string is truncated and is not decoded.

        If the length of the decoded string exceeds the size of the event field into which the decoded value is to be written, the string is truncated to fit the size of the event field.
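        Some of the conversions above can be sketched in Python. This only illustrates the documented behavior, not KUMA's implementation; the function names are hypothetical:

```python
import base64
import math

def entropy(value: str) -> float:
    """entropy conversion: Shannon entropy of the string, in bits per character."""
    n = len(value)
    counts = {}
    for ch in value:
        counts[ch] = counts.get(ch, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def trim(value: str, chars: str) -> str:
    """trim conversion: remove the listed characters from both ends."""
    return value.strip(chars)

def decode_base64(value: str, field_size: int) -> str:
    """decodeBase64String conversion; the decoded result is truncated
    to fit the target event field, as described above."""
    decoded = base64.b64decode(value).decode("utf-8", errors="replace")
    return decoded[:field_size]

print(trim("Microsoft-Windows-Sysmon", "Micromon"))  # soft-Windows-Sys
print(round(entropy("aabb"), 2))                     # 1.0
```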

      Conversions when using the extended event schema

      Whether or not a conversion can be used depends on the type of extended event schema field being used:

      • For an additional field of the "String" type, all types of conversions are available.
      • For fields of the "Number" and "Float" types, the following types of conversions are available: regexp, substring, replace, trim, append, prepend, replaceWithRegexp, decodeHexString, decodeBase64String, and decodeBase64URLString.
      • For fields of "Array of strings", "Array of numbers", and "Array of floats" types, the following types of conversions are available: append and prepend.

      When using enrichment of events that have event selected as the Source kind and the extended event schema fields are used as arguments, the following special considerations apply:

      • If the source extended event schema field has the "Array of strings" type, and the target extended event schema field has the "String" type, the values are written to the target extended event schema field in TSV format.

        Example: The SA.StringArray extended event schema field contains the values "string1", "string2", "string3". An event enrichment operation is performed. The result of the event enrichment operation is written to the DeviceCustomString1 event field. The DeviceCustomString1 event field contains the ["string1", "string2", "string3"] value as a single string.

      • If the source and target extended event schema fields have the "Array of strings" type, values of the source extended event schema field are added to the values of the target extended event schema field, and the "," character is used as the delimiter character.

        Example: The SA.StringArrayOne extended event schema field contains the ["string1", "string2", "string3"] values, and the SA.StringArrayTwo extended event schema field contains the ["string4", "string5", "string6"] values. An event enrichment operation is performed. The result of the event enrichment operation is written to the SA.StringArrayTwo extended event schema field. The SA.StringArrayTwo extended event schema field contains the ["string4", "string5", "string6", "string1", "string2", "string3"] values.

    • template

      This type of enrichment is used when you need to write a value obtained by processing Go templates into the event field. We recommend matching the value and the size of the field. Available enrichment type settings are listed in the table below.

      Available enrichment type settings

      Setting

      Description

      Template

      The Go template. Event field names are passed in the {{.EventField}} format, where EventField is the name of the event field from which the value must be passed to the script, for example, {{.DestinationAddress}} attacked from {{.SourceAddress}}.

      Target field

      The KUMA event field that you want to populate with the data.

      If you are using enrichment of events that have template selected as the Source kind, and in which the target field has the "String" type, and the source field is an extended event schema field containing an array of strings, you can use one of the following examples for the template:

      • {{.SA.StringArrayOne}}
      • {{- range $index, $element := .SA.StringArrayOne -}}

        {{- if $index}}, {{end}}"{{$element}}"{{- end -}}

      To convert the data in an array field in a template into the TSV format, use the toString function, for example:

      template {{toString .SA.StringArray}}

  4. In the Target field drop-down list, select the KUMA event field to which you want to write the data. You cannot select a value if in the Source kind drop-down list, you selected table.
  5. If you want to enable details in the normalizer log, enable the Debug toggle switch. The toggle switch is turned off by default.
  6. Click OK.

The enrichment rules are added to the selected parsing rule of the normalizer.
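The substitution that a template enrichment performs can be illustrated with a toy renderer. Real KUMA templates are Go text/template and support much more; this Python sketch only mimics simple {{.Field}} substitution, and the field values are made up:

```python
import re

def render(template: str, event: dict) -> str:
    """Substitute each {{.Field}} placeholder with the event field value."""
    return re.sub(r"\{\{\.(\w+)\}\}",
                  lambda m: str(event.get(m.group(1), "")), template)

event = {"SourceAddress": "10.0.0.5", "DestinationAddress": "10.0.0.9"}
print(render("{{.DestinationAddress}} attacked from {{.SourceAddress}}", event))
# 10.0.0.9 attacked from 10.0.0.5
```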

Configuring parsing linked to IPv4 addresses

If at the Transport step of the Installation Wizard, you specified a connector of the udp, tcp, or http type, you can forward events from multiple IPv4 addresses from sources of different types to the same collector, and have the collector apply the specified normalizers. To do this, you need to specify several IPv4 addresses and select the normalizer that you want to apply to events coming from the specified IPv4 addresses. The following types of normalizers are available: json, cef, regexp, syslog, csv, kv, and xml.

If you select a connector type other than udp, tcp, or http in a collector with configured normalizers and linking to IPv4 addresses, the Parsing settings tab is no longer displayed, and only the first of the previously specified normalizers is specified on the Event parsing tab of the Installation Wizard. The Parsing settings tab is hidden immediately, and the changes are applied after the resource is saved. If you want to restore the previous settings, exit the Installation Wizard without saving.

For normalizers of the Syslog and regexp types, you can use a chain of normalizers. In this case, you can specify additional normalization conditions that depend on the value of the DeviceProcessName field. The difference from extra normalization is that you can specify shared normalizers.

To configure parsing with linking to IPv4 addresses:

  1. Select the Parsing settings tab and click the + Event source button.

    A group of parsing settings is displayed. You can add multiple parsings or delete parsings. To remove a parsing, click the delete cross-black icon next to it.

  2. In the IP address(es) field, enter the IPv4 address from which events will be sent. You can specify multiple IP addresses separated by commas. The length of the IPv4 address list is unlimited; however, we recommend limiting the number of IPv4 addresses to keep the load on the collector balanced. You must specify a value in this field if you want to apply multiple normalizers in one collector.

    The IPv4 address must be unique for each normalizer. KUMA checks the uniqueness of IPv4 addresses, and if you specify the same IPv4 address for different normalizers, the Field must be unique message is displayed.

    If you want to send all events to the same normalizer without specifying IPv4 addresses, we recommend creating a separate collector. To improve performance, we recommend creating a separate collector with one normalizer if you want to apply the same normalizer to events from a large number of IPv4 addresses.

  3. In the Normalizer drop-down list, create or select a normalizer. You can click the arrow icon next to the drop-down list to select the Parsing schemas tab.

    Normalization is triggered if at the Transport step of the Installation Wizard, you specified a connector of the udp, tcp, or http type. For an http connector, you must specify the header of the event source. Taking into account the available connectors, the following normalizer types are available for automatic source recognition: json, cef, regexp, syslog, csv, kv, and xml.

  4. If you selected a normalizer of the Syslog or regexp type, and you want to add a conditional normalization, click the + Add conditional normalization button. Conditional normalization is available if in the Mapping table of the main normalizer, you configured the mapping of the source event field to the DeviceProcessName KUMA event field. Under Condition, in the DeviceProcessName field, specify a process name, and in the drop-down list, create or select a normalizer. You can specify multiple combinations of the DeviceProcessName KUMA event field and a normalizer. Normalization is performed until the first match.

Parsing with linking to IPv4 addresses is configured.
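The dispatch configured in this section can be sketched as follows. This is a simplified illustration; the normalizer names and the table layout are hypothetical:

```python
def pick_normalizer(parsing_table, source_ip, default=None):
    """Return the normalizer configured for the event source address.
    Each entry maps a set of IPv4 addresses to one normalizer."""
    for addresses, normalizer in parsing_table:
        if source_ip in addresses:
            return normalizer
    return default

# Hypothetical configuration: two event sources with different formats.
table = [
    ({"192.0.2.1", "192.0.2.2"}, "cisco_syslog"),
    ({"198.51.100.7"}, "nginx_json"),
]
print(pick_normalizer(table, "198.51.100.7"))  # nginx_json
```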

Creating a structure of event normalization rules

To implement complex event processing logic, you can add multiple event parsing rules to the normalizer. Events are switched between the parsing rules depending on the specified conditions. Events are processed sequentially in accordance with the creation order of the parsing rules. The event processing path is displayed as arrows.

To create an additional parsing rule:

  1. Create a normalizer. For more information on creating normalizers, see the Adding a normalizer to a resource set section earlier in this article.

    The created normalizer is displayed as a dark circle on the Event parsing tab of the Installation Wizard.

  2. Hover over the dark circle and click the plus sign that appears.
  3. In the Additional event parsing window that opens, configure the additional event parsing rule:
    • Extra normalization conditions tab:

      If you want to send a raw event for extra normalization, select Yes in the Keep raw event drop-down list. The default value is No. We recommend passing a raw event to normalizers of the json and xml types. If you want to send a raw event for extra normalization to the second, third, and subsequent nesting levels, select Yes in the Keep raw event drop-down list at each nesting level.

      To send only the events with a specific field to the additional normalizer, specify the field in the Field to pass into normalizer field.

      On this tab, you can define other conditions that must be satisfied for the event to be sent for additional parsing.

    • Normalization scheme tab:

      On this tab, you can configure event processing rules, similar to the main normalizer settings. The Keep raw event setting is not available. The Event examples field displays the values specified when the normalizer was created.

    • Enrichment tab:

      On this tab, you can configure enrichment rules for events. For more details on configuring enrichment rules, see the Enriching normalized events with additional data section earlier in this article.

  4. Click OK.

An additional parsing rule is added to the normalizer and displayed as a dark box. The dark box specifies the conditions that trigger the additional parsing rule.

You can do the following:

  • Click an additional parsing rule to edit its settings.
  • Find an additional parsing rule by entering its name in the field in the upper part of the window.
  • Create a new additional parsing rule. To do this, hover over an existing additional parsing rule and click the plus icon that appears.
  • Delete an additional parsing rule. To do this, hover over the additional parsing rule and click the trash can icon that appears.

Proceed to the next step of the Installation Wizard.


Step 4. Filtering events

This is an optional step of the Installation Wizard. The Event filtering tab of the Installation Wizard allows you to select or create a filter whose settings specify the conditions for selecting events. You can add multiple filters to the collector. You can swap the filters by dragging them by the DragIcon icon as well as delete them. Filters are combined by the AND operator.

When configuring filters, we recommend adhering to the chosen normalization scheme. In filters, use only KUMA service fields and the fields that you specified in the normalizer in the Mapping and Enrichment sections. For example, if the DeviceAddress field is not used in normalization, avoid using it in a filter because such filtering will not work.

To add an existing filter to a collector resource set,

Click the Add filter button and select the required filter from the Filter drop-down menu.

To add a new filter to the collector resource set:

  1. Click the Add filter button and select Create new from the Filter drop-down menu.
  2. If you want to keep the filter as a separate resource, select the Save filter check box. This can be useful if you decide to reuse the same filter across different services. This check box is cleared by default.
  3. If you selected the Save filter check box, enter a name for the created filter in the Name field. The name must contain 1 to 128 Unicode characters.
  4. In the Conditions section, specify the conditions that must be met by the filtered events:
    • The Add condition button is used to add filtering conditions. You can select two values (two operands, left and right) and assign the operation you want to perform with the selected values. The result of the operation is either True or False.
      • In the operator drop-down list, select the function to be performed by the filter.

        In this drop-down list, you can select the do not match case check box if the operator should ignore the case of values. This check box is ignored if the InSubnet, InActiveList, InCategory, and InActiveDirectoryGroup operators are selected. This check box is cleared by default.

        Filter operators

        • =—the left operand equals the right operand.
        • <—the left operand is less than the right operand.
        • <=—the left operand is less than or equal to the right operand.
        • >—the left operand is greater than the right operand.
        • >=—the left operand is greater than or equal to the right operand.
        • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
        • contains—the left operand contains values of the right operand.
        • startsWith—the left operand starts with one of the values of the right operand.
        • endsWith—the left operand ends with one of the values of the right operand.
        • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
        • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

          The value to be checked is converted to binary and processed right to left. The bits whose positions are specified in a constant or in a list are checked.

          If the value being checked is a string, an attempt is made to convert it to an integer and process it as described above. If the string cannot be converted to a number, the filter returns False.

        • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

          If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

        • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
        • inDictionary—checks whether the specified dictionary contains an entry defined by the key composed of the concatenated values of the selected event fields.
        • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
        • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
        • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
        • inContextTable—presence of the entry in the specified context table.
        • intersect—presence in the left operand of the list items specified in the right operand.
      • In the Left operand and Right operand drop-down lists, select where the data to be filtered will come from. As a result of the selection, Advanced settings will appear. Use them to determine the exact value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.
      • You can use the If drop-down list to choose whether you need to create a negative filter condition.

      Conditions can be deleted using the cross button.

    • The Add group button is used to add groups of conditions. The group operator can be switched between AND, OR, and NOT.

      A condition group can be deleted using the cross button.

    • By clicking Add filter, you can add existing filters selected in the Select filter drop-down list to the conditions. You can click edit-grey to navigate to a nested filter.

      A nested filter can be deleted using the cross button.

The filter has been added.
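The semantics of the inSubnet and hasBit operators described above can be sketched as follows. This only illustrates the documented behavior (assuming hasBit requires all listed bit positions to be set), not KUMA's implementation:

```python
import ipaddress

def has_bit(value, positions):
    """hasBit operator: the value is treated as a number and the bits at
    the listed positions (counted right to left from zero) are checked.
    A string that cannot be converted to a number fails the check."""
    try:
        number = int(value)
    except (TypeError, ValueError):
        return False
    return all(number >> p & 1 for p in positions)

def in_subnet(address, subnet):
    """inSubnet operator: the IP address belongs to the given subnet."""
    return ipaddress.ip_address(address) in ipaddress.ip_network(subnet)

print(has_bit("5", [0, 2]))                  # True: 5 is 101 in binary
print(in_subnet("10.1.2.3", "10.1.0.0/16"))  # True
```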

Proceed to the next step of the Installation Wizard.


Step 5. Event aggregation

This is an optional step of the Installation Wizard. The Event aggregation tab of the Installation Wizard allows you to select or create aggregation rules whose settings specify the conditions for aggregating events of the same type. You can add multiple aggregation rules to the collector.

To add an existing aggregation rule to a set of collector resources,

click Add aggregation rule and select Aggregation rule in the drop-down list.

To add a new aggregation rule to a set of collector resources:

  1. Click the Add aggregation rule button and select Create new from the Aggregation rule drop-down menu.
  2. Enter the name of the newly created aggregation rule in the Name field. The name must contain 1 to 128 Unicode characters.
  3. In the Threshold field, specify how many events must be accumulated before the aggregation rule triggers and the events are aggregated. The default value is 100.
  4. In the Triggered rule lifetime field, specify how long (in seconds) the collector must accumulate events to be aggregated. When this time expires, the aggregation rule is triggered and a new aggregation event is created. The default value is 60.
  5. In the Identical fields section, use the Add field button to select the fields that will be used to identify events of the same type. Selected fields can be deleted using the buttons with a cross icon.
  6. In the Unique fields section, you can click Add field to select the fields that will disqualify events from aggregation even if the events contain the fields listed in the Identical fields section. Selected fields can be deleted using the buttons with a cross icon.
  7. In the Sum fields section, you can use the Add field button to select the fields whose values will be summed during the aggregation process. Selected fields can be deleted using the buttons with a cross icon.
  8. In the Filter section, you can specify the conditions to define events that will be processed by this resource. You can select an existing filter from the drop-down list or create a new filter.

    Creating a filter in resources

    To create a filter:

    1. In the Filter drop-down list, select Create new.
    2. If you want to keep the filter as a separate resource, select the Save filter check box. In this case, you will be able to use the created filter in various services. This check box is cleared by default.
    3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. Maximum length of the name: 128 Unicode characters.
    4. In the Conditions settings block, specify the conditions that the events must meet:
      1. Click the Add condition button.
      2. In the Left operand and Right operand drop-down lists, specify the search parameters. Depending on the data source selected in the Right operand field, there may be fields of additional parameters for identifying the value to be passed to the filter. For example, when you select active list, you must specify the name of the active list, the entry key, and the entry key field.
      3. In the operator drop-down list, select an operator.

        Filter operators

        • =—the left operand equals the right operand.
        • <—the left operand is less than the right operand.
        • <=—the left operand is less than or equal to the right operand.
        • >—the left operand is greater than the right operand.
        • >=—the left operand is greater than or equal to the right operand.
        • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
        • contains—the left operand contains values of the right operand.
        • startsWith—the left operand starts with one of the values of the right operand.
        • endsWith—the left operand ends with one of the values of the right operand.
        • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
        • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

          The value to be checked is converted to binary and processed right to left. The bits whose positions are specified in a constant or in a list are checked.

          If the value being checked is a string, an attempt is made to convert it to an integer and process it as described above. If the string cannot be converted to a number, the filter returns False.

        • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

          If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

        • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
        • inDictionary—checks whether the specified dictionary contains an entry defined by the key composed of the concatenated values of the selected event fields.
        • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
        • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
        • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
        • inContextTable—presence of the entry in the specified context table.
        • intersect—presence in the left operand of the list items specified in the right operand.
      4. If you want the operator to be case-insensitive, select the do not match case check box. The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators. This check box is cleared by default.
      5. If you want to add a negative condition, select If not from the If drop-down list.

      You can add multiple conditions or a group of conditions.

    5. If you have added multiple conditions or groups of conditions, choose a selection condition (and, or, not) by clicking the AND button.
    6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button. You can view the nested filter settings by clicking the edit-grey button.

The aggregation rule is added. You can delete it by clicking the cross button.
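The hasBit check described above can be sketched in Python as follows. This is a hypothetical re-implementation for illustration only, not KUMA's actual code; the assumption that all listed bit positions must be set is inferred from the operator description.

```python
def has_bit(value, positions):
    """Sketch of the hasBit filter check: the value is interpreted as an
    integer and the bits at the given positions (counted from the right,
    starting at 0) are tested."""
    try:
        number = int(value)  # a string is first converted to an integer
    except (TypeError, ValueError):
        return False  # a non-numeric string makes the filter return False
    return all(number >> pos & 1 for pos in positions)

# 6 is 110 in binary: bits 1 and 2 are set, bit 0 is not.
print(has_bit(6, [1, 2]))    # True
print(has_bit("6", [0]))     # False
print(has_bit("abc", [0]))   # False
```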

Proceed to the next step of the Installation Wizard.

[Topic 220714]

Step 6. Event enrichment


This is an optional step of the Installation Wizard. On the Event enrichment tab of the Installation Wizard, you can specify which data from which sources should be added to events processed by the collector. Events can be enriched with data obtained using enrichment rules or LDAP.

Rule-based enrichment

There can be more than one enrichment rule. You can add them by clicking the Add enrichment button and can remove them by clicking the cross button. You can use existing enrichment rules or create rules directly in the Installation Wizard.

To add an existing enrichment rule to a set of resources:

  1. Click Add enrichment.

    This opens the enrichment rules settings block.

  2. In the Enrichment rule drop-down list, select the relevant resource.

The enrichment rule is added to the set of resources for the collector.

To create a new enrichment rule in a set of resources:

  1. Click Add enrichment.

    This opens the enrichment rules settings block.

  2. In the Enrichment rule drop-down list, select Create new.
  3. In the Source kind drop-down list, select the source of data for enrichment and define its corresponding settings:
    • constant

      This type of enrichment is used when a constant needs to be added to an event field. Available enrichment type settings are listed below.

      • Constant—the value to be added to the event field. Maximum length of the value: 255 Unicode characters. If you leave this field blank, the existing event field value is removed.
      • Target field—the KUMA event field that you want to populate with the data.

      If you are using the event enrichment functions for extended schema fields of "String", "Number", or "Float" type with a constant, the constant is added to the field.

      If you are using the event enrichment functions for extended schema fields of "Array of strings", "Array of numbers", or "Array of floats" type with a constant, the constant is added to the elements of the array.

    • dictionary

      This type of enrichment is used if you need to add a value from a dictionary of the Dictionary type. Available enrichment type settings are listed below.

      • Dictionary name—the dictionary from which the values are to be taken.
      • Key fields—event fields whose values are to be used for selecting a dictionary entry. To add an event field, click Add field. You can add multiple event fields.

      If you are using event enrichment with the dictionary type selected as the Source kind setting, and an array field is specified in the Key enrichment fields setting, when an array is passed as the dictionary key, the array is serialized into a string in accordance with the rules of serializing a single value in the TSV format.

      Example: The Key fields setting of the enrichment uses the SA.StringArrayOne extended schema field. The SA.StringArrayOne extended schema field contains the values "a", "b", "c". The following values are passed to the dictionary as the key: ['a','b','c'].

      If the Key enrichment fields setting uses an array extended schema field and a regular event schema field, the field values are separated by the "|" character when the dictionary is queried.

      Example: The Key enrichment fields setting uses the SA.StringArrayOne extended schema field and the Code string field. The SA.StringArrayOne extended schema field contains the values "a", "b", "c", and the Code string field contains the myCode sequence of characters. The following values are passed to the dictionary as the key: ['a','b','c']|myCode.
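The key composition rules above can be sketched in Python. This is a simplified illustration, not KUMA's actual serialization code; the exact quoting of array elements is assumed from the examples.

```python
def dictionary_key(field_values):
    """Sketch of how a dictionary key is composed: an array field is
    serialized like a single TSV value, and multiple key fields are
    joined with the "|" character."""
    parts = []
    for value in field_values:
        if isinstance(value, list):
            # Array fields are serialized as a single TSV-style value.
            parts.append("['" + "','".join(value) + "']")
        else:
            parts.append(str(value))
    return "|".join(parts)

# Mirrors the examples above:
print(dictionary_key([["a", "b", "c"]]))            # ['a','b','c']
print(dictionary_key([["a", "b", "c"], "myCode"]))  # ['a','b','c']|myCode
```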

    • event

      This type of enrichment is used when you need to write a value from another event field to the current event field. Settings of this type of enrichment:

      • In the Target field drop-down list, select the KUMA event field to which you want to write the data.
      • In the Source field drop-down list, select the event field whose value will be written to the target field.
      • In the Conversion settings block, you can create rules for modifying the original data before it is written to the KUMA event fields. The conversion type can be selected from the drop-down list. You can use the Add conversion and Delete buttons to add or delete a conversion, respectively. The order of conversions is important.

        Available conversions

        Conversions are modifications that are applied to a value before it is written to the event field. You can select one of the following conversion types from the drop-down list:

        • entropy—converts the value of the source field using the information entropy calculation function and places the result in the target field of the float type. The result of the conversion is a number. Calculating the information entropy makes it possible to detect DNS tunnels or compromised passwords, for example, when a user enters the password instead of the login and the password is logged in plain text.
        • lower—makes all characters of the value lowercase.
        • upper—makes all characters of the value uppercase.
        • regexp—converts a value using a specified RE2 regular expression. When you select this type of conversion, a field is displayed in which you must specify the RE2 regular expression.
        • substring—extracts characters in a specified range of positions. When you select this type of conversion, the Start and End fields are displayed, in which you must specify the range of positions.
        • replace—replaces a specified character sequence with another character sequence. When you select this type of conversion, the following fields are displayed:
          • Replace chars—the sequence of characters to be replaced.
          • With chars—the character sequence to be used instead of the character sequence being replaced.
        • trim—removes the specified characters from the beginning and from the end of the event field value. When you select this type of conversion, the Chars field is displayed in which you must specify the characters. For example, if a trim conversion with the Micromon value is applied to Microsoft-Windows-Sysmon, the new value is soft-Windows-Sys.
        • append—appends the specified characters to the end of the event field value. When you select this type of conversion, the Constant field is displayed in which you must specify the characters.
        • prepend—prepends the specified characters to the beginning of the event field value. When you select this type of conversion, the Constant field is displayed in which you must specify the characters.
        • replace with regexp is used to replace RE2 regular expression results with the specified character sequence. When you select this type of conversion, the following fields are displayed:
          • Expression is the RE2 regular expression whose results you want to replace.
          • With chars is the character sequence to be used instead of the character sequence being replaced.
        • Converting encoded strings to text:
          • decodeHexString—used to convert a HEX string to text.
          • decodeBase64String—used to convert a Base64 string to text.
          • decodeBase64URLString—used to convert a Base64url string to text.

          If a corrupted string is converted or a conversion error occurs, corrupted data may be written to the event field.

          During event enrichment, if the length of the encoded string exceeds the size of the field of the normalized event, the string is truncated and is not decoded.

          If the length of the decoded string exceeds the size of the event field into which the decoded value is to be written, the string is truncated to fit the size of the event field.

        Conversions when using the extended event schema

        Whether or not a conversion can be used depends on the type of extended event schema field being used:

        • For an additional field of the "String" type, all types of conversions are available.
        • For fields of the "Number" and "Float" types, the following types of conversions are available: regexp, substring, replace, trim, append, prepend, replaceWithRegexp, decodeHexString, decodeBase64String, and decodeBase64URLString.
        • For fields of "Array of strings", "Array of numbers", and "Array of floats" types, the following types of conversions are available: append and prepend.
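Two of the conversions above can be illustrated in Python. These are approximate re-implementations for illustration only; KUMA's own code may differ.

```python
import math
from collections import Counter

def entropy(value):
    """Shannon entropy of a string, in bits per character, as computed
    conceptually by the entropy conversion to flag high-randomness values."""
    counts = Counter(value)
    total = len(value)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def trim(value, chars):
    """The trim conversion: strip any of the given characters from both
    ends of the value (Python's str.strip has the same semantics)."""
    return value.strip(chars)

# The trim example from above:
print(trim("Microsoft-Windows-Sysmon", "Micromon"))  # soft-Windows-Sys

# A random-looking string has higher entropy than a repetitive one:
print(entropy("aaaaaaaa") < entropy("q8Zk2p7K"))     # True
```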

    • template

      This type of enrichment is used when you need to write a value obtained by processing Go templates into the event field. We recommend making sure that the value fits the size of the target field. Available enrichment type settings are listed below.

      • Template—the Go template. Event field names are passed in the {{.EventField}} format, where EventField is the name of the event field from which the value must be passed to the script, for example, {{.DestinationAddress}} attacked from {{.SourceAddress}}.
      • Target field—the KUMA event field that you want to populate with the data.

      If template is selected as the Source kind, the target field has the "String" type, and the source field is an extended event schema field containing an array of strings, you can use one of the following examples for the template:

      • {{.SA.StringArrayOne}}
      • {{- range $index, $element := .SA.StringArrayOne -}}

        {{- if $index}}, {{end}}"{{$element}}"{{- end -}}

      To convert the data in an array field in a template into the TSV format, use the toString function, for example:

      template {{toString .SA.StringArray}}

    • dns

      This type of enrichment is used to send requests to a private network DNS server to convert IP addresses into domain names or vice versa. IP addresses are converted to DNS names only for private addresses: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, 100.64.0.0/10.

      Available settings:

      • URL—in this field, you can specify the URL of a DNS server to which you want to send requests. You can use the Add URL button to specify multiple URLs.
      • RPS—maximum number of requests sent to the server per second. The default value is 1,000.
      • Workers—maximum number of concurrent requests. The default value is 1.
      • Max tasks—maximum number of simultaneously fulfilled requests. By default, this value is equal to the number of vCPUs of the KUMA Core server.
      • Cache TTL—the lifetime of the values stored in the cache, in seconds. The default value is 60.
      • Cache disabled—you can use this drop-down list to enable or disable caching. Caching is enabled by default.
    • cybertrace

      This type of enrichment is deprecated; we recommend using cybertrace-http instead.

      This type of enrichment is used to add information from CyberTrace data streams to event fields.

      Available settings:

      • URL (required)—in this field, you can specify the URL of a CyberTrace server to which you want to send requests. The default CyberTrace port is 9999.
      • Number of connections—maximum number of connections to the CyberTrace server that can be simultaneously established by KUMA. By default, this value is equal to the number of vCPUs of the KUMA Core server.
      • RPS—maximum number of requests sent to the server per second. The default value is 1,000.
      • Timeout—amount of time to wait for a response from the CyberTrace server, in seconds. The default value is 30.
      • Maximum number of events in the enrichment queue—maximum number of events stored in the enrichment queue for re-sending. The default value is 1,000,000,000.
      • Mapping (required)—this settings block contains the mapping table for mapping KUMA event fields to CyberTrace indicator types. The KUMA field column shows the names of KUMA event fields, and the CyberTrace indicator column shows the types of CyberTrace indicators.

        Available types of CyberTrace indicators:

        • ip
        • url
        • hash

        In the mapping table, you must provide at least one row. You can use the Add row button to add a row, and the cross button to remove a row.

    • cybertrace-http

      This is a new streaming event enrichment type in CyberTrace that allows you to send a large number of events with a single request to the CyberTrace API. Recommended for systems with a lot of events. Cybertrace-http outperforms the previous 'cybertrace' type, which is still available in KUMA for backward compatibility.

      Limitations:

      • The cybertrace-http enrichment type cannot be used for retroscan in KUMA.
      • If the cybertrace-http enrichment type is being used, detections are not saved in CyberTrace history in the Detections window.

      Available settings:

      • URL (required)—in this field, you can specify the URL of a CyberTrace server to which you want to send requests and the port that CyberTrace API is using. The default port is 443.
      • Secret (required) is a drop-down list in which you can select the secret which stores the credentials for the connection.
      • Timeout—amount of time to wait for a response from the CyberTrace server, in seconds. The default value is 30.
      • Key fields (required) is the list of event fields used for enriching events with data from CyberTrace.
      • Maximum number of events in the enrichment queue—maximum number of events stored in the enrichment queue for re-sending. The default value is 1,000,000,000. After reaching 1 million events received from the CyberTrace server, events stop being enriched until the number of received events is reduced to less than 500,000.
    • timezone

      This type of enrichment is used in collectors and correlators to assign a specific timezone to an event. Timezone information may be useful when searching for events that occurred at unusual times, such as nighttime.

      When this type of enrichment is selected, the required timezone must be selected from the Timezone drop-down list.

      Make sure that the required time zone is available on the server hosting the service that uses the enrichment. For example, you can use the timedatectl list-timezones command, which lists all time zones available on the server. For more details on setting time zones, refer to your operating system documentation.

      When an event is enriched, the time offset of the selected timezone relative to Coordinated Universal Time (UTC) is written to the DeviceTimeZone event field in the +-hh:mm format. For example, if you select the Asia/Yekaterinburg timezone, the value +05:00 will be written to the DeviceTimeZone field. If the enriched event already has a value in the DeviceTimeZone field, it will be overwritten.

      By default, if the timezone is not specified in the event being processed and enrichment rules by timezone are not configured, the event is assigned the timezone of the server hosting the service (collector or correlator) that processes the event. If the server time is changed, the service must be restarted.

      Permissible time formats when enriching the DeviceTimeZone field

      When processing incoming raw events in the collector, the following time formats can be automatically converted to the +-hh:mm format:

      Time format in a processed event    Example

      +-hh:mm                             -07:00
      +-hhmm                              -0700
      +-hh                                -07

      If the date format in the DeviceTimeZone field differs from the formats listed above, the collector server timezone is written to the field when an event is enriched with timezone information. You can create custom normalization rules for non-standard time formats.
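The conversion of the listed formats to the +-hh:mm form can be sketched as follows. This is a hypothetical illustration, not KUMA's actual parser; the fallback behavior for unsupported formats mirrors the note above.

```python
import re

def normalize_offset(value):
    """Sketch of normalizing the supported offset formats
    (+-hh:mm, +-hhmm, +-hh) to the +-hh:mm form."""
    match = re.fullmatch(r"([+-])(\d{2}):?(\d{2})?", value)
    if not match:
        return None  # unsupported format: the collector server timezone is used
    sign, hours, minutes = match.groups()
    return f"{sign}{hours}:{minutes or '00'}"

print(normalize_offset("-07:00"))  # -07:00
print(normalize_offset("-0700"))   # -07:00
print(normalize_offset("-07"))     # -07:00
print(normalize_offset("bad"))     # None
```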

    • geographic data

      This type of enrichment is used to add IP address geographic data to event fields. Learn more about linking IP addresses to geographic data.

      When this type is selected, in the Mapping geographic data to event fields settings block, you must specify from which event field the IP address will be read, select the required attributes of geographic data, and define the event fields in which geographic data will be written:

      1. In the Event field with IP address drop-down list, select the event field from which the IP address is read. Geographic data uploaded to KUMA is matched against this IP address.

        You can use the Add event field with IP address button to specify multiple event fields with IP addresses that require geographic data enrichment. You can delete event fields added in this way by clicking the Delete event field with IP address button.

        When the SourceAddress, DestinationAddress, and DeviceAddress event fields are selected, the Apply default mapping button becomes available. You can use this button to add preconfigured mapping pairs of geographic data attributes and event fields.

      2. For each event field you need to read the IP address from, select the type of geographic data and the event field to which the geographic data should be written.

        You can use the Add geodata attribute button to add Geodata attribute–Event field to write to field pairs. You can also configure different types of geographic data for one IP address to be written to different event fields. To delete a field pair, click cross-red.

        • In the Geodata attribute field, select which geographic data corresponding to the read IP address should be written to the event. Available geographic data attributes: Country, Region, City, Longitude, Latitude.
        • In the Event field to write to, select the event field to which the selected geographic data attribute must be written.

        You can write identical geographic data attributes to different event fields. If you configure multiple geographic data attributes to be written to the same event field, the event will be enriched with the last mapping in the sequence.

  4. Use the Debug toggle switch to indicate whether or not to enable logging of service operations. Logging is disabled by default.
  5. In the Filter section, you can specify conditions to identify events that will be processed by the enrichment rule resource. You can select an existing filter from the drop-down list or create a new filter.

    Creating a filter in resources

    To create a filter:

    1. In the Filter drop-down list, select Create new.
    2. If you want to keep the filter as a separate resource, select the Save filter check box. In this case, you will be able to use the created filter in various services. This check box is cleared by default.
    3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. Maximum length of the name: 128 Unicode characters.
    4. In the Conditions settings block, specify the conditions that the events must meet:
      1. Click the Add condition button.
      2. In the Left operand and Right operand drop-down lists, specify the search parameters. Depending on the data source selected in the Right operand field, there may be fields of additional parameters for identifying the value to be passed to the filter. For example, when you select active list, you must specify the name of the active list, the entry key, and the entry key field.
      3. In the operator drop-down list, select an operator.

        Filter operators

        • =—the left operand equals the right operand.
        • <—the left operand is less than the right operand.
        • <=—the left operand is less than or equal to the right operand.
        • >—the left operand is greater than the right operand.
        • >=—the left operand is greater than or equal to the right operand.
        • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
        • contains—the left operand contains values of the right operand.
        • startsWith—the left operand starts with one of the values of the right operand.
        • endsWith—the left operand ends with one of the values of the right operand.
        • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
        • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

          The value to be checked is converted to binary and processed right to left. The bits whose positions are specified in the constant or the list are checked.

          If the value being checked is a string, an attempt is made to convert it to an integer and process it as described above. If the string cannot be converted to a number, the filter returns False.

        • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

          If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

        • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
        • inDictionary—checks whether the specified dictionary contains an entry defined by the key composed with the concatenated values of the selected event fields.
        • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
        • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
        • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
        • inContextTable—presence of the entry in the specified context table.
        • intersect—presence in the left operand of the list items specified in the right operand.
      4. If you want the operator to be case-insensitive, select the do not match case check box. The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators. This check box is cleared by default.
      5. If you want to add a negative condition, select If not from the If drop-down list.

      You can add multiple conditions or a group of conditions.

    5. If you have added multiple conditions or groups of conditions, choose a selection condition (and, or, not) by clicking the AND button.
    6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button. You can view the nested filter settings by clicking the edit-grey button.

The new enrichment rule is added to the set of resources for the collector.

LDAP enrichment

To enable enrichment using LDAP:

  1. Click Add enrichment with LDAP data.

    This opens the settings block for LDAP enrichment.

  2. In the LDAP accounts mapping settings block, use the New domain button to specify the domain of the user accounts. You can specify multiple domains.
  3. In the LDAP mapping table, define the rules for mapping KUMA fields to LDAP attributes:
    • In the KUMA field column, specify the KUMA event field whose data should be compared to the LDAP attribute.
    • In the LDAP attribute column, specify the attribute that must be compared with the KUMA event field. The drop-down list contains standard attributes and can be augmented with custom attributes.

      Before configuring event enrichment using custom attributes, make sure that custom attributes are configured in AD.

      To enrich events with accounts using custom attributes:

      1. Add Custom AD Account Attributes in the LDAP connection settings.

        Standard imported attributes from AD cannot be added as custom attributes. For example, if you add the standard accountExpires attribute as a custom attribute, KUMA returns an error when saving the connection settings.

        The following account attributes can be requested from Active Directory:

        • accountExpires
        • badPasswordTime
        • cn
        • co
        • company
        • department
        • description
        • displayName
        • distinguishedName
        • division
        • employeeID
        • givenName
        • l
        • lastLogon
        • lastLogonTimestamp
        • Mail
        • mailNickname
        • managedObjects
        • manager
        • memberOf (this attribute can be used for search during correlation)
        • mobile
        • name
        • objectCategory
        • objectGUID (this attribute is always requested from Active Directory even if a user doesn't specify it)
        • objectSID
        • physicalDeliveryOfficeName
        • pwdLastSet
        • sAMAccountName
        • sAMAccountType
        • sn
        • streetAddress
        • telephoneNumber
        • title
        • userAccountControl
        • UserPrincipalName
        • whenChanged
        • whenCreated

        After you add custom attributes in the LDAP connection settings, the LDAP attribute to receive drop-down list in the collector automatically includes the new attributes. Custom attributes are identified by a question mark next to the attribute name. If you added the same attribute for multiple domains, the attribute is listed only once in the drop-down list. You can view the domains by moving your cursor over the question mark. Domain names are displayed as links. If you click a link, the domain is automatically added to LDAP accounts mapping if it was not previously added.

        If you deleted a custom attribute in the LDAP connection settings, manually delete the row containing the attribute from the mapping table in the collector. Account attribute information in KUMA is updated each time you import accounts.  

      2. Import accounts.
      3. In the collector, in the LDAP mapping table, define the rules for mapping KUMA fields to LDAP attributes.
      4. Restart the collector.

        After the collector is restarted, KUMA begins enriching events with accounts.

    • In the KUMA event field to write to column, specify in which field of the KUMA event the ID of the user account imported from LDAP should be placed if the mapping was successful.

    You can use the Add row button to add a row to the table, and the cross button to remove a row. You can use the Apply default mapping button to fill the mapping table with standard values.

Event enrichment rules for data received from LDAP are added to the set of resources for the collector.

If you add an enrichment to an existing collector using LDAP or change the enrichment settings, you must stop and restart the service.

Proceed to the next step of the Installation Wizard.

[Topic 220715]

Step 7. Routing

This is an optional step of the Installation Wizard. On the Routing tab of the Installation Wizard, you can select or create destinations with settings indicating the forwarding destination of events processed by the collector. Typically, events from the collector are routed to two destinations: to the correlator, which analyzes events and searches for threats; and to the storage, where processed events are kept and can be viewed later. If necessary, events can be sent elsewhere, for example, to the event router; in that case, select the 'internal' connector at the Transport step. There can be more than one destination.

To add an existing destination to a collector resource set:

  1. In the Routing step of the installation wizard, click Add.
  2. This opens the Create destination window; in that window, select the type of destination you want to add.
  3. In the Destination drop-down list, select the necessary destination.

    The window name changes to Edit destination, and it displays the settings of the selected resource. To open the settings of a destination for editing in a new browser tab, click edit-grey.

  4. Click Save.

The selected destination is displayed on the Installation Wizard tab. A destination resource can be removed from the resource set by selecting it and clicking Delete in the opened window.

To add a new destination resource to a collector resource set:

  1. In the Routing step of the installation wizard, click Add.
  2. This opens the Create destination window; in that window, specify the following settings:
    1. On the Basic settings tab, in the Name field, enter a unique name for the destination. The name must contain 1 to 128 Unicode characters.
    2. You can use the State toggle switch to enable or disable the service as needed.
    3. In the Type drop-down list, select the type of the destination. The following values are available:
    4. On the Advanced settings tab, specify the values of the parameters. The set of parameters that can be configured depends on the type of destination selected on the Basic settings tab. For detailed information about parameters and their values, click the link for each type of destination in step 3 of these instructions.

The created destination is displayed on the Installation Wizard tab. A destination resource can be removed from the resource set by selecting it and clicking Delete in the opened window.

Proceed to the next step of the Installation Wizard.

[Topic 220716]

Step 8. Setup validation

This is the required, final step of the Installation Wizard. At this step, KUMA creates a set of resources for the service, and services are created automatically based on this set:

  • The set of resources for the collector is displayed under Resources → Collectors. It can be used to create new collector services. When this set of resources changes, all services that operate based on it start using the new parameters after the services are restarted. To do so, you can use the Save and restart services and Save and update service configurations buttons.

    A set of resources can be modified, copied, moved from one folder to another, deleted, imported, and exported, like other resources.

  • Services are displayed in Resources → Active services. The services created using the Installation Wizard perform functions inside the KUMA program. To communicate with external parts of the network infrastructure, you need to install similar external services on the servers and assets intended for them. For example, an external collector service should be installed on a server intended as an events recipient, external storage services should be installed on servers that have a deployed ClickHouse service, and external agent services should be installed on the Windows assets that must both receive and forward Windows events.

To finish the Installation Wizard:

  1. Click Create and save service.

    The Setup validation tab of the Installation Wizard displays a table of services created based on the set of resources selected in the Installation Wizard. The lower part of the window shows examples of commands that you must use to install external equivalents of these services on their intended servers and assets.

    For example:

    /opt/kaspersky/kuma/kuma collector --core https://kuma-example:<port used for communication with the KUMA Core> --id <service ID> --api.port <port used for communication with the service> --install

    The "kuma" file can be found inside the installer in the /kuma-ansible-installer/roles/kuma/files/ directory.

    The port for communication with the KUMA Core, the service ID, and the port for communication with the service are added to the command automatically. You should also ensure the network connectivity of the KUMA system and open the ports used by its components if necessary.

  2. Close the Wizard by clicking Save collector.

The collector service is created in KUMA. Now the service must be installed on the server intended for receiving events.

If a wmi, wec, or etw connector was selected for collectors, you must also install the automatically created KUMA agents.

Page top
[Topic 220717]

Installing a collector in a KUMA network infrastructure

A collector consists of two parts: one part is created in the KUMA Console, and the other part is installed on the network infrastructure server intended for receiving events.

To install a collector:

  1. Log in to the server where you want to install the service.
  2. Execute the following command:

    sudo /opt/kaspersky/kuma/kuma collector --core https://<FQDN of the KUMA Core server>:<port used by KUMA Core for internal communication (port 7210 is used by default)> --id <service ID copied from the KUMA Console> --api.port <port used for communication with the installed component>

    Example: sudo /opt/kaspersky/kuma/kuma collector --core https://test.kuma.com:7210 --id XXXX --api.port YYYY

    If errors are detected as a result of the command execution, make sure that the settings are correct: for example, that you have the required access level, that there is network connectivity between the collector service and the Core, and that the selected API port is unique. After fixing the errors, continue installing the collector.

    If no errors are found and the collector status in the KUMA Console changes to green, stop the command and proceed to the next step.

    You can copy the command at the last step of the Installation Wizard. The address and port of the KUMA Core server, the identifier of the collector to be installed, and the port that the collector uses for communication are automatically specified in the command.

    When deploying several KUMA services on the same host, during the installation process you must specify unique ports for each component using the --api.port <port> parameter. By default, --api.port 7221 is used.

    Before installation, ensure the network connectivity of KUMA components.

  3. Run the command again by adding the --install key:

    sudo /opt/kaspersky/kuma/kuma collector --core https://<KUMA Core server FQDN>:<port used by KUMA Core server for internal communication (port 7210 by default)> --id <service ID copied from the KUMA Console> --api.port <port used for communication with the installed component> --install

    Example: sudo /opt/kaspersky/kuma/kuma collector --core https://kuma.example.com:7210 --id XXXX --api.port YYYY --install

  4. Add the KUMA collector port to firewall exclusions.

    For the program to run correctly, ensure that the KUMA components are able to interact with other components and programs over the network via the protocols and ports specified during the installation of the KUMA components.

The collector is installed. You can use it to receive data from an event source and forward it for processing.
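When several KUMA services share one host, each service needs its own --api.port value (see the note in step 2). As a quick pre-installation sanity check, the sketch below verifies that a candidate port is not already in use. It assumes a Linux host with the iproute2 ss utility; port 7222 is an arbitrary example value, not a KUMA default.

```shell
#!/bin/bash
# Sketch: check that a candidate --api.port value is free on this host.
# 7222 is an arbitrary example; the KUMA default is 7221.
port_is_free() {
  # ss -tln lists listening TCP sockets; the awk/grep pair matches the
  # local-address column against the requested port number.
  ! ss -tln 2>/dev/null | awk '{print $4}' | grep -q ":$1\$"
}

if port_is_free 7222; then
  echo "port 7222 is free; safe to pass as --api.port 7222"
else
  echo "port 7222 is in use; choose another --api.port value"
fi
```

If the port is taken, rerun the check with another value and pass the free port to the installation command.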

Page top
[Topic 220708]

Validating collector installation

To verify that the collector is ready to receive events:

  1. In the KUMA Console, open Resources → Active services.
  2. Make sure that the collector you installed has the green status.

If the status of the collector is not green, view the log of this service on the machine where it is installed, in the /opt/kaspersky/kuma/collector/<collector ID>/log/collector directory. Errors are logged regardless of whether debug mode is enabled or disabled.
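To narrow down the cause faster, you can filter the service log for error records. A minimal sketch, assuming the directory layout described above; COLLECTOR_ID is a placeholder that you substitute with the ID of your service:

```shell
#!/bin/bash
# Sketch: show the 20 most recent error lines from a collector log.
# COLLECTOR_ID is a placeholder; substitute the ID of your service.
COLLECTOR_ID="${COLLECTOR_ID:-XXXX}"
LOG="/opt/kaspersky/kuma/collector/${COLLECTOR_ID}/log/collector"
if [ -f "$LOG" ]; then
  # Case-insensitive match catches both "error" and "ERROR" records.
  grep -i "error" "$LOG" | tail -n 20
else
  echo "log file not found: $LOG"
fi
```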

If the collector is installed correctly and you are sure that data is coming from the event source, the table should display events when you search for events associated with the collector.

To check for normalization errors using the Events section of the KUMA Console:

  1. Make sure that the Collector service is running.
  2. Make sure that the event source is providing events to KUMA.
  3. Make sure that you selected Only errors in the Keep raw event drop-down list of the Normalizer resource in the Resources section of the KUMA Console.
  4. In the Events section of KUMA, search for events with the following parameters:

If any events are found with this search, it means that there are normalization errors and they should be investigated.

To check for normalization errors using the Grafana Dashboard:

  1. Make sure that the Collector service is running.
  2. Make sure that the event source is providing events to KUMA.
  3. Open the Metrics section and follow the KUMA Collectors link.
  4. See if the Errors section of the Normalization widget displays any errors.

Any errors displayed in this widget indicate normalization errors that should be investigated.

For collectors that use WEC, WMI, or ETW connectors as the transport, make sure that a unique port is used for connecting to the agent. This port is specified in the Transport section of the Collector Installation Wizard.

Page top
[Topic 221402]

Ensuring uninterrupted collector operation

An uninterrupted event stream from the event source to KUMA is important for protecting the network infrastructure. Continuity can be ensured through automatic forwarding of the event stream to a larger number of collectors:

  • On the KUMA side, two or more identical collectors must be installed.
  • On the event source side, you must configure control of event streams between collectors using third-party server load management tools, such as rsyslog or nginx.

With this configuration of the collectors in place, no incoming events will be lost if a collector server is unavailable for any reason.

Please keep in mind that when the event stream switches between collectors, each collector will aggregate events separately.

If the KUMA collector fails to start, and its log includes the "panic: runtime error: slice bounds out of range [8:0]" error:

  1. Stop the collector.

    sudo systemctl stop kuma-collector-<collector ID>

  2. Delete the DNS enrichment cache files.

    sudo rm -rf /opt/kaspersky/kuma/collector/<collector ID>/cache/enrichment/DNS-*

  3. Delete the event cache files (disk buffer). Run the command only if you can afford to jettison the events in the disk buffers of the collector.

    sudo rm -rf /opt/kaspersky/kuma/collector/<collector ID>/buffers/*

  4. Start the collector service.

    sudo systemctl start kuma-collector-<collector ID>

In this section

Event stream control using rsyslog

Event stream control using nginx

Page top
[Topic 238522]

Event stream control using rsyslog

To enable rsyslog event stream control on the event source server:

  1. Create two or more identical collectors that you want to use to ensure uninterrupted reception of events.
  2. Install rsyslog on the event source server (see the rsyslog documentation).
  3. Add rules for forwarding the event stream between collectors to the configuration file /etc/rsyslog.conf:

    *.* @@<main collector server FQDN>:<port for incoming events>
    $ActionExecOnlyWhenPreviousIsSuspended on
    *.* @@<backup collector server FQDN>:<port for incoming events>
    $ActionExecOnlyWhenPreviousIsSuspended off

    Example configuration file

    Example configuration file specifying one primary and two backup collectors. The collectors are configured to receive events on TCP port 5140.

    *.* @@kuma-collector-01.example.com:5140
    $ActionExecOnlyWhenPreviousIsSuspended on
    & @@kuma-collector-02.example.com:5140
    & @@kuma-collector-03.example.com:5140
    $ActionExecOnlyWhenPreviousIsSuspended off

  4. Restart rsyslog by running the following command:

    systemctl restart rsyslog
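The edited configuration can also be validated before the restart: the rsyslogd -N1 flag performs a syntax-only check without starting the daemon. A minimal sketch that degrades gracefully if rsyslog is not installed:

```shell
#!/bin/bash
# Sketch: syntax-check the rsyslog configuration without starting the daemon.
if command -v rsyslogd >/dev/null 2>&1; then
  rsyslogd -N1   # -N1 = validation run at level 1; exits nonzero on errors
else
  echo "rsyslogd not found; install rsyslog first"
fi
```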

Event stream control is now enabled on the event source server.

Page top
[Topic 238527]

Event stream control using nginx

To control the event stream using nginx, you need to create and configure an nginx server to receive events from the event source and then forward them to collectors.

To enable nginx event stream control on the event source server:

  1. Create two or more identical collectors that you want to use to ensure uninterrupted reception of events.
  2. Install nginx on the server intended for event stream control.
    • Installation command in Oracle Linux 8.6:

      $ sudo dnf install nginx

    • Installation command in Ubuntu 20.04:

      $ sudo apt-get install nginx

      When installing from sources, you must compile nginx with the --with-stream option:
      $ sudo ./configure --with-stream --without-http_rewrite_module --without-http_gzip_module

  3. On the nginx server, add a stream module with the rules for forwarding the event stream between collectors to the nginx.conf configuration file.

    Example stream module

    Example module in which the event stream is distributed between the collectors kuma-collector-01.example.com and kuma-collector-02.example.com, which receive events via TCP on port 5140 and via UDP on port 5141. Balancing is performed by the nginx server nginx.example.com.

    stream {
        upstream syslog_tcp {
            server kuma-collector-01.example.com:5140;
            server kuma-collector-02.example.com:5140;
        }
        upstream syslog_udp {
            server kuma-collector-01.example.com:5141;
            server kuma-collector-02.example.com:5141;
        }
        server {
            listen nginx.example.com:5140;
            proxy_pass syslog_tcp;
        }
        server {
            listen nginx.example.com:5141 udp;
            proxy_pass syslog_udp;
            proxy_responses 0;
        }
    }

    With a large number of active services and users, you may need to increase the limit of open files in the nginx.conf settings. For example:

    worker_rlimit_nofile 1000000;
    events {
        worker_connections 20000;
    }

    # worker_rlimit_nofile is the limit on the number of open files (RLIMIT_NOFILE) for worker processes. It is used to raise the limit without restarting the main process.
    # worker_connections is the maximum number of connections that a worker process can open simultaneously.

  4. Restart nginx by running the following command:

    systemctl restart nginx

  5. On the event source server, forward events to the nginx server.

Event stream control is now enabled on the event source server.

Nginx Plus may be required to fine-tune balancing, but certain balancing methods, such as Round Robin and Least Connections, are available in the base version of nginx.
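For example, switching an upstream from the default Round Robin method to Least Connections only requires adding the least_conn directive to the upstream block (hostnames follow the example above):

```
upstream syslog_tcp {
    least_conn;
    server kuma-collector-01.example.com:5140;
    server kuma-collector-02.example.com:5140;
}
```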

For more details on configuring nginx, please refer to the nginx documentation.

Page top
[Topic 238530]