Kaspersky Next XDR Expert

Data collection and analysis rules

Data collection and analysis rules are used to recognize events from stored data.

In contrast to real-time streaming correlation, data collection and analysis rules let you use SQL to recognize and analyze events that are already stored in the database.

To manage the section, you need one of the following roles: General administrator, Tenant administrator, Tier 1 analyst, Tier 2 analyst.

When creating or editing data collection and analysis rules, you need to specify the settings listed in the table below.

Settings of data collection and analysis rules

Setting

Description

Name

Required setting.

Unique name of the resource. Must contain 1 to 128 Unicode characters.

Tenant

Required setting.

The name of the tenant that owns the resource.

If you have access to only one tenant, this field is filled in automatically. If you have access to multiple tenants, the name of the first tenant from your list of available tenants is inserted. You can select any tenant from this list.

Sql

Required setting.

The SQL query must contain an aggregation function with a LIMIT and/or a data grouping with a LIMIT.

You must use a LIMIT value between 1 and 10,000.

Examples of SQL queries

  • A query containing only an aggregation function:

    SELECT count(DeviceCustomFloatingPoint1) AS `Aggregate` FROM `events` WHERE Type = 1 ORDER BY Aggregate DESC LIMIT 10

  • A query containing only data grouping:

    SELECT SourceAddress, DeviceCustomFloatingPoint1 FROM `events` WHERE Type = 1 GROUP BY SourceAddress, DeviceCustomFloatingPoint1 ORDER BY DeviceCustomFloatingPoint1 DESC LIMIT 10

  • A query containing an aggregation function and data grouping:

    SELECT SourceAddress, sum(DeviceCustomFloatingPoint1) FROM `events` WHERE Type = 1 GROUP BY SourceAddress, DeviceCustomFloatingPoint1 ORDER BY DeviceCustomFloatingPoint1 DESC LIMIT 10

  • A query containing an expression using aggregation functions:

    SELECT stddevPop(DeviceCustomFloatingPoint1) + avg(DeviceCustomFloatingPoint1) AS `Aggregate` FROM `events` WHERE Type = 1 ORDER BY Aggregate DESC LIMIT 10

You can also use the enrich and lookup SQL function sets.

Query interval

Required setting.

The interval for executing the SQL query. You can specify the interval in minutes, hours, or days. The minimum interval is 1 minute.

The default timeout of the SQL query is equal to the interval that you specify in this field.

If the execution of the SQL query takes longer than the timeout, an error occurs. In this case, we recommend increasing the interval. For example, if the interval is 1 minute, and the query takes 80 seconds to execute, we recommend setting the interval to at least 90 seconds.

Tags

Optional setting.

Tags for resource search.

Depth

Optional setting.

Expression for the lower bound of the interval for searching events in the database.

To select a value from the list or to specify the depth as a relative interval, place the cursor in the field. For example, if you want to find all events from one hour ago to now, set the relative interval to now-1h.

Description

Optional setting.

Description of data collection and analysis rules.

Mapping

Settings for mapping the fields of the SQL query result to KUMA event fields:

  • Source field is the field of the SQL query result that you want to map to a KUMA event field.
  • Event field is the KUMA event field. You can select one of the values in the list by placing the cursor in this field.
  • Label is a unique custom label for event fields that begin with DeviceCustom*.

You can add or delete table rows. To add a row, click Add mapping. To delete a row, select the check box next to it and click the button.

If you do not want to fill in the fields manually, you can click the Add mapping from SQL button. The field mapping table is then populated with the fields of the SQL query, including aliases (if any). For example, if an SQL query field is named SourceAddress and this name matches the name of an event field, this value is inserted into the Event field column of the field mapping table.

Clicking the Add mapping from SQL button again does not refresh the table; the fields from the SQL query are added to it again.
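
For example, a query along the following lines (a sketch that reuses the field names from the examples above) returns the SourceAddress column and an alias that both match KUMA event field names, so clicking Add mapping from SQL fills in the Source field and Event field columns automatically:

    SELECT SourceAddress, sum(DeviceCustomFloatingPoint1) AS DeviceCustomFloatingPoint1 FROM `events` WHERE Type = 1 GROUP BY SourceAddress ORDER BY DeviceCustomFloatingPoint1 DESC LIMIT 100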

You can create a data collection and analysis rule in one of the following ways:

  • In the Resources → Resources and services → Data collection and analysis rules section.
  • In the Events section.

To create a data collection and analysis rule in the Events section:

  1. Create or generate an SQL query and click the Data collection and analysis rules button.

    A new browser tab for creating a data collection and analysis rule opens, with the SQL query and Depth fields pre-filled. The field mapping table is also populated automatically if you did not use an asterisk (*) in the SQL query (see the sketch after this procedure).

  2. Fill in the required fields.

    If necessary, you can change the value in the Query interval field.

  3. Save the settings.

The data collection and analysis rule is saved and is available in the Resources and services → Data collection and analysis rules section.
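
If the SQL query in the Events section selects all fields with an asterisk (*), the field mapping table is not pre-filled. A sketch of a query with an explicit field list (again reusing the field names from the examples above) that allows the mapping table to be populated automatically:

    SELECT SourceAddress, count(DeviceCustomFloatingPoint1) AS DeviceCustomFloatingPoint1 FROM `events` WHERE Type = 1 GROUP BY SourceAddress ORDER BY DeviceCustomFloatingPoint1 DESC LIMIT 10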


Configuring the scheduler for a data collection and analysis rule

For a data collection and analysis rule to run, you must create a scheduler for it.

The scheduler runs SQL queries against the specified storage partitions at the interval and search depth configured in the rule, converts the SQL query results into base events, and sends those events to the correlator.

SQL query results converted to base events are not stored in the storage.

For the scheduler to work correctly, you must configure the link between the data collection and analysis rule, the storage, and the correlators in the Resources → Data collection and analysis section.

To manage this section, you need one of the following roles: General administrator, Tenant administrator, Tier 2 analyst, Access to shared resources, Manage shared resources.

The schedulers are arranged in the table by the date of their last launch. You can sort the data in columns in ascending or descending order by clicking the arrow icon in the column heading.

Available columns of the table of schedulers:

  • Rule name is the name of the data collection and analysis rule for which you created the scheduler.
  • Tenant name is the name of the tenant to which the data collection and analysis rule belongs.
  • Status is the status of the scheduler. The following values are possible:
    • Enabled means the scheduler is running, and the data collection and analysis rule will be started in accordance with the specified schedule.
    • Disabled means the scheduler is not running.

      This is the default status of a newly created scheduler. For the scheduler to run, it must be Enabled.

  • The scheduler finished at is the last time the scheduler's data collection and analysis rule was started.
  • Rule run status is the status with which the scheduler has finished. The following values are possible:
    • Ok means the scheduler finished without errors, the rule was started.
    • Unknown means the scheduler was Enabled and its status is currently unknown. The Unknown status is displayed if you have linked storages and correlators on the corresponding tabs and Enabled the scheduler, but have not yet started it.
    • Stopped means the scheduler is stopped, the rule is not running.
    • Error means the scheduler has finished, and the rule was completed with an error.
  • Last error lists errors (if any) that occurred during the execution of the data collection and analysis rule.

Failure to send events to the configured correlator does not constitute an error.

You can use the toolbar in the upper part of the table to perform actions on schedulers.

  • Add a new scheduler. Click the Add button, and in the displayed window, select the check boxes next to the names of the data collection and analysis rules for which you want to create a scheduler.

    In this window, you can select only data collection and analysis rules that have been created previously. You cannot create a new rule.

  • Remove a scheduler. In the table of schedulers, select the check boxes next to the schedulers that you want to delete and click the Delete button. The scheduler and links are removed. The data collection and analysis rule is not removed.
  • Enable a scheduler. In the table of schedulers, select the check boxes next to the schedulers that you want to enable and click the Enable on a schedule button. The data collection and analysis rule for which this scheduler was created will be executed in accordance with the schedule configured in the settings of this rule.
  • Disable a scheduler. In the table of schedulers, select the check boxes next to the schedulers that you want to disable and click the Disable button. The data collection and analysis rule is paused, but the scheduler itself and the links are not deleted.
  • Start the scheduler. In the table of schedulers, select the check boxes next to the enabled schedulers and click the Run now button. The data collection and analysis rules of the selected schedulers are executed immediately.

To edit the scheduler, click the corresponding line in the table.

Available scheduler settings for data collection and analysis rules are described below.

General tab

On this tab you can:

  • Enable or disable the scheduler using a toggle switch.

    If the toggle switch is enabled, the data collection and analysis rule runs in accordance with the schedule configured in its settings.

  • Edit the following settings of the data collection and analysis rule:
    • Name
    • Query interval
    • Depth
    • Sql
    • Description
    • Mapping

The Linked storages tab

On this tab you need to specify the storage to which the scheduler will send SQL queries.

To specify a storage:

  1. Click the Link button in the toolbar.
  2. This opens a window; in that window, specify the name of the storage to which you want to add the link, as well as the name of the section of the selected storage.

    You can select only one storage, but multiple sections of that storage.

  3. Click Add.

The link is created and displayed in the table on the Linked storages tab.

If necessary, you can remove the links by selecting the check boxes in the relevant rows of the table and clicking the Unlink selected button.

The Linked correlators tab

On this tab, you must add correlators for handling base events.

To add a correlator:

  1. Click the Link button in the toolbar.
  2. This opens a window; in that window, hover over the Correlator field.
  3. In the displayed list of correlators, select check boxes next to the correlators you want to add.
  4. Click Add.

The correlators are added and displayed in the table on the Linked correlators tab.

If necessary, you can remove the correlators by selecting the check boxes in the relevant rows of the table and clicking the Unlink selected button.

You can also view the results of the scheduler in the Core log; to do so, you must first configure Debug mode in the Core settings. To download the log, go to the Resources → Active services section in KUMA, select the Core service, and click the Log button.

Log records with scheduler results have the datamining scheduler prefix.
