Kaspersky Next XDR Expert

KUMA resources

Resources are KUMA components that contain parameters for implementing various functions: for example, establishing a connection with a given web address or converting data according to certain rules. Like parts of an erector set, these components are assembled into resource sets for services that are then used as the basis for creating KUMA services.

Resources are located in the Resources section, in the Resources block of the KUMA Console. The following resource types are available:

  • Correlation rules—resources of this type contain rules for identifying event patterns that indicate threats. If the conditions specified in these resources are met, a correlation event is generated.
  • Normalizers—resources of this type contain rules for converting incoming events into the format used by KUMA. After processing in the normalizer, the raw event becomes normalized and can be processed by other KUMA resources and services.
  • Connectors—resources of this type contain settings for establishing network connections.
  • Aggregation rules—resources of this type contain rules for combining several basic events of the same type into one aggregation event.
  • Enrichment rules—resources of this type contain rules for supplementing events with information from third-party sources.
  • Destinations—resources of this type contain settings for forwarding events to a destination for further processing or storage.
  • Filters—resources of this type contain criteria for selecting individual events from the event stream to be sent to processing.
  • Response rules—resources of this type are used in correlators to, for example, execute scripts or launch Open Single Management Platform tasks when certain conditions are met.
  • Data collection and analysis rules—resources of this type contain rules that allow scheduling SQL queries with aggregation functions to the storage. Data received from SQL queries is then used for correlation.
  • Notification templates—resources of this type are used when sending notifications about new alerts.
  • Active lists—resources of this type are used by correlators for dynamic data processing when analyzing events according to correlation rules.
  • Dictionaries—resources of this type are used to store keys and their values, which may be required by other KUMA resources and services.
  • Proxies—resources of this type contain settings for using proxy servers.
  • Secrets—resources of this type are used to securely store confidential information (such as credentials) that KUMA needs to interact with external services.

When you click a resource type, a window opens displaying a table with the available resources of this type. You can configure the set of columns for the table and their order, as well as configure how information is displayed in some columns:

  • You can show or hide columns in the menu that opens when you click the gear icon in the toolbar in the upper part of the table.
  • You can reorder the columns by dragging column headings to the left or right.
  • You can configure the display of information in a column by clicking the down arrow icon in the heading of the column.

The table contains the following columns:

  • Name—the name of the resource. Can be used to search for resources and sort them.
  • Updated—the date and time of the last update of a resource. Can be used to sort resources.
  • Created by—the name of the user who created a resource.
  • Description—the description of a resource.
  • Type—the type of the resource. Displayed for all types of resources, except Aggregation rules, Enrichment rules, Data collection and analysis rules, Filters, Active lists, Proxies.
  • Resource path—address in the resource tree. Displayed in the tree of folders, starting from the tenant in which the resource was created.
  • Tags—tags assigned to the resource. A resource can have more than one tag.

    Tags are part of the resource and are imported with the resource.

  • Package name—the name of the package in which the resource was imported from the repository.
  • Correlator—the list of correlators to which the correlation rule is linked. Displayed only for resources of the Correlation rule type.

    For a rule that is not linked to any correlator, the column is blank. If the list contains multiple values, these values are sorted alphabetically. You can filter the values in the Correlator column by clicking the heading of the column and selecting the correlation rules that you want displayed in the table: Without correlator or With correlator.

    For rules of the Shared tenant, the correlators of tenants to which you have access and which you selected in the tenant list are displayed.

    Values in the Correlator column are not editable. If you want to perform actions on correlators, you need to click the rule in the table to open the properties of the rule, and go to the Correlators tab.

  • MITRE techniques—the MITRE matrix techniques that this correlation rule covers. Displayed only for resources of the Correlation rule type. When you hover over a value, the name of the rule is displayed.

The table size is not limited. If you want to select all resources, scroll to the end of the table and select the Select all check box, which selects all available resources in the table.

The lower part of the resource table displays the number of resources from the tenants that are available to you:

  • Total is the total number of resources, or the number of resources with a filter or search applied.
  • Selected is the number of selected resources.

When filters are applied, the resource selection and the Selected value are reset. If the number of resources changes due to actions (for example, deletion) undertaken by another user, the displayed number of resources changes after you refresh the page, perform an action with a resource, or apply a filter.

Resources can be organized into folders. The folder structure is displayed in the left part of the window: root folders correspond to tenants and contain a list of all resources of the tenant. All other folders nested within the root folder display the resources of an individual folder. When a folder is selected, the resources it contains are displayed as a table in the right pane of the window.

Resources can be created, edited, copied, moved from one folder to another, and deleted. Resources can also be exported and imported.

KUMA comes with a set of predefined resources, which can be identified by the "[OOTB]<resource_name>" name. OOTB resources are protected from editing.

If you want to adapt a predefined OOTB resource to your organization's infrastructure:

  1. In the Resources → <resource type> section, select the OOTB resource that you want to edit.
  2. In the upper part of the KUMA Console, click Duplicate, then click Save.
  3. A new resource named "[OOTB]<resource_name> - copy" is displayed in the web interface.
  4. Edit the copy of the predefined resource as necessary and save your changes.

The adapted resource is available for use.

In this section

Operations with resources

Destinations

Normalizers

Aggregation rules

Enrichment rules

Data collection and analysis rules

Correlation rules

Filters

Active lists

Dictionaries

Response rules

Connectors

Secrets

Context tables


Operations with resources

To manage Kaspersky Unified Monitoring and Analysis Platform resources, you can create, move, copy, edit, delete, import, and export them. These operations are available for all resources, regardless of the resource type.

The lower part of the resource table displays the number of resources from the tenants that are available to you:

  • Total is the total number of resources, or the number of resources with a filter or search applied.
  • Selected is the number of selected resources.

When filters are applied, the resource selection and the Selected value are reset. If the number of resources changes due to actions (for example, deletion) undertaken by another user, the displayed number of resources changes after you refresh the page, perform an action with a resource, or apply a filter.

Kaspersky Unified Monitoring and Analysis Platform resources are arranged in folders. You can add, rename, move, or delete resource folders.

In this section

Creating, renaming, moving, and deleting resource folders

Creating, duplicating, moving, editing, and deleting resources

Bulk deletion of resources

Link correlators to a correlation rule

Updating resources

Exporting resources

Importing resources

Tag management

Resource usage tracing

Resource versioning


Creating, renaming, moving, and deleting resource folders

Resources can be organized into folders. The folder structure is displayed in the left part of the window: root folders correspond to tenants and contain a list of all resources of the tenant. All other folders nested within the root folder display the resources of an individual folder. When a folder is selected, the resources it contains are displayed as a table in the right pane of the window.

You can create, rename, move and delete folders.

To create a folder:

  1. Select the folder in the tree where the new folder is required.
  2. Click the Add folder button.

The folder will be created.

To rename a folder:

  1. Locate the required folder in the folder structure.
  2. Hover over the name of the folder.

    The More drop-down icon appears next to the name of the folder.

  3. Open the More drop-down list and select Rename.

    The folder name will become active for editing.

  4. Enter the new folder name and press ENTER.

    The folder name cannot be empty.

The folder will be renamed.

To move a folder,

Drag the folder to the required place in the folder structure by clicking and holding its name.

Folders cannot be dragged from one tenant to another.

To delete a folder:

  1. Select the relevant folder in the folder structure.
  2. Right-click to bring up the context menu and select Delete.

    The confirmation window appears.

  3. Click OK.

The folder will be deleted.

The application does not delete folders that contain resources or subfolders.


Creating, duplicating, moving, editing, and deleting resources

You can create, move, copy, edit, and delete resources.

To create the resource:

  1. In the Resources → <resource type> section, select or create a folder where you want to add the new resource.

    Root folders correspond to tenants. For a resource to be available to a specific tenant, it must be created in the folder of that tenant.

  2. Click the Add <resource type> button.

    The window for configuring the selected resource type opens. The available configuration parameters depend on the resource type.

  3. Enter a unique resource name in the Name field.
  4. Specify the required parameters (marked with a red asterisk).
  5. If necessary, specify the optional parameters.
  6. Click Save.

The resource will be created and available for use in services and other resources.

To move the resource to a new folder:

  1. In the Resources → <resource type> section, find the required resource in the folder structure.
  2. Select the check box near the resource you want to move. You can select multiple resources.

    The drag icon appears near the selected resources. The number of selected resources is displayed in the lower part of the table.

  3. Use the drag icon to drag and drop resources to the required folder.

The resources will be moved to the new folders.

You can only move resources to folders of the tenant in which the resources were created. Resources cannot be moved to another tenant's folders.

To copy the resource:

  1. In the Resources → <resource type> section, find the required resource in the folder structure.
  2. Select the check box next to the resource that you want to copy and click Duplicate.

    The number of selected resources is displayed in the lower part of the table.

    A window opens with the settings of the resource that you have selected for copying. The available configuration parameters depend on the resource type.

    The <selected resource name> - copy value is displayed in the Name field.

  3. Make the necessary changes to the parameters.
  4. Enter a unique name in the Name field.
  5. Click Save.

The copy of the resource will be created.

To edit the resource:

  1. In the Resources → <resource type> section, find the required resource in the folder structure.
  2. Select the resource.

    A window with the settings of the selected resource opens. The available configuration parameters depend on the resource type.

  3. Make the necessary changes to the parameters.
  4. Do one of the following:
    • Click Save to save your changes.
    • Click Save with a comment, and in the displayed window, add a comment that describes your changes. The changes are saved and the comment is added to the created version of the resource.

The resource is updated and a new version is created for it. If this resource is used in a service, restart the service to apply the new version of the resource.

If the current resource cannot be edited (for example, you cannot edit a correlation rule), you can go to the card of another resource by clicking the View button. This button becomes available in batch resources when you click another resource that is linked to your current resource.

If, when saving changes to a resource, it turns out that the current version of the resource has been modified by another user, you are prompted to select one of the following actions:

  • Save your changes as a new version of the resource on top of the changes made by the other user.
  • Save your changes as a new resource.

    In this case, a duplicate of the original resource is created with the changed settings. The "- copy" string is added to the name of the new resource, and the name and version of the resource that was duplicated is specified in the version comments of the new resource.

  • Discard your changes.

    Discarded changes cannot be restored.

To delete the resource:

  1. In the Resources → <resource type> section, find the required resource in the folder structure.
  2. Select the check box next to the resource that you want to delete and click Delete.

    The number of selected resources is displayed in the lower part of the table. A confirmation window opens.

  3. Click OK.

The resource and all its saved versions are deleted.


Bulk deletion of resources

In the KUMA Console, you can select multiple resources and delete them.

You must have the right to delete resources.

To delete resources:

  1. In the Resources → <resource type> section, find the required resource in the folder structure.
  2. Select the check boxes next to the resources that you want to delete.

    In the lower part of the table, you can see the total number of resources and the number of resources selected.

  3. Click Delete.

    This opens a window that tells you whether it is safe to delete resources, depending on whether the resources selected for deletion are linked to other resources.

    For all resources that cannot be deleted, the application displays a table of links in a modal window.

  4. Click Delete.

    Only resources without links are deleted.
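
The link check described above can be pictured as a small sketch. This is an illustration only, not KUMA code; the resource names and the link structure are hypothetical:

```python
# Illustration of the bulk-delete behavior described above: only resources
# that no other object links to are deleted. Names and links are made up.

def deletable(selected, links):
    """Return the subset of `selected` that has no inbound links.

    `links` maps a resource name to the set of resources it depends on.
    A resource counts as "linked" if any object lists it as a dependency.
    """
    linked = {dep for deps in links.values() for dep in deps}
    return {r for r in selected if r not in linked}

links = {
    "collector-1": {"normalizer-a", "destination-b"},
    "correlator-1": {"filter-c"},
}
selected = {"normalizer-a", "filter-c", "dictionary-d"}

# Only dictionary-d is safe to delete; the other two are linked.
print(sorted(deletable(selected, links)))  # ['dictionary-d']
```

For the linked resources, KUMA instead shows the table of links so that you can unlink them first.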

Deleting folders with resources

You can select the delete operation for any folder at any level, except the tenant.

To delete a folder with resources:

  1. In the Resources section, select a folder.
  2. Click the three-dots button and select the Delete option.

    This opens a window prompting you to confirm deletion. The window displays a field in which you can enter the generated value. Also, if dependent resources exist in the folder, a list of dependencies is displayed.

  3. Enter the generated value.
  4. Confirm the deletion.

You can delete a folder if:

  • The folder does not contain any subfolders or resources.
  • The folder does not contain any subfolders, but does contain unlinked resources.
  • None of the resources in the folder are dependencies of anything (services, resources, integrations).


Link correlators to a correlation rule

The Link correlators option is available for the created correlation rules.

To link correlators:

  1. In the KUMA web interface, in the Resources → Correlation rules section, select the created correlation rule and go to the Correlators tab.
  2. This opens the Correlators window; in that window, select one or more correlators by selecting the check box next to them.
  3. Click OK.

Correlators are linked to a correlation rule.

The rule is added to the end of the execution queue in each selected correlator. If you want to move the rule up in the execution queue, go to Resources → Correlators → <selected correlator> → Edit correlator → Correlation, select the check box next to the relevant rule, and use the Move up or Move down buttons to reorder the rules as necessary.
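
The execution queue behaves like an ordered list: a newly linked rule lands at the end, and Move up swaps it with its predecessor. A minimal sketch (illustration only; rule names are hypothetical):

```python
# Models the Move up behavior described above: swap a rule with the one
# directly before it in the correlator's execution queue.
def move_up(queue: list, rule: str) -> None:
    i = queue.index(rule)
    if i > 0:  # the first rule has nowhere to move
        queue[i - 1], queue[i] = queue[i], queue[i - 1]

queue = ["rule-a", "rule-b", "rule-new"]  # a newly linked rule is last
move_up(queue, "rule-new")
print(queue)  # ['rule-a', 'rule-new', 'rule-b']
```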


Updating resources

Kaspersky regularly releases packages with resources that can be imported from the repository. You can specify an email address in the settings of the Repository update task. After the first execution of the task, KUMA starts sending notifications about the packages available for update to the specified address. You can update the repository, analyze the contents of each update, and decide whether to import and deploy the new resources in the operating infrastructure. KUMA supports updates from Kaspersky servers and from custom sources, including offline updates using the update mirror mechanism. If you have other Kaspersky products in your infrastructure, you can connect KUMA to their existing update mirrors. The update subsystem expands the ability of KUMA to respond to changes in the threat landscape and the infrastructure. The capability to use it without direct Internet access ensures the privacy of the data processed by the system.

To update resources, perform the following steps:

  1. Update the repository to deliver the resource packages to the repository. The repository update is available in two modes:
    • Automatic update
    • Manual update
  2. Import the resource packages from the updated repository into the tenant.

For the service to start using the resources, make sure that the updated resources are mapped after performing the import. If necessary, link the resources to collectors, correlators, or agents, and update the settings.

To enable automatic update:

  1. In the Settings → Repository update section, configure the Data refresh interval in hours. The default value is 24 hours.
  2. Specify the Update source. The following options are available:
    • Kaspersky update servers.

      You can view the list of update servers in the Knowledge Base.

    • Custom source:
      • The URL to the shared folder on the HTTP server.
      • The full path to the local folder on the host where the KUMA Core is installed.

        If a local folder is used, the kuma system user must have read access to this folder and its contents.

  3. If necessary, in the Proxy server list, select an existing proxy server to be used when running the Repository update task.

    You can also create a new proxy server by clicking the Add button.

  4. Specify the Emails for notification by clicking the Add button. The notifications that new packages or new versions of the packages imported into the tenant are available in the repository are sent to the specified email addresses.

    If you specify the email address of a KUMA user, the Receive email notifications check box must be selected in the user profile. For emails that do not belong to any KUMA user, the messages are received without additional settings. The settings for connecting to the SMTP server must be specified in all cases.

  5. Click Save. The update task starts shortly. Then the task restarts according to the schedule.
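
If you use a local folder as a custom source, you can quickly verify that the folder and its contents are readable before saving the settings. The sketch below is a generic check, not part of KUMA; run it as the kuma system user, and note that the path is a placeholder:

```python
# Verifies that the current user can read a folder and everything in it,
# as required for a local-folder custom update source.
import os

def folder_is_readable(path: str) -> bool:
    """True if the folder and all nested entries are readable."""
    if not os.access(path, os.R_OK | os.X_OK):
        return False
    for root, dirs, files in os.walk(path):
        for name in dirs + files:
            if not os.access(os.path.join(root, name), os.R_OK):
                return False
    return True

print(folder_is_readable("/srv/kuma-updates"))  # placeholder path
```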

To manually start the repository update:

  1. To disable automatic updates, in the Settings → Repository update section, select the Disable automatic update check box. This check box is cleared by default. You can also start a manual repository update without disabling automatic update. Starting an update manually does not affect the automatic update schedule.
  2. Specify the Update source. The following options are available:
    • Kaspersky update servers.
    • Custom source:
      • The URL to the shared folder on the HTTP server.
      • The full path to the local folder on the host where the KUMA Core is installed.

        If a local folder is used, the kuma system user must have read access to this folder and its contents.

  3. If necessary, in the Proxy server list, select an existing proxy server to be used when running the Repository update task.

    You can also create a new proxy server by clicking the Add button.

  4. Specify the Emails for notification by clicking the Add button. The notifications that new packages or new versions of the packages imported into the tenant are available in the repository are sent to the specified email addresses.

    If you specify the email address of a KUMA user, the Receive email notifications check box must be selected in the user profile. For emails that do not belong to any KUMA user, the messages are received without additional settings. The settings for connecting to the SMTP server must be specified in all cases.

  5. Click Run update. This saves the settings and starts the Repository update task manually.

Configuring a custom source using Kaspersky Update Utility

You can update resources without Internet access by using a custom update source via the Kaspersky Update Utility.

Configuration consists of the following steps:

  1. Configuring a custom source using Kaspersky Update Utility:
    1. Installing and configuring Kaspersky Update Utility on one of the computers in the corporate LAN.
    2. Configuring copying of updates to a shared folder in Kaspersky Update Utility settings.
  2. Configuring update of the KUMA repository from a custom source.

Configuring a custom source using Kaspersky Update Utility:

You can download the Kaspersky Update Utility distribution kit from the Kaspersky Technical Support website.

  1. In Kaspersky Update Utility, enable the download of updates for KUMA 3.4:
    • Under Applications → Perimeter control, select the check box next to KUMA 3.4 to enable the update capability.
    • If you work with Kaspersky Update Utility using the command line, add the following line to the [ComponentSettings] section of the updater.ini configuration file or specify the true value for an existing line:

      KasperskyUnifiedMonitoringAndAnalysisPlatform_3_4=true

  2. In the Downloads section, specify the update source. By default, Kaspersky update servers are used as the update source.
  3. In the Downloads section, in the Update folders group of settings, specify the shared folder for Kaspersky Update Utility to download updates to. The following options are available:
    • Specify the local folder on the host where Kaspersky Update Utility is installed. Deploy an HTTP server for distributing updates and publish the local folder on it. In KUMA, in the Settings → Repository update → Custom source section, specify the URL of the local folder published on the HTTP server.
    • Specify the local folder on the host where Kaspersky Update Utility is installed. Make this local folder available over the network. Mount the network-accessible local folder on the host where KUMA is installed. In KUMA, in the Settings → Repository update → Custom source section, specify the full path to the local folder.

For detailed information about working with Kaspersky Update Utility, refer to the Kaspersky Knowledge Base.
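
The command-line configuration described above can also be scripted. This sketch uses Python's configparser to set the documented key in the [ComponentSettings] section; the updater.ini path is a placeholder for the utility's actual configuration file:

```python
# Sets the KUMA update key in updater.ini, as described for command-line
# use of Kaspersky Update Utility. The file path is a placeholder.
import configparser

def enable_kuma_updates(ini_path: str) -> None:
    config = configparser.ConfigParser()
    config.optionxform = str  # preserve the key's case exactly
    config.read(ini_path)     # a missing file is silently treated as empty
    if not config.has_section("ComponentSettings"):
        config.add_section("ComponentSettings")
    config["ComponentSettings"]["KasperskyUnifiedMonitoringAndAnalysisPlatform_3_4"] = "true"
    with open(ini_path, "w") as f:
        config.write(f)

enable_kuma_updates("updater.ini")  # placeholder path
```

Note the optionxform override: by default configparser lowercases option names, which would mangle the documented key.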


Exporting resources

If shared resources are hidden for a user, the user cannot export shared resources or resources that use shared resources.

To export resources:

  1. In the Resources section, click Export resources.

    The Export resources window opens with the tree of all available resources.

  2. In the Password field, enter the password that must be used to protect the exported data.
  3. In the Tenant drop-down list, select the tenant whose resources you want to export.
  4. Select the check boxes next to the resources that you want to export.

    If selected resources are linked to other resources, linked resources will be exported, too. The number of selected resources is displayed in the lower part of the table.

  5. Click the Export button.

The current versions of the resources are saved in a password-protected file on your computer in accordance with your browser settings. Previous versions of the resources are not saved in the file. Secret resources are exported blank.

To export a previous version of a resource:

  1. In the KUMA Console, in the Resources section, select the type of resources that you need.

    This opens a window with a table of available resources of this type.

    If you want to view all resources, in the Resources section, go to the List tab.

  2. Select the check box for the resource whose change history you want to view, and click the Show version history button in the upper part of the table.

    This opens the window with the version history of the resource.

  3. Click the row of the version that you want to export and click the Export button in the lower part of the displayed window.

    You can only export a previous version of a resource. The Export button is not displayed when the current version of the resource is selected.

The resource version is saved in a JSON file on your computer in accordance with your browser settings.


Importing resources

In KUMA 3.4, we recommend using resources from the "[OOTB] KUMA 3.4 resources" package and resources published in the repository after the release of this package.

To import resources:

  1. In the Resources section, click Import resources.

    The Resource import window opens.

  2. In the Tenant drop-down list, select the tenant to assign the imported resources to.
  3. In the Import source drop-down list, select one of the following options:
    • File

      If you select this option, enter the password and click the Import button.

    • Repository

      If you select this option, a list of packages available for import is displayed. We recommend that you make sure the repository update date is relatively recent, and configure automatic updates if necessary.

      You can select one or more packages to import and click the Import button. The dependent resources of the Shared tenant are imported into the Shared tenant, the rest of the resources are imported into the selected tenant. You do not need special rights for the Shared tenant; you must only have the right to import in the selected tenant.

      Imported resources marked as "This resource is a part of the package. You can delete it, but it is impossible to edit." can only be deleted. To rename, edit or move an imported resource, make a copy of the resource using the Duplicate button and perform the desired actions with the resource copy. When importing future versions of the package, the duplicate is not updated because it is a separate object.

      Imported resources in the "Integration" directory can be edited; such resources are marked as "This resource is a part of the package". A Dictionary of the "Table" type can be added to the batch resource located in the "Integration" directory; adding other resources is not allowed. When importing future versions of the package, the edited resource will not be replaced with the corresponding resource from the package, which allows you to keep the changes you made.

  4. Resolve the conflicts between the resources imported from the file and the existing resources if they occur. Read more about resource conflicts below.
    1. If the name, type, and guid of an imported resource fully match the name, type, and guid of an existing resource, the Conflicts window opens with the table displaying the type and the name of the conflicting resources. Resolve displayed conflicts:
      • To replace the existing resource with a new one, click Replace.

        To replace all conflicting resources, click Replace all.

      • To leave the existing resource, click Skip.

        For dependent resources, that is, resources that are associated with other resources, the Skip option is not available; you can only Replace dependent resources.

        To keep all existing resources, click Skip all.

    2. Click the Resolve button.

    The resources are imported to KUMA. The Secret resources are imported blank.

Importing resources that use the extended event schema

If you import a normalizer that uses one or more fields of the extended event schema, KUMA automatically creates an extended schema field that is used in the normalizer.

If you import other types of resources that use fields of the extended event schema in their logic, the resources are imported successfully. To make sure the imported resources work as intended, you need to create the corresponding extended schema fields in the Settings → Extended event schema fields section or import a normalizer that uses the required fields.

If a normalizer that uses an extended event schema field is imported into KUMA and the same field already exists in KUMA, the previously created field is used.

If a normalizer is imported into KUMA that uses an extended event schema field that does not meet the KUMA requirements, the import is completed, but the extended event schema field is created with the Disabled status, and you cannot use this field in other normalizers and resources. An extended event schema field does not meet the requirements if, for example, its name contains special characters or spaces. If you want to use such a field, fix the problems (for example, by renaming the field) and then enable it.
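
A check of this kind can be sketched as follows. The exact naming rules are an assumption here: this illustration only rejects names containing spaces or characters other than letters, digits, and underscores:

```python
# Illustrative validity check for an extended event schema field name.
# The precise KUMA naming rules are assumed, not taken from the product.
import re

FIELD_NAME = re.compile(r"^[A-Za-z][A-Za-z0-9_]*$")

def field_status(name: str) -> str:
    """Return the status an imported field would get under these rules."""
    return "Enabled" if FIELD_NAME.fullmatch(name) else "Disabled"

print(field_status("src_bytes_total"))  # Enabled
print(field_status("src bytes total"))  # Disabled: contains spaces
```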

About conflict resolving

When resources are imported into KUMA from a file, they are compared with existing resources; the following parameters are compared:

  • Name and kind. If an imported resource's name and kind parameters match those of the existing one, the imported resource's name is automatically changed.
  • ID. If identifiers of two resources match, a conflict appears that must be resolved by the user. This could happen when you import resources to the same KUMA server from which they were exported.
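
The comparison above can be summarized in a short sketch. The data is hypothetical, and the automatic rename suffix is an assumption; the documentation only states that the name is changed:

```python
# Models the import comparison described above: a matching ID raises a
# conflict for the user, while a matching name and kind triggers an
# automatic rename. The " - copy" suffix is an assumption.

def compare(imported: dict, existing: list) -> str:
    for res in existing:
        if imported["id"] == res["id"]:
            return "conflict"          # user chooses replace or skip
    if any(imported["name"] == r["name"] and imported["kind"] == r["kind"]
           for r in existing):
        imported["name"] += " - copy"  # automatic rename
        return "renamed"
    return "imported"

existing = [{"id": "1", "name": "http", "kind": "connector"}]
print(compare({"id": "1", "name": "http", "kind": "connector"}, existing))  # conflict
print(compare({"id": "2", "name": "http", "kind": "connector"}, existing))  # renamed
print(compare({"id": "3", "name": "tcp",  "kind": "connector"}, existing))  # imported
```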

When resolving a conflict, you can choose either to replace the existing resource with the imported one or to keep the existing resource and skip the imported one.

If you choose to replace, the imported resource is added as a new version of the existing resource. An "imported resource" comment is added to this version.

Some resources are linked: for example, in some types of connectors, the connector secret must be specified. The secrets are also imported if they are linked to a connector. Such linked resources are exported and imported together.

Special considerations of import:

  1. Resources are imported to the selected tenant.
  2. If a linked resource was in the Shared tenant, it ends up in the Shared tenant when imported.
  3. In the Conflicts window, the Parent column always displays the top-most parent resource among those that were selected during import.
  4. If a conflict occurs during import and you choose to replace the existing resource with a new one, all other resources linked to the replaced resource are automatically replaced with the imported resources.

Known errors:

  1. The linked resource ends up in the tenant specified during the import, and not in the Shared tenant, as indicated in the Conflicts window, under the following conditions:
    1. The linked resource is initially in the Shared tenant.
    2. In the Conflicts window, you select Skip for all parent objects of the linked resource from the Shared tenant.
    3. You leave the linked resource from the Shared tenant for replacement.
  2. After importing, the categories do not have a tenant specified in the filter under the following conditions:
    1. The filter contains linked asset categories from different tenants.
    2. Asset category names are the same.
    3. You are importing this filter with linked asset categories to a new server.
  3. In Tenant 1, the name of the asset category is duplicated under the following conditions:
    1. In Tenant 1, you have a filter with linked asset categories from Tenant 1 and the Shared tenant.
    2. The names of the linked asset categories are the same.
    3. You are importing such a filter from Tenant 1 to the Shared tenant.
  4. You cannot import conflicting resources into the same tenant.

    The error "Unable to import conflicting resources into the same tenant" means that the imported package contains conflicting resources from different tenants and cannot be imported into the Shared tenant.

    Solution: Select a tenant other than Shared to import the package. In this case, during the import, resources originally located in the Shared tenant are imported into the Shared tenant, and resources from the other tenant are imported into the tenant selected during import.

  5. Only the general administrator can import categories into the Shared tenant.

    The error "Only the general administrator can import categories into the Shared tenant" means that the imported package contains resources with linked shared asset categories. You can see the categories or resources with linked shared asset categories in the KUMA Core log. Path to the Core log:

    /opt/kaspersky/kuma/core/log/core

    Solution. Choose one of the following options:

    • Do not import resources to which shared categories are linked: clear the check boxes next to the relevant resources.
    • Perform the import under a General administrator account.
  6. Only the general administrator can import resources into the Shared tenant.

    The error "Only the general administrator can import resources into the Shared tenant" means that the imported package contains resources with linked shared resources. You can see the resources with linked shared resources in the KUMA Core log. Path to the Core log:

    /opt/kaspersky/kuma/core/log/core

    Solution. Choose one of the following options:

    • Do not import resources that have linked resources from the Shared tenant, and the shared resources themselves: clear the check boxes next to the relevant resources.
    • Perform the import under a General administrator account.
[Topic 242787]

Tag management

To help you manage resources, the KUMA Console lets you add tags to resources. You can use tags to search for components, as well as manage tags and link or unlink them.

You cannot add tags to resources that are created from the interface of other resources. Tags can be added only from the resource's own card. You also cannot add tags to a resource that is not editable.

Tag management

The list of tags is displayed in the Settings → Tags section and is displayed as a table with the following columns: Name, Tenant, Used in resources.

In the Tags table, you can:

  • Sort tags by the Name and Used in resources columns.
  • Filter by values of the Tenant field.
  • Find a tag by the Name field.
  • Go to the list of resources that have the selected tag.

Adding a tag

To add a tag:

  1. Go to the Resources section and select a resource.
  2. In the panel above the table, click Add.
  3. In the Tags field of the selected resource, add a new tag, or select a tag from the list.
  4. Click Create.

The new tag is added.

You can also add a tag by selecting one of the existing tags.

When adding a tag, keep in mind the following special considerations:

  • You can add multiple tags.
  • A tag can contain characters of various alphabets (for example, Cyrillic, Latin, or Greek characters), numerals, underscores, and spaces.
  • A tag may not contain any special characters other than the underscore and the space.
  • You can enter the tag in uppercase or lowercase, but after saving, the tag is always displayed in lowercase.
  • The tag inherits the tenant of the resource in which it is used.
  • A tag is part of a resource and exists as long as the resource exists in which the tag was created or is used.
  • Tags are unique within a tenant.
  • Tags are imported or exported together with the resource as part of the resource.
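The tag constraints listed above can be illustrated with a small validation and normalization sketch; the exact rule set is an assumption derived from the listed constraints:

```python
import re

# Assumption based on the constraints above: letters of any alphabet,
# digits, underscores, and spaces are allowed (\w matches Unicode letters
# and digits in Python 3); everything else is rejected.
TAG = re.compile(r"^[\w ]+$")

def is_valid_tag(tag: str) -> bool:
    return bool(TAG.match(tag))

def normalize_tag(tag: str) -> str:
    """Tags may be entered in any case but are saved in lowercase."""
    if not is_valid_tag(tag):
        raise ValueError(f"invalid tag: {tag!r}")
    return tag.lower()
```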

Searching by tags

In the Resources section, you can search for resources:

  • By tags
  • By resource name

The search is performed across all resource types and services.

The search results display a list of resources and services.

To find resources by tags:

  1. Go to the Resources section and select a resource.
  2. In the table of the resource, select the Tags column.
  3. In the Search field that is displayed, enter or select a tag name.

A list of resources that use the specified tag is displayed.

In the list of resources, you can:

  • Sort the list by name and type of resource or service.
  • Filter resources or services by resource or service type, or by tag.
  • Link or unlink tags.

Linking and unlinking tags

To link tags to a resource or unlink tags from a resource:

  1. Go to the Resources section.
  2. Select the List tab.
  3. In the Name column, select the check boxes next to the relevant resources.
  4. In the panel above the list, select the Tags tab.
  5. Click the Link or Unlink button and select the tags that you want to link or unlink.

The selected tags are linked to or unlinked from the resources.

[Topic 292781]

Resource usage tracing

For stable operation of KUMA, it is important to understand how some resources affect the performance of other resources and what connections exist between resources and other KUMA objects. You can visualize these interdependencies on an interactive graph in the KUMA Console.

Displaying the links of a resource on a graph

To display the relations of the selected resource:

  1. In the KUMA Console, in the Resources section, select a resource type.

    A list of resources of the selected type is displayed.

  2. Select the resource that you need.

    The Show dependencies button in the panel above the list of resources becomes active. On a narrow display, the button may be hidden under the three-dots icon.

  3. Click the Show dependencies button.

    This opens a window with the dependency graph of the selected resource. If you do not have rights to view a resource, it is marked on the graph with the inaccessible resource icon. If necessary, you can close the graph window to go back to the list of resources.

Resource dependency graph

The graph displays all relations that are formed based on the universal unique identifier (UUID) of resources used in the configuration of the resource selected for display, as well as relations of resources that have the UUID of the selected resource in their configuration. Downward links, that is, resources referenced (used) by the selected resource, are displayed down to the last level, while for upward links, that is, resources that reference the selected resource, only one level is displayed.
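The traversal rule above (downward links followed to the last level, upward links displayed one level only) can be sketched as follows; the `uses`/`used_by` maps and the function name are illustrative, not KUMA's actual API:

```python
def dependency_view(uses, used_by, start):
    """Collect the resources shown on the graph for `start`.

    `uses` maps a resource UUID to the UUIDs it references (downward links);
    `used_by` maps a UUID to the UUIDs that reference it (upward links).
    Downward links are followed transitively; upward links are taken
    one level deep only, as described above."""
    downward, stack = set(), [start]
    while stack:
        node = stack.pop()
        for dep in uses.get(node, []):
            if dep not in downward:
                downward.add(dep)
                stack.append(dep)
    upward = set(used_by.get(start, []))
    return downward, upward
```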

On the graph, you can view the dependencies of the following resources:

Clicking a resource node lets you view the following information about the resource:

  • Name

    Contains a link to the resource; clicking the link opens the resource in a separate tab; this does not close the graph window.

  • Type
  • Path

    Resource path without a link.

  • Tags
  • Tenant
  • Package name

You can open the context menu of the resource and perform the following actions:

  • Show relations of resource

    The dependencies of the selected resource are displayed.

  • Hide resource on graph

    The selected resource is hidden. Resources at the lower level that the selected resource references are marked with "*" as having hidden links. Resources that refer to a hidden resource are marked with the dependent resource icon as having hidden links. In this case, the graph may become disconnected.

  • Hide "downward" relations of resource on graph

    The selected resource remains. Only those lower-level resources that do not have any links remaining on the first higher level on the graph are hidden. Resources referenced by resources of the first (hidden) level are marked with "*" as having hidden links.

  • Hide all resources of this type on graph

    All resources of the selected type are hidden. This operation is applied to each resource of the selected type.

  • Update resource relations

    You can update the resource state if the resource was edited by another user while you were managing the graph. Only changes of visible links are displayed.

  • Group

    If there is no group node on the screen: the group node appears on the screen, and resources of the same type as the selected resource, as well as resources that refer to the same resource, are hidden. The edges are redrawn from the group. The Group button is available only when more than 10 links to resources of the same type exist.

    If there is a group node on the screen: the resource is hidden and added to the group, the edges are redrawn from the group.

Several types of relations are displayed on the graph:

  • Solid line without a caption.

    Represents a direct link by UUID, including the use of secrets and proxies in integrations.


  • Line captioned <function_name>.

    Represents using an active list in a correlation rule.


  • Dotted line captioned linked.

    Represents a link by URL, for example, of a destination with a collector, or of a destination with a storage.


Resources created inline are shown on the graph as a dotted line with the linked type.

We do not recommend building large dependency graphs; we recommend limiting the graph to 100 nodes.

When you open the graph, the resource selected for display is highlighted with a blinking circle for some time to set it apart graphically from other resources and draw attention to it.

You can look at the map of the graph to get an idea of where you are on the graph, and move the selector on the map to display the necessary part of the graph.

By clicking the Arrange button, you can improve the display of resources on the graph.

If you select Show links, the focus of the graph does not change, and the resources are displayed in place so that you do not have to return to your starting point.

When you select a group node in the graph, a sidebar is displayed, in which you can hide or show the resources that are part of the group. To do so, select the check box next to the relevant resource and click the Show on graph or Hide on graph button.

The graph retains its state if you displayed something on the graph, then switched to editing a resource, and then reopened the graph tab.

The previously displayed resources on the graph remain in their places when new resources are added to the graph.

When you close the graph, all changes are discarded.

After the resource links are drawn on the graph, you can search for a node:

  • By name
  • By tag
  • By path
  • By package

Nodes, including groups that match the selection criterion, are highlighted with a yellow circle.

You can filter the graph by resource type:

  • Hide or show resources of a certain type.
  • Hide resources of multiple types. Display all types of resources.

With the filter window closed, you can tell the selected filters by the indicator, a red dot in the toolbar.

Your actions when managing the graph (the last 50 actions) are saved in memory; you can undo changes by pressing Ctrl/Command+Z.
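The bounded undo history described above can be modeled with a fixed-size stack; this is an illustrative sketch, not KUMA's implementation:

```python
from collections import deque

class GraphHistory:
    """Keep only the most recent actions (50 in KUMA); older actions are
    discarded, and undo (Ctrl/Command+Z) pops the latest action."""

    def __init__(self, limit: int = 50):
        # deque with maxlen silently drops the oldest entry when full
        self._actions = deque(maxlen=limit)

    def record(self, action):
        self._actions.append(action)

    def undo(self):
        return self._actions.pop() if self._actions else None
```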

You can save the displayed graph to an SVG file. Only the visible part of the graph is saved to the file.

[Topic 294156]

Resource versioning

KUMA stores the change history of resources in the form of versions. A resource version is created automatically when you create a new resource or save changes made to the settings of an existing resource.

The change history is not available for the Dictionaries resource. To save the history of dictionaries, you can export data.

Resource versions are retained for the duration specified in the Settings section. When the age of a resource version reaches the specified value, the version is automatically deleted.

You can view the change history of KUMA resources, compare versions, and restore a previous version of a resource, for example, if it fails and you need to recover it.

To view the change history of a resource:

  1. In the KUMA Console, in the Resources section, select the type of resources that you need.

    This opens a window with a table of available resources of this type.

    If you want to view all resources, in the Resources section, go to the List tab.

  2. Select the check box for the resource whose change history you want to view, and click the Show version history button in the upper part of the table.

This opens a window with a table of saved versions of the selected resource. New resources have only one version, the current version.

For each version, the table displays the following information:

  • Version is the serial number of the resource version. When you save changes to the resource and create a new version, the serial number is increased by 1.

    The version with the highest number and the most recent publication date reflects the current state of the resource. Version 1 reflects the state of the resource at the moment when it was created.

  • Published is the date and time when the resource version was created.
  • Author is the login of the user that saved the changes to the resource.

    If the changes were made by the system or by the migration script, the displayed value is system.

  • Comment is a text comment added by the author when saving changes, or a system comment describing the changes made.
  • Retention period is the number of days and the date after which the resource version will be deleted.

    If necessary, you can configure the retention period for resource versions.

  • Actions is the button that restores the resource version.

You can sort the table of resource versions by the Version, Published, and Author columns by clicking the heading and selecting Ascending or Descending. You can also display only changes made by a specific author or authors in the table by clicking the heading of the Author column and selecting the authors as needed.

If you want to view the status of a resource in a specific version, click that version in the table. This opens a window with the resource of the selected version, in which you can:

  • View the settings specified in that version of the resource.
  • Restore this version of the resource by clicking the Restore button.
  • Export this version of the resource to a JSON file by clicking the Export button.

In this section

Comparing resource versions

Restoring a resource version

Configuring the retention period for resource versions

[Topic 295479]

Comparing resource versions

You can compare any two versions of a resource, for example, if you need to track changes.

To compare versions of a resource:

  1. In the KUMA Console, in the Resources section, select the type of resources that you need.

    This opens a window with a table of available resources of this type.

    If you want to view all resources, in the Resources section, go to the List tab.

  2. Select the check box next to a resource and click the Show version history button in the upper part of the table.

    This opens the window with the version history of the resource.

  3. Select the check boxes next to the two versions of the resource that you want to compare and click the Compare button in the upper part of the table.

This opens the resource version comparison window. Resource fields are displayed as a list or in JSON format. Differences between the two versions are highlighted. You can select other versions to compare using the drop-down lists above the resource fields.

[Topic 295481]

Restoring a resource version

You can restore a previous version of a resource, for example, if you need to recover the resource in case of mistakes made when making changes.

Versions of automatically generated agents cannot be restored separately because they are created when the parent collector is modified. If you want to restore a version of an automatically generated agent, you need to restore the corresponding version of the parent collector.

To restore a previous version of a resource:

  1. In the KUMA Console, in the Resources section, select the type of resources that you need.

    This opens a window with a table of available resources of this type.

    If you want to view all resources, in the Resources section, go to the List tab.

  2. Select the check box next to a resource and click the Show version history button in the upper part of the table.

    This opens the window with the version history of the resource.

  3. In the row of the relevant version, in the Action column, click the Restore button.

    You can also restore a version by clicking the row of this version and clicking the Restore button in the lower part of the window.

    You can restore only previous versions of a resource; for the current version, the Restore button is not available.

    If the structure of the resource has changed after a KUMA update, restoring its saved versions may not be possible.

  4. Confirm the action and, if necessary, add a comment. If you do not add a comment, the "Restored from v.<number of the restored version>" comment is automatically added to the version.

The restored resource version is saved as a new version and becomes the current version.

If the resource whose version you restored is used in an active service, the state of the service also changes. You must restart the service to apply the resource change.

[Topic 295482]

Configuring the retention period for resource versions

You can change the retention period of resource versions in the KUMA Console in the Settings → General section by changing the Resource history retention period, days setting.

The default setting is 30 days. If you want to keep all versions of resources without time limits, specify 0 (store indefinitely).

Only a user with the General administrator role can view and manage the retention period of resource versions.

The retention period of resource versions is checked daily, and versions of resources that have been stored in KUMA for longer than the specified period are automatically deleted. In the task manager, the Clear resource change history task is created to check the storage duration of resource versions and delete old versions. This task also runs after a restart of the Core component.

You can check the time remaining until a resource version is deleted in the table of versions, in the Retention period column.
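The daily check performed by the Clear resource change history task can be sketched as follows; the function and data shapes are illustrative, not KUMA's implementation:

```python
from datetime import datetime, timedelta

def versions_to_delete(versions, retention_days, now=None):
    """Return version numbers older than the retention period.

    `versions` is a list of (version_number, published_datetime) pairs.
    A retention period of 0 means versions are stored indefinitely,
    so nothing is ever deleted."""
    if retention_days == 0:
        return []
    now = now or datetime.now()
    cutoff = now - timedelta(days=retention_days)
    return [number for number, published in versions if published < cutoff]
```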

[Topic 295502]

Destinations

Destinations define network settings for sending normalized events. Collectors and correlators use destinations to describe where to send processed events. Typically, correlators and storages act as destinations.

You can specify destination settings on the Basic settings and Advanced settings tabs. The available settings depend on the selected destination type.

Destinations can have the following types:

  • internal – Used for receiving data from KUMA services using the 'internal' protocol.
  • nats-jetstream – Used for communication through NATS.
  • tcp – Used for communication over TCP.
  • http – Used for communication over the HTTP protocol.
  • diode – Used to transmit events using a data diode.
  • kafka – Used for communication with Kafka.
  • file – Used for writing to a file.
  • storage – Used for sending data to storage.
  • correlator – Used for sending data to a correlator.
  • eventRouter – Used for sending events to an event router.

In this section

Destination, internal type

Destination, type nats-jetstream

Destination, tcp type

Destination, http type

Destination, diode type

Destination, kafka type

Destination, file type

Destination, storage type

Destination, correlator type

Destination, eventRouter type

Predefined destinations

[Topic 217842]

Destination, internal type

Destinations of the internal type are used for receiving data from KUMA services over the 'internal' protocol. You can send the following data over the 'internal' protocol:

  • Internal data, such as event routes.
  • File attributes. If, while creating the collector, you specified a connector of the file, 1c-xml, or 1c-log type at the Transport step of the installation wizard, then at the Event parsing step, in the Mapping table, you can pass the name of the file being processed by the collector, or the path to the file, in a KUMA event field. To do this, in the Source column, specify one of the following values:
    • $kuma_fileSourceName to pass the name of the file being processed by the collector in the KUMA event field.
    • $kuma_fileSourcePath to pass the path to the file being processed by the collector in the KUMA event field.

    When you use a file, 1c-xml, or 1c-log connector, the new variables in the normalizer will only work with destinations of the internal type.

  • Events to the event router. The event router can only receive events over the 'internal' protocol, therefore you can only use internal destinations when sending events to the event router.

Settings for a destination of the internal type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

State

This toggle switch enables sending events to the destination. This toggle switch is turned on by default.

 

Type

Destination type: internal.

Required setting.

URL

URL that you want to connect to. The following URL formats are supported:

  • <host name>:<port number>
  • <IPv4 address>:<port number>
  • <IPv6 address>:<port number>

    You can specify IPv6 addresses in the following format: [<IPv6 address>%<interface>]:<port number>, for example, [fe80::5054:ff:fe4d:ba0c%eth0]:4222.

You can add multiple URLs or remove a URL. To add a URL, click the + Add button. To remove a URL, click the delete icon next to it.

Required setting.

Tags

Tags for resource search.

Optional setting.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Buffer size

Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB).

Buffer flush interval

Interval (in seconds) for sending events to the destination. The default value is 1 second.

Disk buffer size limit

Size of the disk buffer in bytes. The default value is 10 GB.

Handlers

Number of handlers that the service can run simultaneously to process response rules in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2.

The value must be a positive integer.

Output format

Format in which events are sent to the destination:

  • JSON.
  • CEF. If this value is selected, the transmitted events contain the CEF header and only non-empty fields.

Proxy server

The proxy server for the destination. You can select an existing proxy server or create a new proxy server. To create a new proxy server, select Create new.

If you want to edit the settings of an existing proxy server, click the pencil edit-pencil icon next to it.

URL selection policy

Method of determining the URLs to which events must be sent first if you added multiple URLs in the URL field on the Basic settings tab:

  • Any means events are sent to a randomly selected available URL as long as the URL accepts events. If the URL becomes unavailable, events are sent to another randomly selected available URL. This value is selected by default.
  • Prefer first means events are sent to the first added URL. If the URL becomes unavailable, events are sent to the next added available URL. If the first added URL becomes available again, events are sent to the first added URL again.
  • Round robin means events are evenly balanced among the available URLs. An exactly even distribution is not guaranteed because events are sent in batches: a batch is sent when the buffer fills up or when the buffer flush interval elapses. You can specify the buffer size in bytes in the Buffer size field and the interval in seconds for sending events to the destination in the Buffer flush interval field.

Health check timeout

Interval, in seconds, for checking the health of the destination.

Disk buffer disabled

This toggle switch enables the disk buffer. This toggle switch is turned on by default.

The disk buffer is used if the collector cannot send normalized events to the destination. You can specify the size of the disk buffer in the Disk buffer size limit field. If the disk buffer runs out of free space, new normalized events will overwrite old normalized events, starting with the oldest.

Timeout

The time, in seconds, for which the destination waits for a response from another service or component.

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.

Filter

Filter for determining which events must be processed by the resource. You can select an existing filter or create a new filter. To create a new filter, select Create new.

If you want to edit the settings of an existing filter, click the pencil edit-pencil icon next to it.

How to create a filter?

To create a filter:

  1. In the Filter drop-down list, select Create new.
  2. If you want to keep the filter as a separate resource, select the Save filter check box. In this case, you will be able to use the created filter in various services. This check box is cleared by default.
  3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. Maximum length of the name: 128 Unicode characters.
  4. In the Conditions settings block, specify the conditions that the events must meet:
    1. Click the Add condition button.
    2. In the Left operand and Right operand drop-down lists, specify the search parameters. Depending on the data source selected in the Right operand field, there may be fields of additional parameters for identifying the value to be passed to the filter. For example, when you select active list, you must specify the name of the active list, the entry key, and the entry key field.
    3. In the operator drop-down list, select an operator.

      Filter operators

      • =—the left operand equals the right operand.
      • <—the left operand is less than the right operand.
      • <=—the left operand is less than or equal to the right operand.
      • >—the left operand is greater than the right operand.
      • >=—the left operand is greater than or equal to the right operand.
      • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
      • contains—the left operand contains values of the right operand.
      • startsWith—the left operand starts with one of the values of the right operand.
      • endsWith—the left operand ends with one of the values of the right operand.
      • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
      • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

        The value to be checked is converted to binary and processed right to left. The bits whose positions are specified in the constant or list are checked.

        If the value being checked is a string, an attempt is made to convert it to an integer and process it as described above. If the string cannot be converted to a number, the filter returns False.

      • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

        If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

      • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
      • inDictionary—checks whether the specified dictionary contains an entry defined by the key composed with the concatenated values of the selected event fields.
      • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
      • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
      • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      • inContextTable—presence of the entry in the specified context table.
      • intersect—presence in the left operand of the list items specified in the right operand.
    4. If you want the operator to be case-insensitive, select the do not match case check box. This check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators. This check box is cleared by default.
    5. If you want to add a negative condition, select If not from the If drop-down list.

    You can add multiple conditions or a group of conditions.

  5. If you have added multiple conditions or groups of conditions, choose a selection condition (and, or, not) by clicking the AND button.
  6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button. You can view the nested filter settings by clicking the edit-grey button.
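The hasBit operator from the table above can be modeled as follows. This is a simplified sketch; it assumes that all listed bit positions must be set, which is one interpretation of the description:

```python
def has_bit(value, positions):
    """Model of the hasBit filter operator.

    The value is treated as a binary number processed right to left
    (position 0 is the least significant bit). Strings are converted
    to integers first; if the conversion fails, the filter returns False."""
    try:
        number = int(value)
    except (TypeError, ValueError):
        return False
    return all(number >> pos & 1 for pos in positions)
```

For example, the value 5 (binary 101) passes a check for bit positions [0, 2] but fails a check for position [1].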

[Topic 292700]

Destination, type nats-jetstream


Destinations of the nats-jetstream type are used for communication through NATS. Settings for a destination of the nats-jetstream type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

State

This toggle switch enables sending events to the destination. This toggle switch is turned on by default.

 

Type

Destination type: nats-jetstream.

Required setting.

URL

URL that you want to connect to. The following URL formats are supported:

  • <host name>:<port number>
  • <IPv4 address>:<port number>
  • <IPv6 address>:<port number>

    You can specify IPv6 addresses in the following format: [<IPv6 address>%<interface>]:<port number>, for example, [fe80::5054:ff:fe4d:ba0c%eth0]:4222.

You can add multiple URLs or remove a URL. To add a URL, click the + Add button. To remove a URL, click the delete icon next to it.

Required setting.

Subject

The topic of NATS messages. Characters are entered in Unicode encoding.

Required setting.

Authorization

Type of authorization when connecting to the URL specified in the URL field:

  • Disabled. This value is selected by default.
  • Plain. If this option is selected, in the Secret drop-down list, specify the secret containing user account credentials for authorization when connecting to the destination. You can select an existing secret or create a new secret. To create a new secret, select Create new.

    If you want to edit the settings of an existing secret, click the pencil edit-pencil icon next to it.

    How to create a secret?

    To create a secret:

    1. In the Name field, enter the name of the secret.
    2. In the User and Password fields, enter the credentials of the user account that the Agent will use to connect to the connector.
    3. If necessary, enter a description of the secret in the Description field.
    4. Click the Create button.

    The secret is added and displayed in the Secret drop-down list.

Tags

Tags for resource search.

Optional setting.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Buffer size

Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB).

Buffer flush interval

Interval (in seconds) for sending events to the destination. The default value is 1 second.

Disk buffer size limit

Size of the disk buffer in bytes. The default value is 10 GB.

Handlers

Number of handlers that the service can run simultaneously to process response rules in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2.

The value must be a positive integer.
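The suggested formula can be evaluated directly on the server. The following sketch assumes GNU coreutils' nproc is available for reading the CPU count:

```shell
# Suggested handler count: (number of CPUs / 2) + 2
cpus=$(nproc)
handlers=$(( cpus / 2 + 2 ))
echo "Suggested Handlers value: $handlers"
```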

Output format

Format in which events are sent to the destination:

  • JSON.
  • CEF. If this value is selected, the transmitted events contain the CEF header and only non-empty fields.

TLS mode

TLS encryption mode. When using TLS encryption, you cannot specify an IP address in the URL field on the Basic settings tab. Available values:

  • Disabled means TLS encryption is not used. This value is selected by default.
  • Enabled means TLS encryption is used, but certificates are not verified.
  • With verification means TLS encryption is used with verification of the certificate signed with the KUMA root certificate. The root certificate and key of KUMA are created automatically during application installation and are stored on the KUMA Core server in the /opt/kaspersky/kuma/core/certificates/ directory.
  • Custom CA means TLS encryption is used with verification that the certificate was signed by a Certificate Authority. If you select this value, in the Custom CA drop-down list, specify a secret with a certificate signed by a certification authority. You can select an existing secret or create a new secret. To create a new secret, select Create new.

    If you want to edit the settings of an existing secret, click the pencil edit-pencil icon next to it.

    How to create a certificate signed by a Certificate Authority?

    You can create a CA-signed certificate on the KUMA Core server (the following command examples use OpenSSL).

    To create a certificate signed by a Certificate Authority:

    1. Generate a key to be used by the Certificate Authority, for example:

      openssl genrsa -out ca.key 2048

    2. Create a certificate for the generated key, for example:

      openssl req -new -x509 -days 365 -key ca.key -subj "/CN=<common host name of Certificate Authority>" -out ca.crt

    3. Create a private key and a request to have it signed by the Certificate Authority, for example:

      openssl req -newkey rsa:2048 -nodes -keyout server.key -subj "/CN=<common host name of KUMA server>" -out server.csr

    4. Create the certificate signed by the Certificate Authority. You need to include the domain names or IP addresses of the server for which you are creating the certificate in the subjectAltName variable, for example:

      openssl x509 -req -extfile <(printf "subjectAltName=DNS:domain1.ru,DNS:domain2.com,IP:192.168.0.1") -days 365 -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt

    5. Upload the created server.crt certificate in the KUMA Console to a secret of the certificate type, then in the Custom CA drop-down list, select the secret of the certificate type.

    To use KUMA certificates on third-party devices, you must change the certificate file extension from CERT to CRT. Otherwise, you can get the x509: certificate signed by unknown authority error.

  • Custom PFX means TLS encryption with a PFX secret. You must generate a PFX certificate with a private key in PKCS#12 container format in an external Certificate Authority, export the PFX certificate from the key store, and upload the PFX certificate to the KUMA Console as a PFX secret. If you select this value, in the PFX secret drop-down list, specify a PFX secret with a certificate signed by a certification authority. You can select an existing PFX secret or create a new PFX secret. To create a new PFX secret, select Create new.

    If you want to edit the settings of an existing secret, click the pencil edit-pencil icon next to it.

    How to create a PFX secret?

    To create a PFX secret:

    1. In the Name field, enter the name of the PFX secret.
    2. Click Upload PFX and select the PKCS#12 container file to which you exported the PFX certificate with the private key.
    3. In the Password field, enter the PFX certificate security password that was set in the PFX Certificate Export Wizard.
    4. Click the Create button.

    The PFX secret is created and displayed in the PFX secret drop-down list.
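Before uploading the certificates described above, you can optionally sanity-check them with OpenSSL on the Core server. The commands below are a sketch that reuses the file names from the examples above (ca.crt, server.key, server.crt); the server.pfx output name is arbitrary:

```shell
# Confirm that server.crt chains to ca.crt; prints "server.crt: OK" on success
openssl verify -CAfile ca.crt server.crt

# For the Custom PFX option: bundle the private key and certificate into a
# PKCS#12 container; OpenSSL prompts for the export password that you will
# later enter in the Password field of the PFX secret
openssl pkcs12 -export -out server.pfx -inkey server.key -in server.crt -certfile ca.crt
```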

Compression

Drop-down list for configuring Snappy compression:

  • Disabled. This value is selected by default.
  • Use Snappy.

Delimiter

The character that marks the boundary between events:

  • \n
  • \t
  • \0

If you do not select a value in this drop-down list, \n is selected by default.
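With the default \n delimiter, consecutive events arrive at the destination as newline-separated records. A minimal illustration of the on-wire layout:

```shell
# Two JSON events separated by the default \n delimiter
printf '{"id":1}\n{"id":2}\n'
```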

Disk buffer disabled

This toggle switch enables the disk buffer. The toggle switch is turned on by default.

The disk buffer is used if the collector cannot send normalized events to the destination. You can specify the size of the disk buffer in the Disk buffer size limit field. If the disk buffer runs out of free space, new normalized events will overwrite old normalized events, starting with the oldest.

Timeout

The time, in seconds, for which the destination waits for a response from another service or component.

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.

Filter

Filter for determining which events must be processed by the resource. You can select an existing filter or create a new filter. To create a new filter, select Create new.

If you want to edit the settings of an existing filter, click the pencil edit-pencil icon next to it.

How to create a filter?

To create a filter:

  1. In the Filter drop-down list, select Create new.
  2. If you want to keep the filter as a separate resource, select the Save filter check box. In this case, you will be able to use the created filter in various services. This check box is cleared by default.
  3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. Maximum length of the name: 128 Unicode characters.
  4. In the Conditions settings block, specify the conditions that the events must meet:
    1. Click the Add condition button.
    2. In the Left operand and Right operand drop-down lists, specify the search parameters. Depending on the data source selected in the Right operand field, there may be fields of additional parameters for identifying the value to be passed to the filter. For example, when you select active list, you must specify the name of the active list, the entry key, and the entry key field.
    3. In the operator drop-down list, select an operator.

      Filter operators

      • =—the left operand equals the right operand.
      • <—the left operand is less than the right operand.
      • <=—the left operand is less than or equal to the right operand.
      • >—the left operand is greater than the right operand.
      • >=—the left operand is greater than or equal to the right operand.
      • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
      • contains—the left operand contains values of the right operand.
      • startsWith—the left operand starts with one of the values of the right operand.
      • endsWith—the left operand ends with one of the values of the right operand.
      • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
      • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

        The value to be checked is converted to binary and processed right to left. The bits whose positions are specified as a constant or in a list are checked.

        If the value being checked is a string, an attempt is made to convert it to an integer and process it as described above. If the string cannot be converted to a number, the filter returns False.

      • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

        If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

      • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
      • inDictionary—checks whether the specified dictionary contains an entry defined by the key composed with the concatenated values of the selected event fields.
      • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
      • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
      • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      • inContextTable—presence of the entry in the specified context table.
      • intersect—presence in the left operand of the list items specified in the right operand.
    4. If you want the operator to be case-insensitive, select the do not match case check box. This check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators. This check box is cleared by default.
    5. If you want to add a negative condition, select If not from the If drop-down list.

    You can add multiple conditions or a group of conditions.

  5. If you have added multiple conditions or groups of conditions, choose a selection condition (and, or, not) by clicking the AND button.
  6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button. You can view the nested filter settings by clicking the edit-grey button.

[Topic 232952]

Destination, tcp type

Destinations of the tcp type are used for communication over TCP. Settings for a destination of the tcp type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

State

This toggle switch enables sending events to the destination. This toggle switch is turned on by default.

 

Type

Destination type: tcp.

Required setting.

URL

URL that you want to connect to. The following URL formats are supported:

  • <host name>:<port number>
  • <IPv4 address>:<port number>
  • <IPv6 address>:<port number>

    You can specify IPv6 addresses in the following format: [<IPv6 address>%<interface>]:<port number>, for example, [fe80::5054:ff:fe4d:ba0c%eth0]:4222.

You can add multiple URLs or remove a URL. To add a URL, click the + Add button. To remove a URL, click the delete cross-black icon next to it.

Required setting.

Tags

Tags for resource search.

Optional setting.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Buffer size

Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB).

Buffer flush interval

Interval (in seconds) for sending events to the destination. The default value is 1 second.

Disk buffer size limit

Size of the disk buffer in bytes. The default value is 10 GB.

Handlers

Number of handlers that the service can run simultaneously to process response rules in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2.

The value must be a positive integer.

Output format

Format in which events are sent to the destination:

  • JSON.
  • CEF. If this value is selected, the transmitted events contain the CEF header and only non-empty fields.

TLS mode

TLS encryption mode. When using TLS encryption, you cannot specify an IP address in the URL field on the Basic settings tab. Available values:

  • Disabled means TLS encryption is not used. This value is selected by default.
  • Enabled means TLS encryption is used, but certificates are not verified.
  • With verification means TLS encryption is used with verification of the certificate signed with the KUMA root certificate. The root certificate and key of KUMA are created automatically during application installation and are stored on the KUMA Core server in the /opt/kaspersky/kuma/core/certificates/ directory.

Compression

Drop-down list for configuring Snappy compression:

  • Disabled. This value is selected by default.
  • Use Snappy.

URL selection policy

Method of determining the URLs to which events must be sent first if you added multiple URLs in the URL field on the Basic settings tab:

  • Any means events are sent to a randomly selected available URL as long as the URL accepts events. If the URL becomes unavailable, events are sent to another randomly selected available URL. This value is selected by default.
  • Prefer first means events are sent to the first added URL. If the URL becomes unavailable, events are sent to the next added available URL. If the first added URL becomes available again, events are sent to the first added URL again.
  • Round robin means events are distributed evenly among the available URLs. An exactly even distribution is not guaranteed, because events are sent whenever the buffer overflows or the flush interval elapses. You can specify the buffer size in bytes in the Buffer size field; you can also specify the interval in seconds for sending events to the destination in the Buffer flush interval field.

Delimiter

The character that marks the boundary between events:

  • \n
  • \t
  • \0

If you do not select a value in this drop-down list, \n is selected by default.

Disk buffer disabled

This toggle switch enables the disk buffer. The toggle switch is turned on by default.

The disk buffer is used if the collector cannot send normalized events to the destination. You can specify the size of the disk buffer in the Disk buffer size limit field. If the disk buffer runs out of free space, new normalized events will overwrite old normalized events, starting with the oldest.

Timeout

The time, in seconds, for which the destination waits for a response from another service or component.

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.

Filter

Filter for determining which events must be processed by the resource. You can select an existing filter or create a new filter. To create a new filter, select Create new.

If you want to edit the settings of an existing filter, click the pencil edit-pencil icon next to it.

How to create a filter?

To create a filter:

  1. In the Filter drop-down list, select Create new.
  2. If you want to keep the filter as a separate resource, select the Save filter check box. In this case, you will be able to use the created filter in various services. This check box is cleared by default.
  3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. Maximum length of the name: 128 Unicode characters.
  4. In the Conditions settings block, specify the conditions that the events must meet:
    1. Click the Add condition button.
    2. In the Left operand and Right operand drop-down lists, specify the search parameters. Depending on the data source selected in the Right operand field, there may be fields of additional parameters for identifying the value to be passed to the filter. For example, when you select active list, you must specify the name of the active list, the entry key, and the entry key field.
    3. In the operator drop-down list, select an operator.

      Filter operators

      • =—the left operand equals the right operand.
      • <—the left operand is less than the right operand.
      • <=—the left operand is less than or equal to the right operand.
      • >—the left operand is greater than the right operand.
      • >=—the left operand is greater than or equal to the right operand.
      • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
      • contains—the left operand contains values of the right operand.
      • startsWith—the left operand starts with one of the values of the right operand.
      • endsWith—the left operand ends with one of the values of the right operand.
      • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
      • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

        The value to be checked is converted to binary and processed right to left. The bits whose positions are specified as a constant or in a list are checked.

        If the value being checked is a string, an attempt is made to convert it to an integer and process it as described above. If the string cannot be converted to a number, the filter returns False.

      • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

        If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

      • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
      • inDictionary—checks whether the specified dictionary contains an entry defined by the key composed with the concatenated values of the selected event fields.
      • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
      • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
      • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      • inContextTable—presence of the entry in the specified context table.
      • intersect—presence in the left operand of the list items specified in the right operand.
    4. If you want the operator to be case-insensitive, select the do not match case check box. This check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators. This check box is cleared by default.
    5. If you want to add a negative condition, select If not from the If drop-down list.

    You can add multiple conditions or a group of conditions.

  5. If you have added multiple conditions or groups of conditions, choose a selection condition (and, or, not) by clicking the AND button.
  6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button. You can view the nested filter settings by clicking the edit-grey button.

[Topic 232960]

Destination, http type


Destinations of the http type are used for communication over the HTTP protocol. Settings for a destination of the http type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

State

This toggle switch enables sending events to the destination. This toggle switch is turned on by default.

 

Type

Destination type: http.

Required setting.

URL

URL that you want to connect to. The following URL formats are supported:

  • <host name>:<port number>
  • <IPv4 address>:<port number>
  • <IPv6 address>:<port number>

    You can specify IPv6 addresses in the following format: [<IPv6 address>%<interface>]:<port number>, for example, [fe80::5054:ff:fe4d:ba0c%eth0]:4222.

You can add multiple URLs or remove a URL. To add a URL, click the + Add button. To remove a URL, click the delete cross-black icon next to it.

Required setting.

Authorization

Type of authorization when connecting to the URL specified in the URL field:

  • Disabled. This value is selected by default.
  • Plain. If this option is selected, in the Secret drop-down list, specify the secret containing user account credentials for authorization when connecting to the destination. You can select an existing secret or create a new secret. To create a new secret, select Create new.

    If you want to edit the settings of an existing secret, click the pencil edit-pencil icon next to it.

    How to create a secret?

    To create a secret:

    1. In the Name field, enter the name of the secret.
    2. In the User and Password fields, enter the credentials of the user account that the Agent will use to connect to the connector.
    3. If necessary, enter a description of the secret in the Description field.
    4. Click the Create button.

    The secret is added and displayed in the Secret drop-down list.

Tags

Tags for resource search.

Optional setting.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Buffer size

Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB).

Buffer flush interval

Interval (in seconds) for sending events to the destination. The default value is 1 second.

Disk buffer size limit

Size of the disk buffer in bytes. The default value is 10 GB.

Handlers

Number of handlers that the service can run simultaneously to process response rules in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2.

The value must be a positive integer.

Output format

Format in which events are sent to the destination:

  • JSON.
  • CEF. If this value is selected, the transmitted events contain the CEF header and only non-empty fields.

TLS mode

TLS encryption mode. When using TLS encryption, you cannot specify an IP address in the URL field on the Basic settings tab. Available values:

  • Disabled means TLS encryption is not used. This value is selected by default.
  • Enabled means TLS encryption is used, but certificates are not verified.
  • With verification means TLS encryption is used with verification of the certificate signed with the KUMA root certificate. The root certificate and key of KUMA are created automatically during application installation and are stored on the KUMA Core server in the /opt/kaspersky/kuma/core/certificates/ directory.
  • Custom CA means TLS encryption is used with verification that the certificate was signed by a Certificate Authority. If you select this value, in the Custom CA drop-down list, specify a secret with a certificate signed by a certification authority. You can select an existing secret or create a new secret. To create a new secret, select Create new.

    If you want to edit the settings of an existing secret, click the pencil edit-pencil icon next to it.

    How to create a certificate signed by a Certificate Authority?

    You can create a CA-signed certificate on the KUMA Core server (the following command examples use OpenSSL).

    To create a certificate signed by a Certificate Authority:

    1. Generate a key to be used by the Certificate Authority, for example:

      openssl genrsa -out ca.key 2048

    2. Create a certificate for the generated key, for example:

      openssl req -new -x509 -days 365 -key ca.key -subj "/CN=<common host name of Certificate Authority>" -out ca.crt

    3. Create a private key and a request to have it signed by the Certificate Authority, for example:

      openssl req -newkey rsa:2048 -nodes -keyout server.key -subj "/CN=<common host name of KUMA server>" -out server.csr

    4. Create the certificate signed by the Certificate Authority. You need to include the domain names or IP addresses of the server for which you are creating the certificate in the subjectAltName variable, for example:

      openssl x509 -req -extfile <(printf "subjectAltName=DNS:domain1.ru,DNS:domain2.com,IP:192.168.0.1") -days 365 -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt

    5. Upload the created server.crt certificate in the KUMA Console to a secret of the certificate type, then in the Custom CA drop-down list, select the secret of the certificate type.

    To use KUMA certificates on third-party devices, you must change the certificate file extension from CERT to CRT. Otherwise, you can get the x509: certificate signed by unknown authority error.

  • Custom PFX means TLS encryption with a PFX secret. You must generate a PFX certificate with a private key in PKCS#12 container format in an external Certificate Authority, export the PFX certificate from the key store, and upload the PFX certificate to the KUMA Console as a PFX secret. If you select this value, in the PFX secret drop-down list, specify a PFX secret with a certificate signed by a certification authority. You can select an existing PFX secret or create a new PFX secret. To create a new PFX secret, select Create new.

    If you want to edit the settings of an existing secret, click the pencil edit-pencil icon next to it.

    How to create a PFX secret?

    To create a PFX secret:

    1. In the Name field, enter the name of the PFX secret.
    2. Click Upload PFX and select the PKCS#12 container file to which you exported the PFX certificate with the private key.
    3. In the Password field, enter the PFX certificate security password that was set in the PFX Certificate Export Wizard.
    4. Click the Create button.

    The PFX secret is created and displayed in the PFX secret drop-down list.
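Because TLS verification checks the names in the certificate, it can help to confirm that the subjectAltName extension of the certificate you upload covers the host names or IP addresses used in the URL field. A sketch using OpenSSL (the server.crt file name follows the examples above; the -ext option requires OpenSSL 1.1.1 or later):

```shell
# Print only the subjectAltName extension of the certificate so you can
# compare its DNS and IP entries with the URL field of the destination
openssl x509 -in server.crt -noout -ext subjectAltName
```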

Proxy server

The proxy server for the destination. You can select an existing proxy server or create a new proxy server. To create a new proxy server, select Create new.

If you want to edit the settings of an existing proxy server, click the pencil edit-pencil icon next to it.

Compression

Drop-down list for configuring Snappy compression:

  • Disabled. This value is selected by default.
  • Use Snappy.

URL selection policy

Method of determining the URLs to which events must be sent first if you added multiple URLs in the URL field on the Basic settings tab:

  • Any means events are sent to a randomly selected available URL as long as the URL accepts events. If the URL becomes unavailable, events are sent to another randomly selected available URL. This value is selected by default.
  • Prefer first means events are sent to the first added URL. If the URL becomes unavailable, events are sent to the next added available URL. If the first added URL becomes available again, events are sent to the first added URL again.
  • Round robin means events are distributed evenly among the available URLs. An exactly even distribution is not guaranteed, because events are sent whenever the buffer overflows or the flush interval elapses. You can specify the buffer size in bytes in the Buffer size field; you can also specify the interval in seconds for sending events to the destination in the Buffer flush interval field.

Delimiter

The character that marks the boundary between events:

  • \n
  • \t
  • \0

If you do not select a value in this drop-down list, \n is selected by default.

Path

The path that must be added in the request to the URL specified in the URL field on the Basic settings tab. For example, if you specify /input as the path and enter 10.10.10.10 for the URL, the destination will make requests to 10.10.10.10/input.

Health check path

The URL for sending requests to obtain health information about the system that the destination resource is connecting to.

Health check

This toggle switch enables the health check. This toggle switch is turned off by default.

Disk buffer disabled

This toggle switch enables the disk buffer. The toggle switch is turned on by default.

The disk buffer is used if the collector cannot send normalized events to the destination. You can specify the size of the disk buffer in the Disk buffer size limit field. If the disk buffer runs out of free space, new normalized events will overwrite old normalized events, starting with the oldest.

Timeout

The time, in seconds, for which the destination waits for a response from another service or component.

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.

Filter

Filter for determining which events must be processed by the resource. You can select an existing filter or create a new filter. To create a new filter, select Create new.

If you want to edit the settings of an existing filter, click the pencil edit-pencil icon next to it.

How to create a filter?

To create a filter:

  1. In the Filter drop-down list, select Create new.
  2. If you want to keep the filter as a separate resource, select the Save filter check box. In this case, you will be able to use the created filter in various services. This check box is cleared by default.
  3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. Maximum length of the name: 128 Unicode characters.
  4. In the Conditions settings block, specify the conditions that the events must meet:
    1. Click the Add condition button.
    2. In the Left operand and Right operand drop-down lists, specify the search parameters. Depending on the data source selected in the Right operand field, there may be fields of additional parameters for identifying the value to be passed to the filter. For example, when you select active list, you must specify the name of the active list, the entry key, and the entry key field.
    3. In the operator drop-down list, select an operator.

      Filter operators

      • =—the left operand equals the right operand.
      • <—the left operand is less than the right operand.
      • <=—the left operand is less than or equal to the right operand.
      • >—the left operand is greater than the right operand.
      • >=—the left operand is greater than or equal to the right operand.
      • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
      • contains—the left operand contains values of the right operand.
      • startsWith—the left operand starts with one of the values of the right operand.
      • endsWith—the left operand ends with one of the values of the right operand.
      • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
      • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

        The value to be checked is converted to binary and processed right to left. The bits whose positions are specified in the constant or list are checked.

        If the value being checked is a string, then an attempt is made to convert it to integer and process it in the way described above. If the string cannot be converted to a number, the filter returns False.

      • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

        If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

      • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
      • inDictionary—checks whether the specified dictionary contains an entry defined by a key composed of the concatenated values of the selected event fields.
      • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
      • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
      • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      • inContextTable—presence of the entry in the specified context table.
      • intersect—presence in the left operand of the list items specified in the right operand.
    4. If you want the operator to be case-insensitive, select the do not match case check box. The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators. This check box is cleared by default.
    5. If you want to add a negative condition, select If not from the If drop-down list.

    You can add multiple conditions or a group of conditions.

  5. If you have added multiple conditions or groups of conditions, choose a selection condition (and, or, not) by clicking the AND button.
  6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button. You can view the nested filter settings by clicking the edit-grey button.
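The hasBit check in the operator list above can be sketched as follows. This is an illustrative reading of the documented semantics, not KUMA's implementation; in particular, the assumption that every listed bit position must be set is the author's.

```python
def has_bit(value, positions) -> bool:
    """Sketch of hasBit: interpret the value as an integer and check the
    bits at the given positions, counted from the right starting at 0.
    A string is first converted to an integer; if the conversion fails,
    the filter returns False, as described above."""
    if isinstance(value, str):
        try:
            value = int(value)
        except ValueError:
            return False  # non-numeric string never matches
    return all(value & (1 << p) for p in positions)

# 6 is 0b110: bits 1 and 2 are set, bit 0 is not.
print(has_bit(6, [1, 2]))   # True
print(has_bit("6", [0]))    # False
print(has_bit("abc", [0]))  # False
```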

[Topic 232961]

Destination, diode type


Destinations of the diode type are used to transmit events using a data diode. Settings for a destination of the diode type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

State

This toggle switch enables sending events to the destination. This toggle switch is turned on by default.


Type

Destination type: diode.

Required setting.

Data diode source directory

Path to the directory from which the data diode moves events. The maximum length of the path is 255 Unicode characters.

Limitations when using prefixes in paths on Windows servers

On Windows servers, absolute paths to directories must be specified. Directories with names matching the following regular expressions cannot be used:

  • ^[a-zA-Z]:\\Program Files
  • ^[a-zA-Z]:\\Program Files \(x86\)
  • ^[a-zA-Z]:\\Windows
  • ^[a-zA-Z]:\\Program Files\\Kaspersky Lab\\KUMA

Limitations when using prefixes in paths on Linux servers

Prefixes that cannot be used when specifying paths to files:

  • /*
  • /bin
  • /boot
  • /dev
  • /etc
  • /home
  • /lib
  • /lib64
  • /proc
  • /root
  • /run
  • /sys
  • /tmp
  • /usr/*
  • /usr/bin/
  • /usr/local/*
  • /usr/local/sbin/
  • /usr/local/bin/
  • /usr/sbin/
  • /usr/lib/
  • /usr/lib64/
  • /var/*
  • /var/lib/
  • /var/run/
  • /opt/kaspersky/kuma/

Files are available at the following paths:

  • /opt/kaspersky/kuma/clickhouse/logs/
  • /opt/kaspersky/kuma/mongodb/log/
  • /opt/kaspersky/kuma/victoria-metrics/log/

The paths specified in the Data diode source directory and Temporary directory fields must not be the same.

Temporary directory

Path to the directory in which events are prepared for transmission to the data diode. The maximum length of the path is 255 Unicode characters.

Events are stored in a file when a timeout or a buffer overflow occurs. The default timeout is 10 seconds. The prepared file with events is moved to the directory specified in the Data diode source directory field. The checksum (SHA-256) of the file contents is used as the name of the file with events.

The paths specified in the Data diode source directory and Temporary directory fields must not be the same.
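The file-naming convention described above (the SHA-256 checksum of the file contents as the file name) can be sketched as follows; the function name is hypothetical:

```python
import hashlib

def diode_file_name(contents: bytes) -> str:
    """Sketch: derive the event file name from the SHA-256 checksum
    of its contents, as described above."""
    return hashlib.sha256(contents).hexdigest()

name = diode_file_name(b'{"event":"example"}\n')
print(name)  # 64 hexadecimal characters
```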

Tags

Tags for resource search.

Optional setting.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Buffer size

Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB).

Buffer flush interval

Interval (in seconds) for sending events to the destination. The default value is 1 second.

Handlers

Number of handlers that the service can run simultaneously to process response rules in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2.

The value must be a positive integer.
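The sizing formula above can be sketched as follows (integer division is an assumption, since the documentation does not specify rounding):

```python
import os

def suggested_handlers() -> int:
    """Sizing hint from the documentation: (<number of CPUs> / 2) + 2."""
    cpus = os.cpu_count() or 1  # fall back to 1 if undetectable
    return cpus // 2 + 2

print(suggested_handlers())
```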

Compression

Drop-down list for configuring Snappy compression:

  • Disabled. This value is selected by default.
  • Use Snappy.

Delimiter

The character that marks the boundary between events:

  • \n
  • \t
  • \0

If you do not select a value in this drop-down list, \n is selected by default.

Debug

This toggle switch enables resource logging. This toggle switch is turned off by default.

Filter

Filter for determining which events must be processed by the resource. You can select an existing filter or create a new filter. To create a new filter, select Create new.

If you want to edit the settings of an existing filter, click the pencil edit-pencil icon next to it.

How to create a filter?

To create a filter:

  1. In the Filter drop-down list, select Create new.
  2. If you want to keep the filter as a separate resource, select the Save filter check box. In this case, you will be able to use the created filter in various services. This check box is cleared by default.
  3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. Maximum length of the name: 128 Unicode characters.
  4. In the Conditions settings block, specify the conditions that the events must meet:
    1. Click the Add condition button.
    2. In the Left operand and Right operand drop-down lists, specify the search parameters. Depending on the data source selected in the Right operand field, there may be fields of additional parameters for identifying the value to be passed to the filter. For example, when you select active list, you must specify the name of the active list, the entry key, and the entry key field.
    3. In the operator drop-down list, select an operator.

      Filter operators

      • =—the left operand equals the right operand.
      • <—the left operand is less than the right operand.
      • <=—the left operand is less than or equal to the right operand.
      • >—the left operand is greater than the right operand.
      • >=—the left operand is greater than or equal to the right operand.
      • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
      • contains—the left operand contains values of the right operand.
      • startsWith—the left operand starts with one of the values of the right operand.
      • endsWith—the left operand ends with one of the values of the right operand.
      • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
      • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

        The value to be checked is converted to binary and processed right to left. The bits whose positions are specified in the constant or list are checked.

        If the value being checked is a string, then an attempt is made to convert it to integer and process it in the way described above. If the string cannot be converted to a number, the filter returns False.

      • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

        If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

      • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
      • inDictionary—checks whether the specified dictionary contains an entry defined by a key composed of the concatenated values of the selected event fields.
      • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
      • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
      • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      • inContextTable—presence of the entry in the specified context table.
      • intersect—presence in the left operand of the list items specified in the right operand.
    4. If you want the operator to be case-insensitive, select the do not match case check box. The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators. This check box is cleared by default.
    5. If you want to add a negative condition, select If not from the If drop-down list.

    You can add multiple conditions or a group of conditions.

  5. If you have added multiple conditions or groups of conditions, choose a selection condition (and, or, not) by clicking the AND button.
  6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button. You can view the nested filter settings by clicking the edit-grey button.

[Topic 232967]

Destination, kafka type


Destinations of the kafka type are used for communication with Kafka. Settings for a destination of the kafka type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

State

This toggle switch enables sending events to the destination. This toggle switch is turned on by default.


Type

Destination type: kafka.

Required setting.

URL

URL that you want to connect to. The following URL formats are supported:

  • <host name>:<port number>
  • <IPv4 address>:<port number>
  • <IPv6 address>:<port number>

    You can specify IPv6 addresses in the following format: [<IPv6 address>%<interface>]:<port number>, for example, [fe80::5054:ff:fe4d:ba0c%eth0]:4222.

You can add multiple URLs or remove a URL. To add a URL, click the + Add button. To remove a URL, click the delete cross-black icon next to it.

Required setting.

Topic

The Kafka topic of the messages. The maximum length of the topic is 255 characters. You can use the following characters: a–z, A–Z, 0–9, ".", "_", "-".

Required setting.
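The topic-name constraints above can be expressed as a simple regular-expression check (an illustrative validation sketch, not part of KUMA):

```python
import re

# 1 to 255 characters drawn from a-z, A-Z, 0-9, ".", "_", "-",
# as listed in the Topic setting description above.
TOPIC_RE = re.compile(r"^[a-zA-Z0-9._-]{1,255}$")

def is_valid_topic(name: str) -> bool:
    return bool(TOPIC_RE.match(name))

print(is_valid_topic("kuma.events-1"))  # True
print(is_valid_topic("bad topic!"))     # False
```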

Authorization

Type of authorization when connecting to the URL specified in the URL field:

  • Disabled. This value is selected by default.
  • Plain. If this option is selected, in the Secret drop-down list, specify the secret containing user account credentials for authorization when connecting to the destination. You can select an existing secret or create a new secret. To create a new secret, select Create new.

    If you want to edit the settings of an existing secret, click the pencil edit-pencil icon next to it.

    How to create a secret?

    To create a secret:

    1. In the Name field, enter the name of the secret.
    2. In the User and Password fields, enter the credentials of the user account that the Agent will use to connect to the connector.
    3. If necessary, enter a description of the secret in the Description field.
    4. Click the Create button.

    The secret is added and displayed in the Secret drop-down list.

  • PFX means TLS encryption with a PFX secret. You must generate a PFX certificate with a private key in PKCS#12 container format in an external Certificate Authority, export the PFX certificate from the key store, and upload the PFX certificate to the KUMA Console as a PFX secret. If you select this value, in the PFX secret drop-down list, specify a PFX secret with a certificate signed by a certification authority. You can select an existing PFX secret or create a new PFX secret. To create a new PFX secret, select Create new.

    If you want to edit the settings of an existing secret, click the pencil edit-pencil icon next to it.

    How to create a PFX secret?

    To create a PFX secret:

    1. In the Name field, enter the name of the PFX secret.
    2. Click Upload PFX and select the PKCS#12 container file to which you exported the PFX certificate with the private key.
    3. In the Password field, enter the PFX certificate security password that was set in the PFX Certificate Export Wizard.
    4. Click the Create button.

    The PFX secret is created and displayed in the PFX secret drop-down list.

Tags

Tags for resource search.

Optional setting.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Buffer size

Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB).

Buffer flush interval

Interval (in seconds) for sending events to the destination. The default value is 1 second.

Disk buffer size limit

Size of the disk buffer in bytes. The default value is 10 GB.

Handlers

Number of handlers that the service can run simultaneously to process response rules in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2.

The value must be a positive integer.

Output format

Format in which events are sent to the destination:

  • JSON.
  • CEF. If this value is selected, the transmitted events contain the CEF header and only non-empty fields.

TLS mode

TLS encryption mode. When using TLS encryption, you cannot specify an IP address in the URL field on the Basic settings tab. Available values:

  • Disabled means TLS encryption is not used. This value is selected by default.
  • Enabled means TLS encryption is used, but certificates are not verified.
  • With verification means TLS encryption is used with verification of the certificate signed with the KUMA root certificate. The root certificate and key of KUMA are created automatically during application installation and are stored on the KUMA Core server in the /opt/kaspersky/kuma/core/certificates/ directory.
  • Custom CA means TLS encryption is used with verification that the certificate was signed by a Certificate Authority. If you select this value, in the Custom CA drop-down list, specify a secret with a certificate signed by a certification authority. You can select an existing secret or create a new secret. To create a new secret, select Create new.

    If you want to edit the settings of an existing secret, click the pencil edit-pencil icon next to it.

    How to create a certificate signed by a Certificate Authority?

    You can create a CA-signed certificate on the KUMA Core server (the following command examples use OpenSSL).

    To create a certificate signed by a Certificate Authority:

    1. Generate a key to be used by the Certificate Authority, for example:

      openssl genrsa -out ca.key 2048

    2. Create a certificate for the generated key, for example:

      openssl req -new -x509 -days 365 -key ca.key -subj "/CN=<common host name of Certificate Authority>" -out ca.crt

    3. Create a private key and a request to have it signed by the Certificate Authority, for example:

      openssl req -newkey rsa:2048 -nodes -keyout server.key -subj "/CN=<common host name of KUMA server>" -out server.csr

    4. Create the certificate signed by the Certificate Authority. You need to include the domain names or IP addresses of the server for which you are creating the certificate in the subjectAltName variable, for example:

      openssl x509 -req -extfile <(printf "subjectAltName=DNS:domain1.ru,DNS:domain2.com,IP:192.168.0.1") -days 365 -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt

    5. Upload the created server.crt certificate in the KUMA Console to a secret of the certificate type, then in the Custom CA drop-down list, select the secret of the certificate type.

    To use KUMA certificates on third-party devices, you must change the certificate file extension from CERT to CRT. Otherwise, you can get the x509: certificate signed by unknown authority error.

Delimiter

The character that marks the boundary between events:

  • \n
  • \t
  • \0

If you do not select a value in this drop-down list, \n is selected by default.

Disk buffer disabled

This toggle switch enables the disk buffer. This toggle switch is turned on by default.

The disk buffer is used if the collector cannot send normalized events to the destination. You can specify the size of the disk buffer in the Disk buffer size limit field. If the disk buffer runs out of free space, new normalized events will overwrite old normalized events, starting with the oldest.

Timeout

The time, in seconds, for which the destination waits for a response from another service or component.

Debug

This toggle switch enables resource logging. This toggle switch is turned off by default.

Filter

Filter for determining which events must be processed by the resource. You can select an existing filter or create a new filter. To create a new filter, select Create new.

If you want to edit the settings of an existing filter, click the pencil edit-pencil icon next to it.

How to create a filter?

To create a filter:

  1. In the Filter drop-down list, select Create new.
  2. If you want to keep the filter as a separate resource, select the Save filter check box. In this case, you will be able to use the created filter in various services. This check box is cleared by default.
  3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. Maximum length of the name: 128 Unicode characters.
  4. In the Conditions settings block, specify the conditions that the events must meet:
    1. Click the Add condition button.
    2. In the Left operand and Right operand drop-down lists, specify the search parameters. Depending on the data source selected in the Right operand field, there may be fields of additional parameters for identifying the value to be passed to the filter. For example, when you select active list, you must specify the name of the active list, the entry key, and the entry key field.
    3. In the operator drop-down list, select an operator.

      Filter operators

      • =—the left operand equals the right operand.
      • <—the left operand is less than the right operand.
      • <=—the left operand is less than or equal to the right operand.
      • >—the left operand is greater than the right operand.
      • >=—the left operand is greater than or equal to the right operand.
      • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
      • contains—the left operand contains values of the right operand.
      • startsWith—the left operand starts with one of the values of the right operand.
      • endsWith—the left operand ends with one of the values of the right operand.
      • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
      • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

        The value to be checked is converted to binary and processed right to left. The bits whose positions are specified in the constant or list are checked.

        If the value being checked is a string, then an attempt is made to convert it to integer and process it in the way described above. If the string cannot be converted to a number, the filter returns False.

      • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

        If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

      • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
      • inDictionary—checks whether the specified dictionary contains an entry defined by a key composed of the concatenated values of the selected event fields.
      • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
      • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
      • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      • inContextTable—presence of the entry in the specified context table.
      • intersect—presence in the left operand of the list items specified in the right operand.
    4. If you want the operator to be case-insensitive, select the do not match case check box. The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators. This check box is cleared by default.
    5. If you want to add a negative condition, select If not from the If drop-down list.

    You can add multiple conditions or a group of conditions.

  5. If you have added multiple conditions or groups of conditions, choose a selection condition (and, or, not) by clicking the AND button.
  6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button. You can view the nested filter settings by clicking the edit-grey button.

[Topic 232962]

Destination, file type

Destinations of the file type are used for writing events to a file. Settings for a destination of the file type are described in the following tables.

When deleting a destination of the file type that is being used in a service, you must restart the service.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

State

This toggle switch enables sending events to the destination. This toggle switch is turned on by default.


Type

Destination type: file.

Required setting.

URL

Path to the file to which the events must be written.

Limitations when using prefixes in file paths

Prefixes that cannot be used when specifying paths to files:

  • /*
  • /bin
  • /boot
  • /dev
  • /etc
  • /home
  • /lib
  • /lib64
  • /proc
  • /root
  • /run
  • /sys
  • /tmp
  • /usr/*
  • /usr/bin/
  • /usr/local/*
  • /usr/local/sbin/
  • /usr/local/bin/
  • /usr/sbin/
  • /usr/lib/
  • /usr/lib64/
  • /var/*
  • /var/lib/
  • /var/run/
  • /opt/kaspersky/kuma/

Files are available at the following paths:

  • /opt/kaspersky/kuma/clickhouse/logs/
  • /opt/kaspersky/kuma/mongodb/log/
  • /opt/kaspersky/kuma/victoria-metrics/log/

Required setting.

Tags

Tags for resource search.

Optional setting.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Buffer size

Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB).

Buffer flush interval

Interval (in seconds) for sending events to the destination. The default value is 1 second.

Disk buffer size limit

Size of the disk buffer in bytes. The default value is 10 GB.

Handlers

Number of handlers that the service can run simultaneously to process response rules in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2.

The value must be a positive integer.

Output format

Format in which events are sent to the destination:

  • JSON.
  • CEF. If this value is selected, the transmitted events contain the CEF header and only non-empty fields.

Delimiter

The character that marks the boundary between events:

  • \n
  • \t
  • \0

If you do not select a value in this drop-down list, \n is selected by default.

Disk buffer disabled

This toggle switch enables the disk buffer. This toggle switch is turned on by default.

The disk buffer is used if the collector cannot send normalized events to the destination. You can specify the size of the disk buffer in the Disk buffer size limit field. If the disk buffer runs out of free space, new normalized events will overwrite old normalized events, starting with the oldest.

Debug

This toggle switch enables resource logging. This toggle switch is turned off by default.

Filter

Filter for determining which events must be processed by the resource. You can select an existing filter or create a new filter. To create a new filter, select Create new.

If you want to edit the settings of an existing filter, click the pencil edit-pencil icon next to it.

How to create a filter?

To create a filter:

  1. In the Filter drop-down list, select Create new.
  2. If you want to keep the filter as a separate resource, select the Save filter check box. In this case, you will be able to use the created filter in various services. This check box is cleared by default.
  3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. Maximum length of the name: 128 Unicode characters.
  4. In the Conditions settings block, specify the conditions that the events must meet:
    1. Click the Add condition button.
    2. In the Left operand and Right operand drop-down lists, specify the search parameters. Depending on the data source selected in the Right operand field, there may be fields of additional parameters for identifying the value to be passed to the filter. For example, when you select active list, you must specify the name of the active list, the entry key, and the entry key field.
    3. In the operator drop-down list, select an operator.

      Filter operators

      • =—the left operand equals the right operand.
      • <—the left operand is less than the right operand.
      • <=—the left operand is less than or equal to the right operand.
      • >—the left operand is greater than the right operand.
      • >=—the left operand is greater than or equal to the right operand.
      • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
      • contains—the left operand contains values of the right operand.
      • startsWith—the left operand starts with one of the values of the right operand.
      • endsWith—the left operand ends with one of the values of the right operand.
      • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
      • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

        The value to be checked is converted to binary and processed right to left. The bits whose positions are specified in the constant or list are checked.

        If the value being checked is a string, then an attempt is made to convert it to integer and process it in the way described above. If the string cannot be converted to a number, the filter returns False.

      • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

        If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

      • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
      • inDictionary—checks whether the specified dictionary contains an entry defined by a key composed of the concatenated values of the selected event fields.
      • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
      • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
      • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      • inContextTable—presence of the entry in the specified context table.
      • intersect—presence in the left operand of the list items specified in the right operand.
    4. If you want the operator to be case-insensitive, select the do not match case check box. The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators. This check box is cleared by default.
    5. If you want to add a negative condition, select If not from the If drop-down list.

    You can add multiple conditions or a group of conditions.

  5. If you have added multiple conditions or groups of conditions, choose a selection condition (and, or, not) by clicking the AND button.
  6. If you want to add an existing filter, click the Add filter button and select the filter from the Select filter drop-down list. You can view the nested filter settings by clicking the edit-grey button.
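The hasBit operator described above lends itself to a short sketch. The following Python helper is illustrative only (the function name and inputs are hypothetical, not KUMA code):

```python
def has_bit(value, positions):
    """Sketch of the hasBit filter operator: the value is interpreted
    as an integer and its binary form is checked right to left at the
    given bit positions."""
    try:
        number = int(value)  # a string operand is converted to an integer
    except (TypeError, ValueError):
        return False  # a non-numeric string makes the filter return False
    return all((number >> pos) & 1 for pos in positions)

# 6 is 0b110: bits 1 and 2 are set, bit 0 is not
```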

Page top
[Topic 232965]

Destination, storage type

Destinations of the storage type are used for sending data to storage. Settings for a destination of the storage type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

State

This toggle switch enables sending events to the destination. This toggle switch is turned on by default.

 

Type

Destination type: storage.

Required setting.

URL

URL that you want to connect to. The following URL formats are supported:

  • <host name>:<port number>
  • <IPv4 address>:<port number>
  • <IPv6 address>:<port number>

    You can specify IPv6 addresses in the following format: [<IPv6 address>%<interface>]:<port number>, for example, [fe80::5054:ff:fe4d:ba0c%eth0]:4222.

You can add multiple URLs or remove a URL. To add a URL, click the + Add button. To remove a URL, click the delete cross-black icon next to it.

Required setting.
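As an aside, the bracketed IPv6 form with a zone identifier can be split into its host and port parts as in the following sketch (a hypothetical helper, not part of KUMA):

```python
def split_endpoint(url):
    """Split '<host>:<port>', '<IPv4>:<port>', or '[<IPv6>%<iface>]:<port>'
    into a (host, port) pair."""
    if url.startswith("["):
        # bracketed IPv6, possibly with a %<interface> zone identifier
        host, _, port = url.rpartition("]:")
        return host[1:], int(port)
    host, _, port = url.rpartition(":")
    return host, int(port)
```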

Tags

Tags for resource search.

Optional setting.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Buffer size

Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB).

Buffer flush interval

Interval (in seconds) for sending events to the destination. The default value is 1 second.

Disk buffer size limit

Size of the disk buffer in bytes. The default value is 10 GB.

Handlers

Number of handlers that the service can run simultaneously to process response rules in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2.

The value must be a positive integer.
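For example, applying the suggested formula on a hypothetical 8-core server:

```python
cpus = 8  # hypothetical number of CPUs on the server
handlers = cpus // 2 + 2  # (<number of CPUs> / 2) + 2
# for 8 CPUs the formula suggests 6 handlers
```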

Proxy server

The proxy server for the destination. You can select an existing proxy server or create a new proxy server. To create a new proxy server, select Create new.

If you want to edit the settings of an existing proxy server, click the pencil edit-pencil icon next to it.

URL selection policy

Method of determining the URLs to which events must be sent first if you added multiple URLs in the URL field on the Basic settings tab:

  • Any means events are sent to a randomly selected available URL as long as the URL accepts events. If the URL becomes unavailable, events are sent to another randomly selected available URL. This value is selected by default.
  • Prefer first means events are sent to the first added URL. If the URL becomes unavailable, events are sent to the next added available URL. If the first added URL becomes available again, events are sent to the first added URL again.
  • Round robin means events are distributed evenly among the available URLs. Exact balance is not guaranteed, because events are sent in batches: the accumulated buffer is flushed to the destination when it overflows or when the flush interval elapses. You can specify the buffer size in bytes in the Buffer size field and the interval in seconds for sending events to the destination in the Buffer flush interval field.
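A minimal sketch of the three selection policies, under the assumption (not stated in the settings above) that each buffer flush picks one URL:

```python
import random
from itertools import cycle

urls = ["storage-1:7220", "storage-2:7220", "storage-3:7220"]  # hypothetical

def any_policy(available):
    # a randomly selected available URL
    return random.choice(available)

def prefer_first(available):
    # URLs keep the order in which they were added
    return available[0]

# Round robin: each buffer flush goes to the next URL in turn
round_robin = cycle(urls)
```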

Health check timeout

Interval, in seconds, for checking the health of the destination.

Disk buffer disabled

This toggle switch enables the disk buffer. This toggle switch is turned on by default.

The disk buffer is used if the collector cannot send normalized events to the destination. You can specify the size of the disk buffer in the Disk buffer size limit field. If the disk buffer runs out of free space, new normalized events will overwrite old normalized events, starting with the oldest.
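The overwrite-oldest behavior described above is essentially a ring buffer; it can be sketched with Python's bounded deque (capacity here counts events rather than bytes, purely for illustration):

```python
from collections import deque

disk_buffer = deque(maxlen=3)  # hypothetical 3-event capacity
for event in ["e1", "e2", "e3", "e4"]:
    disk_buffer.append(event)  # once full, the oldest event is dropped
# the buffer now holds e2, e3, e4: e1 (the oldest) was overwritten
```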

Timeout

The time, in seconds, for which the destination waits for a response from another service or component.

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.

Filter

Filter for determining which events must be processed by the resource. You can select an existing filter or create a new filter. To create a new filter, select Create new.

If you want to edit the settings of an existing filter, click the pencil edit-pencil icon next to it.

How to create a filter?

To create a filter:

  1. In the Filter drop-down list, select Create new.
  2. If you want to keep the filter as a separate resource, select the Save filter check box. In this case, you will be able to use the created filter in various services. This check box is cleared by default.
  3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. Maximum length of the name: 128 Unicode characters.
  4. In the Conditions settings block, specify the conditions that the events must meet:
    1. Click the Add condition button.
    2. In the Left operand and Right operand drop-down lists, specify the search parameters. Depending on the data source selected in the Right operand field, there may be fields of additional parameters for identifying the value to be passed to the filter. For example, when you select active list, you must specify the name of the active list, the entry key, and the entry key field.
    3. In the operator drop-down list, select an operator.

      Filter operators

      • =—the left operand equals the right operand.
      • <—the left operand is less than the right operand.
      • <=—the left operand is less than or equal to the right operand.
      • >—the left operand is greater than the right operand.
      • >=—the left operand is greater than or equal to the right operand.
      • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
      • contains—the left operand contains values of the right operand.
      • startsWith—the left operand starts with one of the values of the right operand.
      • endsWith—the left operand ends with one of the values of the right operand.
      • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
      • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

        The value to be checked is converted to binary and processed right to left. The bits at the positions specified by the constant or the list are checked.

        If the value being checked is a string, an attempt is made to convert it to an integer and process it as described above. If the string cannot be converted to a number, the filter returns False.

      • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

        If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

      • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
      • inDictionary—checks whether the specified dictionary contains an entry defined by the key composed of the concatenated values of the selected event fields.
      • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
      • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
      • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      • inContextTable—presence of the entry in the specified context table.
      • intersect—presence in the left operand of the list items specified in the right operand.
    4. If you want the operator to be case-insensitive, select the do not match case check box. The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators. This check box is cleared by default.
    5. If you want to add a negative condition, select If not from the If drop-down list.

    You can add multiple conditions or a group of conditions.

  5. If you have added multiple conditions or groups of conditions, choose a selection condition (and, or, not) by clicking the AND button.
  6. If you want to add an existing filter, click the Add filter button and select the filter from the Select filter drop-down list. You can view the nested filter settings by clicking the edit-grey button.

Page top
[Topic 232973]

Destination, correlator type

Destinations of the correlator type are used for sending data to a correlator. Settings for a destination of the correlator type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

State

This toggle switch enables sending events to the destination. This toggle switch is turned on by default.

 

Type

Destination type: correlator.

Required setting.

URL

URL that you want to connect to. The following URL formats are supported:

  • <host name>:<port number>
  • <IPv4 address>:<port number>
  • <IPv6 address>:<port number>

    You can specify IPv6 addresses in the following format: [<IPv6 address>%<interface>]:<port number>, for example, [fe80::5054:ff:fe4d:ba0c%eth0]:4222.

You can add multiple URLs or remove a URL. To add a URL, click the + Add button. To remove a URL, click the delete cross-black icon next to it.

Required setting.

Tags

Tags for resource search.

Optional setting.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Buffer size

Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB).

Buffer flush interval

Interval (in seconds) for sending events to the destination. The default value is 1 second.

Disk buffer size limit

Size of the disk buffer in bytes. The default value is 10 GB.

Handlers

Number of handlers that the service can run simultaneously to process response rules in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2.

The value must be a positive integer.

Proxy server

The proxy server for the destination. You can select an existing proxy server or create a new proxy server. To create a new proxy server, select Create new.

If you want to edit the settings of an existing proxy server, click the pencil edit-pencil icon next to it.

URL selection policy

Method of determining the URLs to which events must be sent first if you added multiple URLs in the URL field on the Basic settings tab:

  • Any means events are sent to a randomly selected available URL as long as the URL accepts events. If the URL becomes unavailable, events are sent to another randomly selected available URL. This value is selected by default.
  • Prefer first means events are sent to the first added URL. If the URL becomes unavailable, events are sent to the next added available URL. If the first added URL becomes available again, events are sent to the first added URL again.
  • Round robin means events are distributed evenly among the available URLs. Exact balance is not guaranteed, because events are sent in batches: the accumulated buffer is flushed to the destination when it overflows or when the flush interval elapses. You can specify the buffer size in bytes in the Buffer size field and the interval in seconds for sending events to the destination in the Buffer flush interval field.

Health check timeout

Interval, in seconds, for checking the health of the destination.

Disk buffer disabled

This toggle switch enables the disk buffer. This toggle switch is turned on by default.

The disk buffer is used if the collector cannot send normalized events to the destination. You can specify the size of the disk buffer in the Disk buffer size limit field. If the disk buffer runs out of free space, new normalized events will overwrite old normalized events, starting with the oldest.

Timeout

The time, in seconds, for which the destination waits for a response from another service or component.

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.

Filter

Filter for determining which events must be processed by the resource. You can select an existing filter or create a new filter. To create a new filter, select Create new.

If you want to edit the settings of an existing filter, click the pencil edit-pencil icon next to it.

How to create a filter?

To create a filter:

  1. In the Filter drop-down list, select Create new.
  2. If you want to keep the filter as a separate resource, select the Save filter check box. In this case, you will be able to use the created filter in various services. This check box is cleared by default.
  3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. Maximum length of the name: 128 Unicode characters.
  4. In the Conditions settings block, specify the conditions that the events must meet:
    1. Click the Add condition button.
    2. In the Left operand and Right operand drop-down lists, specify the search parameters. Depending on the data source selected in the Right operand field, there may be fields of additional parameters for identifying the value to be passed to the filter. For example, when you select active list, you must specify the name of the active list, the entry key, and the entry key field.
    3. In the operator drop-down list, select an operator.

      Filter operators

      • =—the left operand equals the right operand.
      • <—the left operand is less than the right operand.
      • <=—the left operand is less than or equal to the right operand.
      • >—the left operand is greater than the right operand.
      • >=—the left operand is greater than or equal to the right operand.
      • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
      • contains—the left operand contains values of the right operand.
      • startsWith—the left operand starts with one of the values of the right operand.
      • endsWith—the left operand ends with one of the values of the right operand.
      • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
      • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

        The value to be checked is converted to binary and processed right to left. The bits at the positions specified by the constant or the list are checked.

        If the value being checked is a string, an attempt is made to convert it to an integer and process it as described above. If the string cannot be converted to a number, the filter returns False.

      • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

        If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

      • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
      • inDictionary—checks whether the specified dictionary contains an entry defined by the key composed of the concatenated values of the selected event fields.
      • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
      • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
      • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      • inContextTable—presence of the entry in the specified context table.
      • intersect—presence in the left operand of the list items specified in the right operand.
    4. If you want the operator to be case-insensitive, select the do not match case check box. The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators. This check box is cleared by default.
    5. If you want to add a negative condition, select If not from the If drop-down list.

    You can add multiple conditions or a group of conditions.

  5. If you have added multiple conditions or groups of conditions, choose a selection condition (and, or, not) by clicking the AND button.
  6. If you want to add an existing filter, click the Add filter button and select the filter from the Select filter drop-down list. You can view the nested filter settings by clicking the edit-grey button.

Page top
[Topic 232976]

Destination, eventRouter type

Destinations of the eventRouter type are used for sending events to an event router. Settings for a destination of the eventRouter type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

State

This toggle switch enables sending events to the destination. This toggle switch is turned on by default.

 

Type

Destination type: eventRouter.

Required setting.

URL

URL that you want to connect to. The following URL formats are supported:

  • <host name>:<port number>
  • <IPv4 address>:<port number>
  • <IPv6 address>:<port number>

    You can specify IPv6 addresses in the following format: [<IPv6 address>%<interface>]:<port number>, for example, [fe80::5054:ff:fe4d:ba0c%eth0]:4222.

You can add multiple URLs or remove a URL. To add a URL, click the + Add button. To remove a URL, click the delete cross-black icon next to it.

Required setting.

Tags

Tags for resource search.

Optional setting.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Buffer size

Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB).

Buffer flush interval

Interval (in seconds) for sending events to the destination. The default value is 1 second.

Disk buffer size limit

Size of the disk buffer in bytes. The default value is 10 GB.

Handlers

Number of handlers that the service can run simultaneously to process response rules in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2.

The value must be a positive integer.

Output format

Format in which events are sent to the destination:

  • JSON.
  • CEF. If this value is selected, the transmitted events contain the CEF header and only non-empty fields.

Proxy server

The proxy server for the destination. You can select an existing proxy server or create a new proxy server. To create a new proxy server, select Create new.

If you want to edit the settings of an existing proxy server, click the pencil edit-pencil icon next to it.

URL selection policy

Method of determining the URLs to which events must be sent first if you added multiple URLs in the URL field on the Basic settings tab:

  • Any means events are sent to a randomly selected available URL as long as the URL accepts events. If the URL becomes unavailable, events are sent to another randomly selected available URL. This value is selected by default.
  • Prefer first means events are sent to the first added URL. If the URL becomes unavailable, events are sent to the next added available URL. If the first added URL becomes available again, events are sent to the first added URL again.
  • Round robin means events are distributed evenly among the available URLs. Exact balance is not guaranteed, because events are sent in batches: the accumulated buffer is flushed to the destination when it overflows or when the flush interval elapses. You can specify the buffer size in bytes in the Buffer size field and the interval in seconds for sending events to the destination in the Buffer flush interval field.

Health check timeout

Interval, in seconds, for checking the health of the destination.

Disk buffer disabled

This toggle switch enables the disk buffer. This toggle switch is turned on by default.

The disk buffer is used if the collector cannot send normalized events to the destination. You can specify the size of the disk buffer in the Disk buffer size limit field. If the disk buffer runs out of free space, new normalized events will overwrite old normalized events, starting with the oldest.

Timeout

The time, in seconds, for which the destination waits for a response from another service or component.

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.

Filter

Filter for determining which events must be processed by the resource. You can select an existing filter or create a new filter. To create a new filter, select Create new.

If you want to edit the settings of an existing filter, click the pencil edit-pencil icon next to it.

How to create a filter?

To create a filter:

  1. In the Filter drop-down list, select Create new.
  2. If you want to keep the filter as a separate resource, select the Save filter check box. In this case, you will be able to use the created filter in various services. This check box is cleared by default.
  3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. Maximum length of the name: 128 Unicode characters.
  4. In the Conditions settings block, specify the conditions that the events must meet:
    1. Click the Add condition button.
    2. In the Left operand and Right operand drop-down lists, specify the search parameters. Depending on the data source selected in the Right operand field, there may be fields of additional parameters for identifying the value to be passed to the filter. For example, when you select active list, you must specify the name of the active list, the entry key, and the entry key field.
    3. In the operator drop-down list, select an operator.

      Filter operators

      • =—the left operand equals the right operand.
      • <—the left operand is less than the right operand.
      • <=—the left operand is less than or equal to the right operand.
      • >—the left operand is greater than the right operand.
      • >=—the left operand is greater than or equal to the right operand.
      • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
      • contains—the left operand contains values of the right operand.
      • startsWith—the left operand starts with one of the values of the right operand.
      • endsWith—the left operand ends with one of the values of the right operand.
      • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
      • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

        The value to be checked is converted to binary and processed right to left. The bits at the positions specified by the constant or the list are checked.

        If the value being checked is a string, an attempt is made to convert it to an integer and process it as described above. If the string cannot be converted to a number, the filter returns False.

      • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

        If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

      • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
      • inDictionary—checks whether the specified dictionary contains an entry defined by the key composed of the concatenated values of the selected event fields.
      • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
      • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
      • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      • inContextTable—presence of the entry in the specified context table.
      • intersect—presence in the left operand of the list items specified in the right operand.
    4. If you want the operator to be case-insensitive, select the do not match case check box. The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators. This check box is cleared by default.
    5. If you want to add a negative condition, select If not from the If drop-down list.

    You can add multiple conditions or a group of conditions.

  5. If you have added multiple conditions or groups of conditions, choose a selection condition (and, or, not) by clicking the AND button.
  6. If you want to add an existing filter, click the Add filter button and select the filter from the Select filter drop-down list. You can view the nested filter settings by clicking the edit-grey button.

Page top
[Topic 274640]

Predefined destinations

Destinations listed in the table below are included in the KUMA distribution kit.

Predefined destinations

Destination name

Description

[OOTB] Correlator

Sends events to a correlator.

[OOTB] Storage

Sends events to storage.

Page top
[Topic 250830]

Normalizers

Normalizers are used for converting raw events that come from various sources in different formats to the KUMA event data model. Normalized events become available for processing by other KUMA resources and services.

A normalizer consists of the main event parsing rule and optional additional event parsing rules. By creating a main parsing rule and a set of additional parsing rules, you can implement complex event processing logic. Data is passed along the tree of parsing rules depending on the conditions specified in the Extra normalization conditions setting. The order in which parsing rules are created matters: the event is processed sequentially, and the processing order is indicated by arrows.

The following event normalization options are available:

  • 1 collector — 1 normalizer

    We recommend using this method if you have many events of the same type or many IP addresses from which events of the same type may originate. You can configure one collector with only one normalizer, which is optimal in terms of performance.

  • 1 collector — multiple normalizers linked to IP

    This method is available for collectors with a connector of UDP, TCP, or HTTP type. If a UDP, TCP, or HTTP connector is specified in the collector at the Transport step, then at the Event parsing step, you can specify multiple IP addresses on the Parsing settings tab and choose the normalizer that you want to use for events coming from the specified addresses. The following types of normalizers are available: json, cef, regexp, syslog, csv, kv, xml. For normalizers of the syslog and regexp types, you can specify extra normalization conditions depending on the value of the DeviceProcessName field.

A normalizer is created in several steps:

  1. Preparing to create a normalizer

    A normalizer can be created in the KUMA Console.

    Then parsing rules must be created in the normalizer.

  2. Creating the main parsing rule for an event

    The main parsing rule is created using the Add event parsing button. This opens the Event parsing window, where you can specify the settings of the main parsing rule:

    The main parsing rule for an event is displayed in the normalizer as a dark circle. You can view or modify the settings of the main parsing rule by clicking this circle. When you hover the mouse over the circle, a plus sign is displayed. Click it to add the parsing rules.

    The name of the main parsing rule is used in KUMA as the normalizer name.

  3. Creating additional event parsing rules

    Clicking the plus icon that is displayed when you hover the mouse over the circle or the block corresponding to the normalizer opens the Additional event parsing window where you can specify the settings of the additional parsing rule:

    The additional event parsing rule is displayed in the normalizer as a dark block. The block displays the triggering conditions for the additional parsing rule, the name of the additional parsing rule, and the event field. When this event field is available, the data is passed to the normalizer. Click the block of the additional parsing rule to view or modify its settings.

    If you hover the mouse over the additional normalizer, a plus button appears. You can use this button to create a new additional event parsing rule. To delete a normalizer, use the button with the trash icon.

  4. Completing the creation of the normalizer

    To finish the creation of the normalizer, click Save.

In the upper right corner, in the search field, you can search for additional parsing rules by name.

For normalizer resources, you can enable the display of control characters in all input fields except the Description field.

If, when changing the settings of a collector resource set, you change or delete conversions in a normalizer connected to it, the edits will not be saved, and the normalizer itself may be corrupted. If you need to modify conversions in a normalizer that is already part of a service, make the changes directly to the normalizer in the Resources → Normalizers section of the web interface.

See also:

Requirements for variables

Page top
[Topic 217942]

Event parsing settings

Expand all | Collapse all

You can configure the rules for converting incoming events to the KUMA format when creating event parsing rules in the normalizer settings window, on the Normalization scheme tab. Available event parsing settings are listed in the table below.

When normalizing events, you can use extended event schema fields in addition to standard KUMA event schema fields.

Available event parsing settings

Setting

Description

Name

Name of the parsing rule. Maximum length of the name: 128 Unicode characters. The name of the main parsing rule is used as the name of the normalizer.

Required setting.

Tenant

The name of the tenant that owns the resource.

This setting is not available for extra parsing rules.

Parsing method

The type of incoming events. Depending on the selected parsing method, you can use the predefined event field matching rules or define your own rules. When you select some parsing methods, additional settings may become available that you must specify. Available parsing methods:

  • json

    This parsing method is used to process JSON data where each object, including its nested objects, occupies a single line in a file.

    When processing files with hierarchically structured data, you can reference the fields of nested objects using the dot notation. For example, the username parameter from the string "user": {"username": "system: node: example-01"} can be accessed by using the user.username query.

    Files are processed line by line. Multi-line objects with nested structures may be normalized incorrectly.

    In complex normalization schemes where additional normalizers are used, all nested objects are processed at the first normalization level, except for cases when the extra normalization conditions are not specified and, therefore, the event being processed is passed to the extra normalizer in its entirety.

    You can use \n and \r\n as newline characters. Strings must be UTF-8 encoded.

    If you want to send the raw event for advanced normalization, at each nesting level in the Advanced event parsing window, select Yes in the Keep raw event drop-down list.
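As an illustration of the dot notation described above, the following Python sketch parses a single-line JSON event and resolves a nested field; get_by_path is a hypothetical helper written for this example, not part of KUMA.

```python
import json

def get_by_path(obj, path):
    # Resolve a dot-notation query such as "user.username"
    # against a parsed JSON object.
    for part in path.split("."):
        obj = obj[part]
    return obj

# One event per line, as the json parsing method expects.
line = '{"user": {"username": "system: node: example-01"}}'
event = json.loads(line)
value = get_by_path(event, "user.username")
```

Here the user.username query resolves to the nested value, matching the query form shown in the description.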

  • cef

    This parsing method is used to process CEF data.

    If you select this parsing method, you can use the predefined rules for converting events to the KUMA format by clicking Apply default mapping.

  • regexp

    This parsing method is used to create custom rules for processing data in a format using regular expressions.

    You must add a regular expression (RE2 syntax) with named capturing groups to the field under Normalization. The name of the capturing group and its value are considered the field and value of the raw event that can be converted to an event field in KUMA format.

    To add event handling rules:

    1. If necessary, copy an example of the data you want to process to the Event examples field. We recommend completing this step.
    2. In the field under Normalization, add a RE2 regular expression with named capturing groups, for example, "(?P<name>regexp)". The regular expression added to the field under Normalization must exactly match the event. When designing the regular expression, we recommend using special characters that match the starting and ending positions of the text: ^, $.

      You can add multiple regular expressions or remove regular expressions. To add a regular expression, click Add regular expression. To remove a regular expression, click the cross icon next to it.

    3. Click the Copy field names to the mapping table button.

      Capture group names are displayed in the KUMA field column of the Mapping table. You can select the corresponding KUMA field in the column opposite each capturing group. If you followed the CEF format when naming the capturing groups, you can use automatic CEF mapping by selecting the Use CEF syntax for normalization check box.

    Event handling rules are added.
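A minimal Python sketch of the named-capturing-group idea: Python's re module shares the (?P<name>...) syntax with RE2, so the mechanics are the same, although the pattern and log line below are invented for illustration.

```python
import re

# Named capturing groups; each group name becomes a raw event field.
# The ^ and $ anchors match the start and end of the text, as recommended.
pattern = re.compile(r"^(?P<src>\S+) -> (?P<dst>\S+): (?P<msg>.+)$")

def parse(line):
    match = pattern.match(line)
    return match.groupdict() if match else None

fields = parse("10.0.0.1 -> 10.0.0.2: connection accepted")
```

The group names (src, dst, msg) are what the Copy field names to the mapping table button carries over into the KUMA field column.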

  • syslog

    This parsing method is used to process data in syslog format.

    If you select this parsing method, you can use the predefined rules for converting events to the KUMA format by clicking Apply default mapping.

    To parse events in rfc5424 format with a structured-data section, in the Keep extra fields drop-down list, select Yes. This makes the values from the structured-data section available in the Extra fields.
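As a rough Python sketch (not KUMA code) of what extracting an rfc5424 structured-data section into extra fields can look like; the sample element, its parameters, and the sd_id.key naming are assumptions made for this illustration.

```python
import re

# Match [sd_id key="value" ...] elements of an rfc5424 structured-data section.
SD_ELEMENT = re.compile(r'\[(?P<sd_id>\S+?)( (?P<params>[^\]]*))?\]')
SD_PARAM = re.compile(r'(?P<key>\S+)="(?P<value>[^"]*)"')

def extract_structured_data(sd_section):
    extra = {}
    for element in SD_ELEMENT.finditer(sd_section):
        sd_id = element.group("sd_id")
        for param in SD_PARAM.finditer(element.group("params") or ""):
            extra[f"{sd_id}.{param.group('key')}"] = param.group("value")
    return extra

extra = extract_structured_data('[exampleSDID@32473 iut="3" eventSource="Application"]')
```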

  • csv

    This parsing method is used to create custom rules for processing CSV data.

    When choosing this parsing method, you must specify the separator of values in the string in the Delimiter field. Any single-byte ASCII character can be used as a delimiter for values in a string.
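A small Python sketch of delimiter-based splitting with a custom single-byte delimiter; the sample line is invented, and KUMA performs this splitting internally.

```python
import csv
import io

def parse_csv_line(line, delimiter=","):
    # Any single-byte ASCII character can serve as the delimiter;
    # csv.reader also honors quoting, unlike a plain str.split.
    return next(csv.reader(io.StringIO(line), delimiter=delimiter))

row = parse_csv_line("2000-01-01T00:00:00Z;10.0.0.1;allow", delimiter=";")
```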

  • kv

    This parsing method is used to process data in key-value pair format. Available parsing method settings are listed in the table below.

    Available parsing method settings

    Setting

    Description

    Pair delimiter

    The character used to separate key-value pairs. You can specify any single-character (1 byte) value. The specified value must not match the value specified in the Value delimiter field.

    Value delimiter

    The character used to separate a key from its value. You can specify any single-character (1 byte) value. The specified value must not match the value specified in the Pair delimiter field.
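The interplay of the two delimiters can be sketched in Python as follows; this is a simplified illustration with invented sample data, not KUMA's parser.

```python
def parse_kv(line, pair_delimiter=" ", value_delimiter="="):
    # Split the line into pairs with the pair delimiter, then split each
    # pair into a key and a value with the value delimiter.
    # The two delimiters must be single characters and must differ.
    fields = {}
    for pair in line.split(pair_delimiter):
        if value_delimiter in pair:
            key, _, value = pair.partition(value_delimiter)
            fields[key] = value
    return fields

fields = parse_kv("act=allow src=10.0.0.1 dst=10.0.0.2")
```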

     

  • xml

    This parsing method is used to process XML data in which each object, including nested objects, occupies a single line in a file. Files are processed line by line.

    If you want to send the raw event for advanced normalization, at each nesting level in the Advanced event parsing window, select Yes in the Keep raw event drop-down list.

    If you select this parsing method, under XML attributes, you can specify the key XML attributes to be extracted from tags. If an XML structure has multiple XML attributes with different values in the same tag, you can identify the necessary value by specifying the key of the value in the Source column of the Mapping table.

    To add key XML attributes:

    1. Click + Add field.
    2. This opens a window; in that window, specify the path to the XML attribute.

    You can add multiple XML attributes or remove XML attributes. To remove an individual XML attribute, click the cross icon next to it. To remove all XML attributes, click Reset.

    If XML key attributes are not specified, then in the course of field mapping the unique path to the XML value will be represented by a sequence of tags.

    Tag numbering

    Starting with KUMA 2.1.3, you can use automatic tag numbering in XML events. This lets you parse an event with identical tags or unnamed tags, such as <Data>.

    As an example, we will number the tags of the EventData attribute of the Microsoft Windows PowerShell event ID 800.

    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Microsoft-Windows-ActiveDirectory_DomainService" Guid="{0e8478c5-3605-4e8c-8497-1e730c959516}" EventSourceName="NTDS" />
        <EventID Qualifiers="0000">0000</EventID>
        <Version>@</Version>
        <Level>4</Level>
        <Task>15</Task>
        <Opcode>0</Opcode>
        <Keywords>0x8080000000000000</Keywords>
        <TimeCreated SystemTime="2000-01-01T00:00:00.659495900Z" />
        <EventRecordID>55647</EventRecordID>
        <Correlation />
        <Execution ProcessID="1" ThreadID="1" />
        <Channel>service</Channel>
        <Computer>computer</Computer>
        <Security UserID="0000" />
      </System>
      <EventData>
        <Data>583</Data>
        <Data>36</Data>
        <Data>192.168.0.1:5084</Data>
        <Data>level</Data>
        <Data>name, lDAPDisplayName</Data>
        <Data />
        <Data>5545</Data>
        <Data>3</Data>
        <Data>0</Data>
        <Data>0</Data>
        <Data>0</Data>
        <Data>15</Data>
        <Data>none</Data>
      </EventData>
    </Event>

    To parse events with identical tags or unnamed tags, you need to configure tag numbering and data mapping for numbered tags with KUMA event fields.

    KUMA 3.0.x supports using XML attributes and tag numbering at the same time in the same extra normalizer. If an XML attribute contains unnamed tags or identical tags, we recommend using tag numbering. If the XML attribute contains only named tags, we recommend using XML attributes.

    To use XML attributes and tag numbering in extra normalizers, you must sequentially enable the Keep raw event setting in each extra normalizer along the path that the event follows to the target extra normalizer, and in the target extra normalizer itself.

    For an example of how tag numbering works, you can refer to the MicrosoftProducts normalizer. The Keep raw event setting is enabled sequentially in both AD FS and 424 extra normalizers.

    To set up the parsing of events with unnamed or identical tags:

    1. Open an existing normalizer or create a new normalizer.
    2. In the Basic event parsing window of the normalizer, in the Parsing method drop-down list, select xml.
    3. In the Tag numbering field, click + Add field.
    4. In the displayed field, enter the full path to the tag to whose elements you want to assign a number, for example, Event.EventData.Data. The first tag gets number 0. If the tag is empty, for example, <Data />, it is also assigned a number.
    5. To configure data mapping, under Mapping, click + Add row and do the following:
      1. In the displayed row, in the Source field, enter the full path to the tag and the index of the tag. For example, for the Microsoft Windows PowerShell event ID 800 from the example above, the full paths to tags and tag indices are as follows:
        • Event.EventData.Data.0,
        • Event.EventData.Data.1,
        • Event.EventData.Data.2 and so on.
      2. In the KUMA field drop-down list, select the field in the KUMA event that will receive the value from the numbered tag after parsing.
    6. Save changes in one of the following ways:
      • If you created a new normalizer, click Save.
      • If you edited an existing normalizer, in the collector to which the normalizer is linked, click Update configuration.

    Parsing is configured.
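To make the numbering scheme concrete, here is a simplified Python sketch using the standard xml.etree module (not KUMA's implementation) that assigns indices to repeated tags along a dotted path:

```python
import xml.etree.ElementTree as ET

def number_tags(xml_text, path):
    # Collect all elements at the dotted path (the first component
    # names the root tag) and key them as "<path>.<index>".
    # An empty tag such as <Data /> is also assigned a number.
    root = ET.fromstring(xml_text)
    parts = path.split(".")
    nodes = [root] if root.tag == parts[0] else []
    for tag in parts[1:]:
        nodes = [child for node in nodes for child in node if child.tag == tag]
    return {f"{path}.{i}": (node.text or "") for i, node in enumerate(nodes)}

event = "<Event><EventData><Data>583</Data><Data>36</Data><Data /></EventData></Event>"
numbered = number_tags(event, "Event.EventData.Data")
```

The resulting keys, such as Event.EventData.Data.0, correspond to the values entered in the Source field of the Mapping table.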

  • netflow

    This parsing method is used to process data in all supported NetFlow protocol formats: NetFlow v5, NetFlow v9, and IPFIX.

    If you select this parsing method, you can use the predefined rules for converting events to the KUMA format by clicking Apply default mapping. This takes into account the source fields of all NetFlow versions (NetFlow v5, NetFlow v9, and IPFIX).

    If the netflow parsing method is selected for the main parsing, extra normalization is not available.

    The default mapping rules for the netflow parsing method do not specify the protocol type in KUMA event fields. When parsing data in NetFlow format, on the Enrichment normalizer tab, you must create a constant data enrichment rule that adds the netflow value to the DeviceProduct target field.

  • netflow5

    This parsing method is used to process data in the NetFlow v5 format.

    If you select this parsing method, you can use the predefined rules for converting events to the KUMA format by clicking Apply default mapping. If the netflow5 parsing method is selected for the main parsing, extra normalization is not available.

    The default mapping rules for the netflow5 parsing method do not specify the protocol type in KUMA event fields. When parsing data in NetFlow format, on the Enrichment normalizer tab, you must create a constant data enrichment rule that adds the netflow value to the DeviceProduct target field.

  • netflow9

    This parsing method is used to process data in the NetFlow v9 format.

    If you select this parsing method, you can use the predefined rules for converting events to the KUMA format by clicking Apply default mapping. If the netflow9 parsing method is selected for the main parsing, extra normalization is not available.

    The default mapping rules for the netflow9 parsing method do not specify the protocol type in KUMA event fields. When parsing data in NetFlow format, on the Enrichment normalizer tab, you must create a constant data enrichment rule that adds the netflow value to the DeviceProduct target field.

  • sflow5

    This parsing method is used to process data in sflow5 format.

    If you select this parsing method, you can use the predefined rules for converting events to the KUMA format by clicking Apply default mapping. If the sflow5 parsing method is selected for the main parsing, extra normalization is not available.

  • ipfix

    This parsing method is used to process IPFIX data.

    If you select this parsing method, you can use the predefined rules for converting events to the KUMA format by clicking Apply default mapping. If the ipfix parsing method is selected for the main parsing, extra normalization is not available.

    The default mapping rules for the ipfix parsing method do not specify the protocol type in KUMA event fields. When parsing data in NetFlow format, on the Enrichment normalizer tab, you must create a constant data enrichment rule that adds the netflow value to the DeviceProduct target field.

  • sql

    The normalizer uses this parsing method to process data obtained by making a selection from the database.

Required setting.

Keep raw event

Keeping raw events in the newly created normalized event. Available values:

  • Don't save—do not save the raw event. This is the default setting.
  • Only errors—save the raw event in the Raw field of the normalized event if errors occurred when parsing it. This value is useful for debugging because an event having a non-empty Raw field indicates a problem.

    If fields containing the names *Address or *Date* do not comply with normalization rules, these fields are ignored. No normalization error occurs in this case, and the values of the fields are not displayed in the Raw field of the normalized event even if Only errors is selected in the Keep raw event drop-down list.

  • Always—always save the raw event in the Raw field of the normalized event.

Required setting. This setting is not available for extra parsing rules.

Keep extra fields

Keep fields and values for which no mapping rules are configured. This data is saved as an array in the Extra event field. Normalized events can be searched and filtered based on the data stored in the Extra field.

Filtering based on data from the Extra event field

Conditions for filters based on data from the Extra event field:

  • Condition—If.
  • Left operand—event field. In this event field, you can specify one of the following values:
    • Extra field.
    • Value from the Extra field in the following format:

      Extra.<field name>

      For example, Extra.app.

      You must specify the value manually.

    • Value from the array written to the Extra field in the following format:

      Extra.<field name>.<array element>

      For example, Extra.array.0.

      The values in the array are numbered starting from 0. You must specify the value manually. To work with a value in the Extra field at a depth of 3 and lower, you must use backticks ``, for example, `Extra.lev1.lev2.lev3`.

  • Operator – =.
  • Right operand—constant.
  • Value—the value by which you need to filter events.

By default, no extra fields are saved.

Required setting.

Description

Description of the resource. Maximum length of the description: 4000 Unicode characters.

This setting is not available for extra parsing rules.

Event examples

Example of data that you want to process.

This setting is not available for the following parsing methods: netflow5, netflow9, sflow5, ipfix, and sql.

If the event was parsed successfully, and the type of the data obtained from the raw event matches the type of the KUMA field, the Event examples field is filled with data obtained from the raw event. For example, the "192.168.0.1" value in quotation marks does not appear in the SourceAddress field. However, the 192.168.0.1 value is displayed in the Event examples field.

Mapping

Settings for configuring the mapping of source event fields to fields of the event in the KUMA format:

  • Source lists the names of the raw event fields that you want to convert into KUMA event fields.

    Next to the field names in the Source column, clicking the wrench icon opens the Conversion window, in which you can click Add conversion to create rules for modifying the source data before it is written to the KUMA event fields. You can reorder and delete the created rules. To change the position of a rule, click the drag icon next to it. To delete a rule, click the cross icon next to it.

    Available conversions

    Conversions are modifications that are applied to a value before it is written to the event field. You can select one of the following conversion types from the drop-down list:

    • entropy—used to convert the value of the source field using the information entropy calculation function and to place the result in a target field of the float type. The result of the conversion is a number. Calculating the information entropy makes it possible to detect, for example, DNS tunnels or compromised passwords, such as when a user enters the password instead of the login and the password gets logged in plain text.
    • lower—used to make all characters of the value lowercase.
    • upper—used to make all characters of the value uppercase.
    • regexp—used to convert a value using a specified RE2 regular expression. When you select this type of conversion, a field is displayed in which you must specify the RE2 regular expression.
    • substring—used to extract characters in a specified range of positions. When you select this type of conversion, the Start and End fields are displayed, in which you must specify the range of positions.
    • replace—used to replace a specified character sequence with another character sequence. When you select this type of conversion, the following fields are displayed:
      • Replace chars specifies the sequence of characters to be replaced.
      • With chars is the character sequence to be used instead of the character sequence being replaced.
    • trim—used to remove the specified characters from the beginning and the end of the event field value. When you select this type of conversion, the Chars field is displayed in which you must specify the characters. For example, if a trim conversion with the Micromon value is applied to Microsoft-Windows-Sysmon, the new value is soft-Windows-Sys.
    • append—used to append the specified characters to the end of the event field value. When you select this type of conversion, the Constant field is displayed in which you must specify the characters.
    • prepend—used to prepend the specified characters to the beginning of the event field value. When you select this type of conversion, the Constant field is displayed in which you must specify the characters.
    • replace with regexp—used to replace RE2 regular expression results with the specified character sequence. When you select this type of conversion, the following fields are displayed:
      • Expression is the RE2 regular expression whose results you want to replace.
      • With chars is the character sequence to be used instead of the character sequence being replaced.
    • Converting encoded strings to text:
      • decodeHexString—used to convert a HEX string to text.
      • decodeBase64String—used to convert a Base64 string to text.
      • decodeBase64URLString—used to convert a Base64url string to text.

      When converting a corrupted string, or if a conversion error occurs, corrupted data may be written to the event field.

      During event enrichment, if the length of the encoded string exceeds the size of the field of the normalized event, the string is truncated and is not decoded.

      If the length of the decoded string exceeds the size of the event field into which the decoded value is to be written, the string is truncated to fit the size of the event field.
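A few of the conversions above can be sketched in Python to show their effect on a value; these are simplified stand-ins for illustration, not KUMA's implementations.

```python
import base64
import math
from collections import Counter

def trim(value, chars):
    # Remove the listed characters from both ends, like the trim conversion.
    return value.strip(chars)

def decode_base64_string(value):
    # Decode a Base64 string to text, like decodeBase64String.
    return base64.b64decode(value).decode("utf-8", errors="replace")

def entropy(value):
    # Shannon information entropy in bits per character; uniformly random
    # strings (e.g. DNS tunnel payloads) score high, repetitive strings low.
    counts = Counter(value)
    total = len(value)
    return -sum(n / total * math.log2(n / total) for n in counts.values())

trimmed = trim("Microsoft-Windows-Sysmon", "Micromon")
```

The trim call reproduces the Micromon example from the list above, yielding soft-Windows-Sys.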

    Conversions when using the extended event schema

    Whether or not a conversion can be used depends on the type of extended event schema field being used:

    • For an additional field of the "String" type, all types of conversions are available.
    • For fields of the "Number" and "Float" types, the following types of conversions are available: regexp, substring, replace, trim, append, prepend, replaceWithRegexp, decodeHexString, decodeBase64String, and decodeBase64URLString.
    • For fields of "Array of strings", "Array of numbers", and "Array of floats" types, the following types of conversions are available: append and prepend.

     

  • KUMA field lists fields of KUMA events. You can search for fields by entering their names.
  • Label is a unique custom label for event fields that begin with DeviceCustom* and Flex*.

You can add new table rows or delete table rows. To add a new table row, click Add row. To delete a single row in the table, click the cross icon next to it. To delete all table rows, click Clear all.

If you have loaded data into the Event examples field, the table will have an Examples column containing examples of values carried over from the raw event field to the KUMA event field.

If the size of the KUMA event field is less than the length of the value placed in it, the value is truncated to the size of the event field.

Page top
[Topic 221932]

Extended event schema

You can use the extended event schema fields in normalizers for normalizing events and in other KUMA resources, for example, as widget fields or to filter and search for events. You can view the list of all extended event schema fields that exist in KUMA in the Settings → Extended event schema fields section. The list of extended event schema fields is the same for all tenants.

Only users with the General administrator, Tenant administrator, Tier 2 analyst, Tier 1 analyst, Junior analyst, Read shared resources, and Manage shared resources roles can view the table of extended event schema fields.

The Extended event schema fields table contains the following information:

  • Type—Data type of the extended event schema field.

    Possible data types:

    Type

    Data type

    S

    String.

    N

    Number.

    F

    Floating point number.

    SA

    Array of strings.

    The order of the array elements is the same as the order of the elements of the raw event.

    NA

    Array of integers.

    The order of the array elements is the same as the order of the elements of the raw event.

    FA

    Array of floats.

    The order of the array elements is the same as the order of the elements of the raw event.

  • Field name—Name of the extended event schema field, without a type.

    You can click the name to edit the settings of the extended event schema field.

  • Status—Whether the extended event schema field can be used in resources.

    You can Enable or Disable the toggle switch to allow or forbid using this extended event schema field in new resources. However, a disabled field is still used in resource configurations that are already operational, until you manually remove the field from the configuration; the field also remains available in the list of table columns in the Events section for managing old events.

    Only a user with the General administrator role can disable an extended event schema field.

  • Update date—Date and time of the last modification of the extended event schema field.
  • Created by—Name of the user that created the extended event schema field.
  • Dependencies—Number of KUMA resources, dashboard layouts, reports, presets, and field sets for searching event sources that use the extended event schema field.

    You can click the number to open a pane with a table of all resources and other KUMA entities that are using this field. For each dependency, the table displays the name, tenant (only for resources), and type. Dependencies in the table are sorted by name. Clicking the name of a dependency takes you to its page (except for dashboard layouts, presets, and saved user queries).

    You can view the dependencies of an extended event schema field only for resources and entities to whose tenants you have access. If you do not have access to the tenant, its resources are not displayed in the table, but still count towards the number of dependencies.

  • Description—Text description of the field.

By default, the table of extended event schema fields is sorted by update date in descending order. If necessary, you can sort the table by clicking a column heading and selecting Ascending or Descending; you can also use context search by field name.

By default, the following service extended event schema fields are automatically added to KUMA:

  • KL_EventRoute, type S for storing information about the route of the event.

    You can use this field in normalizers, as a key or value in active lists, in enrichment rules, as a query field in data collection and analysis rules, in correlation rules. You cannot use this field to detect event sources.

  • The following fields are added to a correlation event:
    • KL_CorrelationRulePriority, type N
    • KL_SourceAssetDisplayName, type S
    • KL_DestinationAssetDisplayName, type S
    • KL_DeviceAssetDisplayName, type S
    • KL_SourceAccountDisplayName, type S
    • KL_DestinationAccountDisplayName, type S

    You cannot use these service fields to search for events.

You cannot edit, delete, export, or disable service fields. All extended event schema fields with the KL_ prefix are service fields and can be managed only from Kaspersky servers. We do not recommend using the KL_ prefix when adding new extended event schema fields.

In this section

Adding extended event schema fields

Editing extended event schema fields

Importing and exporting extended event schema fields

Deleting extended event schema fields

Using extended event schema fields in normalizers

Page top
[Topic 294885]

Adding extended event schema fields

Users with the General administrator, Tenant administrator, Tier 2 analyst, Tier 1 analyst, Junior analyst, and Manage shared resources roles can add new extended event schema fields.

To add an extended event schema field:

  1. In the KUMA Console, in the Settings → Extended event schema fields section, click the Add button in the upper part of the table.

    This opens the Create extended schema pane.

  2. Enable or disable the Status toggle switch to enable or disable this extended event schema field for resources.

    The toggle switch is turned on by default. A disabled field remains available in the list of table columns in the Events section for managing old events.

  3. In the Type field, select the data type of the extended event schema field.

    Possible data types

    Type

    Data type

    S

    String.

    N

    Number.

    F

    Floating point number.

    SA

    Array of strings.

    The order of the array elements is the same as the order of the elements of the raw event.

    NA

    Array of integers.

    The order of the array elements is the same as the order of the elements of the raw event.

    FA

    Array of floats.

    The order of the array elements is the same as the order of the elements of the raw event.

  4. In the Name field, specify the name of the extended event schema field.

    Consider the following when naming extended event schema fields:

    • The name must be unique within the KUMA instance.
    • Names are case-sensitive. For example, Field_name and field_name are different names.
    • You can use Latin and Cyrillic characters and numerals. Spaces and the " ~ ` @ # $ % ^ & * ( ) + - [ ] { } | \ | / . " < > ; ! , : ? = characters are not allowed.
    • If you want to use the extended event schema fields to search for event sources, you can only use Latin characters and numerals.
    • The maximum length is 128 characters.
  5. If necessary, in the Description field, enter a description for the extended event schema field.

    We recommend describing the purpose of the extended event schema field. Only Unicode characters are allowed in the description. The maximum length is 256 characters.

  6. Click the Save button.

A new extended event schema field is added and displayed at the top of the table. An audit event is generated for the creation of the extended event schema field. If you have enabled the field, you can use it in normalizers and when configuring resources.
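The naming rules from step 4 can be approximated with a Python validator. This is a hypothetical helper; the character classes and the underscore allowance are inferred from the stated rules and the Field_name example above.

```python
import re

# Latin or Cyrillic letters, digits, and underscore; 1 to 128 characters.
# Spaces and the punctuation characters listed in step 4 are rejected
# because they fall outside this character class.
NAME_RE = re.compile(r"^[A-Za-z0-9_\u0400-\u04FF]{1,128}$")

def is_valid_field_name(name):
    return bool(NAME_RE.match(name))
```

Names are compared case-sensitively, so Field_name and field_name remain distinct values.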

Page top
[Topic 294887]

Editing extended event schema fields

Users with the General administrator, Tenant administrator, Tier 2 analyst, Tier 1 analyst, Junior analyst, and Manage shared resources roles can edit existing extended event schema fields.

To edit an extended event schema field:

  1. In the KUMA Console, in the Settings → Extended event schema fields section, click the name of the field that you want to edit.

    This opens the Edit extended schema pane. This pane displays the settings of the selected field, as well as the Dependencies table with a list of resources, dashboard layouts, reports, presets, and sets of fields for finding event sources that use this field. Only resources to whose tenants you have access are displayed. If the field is used by resources to whose tenant you do not have access, such resources are not displayed in the table. Resources in the table are sorted by name.

    Clicking the name of a resource or entity takes you to its page (except for dashboard resources, presets, and saved user queries).

  2. Make the changes you need in the available settings.

    You can edit the Type and Field name settings only if the extended event schema field does not have dependencies. You can edit the Status and Description settings for any extended event scheme field. However, a field with the Disabled status is still used in resource configurations that are already operational, until you manually remove the field from the configuration; the field also remains available in the list of table columns in the Events section for managing old events.

    Disabling an extended event schema field using the Status field requires the General administrator role.

  3. Click the Save button.

The extended event schema field is updated. An audit event is generated about the modification of the field.

Page top
[Topic 294888]

Importing and exporting extended event schema fields

You can add multiple new extended event schema fields at once by importing them from a JSON file. You can also export all extended event schema fields with information about them to a file, for example, to propagate the list of fields to other KUMA instances to maintain resources.

Users with the General administrator, Tenant administrator, Tier 2 analyst, Tier 1 analyst, Junior analyst, and Manage shared resources roles can import and export extended event schema fields. Users with the Read shared resources role can only export extended event schema fields.

To import extended event schema fields into KUMA from a file:

  1. In the KUMA Console, in the Settings → Extended event schema fields section, click the Import button.
  2. This opens a window; in that window, select a JSON file with a list of extended event schema field objects.

    Example JSON file:

    [
      {"kind": "SA", "name": "<fieldName1>", "description": "<description1>", "disabled": false},
      {"kind": "N", "name": "<fieldName2>", "description": "<description2>", "disabled": false},
      ....
      {"kind": "FA", "name": "<fieldNameX>", "description": "<descriptionX>", "disabled": false}
    ]

    When importing fields from a file, their names are checked for possible conflicts with fields of the same type. If a field with the same name and type already exists in KUMA, such fields are not imported from the file.
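For illustration, such an import file and the duplicate check can be sketched in Python. The field names and the find_conflicts helper below are hypothetical; only the file layout follows the example above.

```python
import json

# Hypothetical fields; only the file layout follows the documented example.
fields = [
    {"kind": "SA", "name": "AccountTags", "description": "Tags for the account", "disabled": False},
    {"kind": "N", "name": "RetryCount", "description": "Number of retries", "disabled": False},
]

def find_conflicts(new_fields, existing_fields):
    # Mimics the documented import check: a field is skipped if a field
    # with the same name and type (kind) already exists in KUMA.
    existing = {(f["kind"], f["name"]) for f in existing_fields}
    return [f for f in new_fields if (f["kind"], f["name"]) in existing]

# The JSON payload that would be saved to the import file.
payload = json.dumps(fields, indent=2)
```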

Extended event schema fields are imported from the file to KUMA. An audit event about the import of fields is generated, and a separate audit event is generated for each added field.

To export extended event schema fields to a file:

  1. In the KUMA Console, go to the Settings → Extended event schema fields section.
  2. If you want to export specific extended event schema fields:
    1. Select the check boxes in the first column of the table for the required fields.

      You cannot select service fields.

    2. Click the Export selected button in the upper part of the table.
  3. If you want to export all extended event schema fields, click the Export all button in the upper part of the table.

A JSON file with a list of extended event schema field objects and information about them is downloaded.

Page top
[Topic 294889]

Deleting extended event schema fields

Only a user with the General administrator role can delete extended event schema fields.

You can delete only those extended event schema fields that are not service fields, that have the Disabled status, and that are not used in KUMA resources and other entities (do not have dependencies). We recommend deleting extended event schema fields after enough time has passed to make sure that all events in which the field was used have been deleted from KUMA. When you delete a field, it is no longer displayed in event tips.

To delete extended event schema fields:

  1. In the KUMA Console, go to the Settings → Extended event schema fields section.
  2. Select the check boxes in the first column of the table next to one or more fields that you want to delete.

    To select all fields, you can select the check box in the heading of the first column.

  3. Click the Delete button in the upper part of the table.

    The Delete button is active only if all selected fields are disabled and have no dependencies. If at least one field is enabled or has a dependency, the button is inactive.

    If you want to delete a field that is used in at least one KUMA resource (has a dependency), but you do not have access to its tenant, the Delete button is active when this field is selected, but an error is displayed when you try to delete it.

The selected fields are deleted. An audit event is generated about the deletion of the fields.

Page top
[Topic 294890]

Using extended event schema fields in normalizers

When using extended event schema fields, the general limit on the maximum size of an event that can be processed by the collector remains the same: 4 MB. Information about the types of extended event schema fields is shown in the table below (step 6 of the instructions).

Using many unique fields of the extended event schema can reduce the performance of the system, increase the amount of disk space required for storing events, and make the information difficult to understand.

We recommend consciously choosing a minimal set of additional fields of the extended event schema that you want to use in normalizers and correlation.

To use the fields of the extended event schema:

  1. Open an existing normalizer or create a new normalizer.
  2. Specify the basic settings of the normalizer.
  3. Click Add row.
  4. For the Source setting, enter the name of the source field in the raw event.
  5. For the KUMA field setting, start typing the name of the extended event schema field and select the field from the drop-down list.

    The extended event schema fields in the drop-down list have names in the <type>.<field name> format.

  6. Click the Save button to save the event normalizer.

The normalizer is saved with the selected extended event schema field.

If the data in a raw event field does not match the type of the KUMA field and type conversion cannot be performed, the value is not saved during the normalization of events. For example, the string test cannot be written to the DeviceCustomNumber1 KUMA field of the Number type.
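This behavior can be sketched as follows. The field-to-type mapping below is illustrative, not the full KUMA schema.

```python
# A minimal sketch of the type check applied during normalization,
# assuming simple type coercion.
FIELD_TYPES = {"DeviceCustomNumber1": int, "DeviceCustomFloatingPoint1": float}

def normalize_value(field: str, raw: str):
    target_type = FIELD_TYPES.get(field, str)
    try:
        return target_type(raw)
    except ValueError:
        # Type conversion failed: the value is not saved.
        return None
```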

If you want to minimize the load on the storage server when searching events, preparing reports, and performing other operations on events in storage, use KUMA event schema fields as your first preference, extended event schema fields as your second preference, and the Extra fields as your last resort.

Page top
[Topic 294891]

Enrichment in the normalizer

Expand all | Collapse all

When creating event parsing rules in the normalizer settings window, on the Enrichment tab, you can configure the rules for adding extra data to the fields of the normalized event using enrichment rules. Enrichment rules are stored in the settings of the normalizer where they were created.

You can create enrichment rules by clicking the Add enrichment button. To delete an enrichment rule, click cross-black next to it. Extended event schema fields can be used for event enrichment. Available enrichment rule settings are listed in the table below.

Available enrichment rule settings

Setting

Description

Source kind

Enrichment type. Depending on the selected enrichment type, you may see advanced settings that will also need to be completed. Available types of enrichment:

  • constant

    This type of enrichment is used when a constant needs to be added to an event field. Available enrichment type settings are listed in the table below.

    Available enrichment type settings

    Setting

    Description

    Constant

    The value to be added to the event field. Maximum length of the value: 255 Unicode characters. If you leave this field blank, the existing event field value is removed.

    Target field

    The KUMA event field that you want to populate with the data.

    If you are using the event enrichment functions for extended schema fields of "String", "Number", or "Float" type with a constant, the constant is added to the field.

    If you are using the event enrichment functions for extended schema fields of "Array of strings", "Array of numbers", or "Array of floats" type with a constant, the constant is added to the elements of the array.
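A minimal sketch of these semantics, assuming that "added to the elements of the array" means the constant is appended to the array as a new element; the helper name is hypothetical.

```python
def enrich_with_constant(current_value, constant):
    # Hypothetical helper illustrating the constant enrichment described
    # above. Assumption: for array fields, the constant is appended to
    # the array as a new element.
    if isinstance(current_value, list):
        return current_value + [constant]
    # For scalar fields, the constant becomes the field value;
    # a blank constant removes the existing value.
    return constant if constant != "" else None
```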

  • dictionary

    This type of enrichment is used if you need to add a value from the dictionary of the Dictionary type. Available enrichment type settings are listed in the table below.

    Available enrichment type settings

    Setting

    Description

    Dictionary name

    The dictionary from which the values are to be taken.

    Key fields

    Event fields whose values are to be used for selecting a dictionary entry. To add an event field, click Add field. You can add multiple event fields.

    If you are using event enrichment with the dictionary type selected as the Source kind setting, and an array field is specified in the Key fields setting, when an array is passed as the dictionary key, the array is serialized into a string in accordance with the rules of serializing a single value in the TSV format.

    Example: The Key fields setting of the enrichment uses the SA.StringArrayOne extended schema field. The SA.StringArrayOne extended schema field contains the values "a", "b", "c". The following values are passed to the dictionary as the key: ['a','b','c'].

    If the Key fields setting uses an array extended schema field and a regular event schema field, the field values are separated by the "|" character when the dictionary is queried.

    Example: The Key fields setting uses the SA.StringArrayOne extended schema field and the Code string field. The SA.StringArrayOne extended schema field contains the values "a", "b", "c", and the Code string field contains the myCode sequence of characters. The following values are passed to the dictionary as the key: ['a','b','c']|myCode.
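The key construction in these examples can be sketched as follows. The helper is hypothetical; only the serialization shown in the examples above is reproduced.

```python
def build_dictionary_key(key_field_values):
    # Hypothetical helper mirroring the key construction described above:
    # an array field is serialized into a single string (as in the example,
    # ["a", "b", "c"] becomes "['a','b','c']"), and values of multiple key
    # fields are separated by the "|" character.
    parts = []
    for value in key_field_values:
        if isinstance(value, list):
            parts.append("[" + ",".join("'%s'" % v for v in value) + "]")
        else:
            parts.append(str(value))
    return "|".join(parts)
```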

  • table

    This type of enrichment is used if you need to add a value from the dictionary of the Table type.

    When this enrichment type is selected in the Dictionary name drop-down list, select the dictionary for providing the values. In the Key fields group of settings, click the Add field button to select the event fields whose values are used for dictionary entry selection.

    In the Mapping table, configure the dictionary fields to provide data and the event fields to receive data:

    • In the Dictionary field column, select the dictionary field. The available fields depend on the selected dictionary resource.
    • In the KUMA field column, select the event field to which the value is written. For some of the selected fields (*custom* and *flex*), in the Label column, you can specify a name for the data written to them.

    New table rows can be added by clicking the Add new element button. Rows can be deleted by clicking the cross button.

  • event

    This type of enrichment is used when you need to write a value from another event field to the current event field. Available enrichment type settings are listed in the table below.

    Available enrichment type settings

    Setting

    Description

    Target field

    The KUMA event field that you want to populate with the data.

    Source field

    The event field whose value is written to the target field.

    Clicking wrench-new opens the Conversion window, in which you can click Add conversion to create rules for modifying the source data before writing them to the KUMA event fields. You can reorder and delete created rules. To change the position of a rule, click DragIcon next to it. To delete a rule, click cross-black next to it.

    Available conversions

    Conversions are modifications that are applied to a value before it is written to the event field. You can select one of the following conversion types from the drop-down list:

    • entropy is used for converting the value of the source field using the information entropy calculation function and placing the conversion result in the target field of the float type. The result of the conversion is a number. Calculating the information entropy allows detecting DNS tunnels or compromised passwords, for example, when a user enters the password instead of the login and the password gets logged in plain text.
    • lower is used to make all characters of the value lowercase.
    • upper is used to make all characters of the value uppercase.
    • regexp is used to convert a value using a specified RE2 regular expression. When you select this type of conversion, a field is displayed in which you must specify the RE2 regular expression.
    • substring is used to extract characters in a specified range of positions. When you select this type of conversion, the Start and End fields are displayed, in which you must specify the range of positions.
    • replace is used to replace a specified character sequence with another character sequence. When you select this type of conversion, the following fields are displayed:
      • Replace chars specifies the sequence of characters to be replaced.
      • With chars is the character sequence to be used instead of the character sequence being replaced.
    • trim removes the specified characters from the beginning and from the end of the event field value. When you select this type of conversion, the Chars field is displayed in which you must specify the characters. For example, if a trim conversion with the Micromon value is applied to Microsoft-Windows-Sysmon, the new value is soft-Windows-Sys.
    • append appends the specified characters to the end of the event field value. When you select this type of conversion, the Constant field is displayed in which you must specify the characters.
    • prepend prepends the specified characters to the beginning of the event field value. When you select this type of conversion, the Constant field is displayed in which you must specify the characters.
    • replace with regexp is used to replace RE2 regular expression results with the specified character sequence. When you select this type of conversion, the following fields are displayed:
      • Expression is the RE2 regular expression whose results you want to replace.
      • With chars is the character sequence to be used instead of the character sequence being replaced.
    • Converting encoded strings to text:
      • decodeHexString—used to convert a HEX string to text.
      • decodeBase64String—used to convert a Base64 string to text.
      • decodeBase64URLString—used to convert a Base64url string to text.

      When converting a corrupted string or if a conversion error occurs, corrupted data may be written to the event field.

      During event enrichment, if the length of the encoded string exceeds the size of the field of the normalized event, the string is truncated and is not decoded.

      If the length of the decoded string exceeds the size of the event field into which the decoded value is to be written, the string is truncated to fit the size of the event field.
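As an illustrative sketch of two of these conversions: the entropy formula below is a plausible reading (Shannon entropy), not a documented implementation, and trim behaves like Python's str.strip.

```python
import math
from collections import Counter

def entropy(value: str) -> float:
    # Shannon entropy of the value; a plausible formula for the entropy
    # conversion (the exact implementation is not documented).
    counts = Counter(value)
    total = len(value)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def trim(value: str, chars: str) -> str:
    # The trim conversion: remove any of the given characters from both
    # ends of the value.
    return value.strip(chars)

# trim("Microsoft-Windows-Sysmon", "Micromon") -> "soft-Windows-Sys"
```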

    Conversions when using the extended event schema

    Whether or not a conversion can be used depends on the type of extended event schema field being used:

    • For an additional field of the "String" type, all types of conversions are available.
    • For fields of the "Number" and "Float" types, the following types of conversions are available: regexp, substring, replace, trim, append, prepend, replaceWithRegexp, decodeHexString, decodeBase64String, and decodeBase64URLString.
    • For fields of "Array of strings", "Array of numbers", and "Array of floats" types, the following types of conversions are available: append and prepend.

     

    When using enrichment of events that have event selected as the Source kind and the extended event schema fields are used as arguments, the following special considerations apply:

    • If the source extended event schema field has the "Array of strings" type, and the target extended event schema field has the "String" type, the values are written to the target extended event schema field in TSV format.

      Example: The SA.StringArray extended event schema field contains the values "string1", "string2", "string3". An event enrichment operation is performed. The result of the event enrichment operation is written to the DeviceCustomString1 event field. The DeviceCustomString1 event field contains the values: ["string1", "string2", "string3"].

    • If the source and target extended event schema fields have the "Array of strings" type, values of the source extended event schema field are added to the values of the target extended event schema field, and the "," character is used as the delimiter character.

      Example: The SA.StringArrayOne extended event schema field contains the ["string1", "string2", "string3"] values, and the SA.StringArrayTwo extended event schema field contains the ["string4", "string5", "string6"] values. An event enrichment operation is performed. The result of the event enrichment operation is written to the SA.StringArrayTwo extended event schema field. The SA.StringArrayTwo extended event schema field contains the values: ["string4", "string5", "string6", "string1", "string2", "string3"].
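These two cases can be sketched as follows. The helper is hypothetical, and the tab-joined TSV serialization for the array-to-string case is an assumption (the exact serialization is not shown in the examples).

```python
def enrich_from_event_field(target, source):
    # Hypothetical helper for the two array cases described above.
    if isinstance(source, list) and isinstance(target, list):
        # Array of strings -> array of strings: source values are
        # appended to the target values.
        return target + source
    if isinstance(source, list):
        # Array of strings -> string: written in TSV format
        # (tab-joined here as an assumption).
        return "\t".join(source)
    return source
```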

  • template

    This type of enrichment is used when you need to write a value obtained by processing Go templates into the event field. We recommend matching the value and the size of the field. Available enrichment type settings are listed in the table below.

    Available enrichment type settings

    Setting

    Description

    Template

    The Go template. Event field names are passed in the {{.EventField}} format, where EventField is the name of the event field from which the value must be passed to the script, for example, {{.DestinationAddress}} attacked from {{.SourceAddress}}.

    Target field

    The KUMA event field that you want to populate with the data.

    If template is selected as the Source kind, the target field has the "String" type, and the source field is an extended event schema field containing an array of strings, you can use one of the following examples for the template:

    • {{.SA.StringArrayOne}}
    • {{- range $index, $element := .SA.StringArrayOne -}}

      {{- if $index}}, {{end}}"{{$element}}"{{- end -}}

    To convert the data in an array field in a template into the TSV format, use the toString function, for example:

    template {{toString .SA.StringArray}}
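The second template above can be sketched in Python to show the output it produces; the helper name is hypothetical.

```python
def render_quoted_list(values):
    # Python equivalent of the second Go template above, which renders
    # array elements as a comma-separated list of quoted strings:
    # {{- range $index, $element := .SA.StringArrayOne -}}
    # {{- if $index}}, {{end}}"{{$element}}"{{- end -}}
    return ", ".join('"%s"' % v for v in values)
```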

Required setting.

Target field

The KUMA event field that you want to populate with the data.

Required setting. This setting is not available for the enrichment source of the Table type.

Page top
[Topic 242993]

Conditions for forwarding data to an extra normalizer

When creating additional event parsing rules, you can specify the conditions. When these conditions are met, the events are sent to the created parsing rule for processing. Conditions can be specified in the Additional event parsing window, on the Extra normalization conditions tab. This tab is not available for the basic parsing rules.

Available settings:

  • Use raw event — If you want to send a raw event for extra normalization, select Yes in the Keep raw event drop-down list. The default value is No. We recommend passing a raw event to normalizers of the json and xml types. If you want to send a raw event for extra normalization to the second, third, and subsequent nesting levels, select Yes in the Keep raw event drop-down list at each nesting level.
  • Field to pass into normalizer—indicates the event field if you want only events with fields configured in normalizer settings to be sent for additional parsing.

    If this field is blank, the full event is sent to the extra normalizer for processing.

  • Set of filters—used to define complex conditions that must be met by the events received by the normalizer.

    You can use the Add condition button to add a string containing fields for identifying the condition (see below).

    You can use the Add group button to add a group of filters. Group operators can be switched between AND, OR, and NOT. You can add other condition groups and individual conditions to filter groups.

    You can swap conditions and condition groups by dragging them by the DragIcon icon; you can also delete them using the cross icon.

Filter condition settings:

  • Left operand and Right operand—used to specify the values to be processed by the operator.

    In the left operand, you must specify the source field of events coming into the normalizer. For example, if the eventType - DeviceEventClass mapping is configured in the Basic event parsing window, then in the Additional event parsing window on the Extra normalization conditions tab, you must specify eventType in the left operand field of the filter. Data is processed only as text strings.

  • Operators:
    • = – full match of the left and right operands.
    • startsWith – the left operand starts with the characters specified in the right operand.
    • endsWith – the left operand ends with the characters specified in the right operand.
    • match – the left operand matches the regular expression (RE2) specified in the right operand.
    • in – the left operand matches one of the values specified in the right operand.
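These operators can be sketched as follows. This is a simplified model: Python's re stands in for RE2 here, and whether "match" is a full or partial match is an assumption.

```python
import re

# Sketch of the filter operators listed above; operands are treated
# as text strings.
OPERATORS = {
    "=": lambda left, right: left == right,
    "startsWith": lambda left, right: left.startswith(right),
    "endsWith": lambda left, right: left.endswith(right),
    "match": lambda left, right: re.fullmatch(right, left) is not None,
    "in": lambda left, right: left in right,
}

def check_condition(left, op, right):
    return OPERATORS[op](left, right)
```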

The incoming data can be converted by clicking the wrench-new button. The Conversion window opens, where you can use the Add conversion button to create the rules for converting the source data before any actions are performed on them. In the Conversion window, you can swap the added rules by dragging them by the DragIcon icon; you can also delete them using the cross-black icon.

Available conversions

Conversions are modifications that are applied to a value before it is written to the event field. You can select one of the following conversion types from the drop-down list:

  • entropy is used for converting the value of the source field using the information entropy calculation function and placing the conversion result in the target field of the float type. The result of the conversion is a number. Calculating the information entropy allows detecting DNS tunnels or compromised passwords, for example, when a user enters the password instead of the login and the password gets logged in plain text.
  • lower is used to make all characters of the value lowercase.
  • upper is used to make all characters of the value uppercase.
  • regexp is used to convert a value using a specified RE2 regular expression. When you select this type of conversion, a field is displayed in which you must specify the RE2 regular expression.
  • substring is used to extract characters in a specified range of positions. When you select this type of conversion, the Start and End fields are displayed, in which you must specify the range of positions.
  • replace is used to replace a specified character sequence with another character sequence. When you select this type of conversion, the following fields are displayed:
    • Replace chars specifies the sequence of characters to be replaced.
    • With chars is the character sequence to be used instead of the character sequence being replaced.
  • trim removes the specified characters from the beginning and from the end of the event field value. When you select this type of conversion, the Chars field is displayed in which you must specify the characters. For example, if a trim conversion with the Micromon value is applied to Microsoft-Windows-Sysmon, the new value is soft-Windows-Sys.
  • append appends the specified characters to the end of the event field value. When you select this type of conversion, the Constant field is displayed in which you must specify the characters.
  • prepend prepends the specified characters to the beginning of the event field value. When you select this type of conversion, the Constant field is displayed in which you must specify the characters.
  • replace with regexp is used to replace RE2 regular expression results with the specified character sequence. When you select this type of conversion, the following fields are displayed:
    • Expression is the RE2 regular expression whose results you want to replace.
    • With chars is the character sequence to be used instead of the character sequence being replaced.
  • Converting encoded strings to text:
    • decodeHexString—used to convert a HEX string to text.
    • decodeBase64String—used to convert a Base64 string to text.
    • decodeBase64URLString—used to convert a Base64url string to text.

    When converting a corrupted string or if a conversion error occurs, corrupted data may be written to the event field.

    During event enrichment, if the length of the encoded string exceeds the size of the field of the normalized event, the string is truncated and is not decoded.

    If the length of the decoded string exceeds the size of the event field into which the decoded value is to be written, the string is truncated to fit the size of the event field.
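An illustrative Python sketch of these decoding conversions. Replacing undecodable bytes is an assumption; the documentation only warns that corrupted data may be written on conversion errors.

```python
import base64
import binascii

def decode_hex_string(value: str) -> str:
    # decodeHexString: convert a HEX string to text.
    return binascii.unhexlify(value).decode("utf-8", errors="replace")

def decode_base64_string(value: str) -> str:
    # decodeBase64String: convert a Base64 string to text.
    return base64.b64decode(value).decode("utf-8", errors="replace")

def decode_base64url_string(value: str) -> str:
    # decodeBase64URLString: convert a Base64url string to text.
    return base64.urlsafe_b64decode(value).decode("utf-8", errors="replace")
```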

Conversions when using the extended event schema

Whether or not a conversion can be used depends on the type of extended event schema field being used:

  • For an additional field of the "String" type, all types of conversions are available.
  • For fields of the "Number" and "Float" types, the following types of conversions are available: regexp, substring, replace, trim, append, prepend, replaceWithRegexp, decodeHexString, decodeBase64String, and decodeBase64URLString.
  • For fields of "Array of strings", "Array of numbers", and "Array of floats" types, the following types of conversions are available: append and prepend.

Page top
[Topic 221934]

Supported event sources

KUMA supports the normalization of events coming from systems listed in the "Supported event sources" table. Normalizers for these systems are included in the distribution kit.

Supported event sources

System name

Normalizer name

Type

Normalizer description

1C EventJournal

[OOTB] 1C EventJournal Normalizer

xml

Designed for processing the event log of the 1C system. The event source is the 1C log.

1C TechJournal

[OOTB] 1C TechJournal Normalizer

regexp

Designed for processing the technology event log. The event source is the 1C technology log.

Absolute Data and Device Security (DDS)

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

AhnLab Malware Defense System (MDS)

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Ahnlab UTM

[OOTB] Ahnlab UTM

regexp

Designed for processing events from the Ahnlab system. The event sources are system logs, operation logs, connections, and the IPS module.

AhnLabs MDS

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Apache Cassandra

[OOTB] Apache Cassandra file

regexp

Designed for processing events from the logs of the Apache Cassandra database version 4.0.

Aruba ClearPass

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Atlassian Confluence

[OOTB] Atlassian Jira Conflunce file

regexp

Designed for processing events of Atlassian Jira, Atlassian Confluence systems (Jira 9.12, Confluence 8.5) stored in files.

Atlassian Jira

[OOTB] Atlassian Jira Conflunce file

regexp

Designed for processing events of Atlassian Jira, Atlassian Confluence systems (Jira 9.12, Confluence 8.5) stored in files.

Avanpost FAM

[OOTB] Avanpost FAM syslog

regexp

Designed for processing events of the Avanpost Federated Access Manager (FAM) 1.9 received via syslog.

Avanpost IDM

[OOTB] Avanpost IDM syslog

regexp

Designed for processing events of the Avanpost IDM system received via syslog.

Avaya Aura Communication Manager

[OOTB] Avaya Aura Communication Manager syslog

regexp

Designed for processing some of the events received from Avaya Aura Communication Manager 7.1 via syslog.

Avigilon Access Control Manager (ACM)

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Ayehu eyeShare

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Arbor Pravail

[OOTB] Arbor Pravail syslog

Syslog

Designed for processing events of the Arbor Pravail system received via syslog.

Aruba AOS-S

[OOTB] Aruba Aruba AOS-S syslog

regexp

Designed for processing certain types of events received from Aruba network devices with Aruba AOS-S 16.10 firmware via syslog. The normalizer supports the following types of events: accounting events, ACL events, ARP protect events, authentication events, console events, loop protect events.

Barracuda Cloud Email Security Gateway

[OOTB] Barracuda Cloud Email Security Gateway syslog

regexp

Designed for processing events from Barracuda Cloud Email Security Gateway via syslog.

Barracuda Networks NG Firewall

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Barracuda Web Security Gateway

[OOTB] Barracuda Web Security Gateway syslog

Syslog

Designed for processing some of the events received from Barracuda Web Security Gateway 15.0 via syslog.

BeyondTrust Privilege Management Console

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

BeyondTrust’s BeyondInsight

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Bifit Mitigator

[OOTB] Bifit Mitigator Syslog

Syslog

Designed for processing events from the DDOS Mitigator protection system received via Syslog.

Bloombase StoreSafe

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

BMC CorreLog

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Bricata ProAccel

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Brinqa Risk Analytics

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Broadcom Symantec Advanced Threat Protection (ATP)

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Broadcom Symantec Endpoint Protection

[OOTB] Broadcom Symantec Endpoint Protection

regexp

Designed for processing events from the Symantec Endpoint Protection system.

Broadcom Symantec Endpoint Protection Mobile

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Broadcom Symantec Threat Hunting Center

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Canonical LXD

[OOTB] Canonical LXD syslog

Syslog

Designed for processing events received via syslog from the Canonical LXD system version 5.18.

Checkpoint

[OOTB] Checkpoint syslog

Syslog

[OOTB] Checkpoint syslog — designed for processing events received from the Checkpoint R81 firewall via the Syslog protocol.

[OOTB] Checkpoint Syslog CEF by CheckPoint — designed for processing events in CEF format received from the Checkpoint firewall via the Syslog protocol.

Cisco Access Control Server (ACS)

[OOTB] Cisco ACS syslog

regexp

Designed for processing events of the Cisco Access Control Server (ACS) system received via Syslog.

Cisco ASA

[OOTB] Cisco ASA and IOS syslog

Syslog

Designed for processing certain events of Cisco ASA and Cisco IOS devices received via syslog.

Cisco Email Security Appliance (WSA)

[OOTB] Cisco WSA AccessFile

regexp

Designed for processing the event log of the Cisco Email Security Appliance (WSA) proxy server, the access.log file.

Cisco Firepower Threat Defense

[OOTB] Cisco ASA and IOS syslog

Syslog

Designed for processing events for network devices: Cisco ASA, Cisco IOS, Cisco Firepower Threat Defense (version 7.2) received via syslog.

Cisco Identity Services Engine (ISE)

[OOTB] Cisco ISE syslog

regexp

Designed for processing events of the Cisco Identity Services Engine (ISE) system received via Syslog.

Cisco IOS

[OOTB] Cisco ASA and IOS syslog

Syslog

Designed for processing certain events of Cisco ASA and Cisco IOS devices received via syslog.

Cisco Netflow v5

[OOTB] NetFlow v5

netflow5

Designed for processing events from Cisco Netflow version 5.

Cisco NetFlow v9

[OOTB] NetFlow v9

netflow9

Designed for processing events from Cisco Netflow version 9.

Cisco Prime

[OOTB] Cisco Prime syslog

Syslog

Designed for processing events of the Cisco Prime system version 3.10 received via syslog.

Cisco Secure Email Gateway (SEG)

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Cisco Secure Firewall Management Center

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Cisco WLC

[OOTB] Cisco WLC syslog

regexp

Normalizer for some types of events received from Cisco WLC network devices (2500 Series Wireless Controllers, 5500 Series Wireless Controllers, 8500 Series Wireless Controllers, Flex 7500 Series Wireless Controllers) via Syslog.

Cisco WSA

[OOTB] Cisco WSA file

regexp

Designed for processing the event log of the Cisco WSA 14.2, 15.0 proxy server. The normalizer supports processing events generated using the template: %t %e %a %w/%h %s %2r %A %H/%d %c %D %Xr %?BLOCK_SUSPECT_USER_AGENT,MONITOR_SUSPECT_USER_AGENT?%<User-Agent:%!%-%. %) %q %k %u %m.

Citrix NetScaler

[OOTB] Citrix NetScaler syslog

regexp

Designed for processing events received from the Citrix NetScaler 13.7 load balancer, Citrix ADC NS13.0.

Claroty Continuous Threat Detection

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

CloudPassage Halo

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Codemaster Mirada

[OOTB] Codemaster Mirada syslog

Syslog

Designed for processing events of the Codemaster Mirada system received via syslog.

CollabNet Subversion Edge

[OOTB] CollabNet Subversion Edge syslog

Syslog

Designed for processing events received from the Subversion Edge (version 6.0.2) system via syslog.

CommuniGate Pro

[OOTB] CommuniGate Pro

regexp

Designed to process events of the CommuniGate Pro 6.1 system sent by the KUMA agent via TCP.

Corvil Network Analytics

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Cribl Stream

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

CrowdStrike Falcon Host

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

CyberArk Privileged Threat Analytics (PTA)

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

CyberPeak Spektr

[OOTB] CyberPeak Spektr syslog

Syslog

Designed for processing events of the CyberPeak Spektr system version 3 received via syslog.

Cyberprotect Cyber Backup

[OOTB] Cyberprotect Cyber Backup SQL

sql

Designed for processing events received by the connector from the database of the Cyber Backup system (version 16.5).

DeepInstinct

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Delinea Secret Server

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Digital Guardian Endpoint Threat Detection

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

BIND DNS server

[OOTB] BIND Syslog

[OOTB] BIND file

Syslog

regexp

[OOTB] BIND Syslog is designed for processing events of the BIND DNS server received via Syslog. [OOTB] BIND file is designed for processing event logs of the BIND DNS server.

Docsvision

[OOTB] Docsvision syslog

Syslog

Designed for processing audit events received from the Docsvision system via syslog.

Dovecot

[OOTB] Dovecot Syslog

Syslog

Designed for processing events of the Dovecot mail server received via Syslog. The event source is POP3/IMAP logs.

Dragos Platform

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Dr.Web Enterprise Security Suite

[OOTB] Syslog-CEF

Syslog

Designed for processing Dr.Web Enterprise Security Suite 13.0.1 events in the CEF format.

EclecticIQ Intelligence Center

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Edge Technologies AppBoard and enPortal

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Eltex ESR

[OOTB] Eltex ESR syslog

Syslog

Designed to process part of the events received from Eltex ESR network devices via syslog.

Eltex MES

[OOTB] Eltex MES syslog

regexp

Designed for processing events received from Eltex MES network devices via syslog (supported device models: MES14xx, MES24xx, MES3708P).

Eset Protect

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Extreme Networks Summit Wireless Controller

[OOTB] Extreme Networks Summit Wireless Controller

regexp

Normalizer for certain audit events of the Extreme Networks Summit Wireless Controller (model: WM3700, firmware version: 5.5.5.0-018R).

Factor-TS Dionis NX

[OOTB] Factor-TS Dionis NX syslog

regexp

Designed for processing some audit events received from the Dionis-NX system (version 2.0.3) via syslog.

F5 Advanced Web Application Firewall

[OOTB] F5 Advanced Web Application Firewall syslog

regexp

Designed for processing audit events received from the F5 Advanced Web Application Firewall system via syslog.

F5 Big­IP Advanced Firewall Manager (AFM)

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

FFRI FFR yarai

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

FireEye CM Series

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

FireEye Malware Protection System

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Forcepoint NGFW

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Forcepoint SMC

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Fortinet FortiAnalyzer

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Fortinet FortiGate

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Fortinet FortiGate

[OOTB] FortiGate syslog KV

Syslog

Designed for processing events from FortiGate firewalls (version 7.0) via syslog. The event source is FortiGate logs in key-value format.

Fortinet Fortimail

[OOTB] Fortimail

regexp

Designed for processing events of the FortiMail email protection system. The event source is Fortimail mail system logs.

Fortinet FortiSOAR

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

FreeBSD

[OOTB] FreeBSD file

regexp

Designed for processing events of the FreeBSD operating system (version 13.1-RELEASE) stored in a file.

The normalizer can process files produced by the praudit utility.

Example:

praudit -xl /var/audit/AUDITFILE >> file_name.log

FreeIPA

[OOTB] FreeIPA

json

Designed for processing events from the FreeIPA system. The event source is FreeIPA directory service logs.

FreeRADIUS

[OOTB] FreeRADIUS syslog

Syslog

Designed for processing events of the FreeRADIUS system received via Syslog. The normalizer supports events from FreeRADIUS version 3.0.

GajShield Firewall

[OOTB] GajShield Firewall syslog

regexp

Designed for processing part of the events received from the GajShield Firewall version GAJ_OS_Bulwark_Firmware_v4.35 via syslog.

Garda Monitor

[OOTB] Garda Monitor syslog

Syslog

Designed for processing events of the Garda Monitor system version 3.4 received via syslog.

Gardatech GardaDB

[OOTB] Gardatech GardaDB syslog

Syslog

Designed for processing events of the Gardatech Perimeter system version 5.3, 5.4 received via syslog.

Gardatech Perimeter

[OOTB] Gardatech Perimeter syslog

Syslog

Designed for processing events of the Gardatech Perimeter system version 5.3 received via syslog.

Gigamon GigaVUE

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

HAProxy

[OOTB] HAProxy syslog

Syslog

Designed for processing logs of the HAProxy system. The normalizer supports events of the HTTP log, TCP log, Error log type from HAProxy version 2.8.

HashiCorp Vault

[OOTB] HashiCorp Vault json

json

Designed for processing events received from the HashiCorp Vault system version 1.16 in JSON format. The normalizer package is available in KUMA 3.0 and later.

Huawei Eudemon

[OOTB] Huawei Eudemon

regexp

Designed for processing events from Huawei Eudemon firewalls. The event source is logs of Huawei Eudemon firewalls.

Huawei iManager 2000

[OOTB] Huawei iManager 2000 file

regexp

This normalizer supports processing some of the events of the Huawei iManager 2000 system that are stored in the \client\logs\rpc and \client\logs\deploy\ossDeployment files.

Huawei USG

[OOTB] Huawei USG Basic

Syslog

Designed for processing events received from Huawei USG security gateways via Syslog.

Huawei VRP

[OOTB] Huawei VRP syslog

regexp

Designed for processing some types of Huawei VRP system events received via syslog. The normalizer makes a partial selection of event data. The normalizer is available in KUMA 3.0 and later.

IBM InfoSphere Guardium

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Ideco UTM

[OOTB] Ideco UTM Syslog

Syslog

Designed for processing events received from Ideco UTM via Syslog. The normalizer supports events of Ideco UTM 14.7, 14.10, 17.5.

Illumio Policy Compute Engine (PCE)

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Imperva Incapsula

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Imperva SecureSphere

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Indeed Access Manager

[OOTB] Indeed Access Manager syslog

Syslog

Designed for processing events received from the Indeed Access Manager system via syslog.

Indeed PAM

[OOTB] Indeed PAM syslog

Syslog

Designed for processing events of Indeed PAM (Privileged Access Manager) version 2.6.

Indeed SSO

[OOTB] Indeed SSO xml

xml

Designed for processing events of the Indeed SSO (Single Sign-On) system. The normalizer supports KUMA 2.1.3 and later.

InfoWatch Person Monitor

[OOTB] InfoWatch Person Monitor SQL

sql

Designed for processing system audit events from the MS SQL database of InfoWatch Person Monitor 10.2.

InfoWatch Traffic Monitor

[OOTB] InfoWatch Traffic Monitor SQL

sql

Designed for processing events received by the connector from the database of the InfoWatch Traffic Monitor system.

Intralinks VIA

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

IPFIX

[OOTB] IPFIX

ipfix

Designed for processing events in the IP Flow Information Export (IPFIX) format.

Juniper JUNOS

[OOTB] Juniper - JUNOS

regexp

Designed for processing audit events received from Juniper network devices.

Kaspersky Anti Targeted Attack (KATA)

[OOTB] KATA

cef

Designed for processing alerts or events from the Kaspersky Anti Targeted Attack activity log.

Kaspersky CyberTrace

[OOTB] CyberTrace

regexp

Designed for processing Kaspersky CyberTrace events.

Kaspersky Endpoint Detection and Response (KEDR)

[OOTB] KEDR telemetry

json

Designed for processing Kaspersky EDR telemetry tagged by KATA. The event source is the Kafka topic EnrichedEventTopic.

KICS/KATA

[OOTB] KICS4Net v2.x

cef

Designed for processing KICS/KATA version 2.x events.

KICS/KATA

[OOTB] KICS4Net v3.x

Syslog

Designed for processing KICS/KATA version 3.x events.

KICS/KATA 4.2

[OOTB] Kaspersky Industrial CyberSecurity for Networks 4.2 syslog

Syslog

Designed for processing events received from the KICS/KATA 4.2 system via syslog.

Kaspersky KISG

[OOTB] Kaspersky KISG syslog

Syslog

Designed for processing events received from Kaspersky IoT Secure Gateway (KISG) 3.0 via syslog.

Open Single Management Platform

[OOTB] KSC

cef

Designed for processing Open Single Management Platform events received in CEF format.

Open Single Management Platform

[OOTB] KSC from SQL

sql

Designed for processing events received by the connector from the database of the Open Single Management Platform system.

Kaspersky Security for Linux Mail Server (KLMS)

[OOTB] KLMS Syslog CEF

Syslog

Designed for processing events from Kaspersky Security for Linux Mail Server in CEF format via Syslog.

Kaspersky Security for MS Exchange

[OOTB] Kaspersky Security for MS Exchange SQL

sql

Normalizer for Kaspersky Security for Exchange (KSE) 9.0 events stored in the database.

Kaspersky Secure Mail Gateway (KSMG)

[OOTB] KSMG Syslog CEF

Syslog

Designed for processing events of Kaspersky Secure Mail Gateway version 2.0 in CEF format via Syslog.

Kaspersky Web Traffic Security (KWTS)

[OOTB] KWTS Syslog CEF

Syslog

Designed for processing events received from Kaspersky Web Traffic Security in CEF format via Syslog.

Kaspersky Web Traffic Security (KWTS)

[OOTB] KWTS (KV)

Syslog

Designed for processing events of Kaspersky Web Traffic Security in key-value format.

Kemptechnologies LoadMaster

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Kerio Control

[OOTB] Kerio Control

Syslog

Designed for processing events of Kerio Control firewalls.

KUMA

[OOTB] KUMA forwarding

json

Designed for processing events forwarded from KUMA.

Libvirt

[OOTB] Libvirt syslog

Syslog

Designed for processing events of Libvirt version 8.0.0 received via syslog.

Lieberman Software ERPM

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Linux

[OOTB] Linux audit and iptables Syslog v1

Syslog

Designed for processing events of the Linux operating system. This normalizer does not support processing events in the "ENRICHED" format.

MariaDB

[OOTB] MariaDB Audit Plugin Syslog

Syslog

Designed for processing events coming from the MariaDB audit plugin over Syslog.

Microsoft 365 (Office 365)

[OOTB] Microsoft Office 365 json

json

This normalizer is designed for processing Microsoft 365 events.

Microsoft Active Directory Federation Service (AD FS)

[OOTB] Microsoft Products for KUMA 3

xml

Designed for processing Microsoft AD FS events. The [OOTB] Microsoft Products for KUMA 3 normalizer supports this event source in KUMA 3.0.1 and later versions.

Microsoft Active Directory Domain Service (AD DS)

[OOTB] Microsoft Products for KUMA 3

xml

Designed for processing Microsoft AD DS events. The [OOTB] Microsoft Products for KUMA 3 normalizer supports this event source in KUMA 3.0.1 and later versions.

Microsoft Defender

[OOTB] Microsoft Products, [OOTB] Microsoft Products for KUMA 3

xml

Designed for processing Microsoft Defender events.

Microsoft DHCP

[OOTB] MS DHCP file

regexp

Designed for processing Microsoft DHCP server events. The event source is Windows DHCP server logs.

Microsoft DNS

[OOTB] DNS Windows

regexp

Designed for processing Microsoft DNS server events. The event source is Windows DNS server logs.

Microsoft Exchange

[OOTB] Exchange CSV

csv

Designed for processing the event log of the Microsoft Exchange system. The event source is Exchange server MTA logs.

Microsoft Hyper-V

[OOTB] Microsoft Products, [OOTB] Microsoft Products for KUMA 3

xml

Designed for processing Microsoft Windows events.

The event source is Microsoft Hyper-V logs: Microsoft-Windows-Hyper-V-VMMS-Admin, Microsoft-Windows-Hyper-V-Compute-Operational, Microsoft-Windows-Hyper-V-Hypervisor-Operational, Microsoft-Windows-Hyper-V-StorageVSP-Admin, Microsoft-Windows-Hyper-V-Hypervisor-Admin, Microsoft-Windows-Hyper-V-VMMS-Operational, Microsoft-Windows-Hyper-V-Compute-Admin.

Microsoft IIS

[OOTB] IIS Log File Format

regexp

The normalizer processes events in the format described at https://learn.microsoft.com/en-us/windows/win32/http/iis-logging. The event source is Microsoft IIS logs.

Microsoft Network Policy Server (NPS)

[OOTB] Microsoft Products, [OOTB] Microsoft Products for KUMA 3

xml

The normalizer is designed for processing events of the Microsoft Windows operating system. The event source is Network Policy Server events.

Microsoft SCCM

[OOTB] Microsoft SCCM file

regexp

Designed for processing events of the Microsoft SCCM system version 2309. The normalizer supports processing of some of the events stored in the AdminService.log file.

Microsoft SharePoint Server

[OOTB] Microsoft SharePoint Server diagnostic log file

regexp

The normalizer supports processing part of Microsoft SharePoint Server 2016 events stored in diagnostic logs.

Microsoft Sysmon

[OOTB] Microsoft Products, [OOTB] Microsoft Products for KUMA 3

xml

This normalizer is designed for processing Microsoft Sysmon module events.

Microsoft Windows 7, 8.1, 10, 11

[OOTB] Microsoft Products, [OOTB] Microsoft Products for KUMA 3, [OOTB] Microsoft Products via KES WIN

xml

Designed for processing part of events from the Security, System, Application logs of the Microsoft Windows operating system. The "[OOTB] Microsoft Products via KES WIN" normalizer supports a limited number of audit event types sent to KUMA by Kaspersky Endpoint Security 12.6 for Windows via Syslog.

Microsoft PowerShell

[OOTB] Microsoft Products, [OOTB] Microsoft Products for KUMA 3, [OOTB] Microsoft Products via KES WIN

xml

Designed for processing Microsoft Windows PowerShell log events. The "[OOTB] Microsoft Products via KES WIN" normalizer supports a limited number of audit event types sent to KUMA by Kaspersky Endpoint Security 12.6 for Windows via Syslog.

Microsoft SQL Server

[Deprecated][OOTB] Microsoft SQL Server xml

xml

Designed for processing events of MS SQL Server versions 2008, 2012, 2014, 2016. The normalizer supports KUMA 2.1.3 and later.

Microsoft Windows Remote Desktop Services

[OOTB] Microsoft Products, [OOTB] Microsoft Products for KUMA 3, [OOTB] Microsoft Products via KES WIN

xml

Designed for processing Microsoft Windows events. The event source is the log at Applications and Services Logs - Microsoft - Windows - TerminalServices-LocalSessionManager - Operational. The "[OOTB] Microsoft Products via KES WIN" normalizer supports a limited number of audit event types sent to KUMA by Kaspersky Endpoint Security 12.6 for Windows via Syslog.

Microsoft Windows Server 2008 R2, 2012 R2, 2016, 2019, 2022

[OOTB] Microsoft Products, [OOTB] Microsoft Products for KUMA 3, [OOTB] Microsoft Products via KES WIN

xml

Designed for processing part of events from the Security, System logs of the Microsoft Windows Server operating system. The "[OOTB] Microsoft Products via KES WIN" normalizer supports a limited number of audit event types sent to KUMA by Kaspersky Endpoint Security 12.6 for Windows via Syslog.

Microsoft Windows XP/2003

[OOTB] SNMP. Windows {XP/2003}

json

Designed for processing events received from workstations and servers running Microsoft Windows XP, Microsoft Windows 2003 operating systems using the SNMP protocol.

Microsoft WSUS

[OOTB] Microsoft WSUS file

regexp

Designed for processing events of the Microsoft WSUS system stored in a file.

MikroTik

[OOTB] MikroTik syslog

regexp

Designed for processing events received from MikroTik devices via Syslog.

Minerva Labs Minerva EDR

[OOTB] Minerva EDR

regexp

Designed for processing events from the Minerva EDR system.

MongoDb

[OOTB] MongoDb syslog

Syslog

Designed for processing part of events received from the MongoDB 7.0 database via syslog.

Multifactor Radius Server for Windows

[OOTB] Multifactor Radius Server for Windows syslog

Syslog

Designed for processing events received from the Multifactor Radius Server 1.0.2 for Microsoft Windows via Syslog.

MySQL 5.7

[OOTB] MariaDB Audit Plugin Syslog

Syslog

Designed for processing events coming from the MariaDB audit plugin over Syslog.

NetApp ONTAP (AFF, FAM)

[OOTB] NetApp syslog, [OOTB] NetApp file

regexp

[OOTB] NetApp syslog is designed for processing events of the NetApp system (ONTAP 9.12) received via syslog.

[OOTB] NetApp file is designed for processing events of the NetApp system (ONTAP 9.12) stored in a file.

NetApp SnapCenter

[OOTB] NetApp SnapCenter file

regexp

Designed to process part of the events of the NetApp SnapCenter system (SnapCenter Server 5.0). The normalizer supports processing some of the events from the C:\Program Files\NetApp\SnapCenter WebApp\App_Data\log\SnapManagerWeb.*.log file. Supported event types in XML format from the SnapManagerWeb.*.log file: SmDiscoverPluginRequest, SmDiscoverPluginResponse, SmGetDomainsResponse, SmGetHostPluginStatusRequest, SmGetHostPluginStatusResponse, SmGetHostRequest, SmGetHostResponse, SmRequest. The normalizer also supports processing some of the events from the C:\Program Files\NetApp\SnapCenter WebApp\App_Data\log\audit.log file.

NetIQ Identity Manager

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

NetScout Systems nGenius Performance Manager

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Netskope Cloud Access Security Broker

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Netwrix Auditor

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Nextcloud

[OOTB] Nextcloud syslog

Syslog

Designed for processing events of Nextcloud version 26.0.4 received via syslog. The normalizer does not save information from the Trace field.

Nexthink Engine

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Nginx

[OOTB] Nginx regexp

regexp

Designed for processing Nginx web server log events.

NIKSUN NetDetector

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

One Identity Privileged Session Management

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

OpenLDAP

[OOTB] OpenLDAP

regexp

Designed for line-by-line processing of some events of the OpenLDAP 2.5 system in an auditlog.ldif file.

Open VPN

[OOTB] OpenVPN file

regexp

Designed for processing the event log of the OpenVPN system.

Oracle

[OOTB] Oracle Audit Trail

sql

Designed for processing database audit events received by the connector directly from an Oracle database.

OrionSoft Termit

[OOTB] OrionSoft Termit syslog

Syslog

Designed for processing events received from the OrionSoft Termit 2.2 system via syslog.

Orion soft zVirt

[OOTB] Orion Soft zVirt syslog

regexp

Designed for processing events of the Orion soft zVirt 3.1 virtualization system.

PagerDuty

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Palo Alto Cortex Data Lake

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Palo Alto Networks NGFW

[OOTB] PA-NGFW (Syslog-CSV)

Syslog

Designed for processing events from Palo Alto Networks firewalls received via Syslog in CSV format.

Palo Alto Networks PAN­OS

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Parsec ParsecNet

[OOTB] Parsec ParsecNet

sql

Designed for processing events received by the connector from the database of the Parsec ParsecNet 3 system.

Passwork

[OOTB] Passwork syslog

Syslog

Designed for processing events received from the Passwork version 050219 system via Syslog.

Penta Security WAPPLES

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Positive Technologies ISIM

[OOTB] PTsecurity ISIM

regexp

Designed for processing events from the PT Industrial Security Incident Manager system.

Positive Technologies Sandbox

[OOTB] PTsecurity Sandbox

regexp

Designed for processing events of the PT Sandbox system.

Positive Technologies Web Application Firewall

[OOTB] PTsecurity WAF

Syslog

Designed for processing events from the PTsecurity Web Application Firewall system.

Postfix

[OOTB] Postfix syslog

regexp

The [OOTB] Postfix package contains a set of resources for processing Postfix 3.6 events. It supports processing syslog events received over TCP. The package is available for KUMA 3.0 and newer versions.

PostgreSQL pgAudit

[OOTB] PostgreSQL pgAudit Syslog

Syslog

Designed for processing events of the pgAudit audit plug-in for the PostgreSQL database received via Syslog.

PowerDNS

[OOTB] PowerDNS syslog

Syslog

Designed for processing events of PowerDNS Authoritative Server 4.5 received via Syslog.

Proofpoint Insider Threat Management

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Proxmox

[OOTB] Proxmox file

regexp

Designed for processing events of the Proxmox system version 7.2-3 stored in a file. The normalizer supports processing of events in access and pveam logs.

PT NAD

[OOTB] PT NAD json

json

Designed for processing events coming from PT NAD in json format. This normalizer supports events from PT NAD version 11.1, 11.0.

QEMU - hypervisor logs

[OOTB] QEMU - Hypervisor file

regexp

Designed for processing events of the QEMU hypervisor stored in a file. QEMU 6.2.0 and Libvirt 8.0.0 are supported.

QEMU - virtual machine logs

[OOTB] QEMU - Virtual Machine file

regexp

Designed for processing events from logs of virtual machines of the QEMU hypervisor version 6.2.0, stored in a file.

Radware DefensePro AntiDDoS

[OOTB] Radware DefensePro AntiDDoS

Syslog

Designed for processing events from the DDOS Mitigator protection system received via Syslog.

Reak Soft Blitz Identity Provider

[OOTB] Reak Soft Blitz Identity Provider file

regexp

Designed for processing events of the Reak Soft Blitz Identity Provider system version 5.16, stored in a file.

RedCheck Desktop

[OOTB] RedCheck Desktop file

regexp

Designed for processing logs of the RedCheck Desktop 2.6 system stored in a file.

RedCheck WEB

[OOTB] RedCheck WEB file

regexp

Designed for processing logs of the RedCheck Web 2.6 system stored in files.

RED SOFT RED ADM

[OOTB] RED SOFT RED ADM syslog

regexp

Designed for processing events received from the RED ADM system (RED ADM: Industrial edition 1.1) via syslog.

The normalizer supports processing:

- Management subsystem events

- Controller events

ReversingLabs N1000 Appliance

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Rubicon Communications pfSense

[OOTB] pfSense Syslog

Syslog

Designed for processing events from the pfSense firewall received via Syslog.

Rubicon Communications pfSense

[OOTB] pfSense w/o hostname

Syslog

Designed for processing events from the pfSense firewall. The Syslog header of these events does not contain a hostname.

SailPoint IdentityIQ

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

SecurityCode Continent 4

[OOTB] SecurityCode Continent 4 syslog

regexp

Designed for processing events of the SecurityCode Continent system version 4 received via syslog.

Sendmail

[OOTB] Sendmail syslog

Syslog

Designed for processing events of Sendmail version 8.15.2 received via syslog.

SentinelOne

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Skype for Business

[OOTB] Microsoft Products for KUMA 3

xml

Designed for processing some of the events from the Skype for Business system log (the Lync Server log).

Snort

[OOTB] Snort 3 json file

json

Designed for processing events of Snort version 3 in JSON format.

Sonicwall TZ

[OOTB] Sonicwall TZ Firewall

Syslog

Designed for processing events received via Syslog from the SonicWall TZ firewall.

SolarWinds DameWare MRC

[OOTB] SolarWinds DameWare MRC xml

xml

This normalizer supports processing some of the DameWare Mini Remote Control (MRC) 7.5 events stored in the Windows Application log. The normalizer processes events generated by the "dwmrcs" provider.

Sophos Firewall

[OOTB] Sophos Firewall syslog

regexp

Designed for processing events received from Sophos Firewall 20 via syslog.

Sophos XG

[OOTB] Sophos XG

regexp

Designed for processing events from the Sophos XG firewall.

Squid

[OOTB] Squid access Syslog

Syslog

Designed for processing events of the Squid proxy server received via the Syslog protocol.

Squid

[OOTB] Squid access.log file

regexp

Designed for processing events of the Squid proxy server. The event source is the access.log log.

S-Terra VPN Gate

[OOTB] S-Terra

Syslog

Designed for processing events from S-Terra VPN Gate devices.

Suricata

[OOTB] Suricata json file

json

This package contains a normalizer for Suricata 7.0.1 events stored in a JSON file.

The normalizer supports processing the following event types: flow, anomaly, alert, dns, http, ssl, tls, ftp, ftp_data, smb, rdp, pgsql, modbus, quic, dhcp, bittorrent_dht, rfb.

ThreatConnect Threat Intelligence Platform

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

ThreatQuotient

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Tionix Cloud Platform

[OOTB] Tionix Cloud Platform syslog

Syslog

Designed for processing events of the Tionix Cloud Platform system version 2.9 received via syslog. The normalizer makes a partial selection of event data. The normalizer is available in KUMA 3.0 and later.

Tionix VDI

[OOTB] Tionix VDI file

regexp

This normalizer supports processing some of the Tionix VDI system (version 2.8) events stored in the tionix_lntmov.log file.

TrapX DeceptionGrid

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Trend Micro Control Manager

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Trend Micro Deep Security

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Trend Micro NGFW

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Trustwave Application Security DbProtect

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Unbound

[OOTB] Unbound Syslog

Syslog

Designed for processing events from the Unbound DNS server received via Syslog.

UserGate

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format received from the UserGate system via Syslog.

Varonis DatAdvantage

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Veriato 360

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

ViPNet TIAS

[OOTB] Vipnet TIAS syslog

Syslog

Designed for processing events of ViPNet TIAS 3.8 received via Syslog.

VMware ESXi

[OOTB] VMware ESXi syslog

regexp

Designed for processing VMware ESXi events (support for a limited number of events from ESXi versions 5.5, 6.0, 6.5, 7.0) received via Syslog.

VMWare Horizon

[OOTB] VMware Horizon - Syslog

Syslog

Designed for processing events received from the VMware Horizon 2106 system via Syslog.

VMware Carbon Black EDR

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Vormetric Data Security Manager

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Votiro Disarmer for Windows

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Wallix AdminBastion

[OOTB] Wallix AdminBastion syslog

regexp

Designed for processing events received from the Wallix AdminBastion system via Syslog.

WatchGuard - Firebox

[OOTB] WatchGuard Firebox

Syslog

Designed for processing WatchGuard Firebox events received via Syslog.

Webroot BrightCloud

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Windchill FRACAS

[OOTB] PTC Winchill Fracas

regexp

Designed for processing events of the Windchill FRACAS failure registration system.

Yandex Browser corporate

[OOTB] Yandex Browser

json

Designed for processing events received from the corporate version of Yandex Browser 23 or 24.4.

Yandex Cloud

[OOTB] Yandex Cloud

regexp

Designed for processing part of Yandex Cloud audit events. The normalizer supports processing audit log events of the configuration level: IAM (Yandex Identity and Access Management), Compute (Yandex Compute Cloud), Network (Yandex Virtual Private Cloud), Storage (Yandex Object Storage), Resourcemanager (Yandex Resource Manager).

Zabbix

[OOTB] Zabbix SQL

sql

Designed for processing events of Zabbix 6.4.

Zecurion DLP

[OOTB] Zecurion DLP syslog

regexp

Designed for processing events of the Zecurion DLP system version 12.0 received via syslog.

ZEEK IDS

[OOTB] ZEEK IDS json file

json

Designed for processing logs of the ZEEK IDS system in JSON format. The normalizer supports events from ZEEK IDS version 1.8.

Zettaset BDEncrypt

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

Zscaler Nanolog Streaming Service (NSS)

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format.

IT-Bastion – SKDPU

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format received from the IT-Bastion SKDPU system via Syslog.

A-Real Internet Control Server (ICS)

[OOTB] A-real IKS syslog

regexp

Designed for processing events of the A-Real Internet Control Server (ICS) system received via Syslog. The normalizer supports events from A-Real ICS version 7.0 and later.

Apache web server

[OOTB] Apache HTTP Server file

regexp

Designed for processing Apache HTTP Server 2.4 events stored in a file. The normalizer supports processing of events from the Application log in the Common or Combined Log formats, as well as the Error log.

Expected format of the Error log events:

"[%t] [%-m:%l] [pid %P:tid %T] [server\ %v] [client\ %a] %E: %M;\ referer\ %-{Referer}i"

Apache web server

[OOTB] Apache HTTP Server syslog

Syslog

Designed for processing events of the Apache HTTP Server received via syslog. The normalizer supports processing of Apache HTTP Server 2.4 events from the Access log in the Common or Combined Log format, as well as the Error log.

Expected format of the Error log events:

"[%t] [%-m:%l] [pid %P:tid %T] [server\ %v] [client\ %a] %E: %M;\ referer\ %-{Referer}i"

Lighttpd web server

[OOTB] Lighttpd syslog

Syslog

Designed for processing Access events of the Lighttpd system received via syslog. The normalizer supports processing of Lighttpd version 1.4 events.

Expected format of Access log events:

$remote_addr $http_request_host_name $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"

IVK Kolchuga-K

[OOTB] Kolchuga-K Syslog

Syslog

Designed for processing events from the IVK Kolchuga-K system, version LKNV.466217.002, via Syslog.

infotecs ViPNet IDS

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format received from the infotecs ViPNet IDS system via Syslog.

infotecs ViPNet Coordinator

[OOTB] VipNet Coordinator Syslog

Syslog

Designed for processing events from the ViPNet Coordinator system received via Syslog.

Kod Bezopasnosti — Continent

[OOTB][regexp] Continent IPS/IDS & TLS

regexp

Designed for processing events of the Continent IPS/IDS device log.

Kod Bezopasnosti — Continent

[OOTB] Continent SQL

sql

Designed for getting events of the Continent system from the database.

Kod Bezopasnosti SecretNet 7

[OOTB] SecretNet SQL

sql

Designed for processing events received by the connector from the database of the SecretNet system.

Confident - Dallas Lock

[OOTB] Confident Dallas Lock

regexp

Designed for processing events from the Dallas Lock 8 information protection system.

CryptoPro NGate

[OOTB] Ngate Syslog

Syslog

Designed for processing events received from the CryptoPro NGate system via Syslog.

H3C (Huawei-3Com) routers

[OOTB] H3C Routers syslog

regexp

Designed for processing some types of events received from H3C (Huawei-3Com) SR6600 network devices (Comware 7 firmware) via Syslog. The normalizer supports the "standard" (RFC 3164-compliant) event format.

NT Monitoring and Analytics

[OOTB] Syslog-CEF

Syslog

Designed for processing events in the CEF format received from the NT Monitoring and Analytics system via Syslog.

BlueCoat proxy server

[OOTB] BlueCoat Proxy v0.2

regexp

Designed to process BlueCoat proxy server events. The event source is the BlueCoat proxy server event log.

SKDPU NT Access Gateway

[OOTB] Bastion SKDPU-GW 

Syslog

Designed for processing events of the SKDPU NT Access gateway system received via Syslog.

Solar Dozor

[OOTB] Solar Dozor Syslog

Syslog

Designed for processing events received from the Solar Dozor system version 7.9 via Syslog. The normalizer supports custom format events and does not support CEF format events.

-

[OOTB] Syslog header

Syslog

Designed for processing events received via Syslog. The normalizer parses the header of the Syslog event; the message field of the event is not parsed. If necessary, you can parse the message field using other normalizers.

Page top
[Topic 255782]

Aggregation rules

Aggregation rules let you combine repetitive events of the same type and replace them with one common event. Aggregation rules support fields of the standard KUMA event schema as well as fields of the extended event schema. In this way, you can reduce the number of similar events sent to the storage and/or the correlator, lower the workload on services, and conserve data storage space and licensing quota (EPS). An aggregation event is created when the time threshold or the event-count threshold is reached, whichever occurs first.

For aggregation rules, you can configure a filter and apply it only to events that match the specified conditions.

You can configure aggregation rules under Resources → Aggregation rules, and then select the created aggregation rule from the drop-down list in the collector settings. You can also configure aggregation rules directly in collector settings. Available aggregation rule settings are listed in the table below.

Available aggregation rule settings

Setting

Description

Name

Unique name of the resource. Maximum length of the name: 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

Threshold

Threshold on the number of events. After accumulating the specified number of events with identical fields, the collector creates an aggregation event and begins accumulating events for the next aggregated event. The default value is 100.

Triggered rule lifetime

Threshold on time in seconds. When the specified time expires, the accumulation of base events stops, the collector creates an aggregated event and starts obtaining events for the next aggregated event. The default value is 60.

Required setting.

Description

Description of the resource. Maximum length of the description: 4000 Unicode characters.

Identical fields

Fields of normalized events whose values must match. For example, for network events, SourceAddress, DestinationAddress, and DestinationPort normalized event fields can be used. In the aggregation event, these normalized event fields are populated with the values of the base events.

Required setting.

Unique fields

Fields whose range of values must be preserved in the aggregated event. For example, if the DestinationPort field is specified under Unique fields and not Identical fields, the aggregated event combines base connection events for a variety of ports, and the DestinationPort field of the aggregated event contains a list of all ports to which connections were made.

Sum fields

Fields whose values are summed up during aggregation and written to the same-name fields of the aggregated event. The following special considerations are relevant to field behavior:

  • The values of fields of the "Number" and "Float" types are summed up.
  • The values of fields of the "String" type are concatenated with commas added as separators.
  • The values of fields with the types "Array of strings", "Array of numbers" and "Array of floats" are appended to the end of the array.

Filter

Conditions for determining which events must be processed by the resource. In the drop-down list, you can select an existing filter or select Create new to create a new filter.

In aggregation rules, do not use filters with the TI operand or the TIDetect, inActiveDirectoryGroup, or hasVulnerability operators. The Active Directory fields for which you can use the inActiveDirectoryGroup operator will appear during the enrichment stage (after aggregation rules are executed).

Creating a filter in resources

To create a filter:

  1. In the Filter drop-down list, select Create new.
  2. If you want to keep the filter as a separate resource, select the Save filter check box. In this case, you will be able to use the created filter in various services. This check box is cleared by default.
  3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. Maximum length of the name: 128 Unicode characters.
  4. In the Conditions settings block, specify the conditions that the events must meet:
    1. Click the Add condition button.
    2. In the Left operand and Right operand drop-down lists, specify the search parameters. Depending on the data source selected in the Right operand field, there may be fields of additional parameters for identifying the value to be passed to the filter. For example, when you select active list, you must specify the name of the active list, the entry key, and the entry key field.
    3. In the operator drop-down list, select an operator.

      Filter operators

      • =—the left operand equals the right operand.
      • <—the left operand is less than the right operand.
      • <=—the left operand is less than or equal to the right operand.
      • >—the left operand is greater than the right operand.
      • >=—the left operand is greater than or equal to the right operand.
      • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
      • contains—the left operand contains values of the right operand.
      • startsWith—the left operand starts with one of the values of the right operand.
      • endsWith—the left operand ends with one of the values of the right operand.
      • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
      • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

        The value to be checked is converted to binary and processed right to left. The bits whose positions are specified as a constant or a list are checked.

        If the value being checked is a string, then an attempt is made to convert it to integer and process it in the way described above. If the string cannot be converted to a number, the filter returns False.

      • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

        If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

      • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
      • inDictionary—checks whether the specified dictionary contains an entry defined by the key composed with the concatenated values of the selected event fields.
      • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
      • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
      • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      • inContextTable—presence of the entry in the specified context table.
      • intersect—presence in the left operand of the list items specified in the right operand.
    4. If you want the operator to be case-insensitive, select the do not match case check box. This check box does not apply to the inSubnet, inActiveList, inCategory, and inActiveDirectoryGroup operators. This check box is cleared by default.
    5. If you want to add a negative condition, select If not from the If drop-down list.

    You can add multiple conditions or a group of conditions.

  5. If you have added multiple conditions or groups of conditions, choose a selection condition (and, or, not) by clicking the AND button.
  6. If you want to add existing filters, click the Add filter button and select them from the Select filter drop-down list. You can view the nested filter settings by clicking the edit button.
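
For illustration, the hasBit operator described above can be modeled with the following Go sketch. This is an assumption-laden illustration, not KUMA's code; bit positions are assumed to be 0-based, counted from the least significant bit:

```go
package main

import (
	"fmt"
	"strconv"
)

// hasBit reports whether the value (a number, or a string that can be
// parsed as one) has a 1-bit at every listed position. Positions are
// assumed to be 0-based, counted from the least significant bit.
// A string that cannot be parsed as an integer yields false, matching
// the documented behavior.
func hasBit(value string, positions []uint) bool {
	n, err := strconv.ParseInt(value, 10, 64)
	if err != nil {
		return false
	}
	for _, p := range positions {
		if n>>p&1 == 0 {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(hasBit("5", []uint{0, 2})) // 5 = 101b: bits 0 and 2 are set, prints true
	fmt.Println(hasBit("5", []uint{1}))    // bit 1 of 101b is 0, prints false
	fmt.Println(hasBit("abc", []uint{0}))  // not a number: prints false
}
```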

The KUMA distribution kit includes aggregation rules listed in the table below.

Predefined aggregation rules

Aggregation rule name

Description

[OOTB] Netflow 9

The rule is triggered after 100 events or 10 seconds.

Events are aggregated by the following fields:

  • DestinationAddress
  • DestinationPort
  • SourceAddress
  • TransportProtocol
  • DeviceVendor
  • DeviceProduct

The DeviceCustomString1 and BytesIn fields are summed up.

Page top
[Topic 217722]

Enrichment rules

Event enrichment involves adding information to events that can be used to identify and investigate an incident.

Enrichment rules let you add supplementary information to event fields by transforming data that is already present in the fields, or by querying data from external systems. For example, suppose that a user name is recorded in the event. You can use an enrichment rule to add information about the department, position, and manager of this user to the event fields.

Enrichment rules can be used in the following KUMA services and features:

  • Collector. In the collector, you can create an enrichment rule, and it becomes a resource that you can reuse in other services. You can also link an enrichment rule created as a standalone resource.
  • Correlator. In the correlator, you can create an enrichment rule, and it becomes a resource that you can reuse in other services. You can also link an enrichment rule created as a standalone resource.
  • Normalizer. In the normalizer, you can only create an enrichment rule linked to that normalizer. Such a rule will not be available as a standalone resource for reuse in other services.

Available enrichment rule settings are listed in the table below.

Basic settings tab

Setting

Description

Name

Required setting.

Unique name of the resource. Must contain 1 to 128 Unicode characters.

Tenant

Required setting.

The name of the tenant that owns the resource.

Source kind

Required setting.

Drop-down list for selecting the type of enrichment. Depending on the selected type, you may see the following additional settings:

  • constant

    This type of enrichment is used when a constant needs to be added to an event field. Available enrichment type settings are listed in the table below.

    Available enrichment type settings

    Setting

    Description

    Constant

    The value to be added to the event field. Maximum length of the value: 255 Unicode characters. If you leave this field blank, the existing event field value is removed.

    Target field

    The KUMA event field that you want to populate with the data.

    If you are using the event enrichment functions for extended schema fields of "String", "Number", or "Float" type with a constant, the constant is added to the field.

    If you are using the event enrichment functions for extended schema fields of "Array of strings", "Array of numbers", or "Array of floats" type with a constant, the constant is added to the elements of the array.

  • dictionary

    This type of enrichment is used if you need to add a value from the dictionary of the Dictionary type. Available enrichment type settings are listed in the table below.

    Available enrichment type settings

    Setting

    Description

    Dictionary name

    The dictionary from which the values are to be taken.

    Key fields

    Event fields whose values are to be used for selecting a dictionary entry. To add an event field, click Add field. You can add multiple event fields.

    If you are using event enrichment with the dictionary type selected as the Source kind setting, and an array field is specified in the Key fields setting, when an array is passed as the dictionary key, the array is serialized into a string in accordance with the rules of serializing a single value in the TSV format.

    Example: The Key fields setting of the enrichment uses the SA.StringArrayOne extended schema field. The SA.StringArrayOne extended schema field contains the values "a", "b", "c". The following value is passed to the dictionary as the key: ['a','b','c'].

    If the Key fields setting uses an array extended schema field and a regular event schema field, the field values are separated by the "|" character when the dictionary is queried.

    Example: The Key fields setting uses the SA.StringArrayOne extended schema field and the Code string field. The SA.StringArrayOne extended schema field contains the values "a", "b", "c", and the Code string field contains the myCode sequence of characters. The following value is passed to the dictionary as the key: ['a','b','c']|myCode.
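
The key construction from the two examples above can be sketched as follows. This is an illustration of the documented behavior, not KUMA's implementation; the serialization shown is simplified to the ['a','b','c'] form used in the examples:

```go
package main

import (
	"fmt"
	"strings"
)

// dictKey builds the dictionary lookup key from the values of the
// Key fields. An array field is serialized as a single value
// (in the ['a','b','c'] form from the examples above); multiple key
// fields are joined with the "|" character.
func dictKey(fields ...interface{}) string {
	parts := make([]string, 0, len(fields))
	for _, f := range fields {
		switch v := f.(type) {
		case []string:
			quoted := make([]string, len(v))
			for i, s := range v {
				quoted[i] = "'" + s + "'"
			}
			parts = append(parts, "["+strings.Join(quoted, ",")+"]")
		case string:
			parts = append(parts, v)
		}
	}
	return strings.Join(parts, "|")
}

func main() {
	fmt.Println(dictKey([]string{"a", "b", "c"}))           // ['a','b','c']
	fmt.Println(dictKey([]string{"a", "b", "c"}, "myCode")) // ['a','b','c']|myCode
}
```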

  • table

    This type of enrichment is used if you need to add a value from the dictionary of the Table type.

    When this enrichment type is selected in the Dictionary name drop-down list, select the dictionary for providing the values. In the Key fields group of settings, click the Add field button to select the event fields whose values are used for dictionary entry selection.

    In the Mapping table, configure the dictionary fields to provide data and the event fields to receive data:

    • In the Dictionary field column, select the dictionary field. The available fields depend on the selected dictionary resource.
    • In the KUMA field column, select the event field to which the value is written. For some of the selected fields (*custom* and *flex*), in the Label column, you can specify a name for the data written to them.

    New table rows can be added by clicking the Add new element button. Rows can be deleted by clicking the cross button.

  • event

    This type of enrichment is used when you need to write a value from another event field to the current event field. Settings of this type of enrichment:

    • In the Target field drop-down list, select the KUMA event field to which you want to write the data.
    • In the Source field drop-down list, select the event field whose value will be written to the target field.
    • In the Conversion settings block, you can create rules for modifying the original data before it is written to the KUMA event fields. The conversion type can be selected from the drop-down list. You can use the Add conversion and Delete buttons to add or delete a conversion, respectively. The order of conversions is important.

      Available conversions

      Conversions are modifications that are applied to a value before it is written to the event field. You can select one of the following conversion types from the drop-down list:

      • entropy—used to convert the value of the source field using the information entropy calculation function and place the result in a target field of the float type. The result of the conversion is a number. Calculating the information entropy makes it possible to detect, for example, DNS tunnels or compromised passwords, such as when a user enters the password instead of the login and the password is logged in plain text.
      • lower—used to make all characters of the value lowercase.
      • upper—used to make all characters of the value uppercase.
      • regexp—used to convert a value using a specified RE2 regular expression. When you select this type of conversion, a field is displayed in which you must specify the RE2 regular expression.
      • substring—used to extract characters in a specified range of positions. When you select this type of conversion, the Start and End fields are displayed, in which you must specify the range of positions.
      • replace—used to replace a specified character sequence with another character sequence. When you select this type of conversion, the following fields are displayed:
        • Replace chars—the sequence of characters to be replaced.
        • With chars—the character sequence to be used instead of the character sequence being replaced.
      • trim—removes the specified characters from the beginning and from the end of the event field value. When you select this type of conversion, the Chars field is displayed in which you must specify the characters. For example, if a trim conversion with the Micromon value is applied to Microsoft-Windows-Sysmon, the new value is soft-Windows-Sys.
      • append—appends the specified characters to the end of the event field value. When you select this type of conversion, the Constant field is displayed in which you must specify the characters.
      • prepend—prepends the specified characters to the beginning of the event field value. When you select this type of conversion, the Constant field is displayed in which you must specify the characters.
      • replace with regexp—used to replace RE2 regular expression matches with the specified character sequence. When you select this type of conversion, the following fields are displayed:
        • Expression—the RE2 regular expression whose matches you want to replace.
        • With chars—the character sequence to be used instead of the matched sequence.
      • Converting encoded strings to text:
        • decodeHexString—used to convert a HEX string to text.
        • decodeBase64String—used to convert a Base64 string to text.
        • decodeBase64URLString—used to convert a Base64url string to text.

        When converting a corrupted string or if a conversion error occurs, corrupted data may be written to the event field.

        During event enrichment, if the length of the encoded string exceeds the size of the field of the normalized event, the string is truncated and is not decoded.

        If the length of the decoded string exceeds the size of the event field into which the decoded value is to be written, the string is truncated to fit the size of the event field.

      Conversions when using the extended event schema

      Whether or not a conversion can be used depends on the type of extended event schema field being used:

      • For an additional field of the "String" type, all types of conversions are available.
      • For fields of the "Number" and "Float" types, the following types of conversions are available: regexp, substring, replace, trim, append, prepend, replaceWithRegexp, decodeHexString, decodeBase64String, and decodeBase64URLString.
      • For fields of "Array of strings", "Array of numbers", and "Array of floats" types, the following types of conversions are available: append and prepend.
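
For illustration, two of the conversions above can be sketched in Go. This is a simplified sketch, not KUMA's implementation: the entropy conversion is modeled as Shannon entropy over the characters of the value, and the trim conversion reproduces the Microsoft-Windows-Sysmon example:

```go
package main

import (
	"fmt"
	"math"
	"strings"
)

// entropy returns the Shannon information entropy of the string's
// characters, in bits. Low entropy is typical of plain text; unusually
// high entropy can indicate encoded data such as DNS-tunnel payloads.
func entropy(s string) float64 {
	counts := map[rune]float64{}
	for _, r := range s {
		counts[r]++
	}
	n := float64(len([]rune(s)))
	var h float64
	for _, c := range counts {
		p := c / n
		h -= p * math.Log2(p)
	}
	return h
}

// trim removes any of the characters in cutset from both ends of s,
// like the trim conversion described above.
func trim(s, cutset string) string {
	return strings.Trim(s, cutset)
}

func main() {
	fmt.Printf("%.2f\n", entropy("aaaa")) // 0.00: a single repeated character
	fmt.Println(trim("Microsoft-Windows-Sysmon", "Micromon")) // soft-Windows-Sys
}
```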

       

  • template

    This type of enrichment is used when you need to write a value obtained by processing Go templates into the event field. We recommend making sure that the resulting value fits the size of the target field. Available enrichment type settings are listed in the table below.

    Available enrichment type settings

    Setting

    Description

    Template

    The Go template. Event field names are passed in the {{.EventField}} format, where EventField is the name of the event field from which the value must be passed to the script, for example, {{.DestinationAddress}} attacked from {{.SourceAddress}}.

    Target field

    The KUMA event field that you want to populate with the data.

    If template is selected as the Source kind, the target field has the "String" type, and the source field is an extended event schema field containing an array of strings, you can use one of the following templates:

    • {{.SA.StringArrayOne}}
    • {{- range $index, $element := .SA.StringArrayOne -}}

      {{- if $index}}, {{end}}"{{$element}}"{{- end -}}

    To convert the data in an array field in a template into the TSV format, use the toString function, for example:

    template {{toString .SA.StringArray}}
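
The templates above use standard Go text/template syntax. The following runnable sketch is an illustration only (the event structure is a minimal stand-in, not KUMA's event type); it shows the range template producing a quoted, comma-separated list from an array field:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// The range template from the example above: each array element is
// quoted, with ", " inserted before every element except the first.
const tpl = `{{- range $index, $element := .SA.StringArrayOne -}}` +
	`{{- if $index}}, {{end}}"{{$element}}"{{- end -}}`

// render applies the template to a minimal stand-in for an event whose
// SA.StringArrayOne extended schema field holds the given values.
func render(values []string) string {
	event := map[string]interface{}{
		"SA": map[string]interface{}{"StringArrayOne": values},
	}
	t := template.Must(template.New("enrich").Parse(tpl))
	var b strings.Builder
	if err := t.Execute(&b, event); err != nil {
		panic(err)
	}
	return b.String()
}

func main() {
	fmt.Println(render([]string{"a", "b", "c"})) // "a", "b", "c"
}
```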

  • dns

    This type of enrichment is used to send requests to a private network DNS server to convert IP addresses into domain names or vice versa. IP addresses are converted to DNS names only for private addresses: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, 100.64.0.0/10.

    Available settings:

    • URL—in this field, you can specify the URL of a DNS server to which you want to send requests. You can use the Add URL button to specify multiple URLs.
    • RPS—maximum number of requests sent to the server per second. The default value is 1,000.
    • Workers—maximum number of concurrent requests. The default value is 1.
    • Max tasks—maximum number of simultaneously fulfilled requests. By default, this value is equal to the number of vCPUs of the KUMA Core server.
    • Cache TTL—the lifetime of the values stored in the cache. The default value is 60.
    • Cache disabled—you can use this drop-down list to enable or disable caching. Caching is enabled by default.
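
The four private ranges listed above can be checked with the Go standard library. This is an illustration of the documented behavior, not KUMA's code:

```go
package main

import (
	"fmt"
	"net"
)

// privateRanges are the only ranges for which the dns enrichment
// converts IP addresses to DNS names, as listed above.
var privateRanges = func() []*net.IPNet {
	var nets []*net.IPNet
	for _, cidr := range []string{
		"10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", "100.64.0.0/10",
	} {
		_, n, err := net.ParseCIDR(cidr)
		if err != nil {
			panic(err)
		}
		nets = append(nets, n)
	}
	return nets
}()

// eligibleForPTR reports whether an address falls into one of the
// ranges for which a reverse lookup would be attempted.
func eligibleForPTR(addr string) bool {
	ip := net.ParseIP(addr)
	if ip == nil {
		return false
	}
	for _, n := range privateRanges {
		if n.Contains(ip) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(eligibleForPTR("192.168.1.10")) // true
	fmt.Println(eligibleForPTR("8.8.8.8"))      // false
}
```
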
  • cybertrace

    This type of enrichment is deprecated, we recommend using cybertrace-http instead.

    This type of enrichment is used to add information from CyberTrace data streams to event fields.

    Available settings:

    • URL (required)—in this field, you can specify the URL of a CyberTrace server to which you want to send requests. The default CyberTrace port is 9999.
    • Number of connections—maximum number of connections to the CyberTrace server that can be simultaneously established by KUMA. By default, this value is equal to the number of vCPUs of the KUMA Core server.
    • RPS—maximum number of requests sent to the server per second. The default value is 1,000.
    • Timeout—amount of time to wait for a response from the CyberTrace server, in seconds. The default value is 30.
    • Maximum number of events in the enrichment queue—maximum number of events stored in the enrichment queue for re-sending. The default value is 1,000,000,000.
    • Mapping (required)—this settings block contains the mapping table for mapping KUMA event fields to CyberTrace indicator types. The KUMA field column shows the names of KUMA event fields, and the CyberTrace indicator column shows the types of CyberTrace indicators.

      Available types of CyberTrace indicators:

      • ip
      • url
      • hash

      In the mapping table, you must provide at least one string. You can use the Add row button to add a string, and can use the cross button to remove a string.

  • cybertrace-http

    This is a streaming event enrichment type that lets you send a large number of events to the CyberTrace API in a single request. It is recommended for systems with a high event rate. The cybertrace-http type outperforms the older cybertrace type, which remains available in KUMA for backward compatibility.

    Limitations:

    • The cybertrace-http enrichment type cannot be used for retroscan in KUMA.
    • If the cybertrace-http enrichment type is being used, detections are not saved in CyberTrace history in the Detections window.

    Available settings:

    • URL (required)—in this field, you can specify the URL of a CyberTrace server to which you want to send requests and the port that CyberTrace API is using. The default port is 443.
    • Secret (required) is a drop-down list in which you can select the secret which stores the credentials for the connection.
    • Timeout—amount of time to wait for a response from the CyberTrace server, in seconds. The default value is 30.
    • Key fields (required) is the list of event fields used for enriching events with data from CyberTrace.
    • Maximum number of events in the enrichment queue—maximum number of events stored in the enrichment queue for re-sending. The default value is 1,000,000,000. After reaching 1 million events received from the CyberTrace server, events stop being enriched until the number of received events is reduced to less than 500,000.
  • timezone

    This type of enrichment is used in collectors and correlators to assign a specific timezone to an event. Timezone information may be useful when searching for events that occurred at unusual times, such as nighttime.

    When this type of enrichment is selected, the required timezone must be selected from the Timezone drop-down list.

    Make sure that the required time zone is set on the server hosting the service that uses the enrichment. For example, you can use the timedatectl list-timezones command, which lists all time zones known to the server. For more details on setting time zones, refer to your operating system documentation.

    When an event is enriched, the time offset of the selected timezone relative to Coordinated Universal Time (UTC) is written to the DeviceTimeZone event field in the +-hh:mm format. For example, if you select the Asia/Yekaterinburg timezone, the value +05:00 will be written to the DeviceTimeZone field. If the enriched event already has a value in the DeviceTimeZone field, it will be overwritten.

    By default, if the timezone is not specified in the event being processed and enrichment rules by timezone are not configured, the event is assigned the timezone of the server hosting the service (collector or correlator) that processes the event. If the server time is changed, the service must be restarted.

    Permissible time formats when enriching the DeviceTimeZone field

    When processing incoming raw events in the collector, the following time formats can be automatically converted to the +-hh:mm format:

    Time format in a processed event

    Example

    +-hh:mm

    -07:00

    +-hhmm

    -0700

    +-hh

    -07

    If the date format in the DeviceTimeZone field differs from the formats listed above, the collector server timezone is written to the field when an event is enriched with timezone information. You can create custom normalization rules for non-standard time formats.
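
The conversions in the table above can be sketched as follows. This is an illustration only, not KUMA's implementation; any format the sketch rejects corresponds to the fallback described above, where the collector server timezone is written instead:

```go
package main

import (
	"fmt"
	"regexp"
)

var (
	reColon = regexp.MustCompile(`^[+-]\d{2}:\d{2}$`)    // already +-hh:mm
	reHHMM  = regexp.MustCompile(`^([+-]\d{2})(\d{2})$`) // +-hhmm
	reHH    = regexp.MustCompile(`^[+-]\d{2}$`)          // +-hh
)

// normalizeOffset converts the offset formats from the table above to
// the +-hh:mm form; it returns false for any other format.
func normalizeOffset(s string) (string, bool) {
	switch {
	case reColon.MatchString(s):
		return s, true
	case reHHMM.MatchString(s):
		m := reHHMM.FindStringSubmatch(s)
		return m[1] + ":" + m[2], true
	case reHH.MatchString(s):
		return s + ":00", true
	}
	return "", false
}

func main() {
	for _, in := range []string{"-07:00", "-0700", "-07", "PST"} {
		out, ok := normalizeOffset(in)
		fmt.Println(in, "->", out, ok)
	}
}
```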

  • geographic data

    This type of enrichment is used to add IP address geographic data to event fields. Learn more about linking IP addresses to geographic data.

    When this type is selected, in the Mapping geographic data to event fields settings block, you must specify from which event field the IP address will be read, select the required attributes of geographic data, and define the event fields in which geographic data will be written:

    1. In the Event field with IP address drop-down list, select the event field from which the IP address is read. Geographic data uploaded to KUMA is matched against this IP address.

      You can use the Add event field with IP address button to specify multiple event fields with IP addresses that require geographic data enrichment. You can delete event fields added in this way by clicking the Delete event field with IP address button.

      When the SourceAddress, DestinationAddress, and DeviceAddress event fields are selected, the Apply default mapping button becomes available. You can use this button to add preconfigured mapping pairs of geographic data attributes and event fields.

    2. For each event field you need to read the IP address from, select the type of geographic data and the event field to which the geographic data should be written.

      You can use the Add geodata attribute button to add field pairs (Geodata attribute and Event field to write to). You can also configure different types of geographic data for one IP address to be written to different event fields. To delete a field pair, click the cross button.

      • In the Geodata attribute field, select which geographic data corresponding to the read IP address should be written to the event. Available geographic data attributes: Country, Region, City, Longitude, Latitude.
      • In the Event field to write to, select the event field which the selected geographic data attribute must be written to.

      You can write identical geographic data attributes to different event fields. If you configure multiple geographic data attributes to be written to the same event field, the event will be enriched with the last mapping in the sequence.

     

     

Debug

You can use this toggle switch to enable the logging of service operations. Logging is disabled by default.

Description

Resource description: up to 4,000 Unicode characters.

Filter

Group of settings in which you can specify the conditions for identifying events that must be processed by this resource. You can select an existing filter from the drop-down list or create a new filter.

Creating a filter in resources

To create a filter:

  1. In the Filter drop-down list, select Create new.
  2. If you want to keep the filter as a separate resource, select the Save filter check box. In this case, you will be able to use the created filter in various services. This check box is cleared by default.
  3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. Maximum length of the name: 128 Unicode characters.
  4. In the Conditions settings block, specify the conditions that the events must meet:
    1. Click the Add condition button.
    2. In the Left operand and Right operand drop-down lists, specify the search parameters. Depending on the data source selected in the Right operand field, there may be fields of additional parameters for identifying the value to be passed to the filter. For example, when you select active list, you must specify the name of the active list, the entry key, and the entry key field.
    3. In the operator drop-down list, select an operator.

      Filter operators

      • =—the left operand equals the right operand.
      • <—the left operand is less than the right operand.
      • <=—the left operand is less than or equal to the right operand.
      • >—the left operand is greater than the right operand.
      • >=—the left operand is greater than or equal to the right operand.
      • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
      • contains—the left operand contains values of the right operand.
      • startsWith—the left operand starts with one of the values of the right operand.
      • endsWith—the left operand ends with one of the values of the right operand.
      • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
      • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

        The value to be checked is converted to binary and processed right to left. The bits whose positions are specified in the constant or the list are checked.

        If the value being checked is a string, an attempt is made to convert it to an integer and process it as described above. If the string cannot be converted to a number, the filter returns False.

      • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

        If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

      • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
      • inDictionary—checks whether the specified dictionary contains an entry defined by the key composed with the concatenated values of the selected event fields.
      • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
      • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
      • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      • inContextTable—presence of the entry in the specified context table.
      • intersect—presence in the left operand of the list items specified in the right operand.
    4. If you want the operator to be case-insensitive, select the do not match case check box. The selection of this check box does not apply to the InSubnet, InActiveList, InCategory or InActiveDirectoryGroup operators. This check box is cleared by default.
    5. If you want to add a negative condition, select If not from the If drop-down list.

    You can add multiple conditions or a group of conditions.

  5. If you have added multiple conditions or groups of conditions, choose a selection condition (and, or, not) by clicking the AND button.
  6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button. You can view the nested filter settings by clicking the edit-grey button.
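The hasBit operator described above can be illustrated with a short Python sketch. This is purely illustrative and is not KUMA code; the function name is made up, and the assumption that all listed bit positions must be set (rather than any one of them) is mine:

```python
def has_bit(value, positions):
    """Illustrative sketch of hasBit semantics: the value is converted to
    binary and checked right to left at the listed bit positions."""
    if isinstance(value, str):
        try:
            value = int(value)          # strings are converted to integers
        except ValueError:
            return False                # non-numeric strings never match
    # Position 0 is the rightmost bit of the binary representation.
    return all((value >> pos) & 1 for pos in positions)

# 6 is 0b110: bits 1 and 2 are set, bit 0 is not.
print(has_bit(6, [1, 2]))   # True
print(has_bit("6", [0]))    # False
print(has_bit("abc", [0]))  # False: cannot be converted to a number
```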

Predefined enrichment rules

The KUMA distribution kit includes enrichment rules listed in the table below.

Predefined enrichment rules

Enrichment rule name

Description

[OOTB] KATA alert

Used to enrich events received from KATA in the form of a hyperlink to an alert.

The hyperlink is put in the DeviceExternalId field.

Page top
[Topic 217863]

Data collection and analysis rules

Data collection and analysis rules are used to recognize events from stored data.

Data collection and analysis rules, in contrast to real-time streaming correlation, allow using the SQL language to recognize and analyze events stored in the database.

To manage the section, you need one of the following roles: General administrator, Tenant administrator, Tier 1 analyst, Tier 2 analyst.

When creating or editing data collection and analysis rules, you need to specify the settings listed in the table below.

Settings of data collection and analysis rules

Setting

Description

Name

Required setting.

Unique name of the resource. Must contain 1 to 128 Unicode characters.

Tenant

Required setting.

The name of the tenant that owns the resource.

If you have access to only one tenant, this field is filled in automatically. If you have access to multiple tenants, the name of the first tenant from your list of available tenants is inserted. You can select any tenant from this list.

Sql

Required setting.

The SQL query must contain an aggregation function with a LIMIT and/or a data grouping with a LIMIT.

You must use a LIMIT value between 1 and 10,000.

Examples of SQL queries

  • A query containing only an aggregation function:

    SELECT count(DeviceCustomFloatingPoint1) AS `Aggregate` FROM `events` WHERE Type = 1 ORDER BY Aggregate DESC LIMIT 10

  • A query containing only data grouping:

    SELECT SourceAddress, DeviceCustomFloatingPoint1 FROM `events` WHERE Type = 1 GROUP BY SourceAddress, DeviceCustomFloatingPoint1 ORDER BY DeviceCustomFloatingPoint1 DESC LIMIT 10

  • A query containing an aggregation function and data grouping:

    SELECT SourceAddress, sum(DeviceCustomFloatingPoint1) FROM `events` WHERE Type = 1 GROUP BY SourceAddress, DeviceCustomFloatingPoint1 ORDER BY DeviceCustomFloatingPoint1 DESC LIMIT 10

  • A query containing an expression using aggregation functions:

    SELECT stddevPop(DeviceCustomFloatingPoint1) + avg(DeviceCustomFloatingPoint1) AS `Aggregate` FROM `events` WHERE Type = 1 ORDER BY Aggregate DESC LIMIT 10

You can also use SQL function sets: enrich and lookup.

Query interval

Required setting.

The interval for executing the SQL query. You can specify the interval in minutes, hours, and days. The minimum interval is 1 minute.

The default timeout of the SQL query is equal to the interval that you specify in this field.

If the execution of the SQL query takes longer than the timeout, an error occurs. In this case, we recommend increasing the interval. For example, if the interval is 1 minute, and the query takes 80 seconds to execute, we recommend setting the interval to at least 90 seconds.

Tags

Optional setting.

Tags for resource search.

Depth

Optional setting.

Expression for the lower bound of the interval for searching events in the database.

To select a value from the list or to specify the depth as a relative interval, place the cursor in the field. For example, if you want to find all events from one hour ago to now, set the relative interval of now-1h.

Description

Optional setting.

Description of data collection and analysis rules.

Mapping

Settings for mapping the fields of an SQL query result to KUMA event fields:

Source field is the field from the SQL query result that you want to convert into a KUMA event.

Event field is the KUMA event field. You can select one of the values in the list by placing the mouse cursor in this field.

Label is a unique custom label for event fields that begin with DeviceCustom*.

You can add new table rows or delete table rows. To add a new table row, click Add mapping. To delete a row, select the check box next to the row and click the button.

If you do not want to fill in the fields manually, you can click the Add mapping from SQL button. The field mapping table is populated with the values of the SQL query fields, including aliases (if any). For example, if the value of an SQL query field is SourceAddress and this value is the same as the name of an event field, this value is inserted into the Event field column of the field mapping table.

Clicking the Add mapping from SQL button again does not refresh the table; instead, the fields from the SQL query are appended to it again.

You can create a data collection and analysis rule in one of the following ways:

  • In the Resources → Resources and services → Data collection and analysis rules section.
  • In the Events section.

To create a data collection and analysis rule in the Events section:

  1. Create or generate an SQL query and click the Data collection and analysis rules button.

    A new browser tab for creating a data collection and analysis rule opens, with the SQL query and Depth fields pre-filled. The field mapping table is also populated automatically if you did not use an asterisk (*) in the SQL query.

  2. Fill in the required fields.

    If necessary, you can change the value in the Query interval field.

  3. Save the settings.

The data collection and analysis rule is saved and is available in the Resources and services → Data collection and analysis rules section.

[Topic 295030]

Configuring the scheduler for a data collection and analysis rule

For a data collection and analysis rule to run, you must create a scheduler for it.

The scheduler makes SQL queries to specified storage partitions with the interval and search depth configured in the rule, and then converts the SQL query results into base events, which it then sends to the correlator.

SQL query results converted to base events are not stored in the storage.

For the scheduler to work correctly, you must configure the link between the data collection and analysis rule, the storage, and the correlators in the Resources → Data collection and analysis section.

To manage this section, you need one of the following roles: General administrator, Tenant administrator, Tier 2 analyst, Access to shared resources, Manage shared resources.

The schedulers are arranged in the table by the date of their last launch. You can sort the data in columns in ascending or descending order by clicking the Arrow_Down_Icone icon in the column heading.

Available columns of the table of schedulers:

  • Rule name is the name of the data collection and analysis rule for which you created the scheduler.
  • Tenant name is the name of the tenant to which the data collection and analysis rule belongs.
  • Status is the status of the scheduler. The following values are possible:
    • Enabled means the scheduler is running, and the data collection and analysis rule will be started in accordance with the specified schedule.
    • Disabled means the scheduler is not running.

      This is the default status of a newly created scheduler. For the scheduler to run, it must be Enabled.

  • The scheduler finished at is the last time the scheduler's data collection and analysis rule was started.
  • Rule run status is the status with which the scheduler has finished. The following values are possible:
    • Ok means the scheduler finished without errors, the rule was started.
    • Unknown means the scheduler was Enabled and its status is currently unknown. The Unknown status is displayed if you have linked storages and correlators on the corresponding tabs and Enabled the scheduler, but have not yet started it.
    • Stopped means the scheduler is stopped, the rule is not running.
    • Error means the scheduler has finished, and the rule was completed with an error.
  • Last error lists errors (if any) that occurred during the execution of the data collection and analysis rule.

Failure to send events to the configured correlator does not constitute an error.

You can use the toolbar in the upper part of the table to perform actions on schedulers.

  • Add a new scheduler. Click the Add button, and in the displayed window, select the check boxes next to the names of the data collection and analysis rules for which you want to create a scheduler.

    In this window, you can select only data collection and analysis rules that have been created previously. You cannot create a new rule.

  • Remove a scheduler. In the table of schedulers, select the check boxes next to the schedulers that you want to delete and click the Delete button. The scheduler and links are removed. The data collection and analysis rule is not removed.
  • Enable a scheduler. In the table of schedulers, select the check boxes next to the schedulers that you want to enable and click the Enable on a schedule button. The data collection and analysis rule for which this scheduler was created will be executed in accordance with the schedule configured in the settings of this rule.
  • Disable a scheduler. In the table of schedulers, select the check boxes next to the schedulers that you want to disable and click the Disable button. The data collection and analysis rule is paused, but the scheduler itself and the links are not deleted.
  • Start the scheduler. In the schedulers table, select the check boxes next to the enabled schedulers and click the Run now button. This scheduler's data collection and analysis rule is executed immediately.

To edit the scheduler, click the corresponding line in the table.

Available scheduler settings for data collection and analysis rules are described below.

General tab

On this tab you can:

  • Enable or disable the scheduler using a toggle switch.

    If the toggle switch is enabled, the data collection and analysis rule runs in accordance with the schedule configured in its settings.

  • Edit the following settings of the data collection and analysis rule:
    • Name
    • Query interval
    • Depth
    • Sql
    • Description
    • Mapping

The Linked storages tab

On this tab you need to specify the storage to which the scheduler will send SQL queries.

To specify a storage:

  1. Click the Link button in the toolbar.
  2. This opens a window; in that window, specify the name of the storage to which you want to add the link, as well as the name of the section of the selected storage.

    You can select only one storage, but multiple sections of that storage.

  3. Click Add.

The link is created and displayed in the table on the Linked storages tab.

If necessary, you can remove the links by selecting the check boxes in the relevant rows of the table and clicking the Unlink selected button.

The Linked correlators tab

On this tab, you must add correlators for handling base events.

To add a correlator:

  1. Click the Link button in the toolbar.
  2. This opens a window; in that window, hover over the Correlator field.
  3. In the displayed list of correlators, select check boxes next to the correlators you want to add.
  4. Click Add.

The correlators are added and displayed in the table on the Linked correlators tab.

If necessary, you can remove the correlators by selecting the check boxes in the relevant rows of the table and clicking the Unlink selected button.

You can also view the result of the scheduler in the Core log; to do so, you must first enable Debug mode in the Core settings. To download the log, go to the Resources → Active services section in KUMA, select the Core service, and click the Log button.

Log records with scheduler results have the datamining scheduler prefix.

[Topic 295865]

Correlation rules

Correlation rules are used to recognize specific sequences of processed events and to take certain actions after recognition, such as creating correlation events/alerts or interacting with an active list.

Correlation rules can be used in the following KUMA services and features:

  • Correlator.
  • Notification rule.
  • Links of segmentation rules.
  • Retroscan.

The available correlation rule settings depend on the selected type. Types of correlation rules:

  • standard—used to find correlations between several events. Resources of this kind can create correlation events.

    This rule type is used to detect complex correlation patterns. For simpler patterns, use other correlation rule types, which require fewer resources to operate.

  • simple—used to create correlation events if a certain event is found.
  • operational—used for operations with Active lists and context tables. This rule type cannot create correlation events.

For these resources, you can enable the display of control characters in all input fields except the Description field.

If a correlation rule is used in the correlator and an alert was created based on it, any change to the correlation rule will not result in a change to the existing alert even if the correlator service is restarted. For example, if the name of a correlation rule is changed, the name of the alert will remain the same. If you close the existing alert, a new alert will be created and it will take into account the changes made to the correlation rule.

In this section

Correlation rules of the 'standard' type

Correlation rules of the 'simple' type

Correlation rules of the 'operational' type

Variables in correlators

Adding a temporary exclusion list for a correlation rule

Predefined correlation rules

MITRE ATT&CK matrix coverage

[Topic 217783]

Correlation rules of the 'standard' type

Expand all | Collapse all

Correlation rules of the standard type are used for identifying complex patterns in processed events.

The search for patterns is conducted by using buckets.

A bucket is a data container used by correlation rule resources to determine whether a correlation event should be created. It has the following functions:

  • Group together events that were matched by the filters in the Selectors group of settings of the correlation rule resource. Events are grouped by the fields that were selected by the user in the Identical fields field.
  • Determine the moment when the correlation rule should trigger, affecting the events that are grouped in the bucket.
  • Perform the actions that are selected in the Actions group of settings.
  • Create correlation events.

Available states of the Bucket:

  • Empty—the bucket has no events. This can happen only when the bucket was just created as a result of the correlation rule triggering.
  • Partial Match—the bucket has some of the expected events (recovery events are not counted).
  • Full Match—the bucket has all of the expected events (recovery events are not counted). When this condition is achieved:
    • The Correlation rule triggers
    • Events are cleared from the bucket
    • The trigger counter of the bucket is updated
    • The state of the bucket becomes Empty
  • False Match—this state of the bucket is possible in the following cases:
    • when the Full Match state was achieved, but the join-filter returned false.
    • when the Recovery check box was selected and the recovery events were received.

    When this condition is achieved the Correlation rule does not trigger. Events are cleared from the bucket, the trigger counter is updated, and the state of the bucket becomes Empty.
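The bucket life cycle described above can be sketched in Python. This is an illustrative model under assumptions of my own: the class and method names are hypothetical, and recovery events and the join-filter (the False Match cases) are omitted for brevity:

```python
class Bucket:
    """Illustrative model of a correlation-rule bucket, not KUMA code."""
    EMPTY, PARTIAL, FULL = "Empty", "Partial Match", "Full Match"

    def __init__(self, expected_selectors):
        self.expected = set(expected_selectors)  # selectors that must trigger
        self.matched = set()
        self.events = []
        self.trigger_count = 0

    @property
    def state(self):
        if not self.events:
            return self.EMPTY
        return self.FULL if self.matched >= self.expected else self.PARTIAL

    def add(self, selector, event):
        """Register an event matched by a selector; return True if the
        correlation rule triggers as a result."""
        self.events.append(event)
        self.matched.add(selector)
        if self.state == self.FULL:
            # Full Match: the rule triggers, events are cleared from the
            # bucket, the trigger counter is updated, the state becomes Empty.
            self.trigger_count += 1
            self.events.clear()
            self.matched.clear()
            return True
        return False

b = Bucket(["login_failed", "login_success"])
b.add("login_failed", {"id": 1})    # Partial Match, rule does not trigger
b.add("login_success", {"id": 2})   # Full Match, rule triggers, bucket empties
```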

Settings for a correlation rule of the standard type are described in the following tables.

General tab

This tab lets you specify the general settings of the correlation rule.

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

Type

Correlation rule type: standard.

Required setting.

Tags

Tags for resource search.

Optional setting.

Identical fields

Event fields that must be grouped in a Bucket. The hash of the values of the selected event fields is used as the Bucket key. If one of the selectors specified on the Selectors tab is triggered, the selected event fields are copied to the correlation event.

If different selectors of the correlation rule use event fields that have different meanings in the events, do not specify such event fields in the Identical fields drop-down list.

You can specify local variables. To refer to a local variable, its name must be preceded with the $ character.
For an example of using local variables, refer to the rule provided with KUMA: R403_Access to malicious resources from a host with disabled protection or an out-of-date anti-virus database.

Required setting.

Window, sec

Bucket lifetime in seconds. The time starts counting when the bucket is created, that is, when the bucket receives its first event.

When the bucket lifetime expires, the trigger specified on the Actions → On timeout tab is triggered, and the container is deleted. Triggers specified on the Actions → On every threshold and On subsequent thresholds tabs can trigger more than once during the lifetime of the bucket.

Required setting.

Unique fields

Unique event fields to be sent to the bucket. If you specify unique event fields, only these event fields will be sent to the container. The hash of values of the selected fields is used as the Bucket key.

You can specify local variables. To refer to a local variable, its name must be preceded with the $ character.
For an example of using local variables, refer to the rule provided with KUMA: R403_Access to malicious resources from a host with disabled protection or an out-of-date anti-virus database.

Rate limit

Maximum number of times a correlation rule can be triggered per second. The default value is 0.

If correlation rules employing complex logic for pattern detection are not triggered, this may be due to the way rule triggers are counted in KUMA. In this case, we recommend increasing the Rate limit value, for example, to 1000000.

Base events keep policy

This drop-down list lets you select base events that you want to put in the correlation event:

  • first—this option is used to store the first base event of the event collection that triggered creation of the correlation event. This value is selected by default.
  • last—this option is used to store the last base event of the event collection that triggered creation of the correlation event.
  • all—this option is used to store all base events of the event collection that triggered creation of the correlation event.

Severity

Base coefficient used to determine the importance of a correlation rule:

  • Critical
  • High
  • Medium
  • Low (default)

Order by

Event field to be used by selectors of the correlation rule to track the evolution of the situation. This can be useful, for example, if you want to configure a correlation rule to be triggered when several types of events occur in a sequence.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

MITRE techniques

Downloaded MITRE ATT&CK techniques for analyzing the security coverage status using the MITRE ATT&CK matrix.

Use unique field mapping


This toggle switch allows you to save the values of unique fields to an array and pass it to a correlation event field. If the toggle switch is enabled, an additional Unique field mapping group of settings is displayed in the lower part of the General tab, in which you can configure the mapping of the original unique fields to correlation event fields.

When processing an event using a correlation rule, field mapping takes place first, and then operations from the Actions tab are applied to the correlation event resulting from the initial mapping.

The toggle switch is turned off by default.

Optional setting.

Unique field mapping group of settings

If you need to pass values of fields listed under Unique fields to the correlation event, here you can configure the mapping of unique fields to correlation event fields. This group of settings is displayed on the General tab if the Use unique field mapping toggle switch is enabled. Values of unique fields are an array, therefore the field in the correlation event must have the appropriate type: SA, NA, FA.

You can add a mapping by clicking the Add button and selecting a field from the drop-down list in the Raw event field column. You can select fields specified in the Unique fields parameter. In the drop-down list in the Target event field column, select the correlation event field to which you want to write the array of values of the source field. You can select fields whose type matches the type of the array (SA, NA, or FA, depending on the type of the source field).

You can delete one or more mappings by selecting the check boxes next to the relevant mappings and clicking Delete.

Selectors tab

This tab is used to define the conditions that the processed events must fulfill to trigger the correlation rule. To add a selector, click the + Add selector button. You can add multiple selectors, reorder selectors, or remove selectors. To reorder selectors, use the reorder DragIcon icons. To remove a selector, click the delete cross-black icon next to it.

Each selector has a Settings tab and a Local variables tab.

The settings available on the Settings tab are described in the table below.

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Selector threshold (event count)

The number of events that must be received for the selector to trigger. The default value is 1.

Required setting.

Recovery

This toggle switch prevents the correlation rule from triggering when the selector receives the number of events specified in the Selector threshold (event count) field. This toggle switch is turned off by default.

Filter

The filter that defines criteria for identifying events that trigger the selector when received. You can select an existing filter or create a new filter. To create a new filter, select Create new.

If you want to edit the settings of an existing filter, click the pencil edit-pencil icon next to it.

How to create a filter?

To create a filter:

  1. In the Filter drop-down list, select Create new.
  2. If you want to keep the filter as a separate resource, select the Save filter check box. In this case, you will be able to use the created filter in various services. This check box is cleared by default.
  3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. Maximum length of the name: 128 Unicode characters.
  4. In the Conditions settings block, specify the conditions that the events must meet:
    1. Click the Add condition button.
    2. In the Left operand and Right operand drop-down lists, specify the search parameters. Depending on the data source selected in the Right operand field, there may be fields of additional parameters for identifying the value to be passed to the filter. For example, when you select active list, you must specify the name of the active list, the entry key, and the entry key field.
    3. In the operator drop-down list, select an operator.

      Filter operators

      • =—the left operand equals the right operand.
      • <—the left operand is less than the right operand.
      • <=—the left operand is less than or equal to the right operand.
      • >—the left operand is greater than the right operand.
      • >=—the left operand is greater than or equal to the right operand.
      • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
      • contains—the left operand contains values of the right operand.
      • startsWith—the left operand starts with one of the values of the right operand.
      • endsWith—the left operand ends with one of the values of the right operand.
      • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
      • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

        The value to be checked is converted to binary and processed right to left. The bits whose positions are specified in the constant or the list are checked.

        If the value being checked is a string, an attempt is made to convert it to an integer and process it as described above. If the string cannot be converted to a number, the filter returns False.

      • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

        If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

      • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
      • inDictionary—checks whether the specified dictionary contains an entry defined by the key composed with the concatenated values of the selected event fields.
      • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
      • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
      • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      • inContextTable—presence of the entry in the specified context table.
      • intersect—presence in the left operand of the list items specified in the right operand.
    4. If you want the operator to be case-insensitive, select the do not match case check box. The selection of this check box does not apply to the InSubnet, InActiveList, InCategory or InActiveDirectoryGroup operators. This check box is cleared by default.
    5. If you want to add a negative condition, select If not from the If drop-down list.

    You can add multiple conditions or a group of conditions.

  5. If you have added multiple conditions or groups of conditions, choose a selection condition (and, or, not) by clicking the AND button.
  6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button. You can view the nested filter settings by clicking the edit-grey button.

Filtering based on data from the Extra event field

Conditions for filters based on data from the Extra event field:

  • Condition—If.
  • Left operand—event field.
  • In this event field, you can specify one of the following values:
    • Extra field.
    • Value from the Extra field in the following format:

      Extra.<field name>

      For example, Extra.app.

      You must specify the value manually.

    • Value from the array written to the Extra field in the following format:

      Extra.<field name>.<array element>

      For example, Extra.array.0.

      The values in the array are numbered starting from 0. You must specify the value manually. To work with a value in the Extra field at a depth of 3 and lower, you must use backticks ``, for example, `Extra.lev1.lev2.lev3`.

  • Operator – =.
  • Right operand—constant.
  • Value—the value by which you need to filter events.
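The dotted-path addressing shown above can be illustrated with a small Python sketch. This is purely illustrative: the resolver function is hypothetical, and this is not how KUMA parses these paths internally:

```python
def resolve(extra, path):
    """Illustrative resolver for paths such as Extra.app or Extra.array.0."""
    value = extra
    for part in path.split(".")[1:]:     # skip the leading "Extra"
        if isinstance(value, list):
            value = value[int(part)]     # array elements are numbered from 0
        else:
            value = value[part]
    return value

extra = {"app": "nginx", "array": ["first", "second"],
         "lev1": {"lev2": {"lev3": "deep"}}}
print(resolve(extra, "Extra.app"))             # nginx
print(resolve(extra, "Extra.array.0"))         # first
print(resolve(extra, "Extra.lev1.lev2.lev3"))  # deep
```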

The order of conditions specified in the selector filter of the correlation rule is significant and affects system performance. We recommend putting the most selective condition first in the selector filter.

Consider two examples of selector filters that select successful authentication events in Microsoft Windows.

Selector filter 1:

Condition 1: DeviceProduct = Microsoft Windows.

Condition 2: DeviceEventClassID = 4624.

Selector filter 2:

Condition 1: DeviceEventClassID = 4624.

Condition 2: DeviceProduct = Microsoft Windows.

The order of conditions specified in selector filter 2 is preferable: its first condition is more specific, so most events are discarded after a single comparison, which places less load on the system.

On the Local variables tab, you can add variables that will be valid inside the correlation rule. To add a variable, click the + Add button, then specify the variable and its value. You can add multiple variables or delete variables. To delete a variable, select the check box next to it and click the Delete button.

In the selector of the correlation rule, you can use regular expressions conforming to the RE2 standard. Using regular expressions in correlation rules is computationally intensive compared to other operations. When designing correlation rules, we recommend limiting the use of regular expressions to the necessary minimum and using other available operations.

To use a regular expression, you must use the match operator. The regular expression must be placed in a constant. The use of capture groups in regular expressions is optional. For the correlation rule to trigger, the field text matched against the regexp must exactly match the regular expression.

For a primer on the syntax and examples of correlation rules that use regular expressions in their selectors, see the following rules that are provided with KUMA:

  • R105_04_Suspicious PowerShell commands. Suspected obfuscation.
  • R333_Suspicious creation of files in the autorun folder.
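The full-match requirement of the match operator can be illustrated with Python's re.fullmatch, whose semantics are close to RE2 full matching. The pattern below is a made-up example, not taken from the KUMA rules:

```python
import re

# The match operator triggers only when the WHOLE field value matches the
# regular expression, similar to re.fullmatch. The pattern is hypothetical.
pattern = r"4\d{3}"  # four-digit IDs starting with 4

print(bool(re.fullmatch(pattern, "4624")))     # True: the entire value matches
print(bool(re.fullmatch(pattern, "ID 4624")))  # False: extra leading text
print(bool(re.search(pattern, "ID 4624")))     # True: a substring search would match
```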

Actions tab

You can use this tab to configure the triggers of the correlation rule. You can configure triggers on the following tabs:

  • On first threshold triggers when the Bucket registers the first triggering of the selector during the lifetime of the Bucket.
  • On subsequent thresholds triggers when the Bucket registers the second and all subsequent triggerings of the selector during the lifetime of the Bucket.
  • On every threshold triggers every time the Bucket registers the triggering of the selector.
  • On timeout triggers when the lifetime of the Bucket ends, and is used together with a selector that has the Recovery check box selected in its settings. Thus, this trigger activates if the situation detected by the correlation rule is not resolved within the specified lifetime.

Available trigger settings are listed in the table below.

Setting

Description

Output

This check box enables the sending of correlation events for post-processing, that is, for external enrichment outside the correlation rule, for response, and to destinations. By default, this check box is cleared.

Loop to correlator

This check box enables the processing of the created correlation event by the rule chain of the current correlator. This makes hierarchical correlation possible. By default, this check box is cleared.

If the Output and Loop to correlator check boxes are selected, the correlation event is sent to post-processing first, and then to the selectors of the current correlation rule.

No alert

The check box disables the creation of alerts when the correlation rule is triggered. By default, this check box is cleared.

If you do not want to create an alert when a correlation rule is triggered, but you still want to send a correlation event to the storage, select the Output and No alert check boxes. If you select only the No alert check box, a correlation event is not saved in the storage.

Enrichment

Enrichment rules for modifying the values of correlation event fields. Enrichment rules are stored in the correlation rule where they were created. To create an enrichment rule, click the + Add enrichment button.

Available enrichment rule settings:

  • Source kind is the type of the enrichment. When you select some enrichment types, additional settings may become available that you must specify.

    Available types of enrichment:

    • constant

      This type of enrichment is used when a constant needs to be added to an event field. Available enrichment type settings are listed in the table below.

      Available enrichment type settings

      Setting

      Description

      Constant

      The value to be added to the event field. Maximum length of the value: 255 Unicode characters. If you leave this field blank, the existing event field value is removed.

      Target field

      The KUMA event field that you want to populate with the data.

      If you are using the event enrichment functions for extended schema fields of "String", "Number", or "Float" type with a constant, the constant is added to the field.

      If you are using the event enrichment functions for extended schema fields of "Array of strings", "Array of numbers", or "Array of floats" type with a constant, the constant is added to the elements of the array.

    • dictionary

      This type of enrichment is used if you need to add a value from the dictionary of the Dictionary type. Available enrichment type settings are listed in the table below.

      Available enrichment type settings

      Setting

      Description

      Dictionary name

      The dictionary from which the values are to be taken.

      Key fields

      Event fields whose values are to be used for selecting a dictionary entry. To add an event field, click Add field. You can add multiple event fields.

      If you are using event enrichment with the dictionary type selected as the Source kind setting, and an array field is specified in the Key fields setting, when an array is passed as the dictionary key, the array is serialized into a string in accordance with the rules of serializing a single value in the TSV format.

      Example: The Key fields setting of the enrichment uses the SA.StringArrayOne extended schema field. The SA.StringArrayOne extended schema field contains the values "a", "b", "c". The following values are passed to the dictionary as the key: ['a','b','c'].

      If the Key fields setting uses an array extended schema field and a regular event schema field, the field values are separated by the "|" character when the dictionary is queried.

      Example: The Key fields setting uses the SA.StringArrayOne extended schema field and the Code string field. The SA.StringArrayOne extended schema field contains the values "a", "b", "c", and the Code string field contains the myCode sequence of characters. The following values are passed to the dictionary as the key: ['a','b','c']|myCode.

    • table

      This type of enrichment is used if you need to add a value from the dictionary of the Table type.

      When this enrichment type is selected, in the Dictionary name drop-down list, select the dictionary that provides the values. In the Key fields group of settings, click the Add field button to select the event fields whose values are used for dictionary entry selection.

      In the Mapping table, configure the dictionary fields to provide data and the event fields to receive data:

      • In the Dictionary field column, select the dictionary field. The available fields depend on the selected dictionary resource.
      • In the KUMA field column, select the event field to which the value is written. For some of the selected fields (custom and flex fields), in the Label column, you can specify a name for the data written to them.

      New table rows can be added by clicking the Add new element button. Rows can be deleted by clicking the cross button.

    • event

      This type of enrichment is used when you need to write a value from another event field to the current event field. Available enrichment type settings are listed in the table below.

      Available enrichment type settings

      Setting

      Description

      Target field

      The KUMA event field that you want to populate with the data.

      Source field

      The event field whose value is written to the target field.

      Clicking the wrench icon opens the Conversion window, in which you can click Add conversion to create rules for modifying the source data before writing them to the KUMA event fields. You can reorder and delete created rules. To change the position of a rule, click the drag icon next to it. To delete a rule, click the cross icon next to it.

      Available conversions

      Conversions are modifications that are applied to a value before it is written to the event field. You can select one of the following conversion types from the drop-down list:

      • entropy—used for converting the value of the source field using the information entropy calculation function and placing the conversion result in the target field of the float type. The result of the conversion is a number. Calculating the information entropy allows detecting DNS tunnels or compromised passwords, for example, when a user enters the password instead of the login and the password gets logged in plain text.
      • lower—used to make all characters of the value lowercase.
      • upper—used to make all characters of the value uppercase.
      • regexp—used to convert a value using a specified RE2 regular expression. When you select this type of conversion, a field is displayed in which you must specify the RE2 regular expression.
      • substring—used to extract characters in a specified range of positions. When you select this type of conversion, the Start and End fields are displayed, in which you must specify the range of positions.
      • replace—used to replace a specified character sequence with another character sequence. When you select this type of conversion, the following fields are displayed:
        • Replace chars specifies the sequence of characters to be replaced.
        • With chars is the character sequence to be used instead of the character sequence being replaced.
      • trim—used to remove the specified characters from the beginning and from the end of the event field value. When you select this type of conversion, the Chars field is displayed in which you must specify the characters. For example, if a trim conversion with the Micromon value is applied to Microsoft-Windows-Sysmon, the new value is soft-Windows-Sys.
      • append—used to append the specified characters to the end of the event field value. When you select this type of conversion, the Constant field is displayed in which you must specify the characters.
      • prepend—used to prepend the specified characters to the beginning of the event field value. When you select this type of conversion, the Constant field is displayed in which you must specify the characters.
      • replace with regexp—used to replace RE2 regular expression results with the specified character sequence. When you select this type of conversion, the following fields are displayed:
        • Expression is the RE2 regular expression whose results you want to replace.
        • With chars is the character sequence to be used instead of the character sequence being replaced.
      • Converting encoded strings to text:
        • decodeHexString—used to convert a HEX string to text.
        • decodeBase64String—used to convert a Base64 string to text.
        • decodeBase64URLString—used to convert a Base64url string to text.

        When converting a corrupted string or if a conversion error occurs, corrupted data may be written to the event field.

        During event enrichment, if the length of the encoded string exceeds the size of the field of the normalized event, the string is truncated and is not decoded.

        If the length of the decoded string exceeds the size of the event field into which the decoded value is to be written, the string is truncated to fit the size of the event field.
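Two of the conversions above can be sketched in Python. The entropy formula is an assumption (character-level Shannon entropy), since the documentation does not spell it out, and the decode lines mirror decodeHexString and decodeBase64String:

```python
import base64
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Character-level Shannon entropy; assumed to approximate the entropy conversion."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A random-looking secret scores higher than a dictionary word:
print(round(shannon_entropy("password"), 2))      # 2.75
print(round(shannon_entropy("xK9#mQ2$vL7!"), 2))  # 3.58

# Decoding conversions (sample strings are hypothetical):
print(bytes.fromhex("4b554d41").decode())     # decodeHexString → "KUMA"
print(base64.b64decode("S1VNQQ==").decode())  # decodeBase64String → "KUMA"
```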

      Conversions when using the extended event schema

      Whether or not a conversion can be used depends on the type of extended event schema field being used:

      • For an additional field of the "String" type, all types of conversions are available.
      • For fields of the "Number" and "Float" types, the following types of conversions are available: regexp, substring, replace, trim, append, prepend, replaceWithRegexp, decodeHexString, decodeBase64String, and decodeBase64URLString.
      • For fields of "Array of strings", "Array of numbers", and "Array of floats" types, the following types of conversions are available: append and prepend.

       

      When using enrichment of events that have event selected as the Source kind and the extended event schema fields are used as arguments, the following special considerations apply:

      • If the source extended event schema field has the "Array of strings" type, and the target extended event schema field has the "String" type, the values are written to the target extended event schema field in TSV format.

        Example: The SA.StringArray extended event schema field contains the values "string1", "string2", "string3". An event enrichment operation is performed. The result of the event enrichment operation is written to the DeviceCustomString1 event field. The DeviceCustomString1 event field contains the values ["string1", "string2", "string3"].

      • If the source and target extended event schema fields have the "Array of strings" type, values of the source extended event schema field are added to the values of the target extended event schema field, and the "," character is used as the delimiter character.

        Example: The SA.StringArrayOne field of the extended event scheme contains the ["string1", "string2", "string3"] values, and the SA.StringArrayTwo field of the extended event scheme contains the ["string4", "string5", "string6"] values. An event enrichment operation is performed. The result of the event enrichment operation is written to the SA.StringArrayTwo field of the extended event scheme. The SA.StringArrayTwo extended event schema field contains values: ["string4", "string5", "string6", "string1", "string2", "string3"].

    • template

      This type of enrichment is used when you need to write a value obtained by processing Go templates into the event field. We recommend matching the value and the size of the field. Available enrichment type settings are listed in the table below.

      Available enrichment type settings

      Setting

      Description

      Template

      The Go template. Event field names are passed in the {{.EventField}} format, where EventField is the name of the event field whose value must be passed to the template, for example, {{.DestinationAddress}} attacked from {{.SourceAddress}}.

      Target field

      The KUMA event field that you want to populate with the data.

      If you are using enrichment of events that have template selected as the Source kind, and in which the target field has the "String" type, and the source field is an extended event schema field containing an array of strings, you can use one of the following examples for the template:

      • {{.SA.StringArrayOne}}
      • {{- range $index, $element := .SA.StringArrayOne -}}{{- if $index}}, {{end}}"{{$element}}"{{- end -}}

      To convert the data in an array field in a template into the TSV format, use the toString function, for example:

      {{toString .SA.StringArray}}

    Required setting.

  • The Debug toggle switch enables resource logging. This toggle switch is turned off by default.
  • Tags

You can create multiple enrichment rules, reorder enrichment rules, or delete enrichment rules. To reorder enrichment rules, use the drag icons. To delete an enrichment rule, click the cross icon next to it.
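As a sketch of what the range-based template shown earlier renders for a three-element array field (the values are hypothetical), each element is quoted and the elements are joined with commas:

```python
# Hedged sketch: the {{- range -}} template quotes each array element and
# inserts ", " between elements; equivalent to the following join.
values = ["string1", "string2", "string3"]
rendered = ", ".join(f'"{v}"' for v in values)
print(rendered)  # "string1", "string2", "string3"
```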

Categorization

Categorization rules for assets involved in the event. Using categorization rules, you can link and unlink only reactive categories to and from assets. To create a categorization rule, click the + Add categorization button.

Available categorization rule settings:

  • Action is the operation applied to the category:
    • Add: Link the category to the asset.
    • Delete: Unlink the category from the asset.

    Required setting.

  • Event field is the field of the event that contains the asset to which the operation will be applied.

    Required setting.

  • Category ID is the category to which the operation will be applied.

    Required setting.

You can create multiple categorization rules, reorder categorization rules, or delete categorization rules. To reorder categorization rules, use the drag icons. To delete a categorization rule, click the cross icon next to it.

Active lists update

Operations with active lists. To create an operation with an active list, click the + Add active list action button.

Available parameters of an active list operation:

  • Name specifies the active list to which the operation is applied. If you want to edit the settings of an active list, click the pencil icon next to it.

    Required setting.

  • Operation is the operation that is applied to the active list:
    • Sum—add a constant, the value of a correlation event field, or the value of a local variable to the value of the active list.
    • Set—write the values of the selected fields of the correlation event into the active list by creating a new or updating an existing active list entry. When the active list entry is updated, the data is merged and only the specified fields are overwritten.
    • Get—get the active list entry and write the values of the selected fields into the correlation event.
    • Delete—delete the active list entry.

    Required setting.

  • Key fields are event fields that are used to create an active list entry. The specified event fields are also used as the key of the active list entry.

    The active list entry key depends on the available event fields and does not depend on the order in which they are displayed in the KUMA Console.

    Required setting.

  • Mapping: Rules for mapping active list fields to event fields. You can use mapping rules if in the Operation drop-down list, you selected Get or Set. To create a mapping rule, click the +Add button.

    Available mapping rule settings:

    • Active list field is the active list field that is mapped to the event field. The field must not contain special characters or numbers only.
    • KUMA field is the event field to which the active list field is mapped.
    • Constant is a constant that is assigned to the active list field. You need to specify a constant if in the Operation drop-down list, you selected Set.

    You can create multiple mapping rules or delete mapping rules. To delete a mapping rule, select the check box next to it and click Delete.

You can create multiple operations with active lists, reorder operations with active lists, or delete operations with active lists. To reorder operations with active lists, use the drag icons. To delete an operation with an active list, click the cross icon next to it.

Updating context tables

Operations with context tables. To create an operation with a context table, click the + Add context table action button.

Available parameters of a context table operation:

  • Name specifies the context table to which the operation is applied. If you want to edit the settings of a context table, click the pencil icon next to it.

    Required setting.

  • Operation is the operation that is applied to the context table:
    • Sum—add a constant, the value of a correlation event field, or the value of a local variable to the value of the context table. This operation is used only for fields of Number and Float types.
    • Set—write the values of the selected fields of the correlation event into the context table by creating a new or updating an existing context table entry. When the context table entry is updated, the data is merged and only the specified fields are overwritten.
    • Merge—append the value of a correlation event field, local variable, or constant to the current value of a field of the context table.
    • Get—get the fields of the context table and write the values of the specified fields into the correlation event. Table fields of the boolean type and lists of boolean values are excluded from mapping because the event does not contain boolean fields.
    • Delete—delete the context table entry.

    Required setting.

  • Mapping: Rules for mapping context table fields to event fields or variables. You can use mapping rules if in the Operation drop-down list, you selected something other than Delete. To create a mapping rule, click the +Add button.

    Available mapping rule settings:

    • Context table field is the context table field that is mapped to an event field. You cannot specify a context table field that is already used in a mapping. You cannot specify tabulation characters, special characters, or numbers only. The maximum length of a context table field name is 128 characters. A context table field name cannot begin with an underscore.
    • KUMA field is the event field or local variable to which the context table field is mapped.
    • Constant is a constant that is assigned to the context table field. You need to specify a constant if in the Operation drop-down list, you selected Set, Merge, or Sum. The maximum length of a constant is 1024 characters.

    You can create multiple mapping rules or delete mapping rules. To delete a mapping rule, select the check box next to it and click Delete.

You can create multiple operations with context tables, reorder operations with context tables, or delete operations with context tables. To reorder operations with context tables, use the drag icons. To delete an operation with a context table, click the cross icon next to it.

Correlators tab

This tab is displayed only when you edit the settings of the created correlation rule; on this tab, you can link correlators to the correlation rule.

To add correlators, click the + Add button, specify one or more correlators in the displayed window, and click OK. The correlation rule is linked to the specified correlators and added to the end of the execution queue in the correlator settings. If you want to change the position of a correlation rule in the execution queue, go to the Resources → Correlators section, click the correlator, and in the displayed window, go to the Correlation section, select the check box next to the correlation rule, and change the position of the correlation rule by clicking the Move up and Move down buttons.

You can add multiple correlators or delete correlators. To delete a correlator, select the check box next to it and click Delete.

Page top
[Topic 221197]

Correlation rules of the 'simple' type

Expand all | Collapse all

Correlation rules of the simple type are used to define simple sequences of events. Settings for a correlation rule of the simple type are described in the following tables.

General tab

This tab lets you specify the general settings of the correlation rule.

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

Type

Correlation rule type: simple.

Required setting.

Tags

Tags for resource search.

Optional setting.

Propagated fields

Event fields by which events are selected. If a selector specified on the Selectors tab is triggered, the selected event fields are copied to the correlation event.

Rate limit

Maximum number of times a correlation rule can be triggered per second. The default value is 0.

If correlation rules employing complex logic for pattern detection are not triggered, this may be due to the way rule triggers are counted in KUMA. In this case, we recommend increasing the Rate limit, for example, to 1000000.

Severity

Base coefficient used to determine the importance of a correlation rule:

  • Critical
  • High
  • Medium
  • Low (default)

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

MITRE techniques

Downloaded MITRE ATT&CK techniques for analyzing the security coverage status using the MITRE ATT&CK matrix.

Selectors tab

This tab is used to define the conditions that the processed events must fulfill to trigger the correlation rule. A selector has a Settings tab and a Local variables tab.

The settings available on the Settings tab are described in the table below.

Setting

Description

Filter

The filter that defines criteria for identifying events that trigger the selector when received. You can select an existing filter or create a new filter. To create a new filter, select Create new.

If you want to edit the settings of an existing filter, click the pencil icon next to it.

How to create a filter?

To create a filter:

  1. In the Filter drop-down list, select Create new.
  2. If you want to keep the filter as a separate resource, select the Save filter check box. In this case, you will be able to use the created filter in various services. This check box is cleared by default.
  3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. Maximum length of the name: 128 Unicode characters.
  4. In the Conditions settings block, specify the conditions that the events must meet:
    1. Click the Add condition button.
    2. In the Left operand and Right operand drop-down lists, specify the search parameters. Depending on the data source selected in the Right operand field, there may be fields of additional parameters for identifying the value to be passed to the filter. For example, when you select active list, you must specify the name of the active list, the entry key, and the entry key field.
    3. In the operator drop-down list, select an operator.

      Filter operators

      • =—the left operand equals the right operand.
      • <—the left operand is less than the right operand.
      • <=—the left operand is less than or equal to the right operand.
      • >—the left operand is greater than the right operand.
      • >=—the left operand is greater than or equal to the right operand.
      • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
      • contains—the left operand contains values of the right operand.
      • startsWith—the left operand starts with one of the values of the right operand.
      • endsWith—the left operand ends with one of the values of the right operand.
      • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
      • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

        The value to be checked is converted to binary and processed right to left. The bits whose positions are specified as a constant or a list are checked.

        If the value being checked is a string, then an attempt is made to convert it to integer and process it in the way described above. If the string cannot be converted to a number, the filter returns False.

      • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

        If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

      • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
      • inDictionary—checks whether the specified dictionary contains an entry defined by the key composed with the concatenated values of the selected event fields.
      • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
      • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
      • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      • inContextTable—presence of the entry in the specified context table.
      • intersect—presence in the left operand of the list items specified in the right operand.
    4. If you want the operator to be case-insensitive, select the do not match case check box. The selection of this check box does not apply to the InSubnet, InActiveList, InCategory or InActiveDirectoryGroup operators. This check box is cleared by default.
    5. If you want to add a negative condition, select If not from the If drop-down list.

    You can add multiple conditions or a group of conditions.

  5. If you have added multiple conditions or groups of conditions, choose a selection condition (and, or, not) by clicking the AND button.
  6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button. You can view the nested filter settings by clicking the edit button.
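The hasBit check described in the operator list above can be sketched as follows; the helper name is made up, and strings are first converted to integers as the documentation describes:

```python
def has_bits(value, positions):
    """Hypothetical sketch of hasBit: positions are counted from the right, starting at 0."""
    if isinstance(value, str):
        try:
            value = int(value)  # strings are converted to integers first
        except ValueError:
            return False        # a string that cannot be converted makes the filter return False
    return all(value >> p & 1 for p in positions)

print(has_bits(5, [0, 2]))   # 5 = 0b101: bits 0 and 2 are set → True
print(has_bits(5, [1]))      # bit 1 of 0b101 is 0 → False
print(has_bits("abc", [0]))  # not convertible to a number → False
```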

Filtering based on data from the Extra event field

Conditions for filters based on data from the Extra event field:

  • Condition—If.
  • Left operand—event field.
  • In this event field, you can specify one of the following values:
    • Extra field.
    • Value from the Extra field in the following format:

      Extra.<field name>

      For example, Extra.app.

      You must specify the value manually.

    • Value from the array written to the Extra field in the following format:

      Extra.<field name>.<array element>

      For example, Extra.array.0.

      The values in the array are numbered starting from 0. You must specify the value manually. To work with a value in the Extra field at a depth of 3 or more, you must enclose the name in backticks ``, for example, `Extra.lev1.lev2.lev3`.

  • Operator—=.
  • Right operand—constant.
  • Value—the value by which you need to filter events.

The order of conditions specified in the selector filter of the correlation rule is significant and affects system performance. We recommend placing the most specific condition first in the selector filter.

Consider two examples of selector filters that select successful authentication events in Microsoft Windows.

Selector filter 1:

Condition 1: DeviceProduct = Microsoft Windows.

Condition 2: DeviceEventClassID = 4624.

Selector filter 2:

Condition 1: DeviceEventClassID = 4624.

Condition 2: DeviceProduct = Microsoft Windows.

The order of conditions specified in selector filter 2 is preferable: its first condition is more specific, so most events are discarded after a single comparison, which places less load on the system.

On the Local variables tab, you can add variables that will be valid inside the correlation rule. To add a variable, click the + Add button, then specify the variable and its value. You can add multiple variables or delete variables. To delete a variable, select the check box next to it and click the Delete button.

Actions tab

You can use this tab to configure the trigger of the correlation rule. A correlation rule of the simple type can have only one trigger, which is activated each time the Bucket registers the triggering of the selector. Available trigger settings are listed in the table below.

Setting

Description

Output

This check box enables the sending of correlation events for post-processing, that is, for external enrichment outside the correlation rule, for response, and to destinations. By default, this check box is cleared.

Loop to correlator

This check box enables the processing of the created correlation event by the rule chain of the current correlator. This makes hierarchical correlation possible. By default, this check box is cleared.

If both the Output and Loop to correlator check boxes are selected, the correlation event is first sent for post-processing, and then to the selectors of the current correlation rule.

No alert

This check box disables the creation of alerts when the correlation rule is triggered. By default, this check box is cleared.

If you do not want to create an alert when a correlation rule is triggered, but you still want to send a correlation event to the storage, select the Output and No alert check boxes. If you select only the No alert check box, a correlation event is not saved in the storage.

Enrichment

Enrichment rules for modifying the values of correlation event fields. Enrichment rules are stored in the correlation rule where they were created. To create an enrichment rule, click the + Add enrichment button.

Available enrichment rule settings:

  • Source kind is the type of the enrichment. When you select some enrichment types, additional settings that you must specify may become available.

    Available types of enrichment:

    • constant

      This type of enrichment is used when a constant needs to be added to an event field. Available enrichment type settings are listed in the table below.

      Available enrichment type settings

      Setting

      Description

      Constant

      The value to be added to the event field. Maximum length of the value: 255 Unicode characters. If you leave this field blank, the existing event field value is removed.

      Target field

      The KUMA event field that you want to populate with the data.

      If you are using the event enrichment functions for extended schema fields of "String", "Number", or "Float" type with a constant, the constant is added to the field.

      If you are using the event enrichment functions for extended schema fields of "Array of strings", "Array of numbers", or "Array of floats" type with a constant, the constant is added to the elements of the array.

    • dictionary

      This type of enrichment is used if you need to add a value from the dictionary of the Dictionary type. Available enrichment type settings are listed in the table below.

      Available enrichment type settings

      Setting

      Description

      Dictionary name

      The dictionary from which the values are to be taken.

      Key fields

      Event fields whose values are to be used for selecting a dictionary entry. To add an event field, click Add field. You can add multiple event fields.

      If you are using event enrichment with dictionary selected as the Source kind, and an array field is specified in the Key enrichment fields setting, then when an array is passed as the dictionary key, the array is serialized into a string in accordance with the rules of serializing a single value in the TSV format.

      Example: The Key fields setting of the enrichment uses the SA.StringArrayOne extended schema field. The SA.StringArrayOne extended schema field contains the values "a", "b", "c". The following values are passed to the dictionary as the key: ['a','b','c'].

      If the Key enrichment fields setting uses an array extended schema field and a regular event schema field, the field values are separated by the "|" character when the dictionary is queried.

      Example: The Key enrichment fields setting uses the SA.StringArrayOne extended schema field and the Code string field. The SA.StringArrayOne extended schema field contains the values "a", "b", "c", and the Code string field contains the myCode sequence of characters. The following values are passed to the dictionary as the key: ['a','b','c']|myCode.
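For illustration, the key composition described in the two examples above can be sketched in Go. The helper name and the exact serialization rules here are assumptions inferred from the examples, not KUMA's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// buildDictKey sketches how the dictionary key appears to be composed from
// the enrichment key fields: an array field is serialized as a single
// bracketed value (['a','b','c']), and multiple key fields are joined
// with the "|" character. Hypothetical helper for illustration only.
func buildDictKey(fields []any) string {
	parts := make([]string, 0, len(fields))
	for _, f := range fields {
		switch v := f.(type) {
		case []string:
			quoted := make([]string, len(v))
			for i, s := range v {
				quoted[i] = "'" + s + "'"
			}
			parts = append(parts, "["+strings.Join(quoted, ",")+"]")
		default:
			parts = append(parts, fmt.Sprint(v))
		}
	}
	return strings.Join(parts, "|")
}

func main() {
	// SA.StringArrayOne = ["a", "b", "c"]
	fmt.Println(buildDictKey([]any{[]string{"a", "b", "c"}})) // ['a','b','c']

	// SA.StringArrayOne plus the Code string field = "myCode"
	fmt.Println(buildDictKey([]any{[]string{"a", "b", "c"}, "myCode"})) // ['a','b','c']|myCode
}
```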

    • table

      This type of enrichment is used if you need to add a value from the dictionary of the Table type.

      When this enrichment type is selected in the Dictionary name drop-down list, select the dictionary for providing the values. In the Key fields group of settings, click the Add field button to select the event fields whose values are used for dictionary entry selection.

      In the Mapping table, configure the dictionary fields to provide data and the event fields to receive data:

      • In the Dictionary field column, select the dictionary field. The available fields depend on the selected dictionary resource.
      • In the KUMA field column, select the event field to which the value is written. For some of the selected fields (custom and flex), in the Label column, you can specify a name for the data written to them.

      New table rows can be added by clicking the Add new element button. Table rows can be deleted by clicking the cross button.

    • event

      This type of enrichment is used when you need to write a value from another event field to the current event field. Available enrichment type settings are listed in the table below.

      Available enrichment type settings

      Setting

      Description

      Target field

      The KUMA event field that you want to populate with the data.

      Source field

      The event field whose value is written to the target field.

      Clicking the wrench icon opens the Conversion window, in which you can click Add conversion to create rules for modifying the source data before writing it to the KUMA event fields. You can reorder and delete created rules. To change the position of a rule, click the reorder icon next to it. To delete a rule, click the cross icon next to it.

      Available conversions

      Conversions are modifications that are applied to a value before it is written to the event field. You can select one of the following conversion types from the drop-down list:

      • entropy is used for converting the value of the source field using the information entropy calculation function and placing the conversion result in the target field of the float type. The result of the conversion is a number. Calculating the information entropy helps detect, for example, DNS tunnels or compromised passwords, such as when a user enters the password instead of the login and the password gets logged in plain text.
      • lower is used to make all characters of the value lowercase.
      • upper is used to make all characters of the value uppercase.
      • regexp is used to convert a value using a specified RE2 regular expression. When you select this type of conversion, a field is displayed in which you must specify the RE2 regular expression.
      • substring is used to extract characters in a specified range of positions. When you select this type of conversion, the Start and End fields are displayed, in which you must specify the range of positions.
      • replace is used to replace a specified character sequence with another character sequence. When you select this type of conversion, the following fields are displayed:
        • Replace chars specifies the sequence of characters to be replaced.
        • With chars is the character sequence to be used instead of the character sequence being replaced.
      • trim removes the specified characters from the beginning and from the end of the event field value. When you select this type of conversion, the Chars field is displayed in which you must specify the characters. For example, if a trim conversion with the Micromon value is applied to Microsoft-Windows-Sysmon, the new value is soft-Windows-Sys.
      • append appends the specified characters to the end of the event field value. When you select this type of conversion, the Constant field is displayed in which you must specify the characters.
      • prepend prepends the specified characters to the beginning of the event field value. When you select this type of conversion, the Constant field is displayed in which you must specify the characters.
      • replace with regexp is used to replace RE2 regular expression results with the specified character sequence. When you select this type of conversion, the following fields are displayed:
        • Expression is the RE2 regular expression whose results you want to replace.
        • With chars is the character sequence to be used instead of the character sequence being replaced.
      • Converting encoded strings to text:
        • decodeHexString—used to convert a HEX string to text.
        • decodeBase64String—used to convert a Base64 string to text.
        • decodeBase64URLString—used to convert a Base64url string to text.

        When converting a corrupted string, or if a conversion error occurs, corrupted data may be written to the event field.

        During event enrichment, if the length of the encoded string exceeds the size of the field of the normalized event, the string is truncated and is not decoded.

        If the length of the decoded string exceeds the size of the event field into which the decoded value is to be written, the string is truncated to fit the size of the event field.

      Conversions when using the extended event schema

      Whether or not a conversion can be used depends on the type of extended event schema field being used:

      • For an additional field of the "String" type, all types of conversions are available.
      • For fields of the "Number" and "Float" types, the following types of conversions are available: regexp, substring, replace, trim, append, prepend, replaceWithRegexp, decodeHexString, decodeBase64String, and decodeBase64URLString.
      • For fields of "Array of strings", "Array of numbers", and "Array of floats" types, the following types of conversions are available: append and prepend.

       

      When using enrichment of events that have event selected as the Source kind, with extended event schema fields used as arguments, the following special considerations apply:

      • If the source extended event schema field has the "Array of strings" type, and the target extended event schema field has the "String" type, the values are written to the target extended event schema field in TSV format.

        Example: The SA.StringArray extended event schema field contains the values "string1", "string2", "string3". An event enrichment operation is performed. The result of the event enrichment operation is written to the DeviceCustomString1 event field. The DeviceCustomString1 event field contains the value ["string1", "string2", "string3"].

      • If the source and target extended event schema fields have the "Array of strings" type, values of the source extended event schema field are added to the values of the target extended event schema field, and the "," character is used as the delimiter character.

        Example: The SA.StringArrayOne field of the extended event scheme contains the ["string1", "string2", "string3"] values, and the SA.StringArrayTwo field of the extended event scheme contains the ["string4", "string5", "string6"] values. An event enrichment operation is performed. The result of the event enrichment operation is written to the SA.StringArrayTwo field of the extended event scheme. The SA.StringArrayTwo extended event schema field contains values: ["string4", "string5", "string6", "string1", "string2", "string3"].

    • template

      This type of enrichment is used when you need to write a value obtained by processing Go templates into the event field. We recommend that the resulting value fit the size of the target field. Available enrichment type settings are listed in the table below.

      Available enrichment type settings

      Setting

      Description

      Template

      The Go template. Event field names are passed in the {{.EventField}} format, where EventField is the name of the event field from which the value must be passed to the script, for example, {{.DestinationAddress}} attacked from {{.SourceAddress}}.

      Target field

      The KUMA event field that you want to populate with the data.

      If you are using enrichment of events that have template selected as the Source kind, and in which the target field has the "String" type, and the source field is an extended event schema field containing an array of strings, you can use one of the following examples for the template:

      • {{.SA.StringArrayOne}}
      • {{- range $index, $element := .SA.StringArrayOne -}}

        {{- if $index}}, {{end}}"{{$element}}"{{- end -}}

      To convert the data in an array field in a template into the TSV format, use the toString function, for example:

      template {{toString .SA.StringArray}}

    Required setting.

  • The Debug toggle switch enables resource logging. This toggle switch is turned off by default.
  • Tags: tags for resource search. Optional setting.

You can create multiple enrichment rules, reorder enrichment rules, or delete enrichment rules. To reorder enrichment rules, use the reorder icons. To delete an enrichment rule, click the cross icon next to it.

Categorization

Categorization rules for assets involved in the event. Using categorization rules, you can link and unlink only reactive categories to and from assets. To create a categorization rule, click the + Add categorization button.

Available categorization rule settings:

  • Action is the operation applied to the category:
    • Add: Link the category to the asset.
    • Delete: Unlink the category from the asset.

    Required setting.

  • Event field is the field of the event that contains the asset to which the operation will be applied.

    Required setting.

  • Category ID is the category to which the operation will be applied.

    Required setting.

You can create multiple categorization rules, reorder categorization rules, or delete categorization rules. To reorder categorization rules, use the reorder icons. To delete a categorization rule, click the cross icon next to it.

Active lists update

Operations with active lists. To create an operation with an active list, click the + Add active list action button.

Available parameters of an active list operation:

  • Name specifies the active list to which the operation is applied. If you want to edit the settings of an active list, click the pencil icon next to it.

    Required setting.

  • Operation is the operation that is applied to the active list:
    • Sum—add a constant, the value of a correlation event field, or the value of a local variable to the value of the active list.
    • Set—write the values of the selected fields of the correlation event into the Active list by creating a new or updating an existing Active list entry. When the Active list entry is updated, the data is merged and only the specified fields are overwritten.
    • Get—get the Active list entry and write the values of the selected fields into the correlation event.
    • Delete—delete the Active list entry.

    Required setting.

  • Key fields are event fields that are used to create an active list entry. The specified event fields are also used as the key of the active list entry.

    The active list entry key depends on the available event fields and does not depend on the order in which they are displayed in the KUMA Console.

    Required setting.

  • Mapping: Rules for mapping active list fields to event fields. You can use mapping rules if in the Operation drop-down list, you selected Get or Set. To create a mapping rule, click the +Add button.

    Available mapping rule settings:

    • Active list field is the active list field that is mapped to the event field. The field must not contain special characters or consist of numbers only.
    • KUMA field is the event field to which the active list field is mapped.
    • Constant is a constant that is assigned to the active list field. You need to specify a constant if in the Operation drop-down list, you selected Set.

    You can create multiple mapping rules or delete mapping rules. To delete a mapping rule, select the check box next to it and click Delete.

You can create multiple operations with active lists, reorder operations with active lists, or delete operations with active lists. To reorder operations with active lists, use the reorder icons. To delete an operation with an active list, click the cross icon next to it.

Updating context tables

Operations with context tables. To create an operation with a context table, click the + Add context table action button.

Available parameters of a context table operation:

  • Name specifies the context table to which the operation is applied. If you want to edit the settings of a context table, click the pencil icon next to it.

    Required setting.

  • Operation is the operation that is applied to the context table:
    • Sum—add a constant, the value of a correlation event field, or the value of a local variable to the value of the context table. This operation is used only for fields of Number and Float types.
    • Set—write the values of the selected fields of the correlation event into the context table by creating a new or updating an existing context table entry. When the context table entry is updated, the data is merged and only the specified fields are overwritten.
    • Merge—append the value of a correlation event field, local variable, or constant to the current value of a field of the context table.
    • Get—get the fields of the context table and write the values of the specified fields into the correlation event. Table fields of the boolean type and lists of boolean values are excluded from mapping because the event does not contain boolean fields.
    • Delete—delete the context table entry.

    Required setting.

  • Mapping: Rules for mapping context table fields to event fields or variables. You can use mapping rules if in the Operation drop-down list, you selected something other than Delete. To create a mapping rule, click the +Add button.

    Available mapping rule settings:

    • Context table field is the context table field that is mapped to an event field. You cannot specify a context table field that is already used in a mapping. You can specify tabulation characters, special characters, or just numbers. The maximum length of a context table field name is 128 characters. A context table field name cannot begin with an underscore.
    • KUMA field is the event field or local variable to which the context table field is mapped.
    • Constant is a constant that is assigned to the context table field. You need to specify a constant if in the Operation drop-down list, you selected Set, Merge, or Sum. The maximum length of a constant is 1024 characters.

    You can create multiple mapping rules or delete mapping rules. To delete a mapping rule, select the check box next to it and click Delete.

You can create multiple operations with context tables, reorder operations with context tables, or delete operations with context tables. To reorder operations with context tables, use the reorder icons. To delete an operation with a context table, click the cross icon next to it.

Correlators tab

This tab is displayed only when you edit the settings of the created correlation rule; on this tab, you can link correlators to the correlation rule.

To add correlators, click the + Add button, specify one or more correlators in the displayed window, and click OK. The correlation rule is linked to the specified correlators and added to the end of the execution queue in the correlator settings. If you want to change the position of a correlation rule in the execution queue, go to the Resources → Correlators section, click the correlator, and in the displayed window, go to the Correlation section, select the check box next to the correlation rule, and change its position by clicking the Move up and Move down buttons.

You can add multiple correlators or delete correlators. To delete a correlator, select the check box next to it and click Delete.

[Topic 221199]

Correlation rules of the 'operational' type


Correlation rules of the operational type are used for working with active lists. Settings for a correlation rule of the operational type are described in the following tables.

General tab

This tab lets you specify the general settings of the correlation rule.

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

Type

Correlation rule type: operational.

Required setting.

Tags

Tags for resource search.

Optional setting.

Rate limit

Maximum number of times a correlation rule can be triggered per second. The default value is 0.

If correlation rules employing complex logic for pattern detection are not triggered, this may be due to the way rule triggers are counted in KUMA. In this case, we recommend increasing the Rate limit, for example, to 1000000.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

MITRE techniques

Downloaded MITRE ATT&CK techniques for analyzing the security coverage status using the MITRE ATT&CK matrix.

Selectors tab

This tab is used to define the conditions that the processed events must fulfill to trigger the correlation rule. A selector has a Settings tab and a Local variables tab.

The settings available on the Settings tab are described in the table below.

Setting

Description

Filter

The filter that defines criteria for identifying events that trigger the selector when received. You can select an existing filter or create a new filter. To create a new filter, select Create new.

If you want to edit the settings of an existing filter, click the pencil icon next to it.

How to create a filter?

To create a filter:

  1. In the Filter drop-down list, select Create new.
  2. If you want to keep the filter as a separate resource, select the Save filter check box. In this case, you will be able to use the created filter in various services. This check box is cleared by default.
  3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. Maximum length of the name: 128 Unicode characters.
  4. In the Conditions settings block, specify the conditions that the events must meet:
    1. Click the Add condition button.
    2. In the Left operand and Right operand drop-down lists, specify the search parameters. Depending on the data source selected in the Right operand field, there may be fields of additional parameters for identifying the value to be passed to the filter. For example, when you select active list, you must specify the name of the active list, the entry key, and the entry key field.
    3. In the operator drop-down list, select an operator.

      Filter operators

      • =—the left operand equals the right operand.
      • <—the left operand is less than the right operand.
      • <=—the left operand is less than or equal to the right operand.
      • >—the left operand is greater than the right operand.
      • >=—the left operand is greater than or equal to the right operand.
      • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
      • contains—the left operand contains values of the right operand.
      • startsWith—the left operand starts with one of the values of the right operand.
      • endsWith—the left operand ends with one of the values of the right operand.
      • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
      • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

        The value to be checked is converted to binary and processed right to left; the bits at the positions specified in the constant or the list are checked.

        If the value being checked is a string, an attempt is made to convert it to an integer and process it as described above. If the string cannot be converted to a number, the filter returns False.

      • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

        If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

      • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
      • inDictionary—checks whether the specified dictionary contains an entry defined by the key composed with the concatenated values of the selected event fields.
      • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
      • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
      • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      • inContextTable—presence of the entry in the specified context table.
      • intersect—presence in the left operand of the list items specified in the right operand.
    4. If you want the operator to be case-insensitive, select the do not match case check box. The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators. This check box is cleared by default.
    5. If you want to add a negative condition, select If not from the If drop-down list.

    You can add multiple conditions or a group of conditions.

  5. If you have added multiple conditions or groups of conditions, choose a selection condition (and, or, not) by clicking the AND button.
  6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button. You can view the nested filter settings by clicking the edit icon.
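As a sketch of the hasBit operator semantics described above, the following Go function checks bit positions right to left after converting the value to an integer. The function name is invented for illustration, and the sketch assumes that all listed positions must be set:

```go
package main

import (
	"fmt"
	"strconv"
)

// hasBit sketches the documented operator: the value is converted to an
// integer and its binary representation is processed right to left; the
// filter matches if the bits at the listed positions are set. If the
// string cannot be converted to a number, the result is false.
func hasBit(value string, positions []uint) bool {
	n, err := strconv.ParseInt(value, 10, 64)
	if err != nil {
		return false
	}
	for _, p := range positions {
		if n&(1<<p) == 0 {
			return false
		}
	}
	return true
}

func main() {
	// 6 is 110 in binary: bit 1 and bit 2 are set, bit 0 is not.
	fmt.Println(hasBit("6", []uint{1, 2})) // true
	fmt.Println(hasBit("6", []uint{0}))    // false
	fmt.Println(hasBit("abc", []uint{0}))  // false: not a number
}
```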

Filtering based on data from the Extra event field

Conditions for filters based on data from the Extra event field:

  • Condition—If.
  • Left operand—event field.
  • In this event field, you can specify one of the following values:
    • Extra field.
    • Value from the Extra field in the following format:

      Extra.<field name>

      For example, Extra.app.

      You must specify the value manually.

    • Value from the array written to the Extra field in the following format:

      Extra.<field name>.<array element>

      For example, Extra.array.0.

      The values in the array are numbered starting from 0. You must specify the value manually. To work with a value in the Extra field at a nesting depth of 3 or more, you must enclose the value in backticks, for example, `Extra.lev1.lev2.lev3`.

  • Operator—=.
  • Right operand—constant.
  • Value—the value by which you need to filter events.

The order of conditions specified in the selector filter of the correlation rule is significant and affects system performance. We recommend putting the most unique condition in the first place in the selector filter.

Consider two examples of selector filters that select successful authentication events in Microsoft Windows.

Selector filter 1:

Condition 1: DeviceProduct = Microsoft Windows.

Condition 2: DeviceEventClassID = 4624.

Selector filter 2:

Condition 1: DeviceEventClassID = 4624.

Condition 2: DeviceProduct = Microsoft Windows.

The order of conditions specified in selector filter 2 is preferable because it places less load on the system.

On the Local variables tab, you can add variables that will be valid inside the correlation rule. To add a variable, click the + Add button, then specify the variable and its value. You can add multiple variables or delete variables. To delete a variable, select the check box next to it and click the Delete button.

Actions tab

You can use this tab to configure the trigger of the correlation rule. A correlation rule of the operational type can have only one trigger, which is activated each time the bucket registers a selector trigger. Available trigger settings are listed in the table below.

Setting

Description

Active lists update

Operations with active lists. To create an operation with an active list, click the + Add active list action button.

Available parameters of an active list operation:

  • Name specifies the active list to which the operation is applied. If you want to edit the settings of an active list, click the pencil icon next to it.

    Required setting.

  • Operation is the operation that is applied to the active list:
    • Sum—add a constant, the value of a correlation event field, or the value of a local variable to the value of the active list.
    • Set—write the values of the selected fields of the correlation event into the Active list by creating a new or updating an existing Active list entry. When the Active list entry is updated, the data is merged and only the specified fields are overwritten.
    • Get—get the Active list entry and write the values of the selected fields into the correlation event.
    • Delete—delete the Active list entry.

    Required setting.

  • Key fields are event fields that are used to create an active list entry. The specified event fields are also used as the key of the active list entry.

    The active list entry key depends on the available event fields and does not depend on the order in which they are displayed in the KUMA Console.

    Required setting.

  • Mapping: Rules for mapping active list fields to event fields. You can use mapping rules if in the Operation drop-down list, you selected Get or Set. To create a mapping rule, click the +Add button.

    Available mapping rule settings:

    • Active list field is the active list field that is mapped to the event field. The field must not contain special characters or consist of numbers only.
    • KUMA field is the event field to which the active list field is mapped.
    • Constant is a constant that is assigned to the active list field. You need to specify a constant if in the Operation drop-down list, you selected Set.

    You can create multiple mapping rules or delete mapping rules. To delete a mapping rule, select the check box next to it and click Delete.

You can create multiple operations with active lists, reorder operations with active lists, or delete operations with active lists. To reorder operations with active lists, use the reorder icons. To delete an operation with an active list, click the cross icon next to it.

Updating context tables

Operations with context tables. To create an operation with a context table, click the + Add context table action button.

Available parameters of a context table operation:

  • Name specifies the context table to which the operation is applied. If you want to edit the settings of a context table, click the pencil icon next to it.

    Required setting.

  • Operation is the operation that is applied to the context table:
    • Sum—add a constant, the value of a correlation event field, or the value of a local variable to the value of the context table. This operation is used only for fields of Number and Float types.
    • Set—write the values of the selected fields of the correlation event into the context table by creating a new or updating an existing context table entry. When the context table entry is updated, the data is merged and only the specified fields are overwritten.
    • Merge—append the value of a correlation event field, local variable, or constant to the current value of a field of the context table.
    • Get—get the fields of the context table and write the values of the specified fields into the correlation event. Table fields of the boolean type and lists of boolean values are excluded from mapping because the event does not contain boolean fields.
    • Delete—delete the context table entry.

    Required setting.

  • Mapping: Rules for mapping context table fields to event fields or variables. You can use mapping rules if in the Operation drop-down list, you selected something other than Delete. To create a mapping rule, click the +Add button.

    Available mapping rule settings:

    • Context table field is the context table field that is mapped to an event field. You cannot specify a context table field that is already used in a mapping. You can specify tabulation characters, special characters, or just numbers. The maximum length of a context table field name is 128 characters. A context table field name cannot begin with an underscore.
    • KUMA field is the event field or local variable to which the context table field is mapped.
    • Constant is a constant that is assigned to the context table field. You need to specify a constant if in the Operation drop-down list, you selected Set, Merge, or Sum. The maximum length of a constant is 1024 characters.

    You can create multiple mapping rules or delete mapping rules. To delete a mapping rule, select the check box next to it and click Delete.

You can create multiple operations with context tables, reorder operations with context tables, or delete operations with context tables. To reorder operations with context tables, use the reorder icons. To delete an operation with a context table, click the delete icon next to it.

Correlators tab

This tab is displayed only when you edit the settings of the created correlation rule; on this tab, you can link correlators to the correlation rule.

To add correlators, click the + Add button, specify one or more correlators in the displayed window, and click OK. The correlation rule is linked to the specified correlators and added to the end of the execution queue in the correlator settings. If you want to change the position of the correlation rule in the execution queue, go to the Resources → Correlator section and click the correlator. In the displayed window, go to the Correlation section, select the check box next to the correlation rule, and change its position by clicking the Move up and Move down buttons.

You can add multiple correlators or delete correlators. To delete a correlator, select the check box next to it and click Delete.

Page top
[Topic 221203]

Variables in correlators

If tracking values in event fields, active lists, or dictionaries is not enough to cover some specific security scenarios, you can use global and local variables. You can use them to take various actions on the values received by the correlators by implementing complex logic for threat detection. Variables can be declared in the correlator (global variables) or in the correlation rule (local variables) by assigning a function to them, then querying them from correlation rules as if they were ordinary event fields and receiving the triggered function result in response.

Usage scope of variables:

Variables can be queried the same way as event fields by preceding their names with the $ character.

You can use extended event schema fields in correlation rules, local variables, and global variables.

In this section

Local variables in identical and unique fields

Local variables in selector

Local variables in event enrichment

Local variables in active list enrichment

Properties of variables

Requirements for variables

Functions of variables

Declaring variables

Page top
[Topic 234114]

Local variables in identical and unique fields

You can use local variables in the Identical fields and Unique fields sections of 'standard' type correlation rules. To use a local variable, its name must be preceded with the "$" character.

For an example of using local variables in the Identical fields and Unique fields sections, refer to the rule provided with KUMA: R403_Access to malicious resources from a host with disabled protection or an out-of-date anti-virus database.

Page top
[Topic 260640]

Local variables in selector

To use a local variable in a selector:

  1. Add a local variable to the rule.
  2. In the Correlation rules window, go to the General tab and add the created local variable to the Identical fields section. Prefix the local variable name with a "$" character.
  3. In Correlation rules window, go to the Selectors tab, select an existing filter or create a new filter and click Add condition.
  4. Select the event field as the operand.
  5. Select the local variable as the event field value and prefix the variable name with a "$" character.
  6. Specify the remaining filter settings.
  7. Click Save.

For an example of using local variables, refer to the rule provided with KUMA: R403_Access to malicious resources from a host with disabled protection or an out-of-date anti-virus database.

Page top
[Topic 260641]

Local variables in event enrichment

You can use 'standard' and 'simple' correlation rules to enrich events with local variables.

Enrichment with text and numbers

You can enrich events with text (strings). To do so, you can use functions that modify strings: to_lower, to_upper, str_join, append, prepend, substring, tr, replace.

You can enrich events with numbers. To do so, you can use the following functions: addition ("+"), subtraction ("-"), multiplication ("*"), division ("/"), round, ceil, floor, abs, pow.

You can also use regular expressions to manage data in local variables.

Using regular expressions in correlation rules is computationally intensive compared to other operations. Therefore, when designing correlation rules, we recommend limiting the use of regular expressions to the necessary minimum and using other available operations.

Timestamp enrichment

You can enrich events with timestamps (date and time). To do so, you can use functions that let you get or modify timestamps: now, extract_from_timestamp, parse_timestamp, format_timestamp, truncate_timestamp, time_diff.

Operations with active lists and tables

You can enrich events with local variables and data from active lists and tables.

To enrich events with data from an active list, use the active_list, active_list_dyn functions.

To enrich events with data from a table, use the table_dict, dict functions.

You can create conditional statements by using the 'conditional' function in local variables. In this way, the variable can return one of the values depending on what data was received for processing.

Enriching events with a local variable

To use a local variable to enrich events:

  1. Add a local variable to the rule.
  2. In the Correlation rules window, go to the General tab and add the created local variable to the Identical fields section. Prefix the local variable name with a "$" character.
  3. In the Correlation rules window, go to the Actions tab, and under Enrichment, in the Source kind drop-down list, select Event.
  4. From the Target field drop-down list, select the KUMA event field to which you want to pass the value of the local variable.
  5. From the Source field drop-down list, select a local variable. Prefix the local variable name with a "$" character.
  6. Specify the remaining rule settings.
  7. Click Save.
Page top
[Topic 260642]

Local variables in active list enrichment

You can use local variables to enrich active lists.

To enrich the active list with a local variable:

  1. Add a local variable to the rule.
  2. In the Correlation rules window, go to the General tab and add the created local variable to the Identical fields section. Prefix the local variable name with a "$" character.
  3. In the Correlation rules window, go to the Actions tab and under Active lists update, add the local variable to the Key fields field. Prefix the local variable name with a "$" character.
  4. Under Mapping, specify the correspondence between the event fields and the active list fields.
  5. Click the Save button.
Page top
[Topic 260644]

Properties of variables

Local and global variables

The properties of global variables differ from the properties of local variables.

Global variables:

  • Global variables are declared at the correlator level and are applied only within the scope of this correlator.
  • The global variables of the correlator can be queried from all correlation rules that are specified in it.
  • In standard correlation rules, the same global variable can take different values in each selector.
  • It is not possible to transfer global variables between different correlators.

Local variables:

  • Local variables are declared at the correlation rule level and are applied only within the scope of this rule.
  • In standard correlation rules, the scope of a local variable consists of only the selector in which the variable was declared.
  • Local variables can be declared in any type of correlation rule.
  • Local variables cannot be transferred between rules or selectors.
  • A local variable cannot be used as a global variable.

Variables used in various types of correlation rules

  • In operational correlation rules, on the Actions tab, you can specify all variables available or declared in this rule.
  • In standard correlation rules, on the Actions tab, you can provide only those variables specified in these rules on the General tab, in the Identical fields field.
  • In simple correlation rules, on the Actions tab, you can provide only those variables specified in these rules on the General tab, in the Inherited Fields field.

Page top
[Topic 234737]

Requirements for variables

When adding a variable function, you must first specify the name of the function, and then list its parameters in parentheses. Basic mathematical operations (addition, subtraction, multiplication, division) are an exception to this requirement. When these operations are used, parentheses designate the precedence of the operations.

Requirements for function names:

  • Must be unique within the correlator.
  • Must contain 1 to 128 Unicode characters.
  • Must not begin with the character $.
  • Must be written in camelCase or CamelCase.

Special considerations when specifying functions of variables:

  • The sequence of parameters is important.
  • Parameters are separated by a comma: ,.
  • String parameters are passed in single quotes: '.
  • Event field names and variables are specified without quotation marks.
  • When querying a variable as a parameter, add the $ character before its name.
  • You do not need to add a space between parameters.
  • In all functions in which a variable can be used as parameters, nested functions can be created.
Page top
[Topic 234739]

Functions of variables

Operations with active lists and dictionaries

"active_list" and "active_list_dyn" functions

These functions allow you to get information from an active list and to dynamically generate the active list field name and key.

You must specify the parameters in the following sequence:

  1. Name of the active list.
  2. Expression that returns the field name of the active list.
  3. One or more expressions whose results are used to generate the key.

    Usage example

    Result

    active_list('Test', to_lower('DeviceHostName'), to_lower(DeviceCustomString2), to_lower(DeviceCustomString1))

    Gets the field value of the active list.

Use these functions to query the active list of the shared tenant from a variable. To do so, add the @Shared suffix after the name of the active list (case sensitive). For example, active_list('exampleActiveList@Shared', 'score', SourceAddress, SourceUserName).

"table_dict" function

Gets information about the value in the specified column of a dictionary of the table type.

You must specify the parameters in the following sequence:

  1. Dictionary name
  2. Dictionary column name
  3. One or more expressions whose results are used to generate the dictionary row key.

    Usage example

    Result

    table_dict('exampleTableDict', 'office', SourceUserName)

    Gets data from the exampleTableDict dictionary from the row with the SourceUserName key in the office column.

    table_dict('exampleTableDict', 'office', SourceAddress, to_lower(SourceUserName))

    Gets data from the exampleTableDict dictionary from a composite key string from the SourceAddress field value and the lowercase value of the SourceUserName field from the office column.

Use this function to access the dictionary of the shared tenant from a variable. To do so, add the @Shared suffix after the name of the dictionary (case sensitive). For example, table_dict('exampleTableDict@Shared', 'office', SourceUserName).

"dict" function

Gets information about the value in the specified column of a dictionary of the dictionary type.

You must specify the parameters in the following sequence:

  1. Dictionary name
  2. One or more expressions whose results are used to generate the dictionary row key.

    Usage example

    Result

    dict('exampleDictionary', SourceAddress)

    Gets data from exampleDictionary from the row with the SourceAddress key.

    dict('exampleDictionary', SourceAddress, to_lower(SourceUserName))

    Gets data from the exampleDictionary from a composite key string from the SourceAddress field value and the lowercase value of the SourceUserName field.

Use this function to access the dictionary of the shared tenant from a variable. To do so, add the @Shared suffix after the name of the dictionary (case sensitive). For example, dict('exampleDictionary@Shared', SourceAddress).
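
The composite-key lookups in the dict and table_dict examples above can be sketched in Python. This is a hypothetical illustration only: the dict_lookup name and the '|' separator are assumptions, since the rule KUMA uses to build the composite row key is an implementation detail.

```python
def dict_lookup(dictionary, *key_parts, sep='|'):
    # Hypothetical sketch: join the key expressions into one composite
    # row key and return the matching value, or an empty string if the
    # key is absent. The separator is an assumption.
    key = sep.join(str(part) for part in key_parts)
    return dictionary.get(key, '')

example_dictionary = {'10.0.0.1|user': 'office-1'}
dict_lookup(example_dictionary, '10.0.0.1', 'USER'.lower())  # 'office-1'
```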

Operations with context tables

"context_table" function

Returns the value of the specified field in the base type (for example, integer, array of integers).

You must specify the parameters in the following sequence:

  1. Name of the context table. The name must be specified.
  2. Expression that returns the field name of the context table.
  3. Expression that returns the name of key field 1 of the context table.
  4. Expression that returns the value of key field 1 of the context table.

The function must contain at least 4 parameters.

Usage example

Result

context_table('tbl1', 'list_field1', 'key1', 'key1_val')

Get the value of the specified field. If the context table or context table field does not exist, an empty string is returned.

"len" function

Returns the length of a string or array.

The function returns the length of the array if the passed array is of one of the following types:

  • array of integers
  • array of floats
  • array of strings
  • array of booleans

If an array of a different type is passed, the data of the array is cast to the string type, and the function returns the length of the resulting string.

Usage examples

len(context_table('tbl1', 'list_field1', 'key1', 'key1_val'))

len(DeviceCustomString1)

"distinct_items" function

Returns a list of unique elements in an array.

The function returns the list of unique elements of the array if the passed array is of one of the following types:

  • array of integers
  • array of floats
  • array of strings
  • array of booleans

If an array of a different type is passed, the data of the array is cast to the string type, and the function returns a string consisting of the unique characters from the original string.

Usage examples

distinct_items(context_table('tbl1', 'list_field1', 'key1', 'key1_val'))

distinct_items(DeviceCustomString1)
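
The behavior described above can be sketched in Python. The ordering of the returned elements is an assumption (the documentation does not specify it); first-occurrence order is used here.

```python
def distinct_items(values):
    # Keep one copy of each element; for a string, return the unique
    # characters. First-occurrence order is assumed.
    if isinstance(values, str):
        return ''.join(dict.fromkeys(values))
    return list(dict.fromkeys(values))

distinct_items([1, 2, 2, 3, 1])  # [1, 2, 3]
distinct_items('aabbc')          # 'abc'
```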

"sort_items" function

Returns a sorted list of array elements.

You must specify the parameters in the following sequence:

  1. Expression that returns the object of the sorting.
  2. Sorting order. Possible values: asc, desc. If the parameter is not specified, the default value is asc.

The function returns the list of sorted elements of the array if the passed array is of one of the following types:

  • array of integers
  • array of floats
  • array of strings

For a boolean array, the function returns the list of array elements in the original order.

If an array of a different type is passed, the data of the array is cast to the string type, and the function returns a string of sorted characters.

Usage examples

sort_items(context_table('tbl1', 'list_field1', 'key1', 'key1_val'), 'asc')

sort_items(DeviceCustomString1)
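
A minimal Python sketch of the documented sorting behavior, including the asc default and the cast of non-array data to a string of sorted characters:

```python
def sort_items(values, order='asc'):
    # Sort an array, or the characters of a string, in the given order;
    # 'asc' is the default, as documented.
    reverse = (order == 'desc')
    if isinstance(values, str):
        return ''.join(sorted(values, reverse=reverse))
    return sorted(values, reverse=reverse)

sort_items([3, 1, 2])          # [1, 2, 3]
sort_items([3, 1, 2], 'desc')  # [3, 2, 1]
sort_items('cab')              # 'abc'
```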

"item" function

Returns the array element with the specified index or the character of a string with the specified index if an array of integers, floats, strings, or boolean values is passed.

You must specify the parameters in the following sequence:

  1. Expression that returns the object of the indexing.
  2. Expression that returns the index of the element or character.

The function must contain at least 2 parameters.

The function returns the array element with the specified index or the string character with the specified index if the index falls within the range of the array and the passed array is of one of the following types:

  • array of integers
  • array of floats
  • array of strings
  • array of booleans

If an array of a different type is passed and the index falls within the range of the array, the data is cast to the string type, and the function returns the string character with the specified index. If an array of a different type is passed and the index is outside the range of the array, the function returns an empty string.

Usage examples

item(context_table('tbl1', 'list_field1', 'key1', 'key1_val'), 1)

item(DeviceCustomString1, 0)
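
An indexing sketch in Python. What the function returns for an out-of-range index on a supported array type is not fully specified above, so an empty string is assumed in that case:

```python
def item(values, index):
    # Return the element (or character) at the given index; an empty
    # string for an out-of-range index is an assumption.
    if 0 <= index < len(values):
        return values[index]
    return ''

item(['a', 'b', 'c'], 1)  # 'b'
item('text', 0)           # 't'
item('text', 99)          # ''
```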

Operations with strings

"to_lower" function

Converts characters in a string to lowercase. Supported for standard fields and extended event schema fields of the "string" type.

A string can be passed as a string, field name or variable.

Usage examples

to_lower(SourceUserName)

to_lower('SomeText')

to_lower($otherVariable)

"to_upper" function

Converts characters in a string to uppercase. Supported for standard fields and extended event schema fields of the "string" type. A string can be passed as a string, field name or variable.

Usage examples

to_upper(SourceUserName)

to_upper('SomeText')

to_upper($otherVariable)

"append" function

Adds characters to the end of a string. Supported for standard fields and extended event schema fields of the "string" type.

You must specify the parameters in the following sequence:

  1. Original string.
  2. Added string.

Strings can be passed as a string, field name or variable.

Usage examples

Usage result

append(Message, '123')

The string 123 is added to the end of this string from the Message field.

append($otherVariable, 'text')

The string text is added to the end of this string from the variable otherVariable.

append(Message, $otherVariable)

A string from otherVariable is added to the end of this string from the Message field.

"prepend" function

Adds characters to the beginning of a string. Supported for standard fields and extended event schema fields of the "string" type.

You must specify the parameters in the following sequence:

  1. Original string.
  2. Added string.

Strings can be passed as a string, field name or variable.

Usage examples

Usage result

prepend(Message, '123')

The string 123 is added to the beginning of this string from the Message field.

prepend($otherVariable, 'text')

The string text is added to the beginning of this string from otherVariable.

prepend(Message, $otherVariable)

A string from otherVariable is added to the beginning of this string from the Message field.

"substring" function

Returns a substring from a string. Supported for standard fields and extended event schema fields of the "string" type.

You must specify the parameters in the following sequence:

  1. Original string.
  2. Substring start position (natural number or 0).
  3. (Optional) substring end position.

Strings can be passed as a string, field name or variable. If the position number is greater than the original data string length, an empty string is returned.

Usage examples

Usage result

substring(Message, 2)

Returns a part of the string from the Message field: from 3 characters to the end.

substring($otherVariable, 2, 5)

Returns a part of the string from the otherVariable variable: from 3 to 6 characters.

substring(Message, 0, len(Message) - 1)

Returns the entire string from the Message field except the last character.
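
Judging by the examples above, the start position is 0-based and the end position is exclusive. A Python sketch under that assumption:

```python
def substring(value, start, end=None):
    # 0-based start, exclusive end (assumed from the examples); an empty
    # string is returned when the start position is past the string end.
    if start > len(value):
        return ''
    return value[start:end]

substring('example', 2)                      # 'ample'
substring('example', 0, len('example') - 1)  # 'exampl'
substring('example', 99)                     # ''
```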

"index_of" function

The "index_of" function returns the position of the first occurrence of a character or substring in a string; the first character in the string has index 0. If the function does not find the substring, the function returns -9223372036854775808 (the minimum 64-bit integer).

The function accepts the following parameters:

  • As source data, an event field, another variable, or constant.
  • Any expression out of those that are available in local variables.

To use this function, you must specify the parameters in the following order:

  1. Character or substring whose position you want to find.
  2. String to be searched.

Usage examples

Usage result

index_of('@', SourceUserName)

The function looks for the "@" character in the SourceUserName field. The SourceUserName field contains the "user@example.com" string.

Result = 4

The function returns the index of the first occurrence of the character in the string. The first character in the string has index 0.

index_of('m', SourceUserName)

The function looks for the "m" character in the SourceUserName field. The SourceUserName field contains the "user@example.com" string.

Result = 8

The function returns the index of the first occurrence of the character in the string. The first character in the string has index 0.

"last_index_of" function

The "last_index_of" function returns the position of the last occurrence of a character or substring in a string; the first character in the string has index 0. If the function does not find the substring, the function returns -9223372036854775808 (the minimum 64-bit integer).

The function accepts the following parameters:

  • As source data, an event field, another variable, or constant.
  • Any expression out of those that are available in local variables.

To use this function, you must specify the parameters in the following order:

  1. Character or substring whose position you want to find.
  2. String to be searched.

Usage examples

Usage result

last_index_of('m', SourceUserName)

The function looks for the "m" character in the SourceUserName field. The SourceUserName field contains the "user@example.com" string.

Result = 15

The function returns the index of the last occurrence of the character in the string. The first character in the string has index 0.
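
Both functions map naturally onto Python's str.find and str.rfind; the not-found sentinel is assumed here to be the minimum signed 64-bit integer:

```python
NOT_FOUND = -(2 ** 63)  # assumed sentinel for "substring not found"

def index_of(needle, haystack):
    # 0-based position of the first occurrence, as documented.
    pos = haystack.find(needle)
    return pos if pos != -1 else NOT_FOUND

def last_index_of(needle, haystack):
    # 0-based position of the last occurrence.
    pos = haystack.rfind(needle)
    return pos if pos != -1 else NOT_FOUND

index_of('@', 'user@example.com')       # 4
last_index_of('m', 'user@example.com')  # 15
```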

"tr" function

Removes the specified characters from the beginning and end of a string. Supported for standard fields and extended event schema fields of the "string" type.

You must specify the parameters in the following sequence:

  1. Original string.
  2. (Optional) string that should be removed from the beginning and end of the original string.

Strings can be passed as a string, field name or variable. If you do not specify a string to be deleted, spaces will be removed from the beginning and end of the original string.

Usage examples

Usage result

tr(Message)

Spaces have been removed from the beginning and end of the string from the Message field.

tr($otherVariable, '_')

If the otherVariable variable has the _test_ value, the string test is returned.

tr(Message, '@example.com')

If the Message event field contains the string user@example.com, the string user is returned.
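
A Python sketch of the trimming behavior, assuming the second argument is treated as a set of characters to remove from both ends (as in Python's str.strip); both examples above are consistent with that reading:

```python
def tr(value, cutset=None):
    # Strip the given characters from both ends of the string; with no
    # second argument, whitespace is removed. Treating the argument as
    # a character set is an assumption based on the examples.
    return value.strip() if cutset is None else value.strip(cutset)

tr('  some text  ')                     # 'some text'
tr('_test_', '_')                       # 'test'
tr('user@example.com', '@example.com')  # 'user'
```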

"replace" function

Replaces all occurrences of character sequence A in a string with character sequence B. Supported for standard fields and extended event schema fields of the "string" type.

You must specify the parameters in the following sequence:

  1. Original string.
  2. Search string: sequence of characters to be replaced.
  3. Replacement string: sequence of characters to replace the search string.

Strings can be passed as an expression.

Usage examples

Usage result

replace(Name, 'UserA', 'UserB')

Returns a string from the Name event field in which all occurrences of UserA are replaced with UserB.

replace($otherVariable, ' text ', '_text_')

Returns a string from otherVariable in which all occurrences of ' text ' are replaced with '_text_'.

"regexp_replace" function

Replaces a sequence of characters that match a regular expression with a sequence of characters and regular expression capturing groups. Supported for standard fields and extended event schema fields of the "string" type.

You must specify the parameters in the following sequence:

  1. Original string.
  2. Search string: regular expression.
  3. Replacement string: sequence of characters to replace the search string, and IDs of the regular expression capturing groups. A string can be passed as an expression.

Strings can be passed as a string, field name or variable. Unnamed capturing groups can be used.

In regular expressions used in variable functions, each backslash character must be additionally escaped. For example, ^example\\\\ must be used instead of the regular expression ^example\\.

Usage examples

Usage result

regexp_replace(SourceAddress, '([0-9]{1,3}).([0-9]{1,3}).([0-9]{1,3}).([0-9]{1,3})', 'newIP: $1.$2.$3.10')

Returns a string from the SourceAddress event field in which the text newIP is inserted before the IP addresses. In addition, the last digits of the address are replaced with 10.
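
A Python analog of the example above. Note that Python group references are written \g<n> where KUMA's replacement string uses $n, and the double-escaping rule above applies only to KUMA variable functions, not to Python raw strings:

```python
import re

# Analog of the regexp_replace example; the input string is invented
# for illustration.
pattern = r'([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})'
result = re.sub(pattern, r'newIP: \g<1>.\g<2>.\g<3>.10',
                'connection from 192.168.1.25')
# result == 'connection from newIP: 192.168.1.10'
```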

"regexp_capture" function

Gets the result matching the regular expression condition from the original string. Supported for standard fields and extended event schema fields of the "string" type.

You must specify the parameters in the following sequence:

  1. Original string.
  2. Search string: regular expression.

Strings can be passed as a string, field name or variable. Unnamed capturing groups can be used.

In regular expressions used in variable functions, each backslash character must be additionally escaped. For example, ^example\\\\ must be used instead of the regular expression ^example\\.

Usage examples

Example values

Usage result

regexp_capture(Message, '(\\\\d{1,3}\\\\.\\\\d{1,3}\\\\.\\\\d{1,3}\\\\.\\\\d{1,3})')

Message = 'Access from 192.168.1.1 session 1'

Message = 'Access from 45.45.45.45 translated address 192.168.1.1 session 1'

'192.168.1.1'

'45.45.45.45'

"template" function

Returns the string specified in the function, with variables replaced with their values. Variables for substitution can be passed in the following ways:

  • Inside the string.
  • After the string. In this case, inside the string, you must specify variables in the {{index.<n>}} notation, where <n> is the index of the variable passed after the string. The index is 0-based.

    Usage examples

    template('Very long text with values of rule={{.DeviceCustomString1}} and {{.Name}} event fields, as well as values of {{index.0}} and {{index.1}} local variables and then {{index.2}}', $var1, $var2, $var10)
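
The {{index.<n>}} substitution can be sketched in Python; event-field placeholders such as {{.Name}} are outside the scope of this illustration:

```python
import re

def template(text, *variables):
    # Substitute {{index.<n>}} placeholders with the positional values
    # passed after the template string (0-based, as documented).
    return re.sub(r'\{\{index\.(\d+)\}\}',
                  lambda m: str(variables[int(m.group(1))]),
                  text)

template('values {{index.0}} and {{index.1}}', 'var1', 'var2')
# 'values var1 and var2'
```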

Operations with timestamps

"now" function

Gets a timestamp in epoch format. Runs with no arguments.

Usage examples

now()

"extract_from_timestamp" function

Gets atomic time representations (year, month, day, hour, minute, second, day of the week) from fields and variables with time in the epoch format.

The parameters must be specified in the following sequence:

  1. Event field of the timestamp type, or variable.
  2. Notation of the atomic time representation. This parameter is case sensitive.

    Possible variants of atomic time notation:

    • y refers to the year in number format.
    • M refers to the month in number notation.
    • d refers to the number of the month.
    • wd refers to the day of the week: Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday.
    • h refers to the hour in 24-hour format.
    • m refers to the minutes.
    • s refers to the seconds.
  3. (optional) Time zone notation. If this parameter is not specified, the time is calculated in UTC format.

    Usage examples

    extract_from_timestamp(Timestamp, 'wd')

    extract_from_timestamp(Timestamp, 'h')

    extract_from_timestamp($otherVariable, 'h')

    extract_from_timestamp(Timestamp, 'h', 'Europe/Moscow')
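
A Python sketch of these extractions, assuming the timestamp is in epoch milliseconds (as in the examples for the other timestamp functions) and defaulting to UTC, as documented:

```python
from datetime import datetime, timezone

def extract_from_timestamp(epoch_ms, part, tz=timezone.utc):
    # Epoch milliseconds are assumed; without a time zone parameter,
    # the time is calculated in UTC, matching the documented default.
    dt = datetime.fromtimestamp(epoch_ms / 1000, tz)
    return {
        'y': dt.year, 'M': dt.month, 'd': dt.day,
        'wd': dt.strftime('%A'), 'h': dt.hour,
        'm': dt.minute, 's': dt.second,
    }[part]

extract_from_timestamp(1654631774175, 'h')   # 19
extract_from_timestamp(1654631774175, 'wd')  # 'Tuesday'
```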

"parse_timestamp" function

Converts the time from RFC3339 format (for example, "2022-05-24 00:00:00", "2022-05-24 00:00:00+0300") to epoch format.

Usage examples

parse_timestamp(Message)

parse_timestamp($otherVariable)

"format_timestamp" function

Converts the time from epoch format to RFC3339 format.

The parameters must be specified in the following sequence:

  1. Event field of the timestamp type, or variable.
  2. Time format notation: RFC3339.
  3. (optional) Time zone notation. If this parameter is not specified, the time is calculated in UTC format.

    Usage examples

    format_timestamp(Timestamp, 'RFC3339')

    format_timestamp($otherVariable, 'RFC3339')

    format_timestamp(Timestamp, 'RFC3339', 'Europe/Moscow')
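
Both conversions can be sketched in Python's datetime module, again assuming epoch milliseconds; treating a value without a time zone offset as UTC is an assumption:

```python
from datetime import datetime, timezone

def format_timestamp(epoch_ms, tz=timezone.utc):
    # Epoch milliseconds to an RFC 3339 string; UTC unless a zone is given.
    return datetime.fromtimestamp(epoch_ms / 1000, tz).isoformat()

def parse_timestamp(text):
    # RFC 3339 string back to epoch milliseconds; values without an
    # offset are treated as UTC here (an assumption).
    dt = datetime.fromisoformat(text)
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    return int(dt.timestamp() * 1000)

format_timestamp(1654631774000)         # '2022-06-07T19:56:14+00:00'
parse_timestamp('2022-05-24 00:00:00')  # 1653350400000
```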

"truncate_timestamp" function

Rounds the time in epoch format. After rounding, the time is returned in epoch format. Time is rounded down.

The parameters must be specified in the following sequence:

  1. Event field of the timestamp type, or variable.
  2. Rounding parameter:
    • 1s rounds to the nearest second.
    • 1m rounds to the nearest minute.
    • 1h rounds to the nearest hour.
    • 24h rounds to the nearest day.
  3. (optional) Time zone notation. If this parameter is not specified, the time is calculated in UTC format.

    Usage examples

    Examples of rounded values

    Usage result

    truncate_timestamp(Timestamp, '1m')

    1654631774175 (7 June 2022, 19:56:14.175)

    1654631760000 (7 June 2022, 19:56:00)

    truncate_timestamp($otherVariable, '1h')

    1654631774175 (7 June 2022, 19:56:14.175)

    1654628400000 (7 June 2022, 19:00:00)

    truncate_timestamp(Timestamp, '24h', 'Europe/Moscow')

    1654631774175 (7 June 2022, 19:56:14.175)

    1654560000000 (7 June 2022, 0:00:00)
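
The rounding-down behavior reduces to subtracting the remainder modulo the unit size. A Python sketch that reproduces the first two examples above (UTC only; the optional time zone parameter is omitted):

```python
def truncate_timestamp(epoch_ms, unit):
    # Round an epoch-milliseconds timestamp down to the nearest unit.
    unit_ms = {'1s': 1_000, '1m': 60_000, '1h': 3_600_000, '24h': 86_400_000}
    return epoch_ms - epoch_ms % unit_ms[unit]

truncate_timestamp(1654631774175, '1m')  # 1654631760000
truncate_timestamp(1654631774175, '1h')  # 1654628400000
```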

"time_diff" function

Gets the time interval between two timestamps in epoch format.

The parameters must be specified in the following sequence:

  1. Interval end time. Event field of the timestamp type, or variable.
  2. Interval start time. Event field of the timestamp type, or variable.
  3. Time interval notation:
    • ms refers to milliseconds.
    • s refers to seconds.
    • m refers to minutes.
    • h refers to hours.
    • d refers to days.

    Usage examples

    time_diff(EndTime, StartTime, 's')  

    time_diff($otherVariable, Timestamp, 'h')

    time_diff(Timestamp, DeviceReceiptTime, 'd')
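
A Python sketch of the interval calculation, assuming epoch milliseconds; rounding the result down to a whole number of units is an assumption:

```python
def time_diff(end_ms, start_ms, unit):
    # Interval between two epoch-milliseconds timestamps, expressed in
    # the requested unit; whole units only (assumed).
    unit_ms = {'ms': 1, 's': 1_000, 'm': 60_000, 'h': 3_600_000, 'd': 86_400_000}
    return (end_ms - start_ms) // unit_ms[unit]

time_diff(1654631774175, 1654631714175, 's')  # 60
```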

Mathematical operations

These consist of basic mathematical operations and functions.

Basic mathematical operations

Supported for integer and float fields of the extended event schema.

Operations:

  • Addition
  • Subtraction
  • Multiplication
  • Division
  • Modulo division

Parentheses determine the sequence of actions.

Available arguments:

  • Numeric event fields
  • Numeric variables
  • Real numbers

    When modulo dividing, only natural numbers can be used as arguments.

Usage constraints:

  • Division by zero returns zero.
  • Mathematical operations on a number and a string return the number unchanged. For example, 1 + abc returns 1.
  • Integers resulting from operations are returned without a dot.

    Usage examples

    (Type=3; otherVariable=2; Message=text)

    Usage result

    Type + 1

    4

    $otherVariable - Type

    -1

    2 * 2.5

    5

    2 / 0

    0

    Type * Message

    0

    (Type + 2) * 2

    10

    Type % $otherVariable

    1
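
Two of the documented constraints can be sketched in Python; the function names are invented for illustration:

```python
def divide(a, b):
    # Division by zero returns zero instead of raising an error,
    # as documented.
    return a / b if b != 0 else 0

def format_result(value):
    # Integers resulting from operations are returned without a dot,
    # so 2 * 2.5 is presented as 5 rather than 5.0.
    return int(value) if float(value).is_integer() else value

divide(2, 0)            # 0
format_result(2 * 2.5)  # 5
```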

"round" function

Rounds numbers. Supported for integer and float fields of the extended event schema.

Available arguments:

  • Numeric event fields
  • Numeric variables
  • Numeric constants

    Usage examples

    (DeviceCustomFloatingPoint1=7.75; DeviceCustomFloatingPoint2=7.5; otherVariable=7.2)

    Usage result

    round(DeviceCustomFloatingPoint1)

    8

    round(DeviceCustomFloatingPoint2)

    8

    round($otherVariable)

    7
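
The examples suggest that halves are rounded away from zero (7.5 becomes 8), unlike Python's built-in round(), which rounds halves to even. A sketch under that assumption:

```python
import math

def kuma_round(x: float) -> int:
    """Round to the nearest integer, halves away from zero (assumed from the examples)."""
    if x >= 0:
        return int(math.floor(x + 0.5))
    return int(math.ceil(x - 0.5))

print(kuma_round(7.5))   # 8, whereas Python's round(7.5) relies on half-to-even
```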

"ceil" function

Rounds up numbers. Supported for integer and float fields of the extended event schema.

Available arguments:

  • Numeric event fields
  • Numeric variables
  • Numeric constants

    Usage examples

    (DeviceCustomFloatingPoint1=7.15; otherVariable=8.2)

    Usage result

    ceil(DeviceCustomFloatingPoint1)

    8

    ceil($otherVariable)

    9

"floor" function

Rounds down numbers. Supported for integer and float fields of the extended event schema.

Available arguments:

  • Numeric event fields
  • Numeric variables
  • Numeric constants

    Usage examples

    (DeviceCustomFloatingPoint1=7.15; otherVariable=8.2)

    Usage result

    floor(DeviceCustomFloatingPoint1)

    7

    floor($otherVariable)

    8

"abs" function

Gets the modulus of a number. Supported for integer and float fields of the extended event schema.

Available arguments:

  • Numeric event fields
  • Numeric variables
  • Numeric constants

    Usage examples

    (DeviceCustomNumber1=-7; otherVariable=-2)

    Usage result

    abs(DeviceCustomNumber1)

    7

    abs($otherVariable)

    2

"pow" function

Exponentiates a number. Supported for integer and float fields of the extended event schema.

The parameters must be specified in the following sequence:

  1. Base — real numbers.
  2. Power — natural numbers.

Available arguments:

  • Numeric event fields
  • Numeric variables
  • Numeric constants

    Usage examples

    pow(DeviceCustomNumber1, DeviceCustomNumber2)

    pow($otherVariable, DeviceCustomNumber1)

"str_join" function

Joins multiple strings into one using a separator. Supported for integer and float fields of the extended event schema.

The parameters must be specified in the following sequence:

  1. Separator. String.
  2. String1, string2, stringN. At least 2 expressions.

    Usage examples

    Usage result

    str_join('|', to_lower(Name), to_upper(Name), Name)

    String.
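
Assuming Name = "Admin" (a hypothetical value), the example above behaves like Python's str.join:

```python
def str_join(sep: str, *parts) -> str:
    """Join the given expressions into one string using a separator."""
    return sep.join(str(p) for p in parts)

# mirrors str_join('|', to_lower(Name), to_upper(Name), Name)
name = "Admin"
print(str_join("|", name.lower(), name.upper(), name))   # admin|ADMIN|Admin
```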

"conditional" function

Gets one value if a condition is met and another value if the condition is not met. Supported for integer and float fields of the extended event schema.

The parameters must be specified in the following sequence:

  1. Condition. String. The syntax is similar to the conditions of the Where statement in SQL. You can use the functions of the KUMA variables and references to other variables in a condition.
  2. The value if the condition is met. Expression.
  3. The value if the condition is not met. Expression.

Supported operators:

  • AND
  • OR
  • NOT
  • =
  • !=
  • <
  • <=
  • >
  • >=
  • LIKE (RE2 regular expression is used, rather than an SQL expression)
  • ILIKE (RE2 regular expression is used, rather than an SQL expression)
  • BETWEEN
  • IN
  • IS NULL (check for an empty value, such as 0 or an empty string)

    Usage examples (the value depends on arguments 2 and 3)

    conditional('SourceUserName = \\'root\\' AND DestinationUserName = SourceUserName', 'match', 'no match')

    conditional(`DestinationUserName ILIKE 'svc_.*'`, 'match', 'no match')

    conditional(`DestinationUserName NOT LIKE 'svc_.*'`, 'match', 'no match')
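
The behavior of the examples can be emulated in Python. Here the condition string is replaced with an equivalent Python expression, and ILIKE is modeled as a case-insensitive RE2-style regular expression match; the event values are hypothetical.

```python
import re

def conditional(cond: bool, if_true, if_false):
    """Return one value if the condition holds, another otherwise."""
    return if_true if cond else if_false

# hypothetical event values
event = {"SourceUserName": "root", "DestinationUserName": "root"}

# mirrors: SourceUserName = 'root' AND DestinationUserName = SourceUserName
print(conditional(
    event["SourceUserName"] == "root"
    and event["DestinationUserName"] == event["SourceUserName"],
    "match", "no match"))                       # match

# mirrors: DestinationUserName ILIKE 'svc_.*' (regular expression, not SQL wildcards)
print(conditional(
    re.fullmatch(r"svc_.*", "SVC_backup", re.IGNORECASE) is not None,
    "match", "no match"))                       # match
```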

Operations for extended event schema fields

For extended event schema fields of the "string" type, the following kinds of operations are supported:

  • "len" function
  • "to_lower" function
  • "to_upper" function
  • "append" function
  • "prepend" function
  • "substring" function
  • "tr" function
  • "replace" function
  • "regexp_replace" function
  • "regexp_capture" function

For extended event schema fields of the integer or float type, the following kinds of operations are supported:

  • Basic mathematical operations:
  • "round" function
  • "ceil" function
  • "floor" function
  • "abs" function
  • "pow" function
  • "str_join" function
  • "conditional" function

For extended event schema fields of the "array of integers", "array of floats", and "array of strings" types, KUMA supports the following functions:

  • Get the i-th element of the array. Example: item(<type>.someStringArray).
  • Get an array of values. Example: <type>.someStringArray. Returns ["string1", "string2", "string3"].
  • Get the count of elements in an array. Example: len(<type>.someStringArray). Returns 3.
  • Get unique elements from an array. Example: distinct_items(<type>.someStringArray).
  • Generate a TSV string of array elements. Example: to_string(<type>.someStringArray).
  • Sort the elements of the array. Example: sort_items(<type>.someStringArray).

    In the examples, instead of <type>, you must specify the array type: NA for an array of integers, FA for an array of floats, SA for an array of strings.

For fields of the "array of integers" and "array of floats" types, the following functions are supported:

  • math_min—returns the minimum element of an array. Example: math_min(NA.NumberArray), math_min(FA.FloatArray).
  • math_max—returns the maximum element of an array. Example: math_max(NA.NumberArray), math_max(FA.FloatArray).
  • math_avg—returns the average value of an array. Example: math_avg(NA.NumberArray), math_avg(FA.FloatArray).
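
For a hypothetical array SA.someStringArray = ["string2", "string1", "string2"], the array functions map to Python operations roughly as follows; the element order that distinct_items returns in KUMA is an assumption.

```python
# hypothetical value of an extended event schema array field
sa = ["string2", "string1", "string2"]

def item(arr, i):
    """i-th element of the array."""
    return arr[i]

print(item(sa, 0))                  # string2
print(len(sa))                      # 3: count of elements
print(sorted(set(sa)))              # distinct_items (order assumed)
print("\t".join(sa))                # to_string: TSV string of the elements
print(sorted(sa))                   # sort_items

na = [3, 1, 2]                      # hypothetical NA.NumberArray
print(min(na), max(na))             # math_min, math_max
print(sum(na) / len(na))            # math_avg
```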

Page top
[Topic 234740]

Declaring variables

Expand all | Collapse all

To declare variables, add them to a correlator or a correlation rule.

To add a global variable to an existing correlator:

  1. In the KUMA Console, under Resources → Correlators, select the resource set of the relevant correlator.

    The Correlator Installation Wizard opens.

  2. Select the Global variables step of the Installation Wizard.
  3. Click the Add variable button and specify the following parameters:
    • In the Variable window, enter the name of the variable.

      Variable naming requirements

      • Must be unique within the correlator.
      • Must contain 1 to 128 Unicode characters.
      • Must not begin with the character $.
      • Must be written in camelCase or CamelCase.
    • In the Value window, enter the variable function.

      When entering functions, you can use autocomplete as a list of hints with possible function names, their brief description and usage examples. You can select a function from the list and insert it together with its list of arguments into the input field.

      To display the list of all hints in the field, press Ctrl+Space. Press Enter to select a function from the list. Press Tab to go to the next argument in the list of arguments of the selected function.

      Description of variable functions.

    Multiple variables can be added. Added variables can be edited or deleted by using the cross icon.

  4. Select the Setup validation step of the Installation Wizard and click Save.

A global variable is added to the correlator. It can be queried like an event field by inserting the $ character in front of the variable name. The variable will be used for correlation after restarting the correlator service.
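
The naming requirements above can be sketched as a validation helper. Interpreting "camelCase or CamelCase" as letters and digits starting with a letter is an assumption, and the sketch checks ASCII only, whereas KUMA allows Unicode characters.

```python
import re

def is_valid_variable_name(name: str, existing: set) -> bool:
    """Check a variable name against the documented requirements (assumed pattern)."""
    return (
        1 <= len(name) <= 128                              # 1 to 128 characters
        and not name.startswith("$")                        # must not begin with $
        and re.fullmatch(r"[A-Za-z][A-Za-z0-9]*", name) is not None  # camelCase/CamelCase (assumed)
        and name not in existing                            # unique within the correlator
    )

print(is_valid_variable_name("otherVariable", set()))   # True
print(is_valid_variable_name("$bad", set()))            # False
```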

To add a local variable to an existing correlation rule:

  1. In the KUMA Console, under Resources → Correlation rules, select the relevant correlation rule.

    The correlation rule settings window opens. The parameters of a correlation rule can also be opened from the correlator to which it was added by proceeding to the Correlation step of the Installation Wizard.

  2. Click the Selectors tab.
  3. In the selector, open the Local variables tab, click the Add variable button and specify the following parameters:
    • In the Variable window, enter the name of the variable.

      Variable naming requirements

      • Must be unique within the correlator.
      • Must contain 1 to 128 Unicode characters.
      • Must not begin with the character $.
      • Must be written in camelCase or CamelCase.
    • In the Value window, enter the variable function.

      When entering functions, you can use autocomplete as a list of hints with possible function names, their brief description and usage examples. You can select a function from the list and insert it together with its list of arguments into the input field.

      To display the list of all hints in the field, press Ctrl+Space. Press Enter to select a function from the list. Press Tab to go to the next argument in the list of arguments of the selected function.

      Description of variable functions.

    Multiple variables can be added. Added variables can be edited or deleted by using the cross icon.

    For standard correlation rules, repeat this step for each selector in which you want to declare variables.

  4. Click Save.

The local variable is added to the correlation rule. It can be queried like an event field by inserting the $ character in front of the variable name. The variable will be used for correlation after restarting the correlator service.

Added variables can be edited or deleted. If the correlation rule queries an undeclared variable (for example, if its name has been changed), an empty string is returned.

If you change the name of a variable, you will need to manually change the name of this variable in all correlation rules where you have used it.

Page top
[Topic 234738]

Adding a temporary exclusion list for a correlation rule

Users that do not have the rights to edit correlation rules in the KUMA Console can create a temporary list of exclusions (for example, exclusions for false positives when managing alerts). A user with the rights to edit correlation rules can then add the exclusions to the rule and remove them from the temporary list.

To add exclusions to a correlation rule when managing alerts:

  1. Go to the Alerts section and select an alert.
  2. Click the Find in events button.

    Events of the alert are displayed on the events page.

  3. Open the correlation event.

    This opens the event card, in which each field has an add to exclusions (arrow) button that lets you add an exclusion.

  4. Click the add to exclusions (arrow) button and select Add to exclusions.

    A sidebar is displayed, containing the following fields: Correlation rule, Exclusion, Alert, Comment.

  5. Click the Create button.

The exclusion is added to the temporary list. This list is available to anyone with the rights to read correlation rules: in the Resources → Correlation rules section, in the toolbar of the rule list, click the List of exclusions button. If you want to view the exclusions of a specific rule, open the card of the rule and select the Exclusions tab.

The exclusion list contains entries with the following parameters:

  • Exclusion

    Exclusion condition.

  • Correlation rule

    Name of the correlation rule.

  • Alert

    Name of the alert from which the exclusion was added.

  • Tenant

    The tenant to which the rule and the exclusion apply.

  • Condition

    Generated automatically based on the selected field of the correlation event.

  • Creation date

    Date and time when the exclusion was added.

  • Expires

Date and time when the exclusion will be automatically removed from the list.

  • Created

    Name of the user that added the exclusion.

  • Comment

After the exclusion is added, by default, the correlation rule takes the exclusion into account for 7 days. In the Options → General section, you can configure the duration of exclusions by editing the corr_rule_exclusion_ttl_hours parameter in the Core properties section. You can configure the lifetime of exclusions in hours and days. The minimum value is 1 hour, the maximum is 365 days. This setting is available only for users with the General administrator role.
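
The lifetime arithmetic can be sketched as follows; clamping out-of-range values to the documented bounds is an assumption about how KUMA handles them.

```python
from datetime import datetime, timedelta

def exclusion_expiry(created: datetime, ttl_hours: int) -> datetime:
    """Expiry time of an exclusion given corr_rule_exclusion_ttl_hours."""
    # documented bounds: minimum 1 hour, maximum 365 days (clamping assumed)
    ttl_hours = max(1, min(ttl_hours, 365 * 24))
    return created + timedelta(hours=ttl_hours)

created = datetime(2024, 1, 1, 12, 0)
print(exclusion_expiry(created, 7 * 24))   # default lifetime of 7 days
```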

For fields from base events to be propagated to correlation events, these fields must be specified in the card of the correlation rule on the General tab, in the Propagated fields field. If the fields of base events are not mapped to the correlation event, these fields cannot be added to exclusions.

To remove exclusions from a correlation rule:

  1. Go to the Resources → Correlation rules section.
  2. In the toolbar of the rule list, click the List of exclusions button.

    This opens the window with the list of exclusions.

  3. Select the exclusions that you want to delete and click the Delete button.

Exclusions are deleted from the correlation rule.

KUMA generates an audit event whenever an exclusion is created or deleted. You can view the changes of event settings in the Event details window.

Page top
[Topic 294859]

Predefined correlation rules

The KUMA distribution kit includes correlation rules listed in the table below.

Predefined correlation rules

Correlation rule name

Description

[OOTB] KATA alert

Used for enriching KATA events.

[OOTB] Successful Bruteforce

Triggers when a successful authentication attempt is detected after multiple unsuccessful authentication attempts. This rule works based on the events of the sshd daemon.

[OOTB][AD] Account created and deleted within a short period of time

Detects instances of creation and subsequent deletion of accounts on Microsoft Windows hosts.

[OOTB][AD] An account failed to log on from different hosts

Detects multiple unsuccessful attempts to authenticate on different hosts.

[OOTB][AD] Membership of sensitive group was modified

Works based on Microsoft Windows events.

[OOTB][AD] Multiple accounts failed to log on from the same host

Triggers after multiple failed authentication attempts are detected on the same host from different accounts.

[OOTB][AD] Successful authentication with the same account on multiple hosts

Detects connections to different hosts under the same account. This rule works based on Microsoft Windows events.

[OOTB][AD] The account added and deleted from the group in a short period of time

Detects the addition of a user to a group and subsequent removal. This rule works based on Microsoft Windows events.

[OOTB][Net] Possible port scan

Detects suspected port scans. This rule works based on NetFlow and IPFIX events.

Page top
[Topic 250832]

MITRE ATT&CK matrix coverage

If you want to assess the coverage of the MITRE ATT&CK matrix by your correlation rules:

  1. Download the list of MITRE techniques from the official MITRE ATT&CK repository and import it into KUMA.
  2. Map MITRE techniques to correlation rules.
  3. Export correlation rules to MITRE ATT&CK Navigator.

As a result, you can visually assess the coverage of the MITRE ATT&CK matrix.

Importing the list of MITRE techniques

Only a user with the General Administrator role can import the list of MITRE techniques.

To import the list of MITRE ATT&CK techniques:

  1. Download the list of MITRE ATT&CK techniques from the GitHub portal.

    KUMA 3.2 supports only the MITRE ATT&CK technique list version 14.1.

  2. In the KUMA Console, go to the Settings → Other section.
  3. In the MITRE technique list settings, click Import from file.

    This opens the file selection window.

  4. Select the downloaded MITRE ATT&CK technique list and click Open.

    This closes the file selection window.

The list of MITRE ATT&CK techniques is imported into KUMA. You can see the list of imported techniques and the version of the MITRE ATT&CK technique list by clicking View list.

Mapping MITRE techniques to correlation rules

To map MITRE ATT&CK techniques to correlation rules:

  1. In the KUMA Console, go to the Resources → Correlation rules section.
  2. Click the name of the correlation rule.

    This opens the correlation rule editing window.

  3. On the General tab, click the MITRE techniques field to open the list of available techniques. For convenient searching, a filter is provided in which you can enter the name of a technique or the ID of a technique or tactic. You can link one or more MITRE ATT&CK techniques to a correlation rule.
  4. Click the Save button.

The MITRE ATT&CK techniques are mapped to the correlation rule. In the web interface, in the Resources → Correlation rules section, the MITRE techniques column of the edited rule displays the ID of the selected technique; when you hover over it, the full name of the technique is displayed, including the IDs of the technique and tactic.

Exporting correlation rules to MITRE ATT&CK Navigator

To export correlation rules with mapped MITRE techniques to MITRE ATT&CK Navigator:

  1. In the KUMA Console, go to the Resources → Correlation rules section.
  2. Click the more button in the upper-right corner.
  3. In the drop-down list, click Export to MITRE ATT&CK Navigator.
  4. In the opened window, select the correlation rules that you want to export.
  5. Click OK.

    A file with exported rules is downloaded to your computer.

  6. Upload the file from your computer to MITRE ATT&CK Navigator to assess the coverage of the MITRE ATT&CK matrix.

You can assess the coverage of the MITRE ATT&CK matrix.

Page top
[Topic 272743]

Filters

Expand all | Collapse all

Filters let you select events based on specified conditions. The collector service uses filters to select events that you want to send to KUMA. Events that satisfy the filter conditions are sent to KUMA for further processing.

You can use filters in the following KUMA services and features:

You can use standalone filters or built-in filters that are stored in the service or resource in which they were created. For resources, you can enable the display of control characters in all input fields except the Description field. Available filter settings are listed in the table below.

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Inline filters are created in other resources or services and do not have names.

Tenant

The name of the tenant that owns the resource.

Required setting.

Tags

Tags for resource search.

Optional setting.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

You can create filter conditions and filter groups, or add existing filters to a filter.

To create filtering criteria, you can use builder mode or source code mode. In builder mode, you can create or edit filter criteria by selecting filter conditions and operators from drop-down lists. In source code mode, you can use text commands to create and edit search queries. The builder mode is used by default.

You can freely switch between modes when creating filtering criteria. To switch to source code mode, select the Code tab. When switching between modes, the created condition filters are preserved. If the filter code is not displayed on the Code tab after linking the created filter to the resource, go to the Builder tab and then go back to the Code tab to display the filter code.

Creating filtering criteria in builder mode

To create filtering criteria in builder mode, you need to select one of the following operators from the drop-down list:

  • AND: The filter selects events that match all of the specified conditions.
  • OR: The filter selects events that match one of the specified conditions.
  • NOT: The filter selects events that match none of the specified conditions.

You can add filtering criteria in one of the following ways:

  • To add a condition, click the + Add condition button.
  • To add a group of conditions, click the + Add group button. When adding groups of conditions, you can also select the AND, OR, and NOT operators. In turn, you can add conditions and condition groups to a condition group.

You can add multiple filtering criteria, reorder them, or remove them. To reorder filtering criteria, use the reorder (drag) icons. To remove a filtering criterion, click the delete (cross) icon next to it.

Available condition settings are listed in the table below.

Setting

Description

<Condition type>

Condition type. The default is If. You can click the default value and select If not from the displayed drop-down list.

Required setting.

<Left operand> and <Right operand>

Values to be processed by the operator. The available types of values of the right operand depend on the selected operator.

Operands of filters

  • In the Event fields section, you can specify the event field that you want to use as a filter operand.
  • In the Active lists section, you can specify an active list or field of an active list that you want to use as an operand of the filter. When selecting an active list, you must specify one or more event fields that are used to create an active list entry and act as the key of the active list entry. To finish specifying event fields, press Ctrl/Command+F1.

    If you have not specified the inActiveList operator, you need to specify the name of the active list field that you want to use as a filter operand.

  • In the Context tables section, you can specify the value of the context table that you want to use as the filter operand. When selecting a context table, you must specify an event field:
    • context table name (required) is the context table that you want to use.
    • key fields (required) are event fields or local variables that are used to create a context table record and serve as the key for the context table record.
    • field is the name of the context table field from which you want to get the value of the operand.
    • index is the index of the list field of the table from which you want to get the value of the operand.
  • Dictionary is a value from the dictionary resource that you want to assign to the operand. Advanced settings:
    • dictionary (required) is the dictionary that you want to use.
    • key fields (required) are event fields that are used to generate the key of the dictionary value.
  • Constant is a user-defined value that you want to assign to the operand. Advanced settings:
    • value (required) is the constant that you want to assign to the operand.
  • Table specifies user-defined values that you want to assign to the operand. Advanced settings:
    • dictionary (required) is the type of the dictionary. You need to select the Table dictionary type.
    • key fields (required) are event fields that are used to generate the key of the dictionary value.
  • List specifies user-defined values that you want to assign to the operand. Advanced settings:
    • value (required) are the constants that you want to assign to the operand. When you type the value in the field and press ENTER, the value is added to the list and you can enter a new value.
  • TI specifies the settings for reading CyberTrace threat intelligence (TI) data from the events. Advanced settings:
    • stream (required) is the CyberTrace threat category.
    • key fields (required) is the event field with CyberTrace threat indicators.
    • field (required) is the field of the CyberTrace feed with threat indicators.

Required settings.

<Operator>

Condition operator. When selecting a condition operator in the drop-down list, you can select the do not match case check box if you want the operator to ignore the case of values. This check box is ignored if the inSubnet, inActiveList, inCategory, inActiveDirectoryGroup, hasBit, and inDictionary operators are selected. By default, this check box is cleared.

Filter operators

  • =—the left operand equals the right operand.
  • <—the left operand is less than the right operand.
  • <=—the left operand is less than or equal to the right operand.
  • >—the left operand is greater than the right operand.
  • >=—the left operand is greater than or equal to the right operand.
  • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
  • contains—the left operand contains values of the right operand.
  • startsWith—the left operand starts with one of the values of the right operand.
  • endsWith—the left operand ends with one of the values of the right operand.
  • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
  • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

    The value to be checked is converted to binary and processed right to left. The bits whose positions are specified in the constant or list are checked.

    If the value being checked is a string, an attempt is made to convert it to an integer and process it as described above. If the string cannot be converted to a number, the filter returns False.

  • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

    If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

  • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
  • inDictionary—checks whether the specified dictionary contains an entry defined by the key composed with the concatenated values of the selected event fields.
  • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
  • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
  • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
  • inContextTable—presence of the entry in the specified context table.
  • intersect—presence in the left operand of the list items specified in the right operand.
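
The hasBit semantics described above can be sketched in Python. This assumes that all listed bit positions must be set; whether KUMA requires all or any of them is an assumption.

```python
def has_bit(value, positions) -> bool:
    """Check bit positions of a value, counting from the least significant bit."""
    try:
        n = int(value)            # strings are converted to an integer if possible
    except (TypeError, ValueError):
        return False              # non-numeric strings fail the filter
    # the value is processed right to left: position 0 is the lowest bit
    return all(n >> p & 1 for p in positions)

print(has_bit(5, [0, 2]))     # True: 5 = 0b101, bits 0 and 2 are set
print(has_bit("abc", [0]))    # False: cannot be converted to a number
```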

You can change or delete the specified operator. To change the operator, click it and specify a new operator. To delete the operator, click it, then press Backspace.

The available operand kinds depend on whether the operand is left (L) or right (R).

Available operand kinds for left (L) and right (R) operands

In the table, "no value" means that the operand kind is not available for the operator. An asterisk (*) means "only when looking up a table value by index".

Operator | Event field type | Active list type | Dictionary type | Context table type | Table type | TI type | Constant type | List type
= | L,R | L,R | L,R | L,R | L,R | L,R | R | R
> | L,R | L,R | L,R | L,R* | L,R | L | R | no value
>= | L,R | L,R | L,R | L,R* | L,R | L | R | no value
< | L,R | L,R | L,R | L,R* | L,R | L | R | no value
<= | L,R | L,R | L,R | L,R* | L,R | L | R | no value
inSubnet | L,R | L,R | L,R | L,R | L,R | L,R | R | R
contains | L,R | L,R | L,R | L,R | L,R | L,R | R | R
startsWith | L,R | L,R | L,R | L,R | L,R | L,R | R | R
endsWith | L,R | L,R | L,R | L,R | L,R | L,R | R | R
match | L | L | L | L | L | L | R | R
hasVulnerability | L | L | L | L | L | no value | no value | no value
hasBit | L | L | L | L | L | no value | R | R
inActiveList | no value | no value | no value | no value | no value | no value | no value | no value
inDictionary | no value | no value | no value | no value | no value | no value | no value | no value
inCategory | L | L | L | L | L | no value | R | R
inContextTable | no value | no value | no value | no value | no value | no value | no value | no value
inActiveDirectoryGroup | L | L | L | L | L | no value | R | R
TIDetect | no value | no value | no value | no value | no value | no value | no value | no value

You can use hotkeys when managing filters. Hotkeys are described in the table below.

Hotkeys and their functions

Key

Function

e

Invokes a filter by the event field

d

Invokes a filter by the dictionary field

a

Invokes a filter by the active list field

c

Invokes a filter by the context table field

t

Invokes a filter by the table field

f

Invokes a filter

t+i

Invokes a filter using TI

Ctrl+Enter

Finish editing a condition

The usage of extended event schema fields of the "String", "Number", or "Float" types is the same as the usage of fields of the KUMA event schema.

When using filters with extended event schema fields of the "Array of strings", "Array of numbers", and "Array of floats" types, you can use the following operations:

  • The contains operation returns True if the specified substring is present in the array, otherwise it returns False.
  • The match operation matches the string against a regular expression.
  • The intersect operation checks for the presence in the array of the items specified in the right operand.
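
For a hypothetical array field, the three operations behave roughly as follows in a Python sketch:

```python
import re

# hypothetical value of an "Array of strings" extended event schema field
values = ["10.0.0.1", "10.0.0.2"]

# contains: True if the specified value is present in the array
print("10.0.0.2" in values)                                    # True

# match: at least one element matches the regular expression
print(any(re.fullmatch(r"10\.0\.0\.\d+", v) for v in values))  # True

# intersect: the array shares at least one item with the specified list
print(bool(set(values) & {"10.0.0.2", "192.168.0.1"}))         # True
```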

Creating filtering criteria in source code mode

The source code mode allows you to quickly edit conditions and to select and copy blocks of code. In the right part of the builder, you can find the navigator, which lets you navigate the filter code. Line wrapping is performed automatically at the AND, OR, and NOT logical operators, or at commas that separate items in a list of values.

Names of resources used in the filter are automatically specified. Fields containing the names of linked resources cannot be edited. The names of shared resource categories are not displayed in the filter if you do not have the "Access to shared resources" role. To view the list of resources for the selected operand inside the expression, press Ctrl+Space. This displays a list of resources.

The filters listed in the table below are included in the KUMA kit.

Predefined filters

Filter name

Description

[OOTB][AD] A member was added to a security-enabled global group (4728)

Selects events of adding a user to an Active Directory security-enabled global group.

[OOTB][AD] A member was added to a security-enabled universal group (4756)

Selects events of adding a user to an Active Directory security-enabled universal group.

[OOTB][AD] A member was removed from a security-enabled global group (4729)

Selects events of removing a user from an Active Directory security-enabled global group.

[OOTB][AD] A member was removed from a security-enabled universal group (4757)

Selects events of removing a user from an Active Directory security-enabled universal group.

[OOTB][AD] Account Created

Selects Windows user account creation events.

[OOTB][AD] Account Deleted

Selects Windows user account deletion events.

[OOTB][AD] An account failed to log on (4625)

Selects Windows logon failure events.

[OOTB][AD] Successful Kerberos authentication (4624, 4768, 4769, 4770)

Selects successful Windows logon events and events with IDs 4769, 4770 that are logged on domain controllers.

[OOTB][AD][Technical] 4768. TGT Requested

Selects Microsoft Windows events with ID 4768.

[OOTB][Net] Possible port scan

Selects events that may indicate a port scan.

[OOTB][SSH] Accepted Password

Selects events of successful SSH connections with a password.

[OOTB][SSH] Failed Password

Selects attempts to connect over SSH with a password.

Page top
[Topic 217880]

Active lists

An active list is a container for data that KUMA correlators use when analyzing events according to correlation rules.

For example, for a list of IP addresses with a bad reputation, you can:

  1. Create a correlation rule of the operational type and add these IP addresses to the active list.
  2. Create a correlation rule of the standard type and specify the active list as filtering criteria.
  3. Create a correlator with this rule.

    In this case, KUMA selects all events that contain the IP addresses in the active list and creates a correlation event.

You can fill active lists automatically using correlation rules of the simple type or import a file that contains data for the active list.

You can add, copy, or delete active lists.

Active lists can be used in the following KUMA services and features:

The same active list can be used by different correlators. However, a separate entity of the active list is created for each correlator. Therefore, the contents of the active lists used by different correlators differ even if the active lists have the same names and IDs.

Only data based on the correlation rules of the correlator is added to the active list.

You can add, edit, duplicate, delete, and export records in correlator active lists.

During the correlation process, when entries are deleted from active lists after their lifetime expires, service events are generated in the correlators. These events only exist in the correlators, and they are not redirected to other destinations. Correlation rules can be configured to track these events so that they can be processed and used to identify threats. Service event fields for deleting an entry from the active list are described below.

Event field

Value or comment

ID

Event identifier

Timestamp

Time when the expired entry was deleted

Name

"active list record expired"

DeviceVendor

"Kaspersky"

DeviceProduct

"KUMA"

ServiceID

Correlator ID

ServiceName

Correlator name

DeviceExternalID

Active list ID

DevicePayloadID

Key of the expired entry

BaseEventCount

The number of times the deleted entry was updated, increased by one

S.<active list field>

Dropped-out entry of the active list in the following format:

S.<active list field> = <value of active list field>
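Based on the field descriptions above, an expiration service event can be modeled as in the following sketch. The field values here are illustrative only, not taken from a real correlator:

```python
# Illustrative sketch of the service event generated when an active list
# record expires. Field names follow the table above; values are made up.
expired_record_event = {
    "ID": "8d9e2f10-0000-0000-0000-000000000001",  # event identifier (illustrative)
    "Timestamp": "2024-01-01T00:00:00Z",           # time the expired entry was deleted
    "Name": "active list record expired",
    "DeviceVendor": "Kaspersky",
    "DeviceProduct": "KUMA",
    "ServiceID": "correlator-id",                  # ID of the correlator
    "ServiceName": "My correlator",                # name of the correlator
    "DeviceExternalID": "active-list-id",          # ID of the active list
    "DevicePayloadID": "10.0.0.5",                 # key of the expired entry
    "BaseEventCount": 3,                           # updates of the deleted entry + 1
    "S.ip": "10.0.0.5",                            # S.<active list field> = <value>
}

# A correlation rule that tracks expirations would typically match on the
# fixed Name / DeviceVendor / DeviceProduct values:
is_expiration = (
    expired_record_event["Name"] == "active list record expired"
    and expired_record_event["DeviceVendor"] == "Kaspersky"
    and expired_record_event["DeviceProduct"] == "KUMA"
)
print(is_expiration)
```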

Page top
[Topic 217707]

Viewing the table of active lists

To view the table of correlator active lists:

  1. In the KUMA Console, select the Resources section.
  2. In the Services section, click the Active services button.
  3. Select the check box next to the correlator for which you want to view the active list.
  4. Click the Go to active lists button.

The Correlator active lists table is displayed.

The table contains the following data:

  • Name—the name of the correlator list.
  • Records—the number of records the active list contains.
  • Size on disk—the size of the active list.
  • Directory—the path to the active list on the KUMA Core server.
Page top
[Topic 239552]

Adding an active list

To add an active list:

  1. In the KUMA Console, select the Resources section.
  2. In the Resources section, click the Active lists button.
  3. Click the Add active list button.
  4. Do the following:
    1. In the Name field, enter a name for the active list.
    2. In the Tenant drop-down list, select the tenant that owns the resource.
    3. In the TTL field, specify how long a record added to the active list is stored in it.

      When the specified time expires, the record is deleted. The time is specified in seconds.

      The default value is 0. If the value of the field is 0, the record is retained for 36,000 days (roughly 100 years).

    4. In the Description field, provide any additional information.

      You can use up to 4,000 Unicode characters.

      This field is optional.

  5. Click the Save button.

The active list is added.

Page top
[Topic 239532]

Viewing the settings of an active list

To view the settings of an active list:

  1. In the KUMA Console, select the Resources section.
  2. In the Resources section, click the Active lists button.
  3. In the Name column, select the active list whose settings you want to view.

This opens the active list settings window. It displays the following information:

  • ID—the identifier of the selected active list.
  • Name—unique name of the resource.
  • Tenant—the name of the tenant that owns the resource.
  • TTL—how long a record added to the active list is stored in it. This value is specified in seconds.
  • Description—any additional information about the resource.
Page top
[Topic 239553]

Changing the settings of an active list

To change the settings of an active list:

  1. In the KUMA Console, select the Resources section.
  2. In the Resources section, click the Active lists button.
  3. In the Name column, select the active list whose settings you want to change.
  4. Specify the values of the following parameters:
    • Name—unique name of the resource.
    • TTL—how long a record added to the active list is stored in it. This value is specified in seconds.

      If the field is set to 0, the record is stored indefinitely.

    • Description—any additional information about the resource.

    The ID and Tenant fields are not editable.

Page top
[Topic 239557]

Duplicating the settings of an active list

To copy an active list:

  1. In the KUMA Console, select the Resources section.
  2. In the Resources section, click the Active lists button.
  3. Select the check box next to the active lists you want to copy.
  4. Click Duplicate.
  5. Specify the necessary settings.
  6. Click the Save button.

The active list is copied.

Page top
[Topic 239786]

Deleting an active list

To delete an active list:

  1. In the KUMA Console, select the Resources section.
  2. In the Resources section, click the Active lists button.
  3. Select the check boxes next to the active lists you want to delete.

    To delete all lists, select the check box next to the Name column.

    At least one check box must be selected.

  4. Click the Delete button.
  5. Click OK.

The active lists are deleted.

Page top
[Topic 239785]

Viewing records in the active list

To view the records in the active list:

  1. In the KUMA Console, select the Resources section.
  2. In the Services section, click the Active services button.
  3. Select the check box next to the correlator for which you want to view the active list.
  4. Click the Go to active lists button.

    The Correlator active lists table is displayed.

  5. In the Name column, select the desired active list.

A table of records for the selected list is opened.

The table contains the following data:

  • Key – the value of the record key.
  • Record repetitions – the total number of times the record was mentioned in events, plus the number of identical records loaded when importing active lists into KUMA.
  • Expiration date – date and time when the record must be deleted.

    If the TTL field had the value of 0 when the active list was created, the records of this active list are retained for 36,000 days (roughly 100 years).

  • Created – the time when the active list was created.
  • Updated – the time when the active list was last updated.
Page top
[Topic 239534]

Searching for records in the active list

To find a record in the active list:

  1. In the KUMA Console, select the Resources section.
  2. In the Services section, click the Active services button.
  3. Select the check box next to the correlator for which you want to view the active list.
  4. Click the Go to active lists button.

    The Correlator active lists table is displayed.

  5. In the Name column, select the desired active list.

    A window with the records for the selected list is opened.

  6. In the Search field, enter the record key value or several characters from the key.

The table of records of the active list displays only the records with the key containing the entered characters.

Page top
[Topic 239644]

Adding a record to an active list

To add a record to the active list:

  1. In the KUMA Console, select the Resources section.
  2. In the Services section, click the Active services button.
  3. Select the check box next to the required correlator.
  4. Click the Go to active lists button.

    The Correlator active lists table is displayed.

  5. In the Name column, select the desired active list.

    A window with the records for the selected list is opened.

  6. Click Add.

    The Create record window opens.

  7. Specify the values of the following parameters:
    1. In the Key field, enter the name of the record.

      You can specify several values separated by the "|" character.

      The Key field cannot be empty. If the field is not filled in, KUMA returns an error when trying to save the changes.

    2. In the Value field, specify the values for fields in the Field column.

      KUMA takes field names from the correlation rules with which the active list is associated. These names are not editable. You can delete these fields if necessary.

    3. Click the Add new element button to add more values.
    4. In the Field column, specify the field name.

      The name must meet the following requirements:

      • Must be unique.
      • Must not contain tab characters.
      • Must not contain special characters, except for the underscore character.
      • Must contain no more than 128 characters.

        The name must not begin with an underscore or consist only of numbers.

    5. In the Value column, specify the value for this field.

      It must meet the following requirements:

      • Must not contain tab characters.
      • Must not contain special characters, except for the underscore character.
      • Must contain no more than 1024 characters.

      This field is optional.

  8. Click the Save button.

The record is added. After saving, the records in the active list are sorted in alphabetical order.

Page top
[Topic 239780]

Duplicating records in the active list

To duplicate a record in the active list:

  1. In the KUMA Console, select the Resources section.
  2. In the Services section, click the Active services button.
  3. Select the check box next to the correlator for which you want to view the active list.
  4. Click the Go to active lists button.

    The Correlator active lists table is displayed.

  5. In the Name column, select the desired active list.

    A window with the records for the selected list is opened.

  6. Select the check boxes next to the record you want to copy.
  7. Click Duplicate.
  8. Specify the necessary settings.

    The Key field cannot be empty. If the field is not filled in, KUMA returns an error when trying to save the changes.

    Field names in the Field column cannot be edited for records that were previously added to the active list. You can change the names only for records added during the current editing session. The name must not begin with an underscore or consist only of numbers.

  9. Click the Save button.

The record is copied. After saving, the records in the active list are sorted in alphabetical order.

Page top
[Topic 239900]

Changing a record in the active list

To edit a record in the active list:

  1. In the KUMA Console, select the Resources section.
  2. In the Services section, click the Active services button.
  3. Select the check box next to the correlator for which you want to view the active list.
  4. Click the Go to active lists button.

    The Correlator active lists table is displayed.

  5. In the Name column, select the desired active list.

    A window with the records for the selected list is opened.

  6. Click the record name in the Key column.
  7. Specify the required values.
  8. Click the Save button.

The record is overwritten. After saving, the records in the active list are sorted in alphabetical order.

Restrictions when editing a record:

  • The record name is not editable. You can change it by importing the same data with a different name.
  • Field names in the Field column cannot be edited for records that were previously added to the active list. You can change the names only for records added during the current editing session. The name must not begin with an underscore or consist only of numbers.
  • The values in the Value column must meet the following requirements:
    • Do not contain Cyrillic characters.
    • Do not contain spaces or tabs.
    • Do not contain special characters except for the underscore character.
    • The maximum number of characters is 128.
Page top
[Topic 239533]

Deleting records from the active list

To delete records from the active list:

  1. In the KUMA Console, select the Resources section.
  2. In the Services section, click the Active services button.
  3. Select the check box next to the correlator for which you want to view the active list.
  4. Click the Go to active lists button.

    The Correlator active lists table is displayed.

  5. In the Name column, select the desired active list.

    A window with the records for the selected list is opened.

  6. Select the check boxes next to the records you want to delete.

    To delete all records, select the check box next to the Key column.

    At least one check box must be selected.

  7. Click the Delete button.
  8. Click OK.

The records will be deleted.

Page top
[Topic 239645]

Importing data to an active list

To import data to an active list:

  1. In the KUMA Console, select the Resources section.
  2. In the Services section, click the Active services button.
  3. Select the check box next to the correlator for which you want to view the active list.
  4. Click the Go to active lists button.

    The Correlator active lists table is displayed.

  5. Point the mouse over the row with the desired active list.
  6. Click More-DropDown to the left of the active list name.
  7. Select Import.

    The active list import window opens.

  8. In the File field, select the file you want to import.
  9. In the Format drop-down list, select the format of the file:
    • csv
    • tsv
    • internal
  10. In the Key field, enter the name of the column containing the active list record keys.
  11. Click the Import button.

The data from the file is imported into the active list. Records that were already in the list are retained.

Data imported from a file is not checked for invalid characters. If you use such data in widgets, the widgets are displayed incorrectly when the data contains invalid characters.
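As a sketch, a csv-format import file for an active list can be prepared programmatically. The column names used here (`key`, `ip`, `reason`) are hypothetical; in practice, they should match the active list fields your correlation rules use, and the key column name is what you enter in the Key field of the import dialog:

```python
import csv
import io

# Hypothetical records for an active list of IP addresses with a bad
# reputation. The "key" column holds the record keys; the other columns
# become fields of the active list records.
rows = [
    {"key": "10.20.30.40", "ip": "10.20.30.40", "reason": "bad_reputation"},
    {"key": "192.0.2.7", "ip": "192.0.2.7", "reason": "bruteforce_source"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["key", "ip", "reason"])
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()  # write this to a .csv file and import it
print(csv_text)
```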

Page top
[Topic 239642]

Exporting data from the active list

To export an active list:

  1. In the KUMA Console, select the Resources section.
  2. In the Services section, click the Active services button.
  3. Select the check box next to the correlator for which you want to view the active list.
  4. Click the Go to active lists button.

    The Correlator active lists table is displayed.

  5. Point the mouse over the row with the desired active list.
  6. Click More-DropDown to the left of the desired active list.
  7. Click the Export button.

The active list is downloaded in JSON format according to your browser settings. The name of the downloaded file matches the name of the active list.

Page top
[Topic 239643]

Predefined active lists

The active lists listed in the table below are included in the KUMA distribution kit.

Predefined active lists

Active list name

Description

[OOTB][AD] End-users tech support accounts

This active list is used as a filter for the "[OOTB][AD] Successful authentication with same user account on multiple hosts" correlation rule. Accounts of technical support staff may be added to the active list. Records are not deleted from the active list.

[OOTB][AD] List of sensitive groups

This active list is used as a filter for the "[OOTB][AD] Membership of sensitive group was modified" correlation rule. Critical domain groups, whose membership must be monitored, can be added to the active list. Records are not deleted from the active list.

[OOTB][Linux] CompromisedHosts

This active list is populated by the [OOTB] Successful Bruteforce by potentially compromised Linux hosts rule. Records are removed from the list 24 hours after they are recorded.

Page top
[Topic 249358]

Dictionaries

Description of parameters

Dictionaries are resources storing data that can be used by other KUMA resources and services. Dictionaries can be used in the following KUMA services and features:

Available dictionary settings are listed in the table below.

Available dictionary settings

Setting

Description

Name

Unique name for this resource type. Maximum length of the name: 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

Description

Description of the resource. Maximum length of the description: 4000 Unicode characters.

Type

Dictionary type. The selected dictionary type determines the format of the data that the dictionary can contain:

  • You can add key-value pairs to the Dictionary type. We do not recommend adding more than 50,000 entries to dictionaries of this type using the KUMA Console.

    If you add lines with identical keys to the dictionary, each new line overwrites the existing line with the same key, so only one line per key is kept in the dictionary.

  • Data in the form of complex tables can be added to the Table type. You can interact with dictionaries of this type using the REST API. When adding dictionaries using the API, there is no limit on the number of entries that can be added.

Required setting.

Values

Table with dictionary data.

  • For the Dictionary type, this block displays a list of KeyValue pairs. You can add and remove rows from the table. To add a row to the table, click add-button. To remove a row from the table, hover over the table row until the delete-button button appears and click the button.

    In the Key field, you must specify a unique key. Maximum length of the key: 128 Unicode characters. The first character cannot be $.

    In the Value field, you must specify a value. Maximum length of the value: 255 Unicode characters. The first character cannot be $.

    You may add one or more Key – Value pairs.

  • For the Table type, this block displays a table containing data. You can add and remove rows and columns from the table. To add a row or column to the table, click add-button. To remove a row or column from the table, hover over the row or the heading of the column until the delete-button button appears and click the button. You can edit the headings of table columns.

If the dictionary contains more than 5,000 entries, they are not displayed in the KUMA Console. To view the contents of such dictionaries, the contents must be exported in CSV format. If you edit the CSV file and import it back into KUMA, the dictionary is updated.

Importing and exporting dictionaries

You can import or export dictionary data in CSV format (in UTF-8 encoding) by using the Import CSV or Export CSV buttons.

The format of the CSV file depends on the dictionary type:

  • Dictionary type:

    {KEY},{VALUE}\n

  • Table type:

    {Column header 1}, {Column header N}, {Column header N+1}\n

    {Key1}, {ValueN}, {ValueN+1}\n

    {Key2}, {ValueN}, {ValueN+1}\n

    The keys must be unique for both the CSV file and the dictionary. In tables, the keys are specified in the first column. Keys must contain 1 to 128 Unicode characters.

    Values must contain 0 to 256 Unicode characters.

During import, the contents of the dictionary are overwritten by the imported file, and the resource name is changed to match the name of the imported file.

If a key or value contains comma or quotation mark characters (, and "), it is enclosed in quotation marks (") on export. An embedded quotation mark (") is escaped with an additional quotation mark (").

If invalid lines are detected in the imported file (for example, lines with invalid separators), they are ignored when importing into a dictionary, whereas importing into a table is interrupted.
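The quoting rules described above match the common CSV convention of doubling embedded quotation marks, which the Python csv module reproduces. The following sketch (with example keys and values) shows how a Dictionary-type import file can be prepared so that commas and quotation marks survive the round trip:

```python
import csv
import io

# Example key-value pairs, including characters that require quoting.
pairs = [
    ("plain_key", "plain value"),
    ("key,with,commas", 'value with "quotes"'),
]

buf = io.StringIO()
# QUOTE_MINIMAL quotes only fields containing the separator or quote
# character and doubles embedded quotes -- the same escaping described
# for the export above.
writer = csv.writer(buf, quoting=csv.QUOTE_MINIMAL)
writer.writerows(pairs)
print(buf.getvalue())
# The second data line is rendered as:
#   "key,with,commas","value with ""quotes"""
```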

Interacting with dictionaries via API

You can use the REST API to read the contents of Table-type dictionaries. You can also modify them even if these resources are being used by active services. This lets you, for instance, configure enrichment of events with data from dynamically changing tables exported from third-party applications.
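The exact endpoint paths, port, and authentication scheme are defined in the KUMA REST API reference; the URL, query parameter, and token format in the sketch below are assumptions for illustration only, not the documented contract:

```python
import urllib.request

# All of the following values are hypothetical placeholders.
KUMA_API = "https://kuma-core.example.com:7223/api/v1"
DICTIONARY_ID = "00000000-0000-0000-0000-000000000000"
TOKEN = "<api-token>"

def build_dictionary_request(base: str, dictionary_id: str, token: str) -> urllib.request.Request:
    """Build (but do not send) a GET request for a Table-type dictionary.

    The path and query parameter name are assumptions; check the KUMA
    REST API reference for the actual endpoint.
    """
    url = f"{base}/dictionaries?dictionaryID={dictionary_id}"
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})

req = build_dictionary_request(KUMA_API, DICTIONARY_ID, TOKEN)
# Sending the request would look like:
#   with urllib.request.urlopen(req) as resp:
#       csv_body = resp.read().decode("utf-8")
print(req.full_url)
```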

Predefined dictionaries

The dictionaries listed in the table below are included in the KUMA distribution kit.

Predefined dictionaries

Dictionary name

Type

Description

[OOTB] Ahnlab. Severity

dictionary

Contains a table of correspondence between a priority ID and its name.

[OOTB] Ahnlab. SeverityOperational

dictionary

Contains values of the SeverityOperational parameter and a corresponding description.

[OOTB] Ahnlab. VendorAction

dictionary

Contains a table of correspondence between the ID of the operation being performed and its name.

[OOTB] Cisco ISE Message Codes

dictionary

Contains Cisco ISE event codes and their corresponding names.

[OOTB] DNS. Opcodes

dictionary

Contains a table of correspondence between decimal opcodes of DNS operations and their IANA-registered descriptions.

[OOTB] IANAProtocolNumbers

dictionary

Contains the port numbers of transport protocols (TCP, UDP) and their corresponding service names, registered by IANA.

[OOTB] Juniper - JUNOS

dictionary

Contains JUNOS event IDs and their corresponding descriptions.

[OOTB] KEDR. AccountType

dictionary

Contains the ID of the user account type and its corresponding type name.

[OOTB] KEDR. FileAttributes

dictionary

Contains IDs of file attributes stored by the file system and their corresponding descriptions.

[OOTB] KEDR. FileOperationType

dictionary

Contains IDs of file operations from the KATA API and their corresponding operation names.

[OOTB] KEDR. FileType

dictionary

Contains modified file IDs from the KATA API and their corresponding file type descriptions.

[OOTB] KEDR. IntegrityLevel

dictionary

Contains the SIDs of the Microsoft Windows INTEGRITY LEVEL parameter and their corresponding descriptions.

[OOTB] KEDR. RegistryOperationType

dictionary

Contains IDs of registry operations from the KATA API and their corresponding values.

[OOTB] Linux. Sycall types

dictionary

Contains Linux system call IDs and their corresponding names.

[OOTB] MariaDB Error Codes

dictionary

The dictionary contains MariaDB error codes and is used by the [OOTB] MariaDB Audit Plugin syslog normalizer to enrich events.

[OOTB] Microsoft SQL Server codes

dictionary

Contains MS SQL Server error IDs and their corresponding descriptions.

[OOTB] MS DHCP Event IDs Description

dictionary

Contains Microsoft Windows DHCP server event IDs and their corresponding descriptions.

[OOTB] S-Terra. Dictionary MSG ID to Name

dictionary

Contains IDs of S-Terra device events and their corresponding event names.

[OOTB] S-Terra. MSG_ID to Severity

dictionary

Contains IDs of S-Terra device events and their corresponding Severity values.

[OOTB] Syslog Priority To Facility and Severity

table

The table contains the Priority values and the corresponding Facility and Severity field values.

[OOTB] VipNet Coordinator Syslog Direction

dictionary

Contains direction IDs (sequences of special characters) used in ViPNet Coordinator to designate a direction, and their corresponding values.

[OOTB] Wallix EventClassId - DeviceAction

dictionary

Contains Wallix AdminBastion event IDs and their corresponding descriptions.

[OOTB] Windows.Codes (4738)

dictionary

Contains operation codes present in the MS Windows audit event with ID 4738 and their corresponding names.

[OOTB] Windows.Codes (4719)

dictionary

Contains operation codes present in the MS Windows audit event with ID 4719 and their corresponding names.

[OOTB] Windows.Codes (4663)

dictionary

Contains operation codes present in the MS Windows audit event with ID 4663 and their corresponding names.

[OOTB] Windows.Codes (4662)

dictionary

Contains operation codes present in the MS Windows audit event with ID 4662 and their corresponding names.

[OOTB] Windows. EventIDs and Event Names mapping

dictionary

Contains Windows event IDs and their corresponding event names.

[OOTB] Windows. FailureCodes (4625)

dictionary

Contains IDs from the Failure Information\Status and Failure Information\Sub Status fields of Microsoft Windows event 4625 and their corresponding descriptions.

[OOTB] Windows. ImpersonationLevels (4624)

dictionary

Contains IDs from the Impersonation level field of Microsoft Windows event 4624 and their corresponding descriptions.

[OOTB] Windows. KRB ResultCodes

dictionary

Contains Kerberos v5 error codes and their corresponding descriptions.

[OOTB] Windows. LogonTypes (Windows all events)

dictionary

Contains IDs of user logon types and their corresponding names.

[OOTB] Windows_Terminal Server. EventIDs and Event Names mapping

dictionary

Contains Microsoft Terminal Server event IDs and their corresponding names.

[OOTB] Windows. Validate Cred. Error Codes

dictionary

Contains IDs of user logon types and their corresponding names.

Page top
[Topic 217843]

Response rules

Response rules let you automatically run Open Single Management Platform tasks and Threat Response actions for Kaspersky Endpoint Detection and Response, KICS/KATA, and Active Directory, as well as run a custom script for specific events.

Automatic execution of Open Single Management Platform tasks, Kaspersky Endpoint Detection and Response tasks, and KICS/KATA and Active Directory tasks in accordance with response rules is available when KUMA is integrated with the relevant applications.

You can configure response rules under Resources → Response, and then select the created response rule from the drop-down list in the correlator settings. You can also configure response rules directly in the correlator settings.

In this section

Response rules for Open Single Management Platform

Response rules for a custom script

Response rules for KICS for Networks

Response rules for Kaspersky Endpoint Detection and Response

Active Directory response rules

Page top
[Topic 217972]

Response rules for Open Single Management Platform

You can configure response rules to automatically start tasks of anti-virus scan and updates on Open Single Management Platform assets.

When creating and editing response rules for Open Single Management Platform, you need to define values for the following settings.

Response rule settings

Setting

Description

Name

Required setting.

Unique name of the resource. Must contain 1 to 128 Unicode characters.

Tenant

Required setting.

The name of the tenant that owns the resource.

Type

Required setting, available if KUMA is integrated with Open Single Management Platform.

Response rule type, ksctasks.

Open Single Management Platform task

Required setting.

Name of the Open Single Management Platform task to run. Tasks must be created beforehand, and their names must begin with "KUMA". For example, KUMA antivirus check (not case-sensitive and without quotation marks).

You can use KUMA to run the following types of Open Single Management Platform tasks:

  • Update
  • Virus scan

Event field

Required setting.

Defines the event field of the asset for which the Open Single Management Platform task should be started. Possible values:

  • SourceAssetID
  • DestinationAssetID
  • DeviceAssetID

Handlers

The number of handlers that the service can run simultaneously to process response rules in parallel. By default, the number of handlers is the same as the number of virtual processors on the server where the service is installed.

Description

Description of the response rule. You can add up to 4,000 Unicode characters.

Filter

Used to define the conditions for the events to be processed using the response rule. You can select an existing filter from the drop-down list or create a new filter.

Creating a filter in resources

To create a filter:

  1. In the Filter drop-down list, select Create new.
  2. If you want to keep the filter as a separate resource, select the Save filter check box. In this case, you will be able to use the created filter in various services. This check box is cleared by default.
  3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. Maximum length of the name: 128 Unicode characters.
  4. In the Conditions settings block, specify the conditions that the events must meet:
    1. Click the Add condition button.
    2. In the Left operand and Right operand drop-down lists, specify the search parameters. Depending on the data source selected in the Right operand field, there may be fields of additional parameters for identifying the value to be passed to the filter. For example, when you select active list, you must specify the name of the active list, the entry key, and the entry key field.
    3. In the operator drop-down list, select an operator.

      Filter operators

      • =—the left operand equals the right operand.
      • <—the left operand is less than the right operand.
      • <=—the left operand is less than or equal to the right operand.
      • >—the left operand is greater than the right operand.
      • >=—the left operand is greater than or equal to the right operand.
      • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
      • contains—the left operand contains values of the right operand.
      • startsWith—the left operand starts with one of the values of the right operand.
      • endsWith—the left operand ends with one of the values of the right operand.
      • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
      • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

        The value to be checked is converted to binary and processed right to left. The bits whose positions are specified as a constant or a list are checked.

        If the value being checked is a string, then an attempt is made to convert it to integer and process it in the way described above. If the string cannot be converted to a number, the filter returns False.

      • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

        If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

      • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
      • inDictionary—checks whether the specified dictionary contains an entry defined by the key composed with the concatenated values of the selected event fields.
      • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
      • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
      • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      • inContextTable—presence of the entry in the specified context table.
      • intersect—presence in the left operand of the list items specified in the right operand.
    4. If you want the operator to be case-insensitive, select the do not match case check box. The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators. This check box is cleared by default.
    5. If you want to add a negative condition, select If not from the If drop-down list.

    You can add multiple conditions or a group of conditions.

  5. If you have added multiple conditions or groups of conditions, choose a selection condition (and, or, not) by clicking the AND button.
  6. If you want to add an existing filter, click the Add filter button and select the filter from the Select filter drop-down list. You can view the settings of a nested filter by clicking the edit-grey button.
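For illustration, the hasBit operator described above can be sketched in Python as follows. Whether every listed bit position must be set, or just one of them, is not stated in this Help; the sketch below assumes all listed positions must be set.

```python
def has_bit(value, positions):
    """Sketch of the hasBit filter semantics: the value is read as an
    integer (strings are converted first) and the bits at the given
    positions, counted from the right starting at 0, are tested.
    Assumption: all listed positions must be set."""
    try:
        n = int(value)
    except (TypeError, ValueError):
        return False  # a string that is not a number fails the check
    return all((n >> p) & 1 for p in positions)

print(has_bit(6, [1, 2]))    # True: 6 is 0b110, bits 1 and 2 are set
print(has_bit("6", [0]))     # False: bit 0 of 0b110 is not set
print(has_bit("abc", [0]))   # False: not convertible to a number
```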

To send requests to Open Single Management Platform, you must ensure that Open Single Management Platform is available over the UDP protocol.

If a response rule is owned by the shared tenant, the displayed Open Single Management Platform tasks that are available for selection are from the Open Single Management Platform server that the main tenant is connected to.

If a response rule has a selected task that is absent from the Open Single Management Platform server that the tenant is connected to, the task is not performed for assets of this tenant. This situation could arise when two tenants are using a common correlator, for example.

Page top
[Topic 233363]

Response rules for a custom script

You can create a script containing commands to be executed on the Kaspersky Unified Monitoring and Analysis Platform server when selected events are detected and configure response rules to automatically run this script. In this case, the program will run the script when it receives events that match the response rules.

The script file is stored on the server where the correlator service using the response resource is installed, in the /opt/kaspersky/kuma/correlator/<Correlator ID>/scripts directory. The kuma user on this server must have permission to run the script.

When creating and editing response rules for a custom script, you need to define values for the following parameters.

Response rule settings

Setting

Description

Name

Required setting.

Unique name of the resource. Must contain 1 to 128 Unicode characters.

Tenant

Required setting.

The name of the tenant that owns the resource.

Type

Required setting.

Response rule type, script.

Timeout

The number of seconds allotted for the script to finish. If this amount of time is exceeded, the script is terminated.

Script name

Required setting.

Name of the script file.

If the response resource is attached to the correlator service but there is no script file in the /opt/kaspersky/kuma/correlator/<Correlator ID>/scripts folder, the correlator will not work.

Script arguments

Arguments or event field values that must be passed to the script.

If the script includes actions taken on files, you should specify the absolute path to these files.

Parameters can be enclosed in quotation marks (").

Event field names are passed in the {{.EventField}} format, where EventField is the name of the event field whose value must be passed to the script.

Example: -n "\"usr\": {{.SourceUserName}}"
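As an illustration of this placeholder format only (the substitution itself is performed by KUMA; the render_args helper below is hypothetical, not part of the product):

```python
import re

def render_args(template, event):
    """Hypothetical helper that mimics the {{.EventField}} substitution:
    each placeholder is replaced with the corresponding event field value."""
    return re.sub(r"\{\{\.(\w+)\}\}",
                  lambda m: str(event.get(m.group(1), "")),
                  template)

event = {"SourceUserName": "jdoe"}
print(render_args('-n "\\"usr\\": {{.SourceUserName}}"', event))
# -n "\"usr\": jdoe"
```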

Handlers

The number of handlers that the service can run simultaneously to process response rules in parallel. By default, the number of handlers is the same as the number of virtual processors on the server where the service is installed.

Description

Description of the resource. You can add up to 4,000 Unicode characters.

Filter

Used to define the conditions for the events to be processed using the response rule. You can select an existing filter from the drop-down list or create a new filter.

Creating a filter in resources

To create a filter:

  1. In the Filter drop-down list, select Create new.
  2. If you want to keep the filter as a separate resource, select the Save filter check box. In this case, you will be able to use the created filter in various services. This check box is cleared by default.
  3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. Maximum length of the name: 128 Unicode characters.
  4. In the Conditions settings block, specify the conditions that the events must meet:
    1. Click the Add condition button.
    2. In the Left operand and Right operand drop-down lists, specify the search parameters. Depending on the data source selected in the Right operand field, additional fields may be available for specifying the value to be passed to the filter. For example, when you select active list, you must specify the name of the active list, the entry key, and the entry key field.
    3. In the operator drop-down list, select an operator.

      Filter operators

      • =—the left operand equals the right operand.
      • <—the left operand is less than the right operand.
      • <=—the left operand is less than or equal to the right operand.
      • >—the left operand is greater than the right operand.
      • >=—the left operand is greater than or equal to the right operand.
      • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
      • contains—the left operand contains values of the right operand.
      • startsWith—the left operand starts with one of the values of the right operand.
      • endsWith—the left operand ends with one of the values of the right operand.
      • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
      • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

        The value to be checked is converted to binary notation and processed from right to left. The bits at the positions specified in the constant or list are checked.

        If the value being checked is a string, an attempt is made to convert it to an integer and process it as described above. If the string cannot be converted to a number, the filter returns False.

      • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

        If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

      • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
      • inDictionary—checks whether the specified dictionary contains an entry defined by a key composed of the concatenated values of the selected event fields.
      • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
      • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
      • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have been enriched with data from CyberTrace Threat Intelligence, that is, only in collectors at the destination selection stage and in correlators.
      • inContextTable—checks whether the specified context table contains the entry.
      • intersect—checks whether the left operand contains at least one of the list items specified in the right operand.
    4. If you want the operator to be case-insensitive, select the do not match case check box. This check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators. This check box is cleared by default.
    5. If you want to add a negative condition, select If not from the If drop-down list.

    You can add multiple conditions or a group of conditions.

  5. If you have added multiple conditions or groups of conditions, choose a selection condition (and, or, not) by clicking the AND button.
  6. If you want to add an existing filter, click the Add filter button and select the filter from the Select filter drop-down list. You can view the settings of a nested filter by clicking the edit-grey button.
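The inSubnet operator listed above behaves like a standard CIDR membership test. A minimal sketch using Python's standard ipaddress module (an illustration, not KUMA code):

```python
import ipaddress

def in_subnet(address, subnet):
    """inSubnet semantics: True if the IP address (left operand)
    belongs to the subnet (right operand)."""
    return ipaddress.ip_address(address) in ipaddress.ip_network(subnet)

print(in_subnet("10.0.1.25", "10.0.1.0/24"))   # True
print(in_subnet("10.0.2.25", "10.0.1.0/24"))   # False
```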

Page top
[Topic 233366]

Response rules for KICS for Networks

You can configure response rules to automatically trigger response actions on KICS for Networks assets. For example, you can change the asset status in KICS for Networks.

When creating and editing response rules for KICS for Networks, you need to define values for the following settings.

Response rule settings

Setting

Description

Name

Required setting.

Unique name of the resource. Must contain 1 to 128 Unicode characters.

Tenant

Required setting.

The name of the tenant that owns the resource.

Type

Required setting.

Response rule type, Response via KICS/KATA.

Event field

Required setting.

Specifies the event field for the asset for which response actions must be performed. Possible values:

  • SourceAssetID
  • DestinationAssetID
  • DeviceAssetID

KICS for Networks task

Response action to be performed when data is received that matches the filter. The following types of response actions are available:

  • Change asset status to Authorized.
  • Change asset status to Unauthorized.

When a response rule is triggered, KUMA will send KICS for Networks an API request to change the status of the specified device to Authorized or Unauthorized.

Handlers

The number of handlers that the service can run simultaneously to process response rules in parallel. By default, the number of handlers is the same as the number of virtual processors on the server where the service is installed.

Description

Description of the resource. You can add up to 4,000 Unicode characters.

Filter

Used to define the conditions for the events to be processed using the response rule. You can select an existing filter from the drop-down list or create a new filter.

Creating a filter in resources

To create a filter:

  1. In the Filter drop-down list, select Create new.
  2. If you want to keep the filter as a separate resource, select the Save filter check box. In this case, you will be able to use the created filter in various services. This check box is cleared by default.
  3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. Maximum length of the name: 128 Unicode characters.
  4. In the Conditions settings block, specify the conditions that the events must meet:
    1. Click the Add condition button.
    2. In the Left operand and Right operand drop-down lists, specify the search parameters. Depending on the data source selected in the Right operand field, additional fields may be available for specifying the value to be passed to the filter. For example, when you select active list, you must specify the name of the active list, the entry key, and the entry key field.
    3. In the operator drop-down list, select an operator.

      Filter operators

      • =—the left operand equals the right operand.
      • <—the left operand is less than the right operand.
      • <=—the left operand is less than or equal to the right operand.
      • >—the left operand is greater than the right operand.
      • >=—the left operand is greater than or equal to the right operand.
      • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
      • contains—the left operand contains values of the right operand.
      • startsWith—the left operand starts with one of the values of the right operand.
      • endsWith—the left operand ends with one of the values of the right operand.
      • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
      • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

        The value to be checked is converted to binary notation and processed from right to left. The bits at the positions specified in the constant or list are checked.

        If the value being checked is a string, an attempt is made to convert it to an integer and process it as described above. If the string cannot be converted to a number, the filter returns False.

      • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

        If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

      • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
      • inDictionary—checks whether the specified dictionary contains an entry defined by a key composed of the concatenated values of the selected event fields.
      • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
      • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
      • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have been enriched with data from CyberTrace Threat Intelligence, that is, only in collectors at the destination selection stage and in correlators.
      • inContextTable—checks whether the specified context table contains the entry.
      • intersect—checks whether the left operand contains at least one of the list items specified in the right operand.
    4. If you want the operator to be case-insensitive, select the do not match case check box. This check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators. This check box is cleared by default.
    5. If you want to add a negative condition, select If not from the If drop-down list.

    You can add multiple conditions or a group of conditions.

  5. If you have added multiple conditions or groups of conditions, choose a selection condition (and, or, not) by clicking the AND button.
  6. If you want to add an existing filter, click the Add filter button and select the filter from the Select filter drop-down list. You can view the settings of a nested filter by clicking the edit-grey button.

Page top
[Topic 233722]

Response rules for Kaspersky Endpoint Detection and Response

You can configure response rules to automatically trigger response actions on Kaspersky Endpoint Detection and Response assets. For example, you can configure automatic asset network isolation.

When creating and editing response rules for Kaspersky Endpoint Detection and Response, you need to define values for the following settings.

Response rule settings

Setting

Description

Event field

Required setting.

Specifies the event field for the asset for which response actions must be performed. Possible values:

  • SourceAssetID
  • DestinationAssetID
  • DeviceAssetID

Task type

Response action to be performed when data is received that matches the filter. The following types of response actions are available:

  • Enable network isolation. When selecting this type of response, you need to define values for the following setting:
    • Isolation timeout—the number of hours during which the network isolation of an asset will be active. You can indicate from 1 to 9,999 hours. If necessary, you can add an exclusion for network isolation.

      To add an exclusion for network isolation:

      1. Click the Add exclusion button.
      2. Select the direction of network traffic that must not be blocked:
        • Inbound.
        • Outbound.
        • Inbound/Outbound.
      3. In the Asset IP field, enter the IP address of the asset whose network traffic must not be blocked.
      4. If you selected Inbound or Outbound, specify the connection ports in the Remote ports and Local ports fields.
      5. If you want to add more than one exclusion, click Add exclusion and repeat the steps to fill in the Traffic direction, Asset IP, Remote ports and Local ports fields.
      6. If you want to delete an exclusion, click the Delete button under the relevant exclusion.

    When adding exclusions to a network isolation rule, Kaspersky Endpoint Detection and Response may incorrectly display the port values in the rule details. This does not affect application performance. For more details on viewing a network isolation rule, please refer to the Kaspersky Anti Targeted Attack Platform Help Guide.

  • Disable network isolation.
  • Add prevention rule. When selecting this type of response, you need to define values for the following settings:
    • Event fields to extract hash from—event fields from which Kaspersky Unified Monitoring and Analysis Platform extracts SHA256 or MD5 hashes of files that must be prevented from running.
      The selected event fields, as well as the values selected in Event field, must be added to the propagated fields of the correlation rule.
    • File hash #1—SHA256 or MD5 hash of the file to be blocked.

At least one of the above fields must be completed.

  • Delete prevention rule.
  • Run program. When selecting this type of response, you need to define values for the following settings:
    • File path—path to the file of the process that you want to start.
    • Command line parameters—parameters with which you want to start the file.
    • Working directory—directory in which the file is located at the time of startup.

    When a response rule is triggered for users with the General Administrator role, the Run program task will be displayed in the Task manager section of the program web interface. The Created by column of the task table for this task displays Scheduled task. You can view task completion results.
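The Add prevention rule action above accepts SHA256 or MD5 file hashes. For reference, both digests can be computed with Python's standard hashlib module (an illustration only, not part of KUMA):

```python
import hashlib

def file_hashes(path):
    """Compute the SHA256 and MD5 hex digests of a file; these are the
    two hash formats accepted by the prevention rule described above."""
    sha256, md5 = hashlib.sha256(), hashlib.md5()
    with open(path, "rb") as f:
        # Read in chunks so large files do not have to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            sha256.update(chunk)
            md5.update(chunk)
    return sha256.hexdigest(), md5.hexdigest()
```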

All of the listed operations can be performed on assets that have Kaspersky Endpoint Agent for Windows. On assets that have Kaspersky Endpoint Agent for Linux, only the Run program action is available.

At the software level, creating prevention rules and network isolation rules for assets with Kaspersky Endpoint Agent for Linux is not blocked. However, Kaspersky Unified Monitoring and Analysis Platform and Kaspersky Endpoint Detection and Response do not notify you if these rules fail to apply.

Handlers

The number of handlers that the service can run simultaneously to process response rules in parallel. By default, the number of handlers is the same as the number of virtual processors on the server where the service is installed.

Description

Description of the response rule. You can add up to 4,000 Unicode characters.

Filter

Used to define the conditions for the events to be processed using the response rule. You can select an existing filter from the drop-down list or create a new filter.

Creating a filter in resources

To create a filter:

  1. In the Filter drop-down list, select Create new.
  2. If you want to keep the filter as a separate resource, select the Save filter check box. In this case, you will be able to use the created filter in various services. This check box is cleared by default.
  3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. Maximum length of the name: 128 Unicode characters.
  4. In the Conditions settings block, specify the conditions that the events must meet:
    1. Click the Add condition button.
    2. In the Left operand and Right operand drop-down lists, specify the search parameters. Depending on the data source selected in the Right operand field, additional fields may be available for specifying the value to be passed to the filter. For example, when you select active list, you must specify the name of the active list, the entry key, and the entry key field.
    3. In the operator drop-down list, select an operator.

      Filter operators

      • =—the left operand equals the right operand.
      • <—the left operand is less than the right operand.
      • <=—the left operand is less than or equal to the right operand.
      • >—the left operand is greater than the right operand.
      • >=—the left operand is greater than or equal to the right operand.
      • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
      • contains—the left operand contains values of the right operand.
      • startsWith—the left operand starts with one of the values of the right operand.
      • endsWith—the left operand ends with one of the values of the right operand.
      • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
      • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

        The value to be checked is converted to binary notation and processed from right to left. The bits at the positions specified in the constant or list are checked.

        If the value being checked is a string, an attempt is made to convert it to an integer and process it as described above. If the string cannot be converted to a number, the filter returns False.

      • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

        If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

      • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
      • inDictionary—checks whether the specified dictionary contains an entry defined by a key composed of the concatenated values of the selected event fields.
      • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
      • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
      • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have been enriched with data from CyberTrace Threat Intelligence, that is, only in collectors at the destination selection stage and in correlators.
      • inContextTable—checks whether the specified context table contains the entry.
      • intersect—checks whether the left operand contains at least one of the list items specified in the right operand.
    4. If you want the operator to be case-insensitive, select the do not match case check box. This check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators. This check box is cleared by default.
    5. If you want to add a negative condition, select If not from the If drop-down list.

    You can add multiple conditions or a group of conditions.

  5. If you have added multiple conditions or groups of conditions, choose a selection condition (and, or, not) by clicking the AND button.
  6. If you want to add an existing filter, click the Add filter button and select the filter from the Select filter drop-down list. You can view the settings of a nested filter by clicking the edit-grey button.

Page top
[Topic 237454]

Active Directory response rules

Expand all | Collapse all

Active Directory response rules define the actions to be applied to an account if a rule is triggered.

When creating and editing response rules using Active Directory, specify the values for the following settings.

Response rule settings

Setting

Description

Name

Required setting.

Unique name of the resource. Must contain 1 to 128 Unicode characters.

Tenant

Required setting.

The name of the tenant that owns the resource.

Type

Required setting.

Response rule type, Response via Active Directory.

Source of the user account ID

Event field from which the Active Directory account ID value is taken. Possible values:

  • SourceAccountID
  • DestinationAccountID

AD command

Command that is applied to the account when the response rule is triggered.

Available values:

  • Add account to group

    The Active Directory group to add the account to.
    In the mandatory Distinguished name field, specify the full path to the group.
    For example, CN=HQ Team,OU=Groups,OU=ExchangeObjects,DC=avp,DC=ru.
    Only one group can be specified within one operation.

  • Remove account from group

    The Active Directory group to remove the account from.
    In the mandatory Distinguished name field, specify the full path to the group.
    For example, CN=HQ Team,OU=Groups,OU=ExchangeObjects,DC=avp,DC=ru.
    Only one group can be specified within one operation.

  • Reset account password

If your Active Directory domain allows selecting the User cannot change password check box, resetting the user account password as a response will result in a conflict of requirements for the user account: the user will not be able to authenticate. The domain administrator will need to clear one of the check boxes for the affected user account: User cannot change password or User must change password at next logon.

  • Block account

Group DN

The DistinguishedName of the domain group that the command applies to. Example of entering a group: OU=KUMA users,OU=users,DC=example,DC=domain
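A distinguished name like the example above is a comma-separated list of attribute=value pairs. A simplified sketch of splitting one (an illustration only; escaped commas in values are not handled):

```python
def parse_dn(dn):
    """Split an LDAP distinguished name into (attribute, value) pairs.
    Simplified sketch: escaped commas in values are not handled."""
    pairs = []
    for part in dn.split(","):
        attr, _, value = part.partition("=")
        pairs.append((attr.strip(), value.strip()))
    return pairs

print(parse_dn("OU=KUMA users,OU=users,DC=example,DC=domain"))
# [('OU', 'KUMA users'), ('OU', 'users'), ('DC', 'example'), ('DC', 'domain')]
```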

Handlers

The number of handlers that the service can run simultaneously to process response rules in parallel. By default, the number of handlers is the same as the number of virtual processors on the server where the service is installed.

Filter

Used to define the conditions for the events to be processed using the response rule. You can select an existing filter from the drop-down list or create a new filter.

Creating a filter in resources

To create a filter:

  1. In the Filter drop-down list, select Create new.
  2. If you want to keep the filter as a separate resource, select the Save filter check box. In this case, you will be able to use the created filter in various services. This check box is cleared by default.
  3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. Maximum length of the name: 128 Unicode characters.
  4. In the Conditions settings block, specify the conditions that the events must meet:
    1. Click the Add condition button.
    2. In the Left operand and Right operand drop-down lists, specify the search parameters. Depending on the data source selected in the Right operand field, additional fields may be available for specifying the value to be passed to the filter. For example, when you select active list, you must specify the name of the active list, the entry key, and the entry key field.
    3. In the operator drop-down list, select an operator.

      Filter operators

      • =—the left operand equals the right operand.
      • <—the left operand is less than the right operand.
      • <=—the left operand is less than or equal to the right operand.
      • >—the left operand is greater than the right operand.
      • >=—the left operand is greater than or equal to the right operand.
      • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
      • contains—the left operand contains values of the right operand.
      • startsWith—the left operand starts with one of the values of the right operand.
      • endsWith—the left operand ends with one of the values of the right operand.
      • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
      • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).

        The value to be checked is converted to binary notation and processed from right to left. The bits at the positions specified in the constant or list are checked.

        If the value being checked is a string, an attempt is made to convert it to an integer and process it as described above. If the string cannot be converted to a number, the filter returns False.

      • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.

        If you do not specify the ID and severity of the vulnerability, the filter is triggered if the asset in the event being checked has any vulnerability.

      • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
      • inDictionary—checks whether the specified dictionary contains an entry defined by a key composed of the concatenated values of the selected event fields.
      • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
      • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
      • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have been enriched with data from CyberTrace Threat Intelligence, that is, only in collectors at the destination selection stage and in correlators.
      • inContextTable—checks whether the specified context table contains the entry.
      • intersect—checks whether the left operand contains at least one of the list items specified in the right operand.
    4. If you want the operator to be case-insensitive, select the do not match case check box. This check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators. This check box is cleared by default.
    5. If you want to add a negative condition, select If not from the If drop-down list.

    You can add multiple conditions or a group of conditions.

  5. If you have added multiple conditions or groups of conditions, choose a selection condition (and, or, not) by clicking the AND button.
  6. If you want to add an existing filter, click the Add filter button and select the filter from the Select filter drop-down list. You can view the settings of a nested filter by clicking the edit-grey button.

Page top
[Topic 243446]

Connectors

Connectors are used for establishing connections between Kaspersky Unified Monitoring and Analysis Platform services and for receiving events actively and passively.

You can specify connector settings on the Basic settings and Advanced settings tabs. The available settings depend on the selected type of connector.

Connectors can have the following types:

  • internal – Used for receiving data from KUMA services using the 'internal' protocol.
  • tcp – Used for passively receiving events over TCP when working with Windows and Linux agents.
  • udp – Used for passively receiving events over UDP when working with Windows and Linux agents.
  • netflow – Used for passively receiving events in the NetFlow format.
  • sflow – Used for passively receiving events in the sFlow format. For sFlow, only structures described in sFlow version 5 are supported.
  • nats-jetstream – Used for interacting with a NATS message broker when working with Windows and Linux agents.
  • kafka – Used for communicating with the Apache Kafka data bus when working with Windows and Linux agents.
  • http – Used for receiving events over HTTP when working with Windows and Linux agents.
  • sql – Used for querying databases. KUMA supports multiple types of databases. When creating a connector of the sql type, you must specify general connector settings and individual database connection settings.

    The program supports the following types of SQL databases:

    • SQLite.
    • MariaDB 10.5 or later.
    • MSSQL.
    • MySQL 5.7 or later.
    • PostgreSQL.
    • Cockroach.
    • Oracle.
    • Firebird.
  • file – Used for getting data from text files when working with Windows and Linux agents. One line of a text file is considered to be one event. \n is used as the newline character.
  • 1c-log – Used for getting data from 1C technology logs when working with Linux agents. \n is used as the newline character. The connector accepts only the first line from a multi-line event record.
  • 1c-xml – Used for getting data from 1C registration logs when working with Linux agents. When the connector handles multi-line events, it converts them into single-line events.
  • diode – Used for unidirectional data transmission in industrial ICS networks using data diodes.
  • ftp – Used for getting data over File Transfer Protocol (FTP) when working with Windows and Linux agents.
  • nfs – Used for getting data over Network File System (NFS) when working with Windows and Linux agents.
  • wmi – Used for getting data using Windows Management Instrumentation when working with Windows agents.
  • wec – Used for getting data using Windows Event Forwarding (WEF) and Windows Event Collector (WEC), or local operating system logs of a Windows host when working with Windows agents.
  • etw – Used for getting extended logs of DNS servers.
  • snmp – Used for getting data over Simple Network Management Protocol (SNMP) when working with Windows and Linux agents. To process events received over SNMP, you must use the json normalizer. Supported SNMP protocol versions:
    • snmpV1
    • snmpV2
    • snmpV3
  • snmp-trap – Used for passively receiving events using SNMP traps when working with Windows and Linux agents. The connector receives snmp-trap events and prepares them for normalization by mapping SNMP object IDs to temporary keys. Then the message is passed to the JSON normalizer, where the temporary keys are mapped to the KUMA fields and an event is generated. To process events received over SNMP, you must use the json normalizer. Supported SNMP protocol versions:
    • snmpV1
    • snmpV2
  • kata/edr – Used for getting KEDR data via the API.
  • vmware – Used for getting VMware vCenter data via the API.
  • elastic – Used for getting Elasticsearch data. Elasticsearch version 7.0.0 is supported.
  • office365 – Used for receiving Microsoft 365 (Office 365) data via the API.
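
For the line-oriented connectors above (for example, tcp and file), one line of input is considered one event and \n marks the boundary between events. The framing can be sketched as follows; this is an illustration assuming complete input, not KUMA code:

```python
def frame_events(stream: bytes, delimiter: bytes = b"\n") -> list[bytes]:
    # Split the raw stream on the delimiter; each non-empty chunk is one
    # event. In a real connector, a trailing chunk without a closing
    # delimiter would be buffered until more data arrives.
    return [chunk for chunk in stream.split(delimiter) if chunk]

events = frame_events(b"event one\nevent two\nevent three\n")
```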

Some connector types (such as tcp, sql, wmi, wec, and etw) support TLS encryption. KUMA supports TLS 1.2 and 1.3. When TLS mode is enabled for these connectors, the connection is established according to the following algorithm:

  • If KUMA is being used as a client:
    1. KUMA sends a connection request to the server with a ClientHello message specifying the maximum supported TLS version (1.3), as well as a list of supported ciphersuites.
    2. The server responds with the preferred TLS version and a ciphersuite.
    3. Depending on the TLS version in the server response:
      • If the server responds to the request with TLS 1.3 or 1.2, KUMA establishes a connection with the server.
      • If the server responds to the request with TLS 1.1, KUMA terminates the connection with the server.
  • If KUMA is being used as a server:
    1. The client sends a connection request to KUMA with the maximum supported TLS version, as well as a list of supported ciphersuites.
    2. Depending on the TLS version in the client request:
      • If the ClientHello message of the client request specifies TLS 1.1, KUMA terminates the connection.
      • If the client request specifies TLS 1.2 or 1.3, KUMA responds to the request with the preferred TLS version and a ciphersuite.
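
The negotiation policy above (offer at most TLS 1.3, terminate the connection for anything below TLS 1.2) can be sketched with the standard Python ssl module; this is an illustration of the version floor and ceiling, not KUMA code:

```python
import ssl

# Mirror the policy described above: offer at most TLS 1.3 and refuse
# anything below TLS 1.2, whether acting as a client or as a server.
def make_context(server_side: bool = False) -> ssl.SSLContext:
    purpose = ssl.Purpose.CLIENT_AUTH if server_side else ssl.Purpose.SERVER_AUTH
    context = ssl.create_default_context(purpose)
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    context.maximum_version = ssl.TLSVersion.TLSv1_3
    return context

# A peer that only supports TLS 1.1 fails the handshake, which corresponds
# to KUMA terminating the connection.
client = make_context()
```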

In this section

Viewing connector settings

Adding a connector

Connector settings

Predefined connectors

Page top
[Topic 217776]

Viewing connector settings

To view connector settings:

  1. In the web interface of Kaspersky Unified Monitoring and Analysis Platform, go to the Resources → Connectors section.
  2. In the folder structure, select the folder containing the relevant connector.
  3. Select the connector whose settings you want to view.

The connector settings are displayed on two tabs: Basic settings and Advanced settings. For a detailed description of the settings for each connector type, refer to the Connector settings section.

Page top
[Topic 233566]

Adding a connector

You can enable the display of non-printing characters for all entry fields except the Description field.

To add a connector:

  1. In the web interface of Kaspersky Unified Monitoring and Analysis Platform, go to the Resources → Connectors section.
  2. In the folder structure, select the folder in which you want the connector to be located.

    Root folders correspond to tenants. To make a connector available to a specific tenant, the resource must be created in the folder of that tenant.

    If the required folder is absent from the folder tree, you need to create it.

    By default, added connectors are created in the Shared folder.

  3. Click the Add connector button.
  4. Define the settings for the selected connector type.

    The settings that you must specify for each type of connector are provided in the Connector settings section.

  5. Click the Save button.
Page top
[Topic 233570]
[Topic 233592]

Connector, internal type

Connectors of the internal type are used for receiving data from KUMA services over the 'internal' protocol. For example, you must use such a connector to receive the following data:

  • Internal data, such as event routes.
  • File attributes. If, at the Transport step of the collector installation wizard, you specified a connector of the file, 1c-xml, or 1c-log type, then at the Event parsing step, in the Mapping table, you can pass the name of the file being processed by the collector, or the path to that file, in a KUMA event field. To do this, in the Source column, specify one of the following values:
    • $kuma_fileSourceName to pass the name of the file being processed by the collector in the KUMA event field.
    • $kuma_fileSourcePath to pass the path to the file being processed by the collector in the KUMA event field.

    When you use a file, 1c-xml, or 1c-log connector, the new variables in the normalizer will only work with destinations of the internal type.

  • Events to the event router. The event router can only receive events over the 'internal' protocol, therefore you can only use internal destinations when sending events to the event router.

Settings for a connector of the internal type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

Type

Connector type: internal.

Required setting.

Tags

Tags for resource search.

Optional setting.

URL

The URL and port that the connector is listening on. You can enter a value in one of the following formats:

  • <host name>:<port number>
  • <IPv4 address>:<port number>
  • :<port number>

You can specify IPv6 addresses in the following format: [<IPv6 address>%<interface>]:<port number>, for example, [fe80::5054:ff:fe4d:ba0c%eth0]:4222.

You can add multiple values or delete values. To add a value, click the + Add button. To delete a value, click the delete cross-black icon next to it.

Required setting.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.
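
The URL formats accepted above, including a bracketed IPv6 address with an interface (zone), can be split into a host and a port as in the following minimal sketch (the function name is an assumption; this is not KUMA code):

```python
def split_listen_url(url: str) -> tuple[str, int]:
    # Split on the last colon so that colons inside an IPv6 address
    # are not mistaken for the port separator.
    host, sep, port = url.rpartition(":")
    if not sep:
        raise ValueError(f"missing port in {url!r}")
    # Strip the brackets used for IPv6 addresses; a zone suffix such as
    # %eth0 stays part of the host. An empty host (":<port>" format)
    # means "listen on all interfaces".
    if host.startswith("[") and host.endswith("]"):
        host = host[1:-1]
    return host, int(port)
```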

Page top
[Topic 292827]

Connector, tcp type

Connectors of the tcp type are used for passively receiving events over TCP when working with Windows and Linux agents. Settings for a connector of the tcp type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

Type

Connector type: tcp.

Required setting.

Tags

Tags for resource search.

Optional setting.

URL

URL that you want to connect to. You can enter a URL in one of the following formats:

  • <host name>:<port number>
  • <IPv4 address>:<port number>
  • <IPv6 address>:<port number>
  • :<port number>

Required setting.

Auditd

This toggle switch enables the auditd mechanism to group auditd event lines received from the connector into an auditd event.

If you enable this toggle switch, you cannot select a value in the Delimiter drop-down list because \n is automatically selected for the auditd mechanism.

If you enable this toggle switch in the connector settings of the agent, you need to select \n in the Delimiter drop-down list in the connector settings of the collector to which the agent sends events.

The maximum size of a grouped auditd event is approximately 4,174,304 characters.

Delimiter

The character that marks the boundary between events:

  • \n
  • \t
  • \0

If you do not select a value in this drop-down list, \n is selected by default.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.

Buffer size

Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB).

Character encoding

Character encoding. The default is UTF-8.

Event buffer TTL

Buffer lifetime for auditd event lines, in milliseconds. Auditd event lines enter the KUMA collector and accumulate in the buffer. This allows multiple auditd event lines to be grouped into a single auditd event.

The buffer lifetime countdown begins when the first auditd event line is received or when the previous buffer lifetime expires. Possible values: from 50 to 30,000. The default value is 2000.

This field is available if you have enabled the Auditd toggle switch on the Basic settings tab.

The auditd event lines accumulated in the buffer are kept in the RAM of the server. We recommend caution when increasing the buffer size because memory usage by the KUMA collector may become excessive. You can see how much server RAM the KUMA collector is using in KUMA metrics.

If you want a buffer lifetime to exceed 30,000 milliseconds, we recommend using a different auditd event transport. For example, you can use an agent or pre-accumulate auditd events in a file, and then process this file with the KUMA collector.

Transport header

Regular expression for auditd events, which is used to identify auditd event lines. You can use the default value or edit it.

The regular expression must contain the record_type_name, record_type_value, and event_sequence_number groups. If a multi-line auditd event contains a prefix, the prefix is retained for the first line of the auditd event and discarded for the following lines.

You can revert to the default regular expression for auditd events by clicking Reset to default value.

TLS mode

TLS encryption mode. When using TLS encryption, you cannot specify an IP address in the URL field on the Basic settings tab. Available values:

  • Disabled means TLS encryption is not used. This value is selected by default.
  • Enabled means TLS encryption is used, but certificates are not verified.
  • With verification means TLS encryption is used with verification of the certificate signed with the KUMA root certificate. The root certificate and key of KUMA are created automatically during application installation and are stored on the KUMA Core server in the /opt/kaspersky/kuma/core/certificates/ directory.
  • Custom CA means TLS encryption is used with verification that the certificate was signed by a Certificate Authority. If you select this value, in the Custom CA drop-down list, specify a secret with a certificate signed by a certification authority. You can select an existing secret or create a new secret. To create a new secret, select Create new.

    If you want to edit the settings of an existing secret, click the pencil edit-pencil icon next to it.

    How to create a certificate signed by a Certificate Authority?

    You can create a CA-signed certificate on the KUMA Core server (the following command examples use OpenSSL).

    To create a certificate signed by a Certificate Authority:

    1. Generate a key to be used by the Certificate Authority, for example:

      openssl genrsa -out ca.key 2048

    2. Create a certificate for the generated key, for example:

      openssl req -new -x509 -days 365 -key ca.key -subj "/CN=<common host name of Certificate Authority>" -out ca.crt

    3. Create a private key and a request to have it signed by the Certificate Authority, for example:

      openssl req -newkey rsa:2048 -nodes -keyout server.key -subj "/CN=<common host name of KUMA server>" -out server.csr

    4. Create the certificate signed by the Certificate Authority. You need to include the domain names or IP addresses of the server for which you are creating the certificate in the subjectAltName variable, for example:

      openssl x509 -req -extfile <(printf "subjectAltName=DNS:domain1.ru,DNS:domain2.com,IP:192.168.0.1") -days 365 -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt

    5. Upload the created server.crt certificate in the KUMA Console to a secret of the certificate type, then in the Custom CA drop-down list, select the secret of the certificate type.

    To use KUMA certificates on third-party devices, you must change the certificate file extension from CERT to CRT. Otherwise, you can get the x509: certificate signed by unknown authority error.

  • Custom PFX means TLS encryption with a PFX secret. You must generate a PFX certificate with a private key in PKCS#12 container format in an external Certificate Authority, export the PFX certificate from the key store, and upload the PFX certificate to the KUMA Console as a PFX secret. If you select this value, in the PFX secret drop-down list, specify a PFX secret with a certificate signed by a certification authority. You can select an existing PFX secret or create a new PFX secret. To create a new PFX secret, select Create new.

    If you want to edit the settings of an existing secret, click the pencil edit-pencil icon next to it.

    How to create a PFX secret?

    To create a PFX secret:

    1. In the Name field, enter the name of the PFX secret.
    2. Click Upload PFX and select the PKCS#12 container file to which you exported the PFX certificate with the private key.
    3. In the Password field, enter the PFX certificate security password that was set in the PFX Certificate Export Wizard.
    4. Click the Create button.

    The PFX secret is created and displayed in the PFX secret drop-down list.

Compression

Drop-down list for configuring Snappy compression:

  • Disabled. This value is selected by default.
  • Use Snappy.
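
The Transport header setting above requires a regular expression with the record_type_name, record_type_value, and event_sequence_number named groups. The following Python sketch shows how such an expression can identify auditd event lines; the pattern and the sample line formats are illustrative assumptions, not KUMA's default expression:

```python
import re

# Illustrative pattern only; KUMA's default Transport header expression
# is not reproduced here. The three required named groups are present:
# record_type_name, record_type_value, and event_sequence_number.
AUDITD_HEADER = re.compile(
    r"type=(?:(?P<record_type_name>[A-Z_]+)|(?P<record_type_value>\d+))"
    r"\s+(?:msg=)?audit\([\d.]+:(?P<event_sequence_number>\d+)\):"
)

def parse_header(line: str):
    """Return the named groups for an auditd event line, or None."""
    match = AUDITD_HEADER.search(line)
    return match.groupdict() if match else None

# Sample auditd line (assumed format): lines with the same sequence
# number belong to the same multi-line auditd event.
header = parse_header("type=SYSCALL msg=audit(1638360000.123:456): arch=c000003e")
```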

Page top
[Topic 220739]

Connector, udp type

Connectors of the udp type are used for passively receiving events over UDP when working with Windows and Linux agents. Settings for a connector of the udp type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

Type

Connector type: udp.

Required setting.

URL

URL that you want to connect to. You can enter a URL in one of the following formats:

  • <host name>:<port number>
  • <IPv4 address>:<port number>
  • <IPv6 address>:<port number>
  • :<port number>

Required setting.

Auditd

This toggle switch enables the auditd mechanism to group auditd event lines received from the connector into an auditd event.

If you enable this toggle switch, you cannot select a value in the Delimiter drop-down list because \n is automatically selected for the auditd mechanism.

If you enable this toggle switch in the connector settings of the agent, you need to select \n in the Delimiter drop-down list in the connector settings of the collector to which the agent sends events.

The maximum size of a grouped auditd event is approximately 4,174,304 characters.

Delimiter

The character that marks the boundary between events:

  • \n
  • \t
  • \0

If you do not select a value in this drop-down list, \n is selected by default.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.

Buffer size

Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB).

Number of handlers

Number of handlers that the service can run simultaneously to process events in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2.

The value must be a positive integer.

Character encoding

Character encoding. The default is UTF-8.

Event buffer TTL

Buffer lifetime for auditd event lines, in milliseconds. Auditd event lines enter the KUMA collector and accumulate in the buffer. This allows multiple auditd event lines to be grouped into a single auditd event.

The buffer lifetime countdown begins when the first auditd event line is received or when the previous buffer lifetime expires. Possible values: from 50 to 30,000. The default value is 2000.

This field is available if you have enabled the Auditd toggle switch on the Basic settings tab.

The auditd event lines accumulated in the buffer are kept in the RAM of the server. We recommend caution when increasing the buffer size because memory usage by the KUMA collector may become excessive. You can see how much server RAM the KUMA collector is using in KUMA metrics.

If you want a buffer lifetime to exceed 30,000 milliseconds, we recommend using a different auditd event transport. For example, you can use an agent or pre-accumulate auditd events in a file, and then process this file with the KUMA collector.

Transport header

Regular expression for auditd events, which is used to identify auditd event lines. You can use the default value or edit it.

The regular expression must contain the record_type_name, record_type_value, and event_sequence_number groups. If a multi-line auditd event contains a prefix, the prefix is retained for the first line of the auditd event and discarded for the following lines.

You can revert to the default regular expression for auditd events by clicking Reset to default value.

Compression

Drop-down list for configuring Snappy compression:

  • Disabled. This value is selected by default.
  • Use Snappy.

Page top
[Topic 220740]

Connector, netflow type

Connectors of the netflow type are used for passively receiving events in the NetFlow format. Settings for a connector of the netflow type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

Type

Connector type: netflow.

Required setting.

Tags

Tags for resource search.

Optional setting.

URL

URL that you want to connect to. The following URL formats are supported:

  • <host name>:<port number>
  • <IPv4 address>:<port number>
  • <IPv6 address>:<port number>

    You can specify IPv6 addresses in the following format: [<IPv6 address>%<interface>]:<port number>, for example, [fe80::5054:ff:fe4d:ba0c%eth0]:4222.

You can add multiple URLs or remove a URL. To add a URL, click the + Add button. To remove a URL, click the delete cross-black icon next to it.

Required setting.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.

Buffer size

Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB).

Number of handlers

Number of handlers that the service can run simultaneously to process events in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2.

The value must be a positive integer.

Character encoding

Character encoding. The default is UTF-8.

Page top
[Topic 220741]

Connector, sflow type

Connectors of the sflow type are used for passively receiving events in the sFlow format. For sFlow, only structures described in sFlow version 5 are supported. Settings for a connector of the sflow type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

Type

Connector type: sflow.

Required setting.

URL

URL that you want to connect to. You can enter a URL in one of the following formats:

  • <host name>:<port number>
  • <IPv4 address>:<port number>
  • <IPv6 address>:<port number>
  • :<port number>

Required setting.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.

Buffer size

Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB).

Number of handlers

Number of handlers that the service can run simultaneously to process events in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2.

The value must be a positive integer.

Character encoding

Character encoding. The default is UTF-8.

Page top
[Topic 233206]

Connector, nats-jetstream type

Connectors of the nats-jetstream type are used for interacting with a NATS message broker when working with Windows and Linux agents. Settings for a connector of the nats-jetstream type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

Type

Connector type: nats-jetstream.

Required setting.

Tags

Tags for resource search.

Optional setting.

URL

URL that you want to connect to. The following URL formats are supported:

  • <host name>:<port number>
  • <IPv4 address>:<port number>
  • <IPv6 address>:<port number>

    You can specify IPv6 addresses in the following format: [<IPv6 address>%<interface>]:<port number>, for example, [fe80::5054:ff:fe4d:ba0c%eth0]:4222.

You can add multiple URLs or remove a URL. To add a URL, click the + Add button. To remove a URL, click the delete cross-black icon next to it.

Required setting.

Authorization

Type of authorization when connecting to the URL specified in the URL field:

  • Disabled. This value is selected by default.
  • Plain. If this option is selected, in the Secret drop-down list, specify the secret containing user account credentials for authorization when connecting to the destination. You can select an existing secret or create a new secret. To create a new secret, select Create new.

    If you want to edit the settings of an existing secret, click the pencil edit-pencil icon next to it.

    How to create a secret?

    To create a secret:

    1. In the Name field, enter the name of the secret.
    2. In the User and Password fields, enter the credentials of the user account that the Agent will use to connect to the connector.
    3. If necessary, enter a description of the secret in the Description field.
    4. Click the Create button.

    The secret is added and displayed in the Secret drop-down list.

Subject

The subject of NATS messages. Characters are entered in Unicode encoding.

Required setting.

GroupID

The value of the GroupID parameter for NATS messages. Maximum length of the value: 255 Unicode characters. The default value is default.

Delimiter

The character that marks the boundary between events:

  • \n
  • \t
  • \0

If you do not select a value in this drop-down list, \n is selected by default.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.

Buffer size

Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB).

Number of handlers

Number of handlers that the service can run simultaneously to process events in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2.

The value must be a positive integer.

Character encoding

Character encoding. The default is UTF-8.

TLS mode

TLS encryption mode. When using TLS encryption, you cannot specify an IP address in the URL field on the Basic settings tab. Available values:

  • Disabled means TLS encryption is not used. This value is selected by default.
  • Enabled means TLS encryption is used, but certificates are not verified.
  • With verification means TLS encryption is used with verification of the certificate signed with the KUMA root certificate. The root certificate and key of KUMA are created automatically during application installation and are stored on the KUMA Core server in the /opt/kaspersky/kuma/core/certificates/ directory.
  • Custom CA means TLS encryption is used with verification that the certificate was signed by a Certificate Authority. If you select this value, in the Custom CA drop-down list, specify a secret with a certificate signed by a certification authority. You can select an existing secret or create a new secret. To create a new secret, select Create new.

    If you want to edit the settings of an existing secret, click the pencil edit-pencil icon next to it.

    How to create a certificate signed by a Certificate Authority?

    You can create a CA-signed certificate on the KUMA Core server (the following command examples use OpenSSL).

    To create a certificate signed by a Certificate Authority:

    1. Generate a key to be used by the Certificate Authority, for example:

      openssl genrsa -out ca.key 2048

    2. Create a certificate for the generated key, for example:

      openssl req -new -x509 -days 365 -key ca.key -subj "/CN=<common host name of Certificate Authority>" -out ca.crt

    3. Create a private key and a request to have it signed by the Certificate Authority, for example:

      openssl req -newkey rsa:2048 -nodes -keyout server.key -subj "/CN=<common host name of KUMA server>" -out server.csr

    4. Create the certificate signed by the Certificate Authority. You need to include the domain names or IP addresses of the server for which you are creating the certificate in the subjectAltName variable, for example:

      openssl x509 -req -extfile <(printf "subjectAltName=DNS:domain1.ru,DNS:domain2.com,IP:192.168.0.1") -days 365 -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt

    5. Upload the created server.crt certificate in the KUMA Console to a secret of the certificate type, then in the Custom CA drop-down list, select the secret of the certificate type.

    To use KUMA certificates on third-party devices, you must change the certificate file extension from CERT to CRT. Otherwise, you can get the x509: certificate signed by unknown authority error.

  • Custom PFX means TLS encryption with a PFX secret. You must generate a PFX certificate with a private key in PKCS#12 container format in an external Certificate Authority, export the PFX certificate from the key store, and upload the PFX certificate to the KUMA Console as a PFX secret. If you select this value, in the PFX secret drop-down list, specify a PFX secret with a certificate signed by a certification authority. You can select an existing PFX secret or create a new PFX secret. To create a new PFX secret, select Create new.

    If you want to edit the settings of an existing secret, click the pencil edit-pencil icon next to it.

    How to create a PFX secret?

    To create a PFX secret:

    1. In the Name field, enter the name of the PFX secret.
    2. Click Upload PFX and select the PKCS#12 container file to which you exported the PFX certificate with the private key.
    3. In the Password field, enter the PFX certificate security password that was set in the PFX Certificate Export Wizard.
    4. Click the Create button.

    The PFX secret is created and displayed in the PFX secret drop-down list.

Compression

Drop-down list for configuring Snappy compression:

  • Disabled. This value is selected by default.
  • Use Snappy.

Page top
[Topic 220742]

Connector, kafka type

Connectors of the kafka type are used for communicating with the Apache Kafka data bus when working with Windows and Linux agents. Settings for a connector of the kafka type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

Type

Connector type: kafka.

Required setting.

Tags

Tags for resource search.

Optional setting.

URL

URL that you want to connect to. The following URL formats are supported:

  • <host name>:<port number>
  • <IPv4 address>:<port number>
  • <IPv6 address>:<port number>

    You can specify IPv6 addresses in the following format: [<IPv6 address>%<interface>]:<port number>, for example, [fe80::5054:ff:fe4d:ba0c%eth0]:4222.

You can add multiple URLs or remove a URL. To add a URL, click the + Add button. To remove a URL, click the delete cross-black icon next to it.

Required setting.

Authorization

Type of authorization when connecting to the URL specified in the URL field:

  • Disabled. This value is selected by default.
  • Plain. If this option is selected, in the Secret drop-down list, specify the secret containing user account credentials for authorization when connecting to the destination. You can select an existing secret or create a new secret. To create a new secret, select Create new.

    If you want to edit the settings of an existing secret, click the pencil edit-pencil icon next to it.

    How to create a secret?

    To create a secret:

    1. In the Name field, enter the name of the secret.
    2. In the User and Password fields, enter the credentials of the user account that the Agent will use to connect to the connector.
    3. If necessary, enter a description of the secret in the Description field.
    4. Click the Create button.

    The secret is added and displayed in the Secret drop-down list.

  • PFX means TLS encryption with a PFX secret. You must generate a PFX certificate with a private key in PKCS#12 container format in an external Certificate Authority, export the PFX certificate from the key store, and upload the PFX certificate to the KUMA Console as a PFX secret. If you select this value, in the PFX secret drop-down list, specify a PFX secret with a certificate signed by a certification authority. You can select an existing PFX secret or create a new PFX secret. To create a new PFX secret, select Create new.

    If you want to edit the settings of an existing secret, click the pencil icon next to it.

    How to create a PFX secret?

    To create a PFX secret:

    1. In the Name field, enter the name of the PFX secret.
    2. Click Upload PFX and select the PKCS#12 container file to which you exported the PFX certificate with the private key.
    3. In the Password field, enter the PFX certificate security password that was set in the PFX Certificate Export Wizard.
    4. Click the Create button.

    The PFX secret is created and displayed in the PFX secret drop-down list.
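    The external steps mentioned above can be sketched with OpenSSL. In this illustration the certificate is self-signed and all file names and the password are assumptions; in production, the certificate comes from your Certificate Authority.

```shell
# Generate an illustrative key pair and self-signed certificate
# (in production, use a certificate issued by your CA).
openssl req -x509 -newkey rsa:2048 -nodes -keyout server.key \
  -subj "/CN=kuma.example.com" -days 365 -out server.crt

# Export the key and certificate into a PKCS#12 (PFX) container.
openssl pkcs12 -export -inkey server.key -in server.crt \
  -passout pass:changeit -out server.pfx

# Sanity check: the container opens with the chosen password.
openssl pkcs12 -info -in server.pfx -passin pass:changeit -noout
```

    The resulting server.pfx file and its password are what you upload and enter when creating the PFX secret.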

Topic

The topic of Kafka messages. The maximum length of the topic name is 255 characters. You can use the following characters: a–z, A–Z, 0–9, ".", "_", "-".

Required setting.

GroupID

The value of the GroupID parameter for Kafka messages. Maximum length of the value: 255 characters. You can use the following characters: a–z, A–Z, 0–9, ".", "_", and "-".

Delimiter

The character that marks the boundary between events:

  • \n
  • \t
  • \0

If you do not select a value in this drop-down list, \n is selected by default.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.

Number of handlers

Number of handlers that the service can run simultaneously to process incoming events in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2.

The value must be a positive integer.

Character encoding

Character encoding. The default is UTF-8.

TLS mode

TLS encryption mode. When using TLS encryption, you cannot specify an IP address in the URL field on the Basic settings tab. Available values:

  • Disabled means TLS encryption is not used. This value is selected by default.
  • Enabled means TLS encryption is used, but certificates are not verified.
  • With verification means TLS encryption is used with verification of the certificate signed with the KUMA root certificate. The root certificate and key of KUMA are created automatically during application installation and are stored on the KUMA Core server in the /opt/kaspersky/kuma/core/certificates/ directory.
  • Custom CA means TLS encryption is used with verification that the certificate was signed by a Certificate Authority. If you select this value, in the Custom CA drop-down list, specify a secret with a certificate signed by a certification authority. You can select an existing secret or create a new secret. To create a new secret, select Create new.

    If you want to edit the settings of an existing secret, click the pencil icon next to it.

    How to create a certificate signed by a Certificate Authority?

    You can create a CA-signed certificate on the KUMA Core server (the following command examples use OpenSSL).

    To create a certificate signed by a Certificate Authority:

    1. Generate a key to be used by the Certificate Authority, for example:

      openssl genrsa -out ca.key 2048

    2. Create a certificate for the generated key, for example:

      openssl req -new -x509 -days 365 -key ca.key -subj "/CN=<common host name of Certificate Authority>" -out ca.crt

    3. Create a private key and a request to have it signed by the Certificate Authority, for example:

      openssl req -newkey rsa:2048 -nodes -keyout server.key -subj "/CN=<common host name of KUMA server>" -out server.csr

    4. Create the certificate signed by the Certificate Authority. You need to include the domain names or IP addresses of the server for which you are creating the certificate in the subjectAltName variable, for example:

      openssl x509 -req -extfile <(printf "subjectAltName=DNS:domain1.ru,DNS:domain2.com,IP:192.168.0.1") -days 365 -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt

    5. Upload the created server.crt certificate in the KUMA Console to a secret of the certificate type, then in the Custom CA drop-down list, select the secret of the certificate type.

    To use KUMA certificates on third-party devices, you must change the certificate file extension from CERT to CRT. Otherwise, you can get the x509: certificate signed by unknown authority error.

Size of message to fetch

Size of one message in the request, in bytes. The default value of 16 MB is applied if no value is specified or 0 is specified.

Maximum fetch wait time

Timeout for one message in seconds. The default value of 5 seconds is applied if no value is specified or 0 is specified.

Connection timeout

Kafka broker connection timeout in seconds.

Maximum possible value: 2147483647. The default value is 30 seconds.

Read timeout

Read operation timeout in seconds.

Maximum possible value: 2147483647. The default value is 30 seconds.

Write timeout

Write operation timeout in seconds.

Maximum possible value: 2147483647. The default value is 30 seconds.

Group status update interval

Group status update interval in seconds. The value cannot exceed the session time. The recommended value is 1/3 of the session time.

Maximum possible value: 2147483647. The default value is 30 seconds.

Session time

Session time in seconds.

Maximum possible value: 2147483647. The default value is 30 seconds.

Maximum time to process one message

Maximum time to process one message by a single thread, in milliseconds.

Maximum possible value: 2147483647. The default value is 100 milliseconds.

Enable autocommit

This toggle switch enables automatic committing of message offsets. The toggle switch is turned on by default.

Autocommit interval

Autocommit interval in seconds. The default value is 1 second.

The value must be a positive number. Maximum possible value: 18446744073709551615.

Page top
[Topic 220744]

Connector, http type

Expand all | Collapse all

Connectors of the http type are used for receiving events over HTTP when working with Windows and Linux agents. The settings for a connector of the http type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

Type

Connector type: http.

Required setting.

Tags

Tags for resource search.

Optional setting.

URL

URL that you want to connect to. You can enter a URL in one of the following formats:

  • <host name>:<port number>
  • <IPv4 address>:<port number>
  • <IPv6 address>:<port number>
  • :<port number>

Required setting.

Delimiter

The character that marks the boundary between events:

  • \n
  • \t
  • \0

If you do not select a value in this drop-down list, \n is selected by default.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.

Character encoding

Character encoding. The default is UTF-8.

TLS mode

TLS encryption mode. When using TLS encryption, you cannot specify an IP address in the URL field on the Basic settings tab. Available values:

  • Disabled means TLS encryption is not used. This value is selected by default.
  • Enabled means TLS encryption is used, but certificates are not verified.
  • With verification means TLS encryption is used with verification of the certificate signed with the KUMA root certificate. The root certificate and key of KUMA are created automatically during application installation and are stored on the KUMA Core server in the /opt/kaspersky/kuma/core/certificates/ directory.
  • Custom CA means TLS encryption is used with verification that the certificate was signed by a Certificate Authority. If you select this value, in the Custom CA drop-down list, specify a secret with a certificate signed by a certification authority. You can select an existing secret or create a new secret. To create a new secret, select Create new.

    If you want to edit the settings of an existing secret, click the pencil icon next to it.

    How to create a certificate signed by a Certificate Authority?

    You can create a CA-signed certificate on the KUMA Core server (the following command examples use OpenSSL).

    To create a certificate signed by a Certificate Authority:

    1. Generate a key to be used by the Certificate Authority, for example:

      openssl genrsa -out ca.key 2048

    2. Create a certificate for the generated key, for example:

      openssl req -new -x509 -days 365 -key ca.key -subj "/CN=<common host name of Certificate Authority>" -out ca.crt

    3. Create a private key and a request to have it signed by the Certificate Authority, for example:

      openssl req -newkey rsa:2048 -nodes -keyout server.key -subj "/CN=<common host name of KUMA server>" -out server.csr

    4. Create the certificate signed by the Certificate Authority. You need to include the domain names or IP addresses of the server for which you are creating the certificate in the subjectAltName variable, for example:

      openssl x509 -req -extfile <(printf "subjectAltName=DNS:domain1.ru,DNS:domain2.com,IP:192.168.0.1") -days 365 -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt

    5. Upload the created server.crt certificate in the KUMA Console to a secret of the certificate type, then in the Custom CA drop-down list, select the secret of the certificate type.

    To use KUMA certificates on third-party devices, you must change the certificate file extension from CERT to CRT. Otherwise, you can get the x509: certificate signed by unknown authority error.

  • Custom PFX means TLS encryption with a PFX secret. You must generate a PFX certificate with a private key in PKCS#12 container format in an external Certificate Authority, export the PFX certificate from the key store, and upload the PFX certificate to the KUMA Console as a PFX secret. If you select this value, in the PFX secret drop-down list, specify a PFX secret with a certificate signed by a certification authority. You can select an existing PFX secret or create a new PFX secret. To create a new PFX secret, select Create new.

    If you want to edit the settings of an existing secret, click the pencil icon next to it.

    How to create a PFX secret?

    To create a PFX secret:

    1. In the Name field, enter the name of the PFX secret.
    2. Click Upload PFX and select the PKCS#12 container file to which you exported the PFX certificate with the private key.
    3. In the Password field, enter the PFX certificate security password that was set in the PFX Certificate Export Wizard.
    4. Click the Create button.

    The PFX secret is created and displayed in the PFX secret drop-down list.

Page top
[Topic 220745]

Connector, sql type

Expand all | Collapse all

Connectors of the sql type are used for querying databases. KUMA supports multiple types of databases. When creating a connector of the sql type, you must specify general connector settings and individual database connection settings. The settings for a connector of the sql type are described in the following tables.

The program supports the following types of SQL databases:

  • SQLite.
  • MariaDB 10.5 or later.
  • MSSQL.
  • MySQL 5.7 or later.
  • PostgreSQL.
  • Cockroach.
  • Oracle.
  • Firebird.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

Type

Connector type: sql.

Required setting.

Tags

Tags for resource search.

Optional setting.

Default query

SQL query that is executed when connecting to the database.

Required setting.

Reconnect to the database every time a query is sent

This toggle enables reconnection of the connector to the database every time a query is sent. This toggle switch is turned off by default.

Poll interval, sec

Interval for executing SQL queries in seconds. The default value is 10 seconds.

Connection

Database connection settings:

  • Database type is the type of the database to connect to. When you select a database type, the prefix corresponding to the communication protocol is displayed in the URL field. For example, for a ClickHouse database, the URL field contains the clickhouse:// prefix.
  • The Secret separately check box lets you specify the connection URL separately, not as part of the secret, so that the connection information can be viewed.
  • URL is the connection URL. Specifying the URL separately from the secret lets you avoid re-creating a large number of connections if the password of the user account that you used for the connections changes.

    When creating connections, strings containing account credentials with special characters may be incorrectly processed. If an error occurs when creating a connection, but you are sure that the specified settings are correct, enter the special characters in percent encoding.

    Codes of special characters

    !  %21
    #  %23
    $  %24
    %  %25
    &  %26
    '  %27
    (  %28
    )  %29
    *  %2A
    +  %2B
    ,  %2C
    /  %2F
    :  %3A
    ;  %3B
    =  %3D
    ?  %3F
    @  %40
    [  %5B
    ]  %5D
    \  %5C

    The following special characters are not supported in passwords used to access SQL databases: space, [, ], :, /, #, %, \.

    If you select the Secret separately check box, you can select an existing URL or create a new URL. To create a new URL, select Create new.

    If you want to edit the settings of an existing URL, click the pencil icon next to it.

  • Secret is a secret of the urls type that stores a list of URLs for connecting to the database. This field is displayed if the Secret separately check box is selected.
  • Identity column is the name of the column that contains the ID for each row of the table.

    Required setting.

  • Identity seed is the value in the identity column for determining the row from which you want to start reading data from the SQL table.
  • Query is the additional SQL query that is executed instead of the default SQL query.
  • Poll interval, sec is the SQL query execution interval in seconds. The specified interval is used instead of the default interval for the connector. The default value is 10 seconds.

You can add multiple connections or delete a connection. To add a connection, click the + Add connection button. To remove a connection, click the delete icon next to it.
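If in doubt, you can derive the percent encoding programmatically; Python's standard urllib.parse.quote function produces the same codes as the table above (the password below is a made-up example):

```python
# Percent-encode a password before embedding it in a connection URL.
from urllib.parse import quote

password = "p@ss%word"  # hypothetical password with special characters
encoded = quote(password, safe="")
print(encoded)  # p%40ss%25word
```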

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.

Character encoding

Character encoding. The default is UTF-8.

KUMA converts SQL responses to UTF-8 encoding. You can configure the SQL server to send responses in UTF-8 encoding or change the encoding of incoming messages on the KUMA side.

Within a single connector, you can create connections to multiple supported databases. If a collector with a connector of the sql type cannot be started, check whether the /opt/kaspersky/kuma/collector/<collector ID>/sql/state-<file ID> state file is empty. If the state file is empty, delete it and restart the collector.

To create a connection for multiple SQL databases:

  1. Click the Add connection button.
  2. Specify the URL, Identity column, Identity seed, Query, and Poll interval, sec values.
  3. Repeat steps 1–2 for each required connection.

Supported SQL types and their specific usage features

The following SQL types are supported:

  • MSSQL.

    For example:

    • sqlserver://{user}:{password}@{server:port}/{instance_name}?database={database}

    We recommend using this URL variant.

    • sqlserver://{user}:{password}@{server}?database={database}

    The characters @p1 are used as a placeholder in the SQL query.

    If you want to connect using domain account credentials, specify the account name in <domain>%5C<user> format. For example: sqlserver://domain%5Cuser:password@ksc.example.com:1433/SQLEXPRESS?database=KAV.

  • MySQL/MariaDB

    For example:

    mysql://{user}:{password}@tcp({server}:{port})/{database}

    The characters ? are used as placeholders in the SQL query.

  • PostgreSQL.

    For example: postgres://{user}:{password}@{server}/{database}?sslmode=disable

    The characters $1 are used as a placeholder in the SQL query.

  • CockroachDB

    For example:

    postgres://{user}:{password}@{server}:{port}/{database}?sslmode=disable

    The characters $1 are used as a placeholder in the SQL query.

  • SQLite3

    For example:

    sqlite3://file:{file_path}

    A question mark (?) is used as a placeholder in the SQL query.

    When querying SQLite3, if the initial value of the ID is in datetime format, you must add a date conversion with the sqlite datetime function to the SQL query. For example:

    select * from connections where datetime(login_time) > datetime(?, 'utc') order by login_time

    In this example, connections is the SQLite table, and the value of the variable ? is taken from the Identity seed field, and it must be specified in the {<date>}T{<time>}Z format, for example, 2021-01-01T00:10:00Z).

  • Oracle DB

    Example URL of a secret with the 'oracle' driver:

    oracle://{user}:{password}@{server}:{port}/{service_name}

    oracle://{user}:{password}@{server}:{port}/?SID={SID_VALUE}

    If the query execution time exceeds 30 seconds, the oracle driver aborts the SQL request, and the following error appears in the collector log: user requested cancel of current operation. To increase the execution time of an SQL query, specify the value of the timeout parameter in seconds in the connection string, for example:

    oracle://{user}:{password}@{server}:{port}/{service_name}?timeout=300

    The :val variable is used as a placeholder in the SQL query.

    When querying Oracle DB, if the identity seed is in the datetime format, you must consider the type of the field in the database and, if necessary, add conversions of the time string in the SQL query to make sure the SQL connector works correctly. For example, if the Connections table in the database has a login_time field, the following conversions are possible:

    • If the login_time field has the TIMESTAMP type, then depending on the configuration of the database, the login_time field may contain a value in the YYYY-MM-DD HH24:MI:SS format, for example, 2021-01-01 00:00:00. In this case, you need to specify 2021-01-01T00:00:00Z in the Identity seed field, and in the SQL query, perform the conversion using the to_timestamp function, for example:

      select * from connections where login_time > to_timestamp(:val, 'YYYY-MM-DD"T"HH24:MI:SS"Z"')

    • If the login_time field has the TIMESTAMP WITH TIME ZONE type, then depending on the configuration of the database, the login_time field may contain a value in the YYYY-MM-DD"T"HH24:MI:SSTZH:TZM format (for example, 2021-01-01T00:00:00+03:00). In this case, you need to specify 2021-01-01T00:00:00+03:00 in the Identity seed field, and in the SQL query, perform the conversion using the to_timestamp_tz function, for example:

      select * from connections_tz where login_time > to_timestamp_tz(:val, 'YYYY-MM-DD"T"HH24:MI:SSTZH:TZM')

      For details about the to_timestamp and to_timestamp_tz functions, please refer to the official Oracle documentation.

    To interact with Oracle DB, you must install the libaio1 Astra Linux package.

  • Firebird SQL

    For example:

    firebirdsql://{user}:{password}@{server}:{port}/{database}

    A question mark (?) is used as a placeholder in the SQL query.

    If a problem occurs when connecting Firebird on Windows, use the full path to the database file, for example:

    firebirdsql://{user}:{password}@{server}:{port}/C:\Users\user\firebird\db.FDB

  • ClickHouse

    When using TLS encryption, by default, the connector works with ClickHouse only on port 9440. If TLS encryption is not used, by default, the connector works with ClickHouse only on port 9000. If TLS encryption mode is configured on the ClickHouse server, but you have selected Disabled in the TLS mode drop-down list of the connector settings (or vice versa), the database connection cannot be established.

    If you want to connect to the KUMA ClickHouse, in the SQL connector settings, specify the PublicPki secret type, which contains the base64-encoded PEM private key and the public key.

    In the settings of the SQL connector for the ClickHouse connection type, select Disabled in the TLS mode drop-down list. Do not select this value if a certificate is used for authentication. If you select Custom CA in the TLS mode drop-down list, you need to specify the ID of a secret of the 'certificate' type in the Identity column field. You also need to select one of the following values in the Authorization type drop-down list:

    • Disabled. If you select this value, you need to leave the Identity column field blank.
    • Plain. Select this value if the Secret separately check box is selected and the ID of a secret of the 'credentials' type is specified in the Identity column field.
    • PublicPki. Select this value if the Secret separately check box is selected and the ID of a secret of the 'PublicPki' type is specified in the Identity column field.

    The Secret separately check box lets you specify the URL separately, not as part of the secret.

A sequential request for database information is supported in SQL queries. For example, if in the Query field, you enter select * from <name of data table> where id > <placeholder>, the value of the Identity seed field is used as the placeholder value the first time you query the table. In addition, the service that utilizes the SQL connector saves the ID of the last read entry, and the ID of this entry will be used as the placeholder value in the next query to the database.
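The sequential-read behavior described above can be sketched with an in-memory SQLite database; the table, columns, and seed value are illustrative:

```python
# Emulate how the SQL connector advances through a table by ID.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table events (id integer, message text)")
con.executemany("insert into events values (?, ?)",
                [(1, "a"), (2, "b"), (3, "c")])

last_id = 1  # first query: the placeholder value comes from the Identity seed field
rows = con.execute("select * from events where id > ? order by id",
                   (last_id,)).fetchall()
print(rows)            # [(2, 'b'), (3, 'c')]
last_id = rows[-1][0]  # the service stores the last read ID for the next poll
```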

Examples of SQL requests

SQLite, Firebird—select * from table_name where id > ?

MSSQL—select * from table_name where id > @p1

MySQL, MariaDB—select * from table_name where id > ?

PostgreSQL, Cockroach—select * from table_name where id > $1

Oracle—select * from table_name where id > :val

Page top
[Topic 220746]

Connector, file type

Expand all | Collapse all

Connectors of the file type are used for getting data from text files when working with Windows and Linux agents. One line of a text file is considered one event; \n is used as the newline character.

If, while creating the collector at the Transport step of the installation wizard, you specified a connector of the file type, then at the Event parsing step, in the Mapping table, you can pass the name of the file being processed by the collector or the path to the file in a KUMA event field. To do this, in the Source column, specify one of the following values:

  • $kuma_fileSourceName to pass the name of the file being processed by the collector in the KUMA event field.
  • $kuma_fileSourcePath to pass the path to the file being processed by the collector in the KUMA event field.

When you use a file connector, these variables in the normalizer work only with destinations of the internal type.

To read Windows files, you need to create a connector of the file type and manually install the agent on Windows. The Windows agent must not read its files in the folder where the agent is installed. The connector will work even with a FAT file system; if the disk is defragmented, the connector re-reads all files from scratch because all inodes of files are reset.

We do not recommend running the agent under an administrator account; read permissions for folders/files must be configured for the user account of the agent. We do not recommend installing the agent on important systems; it is preferable to send the logs and read them on dedicated hosts with the agent.

For each file that the connector of the file type interacts with, a state file (states.ini) is created with the offset, dev, inode, and filename parameters. The state file allows the connector to resume reading from the position where it last stopped instead of starting over when rereading the file. Some special considerations apply to rereading files:

  • If the inode parameter in the state file changes, the connector rereads the corresponding file from the beginning. When the file is deleted and recreated, the inode parameter in the associated state file may remain unchanged. In this case, when rereading the file, the connector resumes reading in accordance with the offset parameter.
  • If the file has been truncated or its size has become smaller, the connector starts reading from the beginning.
  • If the file has been renamed, when rereading the file, the connector resumes reading from the position where the connector last stopped.
  • If the directory with the file has been remounted, when rereading the file, the connector resumes reading from the position where the connector last stopped. You can specify the path to the files with which the connector interacts when configuring the connector in the File path field.

Settings for a connector of the file type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

Type

Connector type: file.

Required setting.

Tags

Tags for resource search.

Optional setting.

File path

The full path to the file that the connector interacts with, for example, /var/log/*som?[1-9].log or c:\folder\logs.*. The following paths are not allowed:

  • `(?i)^[a-zA-Z]:\\Program Files`.
  • `(?i)^[a-zA-Z]:\\Program Files \(x86\)`.
  • `(?i)^[a-zA-Z]:\\Windows`.
  • `(?i)^[a-zA-Z]:\\ProgramData\\Kaspersky Lab\\KUMA`.

File and folder mask templates

Masks:

  • '*'—matches any sequence of characters.
  • '[' [ '^' ] { <range of characters> } ']'—class of characters (may not be left blank).
  • '?'—matches any single character.

Ranges of characters:

  • [0-9] for numerals
  • [a-zA-Z] for Latin alphabet characters

Examples:

  • /var/log/*som?[1-9].log
  • /mnt/dns_logs/*/dns.log
  • /mnt/proxy/access*.log
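The mask syntax above follows shell-style globbing; Python's standard fnmatch module implements the same rules, so it can be used to test a mask (the file names are made up):

```python
from fnmatch import fnmatch

mask = "/var/log/*som?[1-9].log"
print(fnmatch("/var/log/xsomX1.log", mask))  # True: '*'='x', '?'='X', '[1-9]'='1'
print(fnmatch("/var/log/som0.log", mask))    # False: no digit from the 1-9 range before '.log'
```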

Limitations when using prefixes in file paths

Prefixes that cannot be used when specifying paths to files:

  • /*
  • /bin
  • /boot
  • /dev
  • /etc
  • /home
  • /lib
  • /lib64
  • /proc
  • /root
  • /run
  • /sys
  • /tmp
  • /usr/*
  • /usr/bin/
  • /usr/local/*
  • /usr/local/sbin/
  • /usr/local/bin/
  • /usr/sbin/
  • /usr/lib/
  • /usr/lib64/
  • /var/*
  • /var/lib/
  • /var/run/
  • /opt/kaspersky/kuma/

Files are available at the following paths:

  • /opt/kaspersky/kuma/clickhouse/logs/
  • /opt/kaspersky/kuma/mongodb/log/
  • /opt/kaspersky/kuma/victoria-metrics/log/

Limiting the number of files for watching by mask

The number of files simultaneously watched by mask can be limited by the max_user_watches kernel setting. To view the value of this setting, run the following command:

cat /proc/sys/fs/inotify/max_user_watches

If the number of files for watching exceeds the value of the max_user_watches setting, the collector cannot read any more events from the files and the following error is written to the collector log:

Failed to add files for watching {"error": "no space left on device"}

To make sure that the collector continues to work correctly, you can configure the appropriate rotation of files so that the number of files does not exceed the value of the max_user_watches setting, or increase the max_user_watches value.

To increase the value of this setting, run the following commands:

sysctl fs.inotify.max_user_watches=<number of files>

sysctl -p

You can also add the max_user_watches setting to sysctl.conf to make sure the value persists after a restart.

After you increase the value of the max_user_watches setting, the collector resumes correct operation.
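For example, the setting can be persisted as follows (the value 524288 is illustrative, and root privileges are required):

```shell
# Append the setting to sysctl.conf and reload it.
echo "fs.inotify.max_user_watches=524288" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```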

Required setting.

Modification timeout, sec

The time in seconds for which the file must not be updated for KUMA to apply the action specified in the Action after timeout drop-down list to the file. Default value: 0, meaning that if the file is not updated, KUMA does not apply any action to it.

The entered value must not be less than the value that you entered in the Poll interval, ms field on the Advanced settings tab.

Action after timeout

The action that KUMA applies to the file after the time specified in the Modification timeout, sec field elapses:

  • Do nothing. The default value.
  • Add a suffix adds the .kuma_processed extension to the file name and does not process the file even when it is updated.
  • Delete deletes the file.

Auditd

This toggle switch enables the auditd mechanism to group auditd event lines received from the connector into an auditd event.

If you enable this toggle switch, you cannot select a value in the Delimiter drop-down list because \n is automatically selected for the auditd mechanism.

If you enable this toggle switch in the connector settings of the agent, you need to select \n in the Delimiter drop-down list in the connector settings of the collector to which the agent sends events.

The maximum size of a grouped auditd event is approximately 4,174,304 characters.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.

Buffer size

Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB).

Number of handlers

Number of handlers that the service can run simultaneously to process incoming events in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2.

The value must be a positive integer.

File/folder polling mode

Specifies how the connector rereads files in the directory:

  • Monitor changes means the connector rereads files in the directory at an interval in milliseconds specified in the Poll interval, ms field if the files are not being updated. The default value.

    For example, if the files are constantly being updated, and the value of Poll interval, ms is 5000, the connector rereads the files continuously instead of every 5000 milliseconds. If the files are not being updated, the connector rereads them every 5000 milliseconds.

  • Track periodically means the connector rereads files in the directory at an interval in milliseconds specified in the Poll interval, ms field, regardless of whether the files are being updated.

Poll interval, ms

The interval in milliseconds at which the connector rereads files in the directory. The default value of 0 means that the connector rereads files in the directory every 700 milliseconds. In the File/folder polling mode drop-down list, select the mode that the connector must use to reread files in the directory.

The entered value must not be less than the value that you entered on the Basic settings tab in the Modification timeout, sec field.

We recommend entering a value less than the value that you entered in the Event buffer TTL field; a greater value may adversely affect the processing of auditd events.
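The effect of the two polling modes on the reread cadence can be sketched as follows (assumed logic for illustration only, not KUMA source code; the function name is hypothetical):

```python
# Illustrative sketch of the polling modes described above (assumed logic).
def next_poll_delay_ms(mode: str, file_changed: bool, poll_interval_ms: int = 700) -> int:
    """Delay before the connector rereads the directory again."""
    if mode == "monitor_changes" and file_changed:
        return 0  # files are being updated: reread continuously
    return poll_interval_ms  # otherwise reread at the configured interval
```

With the default interval, Monitor changes rereads continuously only while files keep changing, whereas Track periodically always waits out the full interval.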

Character encoding

Character encoding. The default is UTF-8.

Event buffer TTL

Buffer lifetime for auditd event lines, in milliseconds. Auditd event lines enter the KUMA collector and accumulate in the buffer. This allows multiple auditd event lines to be grouped into a single auditd event.

The buffer lifetime countdown begins when the first auditd event line is received or when the previous buffer lifetime expires. Possible values: 700 to 30,000. The default value is 2000.

This field is available if you have enabled the Auditd toggle switch on the Basic settings tab.

The auditd event lines accumulated in the buffer are kept in the RAM of the server. We recommend caution when increasing the buffer size because memory usage by the KUMA collector may become excessive. You can see how much server RAM the KUMA collector is using in KUMA metrics.

If you want a buffer lifetime to exceed 30,000 milliseconds, we recommend using a different auditd event transport. For example, you can use an agent or pre-accumulate auditd events in a file, and then process this file with the KUMA collector.
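The buffering behavior described above can be sketched as follows (assumed logic, not KUMA code; the `add_line` helper and in-memory structures are hypothetical):

```python
import time
from collections import defaultdict

# Sketch: auditd event lines that share a sequence number accumulate in a
# buffer and are flushed as grouped events when the buffer lifetime expires.
BUFFER_TTL_MS = 2000  # default value of the Event buffer TTL setting

buffer = defaultdict(list)  # event sequence number -> accumulated lines
deadline = time.monotonic() + BUFFER_TTL_MS / 1000

def add_line(seq: str, line: str) -> list:
    """Buffer a line; return grouped events once the buffer lifetime expires."""
    global deadline
    buffer[seq].append(line)
    if time.monotonic() < deadline:
        return []
    grouped = list(buffer.values())
    buffer.clear()
    deadline = time.monotonic() + BUFFER_TTL_MS / 1000  # restart the countdown
    return grouped
```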

Transport header

Regular expression for auditd events, which is used to identify auditd event lines. You can use the default value or edit it.

The regular expression must contain the record_type_name, record_type_value, and event_sequence_number groups. If a multi-line auditd event contains a prefix, the prefix is retained for the first line of the auditd event and discarded for the following lines.

You can revert to the default regular expression for auditd events by clicking Reset to default value.
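As an illustration, a regular expression with the three required named groups might look like the sketch below. The pattern itself is an assumption based on the standard auditd line header (`type=... msg=audit(<timestamp>:<sequence>):`); the actual default value in KUMA may differ.

```python
import re

# Hypothetical transport header pattern with the required named groups.
TRANSPORT_HEADER = re.compile(
    r"type=(?P<record_type_name>\S+)\s+"
    r"msg=audit\((?P<record_type_value>\d+\.\d+):(?P<event_sequence_number>\d+)\):"
)

line = "type=SYSCALL msg=audit(1700000000.123:456): arch=c000003e syscall=59"
m = TRANSPORT_HEADER.search(line)
# Lines that share event_sequence_number belong to the same auditd event.
```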

Page top
[Topic 220748]

Connector, 1c-log type

Expand all | Collapse all

Connectors of the 1c-log type are used for getting data from 1C technology logs when working with Linux agents. \n is used as the newline character. The connector accepts only the first line from a multi-line event record.

Settings for a connector of the 1c-log type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

Type

Connector type: 1c-log.

Required setting.

Tags

Tags for resource search.

Optional setting.

Directory path

The full path to the directory with the files that you want to interact with, for example, /var/log/1c/logs/.

Limitations when using prefixes in file paths

Prefixes that cannot be used when specifying paths to files:

  • /*
  • /bin
  • /boot
  • /dev
  • /etc
  • /home
  • /lib
  • /lib64
  • /proc
  • /root
  • /run
  • /sys
  • /tmp
  • /usr/*
  • /usr/bin/
  • /usr/local/*
  • /usr/local/sbin/
  • /usr/local/bin/
  • /usr/sbin/
  • /usr/lib/
  • /usr/lib64/
  • /var/*
  • /var/lib/
  • /var/run/
  • /opt/kaspersky/kuma/

Files are available at the following paths:

  • /opt/kaspersky/kuma/clickhouse/logs/
  • /opt/kaspersky/kuma/mongodb/log/
  • /opt/kaspersky/kuma/victoria-metrics/log/

Required setting.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.

Buffer size

Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB).

File/folder polling mode

Specifies how the connector rereads files in the directory:

  • Monitor changes means the connector rereads files in the directory at an interval in milliseconds specified in the Poll interval, ms field if the files are not being updated. This is the default value.

    For example, if the files are constantly being updated, and the value of Poll interval, ms is 5000, the connector rereads the files continuously instead of every 5000 milliseconds. If the files are not being updated, the connector rereads them every 5000 milliseconds.

  • Track periodically means the connector rereads files in the directory at an interval in milliseconds specified in the Poll interval, ms field, regardless of whether the files are being updated or not.

Poll interval, ms

The interval in milliseconds at which the connector rereads files in the directory. The default value is 0, which means the connector rereads files in the directory every 700 milliseconds. In the File/folder polling mode drop-down list, select the mode the connector must use to reread files in the directory.

Character encoding

Character encoding. The default is UTF-8.

Connector operation diagram:

  1. All 1C technology log files are searched. Log file requirements:
    • Files with the LOG extension are created in the log directory (/var/log/1c/logs/ by default) within a subdirectory for each process.

      Example of a supported 1C technology log structure


    • Events are logged to a file for an hour; after that, the next log file is created.
    • The file names have the following format: <YY><MM><DD><HH>.log. For example, 22111418.log is a file created in 2022, in the 11th month, on the 14th at 18:00.
    • Each event starts with the event time in the following format: <mm>:<ss>.<microseconds>-<duration in microseconds>.
  2. The processed files are discarded. Information about processed files is stored in the file /<collector working directory>/1c_log_connector/state.json.
  3. Processing of the new events starts, and the event time is converted to the RFC3339 format.
  4. The next file in the queue is processed.
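The file name format and event time conversion from steps 1 and 3 can be sketched as follows (the helper names are hypothetical; microsecond precision is assumed from the format above):

```python
from datetime import datetime

# Parse the <YY><MM><DD><HH>.log file name format described above.
def parse_log_filename(name: str) -> datetime:
    return datetime.strptime(name.removesuffix(".log"), "%y%m%d%H")

# Combine the file's hour with an event time prefix of the form
# "<mm>:<ss>.<microseconds>-<duration in microseconds>" into RFC3339-style time.
def event_time_rfc3339(file_hour: datetime, prefix: str) -> str:
    clock, _, _duration = prefix.partition("-")
    minutes, _, rest = clock.partition(":")
    seconds, _, micros = rest.partition(".")
    t = file_hour.replace(minute=int(minutes), second=int(seconds),
                          microsecond=int(micros))
    return t.isoformat()
```

For example, the file 22111418.log with the line prefix 55:44.123456-789 yields an event time within the 18:00 hour of November 14, 2022.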

Connector limitations:

  • Installation of a collector with a 1c-log connector is not supported in a Windows operating system. To set up transfer of 1C log files for processing by the KUMA collector:
    1. On the Windows server, grant read access over the network to the folder with the 1C log files.
    2. On the Linux server, mount the shared folder with the 1C log files on the Windows server (see the list of supported operating systems).
    3. On the Linux server, install the collector that you want to process 1C log files from the mounted shared folder.
  • Only the first line from a multi-line event record is processed.
  • The normalizer processes only the following types of events:
    • ADMIN
    • ATTN
    • CALL
    • CLSTR
    • CONN
    • DBMSSQL
    • DBMSSQLCONN
    • DBV8DBENG
    • EXCP
    • EXCPCNTX
    • HASP
    • LEAKS
    • LIC
    • MEM
    • PROC
    • SCALL
    • SCOM
    • SDBL
    • SESN
    • SINTEG
    • SRVC
    • TLOCK
    • TTIMEOUT
    • VRSREQUEST
    • VRSRESPONSE
Page top
[Topic 244775]

Connector, 1c-xml type

Expand all | Collapse all

Connectors of the 1c-xml type are used for getting data from 1C registration logs when working with Linux agents. When the connector handles multi-line events, it converts them into single-line events.

Settings for a connector of the 1c-xml type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

Type

Connector type: 1c-xml.

Required setting.

Tags

Tags for resource search.

Optional setting.

Directory path

The full path to the directory with the files that you want to interact with, for example, /var/log/1c/logs/.

Limitations when using prefixes in file paths

Prefixes that cannot be used when specifying paths to files:

  • /*
  • /bin
  • /boot
  • /dev
  • /etc
  • /home
  • /lib
  • /lib64
  • /proc
  • /root
  • /run
  • /sys
  • /tmp
  • /usr/*
  • /usr/bin/
  • /usr/local/*
  • /usr/local/sbin/
  • /usr/local/bin/
  • /usr/sbin/
  • /usr/lib/
  • /usr/lib64/
  • /var/*
  • /var/lib/
  • /var/run/
  • /opt/kaspersky/kuma/

Files are available at the following paths:

  • /opt/kaspersky/kuma/clickhouse/logs/
  • /opt/kaspersky/kuma/mongodb/log/
  • /opt/kaspersky/kuma/victoria-metrics/log/

Required setting.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.

Buffer size

Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB).

File/folder polling mode

Specifies how the connector rereads files in the directory:

  • Monitor changes means the connector rereads files in the directory at an interval in milliseconds specified in the Poll interval, ms field if the files are not being updated. This is the default value.

    For example, if the files are constantly being updated, and the value of Poll interval, ms is 5000, the connector rereads the files continuously instead of every 5000 milliseconds. If the files are not being updated, the connector rereads them every 5000 milliseconds.

  • Track periodically means the connector rereads files in the directory at an interval in milliseconds specified in the Poll interval, ms field, regardless of whether the files are being updated or not.

Poll interval, ms

The interval in milliseconds at which the connector rereads files in the directory. The default value is 0, which means the connector rereads files in the directory every 700 milliseconds. In the File/folder polling mode drop-down list, select the mode the connector must use to reread files in the directory.

Character encoding

Character encoding. The default is UTF-8.

Connector operation diagram:

  1. The files containing 1C logs with the XML extension are searched within the specified directory. Logs are placed in the directory either manually or using an application written in the 1C language, for example, using the ВыгрузитьЖурналРегистрации() function. The connector only supports logs received this way. For more information on how to obtain 1C logs, see the official 1C documentation.
  2. Files are sorted by the last modification time in ascending order. All the files modified before the last read are discarded.

    Information about processed files is stored in the file /<collector working directory>/1c_xml_connector/state.ini and has the following format: "offset=<number>\ndev=<number>\ninode=<number>".

  3. Events are defined in each unread file.
  4. Events from the file are processed one by one. Multi-line events are converted to single-line events.
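The state bookkeeping from step 2 can be sketched as follows (assumed helpers, not KUMA code): the connector stores the read offset together with the device and inode numbers that identify the file the offset belongs to.

```python
# Write the state file in the "offset=<number>\ndev=<number>\ninode=<number>"
# format described above (hypothetical helper functions).
def save_state(path: str, offset: int, dev: int, inode: int) -> None:
    with open(path, "w") as f:
        f.write(f"offset={offset}\ndev={dev}\ninode={inode}")

def load_state(path: str) -> dict:
    with open(path) as f:
        return {key: int(value) for key, value in
                (line.split("=", 1) for line in f.read().splitlines())}
```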

Connector limitations:

  • Installation of a collector with a 1c-xml connector is not supported in a Windows operating system. To set up transfer of 1C log files for processing by the KUMA collector:
    1. On the Windows server, grant read access over the network to the folder with the 1C log files.
    2. On the Linux server, mount the shared folder with the 1C log files on the Windows server (see the list of supported operating systems).
    3. On the Linux server, install the collector that you want to process 1C log files from the mounted shared folder.
  • Files with an incorrect event format are not read. For example, if event tags in the file are in Russian, the collector does not read such events.

    Example of a correct XML file with an event.

    <?xml version="1.0" encoding="UTF-8"?>
    <v8e:EventLog xmlns:v8e="http://v8.1c.ru/eventLog"
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <v8e:Event>
    <v8e:Level>Information</v8e:Level>
    <v8e:Date>2022-12-07T01:55:44+03:00</v8e:Date>
    <v8e:ApplicationName>generator.go</v8e:ApplicationName>
    <v8e:ApplicationPresentation>generator.go</v8e:ApplicationPresentation>
    <v8e:Event>Test event type: Count test</v8e:Event>
    <v8e:EventPresentation></v8e:EventPresentation>
    <v8e:User>abcd_1234</v8e:User>
    <v8e:UserName>TestUser</v8e:UserName>
    <v8e:Computer>Test OC</v8e:Computer>
    <v8e:Metadata></v8e:Metadata>
    <v8e:MetadataPresentation></v8e:MetadataPresentation>
    <v8e:Comment></v8e:Comment>
    <v8e:Data>
    <v8e:Name></v8e:Name>
    <v8e:CurrentOSUser></v8e:CurrentOSUser>
    </v8e:Data>
    <v8e:DataPresentation></v8e:DataPresentation>
    <v8e:TransactionStatus>NotApplicable</v8e:TransactionStatus>
    <v8e:TransactionID></v8e:TransactionID>
    <v8e:Connection>0</v8e:Connection>
    <v8e:Session></v8e:Session>
    <v8e:ServerName>kuma-test</v8e:ServerName>
    <v8e:Port>80</v8e:Port>
    <v8e:SyncPort>0</v8e:SyncPort>
    </v8e:Event>
    </v8e:EventLog>

    Example of a processed event.


  • If a file read by the connector is enriched with the new events and if this file is not the last file read in the directory, all events from the file are processed again.
Page top
[Topic 244776]

Connector, diode type

Connectors of the diode type are used for unidirectional data transmission in ICS networks using data diodes. Settings for a connector of the diode type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

Type

Connector type: diode.

Required setting.

Tags

Tags for resource search.

Optional setting.

Directory with events from the data diode

Full path to the directory on the KUMA collector server, into which the data diode moves files with events from the isolated network segment. After the connector has read these files, the files are deleted from the directory. Maximum length of the path: 255 Unicode characters.

Limitations when using prefixes in paths

Prefixes that cannot be used when specifying paths to files:

  • /*
  • /bin
  • /boot
  • /dev
  • /etc
  • /home
  • /lib
  • /lib64
  • /proc
  • /root
  • /run
  • /sys
  • /tmp
  • /usr/*
  • /usr/bin/
  • /usr/local/*
  • /usr/local/sbin/
  • /usr/local/bin/
  • /usr/sbin/
  • /usr/lib/
  • /usr/lib64/
  • /var/*
  • /var/lib/
  • /var/run/
  • /opt/kaspersky/kuma/

Files are available at the following paths:

  • /opt/kaspersky/kuma/clickhouse/logs/
  • /opt/kaspersky/kuma/mongodb/log/
  • /opt/kaspersky/kuma/victoria-metrics/log/

Required setting.

Delimiter

The character that marks the boundary between events:

  • \n
  • \t
  • \0

If you do not select a value in this drop-down list, \n is selected by default.

You must select the same value in the Delimiter drop-down list in the settings of the connector and the destination being used to transmit events from the isolated network segment using a data diode.
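The delimiter splits the incoming byte stream into individual events, which is why both sides of the data diode must agree on it. A minimal sketch (assumed logic; the function name is hypothetical):

```python
# Map the UI values to the actual byte delimiters.
DELIMITERS = {"\\n": b"\n", "\\t": b"\t", "\\0": b"\x00"}

def split_events(stream: bytes, delimiter: str = "\\n") -> list:
    """Split a received byte stream into events on the configured delimiter."""
    sep = DELIMITERS[delimiter]
    return [chunk for chunk in stream.split(sep) if chunk]
```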

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.

Number of handlers

Number of handlers that the service can run simultaneously to process response rules in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2.

The value must be a positive integer.

Poll interval, sec

Interval at which the files are read from the directory containing events from the data diode. The default value is 2 seconds.

Character encoding

Character encoding. The default is UTF-8.

Compression

Drop-down list for configuring Snappy compression:

  • Disabled. This value is selected by default.
  • Use Snappy.

You must select the same value in the Compression drop-down list in the settings of the connector and the destination being used to transmit events from the isolated network segment using a data diode.

Page top
[Topic 232912]

Connector, ftp type

Connectors of the ftp type are used for getting data over File Transfer Protocol (FTP) when working with Windows and Linux agents. Settings for a connector of the ftp type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

Type

Connector type: ftp.

Required setting.

Tags

Tags for resource search.

Optional setting.

URL

URL of a file or a file mask that begins with the 'ftp://' scheme. You can use the * ? [...] wildcards in the file mask.

File mask templates

Masks:

  • '*'—matches any sequence of characters.
  • '[' [ '^' ] { <range of characters> } ']'—class of characters (may not be left blank).
  • '?'—matches any single character.

Ranges of characters:

  • [0-9] for numerals
  • [a-zA-Z] for Latin alphabet characters

Examples:

  • /var/log/*som?[1-9].log
  • /mnt/dns_logs/*/dns.log
  • /mnt/proxy/access*.log

If the URL does not contain the port number of the FTP server, port 21 is automatically specified.

Required setting.
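The wildcard syntax above matches shell-style globbing, so Python's fnmatch can be used to check a mask quickly (the file names below are hypothetical):

```python
from fnmatch import fnmatch

# '*' matches any sequence, '?' a single character, '[1-9]' a character class.
mask = "*som?[1-9].log"
names = ("mysome3.log", "mysom3.log", "other.log")
matches = [name for name in names if fnmatch(name, mask)]
# Only "mysome3.log" satisfies the mask: '?' consumes "e" and '[1-9]' consumes "3".
```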

Secret

The secret that stores the credentials for connecting to the FTP server.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.

Character encoding

Character encoding. The default is UTF-8.

Page top
[Topic 220749]

Connector, nfs type

Connectors of the nfs type are used for getting data over Network File System (NFS) when working with Windows and Linux agents. Settings for a connector of the nfs type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

Type

Connector type: nfs.

Required setting.

Tags

Tags for resource search.

Optional setting.

URL

Path to the remote directory in the nfs://<host name>/<path> format.

Required setting.

File name mask

A mask used to filter files containing events. The following wildcards are acceptable: "*", "?", "[...]".

Poll interval, sec

The interval in seconds at which files are re-read from the remote system. The default value is 0.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.

Character encoding

Character encoding. The default is UTF-8.

Page top
[Topic 220750]

Connector, wmi type

Connectors of the wmi type are used for getting data using Windows Management Instrumentation when working with Windows agents. Settings for a connector of the wmi type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

Type

Connector type: wmi.

Required setting.

Tags

Tags for resource search.

Optional setting.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

URL

URL of the collector that you created to receive data using Windows Management Instrumentation, for example, kuma-collector.example.com:7221.

When a collector is created, an agent is automatically created that will get data on the remote device and forward it to the collector service. If you know which server the collector service will be installed on, the URL is known in advance. You can enter the URL of the collector in the URL field after completing the installation wizard. To do so, you first need to copy the URL of the collector in the Resources → Active services section.

Required setting.

Default credentials

No value. You need to specify credentials for connecting to hosts in the Remote hosts table.

Remote hosts

Settings of remote Windows devices to connect to.

  • Server is the IP address or name of the device from which you want to receive data, for example, machine-1.

    Required setting.

  • Domain is the name of the domain in which the remote device resides. For example, example.com.

    Required setting.

  • Log type lists the names of the Windows logs that you want to get. By default, this drop-down list includes only preconfigured logs, but you can add custom logs to the list. To do so, enter the names of the custom logs in the Windows logs field, and then press ENTER. KUMA service and resource configurations may require additional changes in order to process custom logs correctly.

    Logs that are available by default:

    • Application
    • ForwardedEvents
    • Security
    • System
    • HardwareEvents

    If a WMI connection uses at least one log with an incorrect name, the agent that uses the connector does not receive events from all the logs within this connection, even if the names of other logs are specified correctly. The WMI agent connections for which all log names are specified correctly will work properly.

  • Secret is the account credentials for accessing the remote Windows asset with permissions to read logs. If you do not select an option in this drop-down list, the credentials from the secret selected in the Default credentials drop-down list are used. The login in the secret must be specified without the domain. The domain value for access to the host is taken from the Domain column of the Remote hosts table.

    You can select an existing secret or create a new secret. To create a new secret, select Create new.

    If you want to edit the settings of an existing secret, click the pencil icon next to it.

    How to create a secret?

    To create a secret:

    1. In the Name field, enter the name of the secret.
    2. In the User and Password fields, enter the credentials of the user account that the Agent will use to connect to the connector.
    3. If necessary, enter a description of the secret in the Description field.
    4. Click the Create button.

    The secret is added and displayed in the Secret drop-down list.

You can add multiple remote Windows devices or remove a remote Windows device. To add a remote Windows device, click +Add. To remove a remote Windows device, select the check box next to it and click Delete.

Advanced settings tab

Setting

Description

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.

Character encoding

Character encoding. The default is UTF-8.

TLS mode

TLS encryption mode. When using TLS encryption, you cannot specify an IP address in the URL field on the Basic settings tab. Available values:

  • Disabled means TLS encryption is not used. This value is selected by default.
  • Enabled means TLS encryption is used, but certificates are not verified.
  • With verification means TLS encryption is used with verification of the certificate signed with the KUMA root certificate. The root certificate and key of KUMA are created automatically during application installation and are stored on the KUMA Core server in the /opt/kaspersky/kuma/core/certificates/ directory.

Compression

Drop-down list for configuring Snappy compression:

  • Disabled. This value is selected by default.
  • Use Snappy.

If you edit a connector of this type, the TLS mode and Compression settings are visible and available on the connector resource as well as the collector. If you are using a connector of this type on a collector, the values of TLS mode and Compression settings are sent to the destination of automatically created agents.

Receiving events from a remote device

Conditions for receiving events from a remote Windows device hosting a KUMA agent:

  • To start the KUMA agent on the remote device, you must use an account with the “Log on as a service” permissions.
  • To receive events from the KUMA agent, you must use an account with Event Log Readers permissions. For domain servers, one such user account can be created so that a group policy can be used to distribute its rights to read logs to all servers and workstations in the domain.
  • TCP ports 135, 445, and 49152–65535 must be opened on the remote Windows devices.
  • You must run the following services on the remote machines:
    • Remote Procedure Call (RPC)
    • RPC Endpoint Mapper
Page top
[Topic 220751]

Connector, wec type

Connectors of the wec type are used for getting data using Windows Event Forwarding (WEF) and Windows Event Collector (WEC), or local operating system logs of a Windows host, when working with Windows agents. Settings for a connector of the wec type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

Type

Connector type: wec.

Required setting.

Tags

Tags for resource search.

Optional setting.

URL

URL of the collector that you created to receive data using Windows Event Collector, for example, kuma-collector.example.com:7221.

When a collector is created, an agent is automatically created that will get data on the remote device and forward it to the collector service. If you know which server the collector service will be installed on, the URL is known in advance. You can enter the URL of the collector in the URL field after completing the installation wizard. To do so, you first need to copy the URL of the collector in the Resources → Active services section.

Required setting.

Windows logs

The names of the Windows logs that you want to get. By default, this drop-down list includes only preconfigured logs, but you can add custom logs to the list. To do so, enter the names of the custom logs in the Windows logs field, and then press ENTER. KUMA service and resource configurations may require additional changes in order to process custom logs correctly.

Preconfigured logs:

  • Application
  • ForwardedEvents
  • Security
  • System
  • HardwareEvents

If the name of at least one log is specified incorrectly, the agent using the connector does not receive events from any log, even if the names of other logs are correct.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.

Character encoding

Character encoding. The default is UTF-8.

TLS mode

TLS encryption mode. When using TLS encryption, you cannot specify an IP address in the URL field on the Basic settings tab. Available values:

  • Disabled means TLS encryption is not used. This value is selected by default.
  • Enabled means TLS encryption is used, but certificates are not verified.
  • With verification means TLS encryption is used with verification of the certificate signed with the KUMA root certificate. The root certificate and key of KUMA are created automatically during application installation and are stored on the KUMA Core server in the /opt/kaspersky/kuma/core/certificates/ directory.

Compression

Drop-down list for configuring Snappy compression:

  • Disabled. This value is selected by default.
  • Use Snappy.

If you edit a connector of this type, the TLS mode and Compression settings are visible and available on the connector resource as well as the collector. If you are using a connector of this type on a collector, the values of TLS mode and Compression settings are sent to the destination of automatically created agents.

To start the KUMA agent on the remote device, you must use a service account with the “Log on as a service” permissions. To receive events from the operating system log, the service user account must also have Event Log Readers permissions.

You can create one user account with “Log on as a service” and “Event Log Readers” permissions, and then use a group policy to extend the rights of this account to read the logs to all servers and workstations in the domain.

We recommend that you disable interactive logon for the service account.

Page top
[Topic 220752]

Connector, etw type

Connectors of the etw type are used for getting extended logs of DNS servers. Settings for a connector of the etw type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

Type

Connector type: etw.

Required setting.

Tags

Tags for resource search.

Optional setting.

URL

URL of the DNS server.

Required setting.

Session name

Session name that corresponds to the ETW provider: Microsoft-Windows-DNSServer {EB79061A-A566-4698-9119-3ED2807060E7}.

If the session name is specified incorrectly in a connector of the etw type, if an incorrect provider is specified in the session, or if an incorrect method is specified for sending events (to send events correctly, on the Windows Server side, you must select the "Real time" or "File and Real time" mode), events do not arrive from the agent, an error is recorded in the agent log on Windows, and the status of the agent remains green. In this case, no new attempts to get events are made every 60 seconds. If you modify session settings on the Windows side, you must restart the etw agent and/or the session for the changes to take effect.

For details about specifying session settings on the Windows side to receive DNS server events, see the Configuring receipt of DNS server events using the ETW agent section.

Required setting.

Extract event information

This toggle switch enables the extraction of the minimum set of event information that can be obtained without having to download third-party metadata from the disk. This method helps conserve CPU resources on the computer with the agent. By default, this toggle switch is enabled and all event data is extracted.

Extract event properties

This toggle switch enables the extraction of event properties. If this toggle switch is disabled, event properties are not extracted, which helps save CPU resources on the machine with the agent. By default, this toggle switch is enabled and event properties are extracted. You can enable the Extract event properties switch only if the Extract event information toggle switch is enabled.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.

Character encoding

Character encoding. The default is UTF-8.

TLS mode

TLS encryption mode. When using TLS encryption, you cannot specify an IP address in the URL field on the Basic settings tab. Available values:

  • Disabled means TLS encryption is not used. This value is selected by default.
  • Enabled means TLS encryption is used, but certificates are not verified.
  • With verification means TLS encryption is used with verification of the certificate signed with the KUMA root certificate. The root certificate and key of KUMA are created automatically during application installation and are stored on the KUMA Core server in the /opt/kaspersky/kuma/core/certificates/ directory.

Compression

Drop-down list for configuring Snappy compression:

  • Disabled. This value is selected by default.
  • Use Snappy.

When you edit a connector of this type, the TLS mode and Compression settings are available both in the connector resource and in the collector. If you use a connector of this type in a collector, the values of the TLS mode and Compression settings are passed to the destinations of automatically created agents.

Page top
[Topic 275982]

Connector, snmp type

Connectors of the snmp type are used for getting data over the Simple Network Management Protocol (SNMP) when working with Windows and Linux agents. To process events received over SNMP, you must use the json normalizer. Supported SNMP versions:

  • snmpV1
  • snmpV2
  • snmpV3

An agent can use only one snmp connector created in the agent settings. If you need to use multiple snmp connectors, create each of them as a separate resource and select it in the connection settings.

Available settings for a connector of the snmp type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

Type

Connector type: snmp.

Required setting.

Tags

Tags for resource search.

Optional setting.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

SNMP resource

Settings for connecting to an SNMP resource:

  • SNMP version is the version of the SNMP protocol being used.

    Required setting.

  • Host is the name or IP address of the host. Possible formats:
    • <host name>
    • <IPv4 address>
    • <IPv6 address>

    Required setting.

  • Port is the port number to be used when connecting to the host. Typical values are 161 or 162.

    Required setting.

  • Secret is the secret that stores the credentials for connecting over the Simple Network Management Protocol. The secret type must match the SNMP version.

    You can select an existing secret or create a new secret. To create a new secret, select Create new.

    If you want to edit the settings of an existing secret, click the pencil icon next to it.

    How to create a secret?

    To create a secret:

    1. In the Name field, enter the name of the secret.
    2. In the User and Password fields, enter the credentials of the user account that the Agent will use to connect to the connector.
    3. If necessary, enter a description of the secret in the Description field.
    4. Click the Create button.

    The secret is added and displayed in the Secret drop-down list.

    Required setting.

You can add multiple connections to SNMP resources or delete an SNMP resource connection. To create a connection to an SNMP resource, click the + SNMP resource button. To delete a connection to an SNMP resource, click the delete icon next to the SNMP resource.

Settings

Rules for naming the received data, according to which OIDs (object identifiers) are converted to the keys with which the normalizer can interact. Available settings:

  • Parameter name is the name for the data type, for example, Host name or Host uptime.

    Required setting.

  • OID is a unique identifier that determines where to look for the required data at the event source, for example, 1.3.6.1.2.1.1.5.

    Required setting.

  • Key is a unique identifier returned in response to a request to the device with the value of the requested parameter, for example, sysName. You can reference the key when normalizing data.

    Required setting.

  • If the MAC address check box is selected, KUMA correctly decodes data where the OID contains information about the MAC address in OctetString format. After decoding, the MAC address is converted to a String value of the XX:XX:XX:XX:XX:XX format.

You can do the following with rules:

  • Add multiple rules. To add a rule, click the +Add button.
  • Delete rules. To delete a rule, select the check box next to it and click Delete.
  • Clear rule settings. To do so, click the Clear all values button.
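The MAC address option described above converts an OctetString value into a readable address of the XX:XX:XX:XX:XX:XX format. A minimal Python sketch of that conversion (illustrative only; this is not KUMA code):

```python
def decode_mac(octets: bytes) -> str:
    """Convert a 6-byte SNMP OctetString into the XX:XX:XX:XX:XX:XX format."""
    # Render each byte as a two-digit uppercase hexadecimal value.
    return ":".join(f"{b:02X}" for b in octets)

print(decode_mac(bytes([0x00, 0x1A, 0x2B, 0x3C, 0x4D, 0x5E])))  # 00:1A:2B:3C:4D:5E
```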

Advanced settings tab

Setting

Description

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.

Character encoding

Character encoding. The default is UTF-8.

Page top
[Topic 220753]

Connector, snmp-trap type

Connectors of the snmp-trap type are used for passively receiving events using SNMP traps when working with Windows and Linux agents. The connector receives snmp-trap events and prepares them for normalization by mapping SNMP object IDs to temporary keys. Then the message is passed to the JSON normalizer, where the temporary keys are mapped to the KUMA fields and an event is generated. To process events received over SNMP, you must use the json normalizer. Supported SNMP versions:

  • snmpV1
  • snmpV2

Settings for a connector of the snmp-trap type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

Type

Connector type: snmp-trap.

Required setting.

Tags

Tags for resource search.

Optional setting.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

SNMP resource

Connection settings for receiving snmp-trap events:

  • SNMP version is the version of the SNMP protocol being used:
    • snmpV1
    • snmpV2

    For example, Windows uses the snmpV2 version of the SNMP protocol by default.

    Required setting.

  • URL is the URL for receiving SNMP trap events. You can enter a URL in one of the following formats:
    • <host name>:<port number>
    • <IPv4 address>:<port number>
    • <IPv6 address>:<port number>
    • :<port number>

    Required setting.

You can add multiple connections or delete a connection. To add a connection, click the + SNMP resource button. To remove an SNMP resource, click the delete icon next to it.

Settings

Rules for naming the received data, according to which OIDs (object identifiers) are converted to the keys with which the normalizer can interact. Available settings:

  • Parameter name is the name for the data type, for example, Host name or Host uptime.

    Required setting.

  • OID is a unique identifier that determines where to look for the required data at the event source, for example, 1.3.6.1.2.1.1.5.

    Required setting.

  • Key is a unique identifier returned in response to a request to the device with the value of the requested parameter, for example, sysName. You can reference the key when normalizing data.

    Required setting.

  • If the MAC address check box is selected, KUMA correctly decodes data where the OID contains information about the MAC address in OctetString format. After decoding, the MAC address is converted to a String value of the XX:XX:XX:XX:XX:XX format.

You can do the following with rules:

  • Add multiple rules. To add a rule, click the +Add button.
  • Delete rules. To delete a rule, select the check box next to it and click Delete.
  • Clear rule settings. To do so, click the Clear all values button.
  • Populate the table with mappings for OID values received in WinEventLog logs. To do this, click the Apply OIDs for WinEventLog button.

    If more data needs to be determined and normalized in the incoming events, add to the table rows containing OID objects and their keys.

    Data is processed according to the allow list principle: objects that are not specified in the table are not sent to the normalizer for further processing.
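The allow-list mapping described above can be sketched in Python. The OID table below is a hypothetical example mirroring rules configured in the connector; objects absent from the table are dropped before normalization:

```python
# Hypothetical OID-to-key table, mirroring the rules configured in the connector.
OID_KEYS = {
    "1.3.6.1.2.1.1.5": "sysName",
    "1.3.6.1.2.1.1.3": "sysUpTime",
}

def map_varbinds(varbinds):
    """Keep only OIDs present in the table (allow-list principle)
    and rename them to the keys the normalizer can interact with."""
    return {OID_KEYS[oid]: value for oid, value in varbinds.items() if oid in OID_KEYS}

received = {
    "1.3.6.1.2.1.1.5": "dns-server-01",
    "1.3.6.1.4.1.9999.1": "ignored",  # not in the table, so it is dropped
}
print(map_varbinds(received))  # {'sysName': 'dns-server-01'}
```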

Advanced settings tab

Setting

Description

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.

Character encoding

Character encoding. The default is UTF-8.

When receiving snmp-trap events from Windows with Russian localization, if you encounter invalid characters in the event, we recommend changing the character encoding in the snmp-trap connector to Windows 1251.

In this section

Configuring the source of SNMP trap messages for Windows

Page top
[Topic 239700]

Configuring the source of SNMP trap messages for Windows

Configuring a Windows device to send SNMP trap messages to the KUMA collector proceeds in stages:

  1. Configuring and starting the SNMP and SNMP trap services
  2. Configuring the Event to Trap Translator service

Events from the source of SNMP trap messages must be received by the KUMA collector, which uses a connector of the snmp-trap type and a json normalizer.

In this section

Configuring and starting the SNMP and SNMP trap services

Configuring the Event to Trap Translator service

Page top
[Topic 239863]

Configuring and starting the SNMP and SNMP trap services

To configure and start the SNMP and SNMP trap services in Windows 10:

  1. Open Settings → Apps → Apps and features → Optional features → Add feature → Simple Network Management Protocol (SNMP) and click Install.
  2. Wait for the installation to complete and restart your computer.
  3. Make sure that the SNMP service is running. If any of the following services are not running, enable them:
    • Services → SNMP Service.
    • Services → SNMP Trap.
  4. Right-click Services → SNMP Service, and in the context menu select Properties. Specify the following settings:
    • On the Log On tab, select the Local System account check box.
    • On the Agent tab, fill in the Contact (for example, specify User-win10) and Location (for example, specify detroit) fields.
    • On the Traps tab:
      • In the Community Name field, enter public and click Add to list.
      • In the Trap destination field, click Add, specify the IP address or host of the KUMA server on which the collector that waits for SNMP events is deployed, and click Add.
    • On the Security tab:
      • Select the Send authentication trap check box.
      • In the Accepted community names table, click Add, enter public as the Community Name, and specify READ WRITE as the Community rights.
      • Select the Accept SNMP packets from any hosts check box.
  5. Click Apply and confirm your selection.
  6. Right-click Services → SNMP Service and select Restart.

To configure and start the SNMP and SNMP trap services in Windows XP:

  1. Open Start → Control Panel → Add or Remove Programs → Add / Remove Windows Components → Management and Monitoring Tools → Details.
  2. Select Simple Network Management Protocol and WMI SNMP Provider, and then click OK → Next.
  3. Wait for the installation to complete and restart your computer.
  4. Make sure that the SNMP service is running. If any of the following services are not running, enable them by setting the Startup type to Automatic:
    • Services → SNMP Service.
    • Services → SNMP Trap.
  5. Right-click Services → SNMP Service, and in the context menu select Properties. Specify the following settings:
    • On the Log On tab, select the Local System account check box.
    • On the Agent tab, fill in the Contact (for example, specify User-win10) and Location (for example, specify detroit) fields.
    • On the Traps tab:
      • In the Community Name field, enter public and click Add to list.
      • In the Trap destination field, click Add, specify the IP address or host of the KUMA server on which the collector that waits for SNMP events is deployed, and click Add.
    • On the Security tab:
      • Select the Send authentication trap check box.
      • In the Accepted community names table, click Add, enter public as the Community Name, and specify READ WRITE as the Community rights.
      • Select the Accept SNMP packets from any hosts check box.
  6. Click Apply and confirm your selection.
  7. Right-click Services → SNMP Service and select Restart.

Changing the port for the SNMP trap service

You can change the SNMP trap service port if necessary.

To change the port of the SNMP trap service:

  1. Open the C:\Windows\System32\drivers\etc folder.
  2. Open the services file in Notepad as an administrator.
  3. In the line for the snmptrap service, specify the port of the snmp-trap connector that is added to the KUMA collector.
  4. Save the file.
  5. Open the Control Panel and select Administrative Tools → Services.
  6. Right-click SNMP Service and select Restart.
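The services file maps service names to ports in the format `<service name> <port>/<protocol>`. For example, to move the SNMP trap service from the default port 162 to port 5162 (5162 here is an arbitrary example value that must match the port of the snmp-trap connector), the snmptrap line would look like this:

```
snmptrap    5162/udp    snmp-trap    # SNMP trap port changed from the default 162
```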
Page top
[Topic 239864]

Configuring the Event to Trap Translator service

To configure the Event to Trap Translator service that translates Windows events to SNMP trap messages:

  1. In the command line, type evntwin and press Enter.
  2. Under Configuration type, select Custom, and click the Edit button.
  3. In the Event sources group of settings, use the Add button to find and add the events that you want to send to the KUMA collector with the snmp-trap connector installed.
  4. Click the Settings button, and in the window that opens, select the Don't apply throttle check box, and click OK.
  5. Click Apply and confirm your selection.
Page top
[Topic 239865]

Connector, kata/edr type

Connectors of the kata/edr type are used for getting KEDR data via the API. Settings for a connector of the kata/edr type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

Type

Connector type: kata/edr.

Required setting.

Tags

Tags for resource search.

Optional setting.

URL

URL that you want to connect to. The following URL formats are supported:

  • <host name>:<port number>
  • <IPv4 address>:<port number>
  • <IPv6 address>:<port number>

    You can specify IPv6 addresses in the following format: [<IPv6 address>%<interface>]:<port number>, for example, [fe80::5054:ff:fe4d:ba0c%eth0]:4222.

You can add multiple URLs or remove a URL. To add a URL, click the + Add button. To remove a URL, click the delete icon next to it.

Required setting.

Secret

Secret that stores the credentials for connecting to the KATA/EDR server. You can select an existing secret or create a new secret. To create a new secret, select Create new.

If you want to edit the settings of an existing secret, click the pencil icon next to it.

How to create a secret?

To create a secret:

  1. In the Name field, enter the name of the secret.
  2. In the User and Password fields, enter the credentials of the user account that the Agent will use to connect to the connector.
  3. If necessary, enter a description of the secret in the Description field.
  4. Click the Create button.

The secret is added and displayed in the Secret drop-down list.

Required setting.

External ID

Identifier for external systems. KUMA automatically generates an ID and populates this field with it.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.

Character encoding

Character encoding. We recommend configuring a conversion only if you find invalid characters in the fields of the normalized event. By default, no value is selected.

Number of events

Maximum number of events in one request. By default, the value set on the KATA/EDR server is used.

Events fetch timeout

The time in seconds to wait for receipt of events from the KATA/EDR server. Default value: 0, which means that the value set on the KATA/EDR server is used.

Client timeout

Time in seconds to wait for a response from the KATA/EDR server. Default value: 0, corresponding to 1800 seconds.

KEDRQL filter

Filter of requests to the KATA/EDR server. For more details on the query language, please refer to the KEDR Help.

Page top
[Topic 268052]

Connector, vmware type

Expand all | Collapse all

Connectors of the vmware type are used for getting VMware vCenter data via the API. Settings for a connector of the vmware type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

Type

Connector type: vmware.

Required setting.

Tags

Tags for resource search.

Optional setting.

URL

URL of the VMware API. You need to include the hostname and port number in the URL. You can only specify one URL.

Required setting.

VMware credentials

Secret that stores the user name and password for connecting to the VMware API. You can select an existing secret or create a new secret. To create a new secret, select Create new.

If you want to edit the settings of an existing secret, click the pencil icon next to it.

How to create a secret?

To create a secret:

  1. In the Name field, enter the name of the secret.
  2. In the User and Password fields, enter the credentials of the user account that the Agent will use to connect to the connector.
  3. If necessary, enter a description of the secret in the Description field.
  4. Click the Create button.

The secret is added and displayed in the Secret drop-down list.

Required setting.

Client timeout

Time to wait after a request that did not return events before making a new request. The default value is 5 seconds. If you specify 0, the default value is used.

Maximum number of events

Number of events requested from the VMware API in one request. The default value is 100. The maximum value is 1000.

Start timestamp

Starting date and time from which you want to read events from the VMware API. By default, events are read from the VMware API starting from the moment the collector is started. If the collector is restarted after being stopped, events are read starting from the last saved date.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.

Character encoding

Character encoding. The default is UTF-8.

TLS mode

TLS encryption mode. When using TLS encryption, you cannot specify an IP address in the URL field on the Basic settings tab. Available values:

  • Disabled means TLS encryption is not used. This value is selected by default.
  • Enabled means TLS encryption is used, but certificates are not verified.
  • Custom CA means TLS encryption is used with verification that the certificate was signed by a Certificate Authority. If you select this value, in the Custom CA drop-down list, specify a secret with a certificate signed by a certification authority. You can select an existing secret or create a new secret. To create a new secret, select Create new.

    If you want to edit the settings of an existing secret, click the pencil icon next to it.

    How to create a certificate signed by a Certificate Authority?

    You can create a CA-signed certificate on the KUMA Core server (the following command examples use OpenSSL).

    To create a certificate signed by a Certificate Authority:

    1. Generate a key to be used by the Certificate Authority, for example:

      openssl genrsa -out ca.key 2048

    2. Create a certificate for the generated key, for example:

      openssl req -new -x509 -days 365 -key ca.key -subj "/CN=<common host name of Certificate Authority>" -out ca.crt

    3. Create a private key and a request to have it signed by the Certificate Authority, for example:

      openssl req -newkey rsa:2048 -nodes -keyout server.key -subj "/CN=<common host name of KUMA server>" -out server.csr

    4. Create the certificate signed by the Certificate Authority. You need to include the domain names or IP addresses of the server for which you are creating the certificate in the subjectAltName variable, for example:

      openssl x509 -req -extfile <(printf "subjectAltName=DNS:domain1.ru,DNS:domain2.com,IP:192.168.0.1") -days 365 -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt

    5. Upload the created server.crt certificate in the KUMA Console to a secret of the certificate type, then in the Custom CA drop-down list, select the secret of the certificate type.

    To use KUMA certificates on third-party devices, you must change the certificate file extension from CERT to CRT. Otherwise, you can get the x509: certificate signed by unknown authority error.

Page top
[Topic 268029]

Connector, elastic type

Connectors of the elastic type are used for getting Elasticsearch data. Elasticsearch version 7.0.0 is supported. Settings for a connector of the elastic type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

Type

Connector type: elastic.

Required setting.

Tags

Tags for resource search.

Optional setting.

Connection

Elasticsearch server connection settings:

  • URL is the URL of the Elasticsearch server. You can add multiple URLs or remove a URL. To add a URL, click the + Add button. To remove a URL, click the delete icon next to it.

    Required setting.

  • Index is the name of the index in Elasticsearch.

    Required setting.

  • Query is the Elasticsearch query. We recommend specifying the size parameter in the query to prevent performance problems with KUMA and Elasticsearch, as well as the sort parameter for the sorting order.

    The following values are possible for the sort parameter in the query: asc, desc, or a custom sorting order by specific fields in accordance with the Elasticsearch syntax. To sort by a specific field, we recommend also specifying the "missing" : "_first" parameter next to the "order" parameter to prevent errors in cases when this field is absent in any document. For example, "sort": { "DestinationDnsDomain.keyword": {"order": "desc", "missing" : "_first" } }. For more details on sorting, please refer to the Elasticsearch documentation.

    Query example:

    "query" : { "match_all" : {} }, "size" : 25, "sort": {"_doc" : "asc"}

    Required setting.

  • Elastic credentials is the secret that stores the credentials for connecting to the Elasticsearch server.

    You can select an existing secret or create a new secret. To create a new secret, select Create new.

    If you want to edit the settings of an existing secret, click the pencil icon next to it.

    How to create a secret?

    To create a secret:

    1. In the Name field, enter the name of the secret.
    2. In the User and Password fields, enter the credentials of the user account that the Agent will use to connect to the connector.
    3. If necessary, enter a description of the secret in the Description field.
    4. Click the Create button.

    The secret is added and displayed in the Secret drop-down list.

  • Elastic fingerprint is the secret that stores secrets of the 'fingerprint' type for connecting to the Elasticsearch server and secrets of the 'certificate' type for using a CA certificate.

    You can select an existing secret or create a new secret. To create a new secret, select Create new.

    If you want to edit the settings of an existing secret, click the pencil icon next to it.

  • Poll interval, sec is the interval between queries to the Elasticsearch server in seconds if the previous query did not return any events. If Elasticsearch contained events at the time of the request, the connector will receive events until all available events have been received from Elasticsearch.

You can add multiple Elasticsearch server connections or delete a connection. To add an Elasticsearch server connection, click the + Add connection button. To delete a connection, click the delete icon next to it.
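The recommendations above (a size limit plus an explicit sort with "missing": "_first") can be combined into one query body. A Python sketch that builds such a body; DestinationDnsDomain.keyword is the example field from the description, and the values are illustrative:

```python
import json

# Query body combining the recommended size and sort parameters.
query = {
    "query": {"match_all": {}},
    "size": 25,  # limit the batch size to avoid performance problems
    "sort": {
        # "missing": "_first" prevents errors for documents without this field
        "DestinationDnsDomain.keyword": {"order": "desc", "missing": "_first"},
    },
}
print(json.dumps(query, indent=2))
```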

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.

Character encoding

Character encoding. The default is UTF-8.

Page top
[Topic 273544]

Connector, office365 type

Connectors of the office365 type are used for receiving Microsoft 365 (Office 365) data via the API.

Available settings for a connector of the office365 type are described in the following tables.

Basic settings tab

Setting

Description

Name

Unique name of the resource. The maximum length of the name is 128 Unicode characters.

Required setting.

Tenant

The name of the tenant that owns the resource.

Required setting.

Type

Connector type: office365.

Required setting.

Tags

Tags for resource search.

Optional setting.

Office365 content types

Content types that you want to receive in KUMA. The following content types are available, providing information about actions and events in Microsoft 365, grouped by information source:

  • Audit.General
  • Audit.AzureActiveDirectory
  • Audit.Exchange
  • Audit.Sharepoint
  • DLP.All

You can find detailed information about the properties of the available content types and related events in the schema on the Microsoft website.

Required setting. You can select one or more content types.

Office365 tenant ID

Unique ID that you get after registering an account with Microsoft 365. If you do not have one, contact your administrator or Microsoft.

Required setting.

Office365 client ID

Unique ID that you get after registering an account with Microsoft 365. If you do not have one, contact your administrator or Microsoft.

Required setting.

Authorization

Authorization method for connecting to Microsoft 365. The following authorization methods are available:

  • PFX. Using a PFX secret.
  • Token. Using a 'token' secret.

For more information, see the section on secrets.

Office365 credentials

The field becomes available after selecting the authorization method. You can select one of the available authorization secrets or create a new secret of the selected type.

Required setting.

Description

Description of the resource. The maximum length of the description is 4000 Unicode characters.

Advanced settings tab

Setting

Description

Debug

This toggle switch enables resource logging. The toggle switch is turned off by default.

Character encoding

Character encoding. The default is UTF-8.

Authentication host

The URL that is used for connection and authorization.

By default, a connection is made to https://login.microsoftonline.com.

Resource host

URL from which the events are to be received.

The default address is https://manage.office.com.

Retrospective analysis interval, hours

The period for which all new events are requested, in hours. To avoid losing some events, it is important to set overlapping event reception intervals, because some types of Microsoft 365 content may be sent with a delay. In this case, previously received events are not duplicated.

By default, all new events for the last 12 hours are requested.

Request timeout, sec

Time to wait for a response to a request to get new events, in seconds. The default response timeout is 30 seconds.

Repeat interval, sec

The time in seconds after which a failed request to get new events must be repeated.

By default, a request to get new events is repeated 10 seconds after getting an error or no response within the specified timeout.

Clear interval, sec

How often obsolete data is deleted, in seconds.

The minimum value is 300 seconds. By default, obsolete data is deleted every 1800 seconds.

Poll interval, min

How often requests for new events are sent, in minutes.

By default, requests are sent every 10 minutes.

Proxy server

Proxy settings, if necessary to connect to Microsoft 365.

You can select one of the available proxy servers or create a new proxy server.
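The interplay of the Poll interval and Retrospective analysis interval settings can be illustrated with a sketch: each request re-covers the preceding hours, so consecutive request windows overlap, and events already received in an earlier window are dropped by ID. This is a minimal Python model of that behavior (the event IDs and structure are hypothetical; this is not KUMA code):

```python
from datetime import datetime, timedelta, timezone

LOOKBACK = timedelta(hours=12)   # Retrospective analysis interval, hours
POLL = timedelta(minutes=10)     # Poll interval, min

def request_window(poll_time):
    """Each poll requests all events for the last LOOKBACK hours, so
    consecutive windows overlap and delayed events are still caught."""
    return poll_time - LOOKBACK, poll_time

seen_ids = set()

def deduplicate(events):
    """Drop events whose IDs were already received in an earlier, overlapping window."""
    fresh = [e for e in events if e["Id"] not in seen_ids]
    seen_ids.update(e["Id"] for e in fresh)
    return fresh

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
start, end = request_window(now)
batch1 = deduplicate([{"Id": "a"}, {"Id": "b"}])
# The next poll, POLL minutes later, re-requests an overlapping window:
# "b" is a duplicate and is dropped, "c" arrived late and is new.
batch2 = deduplicate([{"Id": "b"}, {"Id": "c"}])
```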

Page top
[Topic 295203]

Predefined connectors

The connectors listed in the table below are included in the KUMA distribution kit.

Predefined connectors

Connector name

Comment

[OOTB] Continent SQL

Obtains events from the database of the Continent hardware and software encryption system.

To use it, you must configure the settings of the corresponding secret type.

[OOTB] InfoWatch Trafic Monitor SQL

Obtains events from the database of the InfoWatch Traffic Monitor system.

To use it, you must configure the settings of the corresponding secret type.

[OOTB] KSC MSSQL

Obtains events from the MS SQL database of the Open Single Management Platform system.

To use it, you must configure the settings of the corresponding secret type.

[OOTB] KSC MySQL

Obtains events from the MySQL database of the Open Single Management Platform system.

To use it, you must configure the settings of the corresponding secret type.

[OOTB] KSC PostgreSQL

Obtains events from the PostgreSQL database of the Open Single Management Platform 15.0 system.

To use it, you must configure the settings of the corresponding secret type.

[OOTB] Oracle Audit Trail SQL

Obtains audit events from the Oracle database.

To use it, you must configure the settings of the corresponding secret type.

[OOTB] SecretNet SQL

Obtains events from the SecretNet SQL database.

To use it, you must configure the settings of the corresponding secret type.

Page top
[Topic 250627]

Secrets

Secrets are used to securely store sensitive information such as user names and passwords that must be used by KUMA to interact with external services. If a secret stores account data such as user login and password, when the collector connects to the event source, the account specified in the secret may be blocked in accordance with the password policy configured in the event source system.

Secrets can be used in the following KUMA services and features:

Available settings:

  • Name (required)—a unique name for this type of resource. Must contain 1 to 128 Unicode characters.
  • Tenant (required)—name of the tenant that owns the resource.
  • Type (required)—the type of secret.

    When you select the type in the drop-down list, the parameters for configuring this secret type also appear. These parameters are described below.

  • Description—up to 4,000 Unicode characters.

Depending on the secret type, different fields are available. You can select one of the following secret types:

  • credentials—this type of secret is used to store account credentials required to connect to external services, such as SMTP servers. If you select this type of secret, you must fill in the User and Password fields. If the Secret resource uses the 'credentials' type to connect the collector to an event source, for example, a database management system, the account specified in the secret may be blocked in accordance with the password policy configured in the event source system.
  • token—this secret type is used to store tokens for API requests. Tokens are used when connecting to IRP systems, for example. If you select this type of secret, you must fill in the Token field.
  • ktl—this secret type is used to store Kaspersky Threat Intelligence Portal account credentials. If you select this type of secret, you must fill in the following fields:
    • User and Password (required fields)—user name and password of your Kaspersky Threat Intelligence Portal account.
    • PFX file (required)—lets you upload a Kaspersky Threat Intelligence Portal certificate key.
    • PFX password (required)—the password for accessing the Kaspersky Threat Intelligence Portal certificate key.
  • urls—this secret type is used to store URLs for connecting to SQL databases and proxy servers. In the Description field, you must provide a description of the connection for which you are using the secret of the urls type.

    You can specify URLs in the following formats: hostname:port, IPv4:port, IPv6:port, :port.

  • pfx—this type of secret is used for importing a PFX file containing certificates. If you select this type of secret, you must fill in the following fields:
    • PFX file (required)—this is used to upload a PFX file. The file must contain a certificate and key. PFX files may include CA-signed certificates for server certificate verification.
    • PFX password (required)—this is used to enter the password for accessing the certificate key.
  • kata/edr—this type of secret is used to store the certificate file and private key required when connecting to the Kaspersky Endpoint Detection and Response server. If you select this type of secret, you must upload the following files:
    • Certificate file—KUMA server certificate.

      The file must be in PEM format. You can upload only one certificate file.

    • Private key for encrypting the connection—KUMA server RSA key.

      The key must be without a password and with the PRIVATE KEY header. You can upload only one key file.

      You can generate certificate and key files by clicking the download button.

  • snmpV1—this type of secret is used to store the Community access value (for example, public or private) that is required for interaction over the Simple Network Management Protocol.
  • snmpV3—this type of secret is used for storing data required for interaction over the Simple Network Management Protocol. If you select this type of secret, you must fill in the following fields:
    • User—user name indicated without a domain.
    • Security Level—security level of the user.
      • NoAuthNoPriv—messages are forwarded without authentication and without ensuring confidentiality.
      • AuthNoPriv—messages are forwarded with authentication but without ensuring confidentiality.
      • AuthPriv—messages are forwarded with authentication and ensured confidentiality.

      You may see additional settings depending on the selected level.

    • Password—SNMP user authentication password. This field becomes available when the AuthNoPriv or AuthPriv security level is selected.
    • Authentication Protocol—the following protocols are available: MD5, SHA, SHA224, SHA256, SHA384, SHA512. This field becomes available when the AuthNoPriv or AuthPriv security level is selected.
    • Privacy Protocol—protocol used for encrypting messages. Available protocols: DES, AES. This field becomes available when the AuthPriv security level is selected.
    • Privacy password—encryption password that was set when the SNMP user was created. This field becomes available when the AuthPriv security level is selected.
  • certificate—this secret type is used for storing certificate files. Files are uploaded to a resource by clicking the Upload certificate file button. X.509 certificate public keys in Base64 are supported.
  • fingerprint—this type of secret is used to store the Elastic fingerprint value that can be used when connecting to the Elasticsearch server.
  • PublicPKI—this type of secret is used to connect a KUMA collector to ClickHouse. If you select this option, you must specify the secret containing the base64-encoded PEM private key and the public key.
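
The URL formats accepted by the urls secret type (hostname:port, IPv4:port, IPv6:port, :port) can be illustrated with a small sketch. The helper below is hypothetical and not part of KUMA; it only validates the part after the last colon as a port number.

```python
# Hypothetical helper, not part of KUMA: checks that a string follows one
# of the accepted secret URL formats (hostname:port, IPv4:port, IPv6:port,
# :port) by validating the part after the last colon as a port number.
def is_valid_secret_url(value: str) -> bool:
    host, sep, port = value.rpartition(":")
    if not sep or not port.isdigit():
        return False
    # The host part may be empty, which covers the ":port" form.
    return 0 < int(port) <= 65535

print(is_valid_secret_url("db.example.com:5432"))  # True
print(is_valid_secret_url(":8443"))                # True
print(is_valid_secret_url("no-port"))              # False
```

Splitting on the last colon also handles unbracketed IPv6 addresses such as 2001:db8::1:9000, where earlier colons belong to the address itself.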

Predefined secrets

The following secrets are included in the KUMA distribution kit:

  • [OOTB] Continent SQL connection—stores confidential data and settings for connecting to the APKSh Kontinent database. To use it, you must specify the database login name and password.
  • [OOTB] KSC MSSQL connection—stores confidential data and settings for connecting to the MS SQL database of Open Single Management Platform (KSC). To use it, you must specify the database login name and password.
  • [OOTB] KSC MySQL Connection—stores confidential data and settings for connecting to the MySQL database of Open Single Management Platform (KSC). To use it, you must specify the database login name and password.
  • [OOTB] Oracle Audit Trail SQL Connection—stores confidential data and settings for connecting to the Oracle database. To use it, you must specify the database login name and password.
  • [OOTB] SecretNet SQL connection—stores confidential data and settings for connecting to the MS SQL database of the SecretNet system. To use it, you must specify the database login name and password.

Page top
[Topic 217990]

Context tables

A context table is a container for a data array that is used by KUMA correlators for analyzing events in accordance with correlation rules. You can create context tables in the Resources section. The context table data is stored only in the correlator to which it was added using filters or actions in correlation rules.

You can populate context tables automatically using correlation rules of 'simple' and 'operational' types or import a file with data for the context table.

You can add, copy, and delete context tables, as well as edit their settings.

The same context table can be used in multiple correlators. However, a separate entity of the context table is created for each correlator. Therefore, the contents of the context tables used by different correlators are different even if the context tables have the same name and ID.

Only data generated by the correlation rules of that correlator is added to the context table.

You can add, edit, delete, import, and export records in the context table of the correlator.

When records are deleted from context tables after their lifetime expires, service events are generated in the correlators. These events only exist in the correlators, and they are not redirected to other destinations. Service events are sent for processing by correlation rules of that correlator which uses the context table. Correlation rules can be configured to track these events so that they can be used to process events and identify threats.

Service event fields for deleting an entry from a context table are described below.

  • ID—event ID.
  • Timestamp—time when the expired entry was deleted.
  • Name—"context table record expired".
  • DeviceVendor—"Kaspersky".
  • DeviceProduct—"KUMA".
  • ServiceID—correlator ID.
  • ServiceName—correlator name.
  • DeviceExternalID—context table ID.
  • DevicePayloadID—key of the expired entry.
  • BaseEventCount—number of updates for the deleted entry, incremented by one.
  • FileName—name of the context table.
  • S.<context table field>, SA.<context table field>, N.<context table field>, NA.<context table field>, F.<context table field>, FA.<context table field>—depending on the type of the expired context table field, its value is recorded in the event field of the corresponding type. For example:

    S.<context table field> = <context table field value>

    SA.<context table field> = <array of context table field values>

Context table records of the Boolean type have the following format:

S.<context table field> = true/false

SA.<context table field> = false,true,false
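
These service event fields can be inspected programmatically when building correlation rules around record expiration. The sketch below is illustrative only: the flat dictionary representation of an event and all field values are assumptions, not a KUMA API.

```python
# Illustrative sketch: extract the expired entry's fields from a
# "context table record expired" service event, represented here as a
# flat dict (an assumption for illustration, not a KUMA data structure).
def expired_record_fields(event: dict) -> dict:
    prefixes = ("S.", "SA.", "N.", "NA.", "F.", "FA.")
    return {k: v for k, v in event.items() if k.startswith(prefixes)}

event = {
    "Name": "context table record expired",
    "DeviceVendor": "Kaspersky",
    "DeviceProduct": "KUMA",
    "DevicePayloadID": "10.0.0.5",  # key of the expired entry (hypothetical)
    "S.Hostname": "workstation-01",
    "SA.Flags": "false,true,false",
}
print(expired_record_fields(event))
# {'S.Hostname': 'workstation-01', 'SA.Flags': 'false,true,false'}
```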

In this section

Viewing the list of context tables

Adding a context table

Viewing context table settings

Editing context table settings

Duplicating context table settings

Deleting a context table

Viewing context table records

Searching context table records

Adding a context table record

Editing a context table record

Deleting a context table record

Importing data into a context table

Exporting data from a context table

Page top
[Topic 264170]

Viewing the list of context tables

To view the context table list of the correlator:

  1. In the KUMA Console, select the Resources section.
  2. In the Services section, click the Active services button.
  3. In the context menu of the correlator for which you want to view context tables, select Go to context tables.

The Correlator context tables list is displayed.

The table contains the following data:

  • Name—name of the context table.
  • Size on disk—size of the context table.
  • Directory—path to the context table on the KUMA correlator server.
Page top
[Topic 264179]

Adding a context table

Expand all | Collapse all

To add a context table:

  1. In the KUMA Console, select the Resources section.
  2. In the Resources section, click Context tables.
  3. In the Context tables window, click Add.

    This opens the Create context table window.

  4. In the Name field, enter a name for the context table.
  5. In the Tenant drop-down list, select the tenant that owns the resource.
  6. In the TTL field, specify how long a record added to the context table is stored in it.

    When the specified time expires, the record is deleted. The time is specified in seconds. The maximum value is 31536000 (1 year).

    The default value is 0. If the value of the field is 0, the record is stored indefinitely.

  7. In the Description field, provide any additional information.

    You can use up to 4,000 Unicode characters.

    This field is optional.

  8. In the Schema section, specify which fields the context table has and the data types of the fields.

    Depending on the data type, a field may or may not be a key field. At least one field in the table must be a key field. The names of all fields must be unique.

    To add a table row, click Add and fill in the table fields:

    1. In the Name field, enter the name of the field. The maximum length is 128 characters.
    2. In the Type drop-down list, select the data type for the field.

      Possible data types of context table fields:

      • Integer—can be a key field.
      • Floating point number—can be a key field.
      • String—can be a key field.
      • Boolean—can be a key field.
      • Timestamp—can be a key field. The field value is checked to be greater than or equal to zero; no other operations are provided.
      • IP address—can be a key field. The field value is checked to match the IPv4 or IPv6 format; no other operations are provided.
      • Integer list—cannot be a key field.
      • Float list—cannot be a key field.
      • List of strings—cannot be a key field.
      • Boolean list—cannot be a key field.
      • Timestamp list—cannot be a key field. Each item in the list is checked to be greater than or equal to zero; no other operations are provided.
      • IP list—cannot be a key field. Each item in the list is checked to match the IPv4 or IPv6 format; no other operations are provided.

    3. If you want to make a field a key field, select the Key field check box.

      A table can have multiple key fields. Key fields are chosen when the context table is created, uniquely identify a table entry and cannot be changed.

      If a context table has multiple key fields, each table entry is uniquely identified by multiple fields (composite key).

  9. Add the required number of context table rows.

    After saving the context table, the schema cannot be changed.

  10. Click the Save button.

The context table is added.
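
The TTL behavior described in step 6 can be sketched as follows. This is a hypothetical helper for illustration, not KUMA code: TTL is given in seconds, the maximum is 31536000 (1 year), and 0 means the record is stored indefinitely.

```python
# Illustrative sketch (not KUMA code) of how the TTL setting determines
# a record's expiration time. Timestamps are Unix epoch seconds.
MAX_TTL = 31536000  # 1 year

def expiration_time(added_at: int, ttl: int):
    if not 0 <= ttl <= MAX_TTL:
        raise ValueError("TTL must be between 0 and 31536000 seconds")
    # TTL of 0 means the record never expires.
    return None if ttl == 0 else added_at + ttl

print(expiration_time(1700000000, 3600))  # 1700003600
print(expiration_time(1700000000, 0))     # None
```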

Page top
[Topic 264219]

Viewing context table settings

To view the context table settings:

  1. In the KUMA Console, select the Resources section.
  2. In the Resources section, click Context tables.
  3. In the list in the Context tables window, select the context table whose settings you want to view.

This opens the context table settings window. It displays the following information:

  • Name—unique name of the resource.
  • Tenant—the name of the tenant that owns the resource.
  • TTL—the record added to the context table is stored in it for this duration. This value is specified in seconds.
  • Description—any additional information about the resource.
  • Schema—an ordered list of fields and their data types, with key fields marked.
Page top
[Topic 265069]

Editing context table settings

To edit context table settings:

  1. In the KUMA Console, select the Resources section.
  2. In the Resources section, click Context tables.
  3. In the list in the Context tables window, select the context table whose settings you want to edit.
  4. Specify the values of the following parameters:
    • Name—unique name of the resource.
    • TTL—the record added to the context table is stored in it for this duration. This value is specified in seconds.
    • Description—any additional information about the resource.
    • Schema—an ordered list of fields and their data types, with key fields marked. If the context table is not used in a correlation rule, you can edit the list of fields.

      If you want to edit the schema in a context table that is already being used in a correlation rule, follow the steps below.

    The Tenant field is not editable.

  5. Click Save.

To edit the settings of the context table previously used by the correlator:

  1. Export data from the table.
  2. Copy and save the path to the file with the data of the table on the disk of the correlator. This path is specified in the Directory column in the Correlator context tables window. You will need this path later to delete the file from the disk of the correlator.
  3. Delete the context table from the correlator.
  4. Edit context table settings as necessary.
  5. Delete the file with data of the table on the disk of the correlator at the path from step 2.
  6. To apply the changes (delete the table), update the configuration of the correlator: in the Resources → Active services section, in the list of services, select the check box next to the relevant correlator and click Update configuration.
  7. Add the context table in which you edited the settings to the correlator.
  8. To apply the changes (add a table), update the configuration of the correlator: in the Resources → Active services section, in the list of services, select the check box next to the relevant correlator and click Update configuration.
  9. Adapt the fields in the exported table (see step 1) so that they match the fields of the table that you uploaded to the correlator at step 7.
  10. Import the adapted data to the context table.

The configuration of the context table is updated.

Page top
[Topic 265073]

Duplicating context table settings

To copy a context table:

  1. In the KUMA Console, select the Resources section.
  2. In the Resources section, click Context tables.
  3. Select the check box next to the context table that you want to copy.
  4. Click Duplicate.
  5. Specify the necessary settings.
  6. Click the Save button.

The context table is copied.

Page top
[Topic 265304]

Deleting a context table

You can delete only those context tables that are not used in any of the correlators.

To delete a context table:

  1. In the KUMA Console, select the Resources section.
  2. In the Resources section, click Context tables.
  3. Select the check boxes next to the context tables that you want to delete.

    To delete all context tables, select the check box next to the Name column.

    At least one check box must be selected.

  4. Click the Delete button.
  5. Click OK.

The context tables are deleted.

Page top
[Topic 264177]

Viewing context table records

To view a list of context table records:

  1. In the KUMA Console, select the Resources section.
  2. In the Services section, click the Active services button.
  3. In the context menu of the correlator for which you want to view the context table, select Go to context tables.

    This opens the Correlator context tables window.

  4. In the Name column, select the relevant context table.

The list of records for the selected context table is displayed.

The list contains the following data:

  • Key—the composite key of the record. It is composed of the values of one or more key fields, separated by the "|" character. If one of the key field values is absent, the separator character is still displayed.

    For example, a record key consists of three fields: DestinationAddress, DestinationPort, and SourceUserName. If the last two fields do not contain values, the record key is displayed as follows: 43.65.76.98| | .

  • Record repetitions—the total number of times the record was mentioned in events plus the number of identical records loaded when importing context tables into KUMA.
  • Expiration date—date and time when the record must be deleted.

    If the TTL field had the value of 0 when the context table was created, the records of this context table are retained for 36,000 days (approximately 100 years).

  • Updated—date and time when the context table was updated.
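
The composite key format described above can be sketched as follows. The helper is hypothetical, not KUMA code; it only shows how key field values are joined with "|", with separators kept for empty fields.

```python
# Hypothetical helper, not KUMA code: renders a composite record key by
# joining key field values with "|", keeping separators for empty fields.
def composite_key(values):
    return "|".join("" if v is None else str(v) for v in values)

print(composite_key(["43.65.76.98", None, None]))   # 43.65.76.98||
print(composite_key(["43.65.76.98", 443, "jdoe"]))  # 43.65.76.98|443|jdoe
```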
Page top
[Topic 265306]

Searching context table records

To find a record in the context table:

  1. In the KUMA Console, select the Resources section.
  2. In the Services section, click the Active services button.
  3. In the context menu of the correlator in whose context table you want to find a record, select Go to context tables.

    This opens the Correlator context tables window.

  4. In the Name column, select your context table.

    This opens a window with the records of the selected context table.

  5. In the Search field, enter the record key value or several characters from the key.

The list of context table records displays only the records whose key contains the entered characters.

If your search query matches records with empty key values, the text <Nothing found> is displayed in the widget on the Dashboard. We recommend clarifying the conditions of your search query.

Page top
[Topic 265310]

Adding a context table record

To add a record to the context table:

  1. In the KUMA Console, select the Resources section.
  2. In the Services section, click the Active services button.
  3. In the context menu of the correlator to whose context table you want to add a record, select Go to context tables.

    This opens the Correlator context tables window.

  4. In the Name column, select the relevant context table.

    The list of records for the selected context table is displayed.

  5. Click Add.

    The Create record window opens.

  6. In the Value field, specify the values for fields in the Field column.

    KUMA takes field names from the correlation rules with which the context table is associated. These names are not editable. The list of fields cannot be edited.

    If you do not specify some of the field values, the missing fields, including key fields, are populated with default values. The key of the record is determined from the full set of fields, and the record is added to the table. If an identical key already exists in the table, an error is displayed.

    Default field values:

    • Integer—0
    • Floating point number—0.0
    • String—""
    • Boolean—false
    • IP address—"0.0.0.0"
    • Timestamp—0
    • Integer list, Float list, List of strings, Boolean list, Timestamp list, IP list—[]

  7. Click the Save button.

The record is added.
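
The default-value behavior from step 6 can be modeled with a short sketch. This is an assumed illustration, not KUMA code: the schema and record layouts are hypothetical, and only the default values listed above are taken from the documentation.

```python
# Illustrative sketch (assumed, not KUMA code): fields left unspecified
# when adding a record are populated with the documented default values.
SCALAR_DEFAULTS = {
    "Integer": 0, "Floating point number": 0.0, "String": "",
    "Boolean": False, "IP address": "0.0.0.0", "Timestamp": 0,
}

def fill_record(schema: dict, record: dict) -> dict:
    # schema maps a field name to its type; every list type defaults to []
    filled = {}
    for name, ftype in schema.items():
        if name in record:
            filled[name] = record[name]
        else:
            filled[name] = SCALAR_DEFAULTS.get(ftype, [])
    return filled

schema = {"SourceAddress": "IP address", "Count": "Integer", "Tags": "List of strings"}
print(fill_record(schema, {"SourceAddress": "10.0.0.5"}))
# {'SourceAddress': '10.0.0.5', 'Count': 0, 'Tags': []}
```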

Page top
[Topic 265311]

Editing a context table record

To edit a record in the context table:

  1. In the KUMA Console, select the Resources section.
  2. In the Services section, click the Active services button.
  3. In the context menu of the correlator for which you want to edit the context table, select Go to context tables.

    This opens the Correlator context tables window.

  4. In the Name column, select the relevant context table.

    The list of records for the selected context table is displayed.

  5. Click on the row of the record that you want to edit.
  6. Specify your values in the Value column.
  7. Click the Save button.

The record is overwritten.

Restrictions when editing a record:

  • The value of the key field of the record is not available for editing. You can change it by exporting and importing a record.
  • Field names in the Field column are not editable.
  • The values in the Value column must meet the following requirements:
    • greater than or equal to 0 for fields of the Timestamp and Timestamp list types;
    • IPv4 or IPv6 format for fields of the IP address and IP list types;
    • true or false for fields of the Boolean type.
Page top
[Topic 265325]

Deleting a context table record

To delete records from a context table:

  1. In the KUMA Console, select the Resources section.
  2. In the Services section, click the Active services button.
  3. In the context menu of the correlator from whose context table you want to delete a record, select Go to context tables.

    This opens the Correlator context tables window.

  4. In the Name column, select the relevant context table.

    The list of records for the selected context table is displayed.

  5. Select the check boxes next to the records you want to delete.

    To delete all records, select the check box next to the Key column.

    At least one check box must be selected.

  6. Click the Delete button.
  7. Click OK.

The records are deleted.

Page top
[Topic 265339]

Importing data into a context table

To import data to a context table:

  1. In the KUMA Console, select the Resources section.
  2. In the Services section, click the Active services button.
  3. In the context menu of the correlator to whose context table you want to import data, select Go to context tables.

    This opens the Correlator context tables window.

  4. Select the check box next to your context table and click Import.

    This opens the context table data import window.

  5. Click Add and select the file that you want to import.
  6. In the Format drop-down list, select the file format:
    • csv
    • tsv
    • internal
  7. Click the Import button.

The data from the file is imported into the context table. Records that previously existed in the context table are preserved.

When importing, KUMA checks the uniqueness of each record's key. If a record already exists, its fields are populated with new values obtained by merging the previous values with the field values of the imported record.

If no record existed in the context table, a new record is created.

Data imported from a file is not checked for invalid characters. If you use this data in widgets, widgets are displayed incorrectly if invalid characters are present in the data.
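
The merge behavior described above can be modeled with a short sketch. It is illustrative only: the in-memory table and record layout are assumptions, not the actual import format used by KUMA.

```python
# Illustrative model (not KUMA code) of the documented import behavior:
# records are matched by key; an existing record's fields are overwritten
# with the imported values, and unknown keys create new records.
def import_records(table: dict, imported: list) -> None:
    for rec in imported:
        key = rec["key"]
        if key in table:
            table[key].update(rec["fields"])  # merge into the existing record
        else:
            table[key] = dict(rec["fields"])  # create a new record

table = {"10.0.0.5": {"Count": 1, "Tag": "old"}}
import_records(table, [
    {"key": "10.0.0.5", "fields": {"Count": 7}},
    {"key": "10.0.0.9", "fields": {"Count": 2}},
])
print(table)
# {'10.0.0.5': {'Count': 7, 'Tag': 'old'}, '10.0.0.9': {'Count': 2}}
```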

Page top
[Topic 265327]

Exporting data from a context table

To export data from a context table:

  1. In the KUMA Console, select the Resources section.
  2. In the Services section, click the Active services button.
  3. In the context menu of the correlator whose context table you want to export, select Go to context tables.

    This opens the Correlator context tables window.

  4. Select the check box next to your context table and click Export.

The context table is downloaded to your computer in JSON format. The name of the downloaded file reflects the name of the context table. The order of the fields in the file is not defined.

Page top
[Topic 265349]