Contents
- KUMA resources
- Operations with resources
- Creating, renaming, moving, and deleting resource folders
- Creating, duplicating, moving, editing, and deleting resources
- Bulk deletion of resources
- Link correlators to a correlation rule
- Updating resources
- Exporting resources
- Importing resources
- Tag management
- Resource usage tracing
- Resource versioning
- Destinations
- Normalizers
- Aggregation rules
- Enrichment rules
- Data collection and analysis rules
- Correlation rules
- Filters
- Active lists
- Viewing the table of active lists
- Adding an active list
- Viewing the settings of an active list
- Changing the settings of an active list
- Duplicating the settings of an active list
- Deleting an active list
- Viewing records in the active list
- Searching for records in the active list
- Adding a record to an active list
- Duplicating records in the active list
- Changing a record in the active list
- Deleting records from the active list
- Importing data to an active list
- Exporting data from the active list
- Predefined active lists
- Dictionaries
- Response rules
- Connectors
- Viewing connector settings
- Adding a connector
- Connector settings
- Connector, internal type
- Connector, tcp type
- Connector, udp type
- Connector, netflow type
- Connector, sflow type
- Connector, nats-jetstream type
- Connector, kafka type
- Connector, http type
- Connector, sql type
- Connector, file type
- Connector, 1c-log type
- Connector, 1c-xml type
- Connector, diode type
- Connector, ftp type
- Connector, nfs type
- Connector, wmi type
- Connector, wec type
- Connector, etw type
- Connector, snmp type
- Connector, snmp-trap type
- Connector, kata/edr type
- Connector, vmware type
- Connector, elastic type
- Connector, office365 type
- Predefined connectors
- Secrets
- Context tables
- Viewing the list of context tables
- Adding a context table
- Viewing context table settings
- Editing context table settings
- Duplicating context table settings
- Deleting a context table
- Viewing context table records
- Searching context table records
- Adding a context table record
- Editing a context table record
- Deleting a context table record
- Importing data into a context table
- Exporting data from a context table
KUMA resources
Resources are KUMA components that contain parameters for implementing various functions: for example, establishing a connection with a given web address or converting data according to certain rules. Like parts of an erector set, these components are assembled into resource sets for services that are then used as the basis for creating KUMA services.
Resources are contained in the Resources section, Resources block of KUMA Console. The following resource types are available:
- Correlation rules—resources of this type contain rules for identifying event patterns that indicate threats. If the conditions specified in these resources are met, a correlation event is generated.
- Normalizers—resources of this type contain rules for converting incoming events into the format used by KUMA. After processing in the normalizer, the raw event becomes normalized and can be processed by other KUMA resources and services.
- Connectors—resources of this type contain settings for establishing network connections.
- Aggregation rules—resources of this type contain rules for combining several basic events of the same type into one aggregation event.
- Enrichment rules—resources of this type contain rules for supplementing events with information from third-party sources.
- Destinations—resources of this type contain settings for forwarding events to a destination for further processing or storage.
- Filters—resources of this type contain criteria for selecting individual events from the event stream to be sent to processing.
- Response rules—resources of this type are used in correlators to, for example, execute scripts or launch Open Single Management Platform tasks when certain conditions are met.
- Data collection and analysis rules—resources of this type contain rules that allow scheduling SQL queries with aggregation functions to the storage. Data received from SQL queries is then used for correlation.
- Notification templates—resources of this type are used when sending notifications about new alerts.
- Active lists—resources of this type are used by correlators for dynamic data processing when analyzing events according to correlation rules.
- Dictionaries—resources of this type are used to store keys and their values, which may be required by other KUMA resources and services.
- Proxies—resources of this type contain settings for using proxy servers.
- Secrets—resources of this type are used to securely store confidential information (such as credentials) that KUMA needs to interact with external services.
When you click on a resource type, a window opens displaying a table with the available resources of this type. The table contains the following columns:
- Name—the name of the resource. Can be used to search for resources and sort them.
- Updated—the date and time of the last update of a resource. Can be used to sort resources.
- Created by—the name of the user who created a resource.
- Description—the description of a resource.
- Type—the type of the resource. Displayed for all types of resources, except Aggregation rules, Enrichment rules, Data collection and analysis rules, Filters, Active lists, Proxies.
- Resource path—address in the resource tree. Displayed in the tree of folders, starting from the tenant in which the resource was created.
- Tags—tags assigned to the resource. A resource can have more than one tag.
Tags are part of the resource and are imported with the resource.
- Package name—the name of the package in which the resource was imported from the repository.
- Correlator—the list of correlators to which the correlation rule is linked. Displayed only for resources of the Correlation rule type.
- MITRE techniques—the MITRE matrix techniques that this correlation rule covers. Displayed only for resources of the Correlation rule type. When you hover over a value, the name of the rule is displayed.
The table size is not limited. If you want to select all resources, scroll to the end of the table and select the Select all check box, which selects all available resources in the table.
The lower part of the table displays the number of resources from the tenants that are available to you:
- Total is the total number of resources, or the number of resources matching the applied filter or search.
- Selected is the number of selected resources.
When filters are applied, the resource selection and the Selected value are reset. If the number of resources changes due to actions (for example, deletion) undertaken by another user, the displayed number of resources changes after you refresh the page, perform an action with a resource, or apply a filter.
Resources can be organized into folders. The folder structure is displayed in the left part of the window: root folders correspond to tenants and contain a list of all resources of the tenant. All other folders nested within the root folder display the resources of an individual folder. When a folder is selected, the resources it contains are displayed as a table in the right pane of the window.
Resources can be created, edited, copied, moved from one folder to another, and deleted. Resources can also be exported and imported.
KUMA comes with a set of predefined resources, which can be identified by the "[OOTB]<resource_name>" name. OOTB resources are protected from editing.
If you want to adapt a predefined OOTB resource to your organization's infrastructure:
- In the Resources → <resource type> section, select the OOTB resource that you want to edit.
- In the upper part of the KUMA Console, click Duplicate, then click Save.
- A new resource named "[OOTB]<resource_name> - copy" is displayed in the web interface.
- Edit the copy of the predefined resource as necessary and save your changes.
The adapted resource is available for use.
Operations with resources
To manage Kaspersky Unified Monitoring and Analysis Platform resources, you can create, move, copy, edit, delete, import, and export them. These operations are available for all resources, regardless of the resource type.
The lower part of the table displays the number of resources from the tenants that are available to you:
- Total is the total number of resources, or the number of resources matching the applied filter or search.
- Selected is the number of selected resources.
When filters are applied, the resource selection and the Selected value are reset. If the number of resources changes due to actions (for example, deletion) undertaken by another user, the displayed number of resources changes after you refresh the page, perform an action with a resource, or apply a filter.
Kaspersky Unified Monitoring and Analysis Platform resources are arranged in folders. You can add, rename, move, or delete resource folders.
Creating, renaming, moving, and deleting resource folders
Resources can be organized into folders. The folder structure is displayed in the left part of the window: root folders correspond to tenants and contain a list of all resources of the tenant. All other folders nested within the root folder display the resources of an individual folder. When a folder is selected, the resources it contains are displayed as a table in the right pane of the window.
You can create, rename, move and delete folders.
To create a folder:
- In the folder tree, select the folder in which you want to create a new folder.
- Click the Add folder button.
The folder will be created.
To rename a folder:
- Locate the required folder in the folder structure.
- Hover over the name of the folder.
An icon appears next to the folder name.
- Open the icon's drop-down list and select Rename.
The folder name becomes available for editing.
- Enter the new folder name and press ENTER.
The folder name cannot be empty.
The folder will be renamed.
To move a folder:
Drag the folder by its name and drop it at the required place in the folder structure.
Folders cannot be dragged from one tenant to another.
To delete a folder:
- Select the relevant folder in the folder structure.
- Right-click to bring up the context menu and select Delete.
A confirmation window appears.
- Click OK.
The folder will be deleted.
The program does not delete folders that contain files or subfolders.
Creating, duplicating, moving, editing, and deleting resources
You can create, move, copy, edit, and delete resources.
To create the resource:
- In the Resources → <resource type> section, select or create a folder where you want to add the new resource.
Root folders correspond to tenants. For a resource to be available to a specific tenant, it must be created in the folder of that tenant.
- Click the Add <resource type> button.
The window for configuring the selected resource type opens. The available configuration parameters depend on the resource type.
- Enter a unique resource name in the Name field.
- Specify the required parameters (marked with a red asterisk).
- If necessary, specify the optional parameters.
- Click Save.
The resource will be created and available for use in services and other resources.
To move the resource to a new folder:
- In the Resources → <resource type> section, find the required resource in the folder structure.
- Select the check box near the resource you want to move. You can select multiple resources.
An icon appears next to the selected resources. The number of selected resources is displayed in the lower part of the table.
- Use the icon to drag and drop the resources into the required folder.
The resources are moved to the new folder.
You can only move resources to folders of the tenant in which the resources were created. Resources cannot be moved to another tenant's folders.
To copy the resource:
- In the Resources → <resource type> section, find the required resource in the folder structure.
- Select the check box next to the resource that you want to copy, and click Duplicate.
The number of selected resources is displayed in the lower part of the table.
A window opens with the settings of the resource that you selected for copying. The available configuration parameters depend on the resource type. The "<selected resource name> - copy" value is displayed in the Name field.
- Make the necessary changes to the parameters.
- Enter a unique name in the Name field.
- Click Save.
The copy of the resource will be created.
To edit the resource:
- In the Resources → <resource type> section, find the required resource in the folder structure.
- Select the resource.
A window with the settings of the selected resource opens. The available configuration parameters depend on the resource type.
- Make the necessary changes to the parameters.
- Do one of the following:
- Click Save to save your changes.
- Click Save with a comment, and in the displayed window, add a comment that describes your changes. The changes are saved and the comment is added to the created version of the resource.
The resource is updated and a new version is created for it. If this resource is used in a service, restart the service to apply the new version of the resource.
If the current resource is not editable (for example, you cannot edit a correlation rule), you can go to the card of another resource by clicking the View button. This button becomes available in batch resources when you click another resource linked to your current resource.
If, when saving changes to a resource, it turns out that the current version of the resource has been modified by another user, you are prompted to select one of the following actions:
- Save your changes as a new version of the resource on top of the changes made by the other user.
- Save your changes as a new resource.
In this case, a duplicate of the original resource is created with the changed settings. The "- copy" string is added to the name of the new resource, and the name and version of the resource that was duplicated is specified in the version comments of the new resource.
- Discard your changes.
Discarded changes cannot be restored.
To delete the resource:
- In the Resources → <resource type> section, find the required resource in the folder structure.
- Select the check box next to the resource that you want to delete and click Delete.
The number of selected resources is displayed in the lower part of the table. A confirmation window opens.
- Click OK.
The resource and all its saved versions are deleted.
Bulk deletion of resources
In the KUMA Console, you can select multiple resources and delete them.
You must have the right to delete resources.
To delete resources:
- In the Resources → <resource type> section, find the required resource in the folder structure.
- Select the check boxes next to the resources that you want to delete.
In the lower part of the table, you can see the total number of resources and the number of resources selected.
- Click Delete.
This opens a window that tells you whether it is safe to delete resources, depending on whether the resources selected for deletion are linked to other resources.
For all resources that cannot be deleted, the application displays a table of links in a modal window.
- Click Delete.
Only resources without links are deleted.
Deleting folders with resources
You can delete any folder at any level, except a tenant's root folder.
To delete a folder with resources:
- In the Resources section, select a folder.
- Click the folder's menu button and select the Delete option.
This opens a window prompting you to confirm deletion. The window displays a field in which you can enter the generated value. Also, if dependent resources exist in the folder, a list of dependencies is displayed.
- Enter the generated value.
- Confirm the deletion.
You can delete a folder if:
- The folder does not contain any subfolders or resources.
- The folder does not contain any subfolders, but does contain unlinked resources.
- None of the resources in the folder are dependencies of anything (services, resources, integrations).
Link correlators to a correlation rule
The Link correlators option is available for created correlation rules.
To link correlators:
- In the KUMA web interface → Resources → Correlation rules section, select the created correlation rule and go to the Correlators tab.
- This opens the Correlators window; in that window, select one or more correlators by selecting the check box next to them.
- Click OK.
Correlators are linked to a correlation rule.
The rule is added to the end of the execution queue in each selected correlator. If you want to move the rule up in the execution queue, go to Resources → Correlators → <selected correlator> → Edit correlator → Correlation, select the check box next to the relevant rule and use the Move up or Move down buttons to reorder the rules as necessary.
Updating resources
Kaspersky regularly releases packages with resources that can be imported from the repository. You can specify an email address in the settings of the Repository update task. After the first run of the task, KUMA starts sending notifications about the packages available for update to the specified address. You can update the repository, analyze the contents of each update, and decide whether to import and deploy the new resources in the operating infrastructure. KUMA supports updates from Kaspersky servers and from custom sources, including offline updates using the update mirror mechanism. If you have other Kaspersky products in your infrastructure, you can connect KUMA to their existing update mirrors. The update subsystem expands KUMA's ability to respond to changes in the threat landscape and the infrastructure. The ability to use it without direct Internet access helps keep the data processed by the system private.
To update resources, perform the following steps:
- Update the repository to deliver the resource packages to the repository. The repository update is available in two modes:
- Automatic update
- Manual update
- Import the resource packages from the updated repository into the tenant.
For the service to start using the resources, make sure that the updated resources are mapped after performing the import. If necessary, link the resources to collectors, correlators, or agents, and update the settings.
To enable automatic update:
- In the Settings → Repository update section, configure the Data refresh interval in hours. The default value is 24 hours.
- Specify the Update source. The following options are available:
- Kaspersky update servers.
You can view the list of update servers in the Knowledge Base.
- Custom source:
- The URL to the shared folder on the HTTP server.
- The full path to the local folder on the host where the KUMA Core is installed.
If a local folder is used, the kuma system user must have read access to this folder and its contents.
- If necessary, in the Proxy server list, select an existing proxy server to be used when running the Repository update task.
You can also create a new proxy server by clicking the add button.
- Specify the Emails for notification by clicking the Add button. The notifications that new packages or new versions of the packages imported into the tenant are available in the repository are sent to the specified email addresses.
If you specify the email address of a KUMA user, the Receive email notifications check box must be selected in the user profile. For emails that do not belong to any KUMA user, the messages are received without additional settings. The settings for connecting to the SMTP server must be specified in all cases.
- Click Save. The update task starts shortly. Then the task restarts according to the schedule.
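If a local folder is used as the update source, the read-access requirement for the kuma system user can be checked and granted along these lines. This is only a sketch: the folder path is hypothetical, and a temporary directory stands in for it here.

```shell
# Sketch: make a local update folder readable by the kuma system user.
# /srv/kuma-updates is a hypothetical example path; mktemp stands in for it.
UPDATE_DIR=$(mktemp -d)                 # stand-in for e.g. /srv/kuma-updates
echo "demo-package" > "$UPDATE_DIR/pkg.dat"

# Grant read (and directory-traverse) permissions to all users; a narrower
# ACL such as "setfacl -R -m u:kuma:rX" would also work where ACLs are enabled.
chmod -R a+rX "$UPDATE_DIR"

# Verify readability. On the KUMA Core host, run the check as the kuma user:
#   sudo -u kuma test -r /srv/kuma-updates/pkg.dat
test -r "$UPDATE_DIR/pkg.dat" && echo "readable"
```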
To manually start the repository update:
- To disable automatic updates, in the Settings → Repository update section, select the Disable automatic update check box. This check box is cleared by default. You can also start a manual repository update without disabling automatic update. Starting an update manually does not affect the automatic update schedule.
- Specify the Update source. The following options are available:
- Kaspersky update servers.
- Custom source:
- The URL to the shared folder on the HTTP server.
- The full path to the local folder on the host where the KUMA Core is installed.
If a local folder is used, the kuma user must have access to this folder and its contents.
- If necessary, in the Proxy server list, select an existing proxy server to be used when running the Repository update task.
You can also create a new proxy server by clicking the add button.
- Specify the Emails for notification by clicking the Add button. The notifications that new packages or new versions of the packages imported into the tenant are available in the repository are sent to the specified email addresses.
If you specify the email address of a KUMA user, the Receive email notifications check box must be selected in the user profile. For emails that do not belong to any KUMA user, the messages are received without additional settings. The settings for connecting to the SMTP server must be specified in all cases.
- Click Run update. This saves the settings and manually starts the Repository update task.
Configuring a custom source using Kaspersky Update Utility
You can update resources without Internet access by using a custom update source via the Kaspersky Update Utility.
Configuration consists of the following steps:
- Configuring a custom source using Kaspersky Update Utility:
- Installing and configuring Kaspersky Update Utility on one of the computers in the corporate LAN.
- Configuring copying of updates to a shared folder in Kaspersky Update Utility settings.
- Configuring update of the KUMA repository from a custom source.
Configuring a custom source using Kaspersky Update Utility:
You can download the Kaspersky Update Utility distribution kit from the Kaspersky Technical Support website.
- In Kaspersky Update Utility, enable the download of updates for KUMA 2.1:
- Under Applications – Perimeter control, select the check box next to KUMA 2.1 to enable the update capability.
- If you work with Kaspersky Update Utility using the command line, add the following line to the [ComponentSettings] section of the updater.ini configuration file, or specify the true value for the existing line:
KasperskyUnifiedMonitoringAndAnalysisPlatform_3_4=true
- In the Downloads section, specify the update source. By default, Kaspersky update servers are used as the update source.
- In the Downloads section, in the Update folders group of settings, specify the shared folder for Kaspersky Update Utility to download updates to. The following options are available:
- Specify the local folder on the host where Kaspersky Update Utility is installed. Deploy the HTTP server for distributing updates and publish the local folder on it. In KUMA, in the Settings → Repository update → Custom source section, specify the URL of the local folder published on the HTTP server.
- Specify the local folder on the host where the Kaspersky Update Utility is installed. Make this local folder available over the network. Mount the network-accessible local folder on the host where KUMA is installed. In KUMA, in the Settings → Repository update → Custom source section, specify the full path to the local folder.
For detailed information about working with Kaspersky Update Utility, refer to the Kaspersky Knowledge Base.
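The first of the two options above can be sketched as follows. The folder path and port are assumptions, and Python's built-in http.server is only a lab stand-in for a production web server such as nginx.

```shell
# Sketch: publish the folder that Kaspersky Update Utility downloads updates
# to over HTTP, so its URL can be entered as a custom update source in KUMA
# (Settings → Repository update → Custom source).
UPDATES=$(mktemp -d)                    # stand-in for e.g. /srv/kuma-updates
echo "package-data" > "$UPDATES/pkg.dat"

# Serve the folder on an arbitrary free port (8123 is an assumption).
( cd "$UPDATES" && exec python3 -m http.server 8123 ) >/dev/null 2>&1 &
SRV_PID=$!
sleep 1

# KUMA would fetch from http://<host>:8123/ ; verify the file is served:
OUT=$(curl -s http://127.0.0.1:8123/pkg.dat)
echo "$OUT"

kill "$SRV_PID"
```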
Exporting resources
If shared resources are hidden for a user, the user cannot export shared resources or resources that use shared resources.
To export resources:
- In the Resources section, click Export resources.
The Export resources window opens with the tree of all available resources.
- In the Password field, enter the password that will be used to protect the exported data.
- In the Tenant drop-down list, select the tenant whose resources you want to export.
- Select the check boxes next to the resources that you want to export.
If the selected resources are linked to other resources, the linked resources are exported too. The number of selected resources is displayed in the lower part of the table.
- Click the Export button.
The current versions of the resources are saved in a password-protected file on your computer in accordance with your browser settings. Previous versions of the resources are not saved in the file. Secret resources are exported blank.
To export a previous version of a resource:
- In the KUMA Console, in the Resources section, select the type of resources that you need.
This opens a window with a table of available resources of this type.
If you want to view all resources, in the Resources section, go to the List tab.
- Select the check box for the resource whose change history you want to view, and click the Show version history button in the upper part of the table.
This opens the window with the version history of the resource.
- Click the row of the version that you want to export and click the Export button in the lower part of the displayed window.
You can only export a previous version of a resource. The Export button is not displayed when the current version of the resource is selected.
The resource version is saved in a JSON file on your computer in accordance with your browser settings.
Importing resources
In KUMA 3.4, we recommend using resources from the "[OOTB] KUMA 3.4 resources" package and resources published in the repository after the release of this package.
To import resources:
- In the Resources section, click Import resources.
The Resource import window opens.
- In the Tenant drop-down list, select the tenant to assign the imported resources to.
- In the Import source drop-down list, select one of the following options:
- File
If you select this option, enter the password and click the Import button.
- Repository
If you select this option, a list of packages available for import is displayed. We recommend making sure that the repository update date is relatively recent, and configuring automatic updates if necessary.
You can select one or more packages to import and click the Import button. The dependent resources of the Shared tenant are imported into the Shared tenant, the rest of the resources are imported into the selected tenant. You do not need special rights for the Shared tenant; you must only have the right to import in the selected tenant.
Imported resources marked as "This resource is a part of the package. You can delete it, but it is impossible to edit." can only be deleted. To rename, edit or move an imported resource, make a copy of the resource using the Duplicate button and perform the desired actions with the resource copy. When importing future versions of the package, the duplicate is not updated because it is a separate object.
Imported resources in the "Integration" directory can be edited; such resources are marked as "This resource is a part of the package". A Dictionary of the "Table" type can be added to the batch resource located in the "Integration" directory; adding other resources is not allowed. When importing future versions of the package, the edited resource will not be replaced with the corresponding resource from the package, which allows you to keep the changes you made.
- Resolve the conflicts between the resources imported from the file and the existing resources if they occur. Read more about resource conflicts below.
- If the name, type, and guid of an imported resource fully match the name, type, and guid of an existing resource, the Conflicts window opens with the table displaying the type and the name of the conflicting resources. Resolve displayed conflicts:
- To replace the existing resource with a new one, click Replace.
To replace all conflicting resources, click Replace all.
- To leave the existing resource, click Skip.
For dependent resources, that is, resources that are associated with other resources, the Skip option is not available; you can only Replace dependent resources.
To keep all existing resources, click Skip all.
- To replace the existing resource with a new one, click Replace.
- Click the Resolve button.
The resources are imported to KUMA. The Secret resources are imported blank.
Importing resources that use the extended event schema
If you import a normalizer that uses one or more fields of the extended event schema, KUMA automatically creates an extended schema field that is used in the normalizer.
If you import other types of resources that use fields of the extended event schema in their logic, the resources are imported successfully. To make sure the imported resources work as intended, you need to create the corresponding extended schema fields in the Settings → Extended event schema fields section or import a normalizer that uses the required fields.
If a normalizer that uses an extended event schema field is imported into KUMA and the same field already exists in KUMA, the previously created field is used.
If a normalizer is imported into KUMA that uses an extended event schema field that does not meet the KUMA requirements, the import completes, but the extended event schema field is created with the Disabled status, and you cannot use this field in other normalizers and resources. A field does not meet the requirements if, for example, its name contains special characters or spaces. If you want to use such a field, fix the problems (for example, by renaming the field) and then enable it.
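As an illustration of the naming constraint described above, a field-name check might look like the following. The allowed pattern (letters, digits, and underscores, starting with a letter) is an assumption made for this sketch; the documentation only states that special characters and spaces are not allowed.

```shell
# Sketch: validate an extended event schema field name. The exact rules
# KUMA applies are an assumption; this only rejects spaces and special
# characters, in the spirit of the requirement above.
is_valid_field_name() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z][A-Za-z0-9_]*$'
}

is_valid_field_name "CustomFieldOne" && echo "accepted"
is_valid_field_name "bad name!"      || echo "rejected"
```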
About conflict resolving
When resources are imported into KUMA from a file, they are compared with existing resources; the following parameters are compared:
- Name and kind. If an imported resource's name and kind parameters match those of the existing one, the imported resource's name is automatically changed.
- ID. If identifiers of two resources match, a conflict appears that must be resolved by the user. This could happen when you import resources to the same KUMA server from which they were exported.
When resolving a conflict, you can choose either to replace the existing resource with the imported one, or to keep the existing resource and skip the imported one.
If you choose to replace the existing resource, the imported resource is added as a new version of the existing resource. An "imported resource" comment is added to this version.
Some resources are linked: for example, in some types of connectors, the connector secret must be specified. The secrets are also imported if they are linked to a connector. Such linked resources are exported and imported together.
Special considerations of import:
- Resources are imported to the selected tenant.
- If a linked resource was in the Shared tenant, it ends up in the Shared tenant when imported.
- In the Conflicts window, the Parent column always displays the top-most parent resource among those that were selected during import.
- If a conflict occurs during import and you choose to replace an existing resource with a new one, all other resources linked to the replaced resource are automatically replaced with the imported resources.
Known errors:
- The linked resource ends up in the tenant specified during the import, and not in the Shared tenant, as indicated in the Conflicts window, under the following conditions:
- The linked resource is initially in the Shared tenant.
- In the Conflicts window, you select Skip for all parent objects of the linked resource from the Shared tenant.
- You leave the linked resource from the Shared tenant for replacement.
- After importing, the categories do not have a tenant specified in the filter under the following conditions:
- The filter contains linked asset categories from different tenants.
- Asset category names are the same.
- You are importing this filter with linked asset categories to a new server.
- In Tenant 1, the name of the asset category is duplicated under the following conditions:
- in Tenant 1, you have a filter with linked asset categories from Tenant 1 and the Shared tenant.
- The names of the linked asset categories are the same.
- You are importing such a filter from Tenant 1 to the Shared tenant.
- You cannot import conflicting resources into the same tenant.
The error "Unable to import conflicting resources into the same tenant" means that the imported package contains conflicting resources from different tenants and cannot be imported into the Shared tenant.
Solution: Select a tenant other than Shared to import the package. In this case, during the import, resources originally located in the Shared tenant are imported into the Shared tenant, and resources from the other tenant are imported into the tenant selected during import.
- Only the general administrator can import categories into the Shared tenant.
The error "Only the general administrator can import categories into the Shared tenant" means that the imported package contains resources with linked shared asset categories. You can see the categories or resources with linked shared asset categories in the KUMA Core log. Path to the Core log:
/opt/kaspersky/kuma/core/log/core
Solution. Choose one of the following options:
- Do not import resources to which shared categories are linked: clear the check boxes next to the relevant resources.
- Perform the import under a General administrator account.
- Only the general administrator can import resources into the Shared tenant.
The error "Only the general administrator can import resources into the Shared tenant" means that the imported package contains resources with linked shared resources. You can see the resources with linked shared resources in the KUMA Core log. Path to the Core log:
/opt/kaspersky/kuma/core/log/core
Solution. Choose one of the following options:
- Do not import resources that have linked resources from the Shared tenant, and the shared resources themselves: clear the check boxes next to the relevant resources.
- Perform the import under a General administrator account.
Tag management
To help you manage resources, the KUMA Console lets you add tags to resources. You can use tags to search for resources, as well as link tags to and unlink tags from resources.
You cannot add tags to resources that are created from the interface of another resource; tags can be added only in the resource's own card. You also cannot add tags to a resource that is not editable.
The list of tags is displayed in the Settings → Tags section and is displayed as a table with the following columns: Name, Tenant, Used in resources.
In the Tags table, you can:
- Sort tags by the Name and Used in resources columns.
- Filter by values of the Tenant column.
- Find a tag by name.
- Go to the list of resources that have the selected tag.
Adding a tag
To add a tag:
- Go to the Resources section and select a resource.
- In the panel above the table, click Add.
- In the Tags field of the selected resource, add a new tag, or select a tag from the list.
- Click Create.
The new tag is added.
You can also select one of the existing tags instead of creating a new one.
When adding a tag, keep in mind the following special considerations:
- You can add multiple tags.
- A tag can contain characters of various alphabets (for example, Cyrillic, Latin, or Greek characters), numerals, underscores, and spaces.
- A tag may not contain any special characters other than the underscore and the space.
- You can enter the tag in uppercase or lowercase, but after saving, the tag is always displayed in lowercase.
- The tag inherits the tenant of the resource in which it is used.
- A tag is part of a resource and exists as long as the resource exists in which the tag was created or is used.
- Tags are unique within a tenant.
- Tags are imported or exported together with the resource as part of the resource.
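The tag rules listed above can be summarized in a small validation sketch. The function names are hypothetical, and the real KUMA validation may differ in details; only the character rules, lowercasing, and per-tenant uniqueness come from the text.

```python
import re

# Letters of any alphabet, digits, underscores, and spaces are allowed;
# in Python, \w already covers Unicode letters, digits, and "_".
_TAG_RE = re.compile(r"^[\w ]+$")

def normalize_tag(raw: str) -> str:
    """Validate a tag against the documented rules and return its stored form."""
    if not _TAG_RE.fullmatch(raw):
        raise ValueError(f"tag contains forbidden characters: {raw!r}")
    return raw.lower()  # tags are always displayed in lowercase after saving

def add_tag(tenant_tags: set, raw: str) -> set:
    """Tags are unique within a tenant, so adding a duplicate is a no-op."""
    tenant_tags.add(normalize_tag(raw))
    return tenant_tags
```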
Searching by tags
In the Resources section, you can search for resources:
- By tags
- By resource name
The search is performed across all resource types and services.
The search results display a list of resources and services.
To find resources by tags:
- Go to the Resources section and select a resource.
- In the table of the resource, select the Tags column.
- In the Search field that is displayed, enter or select a tag name.
This displays a list of the resources in which the specified tag is used.
In the list of resources, you can:
- Sort the list by name and type of resource or service.
- Filter resources or services by resource or service type, or by tag.
- Link or unlink tags.
Linking and unlinking tags
To link tags to a resource or unlink tags from a resource:
- Go to the Resources section.
- Select the List tab.
- In the Name column, select the check boxes next to the relevant resources.
- In the panel above the list, select the Tags tab.
- Click the Link or Unlink button and select the tags that you want to link or unlink.
The selected tags are linked to or unlinked from the resources.
Resource usage tracing
For stable operation of KUMA, it is important to understand how some resources affect the performance of other resources, what connections exist between resources and other KUMA objects. You can visualize these interdependencies on an interactive graph in the KUMA Console.
Displaying the links of a resource on a graph
To display the relations of the selected resource:
- In the KUMA Console, in the Resources section, select a resource type.
A list of resources of the selected type is displayed.
- Select the resource that you need.
The Show dependencies button in the panel above the list of resources becomes active. On a narrow display, the button may be hidden under the icon.
- Click the Show dependencies button.
This opens a window with the dependency graph of the selected resource. If you do not have the rights to view a resource, it is marked in the graph with the inaccessible resource icon. If necessary, you can close the graph window to go back to the list of resources.
Resource dependency graph
The graph displays all relations that are formed based on the universal unique identifier (UUID) of resources used in the configuration of the resource selected for display, as well as relations of resources that have the UUID of the selected resource in their configuration. Downward links, that is, resources referenced (used) by the selected resource, are displayed down to the last level, while for upward links, that is, resources that reference the selected resource, only one level is displayed.
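The depth rules described above (downward links followed to the last level, upward links shown for one level only) can be sketched as a traversal. The data model here is an assumption: a plain mapping from a resource UUID to the UUIDs referenced in its configuration.

```python
def build_graph(resources: dict, selected: str) -> set:
    """Return the set of (user, used) edges shown for the selected resource."""
    # Downward links: follow "uses" references transitively, to the last level.
    down, stack = set(), [selected]
    while stack:
        node = stack.pop()
        for used in resources.get(node, []):
            if (node, used) not in down:
                down.add((node, used))
                stack.append(used)
    # Upward links: only one level of resources that reference the selection.
    up = {(r, selected) for r, uses in resources.items() if selected in uses}
    return down | up
```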
On the graph, you can view the dependencies of the following resources:
- Correlation rules
- Aggregation rules
- Enrichment rules
- Response rules
- Data mining rules
- Normalizers
- Connectors
- Destinations
- Filters
- Notification templates
- Active lists
- Dictionaries
- Proxy servers
- Secrets
- Context tables
- Collectors. If a collector was initially selected for displaying links, "upward" links are not displayed.
- Correlators
- Storages
- Agents (autoagents). If an agent is selected for displaying links, the collector is displayed with the linked relation type only if the collector is running as a service and the collector is correctly specified (FQDN and port) as the destination of the agent.
- Event routers. If an event router was initially selected for displaying links, "upward" links are not displayed.
- Integrations. The name of the integration corresponds to the name of the tab in the Integrations section.
- Resource group. The number before the parentheses indicates the number of resources from the group displayed in the graph; the number in parentheses indicates the total number of resources in the group.
- Inaccessible resource (if you do not have the rights to view it).
Clicking a resource node lets you view the following information about the resource:
- Name
Contains a link to the resource; clicking the link opens the resource in a separate tab; this does not close the graph window.
- Type
- Path
Resource path without a link.
- Tags
- Tenant
- Package name
You can open the context menu of the resource and perform the following actions:
- Show relations of resource
The dependencies of the selected resource are displayed.
- Hide resource on graph
The selected resource is hidden. Resources at the lower level that the selected resource references are marked with "*" as having hidden links. Resources that refer to a hidden resource are marked with the icon as having hidden links. In this case, the graph becomes disconnected.
- Hide "downward" relations of resource on graph
The selected resource remains. Only those lower-level resources that do not have any links remaining on the first higher level on the graph are hidden. Resources referenced by resources of the first (hidden) level are marked with "*" as having hidden links.
- Hide all resources of this type on graph
All resources of the selected type are hidden. This operation is applied to each resource of the selected type.
- Update resource relations
You can update the resource state if the resource was edited by another user while you were managing the graph. Only changes of visible links are displayed.
- Group
If there is no group node on the screen: the group node appears on the screen, and resources of the same type as the selected resource, as well as resources that refer to the same resource, are hidden. The edges are redrawn from the group. The Group button is available only when more than 10 links to resources of the same type exist.
If there is a group node on the screen: the resource is hidden and added to the group, the edges are redrawn from the group.
Several types of relations are displayed on the graph:
- Solid line without a caption.
Represents a direct link by UUID, including the use of secrets and proxies in integrations.
- Line captioned <function_name>.
Represents using an active list in a correlation rule.
- Dotted line captioned linked.
Represents a link by URL, for example, of a destination with a collector, or of a destination with a storage.
Resources created inline are shown on the graph as a dotted line with the linked type.
We do not recommend building large dependency graphs; limit the graph to 100 nodes or fewer.
When you open the graph, the resource selected for display is highlighted with a blinking circle for some time to set it apart graphically from other resources and draw attention to it.
You can look at the map of the graph to get an idea of where you are on the graph. You can use the selector and move it to display the necessary part of the graph.
By clicking the Arrange button, you can improve the display of resources on the graph.
If you select Show links, the focus on the graph does not change, and the resources are displayed so that you do not have to return to where you started.
When you select a group node in the graph, a sidebar is displayed, in which you can hide or show the resources that are part of the group. To do so, select the check box next to the relevant resource and click the Show on graph or Hide on graph button.
The graph retains its state if you displayed something on the graph, then switched to editing a resource, and then reopened the graph tab.
The previously displayed resources on the graph remain in their places when new resources are added to the graph.
When you close the graph, all changes are discarded.
After the resource links are drawn on the graph, you can search for a node:
- By name
- By tag
- By path
- By package
Nodes, including groups that match the selection criterion, are highlighted with a yellow circle.
You can filter the graph by resource type:
- Hide or show resources of a certain type.
- Hide resources of multiple types, or display all types of resources again.
With the filter window closed, you can tell the selected filters by the indicator, a red dot in the toolbar.
Your actions when managing the graph (the last 50 actions) are saved in memory; you can undo changes by pressing Ctrl/Command+Z.
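The bounded action history can be pictured as a fixed-size stack. The sketch below is a simplification: only the 50-item limit comes from the text, and the action representation is a placeholder.

```python
from collections import deque

class UndoHistory:
    """Keep only the most recent actions; older ones are forgotten."""

    def __init__(self, limit: int = 50):
        self._actions = deque(maxlen=limit)  # oldest entries fall off automatically

    def record(self, action) -> None:
        self._actions.append(action)

    def undo(self):
        """Return the most recent action to revert, or None if history is empty."""
        return self._actions.pop() if self._actions else None
```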
You can save the displayed graph to an SVG file. Only the visible part of the graph is saved in the file.
Resource versioning
KUMA stores the change history of resources in the form of versions. A resource version is created automatically when you create a new resource or save changes made to the settings of an existing resource.
The change history is not available for the Dictionaries resource. To save the history of dictionaries, you can export data.
Resource versions are retained for the duration specified in the Settings section. When the age of a resource version reaches the specified value, the version is automatically deleted.
You can view the change history of KUMA resources, compare versions, and restore a previous version of a resource, for example, if it fails and you need to recover it.
To view the change history of a resource:
- In the KUMA Console, in the Resources section, select the type of resources that you need.
This opens a window with a table of available resources of this type.
If you want to view all resources, in the Resources section, go to the List tab.
- Select the check box for the resource whose change history you want to view, and click the Show version history button in the upper part of the table.
This opens a window with a table of saved versions of the selected resource. New resources have only one version, the current version.
For each version, the table displays the following information:
- Version is the serial number of the resource version. When you save changes to the resource and create a new version, the serial number is increased by 1.
The version with the highest number and the most recent publication date reflects the current state of the resource. Version 1 reflects the state of the resource at the moment when it was created.
- Published is the date and time when the resource version was created.
- Author is the login of the user that saved the changes to the resource.
If the changes were made by the system or by the migration script, the displayed value is system.
- Comment is a text comment added by the author when saving changes, or a system comment describing the changes made.
- Retention period is the number of days and the date after which the resource version will be deleted.
If necessary, you can configure the retention period for resource versions.
- Actions is the button that restores the resource version.
You can sort the table of resource versions by the Version, Published, and Author columns by clicking the heading and selecting Ascending or Descending. You can also display only changes made by a specific author or authors in the table by clicking the heading of the Author column and selecting the authors as needed.
If you want to view the status of a resource in a specific version, click that version in the table. This opens a window with the resource of the selected version, in which you can:
- View the settings specified in that version of the resource.
- Restore this version of the resource by clicking the Restore button.
- Export this version of the resource to a JSON file by clicking the Export button.
Comparing resource versions
You can compare any two versions of a resource, for example, if you need to track changes.
To compare versions of a resource:
- In the KUMA Console, in the Resources section, select the type of resources that you need.
This opens a window with a table of available resources of this type.
If you want to view all resources, in the Resources section, go to the List tab.
- Select the check box next to a resource and click the Show version history button in the upper part of the table.
This opens the window with the version history of the resource.
- Select the check boxes next to the two versions of the resource that you want to compare and click the Compare button in the upper part of the table.
This opens the resource version comparison window. Resource fields are displayed as a list or in JSON format. Differences between the two versions are highlighted. You can select other versions to compare using the drop-down lists above the resource fields.
Restoring a resource version
You can restore a previous version of a resource, for example, if you need to recover the resource in case of mistakes made when making changes.
Versions of automatically generated agents cannot be restored separately because they are created when the parent collector is modified. If you want to restore a version of an automatically generated agent, you need to restore the corresponding version of the parent collector.
To restore a previous version of a resource:
- In the KUMA Console, in the Resources section, select the type of resources that you need.
This opens a window with a table of available resources of this type.
If you want to view all resources, in the Resources section, go to the List tab.
- Select the check box next to a resource and click the Show version history button in the upper part of the table.
This opens the window with the version history of the resource.
- In the row of the relevant version, in the Action column, click the Restore button.
You can also restore a version by clicking the row of this version and clicking the Restore button in the lower part of the window.
You can restore only previous versions of a resource; for the current version, the Restore button is not available.
If the structure of the resource has changed after a KUMA update, restoring its saved versions may not be possible.
- Confirm the action and, if necessary, add a comment. If you do not add a comment, the "Restored from v.<number of the restored version>" comment is automatically added to the version.
The resource version is restored as a new version and becomes the current version.
If the resource for which you restored the version is added to the active service, this also changes the state of the service. You must restart the service to apply the resource change.
Configuring the retention period for resource versions
You can change the retention period of resource versions in the KUMA Console in the Settings → General section by changing the Resource history retention period, days setting.
The default setting is 30 days. If you want to keep all versions of resources without time limits, specify 0 (store indefinitely).
Only a user with the General administrator role can view and manage the retention period of resource versions.
The retention period of resource versions is checked daily, and versions of resources that have been stored in KUMA for longer than the specified period are automatically deleted. In the task manager, the Clear resource change history task is created to check the storage duration of resource versions and delete old versions. This task also runs after a restart of the Core component.
You can check the time remaining until a resource version is deleted in the table of versions, in the Retention period column.
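The daily check performed by the Clear resource change history task can be sketched as follows. The `published` field name is an assumption, and the real task operates on KUMA's internal storage rather than on in-memory lists; only the retention semantics (including 0 meaning "store indefinitely") come from the text.

```python
from datetime import datetime, timedelta

def prune_versions(versions: list, retention_days: int, now: datetime) -> list:
    """Drop versions older than the retention period; 0 means keep indefinitely."""
    if retention_days == 0:
        return versions
    cutoff = now - timedelta(days=retention_days)
    return [v for v in versions if v["published"] >= cutoff]
```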
Destinations
Destinations define network settings for sending normalized events. Collectors and correlators use destinations to describe where to send processed events. Typically, correlators and storages act as destinations.
You can specify destination settings on the Basic settings and Advanced settings tabs. The available settings depend on the selected type of destination.
Destinations can have the following types:
- internal – Used for receiving data from KUMA services using the 'internal' protocol.
- nats-jetstream – Used for communication through NATS.
- tcp – Used for communication over TCP.
- http – Used for communication over the HTTP protocol.
- diode – Used to transmit events using a data diode.
- kafka – Used for communication with Kafka.
- file – Used for writing to a file.
- storage – Used for sending data to storage.
- correlator – Used for sending data to a correlator.
- eventRouter – Used for sending events to an event router.
Destination, internal type
Destinations of the internal type are used for receiving data from KUMA services over the 'internal' protocol. You can send the following data over the 'internal' protocol:
- Internal data, such as event routes.
- File attributes. If, when creating the collector, you specified a connector of the file, 1c-xml, or 1c-log type at the Transport step of the installation wizard, then at the Event parsing step, in the Mapping table, you can pass the name of the file being processed by the collector, or the path to the file, in the KUMA event field. To do this, in the Source column, specify one of the following values:
  - $kuma_fileSourceName to pass the name of the file being processed by the collector in the KUMA event field.
  - $kuma_fileSourcePath to pass the path to the file being processed by the collector in the KUMA event field.
  When you use a file, 1c-xml, or 1c-log connector, these variables in the normalizer work only with destinations of the internal type.
- Events to the event router. The event router can only receive events over the 'internal' protocol, therefore you can only use internal destinations when sending events to the event router.
Settings for a destination of the internal type are described in the following tables.
Basic settings tab
| Setting | Description |
|---|---|
| Name | Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
| Tenant | The name of the tenant that owns the resource. Required setting. |
| State | This toggle switch enables sending events to the destination. It is turned on by default. |
| Type | Destination type: internal. Required setting. |
| URL | URL that you want to connect to. The following URL formats are supported. You can add multiple URLs or remove a URL: to add a URL, click the + Add button; to remove a URL, click the delete icon. Required setting. |
| Tags | Tags for resource search. Optional setting. |
| Description | Description of the resource. The maximum length of the description is 4000 Unicode characters. |
Advanced settings tab
| Setting | Description |
|---|---|
| Buffer size | Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB). |
| Buffer flush interval | Interval (in seconds) for sending events to the destination. The default value is 1 second. |
| Disk buffer size limit | Size of the disk buffer in bytes. The default value is 10 GB. |
| Handlers | Number of handlers that the service can run simultaneously to process response rules in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2. The value must be a positive integer. |
| Output format | Format in which events are sent to the destination. |
| Proxy server | The proxy server for the destination. You can select an existing proxy server or create a new one. To create a new proxy server, select Create new. If you want to edit the settings of an existing proxy server, click the pencil icon. |
| URL selection policy | Method of determining which URL events must be sent to first if you added multiple URLs in the URL field on the Basic settings tab. |
| Health check timeout | Interval, in seconds, for checking the health of the destination. |
| Disk buffer disabled | This toggle switch enables the disk buffer. It is turned on by default. The disk buffer is used if the collector cannot send normalized events to the destination. You can specify the size of the disk buffer in the Disk buffer size limit field. If the disk buffer runs out of free space, new normalized events overwrite old normalized events, starting with the oldest. |
| Timeout | The time, in seconds, for which the destination waits for a response from another service or component. |
| Debug | This toggle switch enables resource logging. It is turned off by default. |
| Filter | Filter for determining which events must be processed by the resource. You can select an existing filter or create a new one. To create a new filter, select Create new. If you want to edit the settings of an existing filter, click the pencil icon. |
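The handler-count formula quoted in the Handlers row, (<number of CPUs> / 2) + 2, can be computed as in this sketch. The function name is hypothetical; KUMA itself only documents the formula.

```python
import os

def recommended_handlers(cpu_count: int = None) -> int:
    """Apply the documented formula, keeping the result a positive integer."""
    cpus = cpu_count if cpu_count is not None else (os.cpu_count() or 1)
    return max(1, cpus // 2 + 2)
```

For example, on an 8-CPU server the formula suggests 6 handlers.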
Destination, nats-jetstream type
Destinations of the nats-jetstream type are used for communication through NATS. Settings for a destination of the nats-jetstream type are described in the following tables.
Basic settings tab
| Setting | Description |
|---|---|
| Name | Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
| Tenant | The name of the tenant that owns the resource. Required setting. |
| State | This toggle switch enables sending events to the destination. It is turned on by default. |
| Type | Destination type: nats-jetstream. Required setting. |
| URL | URL that you want to connect to. The following URL formats are supported. You can add multiple URLs or remove a URL: to add a URL, click the + Add button; to remove a URL, click the delete icon. Required setting. |
| Subject | The topic of NATS messages. Characters are entered in Unicode encoding. Required setting. |
| Authorization | Type of authorization when connecting to the URL specified in the URL field. |
| Tags | Tags for resource search. Optional setting. |
| Description | Description of the resource. The maximum length of the description is 4000 Unicode characters. |
Advanced settings tab
| Setting | Description |
|---|---|
| Buffer size | Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB). |
| Buffer flush interval | Interval (in seconds) for sending events to the destination. The default value is 1 second. |
| Disk buffer size limit | Size of the disk buffer in bytes. The default value is 10 GB. |
| Handlers | Number of handlers that the service can run simultaneously to process response rules in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2. The value must be a positive integer. |
| Output format | Format in which events are sent to the destination. |
| TLS mode | TLS encryption mode. When using TLS encryption, you cannot specify an IP address in the URL field on the Basic settings tab. |
| Compression | Drop-down list for configuring Snappy compression. |
| Delimiter | The character that marks the boundary between events. If you do not select a value in this drop-down list, \n is selected by default. |
| Disk buffer disabled | This toggle switch enables the disk buffer. It is turned on by default. The disk buffer is used if the collector cannot send normalized events to the destination. You can specify the size of the disk buffer in the Disk buffer size limit field. If the disk buffer runs out of free space, new normalized events overwrite old normalized events, starting with the oldest. |
| Timeout | The time, in seconds, for which the destination waits for a response from another service or component. |
| Debug | This toggle switch enables resource logging. It is turned off by default. |
| Filter | Filter for determining which events must be processed by the resource. You can select an existing filter or create a new one. To create a new filter, select Create new. If you want to edit the settings of an existing filter, click the pencil icon. |
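The Delimiter setting frames the outgoing event stream. A minimal sketch of such framing, assuming byte-oriented events and "\n" as the default boundary character:

```python
def frame_events(events: list, delimiter: bytes = b"\n") -> bytes:
    """Join serialized events, terminating each one with the delimiter."""
    return b"".join(event + delimiter for event in events)
```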
Destination, tcp type
Destinations of the tcp type are used for communication over TCP. Settings for a destination of the tcp type are described in the following tables.
Basic settings tab
| Setting | Description |
|---|---|
| Name | Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
| Tenant | The name of the tenant that owns the resource. Required setting. |
| State | This toggle switch enables sending events to the destination. It is turned on by default. |
| Type | Destination type: tcp. Required setting. |
| URL | URL that you want to connect to. The following URL formats are supported. You can add multiple URLs or remove a URL: to add a URL, click the + Add button; to remove a URL, click the delete icon. Required setting. |
| Tags | Tags for resource search. Optional setting. |
| Description | Description of the resource. The maximum length of the description is 4000 Unicode characters. |
Advanced settings tab
| Setting | Description |
|---|---|
| Buffer size | Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB). |
| Buffer flush interval | Interval (in seconds) for sending events to the destination. The default value is 1 second. |
| Disk buffer size limit | Size of the disk buffer in bytes. The default value is 10 GB. |
| Handlers | Number of handlers that the service can run simultaneously to process response rules in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2. The value must be a positive integer. |
| Output format | Format in which events are sent to the destination. |
| TLS mode | TLS encryption mode. When using TLS encryption, you cannot specify an IP address in the URL field on the Basic settings tab. |
| Compression | Drop-down list for configuring Snappy compression. |
| URL selection policy | Method of determining which URL events must be sent to first if you added multiple URLs in the URL field on the Basic settings tab. |
| Delimiter | The character that marks the boundary between events. If you do not select a value in this drop-down list, \n is selected by default. |
| Disk buffer disabled | This toggle switch enables the disk buffer. It is turned on by default. The disk buffer is used if the collector cannot send normalized events to the destination. You can specify the size of the disk buffer in the Disk buffer size limit field. If the disk buffer runs out of free space, new normalized events overwrite old normalized events, starting with the oldest. |
| Timeout | The time, in seconds, for which the destination waits for a response from another service or component. |
| Debug | This toggle switch enables resource logging. It is turned off by default. |
| Filter | Filter for determining which events must be processed by the resource. You can select an existing filter or create a new one. To create a new filter, select Create new. If you want to edit the settings of an existing filter, click the pencil icon. |
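The overwrite-oldest behavior described in the Disk buffer disabled row (new normalized events replace the oldest ones when space runs out) resembles a ring buffer. The sketch below simplifies sizes to event counts; the real buffer is byte-based and kept on disk.

```python
from collections import deque

class OverwritingBuffer:
    def __init__(self, max_events: int):
        self._events = deque(maxlen=max_events)

    def push(self, event) -> None:
        self._events.append(event)  # a full deque silently drops its oldest event

    def drain(self) -> list:
        """Return buffered events in arrival order and empty the buffer."""
        drained = list(self._events)
        self._events.clear()
        return drained
```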
Destination, http type
Destinations of the http type are used for communication over the HTTP protocol. Settings for a destination of the http type are described in the following tables.
Basic settings tab
| Setting | Description |
|---|---|
| Name | Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
| Tenant | The name of the tenant that owns the resource. Required setting. |
| State | This toggle switch enables sending events to the destination. It is turned on by default. |
| Type | Destination type: http. Required setting. |
| URL | URL that you want to connect to. The following URL formats are supported. You can add multiple URLs or remove a URL: to add a URL, click the + Add button; to remove a URL, click the delete icon. Required setting. |
| Authorization | Type of authorization when connecting to the URL specified in the URL field. |
| Tags | Tags for resource search. Optional setting. |
| Description | Description of the resource. The maximum length of the description is 4000 Unicode characters. |
Advanced settings tab
Setting |
Description |
---|---|
Buffer size |
Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB). |
Buffer flush interval |
Interval (in seconds) for sending events to the destination. The default value is 1 second. |
Disk buffer size limit |
Size of the disk buffer in bytes. The default value is 10 GB. |
Handlers |
Number of handlers that the service can run simultaneously to process response rules in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2. The value must be a positive integer. |
Output format |
Format in which events are sent to the destination:
|
TLS mode |
TLS encryption mode. When using TLS encryption, you cannot specify an IP address in the URL field on the Basic settings tab. Available values:
|
Proxy server |
The proxy server for the destination. You can select an existing proxy server or create a new proxy server. To create a new proxy server, select Create new. If you want to edit the settings of an existing proxy server, click the pencil icon. |
Compression |
Drop-down list for configuring Snappy compression:
|
URL selection policy |
Method of determining URLs to which events must be sent first if you added multiple URLs in the URL field on the Basic settings tab:
|
Delimiter |
The character that marks the boundary between events:
If you do not select a value in this drop-down list, \n is selected by default. |
Path |
The path that must be added in the request to the URL specified in the URL field on the Basic settings tab. For example, if you specify |
Health check path |
The URL for sending requests to obtain health information about the system that the destination resource is connecting to. |
Health check |
This toggle switch enables the health check. This toggle switch is turned off by default. |
Disk buffer disabled |
This toggle switch enables the disk buffer. This toggle switch is turned on by default. The disk buffer is used if the collector cannot send normalized events to the destination. You can specify the size of the disk buffer in the Disk buffer size limit field. If the disk buffer runs out of free space, new normalized events will overwrite old normalized events, starting with the oldest. |
Timeout |
The time, in seconds, for which the destination waits for a response from another service or component. |
Debug |
This toggle switch enables resource logging. The toggle switch is turned off by default. |
Filter |
Filter for determining which events must be processed by the resource. You can select an existing filter or create a new filter. To create a new filter, select Create new. If you want to edit the settings of an existing filter, click the pencil icon. |
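The Delimiter setting above frames events with a boundary character (a newline by default). As a hedged illustration of how a sender might assemble a delimited request body (the event structure here is hypothetical, not a KUMA schema):

```python
import json

def build_payload(events, delimiter='\n'):
    """Serialize events and join them with the configured delimiter
    (a newline by default, mirroring the Delimiter setting)."""
    return delimiter.join(json.dumps(e) for e in events).encode()

payload = build_payload([{'id': 1}, {'id': 2}])
print(payload)  # b'{"id": 1}\n{"id": 2}'
```

The receiver splits the stream on the same delimiter to recover individual events, which is why the sender and destination must agree on it.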
Destination, diode type
Destinations of the diode type are used to transmit events using a data diode. Settings for a destination of the diode type are described in the following tables.
Basic settings tab
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
State |
This toggle switch enables sending events to the destination. This toggle switch is turned on by default.
|
Type |
Destination type: diode. Required setting. |
Data diode source directory |
Path to the directory from which the data diode moves events. The maximum length of the path is 255 Unicode characters. Limitations when using prefixes in paths on Windows servers Limitations when using prefixes in paths on Linux servers The paths specified in the Data diode source directory and Temporary directory fields may not be the same. |
Temporary directory |
Path to the directory in which events are prepared for transmission to the data diode. The maximum length of the path is 255 Unicode characters. Events are stored in a file when a timeout or a buffer overflow occurs. The default timeout is 10 seconds. The prepared file with events is moved to the directory specified in the Data diode source directory field. The checksum (SHA-256) of the file contents is used as the name of the file with events. The paths specified in the Data diode source directory and Temporary directory fields may not be the same. |
Tags |
Tags for resource search. Optional setting. |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
Advanced settings tab
Setting |
Description |
---|---|
Buffer size |
Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB). |
Buffer flush interval |
Interval (in seconds) for sending events to the destination. The default value is 1 second. |
Handlers |
Number of handlers that the service can run simultaneously to process response rules in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2. The value must be a positive integer. |
Compression |
Drop-down list for configuring Snappy compression:
|
Delimiter |
The character that marks the boundary between events:
If you do not select a value in this drop-down list, \n is selected by default. |
Debug |
This toggle switch enables resource logging. The toggle switch is turned off by default. |
Filter |
Filter for determining which events must be processed by the resource. You can select an existing filter or create a new filter. To create a new filter, select Create new. If you want to edit the settings of an existing filter, click the pencil icon. |
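As described for the Temporary directory setting, the prepared event file is named after the SHA-256 checksum of its contents. That naming scheme can be reproduced in a few lines:

```python
import hashlib

def diode_file_name(file_contents: bytes) -> str:
    """Name for a prepared event file: the SHA-256 checksum of its
    contents, as used by the diode destination's temporary directory."""
    return hashlib.sha256(file_contents).hexdigest()

name = diode_file_name(b'event1\nevent2\n')
print(name)  # a 64-character hexadecimal digest
```

Because the name is derived from the contents, identical batches produce identical file names, which makes duplicate transfers easy to detect on the receiving side.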
Destination, kafka type
Destinations of the kafka type are used for Kafka communications. Settings for a destination of the kafka type are described in the following tables.
Basic settings tab
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
State |
This toggle switch enables sending events to the destination. This toggle switch is turned on by default.
|
Type |
Destination type: kafka. Required setting. |
URL |
URL that you want to connect to. The following URL formats are supported:
You can add multiple URLs or remove a URL. To add a URL, click the + Add button. To remove a URL, click the delete icon. Required setting. |
Topic |
The topic of Kafka messages. The maximum length of the topic name is 255 characters. You can use the following characters: a–z, A–Z, 0–9, ".", "_", "-". Required setting. |
Authorization |
Type of authorization when connecting to the URL specified in the URL field:
|
Tags |
Tags for resource search. Optional setting. |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
Advanced settings tab
Setting |
Description |
---|---|
Buffer size |
Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB). |
Buffer flush interval |
Interval (in seconds) for sending events to the destination. The default value is 1 second. |
Disk buffer size limit |
Size of the disk buffer in bytes. The default value is 10 GB. |
Handlers |
Number of handlers that the service can run simultaneously to process response rules in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2. The value must be a positive integer. |
Output format |
Format in which events are sent to the destination:
|
TLS mode |
TLS encryption mode. When using TLS encryption, you cannot specify an IP address in the URL field on the Basic settings tab. Available values:
|
Delimiter |
The character that marks the boundary between events:
If you do not select a value in this drop-down list, \n is selected by default. |
Disk buffer disabled |
This toggle switch enables the disk buffer. This toggle switch is turned on by default. The disk buffer is used if the collector cannot send normalized events to the destination. You can specify the size of the disk buffer in the Disk buffer size limit field. If the disk buffer runs out of free space, new normalized events will overwrite old normalized events, starting with the oldest. |
Timeout |
The time, in seconds, for which the destination waits for a response from another service or component. |
Debug |
This toggle switch enables resource logging. The toggle switch is turned off by default. |
Filter |
Filter for determining which events must be processed by the resource. You can select an existing filter or create a new filter. To create a new filter, select Create new. If you want to edit the settings of an existing filter, click the pencil icon. |
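The Topic setting above limits Kafka topic names to 255 characters drawn from a–z, A–Z, 0–9, ".", "_", and "-". A small validator expressing exactly those documented constraints:

```python
import re

# Allowed characters and length from the Topic setting description.
TOPIC_RE = re.compile(r'^[a-zA-Z0-9._-]{1,255}$')

def is_valid_topic(topic: str) -> bool:
    """Check a Kafka topic name against the documented constraints."""
    return bool(TOPIC_RE.match(topic))

print(is_valid_topic('kuma.events_raw-1'))  # True
print(is_valid_topic('bad topic!'))         # False: space and "!" not allowed
```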
Destination, file type
Destinations of the file type are used for writing to a file. Settings for a destination of the file type are described in the following tables.
When deleting a destination of the file type that is being used in a service, you must restart the service.
Basic settings tab
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
State |
This toggle switch enables sending events to the destination. This toggle switch is turned on by default.
|
Type |
Destination type: file. Required setting. |
URL |
Path to the file to which the events must be written. Limitations when using prefixes in file paths Required setting. |
Tags |
Tags for resource search. Optional setting. |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
Advanced settings tab
Setting |
Description |
---|---|
Buffer size |
Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB). |
Buffer flush interval |
Interval (in seconds) for sending events to the destination. The default value is 1 second. |
Disk buffer size limit |
Size of the disk buffer in bytes. The default value is 10 GB. |
Handlers |
Number of handlers that the service can run simultaneously to process response rules in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2. The value must be a positive integer. |
Output format |
Format in which events are sent to the destination:
|
Delimiter |
The character that marks the boundary between events:
If you do not select a value in this drop-down list, \n is selected by default. |
Disk buffer disabled |
This toggle switch enables the disk buffer. This toggle switch is turned on by default. The disk buffer is used if the collector cannot send normalized events to the destination. You can specify the size of the disk buffer in the Disk buffer size limit field. If the disk buffer runs out of free space, new normalized events will overwrite old normalized events, starting with the oldest. |
Debug |
This toggle switch enables resource logging. The toggle switch is turned off by default. |
Filter |
Filter for determining which events must be processed by the resource. You can select an existing filter or create a new filter. To create a new filter, select Create new. If you want to edit the settings of an existing filter, click the pencil icon. |
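The Buffer size and Buffer flush interval settings above describe a RAM buffer that is flushed when either the size limit or the time interval is reached. A minimal sketch of that behavior, using the documented defaults (1 MB, 1 second); this is an illustration, not the actual KUMA implementation:

```python
import time

class EventBuffer:
    """Accumulate events in RAM until the size limit or flush interval
    is reached, then hand off the batch (illustrative model only)."""

    def __init__(self, max_bytes=1_048_576, flush_interval=1.0):
        self.max_bytes = max_bytes          # documented default: 1 MB
        self.flush_interval = flush_interval  # documented default: 1 second
        self.events = []
        self.size = 0
        self.last_flush = time.monotonic()

    def add(self, event):
        """Buffer one event; return a batch if a flush was triggered."""
        self.events.append(event)
        self.size += len(event)
        due = time.monotonic() - self.last_flush >= self.flush_interval
        if self.size >= self.max_bytes or due:
            return self.flush()
        return None

    def flush(self):
        batch, self.events, self.size = self.events, [], 0
        self.last_flush = time.monotonic()
        return batch

buf = EventBuffer(max_bytes=10)
print(buf.add(b'short'))     # None: under both limits
print(buf.add(b'overflow'))  # [b'short', b'overflow']: size limit reached
```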
Destination, storage type
Destinations of the storage type are used for sending data to storage. Settings for a destination of the storage type are described in the following tables.
Basic settings tab
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
State |
This toggle switch enables sending events to the destination. This toggle switch is turned on by default.
|
Type |
Destination type: storage. Required setting. |
URL |
URL that you want to connect to. The following URL formats are supported:
You can add multiple URLs or remove a URL. To add a URL, click the + Add button. To remove a URL, click the delete icon. Required setting. |
Tags |
Tags for resource search. Optional setting. |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
Advanced settings tab
Setting |
Description |
---|---|
Buffer size |
Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB). |
Buffer flush interval |
Interval (in seconds) for sending events to the destination. The default value is 1 second. |
Disk buffer size limit |
Size of the disk buffer in bytes. The default value is 10 GB. |
Handlers |
Number of handlers that the service can run simultaneously to process response rules in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2. The value must be a positive integer. |
Proxy server |
The proxy server for the destination. You can select an existing proxy server or create a new proxy server. To create a new proxy server, select Create new. If you want to edit the settings of an existing proxy server, click the pencil icon. |
URL selection policy |
Method of determining URLs to which events must be sent first if you added multiple URLs in the URL field on the Basic settings tab:
|
Health check timeout |
Interval, in seconds, for checking the health of the destination. |
Disk buffer disabled |
This toggle switch enables the disk buffer. This toggle switch is turned on by default. The disk buffer is used if the collector cannot send normalized events to the destination. You can specify the size of the disk buffer in the Disk buffer size limit field. If the disk buffer runs out of free space, new normalized events will overwrite old normalized events, starting with the oldest. |
Timeout |
The time, in seconds, for which the destination waits for a response from another service or component. |
Debug |
This toggle switch enables resource logging. The toggle switch is turned off by default. |
Filter |
Filter for determining which events must be processed by the resource. You can select an existing filter or create a new filter. To create a new filter, select Create new. If you want to edit the settings of an existing filter, click the pencil icon. |
Destination, correlator type
Destinations of the correlator type are used for sending data to a correlator. Settings for a destination of the correlator type are described in the following tables.
Basic settings tab
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
State |
This toggle switch enables sending events to the destination. This toggle switch is turned on by default.
|
Type |
Destination type: correlator. Required setting. |
URL |
URL that you want to connect to. The following URL formats are supported:
You can add multiple URLs or remove a URL. To add a URL, click the + Add button. To remove a URL, click the delete icon. Required setting. |
Tags |
Tags for resource search. Optional setting. |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
Advanced settings tab
Setting |
Description |
---|---|
Buffer size |
Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB). |
Buffer flush interval |
Interval (in seconds) for sending events to the destination. The default value is 1 second. |
Disk buffer size limit |
Size of the disk buffer in bytes. The default value is 10 GB. |
Handlers |
Number of handlers that the service can run simultaneously to process response rules in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2. The value must be a positive integer. |
Proxy server |
The proxy server for the destination. You can select an existing proxy server or create a new proxy server. To create a new proxy server, select Create new. If you want to edit the settings of an existing proxy server, click the pencil icon. |
URL selection policy |
Method of determining URLs to which events must be sent first if you added multiple URLs in the URL field on the Basic settings tab:
|
Health check timeout |
Interval, in seconds, for checking the health of the destination. |
Disk buffer disabled |
This toggle switch enables the disk buffer. This toggle switch is turned on by default. The disk buffer is used if the collector cannot send normalized events to the destination. You can specify the size of the disk buffer in the Disk buffer size limit field. If the disk buffer runs out of free space, new normalized events will overwrite old normalized events, starting with the oldest. |
Timeout |
The time, in seconds, for which the destination waits for a response from another service or component. |
Debug |
This toggle switch enables resource logging. The toggle switch is turned off by default. |
Filter |
Filter for determining which events must be processed by the resource. You can select an existing filter or create a new filter. To create a new filter, select Create new. If you want to edit the settings of an existing filter, click the pencil icon. |
Destination, eventRouter type
Destinations of the eventRouter type are used for sending events to an event router. Settings for a destination of the eventRouter type are described in the following tables.
Basic settings tab
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
State |
This toggle switch enables sending events to the destination. This toggle switch is turned on by default.
|
Type |
Destination type: eventRouter. Required setting. |
URL |
URL that you want to connect to. The following URL formats are supported:
You can add multiple URLs or remove a URL. To add a URL, click the + Add button. To remove a URL, click the delete icon. Required setting. |
Tags |
Tags for resource search. Optional setting. |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
Advanced settings tab
Setting |
Description |
---|---|
Buffer size |
Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB). |
Buffer flush interval |
Interval (in seconds) for sending events to the destination. The default value is 1 second. |
Disk buffer size limit |
Size of the disk buffer in bytes. The default value is 10 GB. |
Handlers |
Number of handlers that the service can run simultaneously to process response rules in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2. The value must be a positive integer. |
Output format |
Format in which events are sent to the destination:
|
Proxy server |
The proxy server for the destination. You can select an existing proxy server or create a new proxy server. To create a new proxy server, select Create new. If you want to edit the settings of an existing proxy server, click the pencil icon. |
URL selection policy |
Method of determining URLs to which events must be sent first if you added multiple URLs in the URL field on the Basic settings tab:
|
Health check timeout |
Interval, in seconds, for checking the health of the destination. |
Disk buffer disabled |
This toggle switch enables the disk buffer. This toggle switch is turned on by default. The disk buffer is used if the collector cannot send normalized events to the destination. You can specify the size of the disk buffer in the Disk buffer size limit field. If the disk buffer runs out of free space, new normalized events will overwrite old normalized events, starting with the oldest. |
Timeout |
The time, in seconds, for which the destination waits for a response from another service or component. |
Debug |
This toggle switch enables resource logging. The toggle switch is turned off by default. |
Filter |
Filter for determining which events must be processed by the resource. You can select an existing filter or create a new filter. To create a new filter, select Create new. If you want to edit the settings of an existing filter, click the pencil icon. |
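The Handlers setting in the Advanced settings tables above suggests the formula (<number of CPUs> / 2) + 2 and requires a positive integer. A small helper that evaluates it:

```python
import os

def recommended_handlers(cpu_count=None):
    """Handler count per the documented formula: (<number of CPUs> / 2) + 2.
    Falls back to the local CPU count when none is given."""
    cpus = cpu_count if cpu_count is not None else (os.cpu_count() or 1)
    return max(1, cpus // 2 + 2)

print(recommended_handlers(8))  # 6 handlers on an 8-CPU server
print(recommended_handlers(1))  # 2 handlers on a single-CPU server
```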
Predefined destinations
Destinations listed in the table below are included in the KUMA distribution kit.
Predefined destinations
Destination name |
Description |
[OOTB] Correlator |
Sends events to a correlator. |
[OOTB] Storage |
Sends events to storage. |
Normalizers
Normalizers are used for converting raw events that come from various sources in different formats to the KUMA event data model. Normalized events become available for processing by other KUMA resources and services.
A normalizer consists of the main event parsing rule and optional additional event parsing rules. By creating a main parsing rule and a set of additional parsing rules, you can implement complex event processing logic. Data is passed along the tree of parsing rules depending on the conditions specified in the Extra normalization conditions setting. The sequence in which parsing rules are created is significant: the event is processed sequentially and the processing sequence is indicated by arrows.
The following event normalization options are available:
- 1 collector — 1 normalizer
We recommend using this method if you have many events of the same type or many IP addresses from which events of the same type may originate. You can configure one collector with only one normalizer, which is optimal in terms of performance.
- 1 collector — multiple normalizers linked to IP
This method is available for collectors with a connector of UDP, TCP, or HTTP type. If a UDP, TCP, or HTTP connector is specified in the collector at the Transport step, then at the Event parsing step, you can specify multiple IP addresses on the Parsing settings tab and choose the normalizer that you want to use for events coming from the specified addresses. The following types of normalizers are available: json, cef, regexp, syslog, csv, kv, xml. For normalizers of the syslog and regexp types, you can specify extra normalization conditions depending on the value of the DeviceProcessName field.
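The "1 collector — multiple normalizers linked to IP" option above can be pictured as a lookup from source address to normalizer. This is an illustrative model only; the addresses and normalizer assignments below are hypothetical:

```python
# Hypothetical mapping of event source addresses to normalizer types,
# mirroring the per-IP normalizer choice on the Parsing settings tab.
NORMALIZER_BY_IP = {
    '192.0.2.10': 'json',
    '192.0.2.20': 'cef',
}
DEFAULT_NORMALIZER = 'syslog'  # fallback when no address-specific rule matches

def pick_normalizer(source_ip: str) -> str:
    """Choose a normalizer for an event based on its source address."""
    return NORMALIZER_BY_IP.get(source_ip, DEFAULT_NORMALIZER)

print(pick_normalizer('192.0.2.10'))    # json
print(pick_normalizer('198.51.100.1'))  # syslog (no per-IP rule)
```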
A normalizer is created in several steps:
- Preparing to create a normalizer
A normalizer can be created in the KUMA Console:
- In the Resources → Normalizers section.
- When creating a collector, at the Event parsing step.
Then parsing rules must be created in the normalizer.
- Creating the main parsing rule for an event
The main parsing rule is created using the Add event parsing button. This opens the Event parsing window, where you can specify the settings of the main parsing rule:
- Specify event parsing settings.
- Specify event enrichment settings.
The main parsing rule for an event is displayed in the normalizer as a dark circle. You can view or modify the settings of the main parsing rule by clicking this circle. When you hover the mouse over the circle, a plus sign is displayed. Click it to add the parsing rules.
The name of the main parsing rule is used in KUMA as the normalizer name.
- Creating additional event parsing rules
Clicking the plus icon that is displayed when you hover the mouse over the circle or the block corresponding to the normalizer opens the Additional event parsing window where you can specify the settings of the additional parsing rule:
- Specify the conditions for sending data to the new normalizer.
- Specify event parsing settings.
- Specify event enrichment settings.
The additional event parsing rule is displayed in the normalizer as a dark block. The block displays the triggering conditions for the additional parsing rule, the name of the additional parsing rule, and the event field. When this event field is available, the data is passed to the normalizer. Click the block of the additional parsing rule to view or modify its settings.
If you hover the mouse over the additional normalizer, a plus button appears. You can use this button to create a new additional event parsing rule. To delete a normalizer, use the button with the trash icon.
- Completing the creation of the normalizer
To finish the creation of the normalizer, click Save.
In the upper right corner, in the search field, you can search for additional parsing rules by name.
For normalizer resources, you can enable the display of control characters in all input fields except the Description field.
If, when changing the settings of a collector resource set, you change or delete conversions in a normalizer connected to it, the edits will not be saved, and the normalizer itself may be corrupted. If you need to modify conversions in a normalizer that is already part of a service, the changes must be made directly to the normalizer under Resources → Normalizers in the web interface.
Event parsing settings
You can configure the rules for converting incoming events to the KUMA format when creating event parsing rules in the normalizer settings window, on the Normalization scheme tab. Available event parsing settings are listed in the table below.
When normalizing events, you can use extended event schema fields in addition to standard KUMA event schema fields.
Available event parsing settings
Setting |
Description |
---|---|
Name |
Name of the parsing rule. Maximum length of the name: 128 Unicode characters. The name of the main parsing rule is used as the name of the normalizer. Required setting. |
Tenant |
The name of the tenant that owns the resource. This setting is not available for extra parsing rules. |
Parsing method |
The type of incoming events. Depending on the selected parsing method, you can use the predefined event field matching rules or define your own rules. When you select some parsing methods, additional settings may become available that you must specify. Available parsing methods: Required setting. |
Keep raw event |
Keeping raw events in the newly created normalized event. Available values:
Required setting. This setting is not available for extra parsing rules. |
Keep extra fields |
Keep fields and values for which no mapping rules are configured. This data is saved as an array in the Extra event field. By default, no extra fields are saved. Required setting. |
Description |
Description of the resource. Maximum length of the description: 4000 Unicode characters. This setting is not available for extra parsing rules. |
Event examples |
Example of data that you want to process. This setting is not available for the following parsing methods: netflow5, netflow9, sflow5, ipfix, and sql. If the event was parsed successfully, and the type of the data obtained from the raw event matches the type of the KUMA field, the Event examples field is filled with data obtained from the raw event. For example, the |
Mapping |
Settings for configuring the mapping of source event fields to fields of the event in the KUMA format:
You can add new table rows or delete table rows. To add a new table row, click Add row. To delete a single row in the table, click the delete icon. If you have loaded data into the Event examples field, the table will have an Examples column containing examples of values carried over from the raw event field to the KUMA event field. If the size of the KUMA event field is less than the length of the value placed in it, the value is truncated to the size of the event field. |
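The Mapping table described above carries values from source event fields into KUMA event fields, truncating values that exceed the destination field size. A hedged sketch of that behavior with hypothetical field names and sizes:

```python
# Illustrative mapping rows: (source field, KUMA field, KUMA field size).
# The field names and sizes are hypothetical, not the actual KUMA schema.
MAPPING = [
    ('src_ip', 'SourceAddress', 45),
    ('msg',    'Message',       16),
]

def normalize(raw_event: dict) -> dict:
    """Copy mapped source fields into KUMA fields, truncating to size."""
    out = {}
    for src, dst, size in MAPPING:
        if src in raw_event:
            out[dst] = str(raw_event[src])[:size]  # truncate to field size
    return out

event = normalize({'src_ip': '10.0.0.1', 'msg': 'a' * 40})
print(len(event['Message']))  # 16: the 40-character value was truncated
```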
Extended event schema
You can use the extended event schema fields in normalizers for normalizing events and in other KUMA resources, for example, as widget fields or to filter and search for events. You can view the list of all extended event schema fields that exist in KUMA in the Settings → Extended event schema fields section. The list of extended event schema fields is the same for all tenants.
Only users with the General administrator, Tenant administrator, Tier 2 analyst, Tier 1 analyst, Junior analyst, Read shared resources, and Manage shared resources roles can view the table of extended event schema fields.
The Extended event schema fields table contains the following information:
- Type—Data type of the extended event schema field.
- Field name—Name of the extended event schema field, without a type.
You can click the name to edit the settings of the extended event schema field.
- Status—Whether the extended event schema field can be used in resources.
You can Enable or Disable the toggle switch to allow or forbid using this extended event schema field in new resources. However, a disabled field is still used in resource configurations that are already operational, until you manually remove the field from the configuration; the field also remains available in the list of table columns in the Events section for managing old events.
Only a user with the General administrator role can disable an extended event schema field.
- Update date—Date and time of the last modification of the extended event schema field.
- Created by—Name of the user that created the extended event schema field.
- Dependencies—Number of KUMA resources, dashboard layouts, reports, presets, and field sets for searching event sources that use the extended event schema field.
You can click the number to open a pane with a table of all resources and other KUMA entities that are using this field. For each dependency, the table displays the name, tenant (only for resources), and type. Dependencies in the table are sorted by name. Clicking the name of a dependency takes you to its page (except for dashboard layouts, presets, and saved user queries).
You can view the dependencies of an extended event schema field only for resources and entities to whose tenants you have access. If you do not have access to the tenant, its resources are not displayed in the table, but still count towards the number of dependencies.
- Description—Text description of the field.
By default, the table of extended event schema fields is sorted by update date in descending order. If necessary, you can sort the table by clicking a column heading and selecting Ascending or Descending; you can also use context search by field name.
By default, the following service extended event schema fields are automatically added to KUMA:
KL_EventRoute
, typeS
for storing information about the route of the event.You can use this field in normalizers, as a key or value in active lists, in enrichment rules, as a query field in data collection and analysis rules, in correlation rules. You cannot use this field to detect event sources.
- The following fields are added to a correlation event:
  - KL_CorrelationRulePriority, type N
  - KL_SourceAssetDisplayName, type S
  - KL_DestinationAssetDisplayName, type S
  - KL_DeviceAssetDisplayName, type S
  - KL_SourceAccountDisplayName, type S
  - KL_DestinationAccountDisplayName, type S
You cannot use these service fields to search for events.
You cannot edit, delete, export, or disable service fields. All extended event schema fields with the KL_ prefix are service fields and can be managed only from Kaspersky servers. We do not recommend using the KL_ prefix when adding new extended event schema fields.
Adding extended event schema fields
Users with the General administrator, Tenant administrator, Tier 2 analyst, Tier 1 analyst, Junior analyst, and Manage shared resources roles can add new extended event schema fields.
To add an extended event schema field:
- In the KUMA Console, in the Settings → Extended event schema fields section, click the Add button in the upper part of the table.
This opens the Create extended schema pane.
- Enable or disable the Status toggle switch to enable or disable this extended event schema field for resources.
The toggle switch is turned on by default. A disabled field remains available in the list of table columns in the Events section for managing old events.
- In the Type field, select the data type of the extended event schema field.
- In the Name field, specify the name of the extended event schema field.
Consider the following when naming extended event schema fields:
- The name must be unique within the KUMA instance.
- Names are case-sensitive. For example, Field_name and field_name are different names.
- You can use Latin and Cyrillic characters and numerals. Spaces and the " ~ ` @ # $ % ^ & * ( ) + - [ ] { } | \ / . < > ; ! , : ? = characters are not allowed.
- If you want to use the extended event schema fields to search for event sources, you can only use Latin characters and numerals.
- The maximum length is 128 characters.
- If necessary, in the Description field, enter a description for the extended event schema field.
We recommend describing the purpose of the extended event schema field. Only Unicode characters are allowed in the description. The maximum length is 256 characters.
- Click the Save button.
A new extended event schema field is added and displayed at the top of the table. An audit event is generated for the creation of the extended event schema field. If you have enabled the field, you can use it in normalizers and when configuring resources.
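The naming rules above can be sketched as a simple validation check. This is an illustrative sketch, not part of KUMA: the character classes, and the underscore (which appears in the documented example names), are assumptions derived from the rules listed in this section.

```python
import re

# Illustrative validation of extended event schema field names, based on
# the rules above. Assumptions: Latin/Cyrillic letters, digits, and the
# underscore (the underscore appears in the documented example names) are
# allowed; the maximum length is 128 characters.
NAME_RE = re.compile(r"^[0-9A-Za-z_\u0400-\u04FF]{1,128}$")
# Fields intended for detecting event sources: Latin characters and
# numerals only (underscore handling here is an assumption).
SOURCE_NAME_RE = re.compile(r"^[0-9A-Za-z]{1,128}$")

def is_valid_field_name(name: str, for_event_sources: bool = False) -> bool:
    """Return True if the proposed field name satisfies the naming rules."""
    pattern = SOURCE_NAME_RE if for_event_sources else NAME_RE
    return pattern.fullmatch(name) is not None

print(is_valid_field_name("Field_name"))                     # True
print(is_valid_field_name("bad?name"))                       # False
print(is_valid_field_name("Поле1", for_event_sources=True))  # False
```

Note that uniqueness across the KUMA instance cannot be checked locally, and because names are case-sensitive, Field_name and field_name would both pass this check as distinct names.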
Editing extended event schema fields
Users with the General administrator, Tenant administrator, Tier 2 analyst, Tier 1 analyst, Junior analyst, and Manage shared resources roles can edit existing extended event schema fields.
To edit an extended event schema field:
- In the KUMA Console, in the Settings → Extended event schema fields section, click the name of the field that you want to edit.
This opens the Edit extended schema pane. This pane displays the settings of the selected field, as well as the Dependencies table with a list of resources, dashboard layouts, reports, presets, and sets of fields for finding event sources that use this field. Only resources to whose tenants you have access are displayed. If the field is used by resources to whose tenant you do not have access, such resources are not displayed in the table. Resources in the table are sorted by name.
Clicking the name of a resource or entity takes you to its page (except for dashboard resources, presets, and saved user queries).
- Make the changes you need in the available settings.
You can edit the Type and Field name settings only if the extended event schema field does not have dependencies. You can edit the Status and Description settings for any extended event schema field. However, a field with the Disabled status is still used in resource configurations that are already operational until you manually remove the field from the configuration; the field also remains available in the list of table columns in the Events section for managing old events.
Disabling an extended event schema field using the Status field requires the General administrator role.
- Click the Save button.
The extended event schema field is updated. An audit event is generated about the modification of the field.
Importing and exporting extended event schema fields
You can add multiple new extended event schema fields at once by importing them from a JSON file. You can also export all extended event schema fields with information about them to a file, for example, to propagate the list of fields to other KUMA instances to maintain resources.
Users with the General administrator, Tenant administrator, Tier 2 analyst, Tier 1 analyst, Junior analyst, and Manage shared resources roles can import and export extended event schema fields. Users with the Read shared resources role can only export extended event schema fields.
To import extended event schema fields into KUMA from a file:
- In the KUMA Console, in the Settings → Extended event schema fields section, click the Import button.
- This opens a window; in that window, select a JSON file with a list of extended event schema field objects.
Example JSON file:
[
{"kind": "SA",
"name": "<fieldName1>",
"description": "<description1>",
"disabled": false},
{"kind": "N",
"name": "<fieldName2>",
"description": "<description2>",
"disabled": false},
....
{"kind": "FA",
"name": "<fieldNameX>",
"description": "<descriptionX>",
"disabled": false}
]
When importing fields from a file, their names are checked for possible conflicts with fields of the same type. If a field with the same name and type already exists in KUMA, such fields are not imported from the file.
Extended event schema fields are imported from the file to KUMA. An audit event about the import of fields is generated, and a separate audit event is generated for each added field.
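The import file format and the conflict rule can be illustrated with a short sketch. The field names and the set of already existing fields below are hypothetical; only the kind/name/description/disabled keys follow the example file above.

```python
import json

# Hypothetical new fields; the kind/name/description/disabled keys follow
# the example import file shown above.
new_fields = [
    {"kind": "SA", "name": "CustomTags", "description": "Tag array", "disabled": False},
    {"kind": "N", "name": "CustomScore", "description": "Numeric score", "disabled": False},
]

# Assumed set of (type, name) pairs already present in the target KUMA
# instance, for illustration only.
existing = {("SA", "CustomTags")}

# Mimic the documented conflict rule: a field whose name and type already
# exist in KUMA is skipped rather than imported.
to_import = [f for f in new_fields if (f["kind"], f["name"]) not in existing]

with open("extended_fields.json", "w", encoding="utf-8") as fh:
    json.dump(to_import, fh, ensure_ascii=False, indent=2)

print([f["name"] for f in to_import])  # ['CustomScore']
```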
To export extended event schema fields to a file:
- In the KUMA Console, go to the Settings → Extended event schema fields section.
- If you want to export specific extended event schema fields:
- Select the check boxes in the first column of the table for the required fields.
You cannot select service fields.
- Click the Export selected button in the upper part of the table.
- If you want to export all extended event schema fields, click the Export all button in the upper part of the table.
A JSON file with a list of extended event schema field objects and information about them is downloaded.
Deleting extended event schema fields
Only a user with the General administrator role can delete extended event schema fields.
You can delete only those extended event schema fields that are not service fields, that have the Disabled status, and that are not used in KUMA resources and other entities (do not have dependencies). We recommend deleting extended event schema fields after enough time has passed to make sure that all events in which the field was used have been deleted from KUMA. When you delete a field, it is no longer displayed in event tips.
To delete extended event schema fields:
- In the KUMA Console, go to the Settings → Extended event schema fields section.
- Select the check boxes in the first column of the table next to one or more fields that you want to delete.
To select all fields, you can select the check box in the heading of the first column.
- Click the Delete button in the upper part of the table.
The Delete button is active only if all selected fields are disabled and have no dependencies. If at least one field is enabled or has a dependency, the button is inactive.
If you want to delete a field that is used in at least one KUMA resource (has a dependency), but you do not have access to its tenant, the Delete button is active when this field is selected, but an error is displayed when you try to delete it.
The selected fields are deleted. An audit event is generated about the deletion of the fields.
Using extended event schema fields in normalizers
When you use extended event schema fields, the general limit on the maximum size of an event that the collector can process stays the same: 4 MB. Information about the types of extended event schema fields is shown in the table below (step 6 of the instructions).
Using many unique fields of the extended event schema can reduce the performance of the system, increase the amount of disk space required for storing events, and make the information difficult to understand.
We recommend consciously choosing a minimal set of additional fields of the extended event schema that you want to use in normalizers and correlation.
To use the fields of the extended event schema:
- Open an existing normalizer or create a new normalizer.
- Specify the basic settings of the normalizer.
- Click Add row.
- For the Source setting, enter the name of the source field in the raw event.
- For the KUMA field setting, start typing the name of the extended event schema field and select the field from the drop-down list.
The extended event schema fields in the drop-down list have names in the <type>.<field name> format.
- Click the Save button to save the event normalizer.
The normalizer is saved with the selected extended event schema field.
If the data in the fields of the raw event does not match the type of the KUMA field and type conversion cannot be performed, the value is not saved during event normalization. For example, the string test cannot be written to the DeviceCustomNumber1 KUMA field, which has the Number type.
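The conversion behavior described above can be illustrated with a minimal sketch (the function and field handling are illustrative, not KUMA APIs):

```python
# Illustrative sketch of the conversion rule described above: a raw value
# is written to a typed field only if conversion succeeds; otherwise the
# value is dropped.
def normalize_number(raw_value):
    """Return an int for a Number-type field, or None if conversion fails."""
    try:
        return int(raw_value)
    except (TypeError, ValueError):
        return None  # the value is not saved, as in KUMA normalization

print(normalize_number("42"))    # 42
print(normalize_number("test"))  # None: the string cannot become a Number
```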
If you want to minimize the load on the storage server when searching events, preparing reports, and performing other operations on events in storage, use KUMA event schema fields as your first preference, extended event schema fields as your second preference, and the Extra fields as your last resort.
Enrichment in the normalizer
When creating event parsing rules in the normalizer settings window, on the Enrichment tab, you can configure the rules for adding extra data to the fields of the normalized event using enrichment rules. Enrichment rules are stored in the settings of the normalizer where they were created.
You can create enrichment rules by clicking the Add enrichment button. To delete an enrichment rule, click the delete icon next to it. Extended event schema fields can be used for event enrichment. Available enrichment rule settings are listed in the table below.
Available enrichment rule settings
Setting |
Description |
---|---|
Source kind |
Enrichment type. Depending on the selected enrichment type, you may see advanced settings that must also be completed. Required setting. |
Target field |
The KUMA event field that you want to populate with the data. Required setting. This setting is not available for the enrichment source of the Table type. |
Conditions for forwarding data to an extra normalizer
When creating additional event parsing rules, you can specify the conditions. When these conditions are met, the events are sent to the created parsing rule for processing. Conditions can be specified in the Additional event parsing window, on the Extra normalization conditions tab. This tab is not available for the basic parsing rules.
Available settings:
- Use raw event — If you want to send a raw event for extra normalization, select Yes in the Keep raw event drop-down list. The default value is No. We recommend passing a raw event to normalizers of the json and xml types. If you want to send a raw event for extra normalization to the second, third, and subsequent nesting levels, select Yes in the Keep raw event drop-down list at each nesting level.
- Field to pass into normalizer—indicates the event field if you want only events with fields configured in normalizer settings to be sent for additional parsing.
If this field is blank, the full event is sent to the extra normalizer for processing.
- Set of filters—used to define complex conditions that must be met by the events received by the normalizer.
You can use the Add condition button to add a string containing fields for identifying the condition (see below).
You can use the Add group button to add a group of filters. Group operators can be switched between AND, OR, and NOT. You can add other condition groups and individual conditions to filter groups.
You can reorder conditions and condition groups by dragging them by the drag icon; you can also delete them using the delete icon.
Filter condition settings:
- Left operand and Right operand—used to specify the values to be processed by the operator.
In the left operand, you must specify the source field of events coming into the normalizer. For example, if the eventType - DeviceEventClass mapping is configured in the Basic event parsing window, then in the Additional event parsing window on the Extra normalization conditions tab, you must specify eventType in the left operand field of the filter. Data is processed only as text strings.
- Operators:
- = – full match of the left and right operands.
- startsWith – the left operand starts with the characters specified in the right operand.
- endsWith – the left operand ends with the characters specified in the right operand.
- match – the left operand matches the regular expression (RE2) specified in the right operand.
- in – the left operand matches one of the values specified in the right operand.
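The operators listed above can be sketched as follows. This is an illustrative approximation, assuming string operands (the conditions tab processes data only as text strings); KUMA evaluates match with RE2, for which Python's re module is a close stand-in, and the comma-separated representation of the in value list is an assumption made for this sketch.

```python
import re

# Illustrative approximation of the filter operators listed above.
# Operands are treated as text strings, matching the tab's behavior.
def apply_operator(op: str, left: str, right: str) -> bool:
    if op == "=":
        return left == right
    if op == "startsWith":
        return left.startswith(right)
    if op == "endsWith":
        return left.endswith(right)
    if op == "match":
        # KUMA uses RE2; Python's re module is a close stand-in here.
        return re.fullmatch(right, left) is not None
    if op == "in":
        # Comma-separated value list: an assumption for this sketch.
        return left in right.split(",")
    raise ValueError(f"unknown operator: {op}")

print(apply_operator("startsWith", "4688", "46"))  # True
print(apply_operator("match", "4688", r"\d+"))     # True
```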
The incoming data can be converted by clicking the conversion button. This opens the Conversion window, where you can use the Add conversion button to create rules for converting the source data before any actions are performed on it. In the Conversion window, you can reorder the added rules by dragging them by the drag icon; you can also delete them using the delete icon.
Supported event sources
KUMA supports the normalization of events coming from systems listed in the "Supported event sources" table. Normalizers for these systems are included in the distribution kit.
Supported event sources
System name |
Normalizer name |
Type |
Normalizer description |
---|---|---|---|
1C EventJournal |
[OOTB] 1C EventJournal Normalizer |
xml |
Designed for processing the event log of the 1C system. The event source is the 1C log. |
1C TechJournal |
[OOTB] 1C TechJournal Normalizer |
regexp |
Designed for processing the technology event log. The event source is the 1C technology log. |
Absolute Data and Device Security (DDS) |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
AhnLab Malware Defense System (MDS) |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Ahnlab UTM |
[OOTB] Ahnlab UTM |
regexp |
Designed for processing events from the Ahnlab system. The event sources are system logs, operation logs, connections, and the IPS module. |
AhnLabs MDS |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Apache Cassandra |
[OOTB] Apache Cassandra file |
regexp |
Designed for processing events from the logs of the Apache Cassandra database version 4.0. |
Aruba ClearPass |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Atlassian Confluence |
[OOTB] Atlassian Jira Conflunce file |
regexp |
Designed for processing events of Atlassian Jira, Atlassian Confluence systems (Jira 9.12, Confluence 8.5) stored in files. |
Atlassian Jira |
[OOTB] Atlassian Jira Conflunce file |
regexp |
Designed for processing events of Atlassian Jira, Atlassian Confluence systems (Jira 9.12, Confluence 8.5) stored in files. |
Avanpost FAM |
[OOTB] Avanpost FAM syslog |
regexp |
Designed for processing events of the Avanpost Federated Access Manager (FAM) 1.9 received via syslog. |
Avanpost IDM |
[OOTB] Avanpost IDM syslog |
regexp |
Designed for processing events of the Avanpost IDM system received via syslog. |
Avaya Aura Communication Manager |
[OOTB] Avaya Aura Communication Manager syslog |
regexp |
Designed for processing some of the events received from Avaya Aura Communication Manager 7.1 via syslog. |
Avigilon Access Control Manager (ACM) |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Ayehu eyeShare |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Arbor Pravail |
[OOTB] Arbor Pravail syslog |
Syslog |
Designed for processing events of the Arbor Pravail system received via syslog. |
Aruba Aruba AOS-S |
[OOTB] Aruba Aruba AOS-S syslog |
regexp |
Designed for processing certain types of events received from Aruba network devices with Aruba AOS-S 16.10 firmware via syslog. The normalizer supports the following types of events: accounting events, ACL events, ARP protect events, authentication events, console events, loop protect events. |
Barracuda Cloud Email Security Gateway |
[OOTB] Barracuda Cloud Email Security Gateway syslog |
regexp |
Designed for processing events from Barracuda Cloud Email Security Gateway via syslog. |
Barracuda Networks NG Firewall |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Barracuda Web Security Gateway |
[OOTB] Barracuda Web Security Gateway syslog |
Syslog |
Designed for processing some of the events received from Barracuda Web Security Gateway 15.0 via syslog. |
BeyondTrust Privilege Management Console |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
BeyondTrust’s BeyondInsight |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Bifit Mitigator |
[OOTB] Bifit Mitigator Syslog |
Syslog |
Designed for processing events from the DDOS Mitigator protection system received via Syslog. |
Bloombase StoreSafe |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
BMC CorreLog |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Bricata ProAccel |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Brinqa Risk Analytics |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Broadcom Symantec Advanced Threat Protection (ATP) |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Broadcom Symantec Endpoint Protection |
[OOTB] Broadcom Symantec Endpoint Protection |
regexp |
Designed for processing events from the Symantec Endpoint Protection system. |
Broadcom Symantec Endpoint Protection Mobile |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Broadcom Symantec Threat Hunting Center |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Canonical LXD |
[OOTB] Canonical LXD syslog |
Syslog |
Designed for processing events received via syslog from the Canonical LXD system version 5.18. |
Checkpoint |
[OOTB] Checkpoint syslog |
Syslog |
[OOTB] Checkpoint syslog — designed for processing events received from the Checkpoint R81 firewall via the Syslog protocol. [OOTB] Checkpoint Syslog CEF by CheckPoint — designed for processing events in CEF format received from the Checkpoint firewall via the Syslog protocol. |
Cisco Access Control Server (ACS) |
[OOTB] Cisco ACS syslog |
regexp |
Designed for processing events of the Cisco Access Control Server (ACS) system received via Syslog. |
Cisco ASA |
[OOTB] Cisco ASA and IOS syslog |
Syslog |
Designed for certain events of Cisco ASA and Cisco IOS devices received via syslog. |
Cisco Email Security Appliance (WSA) |
[OOTB] Cisco WSA AccessFile |
regexp |
Designed for processing the event log of the Cisco Email Security Appliance (WSA) proxy server, the access.log file. |
Cisco Firepower Threat Defense |
[OOTB] Cisco ASA and IOS syslog |
Syslog |
Designed for processing events for network devices: Cisco ASA, Cisco IOS, Cisco Firepower Threat Defense (version 7.2) received via syslog. |
Cisco Identity Services Engine (ISE) |
[OOTB] Cisco ISE syslog |
regexp |
Designed for processing events of the Cisco Identity Services Engine (ISE) system received via Syslog. |
Cisco IOS |
[OOTB] Cisco ASA and IOS syslog |
Syslog |
Designed for certain events of Cisco ASA and Cisco IOS devices received via syslog. |
Cisco Netflow v5 |
[OOTB] NetFlow v5 |
netflow5 |
Designed for processing events from Cisco Netflow version 5. |
Cisco NetFlow v9 |
[OOTB] NetFlow v9 |
netflow9 |
Designed for processing events from Cisco Netflow version 9. |
Cisco Prime |
[OOTB] Cisco Prime syslog |
Syslog |
Designed for processing events of the Cisco Prime system version 3.10 received via syslog. |
Cisco Secure Email Gateway (SEG) |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Cisco Secure Firewall Management Center |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Cisco WLC |
[OOTB] Cisco WLC syslog |
regexp |
Normalizer for some types of events received from Cisco WLC network devices (2500 Series Wireless Controllers, 5500 Series Wireless Controllers, 8500 Series Wireless Controllers, Flex 7500 Series Wireless Controllers) via Syslog. |
Cisco WSA |
[OOTB] Cisco WSA file |
regexp |
Designed for processing the event log of the Cisco WSA 14.2, 15.0 proxy server. The normalizer supports processing events generated using the template: %t %e %a %w/%h %s %2r %A %H/%d %c %D %Xr %?BLOCK_SUSPECT_USER_AGENT,MONITOR_SUSPECT_USER_AGENT?%<User-Agent:%!%-%. %) %q %k %u %m. |
Citrix NetScaler |
[OOTB] Citrix NetScaler syslog |
regexp |
Designed for processing events received from the Citrix NetScaler 13.7 load balancer, Citrix ADC NS13.0. |
Claroty Continuous Threat Detection |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
CloudPassage Halo |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Codemaster Mirada |
[OOTB] Codemaster Mirada syslog |
Syslog |
Designed for processing events of the Codemaster Mirada system received via syslog. |
CollabNet Subversion Edge |
[OOTB] CollabNet Subversion Edge syslog |
Syslog |
Designed for processing events received from the Subversion Edge (version 6.0.2) system via syslog. |
CommuniGate Pro |
[OOTB] CommuniGate Pro |
regexp |
Designed to process events of the CommuniGate Pro 6.1 system sent by the KUMA agent via TCP. |
Corvil Network Analytics |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Cribl Stream |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
CrowdStrike Falcon Host |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
CyberArk Privileged Threat Analytics (PTA) |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
CyberPeak Spektr |
[OOTB] CyberPeak Spektr syslog |
Syslog |
Designed for processing events of the CyberPeak Spektr system version 3 received via syslog. |
Cyberprotect Cyber Backup |
[OOTB] Cyberprotect Cyber Backup SQL |
sql |
Designed for processing events received by the connector from the database of the Cyber Backup system (version 16.5). |
DeepInstinct |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Delinea Secret Server |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Digital Guardian Endpoint Threat Detection |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
BIND DNS server |
[OOTB] BIND Syslog [OOTB] BIND file |
Syslog regexp |
[OOTB] BIND Syslog is designed for processing events of the BIND DNS server received via Syslog. [OOTB] BIND file is designed for processing event logs of the BIND DNS server. |
Docsvision |
[OOTB] Docsvision syslog |
Syslog |
Designed for processing audit events received from the Docsvision system via syslog. |
Dovecot |
[OOTB] Dovecot Syslog |
Syslog |
Designed for processing events of the Dovecot mail server received via Syslog. The event source is POP3/IMAP logs. |
Dragos Platform |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Dr.Web Enterprise Security Suite |
[OOTB] Syslog-CEF |
syslog |
Designed for processing Dr.Web Enterprise Security Suite 13.0.1 events in the CEF format. |
EclecticIQ Intelligence Center |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Edge Technologies AppBoard and enPortal |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Eltex ESR |
[OOTB] Eltex ESR syslog |
Syslog |
Designed to process part of the events received from Eltex ESR network devices via syslog. |
Eltex MES |
[OOTB] Eltex MES syslog |
regexp |
Designed for processing events received from Eltex MES network devices via syslog (supported device models: MES14xx, MES24xx, MES3708P). |
Eset Protect |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Extreme Networks Summit Wireless Controller |
[OOTB] Extreme Networks Summit Wireless Controller |
regexp |
Normalizer for certain audit events of the Extreme Networks Summit Wireless Controller (model: WM3700, firmware version: 5.5.5.0-018R). |
Factor-TS Dionis NX |
[OOTB] Factor-TS Dionis NX syslog |
regexp |
Designed for processing some audit events received from the Dionis-NX system (version 2.0.3) via syslog. |
F5 Advanced Web Application Firewall |
[OOTB] F5 Advanced Web Application Firewall syslog |
regexp |
Designed for processing audit events received from the F5 Advanced Web Application Firewall system via syslog. |
F5 BigIP Advanced Firewall Manager (AFM) |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
FFRI FFR yarai |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
FireEye CM Series |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
FireEye Malware Protection System |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Forcepoint NGFW |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Forcepoint SMC |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Fortinet FortiAnalyzer |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Fortinet FortiGate |
[OOTB] Syslog-CEF |
regexp |
Designed for processing events in the CEF format. |
Fortinet FortiGate |
[OOTB] FortiGate syslog KV |
Syslog |
Designed for processing events from FortiGate firewalls (version 7.0) via syslog. The event source is FortiGate logs in key-value format. |
Fortinet Fortimail |
[OOTB] Fortimail |
regexp |
Designed for processing events of the FortiMail email protection system. The event source is Fortimail mail system logs. |
Fortinet FortiSOAR |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
FreeBSD |
[OOTB] FreeBSD file |
regexp |
Designed for processing events of the FreeBSD operating system (version 13.1-RELEASE) stored in a file. The normalizer can process files produced by the praudit utility. Example: praudit -xl /var/audit/AUDITFILE >> file_name.log |
FreeIPA |
[OOTB] FreeIPA |
json |
Designed for processing events from the FreeIPA system. The event source is Free IPA directory service logs. |
FreeRADIUS |
[OOTB] FreeRADIUS syslog |
Syslog |
Designed for processing events of the FreeRADIUS system received via Syslog. The normalizer supports events from FreeRADIUS version 3.0. |
GajShield Firewall |
[OOTB] GajShield Firewall syslog |
regexp |
Designed for processing part of the events received from the GajShield Firewall version GAJ_OS_Bulwark_Firmware_v4.35 via syslog. |
Garda Monitor |
[OOTB] Garda Monitor syslog |
syslog |
Designed for processing events of the Garda Monitor system version 3.4 received via syslog. |
Gardatech GardaDB |
[OOTB] Gardatech GardaDB syslog |
Syslog |
Designed for processing events of the Gardatech Perimeter system version 5.3, 5.4 received via syslog. |
Gardatech Perimeter |
[OOTB] Gardatech Perimeter syslog |
Syslog |
Designed for processing events of the Gardatech Perimeter system version 5.3 received via syslog. |
Gigamon GigaVUE |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
HAProxy |
[OOTB] HAProxy syslog |
Syslog |
Designed for processing logs of the HAProxy system. The normalizer supports events of the HTTP log, TCP log, Error log type from HAProxy version 2.8. |
HashiCorp Vault |
[OOTB] HashiCorp Vault json |
json |
Designed for processing events received from the HashiCorp Vault system version 1.16 in JSON format. The normalizer package is available in KUMA 3.0 and later. |
Huawei Eudemon |
[OOTB] Huawei Eudemon |
regexp |
Designed for processing events from Huawei Eudemon firewalls. The event source is logs of Huawei Eudemon firewalls. |
Huawei iManager 2000 |
[OOTB] Huawei iManager 2000 file |
regexp |
This normalizer supports processing some of the events of the Huawei iManager 2000 system, which are stored in the \client\logs\rpc and \client\logs\deploy\ossDeployment files. |
Huawei USG |
[OOTB] Huawei USG Basic |
Syslog |
Designed for processing events received from Huawei USG security gateways via Syslog. |
Huawei VRP |
[OOTB] Huawei VRP syslog |
regexp |
Designed for processing some types of Huawei VRP system events received via syslog. The normalizer makes a partial selection of event data. The normalizer is available in KUMA 3.0 and later. |
IBM InfoSphere Guardium |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Ideco UTM |
[OOTB] Ideco UTM Syslog |
Syslog |
Designed for processing events received from Ideco UTM via Syslog. The normalizer supports events of Ideco UTM 14.7, 14.10, 17.5. |
Illumio Policy Compute Engine (PCE) |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Imperva Incapsula |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Imperva SecureSphere |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Indeed Access Manager |
[OOTB] Indeed Access Manager syslog |
Syslog |
Designed for processing events received from the Indeed Access Manager system via syslog. |
Indeed PAM |
[OOTB] Indeed PAM syslog |
Syslog |
Designed for processing events of Indeed PAM (Privileged Access Manager) version 2.6. |
Indeed SSO |
[OOTB] Indeed SSO xml |
xml |
Designed for processing events of the Indeed SSO (Single Sign-On) system. The normalizer supports KUMA 2.1.3 and later. |
InfoWatch Person Monitor |
[OOTB] InfoWatch Person Monitor SQL |
sql |
Designed for processing system audit events from the MS SQL database of InfoWatch Person Monitor 10.2. |
InfoWatch Traffic Monitor |
[OOTB] InfoWatch Traffic Monitor SQL |
sql |
Designed for processing events received by the connector from the database of the InfoWatch Traffic Monitor system. |
Intralinks VIA |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
IPFIX |
[OOTB] IPFIX |
ipfix |
Designed for processing events in the IP Flow Information Export (IPFIX) format. |
Juniper JUNOS |
[OOTB] Juniper - JUNOS |
regexp |
Designed for processing audit events received from Juniper network devices. |
Kaspersky Anti Targeted Attack (KATA) |
[OOTB] KATA |
cef |
Designed for processing alerts or events from the Kaspersky Anti Targeted Attack activity log. |
Kaspersky CyberTrace |
[OOTB] CyberTrace |
regexp |
Designed for processing Kaspersky CyberTrace events. |
Kaspersky Endpoint Detection and Response (KEDR) |
[OOTB] KEDR telemetry |
json |
Designed for processing Kaspersky EDR telemetry tagged by KATA. The event source is Kafka, the EnrichedEventTopic topic. |
KICS/KATA |
[OOTB] KICS4Net v2.x |
cef |
Designed for processing KICS/KATA version 2.x events. |
KICS/KATA |
[OOTB] KICS4Net v3.x |
Syslog |
Designed for processing KICS/KATA version 3.x events. |
KICS/KATA 4.2 |
[OOTB] Kaspersky Industrial CyberSecurity for Networks 4.2 syslog |
Syslog |
Designed for processing events received from the KICS/KATA 4.2 system via syslog. |
Kaspersky KISG |
[OOTB] Kaspersky KISG syslog |
Syslog |
Designed for processing events received from Kaspersky IoT Secure Gateway (KISG) 3.0 via syslog. |
Open Single Management Platform |
[OOTB] KSC |
cef |
Designed for processing Open Single Management Platform events received in CEF format. |
Open Single Management Platform |
[OOTB] KSC from SQL |
sql |
Designed for processing events received by the connector from the database of the Open Single Management Platform system. |
Kaspersky Security for Linux Mail Server (KLMS) |
[OOTB] KLMS Syslog CEF |
Syslog |
Designed for processing events from Kaspersky Security for Linux Mail Server in CEF format via Syslog. |
Kaspersky Security for MS Exchange SQL |
[OOTB] Kaspersky Security for MS Exchange SQL |
sql |
Normalizer for Kaspersky Security for Exchange (KSE) 9.0 events stored in the database. |
Kaspersky Secure Mail Gateway (KSMG) |
[OOTB] KSMG Syslog CEF |
Syslog |
Designed for processing events of Kaspersky Secure Mail Gateway version 2.0 in CEF format via Syslog. |
Kaspersky Web Traffic Security (KWTS) |
[OOTB] KWTS Syslog CEF |
Syslog |
Designed for processing events received from Kaspersky Web Traffic Security in CEF format via Syslog. |
Kaspersky Web Traffic Security (KWTS) |
[OOTB] KWTS (KV) |
Syslog |
Designed for processing events in Kaspersky Web Traffic Security for Key-Value format. |
Kemptechnologies LoadMaster |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Kerio Control |
[OOTB] Kerio Control |
Syslog |
Designed for processing events of Kerio Control firewalls. |
KUMA |
[OOTB] KUMA forwarding |
json |
Designed for processing events forwarded from KUMA. |
Libvirt |
[OOTB] Libvirt syslog |
Syslog |
Designed for processing events of Libvirt version 8.0.0 received via syslog. |
Lieberman Software ERPM |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Linux |
[OOTB] Linux audit and iptables Syslog v1 |
Syslog |
Designed for processing events of the Linux operating system. This normalizer does not support processing events in the "ENRICHED" format. |
MariaDB |
[OOTB] MariaDB Audit Plugin Syslog |
Syslog |
Designed for processing events coming from the MariaDB audit plugin over Syslog. |
Microsoft 365 (Office 365) |
[OOTB] Microsoft Office 365 json |
json |
This normalizer is designed for processing Microsoft 365 events. |
Microsoft Active Directory Federation Service (AD FS) |
[OOTB] Microsoft Products for KUMA 3 |
xml |
Designed for processing Microsoft AD FS events. The [OOTB] Microsoft Products for KUMA 3 normalizer supports this event source in KUMA 3.0.1 and later versions. |
Microsoft Active Directory Domain Service (AD DS) |
[OOTB] Microsoft Products for KUMA 3 |
xml |
Designed for processing Microsoft AD DS events. The [OOTB] Microsoft Products for KUMA 3 normalizer supports this event source in KUMA 3.0.1 and later versions. |
Microsoft Defender |
[OOTB] Microsoft Products, [OOTB] Microsoft Products for KUMA 3 |
xml |
Designed for processing Microsoft Defender events. |
Microsoft DHCP |
[OOTB] MS DHCP file |
regexp |
Designed for processing Microsoft DHCP server events. The event source is Windows DHCP server logs. |
Microsoft DNS |
[OOTB] DNS Windows |
regexp |
Designed for processing Microsoft DNS server events. The event source is Windows DNS server logs. |
Microsoft Exchange |
[OOTB] Exchange CSV |
csv |
Designed for processing the event log of the Microsoft Exchange system. The event source is Exchange server MTA logs. |
Microsoft Hyper-V |
[OOTB] Microsoft Products, [OOTB] Microsoft Products for KUMA 3 |
xml |
Designed for processing Microsoft Windows events. The event source is Microsoft Hyper-V logs: Microsoft-Windows-Hyper-V-VMMS-Admin, Microsoft-Windows-Hyper-V-Compute-Operational, Microsoft-Windows-Hyper-V-Hypervisor-Operational, Microsoft-Windows-Hyper-V-StorageVSP-Admin, Microsoft-Windows-Hyper-V-Hypervisor-Admin, Microsoft-Windows-Hyper-V-VMMS-Operational, Microsoft-Windows-Hyper-V-Compute-Admin. |
Microsoft IIS |
[OOTB] IIS Log File Format |
regexp |
The normalizer processes events in the format described at https://learn.microsoft.com/en-us/windows/win32/http/iis-logging. The event source is Microsoft IIS logs. |
Microsoft Network Policy Server (NPS) |
[OOTB] Microsoft Products, [OOTB] Microsoft Products for KUMA 3 |
xml |
The normalizer is designed for processing events of the Microsoft Windows operating system. The event source is Network Policy Server events. |
Microsoft SCCM |
[OOTB] Microsoft SCCM file |
regexp |
Designed for processing events of the Microsoft SCCM system version 2309. The normalizer supports processing of some of the events stored in the AdminService.log file. |
Microsoft SharePoint Server |
[OOTB] Microsoft SharePoint Server diagnostic log file |
regexp |
The normalizer supports processing part of Microsoft SharePoint Server 2016 events stored in diagnostic logs. |
Microsoft Sysmon |
[OOTB] Microsoft Products, [OOTB] Microsoft Products for KUMA 3 |
xml |
This normalizer is designed for processing Microsoft Sysmon module events. |
Microsoft Windows 7, 8.1, 10, 11 |
[OOTB] Microsoft Products, [OOTB] Microsoft Products for KUMA 3, [OOTB] Microsoft Products via KES WIN |
xml |
Designed for processing part of events from the Security, System, Application logs of the Microsoft Windows operating system. The "[OOTB] Microsoft Products via KES WIN" normalizer supports a limited number of audit event types sent to KUMA by Kaspersky Endpoint Security 12.6 for Windows via Syslog. |
Microsoft PowerShell |
[OOTB] Microsoft Products, [OOTB] Microsoft Products for KUMA 3, [OOTB] Microsoft Products via KES WIN |
xml |
Designed for processing Microsoft Windows PowerShell log events. The "[OOTB] Microsoft Products via KES WIN" normalizer supports a limited number of audit event types sent to KUMA by Kaspersky Endpoint Security 12.6 for Windows via Syslog. |
Microsoft SQL Server |
[Deprecated][OOTB] Microsoft SQL Server xml |
xml |
Designed for processing events of MS SQL Server versions 2008, 2012, 2014, 2016. The normalizer supports KUMA 2.1.3 and later. |
Microsoft Windows Remote Desktop Services |
[OOTB] Microsoft Products, [OOTB] Microsoft Products for KUMA 3, [OOTB] Microsoft Products via KES WIN |
xml |
Designed for processing Microsoft Windows events. The event source is the log at Applications and Services Logs - Microsoft - Windows - TerminalServices-LocalSessionManager - Operational. The "[OOTB] Microsoft Products via KES WIN" normalizer supports a limited number of audit event types sent to KUMA by Kaspersky Endpoint Security 12.6 for Windows via Syslog. |
Microsoft Windows Server 2008 R2, 2012 R2, 2016, 2019, 2022 |
[OOTB] Microsoft Products, [OOTB] Microsoft Products for KUMA 3, [OOTB] Microsoft Products via KES WIN |
xml |
Designed for processing part of events from the Security, System logs of the Microsoft Windows Server operating system. The "[OOTB] Microsoft Products via KES WIN" normalizer supports a limited number of audit event types sent to KUMA by Kaspersky Endpoint Security 12.6 for Windows via Syslog. |
Microsoft Windows XP/2003 |
[OOTB] SNMP. Windows {XP/2003} |
json |
Designed for processing events received via the SNMP protocol from workstations and servers running the Microsoft Windows XP or Microsoft Windows 2003 operating systems. |
Microsoft WSUS |
[OOTB] Microsoft WSUS file |
regexp |
Designed for processing Microsoft WSUS server events stored in a file. |
MikroTik |
[OOTB] MikroTik syslog |
regexp |
Designed for processing events received from MikroTik devices via Syslog. |
Minerva Labs Minerva EDR |
[OOTB] Minerva EDR |
regexp |
Designed for processing events from the Minerva EDR system. |
MongoDb |
[OOTB] MongoDb syslog |
Syslog |
Designed for processing part of events received from the MongoDB 7.0 database via syslog. |
Multifactor Radius Server for Windows |
[OOTB] Multifactor Radius Server for Windows syslog |
Syslog |
Designed for processing events received from the Multifactor Radius Server 1.0.2 for Microsoft Windows via Syslog. |
MySQL 5.7 |
[OOTB] MariaDB Audit Plugin Syslog |
Syslog |
Designed for processing events coming from the MariaDB audit plugin over Syslog. |
NetApp ONTAP (AFF, FAS) |
[OOTB] NetApp syslog, [OOTB] NetApp file |
regexp |
[OOTB] NetApp syslog — designed for processing events of the NetApp system (version — ONTAP 9.12) received via syslog. [OOTB] NetApp file — designed for processing events of the NetApp system (version — ONTAP 9.12) stored in a file. |
NetApp SnapCenter |
[OOTB] NetApp SnapCenter file |
regexp |
Designed for processing part of the events of the NetApp SnapCenter system (SnapCenter Server 5.0). The normalizer supports processing some of the events from the C:\Program Files\NetApp\SnapCenter WebApp\App_Data\log\SnapManagerWeb.*.log file. Types of supported events in xml format from the SnapManagerWeb.*.log file: SmDiscoverPluginRequest, SmDiscoverPluginResponse, SmGetDomainsResponse, SmGetHostPluginStatusRequest, SmGetHostPluginStatusResponse, SmGetHostRequest, SmGetHostResponse, SmRequest. The normalizer also supports processing some of the events from the C:\Program Files\NetApp\SnapCenter WebApp\App_Data\log\audit.log file. |
NetIQ Identity Manager |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
NetScout Systems nGenius Performance Manager |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Netskope Cloud Access Security Broker |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Netwrix Auditor |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Nextcloud |
[OOTB] Nextcloud syslog |
Syslog |
Designed for processing events of Nextcloud version 26.0.4 received via syslog. The normalizer does not save information from the Trace field. |
Nexthink Engine |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Nginx |
[OOTB] Nginx regexp |
regexp |
Designed for processing Nginx web server log events. |
NIKSUN NetDetector |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
One Identity Privileged Session Management |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
OpenLDAP |
[OOTB] OpenLDAP |
regexp |
Designed for line-by-line processing of some events of the OpenLDAP 2.5 system in an auditlog.ldif file. |
OpenVPN |
[OOTB] OpenVPN file |
regexp |
Designed for processing the event log of the OpenVPN system. |
Oracle |
[OOTB] Oracle Audit Trail |
sql |
Designed for processing database audit events received by the connector directly from an Oracle database. |
OrionSoft Termit |
[OOTB] OrionSoft Termit syslog |
Syslog |
Designed for processing events received from the OrionSoft Termit 2.2 system via syslog. |
Orion soft zVirt |
[OOTB] Orion Soft zVirt syslog |
regexp |
Designed for processing events of the Orion soft zVirt 3.1 virtualization system. |
PagerDuty |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Palo Alto Cortex Data Lake |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Palo Alto Networks NGFW |
[OOTB] PA-NGFW (Syslog-CSV) |
Syslog |
Designed for processing events from Palo Alto Networks firewalls received via Syslog in CSV format. |
Palo Alto Networks PANOS |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Parsec ParsecNet |
[OOTB] Parsec ParsecNet |
sql |
Designed for processing events received by the connector from the database of the Parsec ParsecNet 3 system. |
Passwork |
[OOTB] Passwork syslog |
Syslog |
Designed for processing events received from the Passwork version 050219 system via Syslog. |
Penta Security WAPPLES |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Positive Technologies ISIM |
[OOTB] PTsecurity ISIM |
regexp |
Designed for processing events from the PT Industrial Security Incident Manager system. |
Positive Technologies Sandbox |
[OOTB] PTsecurity Sandbox |
regexp |
Designed for processing events of the PT Sandbox system. |
Positive Technologies Web Application Firewall |
[OOTB] PTsecurity WAF |
Syslog |
Designed for processing events from the Positive Technologies Web Application Firewall system. |
Postfix |
[OOTB] Postfix syslog |
regexp |
The [OOTB] Postfix package contains a set of resources for processing Postfix 3.6 events. It supports processing syslog events received over TCP. The package is available for KUMA 3.0 and newer versions. |
PostgreSQL pgAudit |
[OOTB] PostgreSQL pgAudit Syslog |
Syslog |
Designed for processing events of the pgAudit audit plug-in for the PostgreSQL database received via Syslog. |
PowerDNS |
[OOTB] PowerDNS syslog |
Syslog |
Designed for processing events of PowerDNS Authoritative Server 4.5 received via Syslog. |
Proofpoint Insider Threat Management |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Proxmox |
[OOTB] Proxmox file |
regexp |
Designed for processing events of the Proxmox system version 7.2-3 stored in a file. The normalizer supports processing of events in access and pveam logs. |
PT NAD |
[OOTB] PT NAD json |
json |
Designed for processing events coming from PT NAD in json format. This normalizer supports events from PT NAD version 11.1, 11.0. |
QEMU - hypervisor logs |
[OOTB] QEMU - Hypervisor file |
regexp |
Designed for processing events of the QEMU hypervisor stored in a file. QEMU 6.2.0 and Libvirt 8.0.0 are supported. |
QEMU - virtual machine logs |
[OOTB] QEMU - Virtual Machine file |
regexp |
Designed for processing events from logs of virtual machines of the QEMU hypervisor version 6.2.0, stored in a file. |
Radware DefensePro AntiDDoS |
[OOTB] Radware DefensePro AntiDDoS |
Syslog |
Designed for processing events from the DDOS Mitigator protection system received via Syslog. |
Reak Soft Blitz Identity Provider |
[OOTB] Reak Soft Blitz Identity Provider file |
regexp |
Designed for processing events of the Reak Soft Blitz Identity Provider system version 5.16, stored in a file. |
RedCheck Desktop |
[OOTB] RedCheck Desktop file |
regexp |
Designed for processing logs of the RedCheck Desktop 2.6 system stored in a file. |
RedCheck WEB |
[OOTB] RedCheck WEB file |
regexp |
Designed for processing logs of the RedCheck Web 2.6 system stored in files. |
RED SOFT RED ADM |
[OOTB] RED SOFT RED ADM syslog |
regexp |
Designed for processing events received from the RED ADM system (RED ADM: Industrial edition 1.1) via syslog. The normalizer supports processing: - Management subsystem events - Controller events |
ReversingLabs N1000 Appliance |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Rubicon Communications pfSense |
[OOTB] pfSense Syslog |
Syslog |
Designed for processing events from the pfSense firewall received via Syslog. |
Rubicon Communications pfSense |
[OOTB] pfSense w/o hostname |
Syslog |
Designed for processing events from the pfSense firewall. The Syslog header of these events does not contain a hostname. |
SailPoint IdentityIQ |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
SecurityCode Continent 4 |
[OOTB] SecurityCode Continent 4 syslog |
regexp |
Designed for processing events of the SecurityCode Continent system version 4 received via syslog. |
Sendmail |
[OOTB] Sendmail syslog |
Syslog |
Designed for processing events of Sendmail version 8.15.2 received via syslog. |
SentinelOne |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Skype for Business |
[OOTB] Microsoft Products for KUMA 3 |
xml |
Designed for processing some of the events from the log of the Skype for Business system, the Lync Server log. |
Snort |
[OOTB] Snort 3 json file |
json |
Designed for processing events of Snort version 3 in JSON format. |
Sonicwall TZ |
[OOTB] Sonicwall TZ Firewall |
Syslog |
Designed for processing events received via Syslog from the SonicWall TZ firewall. |
SolarWinds DameWare MRC |
[OOTB] SolarWinds DameWare MRC xml |
xml |
This normalizer supports processing some of the DameWare Mini Remote Control (MRC) 7.5 events stored in the Application log of Windows. The normalizer processes events generated by the "dwmrcs" provider. |
Sophos Firewall |
[OOTB] Sophos Firewall syslog |
regexp |
Designed for processing events received from Sophos Firewall 20 via syslog. |
Sophos XG |
[OOTB] Sophos XG |
regexp |
Designed for processing events from the Sophos XG firewall. |
Squid |
[OOTB] Squid access Syslog |
Syslog |
Designed for processing events of the Squid proxy server received via the Syslog protocol. |
Squid |
[OOTB] Squid access.log file |
regexp |
Designed for processing log events from the Squid proxy server. The event source is the access.log log. |
S-Terra VPN Gate |
[OOTB] S-Terra |
Syslog |
Designed for processing events from S-Terra VPN Gate devices. |
Suricata |
[OOTB] Suricata json file |
json |
This package contains a normalizer for Suricata 7.0.1 events stored in a JSON file. The normalizer supports processing the following event types: flow, anomaly, alert, dns, http, ssl, tls, ftp, ftp_data, ftp, smb, rdp, pgsql, modbus, quic, dhcp, bittorrent_dht, rfb. |
ThreatConnect Threat Intelligence Platform |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
ThreatQuotient |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Tionix Cloud Platform |
[OOTB] Tionix Cloud Platform syslog |
Syslog |
Designed for processing events of the Tionix Cloud Platform system version 2.9 received via syslog. The normalizer makes a partial selection of event data. The normalizer is available in KUMA 3.0 and later. |
Tionix VDI |
[OOTB] Tionix VDI file |
regexp |
This normalizer supports processing some of the Tionix VDI system (version 2.8) events stored in the tionix_lntmov.log file. |
TrapX DeceptionGrid |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Trend Micro Control Manager |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Trend Micro Deep Security |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Trend Micro NGFW |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Trustwave Application Security DbProtect |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Unbound |
[OOTB] Unbound Syslog |
Syslog |
Designed for processing events from the Unbound DNS server received via Syslog. |
UserGate |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format received from the UserGate system via Syslog. |
Varonis DatAdvantage |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Veriato 360 |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
ViPNet TIAS |
[OOTB] Vipnet TIAS syslog |
Syslog |
Designed for processing events of ViPNet TIAS 3.8 received via Syslog. |
VMware ESXi |
[OOTB] VMware ESXi syslog |
regexp |
Designed for processing VMware ESXi events (support for a limited number of events from ESXi versions 5.5, 6.0, 6.5, 7.0) received via Syslog. |
VMware Horizon |
[OOTB] VMware Horizon - Syslog |
Syslog |
Designed for processing events received from the VMware Horizon 2106 system via Syslog. |
VMware Carbon Black EDR |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Vormetric Data Security Manager |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Votiro Disarmer for Windows |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Wallix AdminBastion |
[OOTB] Wallix AdminBastion syslog |
regexp |
Designed for processing events received from the Wallix AdminBastion system via Syslog. |
WatchGuard - Firebox |
[OOTB] WatchGuard Firebox |
Syslog |
Designed for processing WatchGuard Firebox events received via Syslog. |
Webroot BrightCloud |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Windchill FRACAS |
[OOTB] PTC Winchill Fracas |
regexp |
Designed for processing events of the Windchill FRACAS failure registration system. |
Yandex Browser corporate |
[OOTB] Yandex Browser |
json |
Designed for processing events received from the corporate version of Yandex Browser 23 or 24.4. |
Yandex Cloud |
[OOTB] Yandex Cloud |
regexp |
Designed for processing part of Yandex Cloud audit events. The normalizer supports processing audit log events of the configuration level: IAM (Yandex Identity and Access Management), Compute (Yandex Compute Cloud), Network (Yandex Virtual Private Cloud), Storage (Yandex Object Storage), Resourcemanager (Yandex Resource Manager). |
Zabbix |
[OOTB] Zabbix SQL |
sql |
Designed for processing events of Zabbix 6.4. |
Zecurion DLP |
[OOTB] Zecurion DLP syslog |
regexp |
Designed for processing events of the Zecurion DLP system version 12.0 received via syslog. |
ZEEK IDS |
[OOTB] ZEEK IDS json file |
json |
Designed for processing logs of the ZEEK IDS system in JSON format. The normalizer supports events from ZEEK IDS version 1.8. |
Zettaset BDEncrypt |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
Zscaler Nanolog Streaming Service (NSS) |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format. |
IT-Bastion – SKDPU |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format received from the IT-Bastion SKDPU system via Syslog. |
A-Real Internet Control Server (ICS) |
[OOTB] A-real IKS syslog |
regexp |
Designed for processing events of the A-Real Internet Control Server (ICS) system received via Syslog. The normalizer supports events from A-Real ICS version 7.0 and later. |
Apache web server |
[OOTB] Apache HTTP Server file |
regexp |
Designed for processing Apache HTTP Server 2.4 events stored in a file. The normalizer supports processing of events from the Access log in the Common or Combined Log formats, as well as the Error log. Expected format of the Error log events: "[%t] [%-m:%l] [pid %P:tid %T] [server\ %v] [client\ %a] %E: %M;\ referer\ %-{Referer}i" |
Apache web server |
[OOTB] Apache HTTP Server syslog |
Syslog |
Designed for processing events of the Apache HTTP Server received via syslog. The normalizer supports processing of Apache HTTP Server 2.4 events from the Access log in the Common or Combined Log format, as well as the Error log. Expected format of the Error log events: "[%t] [%-m:%l] [pid %P:tid %T] [server\ %v] [client\ %a] %E: %M;\ referer\ %-{Referer}i" |
Lighttpd web server |
[OOTB] Lighttpd syslog |
Syslog |
Designed for processing Access events of the Lighttpd system received via syslog. The normalizer supports processing of Lighttpd version 1.4 events. Expected format of Access log events: $remote_addr $http_request_host_name $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" |
IVK Kolchuga-K |
[OOTB] Kolchuga-K Syslog |
Syslog |
Designed for processing events from the IVK Kolchuga-K system, version LKNV.466217.002, via Syslog. |
infotecs ViPNet IDS |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format received from the infotecs ViPNet IDS system via Syslog. |
infotecs ViPNet Coordinator |
[OOTB] VipNet Coordinator Syslog |
Syslog |
Designed for processing events from the ViPNet Coordinator system received via Syslog. |
Kod Bezopasnosti — Continent |
[OOTB][regexp] Continent IPS/IDS & TLS |
regexp |
Designed for processing events of the Continent IPS/IDS device log. |
Kod Bezopasnosti — Continent |
[OOTB] Continent SQL |
sql |
Designed for getting events of the Continent system from the database. |
Kod Bezopasnosti SecretNet 7 |
[OOTB] SecretNet SQL |
sql |
Designed for processing events received by the connector from the database of the SecretNet system. |
Confident - Dallas Lock |
[OOTB] Confident Dallas Lock |
regexp |
Designed for processing events from the Dallas Lock 8 information protection system. |
CryptoPro NGate |
[OOTB] Ngate Syslog |
Syslog |
Designed for processing events received from the CryptoPro NGate system via Syslog. |
H3C (Huawei-3Com) routers |
[OOTB] H3C Routers syslog |
regexp |
Normalizer for some types of events received from H3C (Huawei-3Com) SR6600 network devices (Comware 7 firmware) via Syslog. The normalizer supports the "standard" event format (RFC 3164-compliant format). |
NT Monitoring and Analytics |
[OOTB] Syslog-CEF |
Syslog |
Designed for processing events in the CEF format received from the NT Monitoring and Analytics system via Syslog. |
BlueCoat proxy server |
[OOTB] BlueCoat Proxy v0.2 |
regexp |
Designed to process BlueCoat proxy server events. The event source is the BlueCoat proxy server event log. |
SKDPU NT Access Gateway |
[OOTB] Bastion SKDPU-GW |
Syslog |
Designed for processing events of the SKDPU NT Access gateway system received via Syslog. |
Solar Dozor |
[OOTB] Solar Dozor Syslog |
Syslog |
Designed for processing events received from the Solar Dozor system version 7.9 via Syslog. The normalizer supports custom format events and does not support CEF format events. |
- |
[OOTB] Syslog header |
Syslog |
Designed for processing events received via Syslog. The normalizer parses the header of the Syslog event; the message field of the event is not parsed. If necessary, you can parse the message field using other normalizers. |
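As a rough illustration of what a header-only normalizer does, the sketch below splits an RFC 3164-style syslog line into priority, timestamp, and host while leaving the message field as an opaque string. This is an illustrative approximation, not the KUMA implementation; the regular expression and field names are assumptions:

```python
import re

# Rough RFC 3164 shape: "<PRI>Mmm dd hh:mm:ss host message..."
SYSLOG_HEADER = re.compile(
    r"^<(?P<pri>\d{1,3})>"
    r"(?P<timestamp>[A-Z][a-z]{2} [ \d]\d \d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) "
    r"(?P<message>.*)$"
)

def parse_syslog_header(line: str) -> dict:
    """Parse only the syslog header; keep the message field unparsed."""
    m = SYSLOG_HEADER.match(line)
    if not m:
        return {"message": line}  # no header: pass the raw line through
    fields = m.groupdict()
    pri = int(fields.pop("pri"))
    # PRI encodes facility and severity: PRI = facility * 8 + severity
    fields["facility"], fields["severity"] = divmod(pri, 8)
    return fields

event = parse_syslog_header("<34>Oct 11 22:14:15 mymachine su: 'su root' failed")
```

A downstream normalizer could then be applied to `event["message"]` alone, which is the chaining the note above describes.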
Aggregation rules
Aggregation rules let you combine repetitive events of the same type and replace them with one common event. Aggregation rules support fields of the standard KUMA event schema as well as fields of the extended event schema. In this way, you can reduce the number of similar events sent to the storage and/or the correlator, reduce the workload on services, and conserve data storage space and licensing quota (EPS). An aggregation event is created when either the time threshold or the event-count threshold is reached, whichever occurs first.
For aggregation rules, you can configure a filter and apply it only to events that match the specified conditions.
You can configure aggregation rules under Resources → Aggregation rules, and then select the created aggregation rule from the drop-down list in the collector settings. You can also configure aggregation rules directly in collector settings. Available aggregation rule settings are listed in the table below.
Available aggregation rule settings
Setting |
Description |
---|---|
Name |
Unique name of the resource. Maximum length of the name: 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
Threshold |
Threshold on the number of events. After accumulating the specified number of events with identical fields, the collector creates an aggregation event and begins accumulating events for the next aggregated event. The default value is |
Triggered rule lifetime |
Threshold on time in seconds. When the specified time expires, the accumulation of base events stops, the collector creates an aggregated event and starts obtaining events for the next aggregated event. Required setting. |
Description |
Description of the resource. Maximum length of the description: 4000 Unicode characters. |
Identical fields |
Fields of normalized events whose values must match. Required setting. |
Unique fields |
Fields whose range of values must be preserved in the aggregated event. For example, if the |
Sum fields |
Fields whose values are summed up during aggregation and written to the same-name fields of the aggregated event. |
Filter |
Conditions for determining which events must be processed by the resource. In the drop-down list, you can select an existing filter or select Create new to create a new filter. In aggregation rules, do not use filters with the TI operand or the TIDetect, inActiveDirectoryGroup, or hasVulnerability operators. The |
The KUMA distribution kit includes aggregation rules listed in the table below.
Predefined aggregation rules
Aggregation rule name |
Description |
[OOTB] Netflow 9 |
The rule is triggered after 100 events or 10 seconds. Events are aggregated by the following fields:
The |
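The threshold behavior described above (accumulate events whose identical fields match until either the event-count threshold or the time threshold fires, whichever comes first, then emit one aggregation event with summed fields) can be sketched roughly as follows. The field names and threshold values are illustrative, echoing the [OOTB] Netflow 9 example; this is not KUMA's actual implementation:

```python
import time

THRESHOLD = 100   # event-count threshold ("Threshold")
LIFETIME = 10.0   # time threshold in seconds ("Triggered rule lifetime")
IDENTICAL = ("SourceAddress", "DestinationAddress", "DestinationPort")
SUM_FIELDS = ("BytesIn", "BytesOut")  # hypothetical sum fields

buckets = {}   # accumulation state, keyed by the identical-field values
emitted = []   # aggregation events produced so far

def flush(key):
    """Emit one aggregation event in place of the accumulated base events."""
    bucket = buckets.pop(key)
    agg = dict(zip(IDENTICAL, key))          # identical fields carry over
    agg.update(bucket["sums"])               # sum fields are totalled
    agg["aggregationCount"] = bucket["count"]
    emitted.append(agg)

def ingest(event, now=None):
    now = time.monotonic() if now is None else now
    key = tuple(event.get(f) for f in IDENTICAL)
    bucket = buckets.setdefault(
        key, {"count": 0, "sums": dict.fromkeys(SUM_FIELDS, 0), "start": now})
    bucket["count"] += 1
    for f in SUM_FIELDS:
        bucket["sums"][f] += event.get(f, 0)
    # Whichever threshold is reached first triggers the aggregation event.
    if bucket["count"] >= THRESHOLD or now - bucket["start"] >= LIFETIME:
        flush(key)
```

Feeding 100 events with the same address/port triple through `ingest` produces a single aggregation event carrying the summed byte counters.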
Enrichment rules
Event enrichment involves adding information to events that can be used to identify and investigate an incident.
Enrichment rules let you add supplementary information to event fields by transforming data that is already present in the fields, or by querying data from external systems. For example, suppose that a user name is recorded in the event. You can use an enrichment rule to add information about the department, position, and manager of this user to the event fields.
Enrichment rules can be used in the following KUMA services and features:
- Collector. In the collector, you can create an enrichment rule, and it becomes a resource that you can reuse in other services. You can also link an enrichment rule created as a standalone resource.
- Correlator. In the correlator, you can create an enrichment rule, and it becomes a resource that you can reuse in other services. You can also link an enrichment rule created as a standalone resource.
- Normalizer. In the normalizer, you can only create an enrichment rule linked to that normalizer. Such a rule will not be available as a standalone resource for reuse in other services.
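The user-directory example above can be sketched as a simple lookup. In a real deployment the directory data would come from an external system (for example, LDAP) rather than a hard-coded table; the directory contents and field names here are hypothetical:

```python
# Hypothetical user directory; in practice this data would be queried
# from an external system rather than hard-coded.
USER_DIRECTORY = {
    "jsmith": {"department": "Finance",
               "position": "Accountant",
               "manager": "mbrown"},
}

def enrich(event: dict) -> dict:
    """Return a copy of the event with user details added, if known."""
    enriched = dict(event)
    info = USER_DIRECTORY.get(event.get("SourceUserName", ""))
    if info:  # leave the event unchanged when the user is unknown
        enriched.update(info)
    return enriched

event = {"SourceUserName": "jsmith", "DeviceAction": "logon"}
print(enrich(event))
```

An unknown user name simply passes through unchanged, which mirrors the idea that enrichment adds supplementary fields without altering the base event.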
Available enrichment rule settings are listed in the table below.
Basic settings tab
Setting |
Description |
---|---|
Name |
Required setting. Unique name of the resource. Must contain 1 to 128 Unicode characters. |
Tenant |
Required setting. The name of the tenant that owns the resource. |
Source kind |
Required setting. Drop-down list for selecting the type of incoming events. Depending on the selected type, you may see the following additional settings: |
Debug |
You can use this toggle switch to enable the logging of service operations. Logging is disabled by default. |
Description |
Resource description: up to 4,000 Unicode characters. |
Filter |
Group of settings in which you can specify the conditions for identifying events that must be processed by this resource. You can select an existing filter from the drop-down list or create a new filter. |
Predefined enrichment rules
The KUMA distribution kit includes enrichment rules listed in the table below.
Predefined enrichment rules
Enrichment rule name |
Description |
[OOTB] KATA alert |
Used to enrich events received from KATA in the form of a hyperlink to an alert. The hyperlink is put in the DeviceExternalId field. |
Data collection and analysis rules
Data collection and analysis rules are used to recognize events from stored data.
Unlike real-time streaming correlation, data collection and analysis rules let you use the SQL language to recognize and analyze events stored in the database.
To manage the section, you need one of the following roles: General administrator, Tenant administrator, Tier 1 analyst, Tier 2 analyst.
When creating or editing data collection and analysis rules, you need to specify the settings listed in the table below.
Settings of data collection and analysis rules
Setting |
Description |
Name |
Required setting. Unique name of the resource. Must contain 1 to 128 Unicode characters. |
Tenant |
Required setting. The name of the tenant that owns the resource. If you have access to only one tenant, this field is filled in automatically. If you have access to multiple tenants, the name of the first tenant from your list of available tenants is inserted. You can select any tenant from this list. |
Sql |
Required setting. The SQL query must contain an aggregation function with a LIMIT and/or a data grouping with a LIMIT. You must use a LIMIT value between 1 and 10,000. Examples of SQL queries
You can also use SQL function sets: |
Query interval |
Required setting. The interval for executing the SQL query. You can specify the interval in minutes, hours, and days. The minimum interval is 1 minute. The default timeout of the SQL query is equal to the interval that you specify in this field. If the execution of the SQL query takes longer than the timeout, an error occurs. In this case, we recommend increasing the interval. For example, if the interval is 1 minute, and the query takes 80 seconds to execute, we recommend setting the interval to at least 90 seconds. |
Tags |
Optional setting. Tags for resource search. |
Depth |
Optional setting. Expression for the lower bound of the interval for searching events in the database. To select a value from the list or to specify the depth as a relative interval, place the cursor in the field. For example, if you want to find all events from one hour ago to now, set the relative interval of |
Description |
Optional setting. Description of data collection and analysis rules. |
Mapping |
Settings for mapping the fields of an SQL query result to KUMA events: Source field is the field from the SQL query result that you want to convert into a KUMA event. Event field is the KUMA event field; you can select one of the values in the list by placing the cursor in this field. Label is a unique custom label for event fields that begin with DeviceCustom*. You can add new table rows or delete table rows. To add a row, click Add mapping; to delete rows, select the check boxes next to them and click the delete button. If you do not want to fill in the fields manually, you can click the Add mapping from SQL button: the field mapping table is then populated with the fields of the SQL query, including aliases (if any). Clicking the Add mapping from SQL button again does not refresh the table; the fields from the SQL query are added to it again. |
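The Sql requirement above (an aggregation function or data grouping with a LIMIT between 1 and 10,000) can be illustrated with a runnable sketch. The table and field names below are hypothetical, and SQLite stands in for the KUMA storage, whose actual SQL dialect and schema depend on your deployment:

```python
import sqlite3

# Hypothetical events table standing in for a KUMA storage partition;
# real field names depend on your event schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (SourceUserName TEXT, DeviceEventClassID TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("alice", "4625"), ("alice", "4625"), ("bob", "4625"), ("alice", "4624")],
)

# An aggregation with grouping and a LIMIT, as the Sql setting requires
# (the LIMIT value must be between 1 and 10,000).
rows = conn.execute(
    """
    SELECT SourceUserName AS user, COUNT(*) AS failures
    FROM events
    WHERE DeviceEventClassID = '4625'
    GROUP BY SourceUserName
    ORDER BY failures DESC
    LIMIT 100
    """
).fetchall()
print(rows)  # [('alice', 2), ('bob', 1)]
```

The aliases (`user`, `failures`) are what the Add mapping from SQL button would pick up when populating the field mapping table.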
You can create a data collection and analysis rule in one of the following ways:
- In the Resources → Resources and services → Data collection and analysis rules section.
- In the Events section.
To create a data collection and analysis rule in the Events section:
- Create or generate an SQL query and click the button.
A new browser tab opens for creating a data collection and analysis rule, with the SQL query and Depth fields pre-filled. The field mapping table is also populated automatically if you did not use an asterisk (*) in the SQL query.
- Fill in the required fields.
If necessary, you can change the value in the Query interval field.
- Save the settings.
The data collection and analysis rule is saved and is available in the Resources and services → Data collection and analysis rules section.
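The field mapping configured in the rule can be pictured as a simple rename step applied to each row of the SQL query result. This is an illustrative sketch, not KUMA code; the mapping and field names are assumptions:

```python
# Hypothetical mapping of SQL result columns to KUMA event fields,
# mirroring the Source field / Event field pairs in the rule settings.
mapping = {"user": "SourceUserName", "failures": "BaseEventCount"}

def to_base_event(sql_row: dict) -> dict:
    """Convert one SQL query result row into a KUMA-style base event."""
    return {event_field: sql_row[source_field]
            for source_field, event_field in mapping.items()}

event = to_base_event({"user": "alice", "failures": 7})
print(event)  # {'SourceUserName': 'alice', 'BaseEventCount': 7}
```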
Configuring the scheduler for a data collection and analysis rule
For a data collection and analysis rule to run, you must create a scheduler for it.
The scheduler makes SQL queries to specified storage partitions with the interval and search depth configured in the rule, and then converts the SQL query results into base events, which it then sends to the correlator.
SQL query results converted to base events are not stored in the storage.
For the scheduler to work correctly, you must configure the link between the data collection and analysis rule, the storage, and the correlators in the Resources → Data collection and analysis section.
To manage this section, you need one of the following roles: General administrator, Tenant administrator, Tier 2 analyst, Access to shared resources, Manage shared resources.
The schedulers are arranged in the table by the date of their last launch. You can sort the data in columns in ascending or descending order by clicking the icon in the column heading.
Available columns of the table of schedulers:
- Rule name is the name of the data collection and analysis rule for which you created the scheduler.
- Tenant name is the name of the tenant to which the data collection and analysis rule belongs.
- Status is the status of the scheduler. The following values are possible:
- Enabled means the scheduler is running, and the data collection and analysis rule will be started in accordance with the specified schedule.
- Disabled means the scheduler is not running.
This is the default status of a newly created scheduler. For the scheduler to run, it must be Enabled.
- The scheduler finished at is the last time the scheduler's data collection and analysis rule was started.
- Rule run status is the status with which the scheduler has finished. The following values are possible:
- Ok means the scheduler finished without errors, the rule was started.
- Unknown means the scheduler was Enabled and its status is currently unknown. The Unknown status is displayed if you have linked storages and correlators on the corresponding tabs and Enabled the scheduler, but have not yet started it.
- Stopped means the scheduler is stopped, the rule is not running.
- Error means the scheduler has finished, and the rule was completed with an error.
- Last error lists errors (if any) that occurred during the execution of the data collection and analysis rule.
Failure to send events to the configured correlator does not constitute an error.
You can use the toolbar in the upper part of the table to perform actions on schedulers.
To edit the scheduler, click the corresponding line in the table.
Available scheduler settings for data collection and analysis rules are described below.
General tab
On this tab you can:
- Enable or disable the scheduler using a toggle switch.
If the toggle switch is enabled, the data collection and analysis rule runs in accordance with the schedule configured in its settings.
- Edit the following settings of the data collection and analysis rule:
- Name
- Query interval
- Depth
- Sql
- Description
- Mapping
The Linked storages tab
On this tab you need to specify the storage to which the scheduler will send SQL queries.
To specify a storage:
- Click the Link button in the toolbar.
- This opens a window; in that window, specify the name of the storage to which you want to add the link, as well as the name of the section of the selected storage.
You can select only one storage, but multiple sections of that storage.
- Click Add.
The link is created and displayed in the table on the Linked storages tab.
If necessary, you can remove the links by selecting the check boxes in the relevant rows of the table and clicking the Unlink selected button.
The Linked correlators tab
On this tab, you must add correlators for handling base events.
To add a correlator:
- Click the Link button in the toolbar.
- This opens a window; in that window, hover over the Correlator field.
- In the displayed list of correlators, select check boxes next to the correlators you want to add.
- Click Add.
The correlators are added and displayed in the table on the Linked correlators tab.
If necessary, you can remove the correlators by selecting the check boxes in the relevant rows of the table and clicking the Unlink selected button.
You can also view the result of the scheduler in the Core log; to do so, you must first enable Debug mode in the Core settings. To download the log, go to the Resources → Active services section in KUMA, select the Core service, and click the Log button.
Log records with scheduler results have the datamining scheduler prefix.
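The scheduler behavior described above (run the rule's SQL query at the configured interval, with a timeout equal to that interval) can be sketched as follows. This is a simplified illustration; `run_scheduler` and its arguments are hypothetical and not part of KUMA:

```python
import time

def run_scheduler(execute_query, interval_s: float, iterations: int) -> list:
    """Run the query once per interval; a run that takes longer than the
    interval counts as a timeout error, as the Query interval setting describes."""
    results = []
    for _ in range(iterations):
        started = time.monotonic()
        rows = execute_query()
        elapsed = time.monotonic() - started
        if elapsed > interval_s:
            results.append("error: query exceeded its timeout")
        else:
            results.append(rows)
        # Sleep out the remainder of the interval before the next run.
        time.sleep(max(0.0, interval_s - elapsed))
    return results

out = run_scheduler(lambda: ["row"], interval_s=0.05, iterations=2)
print(out)  # [['row'], ['row']]
```

If the query regularly exceeds the interval, the guidance above applies: increase the interval rather than shortening the query.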
Correlation rules
Correlation rules are used to recognize specific sequences of processed events and to take certain actions after recognition, such as creating correlation events/alerts or interacting with an active list.
Correlation rules can be used in the following KUMA services and features:
- Correlator.
- Notification rule.
- Links of segmentation rules.
- Retroscan.
The available correlation rule settings depend on the selected type. Types of correlation rules:
- standard—used to find correlations between several events. Resources of this kind can create correlation events.
This rule kind is used to detect complex correlation patterns. For simpler patterns, you should use other correlation rule kinds, which require fewer resources to operate.
- simple—used to create correlation events if a certain event is found.
- operational—used for operations with Active lists and context tables. This rule kind cannot create correlation events.
For these resources, you can enable the display of control characters in all input fields except the Description field.
If a correlation rule is used in the correlator and an alert was created based on it, any change to the correlation rule will not result in a change to the existing alert even if the correlator service is restarted. For example, if the name of a correlation rule is changed, the name of the alert will remain the same. If you close the existing alert, a new alert will be created and it will take into account the changes made to the correlation rule.
Correlation rules of the 'standard' type
Correlation rules of the standard type are used for identifying complex patterns in processed events.
The search for patterns is conducted using buckets.
Settings for a correlation rule of the standard type are described in the following tables.
General tab
This tab lets you specify the general settings of the correlation rule.
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
Type |
Correlation rule type: standard. Required setting. |
Tags |
Tags for resource search. Optional setting. |
Identical fields |
Event fields that must be grouped in a bucket. The hash of the values of the selected event fields is used as the bucket key. If one of the selectors specified on the Selectors tab is triggered, the selected event fields are copied to the correlation event. If different selectors of the correlation rule use event fields that have different meanings in the events, do not specify such event fields in the Identical fields drop-down list. You can specify local variables. To refer to a local variable, its name must be preceded with the $ character. Required setting. |
Window, sec |
Bucket lifetime in seconds. The time starts counting when the bucket is created, that is, when the bucket receives the first event. When the bucket lifetime expires, the trigger specified on the Actions → On timeout tab is activated, and the bucket is deleted. Triggers specified on the Actions → On every threshold and On subsequent thresholds tabs can be activated more than once during the lifetime of the bucket. Required setting. |
Unique fields |
Unique event fields to be sent to the bucket. If you specify unique event fields, only these event fields are sent to the bucket. The hash of the values of the selected fields is used as the bucket key. You can specify local variables. To refer to a local variable, its name must be preceded with the $ character. |
Rate limit |
Maximum number of times a correlation rule can be triggered per second. The default value is If correlation rules employing complex logic for pattern detection are not triggered, this may be due to the way rule triggers are counted in KUMA. In this case, we recommend increasing the Rate limit , for example, to |
Base events keep policy |
This drop-down list lets you select base events that you want to put in the correlation event:
|
Severity |
Base coefficient used to determine the importance of a correlation rule:
|
Order by |
Event field to be used by selectors of the correlation rule to track the evolution of the situation. This can be useful, for example, if you want to configure a correlation rule to be triggered when several types of events occur in a sequence. |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
MITRE techniques |
Downloaded MITRE ATT&CK techniques for analyzing the security coverage status using the MITRE ATT&CK matrix. |
Use unique field mapping |
This toggle switch allows you to save the values of unique fields to an array and pass it to a correlation event field. If the toggle switch is enabled, an additional Unique field mapping group of settings is displayed in the lower part of the General tab, in which you can configure the mapping of the original unique fields to correlation event fields. When processing an event using a correlation rule, field mapping takes place first, and then operations from the Actions tab are applied to the correlation event resulting from the initial mapping. The toggle switch is turned off by default. Optional setting. |
Unique field mapping group of settings
If you need to pass values of fields listed under Unique fields to the correlation event, here you can configure the mapping of unique fields to correlation event fields. This group of settings is displayed on the General tab if the Use unique field mapping toggle switch is enabled. Values of unique fields are an array, therefore the field in the correlation event must have the appropriate type: SA, NA, FA.
You can add a mapping by clicking the Add button and selecting a field from the drop-down list in the Raw event field column. You can select fields specified in the Unique fields parameter. In the drop-down list in the Target event field column, select the correlation event field to which you want to write the array of values of the source field. You can select fields whose type matches the type of the array (SA, NA, or FA, depending on the type of the source field).
You can delete one or more mappings by selecting the check boxes next to the relevant mappings and clicking Delete.
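A minimal sketch of how the bucket grouping and unique field mapping described above fit together: the bucket key is a hash of the Identical fields values, and the Unique fields values accumulate into an array that the mapping writes to an array-type (SA) field. All names here, including DestinationAssetsSA, are hypothetical:

```python
import hashlib
from collections import defaultdict

# Hypothetical grouping for a 'standard' correlation rule.
identical_fields = ["SourceUserName"]   # fields hashed into the bucket key
unique_field = "DestinationHostName"    # unique values collected per bucket

buckets = defaultdict(set)

def add_event(event: dict) -> None:
    """Route an event into its bucket and record the unique field value."""
    key = hashlib.sha256(
        "|".join(str(event[f]) for f in identical_fields).encode()
    ).hexdigest()
    buckets[key].add(event[unique_field])

for e in [
    {"SourceUserName": "alice", "DestinationHostName": "srv1"},
    {"SourceUserName": "alice", "DestinationHostName": "srv2"},
    {"SourceUserName": "alice", "DestinationHostName": "srv1"},
]:
    add_event(e)

# One bucket for alice; the unique values become an SA-style array field.
(unique_values,) = [sorted(v) for v in buckets.values()]
correlation_event = {"DestinationAssetsSA": unique_values}
print(correlation_event)  # {'DestinationAssetsSA': ['srv1', 'srv2']}
```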
Selectors tab
This tab is used to define the conditions that the processed events must fulfill to trigger the correlation rule. To add a selector, click the + Add selector button. You can add multiple selectors, reorder selectors, or remove selectors. To reorder selectors, use the reorder icons. To remove a selector, click the delete icon next to it.
Each selector has a Settings tab and a Local variables tab.
The settings available on the Settings tab are described in the table below.
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Selector threshold (event count) |
The number of events that must be received for the selector to trigger. The default value is Required setting. |
Recovery |
This toggle switch prevents the correlation rule from triggering when the selector receives the number of events specified in the Selector threshold (event count) field. This toggle switch is turned off by default. |
Filter |
The filter that defines criteria for identifying events that trigger the selector when received. You can select an existing filter or create a new filter. To create a new filter, select Create new. If you want to edit the settings of an existing filter, click the pencil icon. Filtering based on data from the Extra event field is supported. The order of conditions specified in the selector filter of the correlation rule is significant and affects system performance. We recommend putting the most unique condition in the first place in the selector filter. Consider two examples of selector filters that select successful authentication events in Microsoft Windows. Selector filter 1: Condition 1: Condition 2: Selector filter 2: Condition 1: Condition 2: The order of conditions specified in selector filter 2 is preferable because it places less load on the system. |
On the Local variables tab, you can add variables that will be valid inside the correlation rule. To add a variable, click the + Add button, then specify the variable and its value. You can add multiple variables or delete variables. To delete a variable, select the check box next to it and click the Delete button.
In the selector of the correlation rule, you can use regular expressions conforming to the RE2 standard. Using regular expressions in correlation rules is computationally intensive compared to other operations. When designing correlation rules, we recommend limiting the use of regular expressions to the necessary minimum and using other available operations.
To use a regular expression, you must use the match operator. The regular expression must be placed in a constant. The use of capture groups in regular expressions is optional. For the correlation rule to trigger, the field text matched against the regexp must exactly match the regular expression.
For a primer on the syntax and examples of correlation rules that use regular expressions in their selectors, see the following rules that are provided with KUMA:
- R105_04_Suspicious PowerShell commands. Suspected obfuscation.
- R333_Suspicious creation of files in the autorun folder.
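The full-match semantics of the match operator can be approximated in Python, whose re syntax overlaps with RE2 for patterns like the one below (no backreferences or lookarounds). The pattern and helper are illustrative assumptions, not rules shipped with KUMA:

```python
import re

# RE2-compatible pattern; the field value must match it in full,
# as the match operator requires.
pattern = re.compile(r"powershell(\.exe)?\s+-enc\s+\S+")

def match_operator(field_value: str) -> bool:
    """Approximates the correlation rule match operator with a full match."""
    return pattern.fullmatch(field_value) is not None

print(match_operator("powershell.exe -enc SQBFAFgA"))          # True
print(match_operator("cmd /c powershell.exe -enc SQBFAFgA"))   # False: partial match only
```

This also illustrates why regular expressions are comparatively expensive: the whole field is scanned on every event, so anchor them with cheaper filter conditions placed first.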
Actions tab
You can use this tab to configure the triggers of the correlation rule. You can configure triggers on the following tabs:
- On first threshold triggers when the Bucket registers the first triggering of the selector during the lifetime of the Bucket.
- On subsequent thresholds triggers when the Bucket registers the second and all subsequent triggering of the selector during the lifetime of the Bucket.
- On every threshold triggers every time the Bucket registers the triggering of the selector.
- On timeout triggers when the lifetime of the bucket ends, and is used together with a selector that has the Recovery toggle switch enabled in its settings. Thus, this trigger activates if the situation detected by the correlation rule is not resolved within the specified lifetime.
Available trigger settings are listed in the table below.
Setting |
Description |
---|---|
Output |
This check box enables the sending of correlation events for post-processing, that is, for external enrichment outside the correlation rule, for response, and to destinations. By default, this check box is cleared. |
Loop to correlator |
This check box enables the processing of the created correlation event by the rule chain of the current correlator. This makes hierarchical correlation possible. By default, this check box is cleared. If the Output and Loop to correlator check boxes are selected, the correlation event is sent for post-processing first, and then to the selectors of the current correlator. |
No alert |
The check box disables the creation of alerts when the correlation rule is triggered. By default, this check box is cleared. If you do not want to create an alert when a correlation rule is triggered, but you still want to send a correlation event to the storage, select the Output and No alert check boxes. If you select only the No alert check box, a correlation event is not saved in the storage. |
Enrichment |
Enrichment rules for modifying the values of correlation event fields. Enrichment rules are stored in the correlation rule where they were created. To create an enrichment rule, click the + Add enrichment button. Available enrichment rule settings:
You can create multiple enrichment rules, reorder enrichment rules, or delete enrichment rules. To reorder enrichment rules, use the reorder |
Categorization |
Categorization rules for assets involved in the event. Using categorization rules, you can link and unlink only reactive categories to and from assets. To create an enrichment rule, click the + Add categorization button. Available categorization rule settings:
You can create multiple categorization rules, reorder categorization rules, or delete categorization rules. To reorder categorization rules, use the reorder |
Active lists update |
Operations with active lists. To create an operation with an active list, click the + Add active list action button. Available parameters of an active list operation:
You can create multiple operations with active lists, reorder operations with active lists, or delete operations with active lists. To reorder operations with active lists, use the reorder |
Updating context tables |
Operations with context tables. To create an operation with a context table, click the + Add context table action button. Available parameters of a context table operation:
You can create multiple operations with context tables, reorder operations with context tables, or delete operations with context tables. To reorder operations with context tables, use the reorder |
Correlators tab
This tab is displayed only when you edit the settings of the created correlation rule; on this tab, you can link correlators to the correlation rule.
To add correlators, click the + Add button, specify one or more correlators in the displayed window, and click OK. The correlation rule is linked to the specified correlators and added to the end of the execution queue in the correlator settings. If you want to change the position of a correlation rule in the execution queue, go to the Resources → Correlator section, click the correlator, and in the displayed window, go to the Correlation section, select the check box next to the correlation rule, and change the position of the correlation rule by clicking the Move up and Move down buttons.
You can add multiple correlators or delete correlators. To delete a correlator, select the check box next to it and click Delete.
Correlation rules of the 'simple' type
Correlation rules of the simple type are used to define simple sequences of events. Settings for a correlation rule of the simple type are described in the following tables.
General tab
This tab lets you specify the general settings of the correlation rule.
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
Type |
Correlation rule type: simple. Required setting. |
Tags |
Tags for resource search. Optional setting. |
Propagated fields |
Event fields by which events are selected. If a selector specified on the Selectors tab is triggered, the selected event fields are copied to the correlation event. |
Rate limit |
Maximum number of times a correlation rule can be triggered per second. The default value is If correlation rules employing complex logic for pattern detection are not triggered, this may be due to the way rule triggers are counted in KUMA. In this case, we recommend increasing the Rate limit , for example, to |
Severity |
Base coefficient used to determine the importance of a correlation rule:
|
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
MITRE techniques |
Downloaded MITRE ATT&CK techniques for analyzing the security coverage status using the MITRE ATT&CK matrix. |
Selectors tab
This tab is used to define the conditions that the processed events must fulfill to trigger the correlation rule. A selector has a Settings tab and a Local variables tab.
The settings available on the Settings tab are described in the table below.
Setting |
Description |
---|---|
Filter |
The filter that defines criteria for identifying events that trigger the selector when received. You can select an existing filter or create a new filter. To create a new filter, select Create new. If you want to edit the settings of an existing filter, click the pencil icon. Filtering based on data from the Extra event field is supported. The order of conditions specified in the selector filter of the correlation rule is significant and affects system performance. We recommend putting the most unique condition in the first place in the selector filter. Consider two examples of selector filters that select successful authentication events in Microsoft Windows. Selector filter 1: Condition 1: Condition 2: Selector filter 2: Condition 1: Condition 2: The order of conditions specified in selector filter 2 is preferable because it places less load on the system. |
On the Local variables tab, you can add variables that will be valid inside the correlation rule. To add a variable, click the + Add button, then specify the variable and its value. You can add multiple variables or delete variables. To delete a variable, select the check box next to it and click the Delete button.
Actions tab
You can use this tab to configure the trigger of the correlation rule. A correlation rule of the simple type can have only one trigger, which is activated each time the bucket registers the selector triggering. Available trigger settings are listed in the table below.
Setting |
Description |
---|---|
Output |
This check box enables the sending of correlation events for post-processing, that is, for external enrichment outside the correlation rule, for response, and to destinations. By default, this check box is cleared. |
Loop to correlator |
This check box enables the processing of the created correlation event by the rule chain of the current correlator. This makes hierarchical correlation possible. By default, this check box is cleared. If the Output and Loop to correlator check boxes are selected, the correlation event is sent for post-processing first, and then to the selectors of the current correlator. |
No alert |
The check box disables the creation of alerts when the correlation rule is triggered. By default, this check box is cleared. If you do not want to create an alert when a correlation rule is triggered, but you still want to send a correlation event to the storage, select the Output and No alert check boxes. If you select only the No alert check box, a correlation event is not saved in the storage. |
Enrichment |
Enrichment rules for modifying the values of correlation event fields. Enrichment rules are stored in the correlation rule where they were created. To create an enrichment rule, click the + Add enrichment button. Available enrichment rule settings:
You can create multiple enrichment rules, reorder enrichment rules, or delete enrichment rules. To reorder enrichment rules, use the reorder |
Categorization |
Categorization rules for assets involved in the event. Using categorization rules, you can link and unlink only reactive categories to and from assets. To create an enrichment rule, click the + Add categorization button. Available categorization rule settings:
You can create multiple categorization rules, reorder categorization rules, or delete categorization rules. To reorder categorization rules, use the reorder |
Active lists update |
Operations with active lists. To create an operation with an active list, click the + Add active list action button. Available parameters of an active list operation:
You can create multiple operations with active lists, reorder operations with active lists, or delete operations with active lists. To reorder operations with active lists, use the reorder |
Updating context tables |
Operations with context tables. To create an operation with a context table, click the + Add context table action button. Available parameters of a context table operation:
You can create multiple operations with context tables, reorder operations with context tables, or delete operations with context tables. To reorder operations with context tables, use the reorder |
Correlators tab
This tab is displayed only when you edit the settings of the created correlation rule; on this tab, you can link correlators to the correlation rule.
To add correlators, click the + Add button, specify one or more correlators in the displayed window, and click OK. The correlation rule is linked to the specified correlators and added to the end of the execution queue in the correlator settings. If you want to change the position of a correlation rule in the execution queue, go to the Resources → Correlator section, click the correlator, and in the displayed window, go to the Correlation section, select the check box next to the correlation rule, and change the position of the correlation rule by clicking the Move up and Move down buttons.
You can add multiple correlators or delete correlators. To delete a correlator, select the check box next to it and click Delete.
Correlation rules of the 'operational' type
Correlation rules of the operational type are used for working with active lists. Settings for a correlation rule of the operational type are described in the following tables.
General tab
This tab lets you specify the general settings of the correlation rule.
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
Type |
Correlation rule type: operational. Required setting. |
Tags |
Tags for resource search. Optional setting. |
Rate limit |
Maximum number of times a correlation rule can be triggered per second. The default value is If correlation rules employing complex logic for pattern detection are not triggered, this may be due to the way rule triggers are counted in KUMA. In this case, we recommend increasing the Rate limit , for example, to |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
MITRE techniques |
Downloaded MITRE ATT&CK techniques for analyzing the security coverage status using the MITRE ATT&CK matrix. |
Selectors tab
This tab is used to define the conditions that the processed events must fulfill to trigger the correlation rule. A selector has a Settings tab and a Local variables tab.
The settings available on the Settings tab are described in the table below.
Setting |
Description |
---|---|
Filter |
The filter that defines criteria for identifying events that trigger the selector when received. You can select an existing filter or create a new filter. To create a new filter, select Create new. If you want to edit the settings of an existing filter, click the pencil icon. Filtering based on data from the Extra event field is supported. The order of conditions specified in the selector filter of the correlation rule is significant and affects system performance. We recommend putting the most unique condition in the first place in the selector filter. Consider two examples of selector filters that select successful authentication events in Microsoft Windows. Selector filter 1: Condition 1: Condition 2: Selector filter 2: Condition 1: Condition 2: The order of conditions specified in selector filter 2 is preferable because it places less load on the system. |
On the Local variables tab, you can add variables that will be valid inside the correlation rule. To add a variable, click the + Add button, then specify the variable and its value. You can add multiple variables or delete variables. To delete a variable, select the check box next to it and click the Delete button.
Actions tab
You can use this tab to configure the trigger of the correlation rule. A correlation rule of the operational type can have only one trigger, which is activated each time the bucket registers the selector triggering. Available trigger settings are listed in the table below.
Setting |
Description |
---|---|
Active lists update |
Operations with active lists. To create an operation with an active list, click the + Add active list action button. Available parameters of an active list operation:
You can create multiple operations with active lists, reorder operations with active lists, or delete operations with active lists. To reorder operations with active lists, use the reorder |
Updating context tables |
Operations with context tables. To create an operation with a context table, click the + Add context table action button. Available parameters of a context table operation:
You can create multiple operations with context tables, reorder operations with context tables, or delete operations with context tables. To reorder operations with context tables, use the reorder |
Correlators tab
This tab is displayed only when you edit the settings of the created correlation rule; on this tab, you can link correlators to the correlation rule.
To add correlators, click the + Add button, specify one or more correlators in the displayed window, and click OK. The correlation rule is linked to the specified correlators and added to the end of the execution queue in the correlator settings. If you want to change the position of a correlation rule in the execution queue, go to the Resources → Correlator section, click the correlator, and in the displayed window, go to the Correlation section, select the check box next to the correlation rule, and change the position of the correlation rule by clicking the Move up and Move down buttons.
You can add multiple correlators or delete correlators. To delete a correlator, select the check box next to it and click Delete.
Variables in correlators
If tracking values in event fields, active lists, or dictionaries is not enough to cover some specific security scenarios, you can use global and local variables. You can use them to take various actions on the values received by the correlators by implementing complex logic for threat detection. Variables can be declared in the correlator (global variables) or in the correlation rule (local variables) by assigning a function to them, then querying them from correlation rules as if they were ordinary event fields and receiving the triggered function result in response.
Usage scope of variables:
- When searching for identical or unique field values in correlation rules.
- In the correlation rule selectors, in the filters of the conditions under which the correlation rule must be triggered.
- When enriching correlation events. Select Event as the source type.
- When populating active lists with values.
Variables can be queried the same way as event fields by preceding their names with the $ character.
You can use extended event schema fields in correlation rules, local variables, and global variables.
Local variables in identical and unique fields
You can use local variables in the Identical fields and Unique fields sections of 'standard' type correlation rules. To use a local variable, its name must be preceded with the "$" character.
For an example of using local variables in the Identical fields and Unique fields sections, refer to the rule provided with KUMA: R403_Access to malicious resources from a host with disabled protection or an out-of-date anti-virus database.
Local variables in selector
To use a local variable in a selector:
- Add a local variable to the rule.
- In the Correlation rules window, go to the General tab and add the created local variable to the Identical fields section. Prefix the local variable name with a "$" character.
- In the Correlation rules window, go to the Selectors tab, select an existing filter or create a new filter, and click Add condition.
- Select the event field as the operand.
- Select the local variable as the event field value and prefix the variable name with a "$" character.
- Specify the remaining filter settings.
- Click Save.
For an example of using local variables, refer to the rule provided with KUMA: R403_Access to malicious resources from a host with disabled protection or an out-of-date anti-virus database.
Local variables in event enrichment
You can use 'standard' and 'simple' correlation rules to enrich events with local variables.
Enrichment with text and numbers
You can enrich events with text (strings). To do so, you can use functions that modify strings: to_lower, to_upper, str_join, append, prepend, substring, tr, replace.
You can enrich events with numbers. To do so, you can use the following functions: addition ("+"), subtraction ("-"), multiplication ("*"), division ("/"), round, ceil, floor, abs, pow.
You can also use regular expressions to manage data in local variables.
Using regular expressions in correlation rules is computationally intensive compared to other operations. Therefore, when designing correlation rules, we recommend limiting the use of regular expressions to the necessary minimum and using other available operations.
Timestamp enrichment
You can enrich events with timestamps (date and time). To do so, you can use functions that let you get or modify timestamps: now, extract_from_timestamp, parse_timestamp, format_timestamp, truncate_timestamp, time_diff.
Operations with active lists and tables
You can enrich events with local variables and data from active lists and tables.
To enrich events with data from an active list, use the active_list, active_list_dyn functions.
To enrich events with data from a table, use the table_dict, dict functions.
You can create conditional statements by using the 'conditional' function in local variables. In this way, the variable can return one of the values depending on what data was received for processing.
Enriching events with a local variable
To use a local variable to enrich events:
- Add a local variable to the rule.
- In the Correlation rules window, go to the General tab and add the created local variable to the Identical fields section. Prefix the local variable name with a "$" character.
- In the Correlation rules window, go to the Actions tab, and under Enrichment, in the Source kind drop-down list, select Event.
- From the Target field drop-down list, select the KUMA event field to which you want to pass the value of the local variable.
- From the Source field drop-down list, select a local variable. Prefix the local variable name with a "$" character.
- Specify the remaining rule settings.
- Click Save.
Local variables in active list enrichment
You can use local variables to enrich active lists.
To enrich the active list with a local variable:
- Add a local variable to the rule.
- In the Correlation rules window, go to the General tab and add the created local variable to the Identical fields section. Prefix the local variable name with a "$" character.
- In the Correlation rules window, go to the Actions tab and under Active lists update, add the local variable to the Key fields field. Prefix the local variable name with a "$" character.
- Under Mapping, specify the correspondence between the event fields and the active list fields.
- Click the Save button.
Properties of variables
Local and global variables
The properties of global variables differ from the properties of local variables.
Global variables:
- Global variables are declared at the correlator level and are applied only within the scope of this correlator.
- The global variables of the correlator can be queried from all correlation rules that are specified in it.
- In standard correlation rules, the same global variable can take different values in each selector.
- It is not possible to transfer global variables between different correlators.
Local variables:
- Local variables are declared at the correlation rule level and are applied only within the limits of this rule.
- In standard correlation rules, the scope of a local variable consists of only the selector in which the variable was declared.
- Local variables can be declared in any type of correlation rule.
- Local variables cannot be transferred between rules or selectors.
- A local variable cannot be used as a global variable.
Variables used in various types of correlation rules
- In operational correlation rules, on the Actions tab, you can specify all variables available or declared in this rule.
- In standard correlation rules, on the Actions tab, you can provide only those variables specified in these rules on the General tab, in the Identical fields field.
- In simple correlation rules, on the Actions tab, you can provide only those variables specified in these rules on the General tab, in the Inherited Fields field.
Requirements for variables
When adding a variable function, you must first specify the name of the function, and then list its parameters in parentheses. Basic mathematical operations (addition, subtraction, multiplication, division) are an exception to this requirement. When these operations are used, parentheses designate the priority of the operations.
Requirements for function names:
- Must be unique within the correlator.
- Must contain 1 to 128 Unicode characters.
- Must not begin with the character $.
- Must be written in camelCase or CamelCase.
Special considerations when specifying functions of variables:
- The sequence of parameters is important.
- Parameters are separated by a comma (,).
- String parameters are passed in single quotes (').
- Event field names and variables are specified without quotation marks.
- When querying a variable as a parameter, add the $ character before its name.
- You do not need to add a space between parameters.
- In all functions in which a variable can be used as a parameter, nested functions can be created.
Functions of variables
Operations with active lists and dictionaries
"active_list" and "active_list_dyn" functions
These functions allow you to receive information from an active list and dynamically generate a field name for an active list and key.
You must specify the parameters in the following sequence:
- Name of the active list.
- Expression that returns the field name of the active list.
- One or more expressions whose results are used to generate the key.
Usage example | Result
active_list('Test', to_lower('DeviceHostName'), to_lower(DeviceCustomString2), to_lower(DeviceCustomString1)) | Gets the field value of the active list.
Use these functions to query the active list of the shared tenant from a variable. To do so, add the @Shared suffix after the name of the active list (case sensitive). For example, active_list('exampleActiveList@Shared', 'score', SourceAddress, SourceUserName).
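The lookup semantics described above can be modeled with a short Python sketch. This is not KUMA code: the `lists` sample data and the `'|'` composite-key separator are assumptions made for illustration only.

```python
# Illustrative Python model of active_list() lookups; NOT KUMA code.
# The sample data and the '|' key separator are hypothetical.

def to_lower(s):
    """Mirror of the KUMA to_lower() string function."""
    return s.lower()

def active_list(lists, name, field, *key_parts):
    """Model: look up `field` in the record keyed by the concatenated
    key parts in the active list called `name`. A miss resolves to an
    empty string, the way a missing value does in correlation rules."""
    record = lists.get(name, {}).get('|'.join(key_parts), {})
    return record.get(field, '')

# Hypothetical active list "Test" with a composite key of three parts.
lists = {'Test': {'host-1|svc|alice': {'devicehostname': 'host-1'}}}

value = active_list(lists, 'Test', to_lower('DeviceHostName'),
                    to_lower('HOST-1'), to_lower('SVC'), to_lower('Alice'))
```

The key parts are concatenated in the order they are passed, which is why the documentation stresses that the sequence of parameters is important.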
"table_dict" function
Gets information about the value in the specified column of a dictionary of the table type.
You must specify the parameters in the following sequence:
- Dictionary name
- Dictionary column name
- One or more expressions whose results are used to generate the dictionary row key.
Usage example | Result
table_dict('exampleTableDict', 'office', SourceUserName) | Gets data from the exampleTableDict dictionary, from the row with the SourceUserName key, in the office column.
table_dict('exampleTableDict', 'office', SourceAddress, to_lower(SourceUserName)) | Gets data from the exampleTableDict dictionary, from the row with a composite key formed from the SourceAddress field value and the lowercase value of the SourceUserName field, in the office column.
Use this function to access the dictionary of the shared tenant from a variable. To do so, add the @Shared suffix after the name of the dictionary (case sensitive). For example, table_dict('exampleTableDict@Shared', 'office', SourceUserName).
"dict" function
Gets information about the value in the specified column of a dictionary of the dictionary type.
You must specify the parameters in the following sequence:
- Dictionary name
- One or more expressions whose results are used to generate the dictionary row key.
Usage example | Result
dict('exampleDictionary', SourceAddress) | Gets data from exampleDictionary, from the row with the SourceAddress key.
dict('exampleDictionary', SourceAddress, to_lower(SourceUserName)) | Gets data from exampleDictionary, from the row with a composite key formed from the SourceAddress field value and the lowercase value of the SourceUserName field.
Use this function to access the dictionary of the shared tenant from a variable. To do so, add the @Shared suffix after the name of the dictionary (case sensitive). For example, dict('exampleDictionary@Shared', SourceAddress).
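The difference between the two dictionary functions can be modeled with a short Python sketch. This is not KUMA code: the sample data, helper names, and the `'|'` composite-key separator are assumptions for illustration only.

```python
# Illustrative Python model of dict() and table_dict() lookups; NOT
# KUMA code. Sample data and the '|' key separator are hypothetical.

def dict_lookup(dictionaries, name, *key_parts):
    """Model of dict(): fetch the single value stored under the
    (possibly composite) key. A miss resolves to an empty string."""
    return dictionaries.get(name, {}).get('|'.join(key_parts), '')

def table_dict(tables, name, column, *key_parts):
    """Model of table_dict(): fetch one named column of the row stored
    under the key; a table-type dictionary row has several columns."""
    row = tables.get(name, {}).get('|'.join(key_parts), {})
    return row.get(column, '')

dictionaries = {'exampleDictionary': {'10.0.0.5': 'dmz-host'}}
tables = {'exampleTableDict': {'10.0.0.5|alice': {'office': 'HQ'}}}

d = dict_lookup(dictionaries, 'exampleDictionary', '10.0.0.5')
t = table_dict(tables, 'exampleTableDict', 'office', '10.0.0.5', 'alice')
```

In short: dict() maps a key straight to a value, while table_dict() additionally selects a column within the matched row.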
Operations with context tables
"context_table" function
Returns the value of the specified field in the base type (for example, integer, array of integers).
You must specify the parameters in the following sequence:
- Name of the context table. The name must be specified.
- Expression that returns the field name of the context table.
- Expression that returns the name of key field 1 of the context table.
- Expression that returns the value of key field 1 of the context table.
The function must contain at least 4 parameters.
"len" function
Returns the length of a string or array.
The function returns the length of the array if the passed array is of one of the following types:
- array of integers
- array of floats
- array of strings
- array of booleans
If an array of a different type is passed, the data of the array is cast to the string type, and the function returns the length of the resulting string.
"distinct_items" function
Returns a list of unique elements in an array.
The function returns the list of unique elements of the array if the passed array is of one of the following types:
- array of integers
- array of floats
- array of strings
- array of booleans
If an array of a different type is passed, the data of the array is cast to the string type, and the function returns a string consisting of the unique characters from the original string.
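The behavior of "len" and "distinct_items", including the documented fallback of casting unsupported types to a string, can be modeled with a short Python sketch. This is not KUMA code; the element ordering returned by distinct_items is an assumption.

```python
# Illustrative Python model of len() and distinct_items(); NOT KUMA
# code. The cast-to-string fallback mirrors the documented behavior
# for unsupported array types.

def kuma_len(value):
    """len(): length of a supported array, otherwise the length of the
    value after casting it to a string."""
    if isinstance(value, list):
        return len(value)
    return len(str(value))

def distinct_items(values):
    """distinct_items(): unique elements, keeping first-seen order
    (ordering inside KUMA is not documented here; an assumption)."""
    seen, out = set(), []
    for v in values:
        if v not in seen:
            seen.add(v)
            out.append(v)
    return out

n = kuma_len(['string1', 'string2', 'string3'])   # 3
u = distinct_items([2, 1, 2, 3, 1])               # [2, 1, 3]
```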
"sort_items" function
Returns a sorted list of array elements.
You must specify the parameters in the following sequence:
- Expression that returns the object of the sorting.
- Sorting order Possible values:
asc
,desc
. If the parameter is not specified, the default value isasc
.
The function returns the list of sorted elements of the array if the passed array is of one of the following types:
- array of integers
- array of floats
- array of strings
For a boolean array, the function returns the list of array elements in the original order.
If an array of a different type is passed, the data of the array is cast to the string type, and the function returns a string of sorted characters.
"item" function
Returns the array element with the specified index or the character of a string with the specified index if an array of integers, floats, strings, or boolean values is passed.
You must specify the parameters in the following sequence:
- Expression that returns the object of the indexing.
- Expression that returns the index of the element or character.
The function must contain at least 2 parameters.
The function returns the array element with the specified index or the string character with the specified index if the index falls within the range of the array and the passed array is of one of the following types:
- array of integers
- array of floats
- array of strings
- array of booleans
If an array of a different type is passed and the index falls within the range of the array, the data is cast to the string type, and the function returns the string character with the specified index. If an array of a different type is passed and the index is outside the range of the array, the function returns an empty string.
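The documented behavior of "sort_items" (asc default) and "item" (empty string on an out-of-range index) can be modeled with a short Python sketch. This is not KUMA code; treating an out-of-range index on a typed array the same way as on a cast string is an assumption.

```python
# Illustrative Python model of sort_items() and item(); NOT KUMA code.

def sort_items(values, order='asc'):
    """sort_items(): sorted copy of the array; 'asc' is the documented
    default sorting order."""
    return sorted(values, reverse=(order == 'desc'))

def item(values, index):
    """item(): element (or character) at the given index, or an empty
    string when the index falls outside the range."""
    if 0 <= index < len(values):
        return values[index]
    return ''

s = sort_items([3, 1, 2], 'desc')   # [3, 2, 1]
first = item(['a', 'b', 'c'], 0)    # 'a'
miss = item(['a', 'b', 'c'], 9)     # ''
```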
Operations with strings
"to_lower" function
Converts characters in a string to lowercase. Supported for standard fields and extended event schema fields of the "string" type.
A string can be passed as a string, field name or variable.
"to_upper" function
Converts characters in a string to uppercase. Supported for standard fields and extended event schema fields of the "string" type. A string can be passed as a string, field name or variable.
"append" function
Adds characters to the end of a string. Supported for standard fields and extended event schema fields of the "string" type.
You must specify the parameters in the following sequence:
- Original string.
- Added string.
Strings can be passed as a string, field name or variable.
"prepend" function
Adds characters to the beginning of a string. Supported for standard fields and extended event schema fields of the "string" type.
You must specify the parameters in the following sequence:
- Original string.
- Added string.
Strings can be passed as a string, field name or variable.
"substring" function
Returns a substring from a string. Supported for standard fields and extended event schema fields of the "string" type.
You must specify the parameters in the following sequence:
- Original string.
- Substring start position (natural number or 0).
- (Optional) substring end position.
Strings can be passed as a string, field name or variable. If the position number is greater than the original data string length, an empty string is returned.
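The substring rules above can be modeled with a short Python sketch. This is not KUMA code, and whether the end position is inclusive is not restated in this section, so the sketch treats it as exclusive (an assumption).

```python
# Illustrative Python model of substring(); NOT KUMA code. The end
# position is treated as exclusive, which is an assumption.

def substring(s, start, end=None):
    """substring(): part of `s` from `start` to the optional `end`;
    returns '' when the start position exceeds the string length."""
    if start > len(s):
        return ''
    return s[start:end] if end is not None else s[start:]

part = substring('user@example.com', 0, 4)   # 'user'
tail = substring('user@example.com', 5)      # 'example.com'
empty = substring('abc', 10)                 # ''
```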
"index_of" function
The "index_of" function returns the position of the first occurrence of a character or substring in a string; the first character in the string has index 0. If the function does not find the substring, the function returns -922337203685477580.
The function accepts the following parameters:
- As source data, an event field, another variable, or constant.
- Any expression out of those that are available in local variables.
To use this function, you must specify the parameters in the following order:
- Character or substring whose position you want to find.
- String to be searched.
Usage example | Usage result
index_of('@', SourceUserName) | The function looks for the "@" character in the SourceUserName field. The SourceUserName field contains the "user@example.com" string. Result = 4. The function returns the index of the first occurrence of the character in the string. The first character in the string has index 0.
index_of('m', SourceUserName) | The function looks for the "m" character in the SourceUserName field. The SourceUserName field contains the "user@example.com" string. Result = 8. The function returns the index of the first occurrence of the character in the string. The first character in the string has index 0.
"last_index_of" function
The "last_index_of" function returns the position of the last occurrence of a character or substring in a string; the first character in the string has index 0. If the function does not find the substring, the function returns -922337203685477580.
The function accepts the following parameters:
- As source data, an event field, another variable, or constant.
- Any expression out of those that are available in local variables.
To use this function, you must specify the parameters in the following order:
- Character or substring whose position you want to find.
- String to be searched.
Usage example | Usage result
last_index_of('m', SourceUserName) | The function looks for the "m" character in the SourceUserName field. The SourceUserName field contains the "user@example.com" string. Result = 15. The function returns the index of the last occurrence of the character in the string. The first character in the string has index 0.
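Both search functions can be modeled with a short Python sketch built on str.find/str.rfind. This is not KUMA code; NOT_FOUND reproduces the sentinel value quoted in this documentation.

```python
# Illustrative Python model of index_of() / last_index_of(); NOT KUMA
# code. NOT_FOUND is the not-found sentinel quoted in the docs.

NOT_FOUND = -922337203685477580

def index_of(needle, haystack):
    """index_of(): 0-based position of the first occurrence."""
    pos = haystack.find(needle)
    return pos if pos >= 0 else NOT_FOUND

def last_index_of(needle, haystack):
    """last_index_of(): 0-based position of the last occurrence."""
    pos = haystack.rfind(needle)
    return pos if pos >= 0 else NOT_FOUND

a = index_of('@', 'user@example.com')        # 4
b = index_of('m', 'user@example.com')        # 8
c = last_index_of('m', 'user@example.com')   # 15
```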
"tr" function
Removes the specified characters from the beginning and end of a string. Supported for standard fields and extended event schema fields of the "string" type.
You must specify the parameters in the following sequence:
- Original string.
- (Optional) string that should be removed from the beginning and end of the original string.
Strings can be passed as a string, field name or variable. If you do not specify a string to be deleted, spaces will be removed from the beginning and end of the original string.
"replace" function
Replaces all occurrences of character sequence A in a string with character sequence B. Supported for standard fields and extended event schema fields of the "string" type.
You must specify the parameters in the following sequence:
- Original string.
- Search string: sequence of characters to be replaced.
- Replacement string: sequence of characters to replace the search string.
Strings can be passed as an expression.
"regexp_replace" function
Replaces a sequence of characters that match a regular expression with a sequence of characters and regular expression capturing groups. Supported for standard fields and extended event schema fields of the "string" type.
You must specify the parameters in the following sequence:
- Original string.
- Search string: regular expression.
- Replacement string: sequence of characters to replace the search string, and IDs of the regular expression capturing groups. A string can be passed as an expression.
Strings can be passed as a string, field name or variable. Unnamed capturing groups can be used.
In regular expressions used in variable functions, each backslash character must be additionally escaped. For example, ^example\\\\ must be used instead of the regular expression ^example\\.
"regexp_capture" function
Gets the result matching the regular expression condition from the original string. Supported for standard fields and extended event schema fields of the "string" type.
You must specify the parameters in the following sequence:
- Original string.
- Search string: regular expression.
Strings can be passed as a string, field name or variable. Unnamed capturing groups can be used.
In regular expressions used in variable functions, each backslash character must be additionally escaped. For example, ^example\\\\ must be used instead of the regular expression ^example\\.
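The two regular-expression functions can be modeled with a short sketch based on Python's re module. This is not KUMA code: KUMA uses RE2, whose syntax for the constructs shown here matches re, the sample pattern is hypothetical, and Python spells group references as \1 where KUMA's own examples may differ.

```python
# Illustrative Python re-based model of regexp_replace() and
# regexp_capture(); NOT KUMA code. The sample pattern is hypothetical.
import re

def regexp_replace(s, pattern, replacement):
    """regexp_replace(): replace every match of the pattern; \\1, \\2
    in the replacement reference unnamed capturing groups."""
    return re.sub(pattern, replacement, s)

def regexp_capture(s, pattern):
    """regexp_capture(): the text matched by the pattern (the first
    capturing group when one is present), or '' on no match."""
    m = re.search(pattern, s)
    if not m:
        return ''
    return m.group(1) if m.groups() else m.group(0)

swapped = regexp_replace('user@example.com', r'(\w+)@(\S+)', r'\2/\1')
domain = regexp_capture('user@example.com', r'@(\S+)')
```

Note that inside the KUMA variable editor each backslash in such patterns must be additionally escaped, as stated above.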
"template" function
Returns the string specified in the function, with variables replaced with their values. Variables for substitution can be passed in the following ways:
- Inside the string.
- After the string. In this case, inside the string, you must specify variables in the {{index.<n>}} notation, where <n> is the index of the variable passed after the string. The index is 0-based.
Usage examples
template('Very long text with values of rule={{.DeviceCustomString1}} and {{.Name}} event fields, as well as values of {{index.0}} and {{index.1}} local variables and then {{index.2}}', $var1, $var2, $var10)
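Both substitution styles ({{.Field}} for event fields and {{index.<n>}} for variables passed after the string) can be modeled with a short Python sketch. This is not KUMA code; the sample event and placeholder-parsing regex are assumptions.

```python
# Illustrative Python model of the template() function; NOT KUMA code.
# Handles {{.Field}} (event fields) and {{index.<n>}} (0-based
# references to the variables passed after the string).
import re

def template(text, event, *variables):
    def sub(match):
        name = match.group(1)
        if name.startswith('index.'):
            # {{index.<n>}}: substitute the n-th extra argument.
            return str(variables[int(name[len('index.'):])])
        # {{.Field}}: substitute the event field value.
        return str(event.get(name.lstrip('.'), ''))
    return re.sub(r'\{\{([^}]+)\}\}', sub, text)

event = {'Name': 'rule-403'}
line = template('rule={{.Name}} var={{index.0}}', event, 'v1')
```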
Operations with timestamps
"now" function
Gets a timestamp in epoch format. Runs with no arguments.
Usage example
now
"extract_from_timestamp" function
Gets atomic time representations (year, month, day, hour, minute, second, day of the week) from fields and variables with time in the epoch format.
The parameters must be specified in the following sequence:
- Event field of the timestamp type, or variable.
- Notation of the atomic time representation. This parameter is case sensitive.
Possible variants of atomic time notation:
- y refers to the year in number format.
- M refers to the month in number notation.
- d refers to the number of the month.
- wd refers to the day of the week: Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday.
- h refers to the hour in 24-hour format.
- m refers to the minutes.
- s refers to the seconds.
- (optional) Time zone notation. If this parameter is not specified, the time is calculated in UTC format.
Usage examples
extract_from_timestamp(Timestamp, 'wd')
extract_from_timestamp(Timestamp, 'h')
extract_from_timestamp($otherVariable, 'h')
extract_from_timestamp(Timestamp, 'h', 'Europe/Moscow')
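The atomic-time notations listed above can be modeled with a short Python sketch. This is not KUMA code: timestamps are assumed to be epoch milliseconds (matching the truncate_timestamp examples later in this section), and the optional timezone parameter is omitted for brevity.

```python
# Illustrative Python model of extract_from_timestamp(); NOT KUMA
# code. Timestamps are epoch milliseconds; UTC only for brevity.
from datetime import datetime, timezone

def extract_from_timestamp(ts_ms, unit):
    """Return the requested atomic part of the timestamp in UTC."""
    dt = datetime.fromtimestamp(ts_ms / 1000, tz=timezone.utc)
    return {
        'y': dt.year, 'M': dt.month, 'd': dt.day,
        'wd': dt.strftime('%A'),     # day of the week by name
        'h': dt.hour, 'm': dt.minute, 's': dt.second,
    }[unit]

# 1654631774175 ms = 7 June 2022, 19:56:14 UTC (a Tuesday)
hour = extract_from_timestamp(1654631774175, 'h')   # 19
day = extract_from_timestamp(1654631774175, 'wd')   # 'Tuesday'
```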
"parse_timestamp" function
Converts the time from RFC3339 format (for example, "2022-05-24 00:00:00", "2022-05-24 00:00:00+0300") to epoch format.
"format_timestamp" function
Converts the time from epoch format to RFC3339 format.
The parameters must be specified in the following sequence:
- Event field of the timestamp type, or variable.
- Time format notation: RFC3339.
- (optional) Time zone notation. If this parameter is not specified, the time is calculated in UTC format.
Usage examples
format_timestamp(Timestamp, 'RFC3339')
format_timestamp($otherVariable, 'RFC3339')
format_timestamp(Timestamp, 'RFC3339', 'Europe/Moscow')
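The round trip between the two formats can be modeled with a short Python sketch. This is not KUMA code: epoch values are modeled in milliseconds, UTC is assumed, and the optional timezone parameter is omitted.

```python
# Illustrative Python model of parse_timestamp() and
# format_timestamp(); NOT KUMA code. Epoch values in milliseconds.
from datetime import datetime, timezone

def parse_timestamp(text):
    """RFC3339-style 'YYYY-MM-DD HH:MM:SS' (UTC assumed) -> epoch ms."""
    dt = datetime.strptime(text, '%Y-%m-%d %H:%M:%S')
    return int(dt.replace(tzinfo=timezone.utc).timestamp() * 1000)

def format_timestamp(ts_ms):
    """Epoch ms -> RFC3339 string in UTC."""
    dt = datetime.fromtimestamp(ts_ms / 1000, tz=timezone.utc)
    return dt.strftime('%Y-%m-%dT%H:%M:%SZ')

ms = parse_timestamp('2022-05-24 00:00:00')
text = format_timestamp(ms)   # '2022-05-24T00:00:00Z'
```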
"truncate_timestamp" function
Rounds the time in epoch format. After rounding, the time is returned in epoch format. Time is rounded down.
The parameters must be specified in the following sequence:
- Event field of the timestamp type, or variable.
- Rounding parameter:
- 1s rounds to the nearest second.
- 1m rounds to the nearest minute.
- 1h rounds to the nearest hour.
- 24h rounds to the nearest day.
- (optional) Time zone notation. If this parameter is not specified, the time is calculated in UTC format.
Usage examples
Examples of rounded values
Usage result
truncate_timestamp(Timestamp, '1m')
1654631774175 (7 June 2022, 19:56:14.175)
1654631760000 (7 June 2022, 19:56:00)
truncate_timestamp($otherVariable, '1h')
1654631774175 (7 June 2022, 19:56:14.175)
1654628400000 (7 June 2022, 19:00:00)
truncate_timestamp(Timestamp, '24h', 'Europe/Moscow')
1654631774175 (7 June 2022, 19:56:14.175)
1654560000000 (7 June 2022, 0:00:00)
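The rounding shown in the table reduces to integer floor division on the epoch-millisecond value. A minimal Python sketch of this (not KUMA code; the optional timezone parameter is omitted, so the 24h case matches UTC only):

```python
# Illustrative Python model of truncate_timestamp(); NOT KUMA code.
# Timestamps are epoch milliseconds; rounding is always downward.

STEP_MS = {'1s': 1_000, '1m': 60_000, '1h': 3_600_000, '24h': 86_400_000}

def truncate_timestamp(ts_ms, step):
    """Round the timestamp down to the given step."""
    step_ms = STEP_MS[step]
    return ts_ms // step_ms * step_ms

minute = truncate_timestamp(1654631774175, '1m')   # 1654631760000
hour = truncate_timestamp(1654631774175, '1h')     # 1654628400000
```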
"time_diff" function
Gets the time interval between two timestamps in epoch format.
The parameters must be specified in the following sequence:
- Interval end time. Event field of the timestamp type, or variable.
- Interval start time. Event field of the timestamp type, or variable.
- Time interval notation:
- ms refers to milliseconds.
- s refers to seconds.
- m refers to minutes.
- h refers to hours.
- d refers to days.
Usage examples
time_diff(EndTime, StartTime, 's')
time_diff($otherVariable, Timestamp, 'h')
time_diff(Timestamp, DeviceReceiptTime, 'd')
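The interval calculation can be modeled with a short Python sketch. This is not KUMA code: timestamps are assumed to be epoch milliseconds, and truncating the result toward zero is an assumption about KUMA's rounding.

```python
# Illustrative Python model of time_diff(); NOT KUMA code. Timestamps
# are epoch milliseconds; the result is truncated (an assumption).

UNIT_MS = {'ms': 1, 's': 1_000, 'm': 60_000, 'h': 3_600_000,
           'd': 86_400_000}

def time_diff(end_ms, start_ms, unit):
    """Interval between two timestamps, expressed in the given unit."""
    return int((end_ms - start_ms) / UNIT_MS[unit])

secs = time_diff(1654631774175, 1654631760000, 's')   # 14
```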
Mathematical operations
These consist of basic mathematical operations and functions.
Basic mathematical operations
Supported for integer and float fields of the extended event schema.
Operations:
- Addition
- Subtraction
- Multiplication
- Division
- Modulo division
Parentheses determine the sequence of actions.
Available arguments:
- Numeric event fields
- Numeric variables
- Real numbers
When modulo dividing, only natural numbers can be used as arguments.
Usage constraints:
- Division by zero returns zero.
- Mathematical operations on a number and a string return the number unchanged. For example, 1 + abc returns 1.
- Integers resulting from operations are returned without a dot.
Usage examples (Type=3; otherVariable=2; Message=text) | Usage result
Type + 1 | 4
$otherVariable - Type | -1
2 * 2.5 | 5
2 / 0 | 0
Type * Message | 0
(Type + 2) * 2 | 10
Type % $otherVariable | 1
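The two edge-case constraints (division by zero returns zero; a number combined with a string stays unchanged) can be modeled with a short Python sketch. This is not KUMA code; the helper names are hypothetical.

```python
# Illustrative Python model of KUMA's basic-math edge rules; NOT KUMA
# code. Reproduces the documented constraints only.

def kuma_add(a, b):
    """Addition where a number + string returns the number unchanged."""
    if isinstance(a, str):
        a, b = b, a
    if isinstance(b, str):
        return a            # number + string -> the number unchanged
    return a + b

def kuma_div(a, b):
    """Division where dividing by zero returns zero."""
    if b == 0:
        return 0
    return a / b

x = kuma_add(1, 'abc')      # 1
y = kuma_div(2, 0)          # 0
z = kuma_div(5, 2)          # 2.5
```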
"round" function
Rounds numbers. Supported for integer and float fields of the extended event schema.
Available arguments:
- Numeric event fields
- Numeric variables
- Numeric constants
Usage examples (DeviceCustomFloatingPoint1=7.75; DeviceCustomFloatingPoint2=7.5; otherVariable=7.2) | Usage result
round(DeviceCustomFloatingPoint1) | 8
round(DeviceCustomFloatingPoint2) | 8
round($otherVariable) | 7
"ceil" function
Rounds up numbers. Supported for integer and float fields of the extended event schema.
Available arguments:
- Numeric event fields
- Numeric variables
- Numeric constants
Usage examples (DeviceCustomFloatingPoint1=7.15; otherVariable=8.2) | Usage result
ceil(DeviceCustomFloatingPoint1) | 8
ceil($otherVariable) | 9
"floor" function
Rounds down numbers. Supported for integer and float fields of the extended event schema.
Available arguments:
- Numeric event fields
- Numeric variables
- Numeric constants
Usage examples (DeviceCustomFloatingPoint1=7.15; otherVariable=8.2) | Usage result
floor(DeviceCustomFloatingPoint1) | 7
floor($otherVariable) | 8
"abs" function
Gets the modulus of a number. Supported for integer and float fields of the extended event schema.
Available arguments:
- Numeric event fields
- Numeric variables
- Numeric constants
Usage examples (DeviceCustomNumber1=-7; otherVariable=-2) | Usage result
abs(DeviceCustomNumber1) | 7
abs($otherVariable) | 2
"pow" function
Exponentiates a number. Supported for integer and float fields of the extended event schema.
The parameters must be specified in the following sequence:
- Base — real numbers.
- Power — natural numbers.
Available arguments:
- Numeric event fields
- Numeric variables
- Numeric constants
Usage examples
pow(DeviceCustomNumber1, DeviceCustomNumber2)
pow($otherVariable, DeviceCustomNumber1)
"str_join" function
Joins multiple strings into one using a separator. Supported for integer and float fields of the extended event schema.
The parameters must be specified in the following sequence:
- Separator. String.
- String1, string2, stringN. At least 2 expressions.
Usage example | Usage result
str_join('|', to_lower(Name), to_upper(Name), Name) | String.
"conditional" function
Gets one value if a condition is met and another value if the condition is not met. Supported for integer and float fields of the extended event schema.
The parameters must be specified in the following sequence:
- Condition. String. The syntax is similar to the conditions of the Where statement in SQL. You can use the functions of the KUMA variables and references to other variables in a condition.
- The value if the condition is met. Expression.
- The value if the condition is not met. Expression.
Supported operators:
- AND
- OR
- NOT
- =
- !=
- <
- <=
- >
- >=
- LIKE (RE2 regular expression is used, rather than an SQL expression)
- ILIKE (RE2 regular expression is used, rather than an SQL expression)
- BETWEEN
- IN
- IS NULL (check for an empty value, such as 0 or an empty string)
Usage examples (the value depends on arguments 2 and 3)
conditional('SourceUserName = \\'root\\' AND DestinationUserName = SourceUserName', 'match', 'no match')
conditional(`DestinationUserName ILIKE 'svc_.*'`, 'match', 'no match')
conditional(`DestinationUserName NOT LIKE 'svc_.*'`, 'match', 'no match')
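The branching behavior can be modeled with a short Python sketch. This is not KUMA code: in KUMA the condition is an SQL-like string evaluated by the correlator, while here it is modeled as an already-evaluated boolean, and LIKE is sketched with the re module (KUMA uses RE2 regular expressions for LIKE/ILIKE, as noted above).

```python
# Illustrative Python model of the conditional() function; NOT KUMA
# code. The condition is pre-evaluated; like() sketches LIKE/ILIKE.
import re

def like(value, pattern, ignore_case=False):
    """Model of LIKE / ILIKE with a regular-expression pattern."""
    flags = re.IGNORECASE if ignore_case else 0
    return re.search(pattern, value, flags) is not None

def conditional(condition, if_true, if_false):
    """conditional(): one value when the condition holds, the other
    when it does not."""
    return if_true if condition else if_false

r1 = conditional(like('svc_backup', 'svc_.*'), 'match', 'no match')
r2 = conditional('root' == 'admin', 'match', 'no match')
```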
Operations for extended event schema fields
For extended event schema fields of the "string" type, the following kinds of operations are supported:
- "len" function
- "to_lower" function
- "to_upper" function
- "append" function
- "prepend" function
- "substring" function
- "tr" function
- "replace" function
- "regexp_replace" function
- "regexp_capture" function
For extended event schema fields of the integer or float type, the following kinds of operations are supported:
- Basic mathematical operations:
- "round" function
- "ceil" function
- "floor" function
- "abs" function
- "pow" function
- "str_join" function
- "conditional" function
For extended event schema fields of the "array of integers", "array of floats", and "array of strings" types, KUMA supports the following functions:
- Get the i-th element of the array. Example: item(<type>.someStringArray, i).
- Get an array of values. Example: <type>.someStringArray. Returns ["string1", "string2", "string3"].
- Get the count of elements in an array. Example: len(<type>.someStringArray). Returns 3.
- Get unique elements from an array. Example: distinct_items(<type>.someStringArray).
- Generate a TSV string of array elements. Example: to_string(<type>.someStringArray).
- Sort the elements of the array. Example: sort_items(<type>.someStringArray).
In the examples, instead of <type>, you must specify the array type: NA for an array of integers, FA for an array of floats, SA for an array of strings.
For fields of the "array of integers" and "array of floats" types, the following functions are supported:
- math_min — returns the minimum element of an array. Example: math_min(NA.NumberArray), math_min(FA.FloatArray)
- math_max — returns the maximum element of an array. Example: math_max(NA.NumberArray), math_max(FA.FloatArray)
- math_avg — returns the average value of an array. Example: math_avg(NA.NumberArray), math_avg(FA.FloatArray)
Declaring variables
To declare variables, they must be added to a correlator or correlation rule.
To add a global variable to an existing correlator:
- In the KUMA Console, under Resources → Correlators, select the resource set of the relevant correlator.
The Correlator Installation Wizard opens.
- Select the Global variables step of the Installation Wizard.
- Click the Add variable button and specify the following parameters:
- In the Variable window, enter the name of the variable.
- In the Value window, enter the variable function.
When entering functions, you can use autocomplete as a list of hints with possible function names, their brief description and usage examples. You can select a function from the list and insert it together with its list of arguments into the input field.
To display the list of all hints in the field, press Ctrl+Space. Press Enter to select a function from the list. Press Tab to go to the next argument in the list of arguments of the selected function.
Multiple variables can be added. Added variables can be edited or deleted by using the corresponding icons.
- Select the Setup validation step of the Installation Wizard and click Save.
A global variable is added to the correlator. It can be queried like an event field by inserting the $ character in front of the variable name. The variable will be used for correlation after restarting the correlator service.
To add a local variable to an existing correlation rule:
- In the KUMA Console, under Resources → Correlation rules, select the relevant correlation rule.
The correlation rule settings window opens. The parameters of a correlation rule can also be opened from the correlator to which it was added by proceeding to the Correlation step of the Installation Wizard.
- Click the Selectors tab.
- In the selector, open the Local variables tab, click the Add variable button and specify the following parameters:
- In the Variable window, enter the name of the variable.
- In the Value window, enter the variable function.
When entering functions, you can use autocomplete as a list of hints with possible function names, their brief description and usage examples. You can select a function from the list and insert it together with its list of arguments into the input field.
To display the list of all hints in the field, press Ctrl+Space. Press Enter to select a function from the list. Press Tab to go to the next argument in the list of arguments of the selected function.
Multiple variables can be added. Added variables can be edited or deleted by using the icon.
For standard correlation rules, repeat this step for each selector in which you want to declare variables.
- Click Save.
The local variable is added to the correlation rule. It can be queried like an event field by inserting the $ character in front of the variable name. The variable will be used for correlation after restarting the correlator service.
Added variables can be edited or deleted. If the correlation rule queries an undeclared variable (for example, if its name has been changed), an empty string is returned.
If you change the name of a variable, you will need to manually change the name of this variable in all correlation rules where you have used it.
Adding a temporary exclusion list for a correlation rule
Users who do not have the right to edit correlation rules in the KUMA Console can create a temporary list of exclusions (for example, exclusions for false positives when managing alerts). A user with the right to edit correlation rules can then add the exclusions to the rule and remove them from the temporary list.
To add exclusions to a correlation rule when managing alerts:
- Go to the Alerts section and select an alert.
- Click the Find in events button.
Events of the alert are displayed on the events page.
- Open the correlation event.
This opens the event card, in which each field has an (arrow) button that lets you add an exclusion.
- Click the (arrow) button and select Add to exclusions.
A sidebar is displayed, containing the following fields: Correlation rule, Exclusion, Alert, Comment.
- Click the Create button.
The exclusion rule is added.
The exclusion is added to the temporary list. This list is available to anyone with rights to read correlation rules: in the Resources → Correlation rules section, in the toolbar of the rule list, click the List of exclusions button. If you want to view the exclusions of a specific rule, open the card of the rule and select the Exclusions tab.
The exclusion list contains entries with the following parameters:
Exclusion
Exclusion condition.
Correlation rule
Name of the correlation rule.
Alert
Name of the alert from which the exclusion was added.
Tenant
The tenant to which the rule and the exclusion apply.
Condition
Generated automatically based on the selected field of the correlation event.
Creation date
Date and time when the exclusion was added.
Expires
Date and time when the exclusion will be automatically removed from the list.
Created
Name of the user that added the exclusion.
Comment
After the exclusion is added, by default, the correlation rule takes the exclusion into account for 7 days. In the Options → General section, you can configure the duration of exclusions by editing the corr_rule_exclusion_ttl_hours
parameter in the Core properties section. You can configure the lifetime of exclusions in hours and days. The minimum value is 1 hour, the maximum is 365 days. This setting is available only for users with the General administrator role.
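The stated bounds of the corr_rule_exclusion_ttl_hours parameter can be sketched as a hypothetical validation helper (an illustration, not part of KUMA):

```python
# Hypothetical check of a corr_rule_exclusion_ttl_hours value against
# the documented bounds: minimum 1 hour, maximum 365 days.
MIN_TTL_HOURS = 1
MAX_TTL_HOURS = 365 * 24   # 365 days expressed in hours
DEFAULT_TTL_HOURS = 7 * 24  # exclusions are honored for 7 days by default

def is_valid_exclusion_ttl(hours: int) -> bool:
    return MIN_TTL_HOURS <= hours <= MAX_TTL_HOURS
```
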
For fields from base events to be propagated to correlation events, these fields must be specified in the card of the correlation rule on the General tab, in the Propagated fields field. If the fields of base events are not mapped to the correlation event, these fields cannot be added to exclusions.
To remove exclusions from a correlation rule:
- Go to the Resources → Correlation rules section.
- In the toolbar of the rule list, click the List of exclusions button.
This opens the window with the list of exclusions.
- Select the exclusions that you want to delete and click the Delete button.
Exclusions are deleted from the correlation rule.
KUMA generates an audit event whenever an exclusion is created or deleted. You can view the changes in the Event details window.
Predefined correlation rules
The KUMA distribution kit includes correlation rules listed in the table below.
Predefined correlation rules
Correlation rule name |
Description |
[OOTB] KATA alert |
Used for enriching KATA events. |
[OOTB] Successful Bruteforce |
Triggers when a successful authentication attempt is detected after multiple unsuccessful authentication attempts. This rule works based on the events of the sshd daemon. |
[OOTB][AD] Account created and deleted within a short period of time |
Detects instances of creation and subsequent deletion of accounts on Microsoft Windows hosts. |
[OOTB][AD] An account failed to log on from different hosts |
Detects multiple unsuccessful attempts to authenticate on different hosts. |
[OOTB][AD] Membership of sensitive group was modified |
Works based on Microsoft Windows events. |
[OOTB][AD] Multiple accounts failed to log on from the same host |
Triggers after multiple failed authentication attempts are detected on the same host from different accounts. |
[OOTB][AD] Successful authentication with the same account on multiple hosts |
Detects connections to different hosts under the same account. This rule works based on Microsoft Windows events. |
[OOTB][AD] The account added and deleted from the group in a short period of time |
Detects the addition of a user to a group and subsequent removal. This rule works based on Microsoft Windows events. |
[OOTB][Net] Possible port scan |
Detects suspected port scans. This rule works based on Netflow and IPFIX events. |
MITRE ATT&CK matrix coverage
If you want to assess the coverage of the MITRE ATT&CK matrix by your correlation rules:
- Download the list of MITRE techniques from the official MITRE ATT&CK repository and import it into KUMA.
- Map MITRE techniques to correlation rules.
- Export correlation rules to MITRE ATT&CK Navigator.
As a result, you can visually assess the coverage of the MITRE ATT&CK matrix.
Importing the list of MITRE techniques
Only a user with the General Administrator role can import the list of MITRE techniques.
To import the list of MITRE ATT&CK techniques:
- Download the list of MITRE ATT&CK techniques from the GitHub portal.
KUMA 3.2 supports only the MITRE ATT&CK technique list version 14.1.
- In the KUMA Console, go to the Settings → Other section.
- In the MITRE technique list settings, click Import from file.
This opens the file selection window.
- Select the downloaded MITRE ATT&CK technique list and click Open.
This closes the file selection window.
The list of MITRE ATT&CK techniques is imported into KUMA. You can see the list of imported techniques and the version of the MITRE ATT&CK technique list by clicking View list.
Mapping MITRE techniques to correlation rules
To map MITRE ATT&CK techniques to correlation rules:
- In the KUMA Console, go to the Resources → Correlation rules section.
- Click the name of the correlation rule.
This opens the correlation rule editing window.
- On the General tab, click the MITRE techniques field. This opens a list of available techniques. To make searching easier, you can filter the list by the name of a technique or by the ID of a technique or tactic. You can link one or more MITRE ATT&CK techniques to a correlation rule.
- Click the Save button.
The MITRE ATT&CK techniques are mapped to the correlation rule. In the web interface, in the Resources → Correlation rules section, the MITRE techniques column of the edited rule displays the ID of the selected technique, and when you hover over the item, the full name of the technique is displayed, including the ID of the technique and tactic.
Exporting correlation rules to MITRE ATT&CK Navigator
To export correlation rules with mapped MITRE techniques to MITRE ATT&CK Navigator:
- In the KUMA Console, go to the Resources → Correlation rules section.
- Click the button in the upper-right corner.
- In the drop-down list, click Export to MITRE ATT&CK Navigator.
- In the window that opens, select the correlation rules that you want to export.
- Click OK.
A file with exported rules is downloaded to your computer.
- Upload the file from your computer to MITRE ATT&CK Navigator.
You can now visually assess the coverage of the MITRE ATT&CK matrix.
Filters
Filters let you select events based on specified conditions. The collector service uses filters to select events that you want to send to KUMA. Events that satisfy the filter conditions are sent to KUMA for further processing.
You can use filters in the following KUMA services and features:
- Collector.
- Correlator.
- Storage.
- Correlation rules.
- Enrichment rules.
- Aggregation rules.
- Destinations.
- Response rules.
- Segmentation rules.
You can use standalone filters or built-in filters that are stored in the service or resource in which they were created. In resource input fields, except the Description field, you can enable the display of control characters. Available filter settings are listed in the table below.
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. Inline filters are created in other resources or services and do not have names. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
Tags |
Tags for resource search. Optional setting. |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
You can create filter conditions and filter groups, or add existing filters to a filter.
To create filtering criteria, you can use builder mode or source code mode. In builder mode, you can create or edit filter criteria by selecting filter conditions and operators from drop-down lists. In source code mode, you can use text commands to create and edit search queries. The builder mode is used by default.
You can freely switch between modes when creating filtering criteria. To switch to source code mode, select the Code tab. When switching between modes, the created condition filters are preserved. If the filter code is not displayed on the Code tab after linking the created filter to the resource, go to the Builder tab and then go back to the Code tab to display the filter code.
Creating filtering criteria in builder mode
To create filtering criteria in builder mode, you need to select one of the following operators from the drop-down list:
- AND: The filter selects events that match all of the specified conditions.
- OR: The filter selects events that match one of the specified conditions.
- NOT: The filter selects events that match none of the specified conditions.
You can add filtering criteria in one of the following ways:
- To add a condition, click the + Add condition button.
- To add a group of conditions, click the + Add group button. When adding groups of conditions, you can also select the AND, OR, and NOT operators. In turn, you can add conditions and condition groups to a condition group.
You can add multiple filtering criteria, reorder the filtering criteria, or remove filtering criteria. To reorder filtering criteria, use the reorder icons. To remove a filtering criterion, click the delete
icon next to it.
Available condition settings are listed in the table below.
Setting |
Description |
---|---|
<Condition type> |
Condition type. The default is If. You can click the default value and select If not from the displayed drop-down list. Required setting. |
<Left operand> and <Right operand> |
Values to be processed by the operator. The available types of values of the right operand depend on the selected operator. Required settings. |
<Operator> |
Condition operator. When selecting a condition operator in the drop-down list, you can select the do not match case check box if you want the operator to ignore the case of values. This check box is ignored if the inSubnet, inActiveList, inCategory, InActiveDirectoryGroup, hasBit, and inDictionary operators are selected. By default, this check box is cleared. You can change or delete the specified operator. To change the operator, click it and specify a new operator. To delete the operator, click it, then press Backspace. |
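To make the builder semantics concrete, here is a minimal Python sketch of how nested AND, OR, and NOT condition groups and the "do not match case" option could be evaluated against an event. This is an illustration only; the data structure and field names are assumptions, not KUMA internals:

```python
def eval_condition(event, field, op, value, ignore_case=False):
    # Evaluate a single condition of the form <left operand> <operator> <right operand>.
    left, right = str(event.get(field, "")), str(value)
    if ignore_case:  # the "do not match case" check box
        left, right = left.lower(), right.lower()
    if op == "=":
        return left == right
    if op == "contains":
        return right in left
    if op == "startsWith":
        return left.startswith(right)
    if op == "endsWith":
        return left.endswith(right)
    raise ValueError(f"unsupported operator: {op}")

def eval_group(event, group):
    # group = {"op": "AND" | "OR" | "NOT", "items": [condition or nested group, ...]}
    results = [
        eval_group(event, item) if "items" in item
        else eval_condition(event, item["field"], item["op"],
                            item["value"], item.get("ignore_case", False))
        for item in group["items"]
    ]
    if group["op"] == "AND":
        return all(results)
    if group["op"] == "OR":
        return any(results)
    if group["op"] == "NOT":
        return not any(results)
    raise ValueError(f"unsupported group operator: {group['op']}")
```

For example, an AND group containing a condition and a nested NOT group matches an event only when the condition holds and nothing in the NOT group matches.
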
The available operand kinds depend on whether the operand is left (L) or right (R).
Available operand kinds for left (L) and right (R) operands
Operator |
Event field type |
Active list type |
Dictionary type |
Context table type |
Table type |
TI type |
Constant type |
List type |
= |
L,R |
L,R |
L,R |
L,R |
L,R |
L,R |
R |
R |
> |
L,R |
L,R |
L,R |
L,R (only when looking up a table value by index) |
L,R |
L |
R |
|
>= |
L,R |
L,R |
L,R |
L,R (only when looking up a table value by index) |
L,R |
L |
R |
|
< |
L,R |
L,R |
L,R |
L,R (only when looking up a table value by index) |
L,R |
L |
R |
|
<= |
L,R |
L,R |
L,R |
L,R (only when looking up a table value by index) |
L,R |
L |
R |
|
inSubnet |
L,R |
L,R |
L,R |
L,R |
L,R |
L,R |
R |
R |
contains |
L,R |
L,R |
L,R |
L,R |
L,R |
L,R |
R |
R |
startsWith |
L,R |
L,R |
L,R |
L,R |
L,R |
L,R |
R |
R |
endsWith |
L,R |
L,R |
L,R |
L,R |
L,R |
L,R |
R |
R |
match |
L |
L |
L |
L |
L |
L |
R |
R |
hasVulnerability |
L |
L |
L |
L |
L |
|||
hasBit |
L |
L |
L |
L |
L |
R |
R |
|
inActiveList |
||||||||
inDictionary |
||||||||
inCategory |
L |
L |
L |
L |
L |
R |
R |
|
inContextTable |
||||||||
inActiveDirectoryGroup |
L |
L |
L |
L |
L |
R |
R |
|
TIDetect |
You can use hotkeys when managing filters. Hotkeys are described in the table below.
Hotkeys and their functions
Key |
Function |
---|---|
e |
Invokes a filter by the event field |
d |
Invokes a filter by the dictionary field |
a |
Invokes a filter by the active list field |
c |
Invokes a filter by the context table field |
t |
Invokes a filter by the table field |
f |
Invokes a filter |
t+i |
Invokes a filter using TI |
Ctrl+Enter |
Finish editing a condition |
The usage of extended event schema fields of the "String", "Number", or "Float" types is the same as the usage of fields of the KUMA event schema.
When using filters with extended event schema fields of the "Array of strings", "Array of numbers", and "Array of floats" types, you can use the following operations:
- The contains operation returns True if the specified substring is present in the array; otherwise, it returns False.
- The match operation matches the string against a regular expression.
- The intersec operation.
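These array operations can be sketched in Python (an illustration, not KUMA code; whether contains checks substrings of elements and whether intersec returns common elements are assumptions based on the descriptions above):

```python
import re

def array_contains(arr, substring):
    # True if the specified substring occurs in any element of the array
    return any(substring in item for item in arr)

def array_match(arr, pattern):
    # True if any element of the array matches the regular expression
    return any(re.search(pattern, item) for item in arr)

def array_intersec(arr, other):
    # Assumed semantics of "intersec": elements common to both arrays
    return [item for item in arr if item in other]
```
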
Creating filtering criteria in source code mode
The source code mode allows you to quickly edit conditions, select and copy blocks of code. In the right part of the builder, you can find the navigator, which lets you navigate the filter code. Line wrapping is performed automatically at AND, OR, NOT logical operators, or at commas that separate the items in the list of values.
Names of resources used in the filter are automatically specified. Fields containing the names of linked resources cannot be edited. The names of shared resource categories are not displayed in the filter if you do not have the "Access to shared resources" role. To view the list of resources for the selected operand inside the expression, press Ctrl+Space. This displays a list of resources.
The filters listed in the table below are included in the KUMA kit.
Predefined filters
Filter name |
Description |
[OOTB][AD] A member was added to a security-enabled global group (4728) |
Selects events of adding a user to an Active Directory security-enabled global group. |
[OOTB][AD] A member was added to a security-enabled universal group (4756) |
Selects events of adding a user to an Active Directory security-enabled universal group. |
[OOTB][AD] A member was removed from a security-enabled global group (4729) |
Selects events of removing a user from an Active Directory security-enabled global group. |
[OOTB][AD] A member was removed from a security-enabled universal group (4757) |
Selects events of removing a user from an Active Directory security-enabled universal group. |
[OOTB][AD] Account Created |
Selects Windows user account creation events. |
[OOTB][AD] Account Deleted |
Selects Windows user account deletion events. |
[OOTB][AD] An account failed to log on (4625) |
Selects Windows logon failure events. |
[OOTB][AD] Successful Kerberos authentication (4624, 4768, 4769, 4770) |
Selects successful Windows logon events and events with IDs 4769, 4770 that are logged on domain controllers. |
[OOTB][AD][Technical] 4768. TGT Requested |
Selects Microsoft Windows events with ID 4768. |
[OOTB][Net] Possible port scan |
Selects events that may indicate a port scan. |
[OOTB][SSH] Accepted Password |
Selects events of successful SSH connections with a password. |
[OOTB][SSH] Failed Password |
Selects attempts to connect over SSH with a password. |
Active lists
An active list is a container for data that KUMA correlators use when analyzing events according to correlation rules.
For example, for a list of IP addresses with a bad reputation, you can:
- Create a correlation rule of the operational type and add these IP addresses to the active list.
- Create a correlation rule of the standard type and specify the active list as filtering criteria.
- Create a correlator with this rule.
In this case, KUMA selects all events that contain the IP addresses in the active list and creates a correlation event.
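The workflow above can be sketched as follows (a hypothetical Python illustration of a correlator consulting an active list as a filtering criterion; the IP addresses and field name are examples, not KUMA code):

```python
# Hypothetical illustration: an active list of bad-reputation IP addresses
# used as a filtering criterion when selecting events for correlation.
bad_reputation_ips = {"203.0.113.5", "198.51.100.7"}  # the "active list"

def select_for_correlation(events):
    # Keep only events whose source address is in the active list
    return [e for e in events if e.get("SourceAddress") in bad_reputation_ips]
```
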
You can fill active lists automatically using correlation rules of the simple type or import a file that contains data for the active list.
You can add, copy, or delete active lists.
Active lists can be used in the following KUMA services and features:
The same active list can be used by different correlators. However, a separate entity of the active list is created for each correlator. Therefore, the contents of the active lists used by different correlators differ even if the active lists have the same names and IDs.
Only data generated by the correlation rules of that correlator is added to its active list.
You can add, edit, duplicate, delete, and export records in the correlator's active list.
During the correlation process, when entries are deleted from active lists after their lifetime expires, service events are generated in the correlators. These events only exist in the correlators, and they are not redirected to other destinations. Correlation rules can be configured to track these events so that they can be processed and used to identify threats. Service event fields for deleting an entry from the active list are described below.
Event field |
Value or comment |
|
Event identifier |
|
Time when the expired entry was deleted |
|
|
|
|
|
|
|
Correlator ID |
|
Correlator name |
|
Active list ID |
|
Key of the expired entry |
|
Number of deleted entry updates increased by one |
S.<active list field> |
Dropped-out entry of the active list in the following format: S.<active list field> = <value of active list field> |
Viewing the table of active lists
To view the table of correlator active lists:
- In the KUMA Console, select the Resources section.
- In the Services section, click the Active services button.
- Select the check box next to the correlator for which you want to view the active list.
- Click the Go to active lists button.
The Correlator active lists table is displayed.
The table contains the following data:
- Name—the name of the correlator list.
- Records—the number of records the active list contains.
- Size on disk—the size of the active list.
- Directory—the path to the active list on the KUMA Core server.
Adding active list
To add an active list:
- In the KUMA Console, select the Resources section.
- In the Resources section, click the Active lists button.
- Click the Add active list button.
- Do the following:
- In the Name field, enter a name for the active list.
- In the Tenant drop-down list, select the tenant that owns the resource.
- In the TTL field, specify how long a record added to the active list is stored in it.
When the specified time expires, the record is deleted. The time is specified in seconds.
The default value is 0. If the value of the field is 0, the record is retained for 36,000 days (roughly 100 years).
- In the Description field, provide any additional information.
You can use up to 4,000 Unicode characters.
This field is optional.
- Click the Save button.
The active list is added.
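The TTL behavior described above can be sketched as follows (an illustration only; a TTL of 0 is treated as 36,000 days, per the documentation):

```python
from datetime import datetime, timedelta

ZERO_TTL_DAYS = 36_000  # a TTL of 0 keeps the record for roughly 100 years

def record_expiration(created_at: datetime, ttl_seconds: int) -> datetime:
    # Compute when a record added to an active list will be deleted.
    if ttl_seconds == 0:
        return created_at + timedelta(days=ZERO_TTL_DAYS)
    return created_at + timedelta(seconds=ttl_seconds)
```
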
Viewing the settings of an active list
To view the settings of an active list:
- In the KUMA Console, select the Resources section.
- In the Resources section, click the Active lists button.
- In the Name column, select the active list whose settings you want to view.
This opens the active list settings window. It displays the following information:
- ID—identifier of the selected active list.
- Name—unique name of the resource.
- Tenant—the name of the tenant that owns the resource.
- TTL—the time during which a record added to the active list is stored in it. This value is specified in seconds.
- Description—any additional information about the resource.
Changing the settings of an active list
To change the settings of an active list:
- In the KUMA Console, select the Resources section.
- In the Resources section, click the Active lists button.
- In the Name column, select the active list whose settings you want to change.
- Specify the values of the following parameters:
- Name—unique name of the resource.
- TTL—the time during which a record added to the active list is stored in it. This value is specified in seconds.
If the field is set to 0, the record is stored indefinitely.
- Description—any additional information about the resource.
The ID and Tenant fields are not editable.
Duplicating the settings of an active list
To copy an active list:
- In the KUMA Console, select the Resources section.
- In the Resources section, click the Active lists button.
- Select the check box next to the active lists you want to copy.
- Click Duplicate.
- Specify the necessary settings.
- Click the Save button.
The active list is copied.
Deleting an active list
To delete an active list:
- In the KUMA Console, select the Resources section.
- In the Resources section, click the Active lists button.
- Select the check boxes next to the active lists you want to delete.
To delete all lists, select the check box next to the Name column.
At least one check box must be selected.
- Click the Delete button.
- Click OK.
The active lists are deleted.
Viewing records in the active list
To view the records in the active list:
- In the KUMA Console, select the Resources section.
- In the Services section, click the Active services button.
- Select the check box next to the correlator for which you want to view the active list.
- Click the Go to active lists button.
The Correlator active lists table is displayed.
- In the Name column, select the desired active list.
A table of records for the selected list is opened.
The table contains the following data:
- Key – the value of the record key.
- Record repetitions – total number of times the record was mentioned in events and identical records were downloaded when importing active lists to KUMA.
- Expiration date – date and time when the record must be deleted.
If the TTL field had the value of 0 when the active list was created, the records of this active list are retained for 36,000 days (roughly 100 years).
- Created – the time when the active list was created.
- Updated – the time when the active list was last updated.
Searching for records in the active list
To find a record in the active list:
- In the KUMA Console, select the Resources section.
- In the Services section, click the Active services button.
- Select the check box next to the correlator for which you want to view the active list.
- Click the Go to active lists button.
The Correlator active lists table is displayed.
- In the Name column, select the desired active list.
A window with the records for the selected list is opened.
- In the Search field, enter the record key value or several characters from the key.
The table of records of the active list displays only the records with the key containing the entered characters.
Adding a record to an active list
To add a record to the active list:
- In the KUMA Console, select the Resources section.
- In the Services section, click the Active services button.
- Select the check box next to the required correlator.
- Click the Go to active lists button.
The Correlator active lists table is displayed.
- In the Name column, select the desired active list.
A window with the records for the selected list is opened.
- Click Add.
The Create record window opens.
- Specify the values of the following parameters:
- In the Key field, enter the name of the record.
You can specify several values separated by the "|" character.
The Key field cannot be empty. If the field is not filled in, KUMA returns an error when trying to save the changes.
- In the Value field, specify the values for fields in the Field column.
KUMA takes field names from the correlation rules with which the active list is associated. These names are not editable. You can delete these fields if necessary.
- Click the Add new element button to add more values.
- In the Field column, specify the field name.
The name must meet the following requirements:
- To be unique
- Do not contain tab characters
- Do not contain special characters except for the underscore character
- The maximum number of characters is 128.
The name must not begin with an underscore and must not consist of numbers only.
- In the Value column, specify the value for this field.
It must meet the following requirements:
- Do not contain tab characters.
- Do not contain special characters except for the underscore character.
- The maximum number of characters is 1024.
This field is optional.
- In the Key field, enter the name of the record.
- Click the Save button.
The record is added. After saving, the records in the active list are sorted in alphabetical order.
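The field-naming requirements above can be expressed as a hypothetical validation sketch (not KUMA code; the rule set mirrors the requirements listed in the procedure):

```python
import re

def is_valid_field_name(name: str, existing: set) -> bool:
    # Requirements from the documentation:
    # unique, no tab characters, no special characters except underscore,
    # at most 128 characters, must not start with an underscore,
    # and must not consist of numbers only.
    if not name or len(name) > 128 or name in existing:
        return False
    if name.startswith("_") or name.isdigit():
        return False
    # Only letters, digits, and underscores are allowed
    return re.fullmatch(r"[A-Za-z0-9_]+", name) is not None
```
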
Duplicating records in the active list
To duplicate a record in the active list:
- In the KUMA Console, select the Resources section.
- In the Services section, click the Active services button.
- Select the check box next to the correlator for which you want to view the active list.
- Click the Go to active lists button.
The Correlator active lists table is displayed.
- In the Name column, select the desired active list.
A window with the records for the selected list is opened.
- Select the check boxes next to the record you want to copy.
- Click Duplicate.
- Specify the necessary settings.
The Key field cannot be empty. If the field is not filled in, KUMA returns an error when trying to save the changes.
Editing the field names in the Field column is not available for records that were added to the active list earlier. You can change the names only for records added at the time of editing. The name must not begin with an underscore and must not consist of numbers only.
- Click the Save button.
The record is copied. After saving, the records in the active list are sorted in alphabetical order.
Changing a record in the active list
To edit a record in the active list:
- In the KUMA Console, select the Resources section.
- In the Services section, click the Active services button.
- Select the check box next to the correlator for which you want to view the active list.
- Click the Go to active lists button.
The Correlator active lists table is displayed.
- In the Name column, select the desired active list.
A window with the records for the selected list is opened.
- Click the record name in the Key column.
- Specify the required values.
- Click the Save button.
The record is overwritten. After saving, the records in the active list are sorted in alphabetical order.
Restrictions when editing a record:
- The record name is not editable. You can change it by importing the same data with a different name.
- Editing the field names in the Field column is not available for records that were added to the active list earlier. You can change the names only for records added at the time of editing. The name must not begin with an underscore and must not consist of numbers only.
- The values in the Value column must meet the following requirements:
- Do not contain Cyrillic characters.
- Do not contain spaces or tabs.
- Do not contain special characters except for the underscore character.
- The maximum number of characters is 128.
Deleting records from the active list
To delete records from the active list:
- In the KUMA Console, select the Resources section.
- In the Services section, click the Active services button.
- Select the check box next to the correlator for which you want to view the active list.
- Click the Go to active lists button.
The Correlator active lists table is displayed.
- In the Name column, select the desired active list.
A window with the records for the selected list is opened.
- Select the check boxes next to the records you want to delete.
To delete all records, select the check box next to the Key column.
At least one check box must be selected.
- Click the Delete button.
- Click OK.
The records will be deleted.
Import data to an active list
To import data to an active list:
- In the KUMA Console, select the Resources section.
- In the Services section, click the Active services button.
- Select the check box next to the correlator for which you want to view the active list.
- Click the Go to active lists button.
The Correlator active lists table is displayed.
- Point the mouse over the row with the desired active list.
- Click
to the left of the active list name.
- Select Import.
The active list import window opens.
- In the File field, select the file you want to import.
- In the Format drop-down list, select the format of the file:
- csv
- tsv
- internal
- In the Key field, enter the name of the column containing the active list record keys.
- Click the Import button.
The data from the file is imported into the active list. Records that were already in the list are retained.
Data imported from a file is not checked for invalid characters. If you use this data in widgets, widgets are displayed incorrectly if invalid characters are present in the data.
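A minimal sketch of how a CSV file with a named key column maps to active-list records (an illustration only; the actual import is performed by KUMA itself, and the column names here are examples):

```python
import csv
import io

def parse_active_list_csv(text: str, key_field: str) -> dict:
    # Build {key: {field: value, ...}} from CSV text, mirroring the
    # import dialog where you name the column that holds the record keys.
    records = {}
    for row in csv.DictReader(io.StringIO(text)):
        key = row.pop(key_field)
        records[key] = row
    return records
```

For example, parse_active_list_csv("ip,score\n203.0.113.5,90\n", "ip") yields one record keyed by the IP address.
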
Exporting data from the active list
To export data from an active list:
- In the KUMA Console, select the Resources section.
- In the Services section, click the Active services button.
- Select the check box next to the correlator for which you want to view the active list.
- Click the Go to active lists button.
The Correlator active lists table is displayed.
- Point the mouse over the row with the desired active list.
- Click
to the left of the desired active list.
- Click the Export button.
The active list is downloaded in JSON format according to your browser's settings. The name of the downloaded file matches the name of the active list.
Predefined active lists
The active lists listed in the table below are included in the KUMA distribution kit.
Predefined active lists
Active list name |
Description |
[OOTB][AD] End-users tech support accounts |
This active list is used as a filter for the "[OOTB][AD] Successful authentication with same user account on multiple hosts" correlation rule. Accounts of technical support staff may be added to the active list. Records are not deleted from the active list. |
[OOTB][AD] List of sensitive groups |
This active list is used as a filter for the "[OOTB][AD] Membership of sensitive group was modified" correlation rule. Critical domain groups, whose membership must be monitored, can be added to the active list. Records are not deleted from the active list. |
[OOTB][Linux] CompromisedHosts |
This active list is populated by the [OOTB] Successful Bruteforce by potentially compromised Linux hosts rule. Records are removed from the list 24 hours after they are recorded. |
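To illustrate how an active list works as a filter (for example, the tech support accounts list above suppressing a correlation rule), here is a minimal Python sketch; the list contents and event fields are illustrative, not KUMA's internal format:

```python
# Minimal sketch of an active list used as a filter: events whose account
# appears in the list are excluded from correlation. The list contents and
# event fields below are illustrative, not KUMA's internal format.
tech_support_accounts = {"svc_helpdesk", "it_support01"}  # active list keys

def should_correlate(event: dict) -> bool:
    """Skip events produced by accounts recorded in the active list."""
    return event.get("SourceUserName") not in tech_support_accounts

events = [
    {"SourceUserName": "jdoe", "DestinationHostName": "srv-01"},
    {"SourceUserName": "svc_helpdesk", "DestinationHostName": "srv-02"},
]
candidates = [e for e in events if should_correlate(e)]
# Only the jdoe event remains a correlation candidate.
```

In KUMA itself this check is configured declaratively in the correlation rule, not in code; the sketch only shows the set-membership logic behind the filter.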
Dictionaries
Description of parameters
Dictionaries are resources that store data for use by other KUMA resources and services.
Available dictionary settings are listed in the table below.
Available dictionary settings
Setting |
Description |
---|---|
Name |
Unique name for this resource type. Maximum length of the name: 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
Description |
Description of the resource. Maximum length of the description: 4000 Unicode characters. |
Type |
Dictionary type. The selected dictionary type determines the format of the data that the dictionary can contain:
Required setting. |
Values |
Table with dictionary data.
If the dictionary contains more than 5,000 entries, they are not displayed in the KUMA Console. To view the contents of such a dictionary, export it in CSV format. If you edit the CSV file and import it back into KUMA, the dictionary is updated. |
Importing and exporting dictionaries
You can import or export dictionary data in CSV format (in UTF-8 encoding) by using the Import CSV or Export CSV buttons.
The format of the CSV file depends on the dictionary type:
- Dictionary type:
{KEY},{VALUE}\n
- Table type:
{Column header 1}, {Column header N}, {Column header N+1}\n
{Key1}, {ValueN}, {ValueN+1}\n
{Key2}, {ValueN}, {ValueN+1}\n
The keys must be unique for both the CSV file and the dictionary. In tables, the keys are specified in the first column. Keys must contain 1 to 128 Unicode characters.
Values must contain 0 to 256 Unicode characters.
During an import, the contents of the dictionary are overwritten by the imported file. The resource name is also changed to match the name of the imported file.
If a key or value contains a comma or a quotation mark (, or "), it is enclosed in quotation marks (") on export, and each quotation mark (") inside it is escaped with an additional quotation mark ("").
If invalid lines are detected in the imported file (for example, lines with the wrong separator), they are skipped when importing into a dictionary; when importing into a table, the import process is interrupted.
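Python's standard csv module follows the same quoting convention, so it can be used to prepare dictionary files for import. A sketch that also enforces the key and value length limits described above; the function name is illustrative:

```python
import csv
import io

def export_dictionary(entries: dict) -> str:
    """Serialize {key: value} pairs using the quoting rules described above:
    fields containing a comma or quotation mark are quoted, and embedded
    quotation marks are doubled."""
    for key, value in entries.items():
        # Keys: 1 to 128 Unicode characters; values: 0 to 256 characters.
        if not 1 <= len(key) <= 128:
            raise ValueError(f"key length out of range: {key!r}")
        if len(value) > 256:
            raise ValueError(f"value too long for key {key!r}")
    buf = io.StringIO()
    writer = csv.writer(buf, quoting=csv.QUOTE_MINIMAL, lineterminator="\n")
    for key, value in entries.items():
        writer.writerow([key, value])
    return buf.getvalue()

print(export_dictionary({"code,1": 'say "hi"'}))
# "code,1","say ""hi"""
```

QUOTE_MINIMAL quotes only the fields that need it, which matches the export behavior described above; uniqueness of keys is left to the caller since a Python dict already guarantees it.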
Interacting with dictionaries via API
You can use the REST API to read the contents of Table-type dictionaries. You can also modify them even if these resources are being used by active services. This lets you, for instance, configure enrichment of events with data from dynamically changing tables exported from third-party applications.
Predefined dictionaries
The dictionaries listed in the table below are included in the KUMA distribution kit.
Predefined dictionaries
Dictionary name |
Type |
Description |
[OOTB] Ahnlab. Severity |
dictionary |
Contains a table of correspondence between a priority ID and its name. |
[OOTB] Ahnlab. SeverityOperational |
dictionary |
Contains values of the SeverityOperational parameter and a corresponding description. |
[OOTB] Ahnlab. VendorAction |
dictionary |
Contains a table of correspondence between the ID of the operation being performed and its name. |
[OOTB] Cisco ISE Message Codes |
dictionary |
Contains Cisco ISE event codes and their corresponding names. |
[OOTB] DNS. Opcodes |
dictionary |
Contains a table of correspondence between decimal opcodes of DNS operations and their IANA-registered descriptions. |
[OOTB] IANAProtocolNumbers |
dictionary |
Contains the port numbers of transport protocols (TCP, UDP) and their corresponding service names, registered by IANA. |
[OOTB] Juniper - JUNOS |
dictionary |
Contains JUNOS event IDs and their corresponding descriptions. |
[OOTB] KEDR. AccountType |
dictionary |
Contains the ID of the user account type and its corresponding type name. |
[OOTB] KEDR. FileAttributes |
dictionary |
Contains IDs of file attributes stored by the file system and their corresponding descriptions. |
[OOTB] KEDR. FileOperationType |
dictionary |
Contains IDs of file operations from the KATA API and their corresponding operation names. |
[OOTB] KEDR. FileType |
dictionary |
Contains modified file IDs from the KATA API and their corresponding file type descriptions. |
[OOTB] KEDR. IntegrityLevel |
dictionary |
Contains the SIDs of the Microsoft Windows INTEGRITY LEVEL parameter and their corresponding descriptions. |
[OOTB] KEDR. RegistryOperationType |
dictionary |
Contains IDs of registry operations from the KATA API and their corresponding values. |
[OOTB] Linux. Sycall types |
dictionary |
Contains Linux system call IDs and their corresponding names. |
[OOTB] MariaDB Error Codes |
dictionary |
The dictionary contains MariaDB error codes and is used by the [OOTB] MariaDB Audit Plugin syslog normalizer to enrich events. |
[OOTB] Microsoft SQL Server codes |
dictionary |
Contains MS SQL Server error IDs and their corresponding descriptions. |
[OOTB] MS DHCP Event IDs Description |
dictionary |
Contains Microsoft Windows DHCP server event IDs and their corresponding descriptions. |
[OOTB] S-Terra. Dictionary MSG ID to Name |
dictionary |
Contains IDs of S-Terra device events and their corresponding event names. |
[OOTB] S-Terra. MSG_ID to Severity |
dictionary |
Contains IDs of S-Terra device events and their corresponding Severity values. |
[OOTB] Syslog Priority To Facility and Severity |
table |
The table contains the Priority values and the corresponding Facility and Severity field values. |
[OOTB] VipNet Coordinator Syslog Direction |
dictionary |
Contains direction IDs (sequences of special characters) used in ViPNet Coordinator to designate a direction, and their corresponding values. |
[OOTB] Wallix EventClassId - DeviceAction |
dictionary |
Contains Wallix AdminBastion event IDs and their corresponding descriptions. |
[OOTB] Windows.Codes (4738) |
dictionary |
Contains operation codes present in the MS Windows audit event with ID 4738 and their corresponding names. |
[OOTB] Windows.Codes (4719) |
dictionary |
Contains operation codes present in the MS Windows audit event with ID 4719 and their corresponding names. |
[OOTB] Windows.Codes (4663) |
dictionary |
Contains operation codes present in the MS Windows audit event with ID 4663 and their corresponding names. |
[OOTB] Windows.Codes (4662) |
dictionary |
Contains operation codes present in the MS Windows audit event with ID 4662 and their corresponding names. |
[OOTB] Windows. EventIDs and Event Names mapping |
dictionary |
Contains Windows event IDs and their corresponding event names. |
[OOTB] Windows. FailureCodes (4625) |
dictionary |
Contains IDs from the Failure Information\Status and Failure Information\Sub Status fields of Microsoft Windows event 4625 and their corresponding descriptions. |
[OOTB] Windows. ImpersonationLevels (4624) |
dictionary |
Contains IDs from the Impersonation level field of Microsoft Windows event 4624 and their corresponding descriptions. |
[OOTB] Windows. KRB ResultCodes |
dictionary |
Contains Kerberos v5 error codes and their corresponding descriptions. |
[OOTB] Windows. LogonTypes (Windows all events) |
dictionary |
Contains IDs of user logon types and their corresponding names. |
[OOTB] Windows_Terminal Server. EventIDs and Event Names mapping |
dictionary |
Contains Microsoft Terminal Server event IDs and their corresponding names. |
[OOTB] Windows. Validate Cred. Error Codes |
dictionary |
Contains IDs of user logon types and their corresponding names. |
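The [OOTB] Syslog Priority To Facility and Severity table encodes the standard syslog relation Priority = Facility × 8 + Severity (RFC 5424). A small sketch of that decomposition:

```python
def split_syslog_priority(priority: int) -> tuple:
    """Decompose a syslog Priority (PRI) value into Facility and Severity
    per RFC 5424: Priority = Facility * 8 + Severity."""
    if not 0 <= priority <= 191:
        raise ValueError("syslog PRI must be in the range 0..191")
    return divmod(priority, 8)  # (facility, severity)

# PRI 165 -> facility 20 (local4), severity 5 (notice)
print(split_syslog_priority(165))  # (20, 5)
```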
Response rules
Response rules let you automatically run Open Single Management Platform tasks, Threat Response actions for Kaspersky Endpoint Detection and Response, KICS/KATA, and Active Directory, and run a custom script for specific events.
Automatic execution of Open Single Management Platform tasks, Kaspersky Endpoint Detection and Response tasks, and KICS/KATA and Active Directory tasks in accordance with response rules is available when KUMA is integrated with the relevant applications.
You can configure response rules under Resources → Response, and then select the created response rule from the drop-down list in the correlator settings. You can also configure response rules directly in the correlator settings.
Response rules for Open Single Management Platform
You can configure response rules to automatically start tasks of anti-virus scan and updates on Open Single Management Platform assets.
When creating and editing response rules for Open Single Management Platform, you need to define values for the following settings.
Response rule settings
Setting |
Description |
---|---|
Name |
Required setting. Unique name of the resource. Must contain 1 to 128 Unicode characters. |
Tenant |
Required setting. The name of the tenant that owns the resource. |
Type |
Required setting, available if KUMA is integrated with Open Single Management Platform. Response rule type, ksctasks. |
Open Single Management Platform task |
Required setting. Name of the Open Single Management Platform task to run. Tasks must be created beforehand, and their names must begin with " You can use KUMA to run the following types of Open Single Management Platform tasks:
|
Event field |
Required setting. Defines the event field of the asset for which the Open Single Management Platform task should be started. Possible values:
|
Handlers |
The number of handlers that the service can run simultaneously to process response rules in parallel. By default, the number of handlers is the same as the number of virtual processors on the server where the service is installed. |
Description |
Description of the response rule. You can add up to 4,000 Unicode characters. |
Filter |
Used to define the conditions for the events to be processed using the response rule. You can select an existing filter from the drop-down list or create a new filter. |
To send requests to Open Single Management Platform, you must ensure that Open Single Management Platform is available over the UDP protocol.
If a response rule is owned by the shared tenant, the displayed Open Single Management Platform tasks that are available for selection are from the Open Single Management Platform server that the main tenant is connected to.
If a response rule has a selected task that is absent from the Open Single Management Platform server that the tenant is connected to, the task is not performed for assets of this tenant. This situation could arise when two tenants are using a common correlator, for example.
Response rules for a custom script
You can create a script containing commands to be executed on the Kaspersky Unified Monitoring and Analysis Platform server when selected events are detected and configure response rules to automatically run this script. In this case, the program will run the script when it receives events that match the response rules.
The script file is stored on the server where the correlator service using the response resource is installed: /opt/kaspersky/kuma/correlator/<Correlator ID>/scripts. The kuma user on this server must have permission to run the script.
When creating and editing response rules for a custom script, you need to define values for the following parameters.
Response rule settings
Setting |
Description |
---|---|
Name |
Required setting. Unique name of the resource. Must contain 1 to 128 Unicode characters. |
Tenant |
Required setting. The name of the tenant that owns the resource. |
Type |
Required setting. Response rule type, script. |
Timeout |
The number of seconds allotted for the script to finish. If this amount of time is exceeded, the script is terminated. |
Script name |
Required setting. Name of the script file. If the response resource is attached to the correlator service but there is no script file in the /opt/kaspersky/kuma/correlator/<Correlator ID>/scripts folder, the correlator will not work. |
Script arguments |
Arguments or event field values that must be passed to the script. If the script includes actions taken on files, you should specify the absolute path to these files. Parameters can be written with quotation marks ("). Event field names are passed in the Example: |
Handlers |
The number of handlers that the service can run simultaneously to process response rules in parallel. By default, the number of handlers is the same as the number of virtual processors on the server where the service is installed. |
Description |
Description of the resource. You can add up to 4,000 Unicode characters. |
Filter |
Used to define the conditions for the events to be processed using the response rule. You can select an existing filter from the drop-down list or create a new filter. |
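As an illustration, a response script might look like the following Python sketch. The argument layout, function names, and blocklist path are assumptions for this example only; KUMA itself only requires that the script exist in the scripts folder and be runnable by the kuma user:

```python
"""Illustrative KUMA response script: appends an IP address received as a
command-line argument to a blocklist file. The argument layout, function
names, and blocklist path are assumptions for this example only."""
import sys
from pathlib import Path

def block_ip(ip: str, blocklist: Path) -> None:
    """Append the offending address to the blocklist file."""
    with blocklist.open("a", encoding="utf-8") as f:
        f.write(ip + "\n")

def main(argv: list, blocklist: Path) -> int:
    # KUMA passes the values configured in Script arguments as argv[1:],
    # for example an event field such as SourceAddress.
    if len(argv) < 2:
        print("usage: block_ip.py <ip-address>", file=sys.stderr)
        return 1
    block_ip(argv[1], blocklist)
    return 0

# Deployed under /opt/kaspersky/kuma/correlator/<Correlator ID>/scripts, the
# entry point would be: sys.exit(main(sys.argv, Path("/var/lib/blocklist")))
```

A real script must also complete within the configured Timeout value, since the correlator terminates it once the allotted time is exceeded.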
Response rules for KICS for Networks
You can configure response rules to automatically trigger response actions on KICS for Networks assets. For example, you can change the asset status in KICS for Networks.
When creating and editing response rules for KICS for Networks, you need to define values for the following settings.
Response rule settings
Setting |
Description |
---|---|
Name |
Required setting. Unique name of the resource. Must contain 1 to 128 Unicode characters. |
Tenant |
Required setting. The name of the tenant that owns the resource. |
Type |
Required setting. Response rule type, Response via KICS/KATA. |
Event field |
Required setting. Specifies the event field for the asset for which response actions must be performed. Possible values:
|
KICS for Networks task |
Response action to be performed when data is received that matches the filter. The following types of response actions are available:
When a response rule is triggered, KUMA will send KICS for Networks an API request to change the status of the specified device to Authorized or Unauthorized. |
Handlers |
The number of handlers that the service can run simultaneously to process response rules in parallel. By default, the number of handlers is the same as the number of virtual processors on the server where the service is installed. |
Description |
Description of the resource. You can add up to 4,000 Unicode characters. |
Filter |
Used to define the conditions for the events to be processed using the response rule. You can select an existing filter from the drop-down list or create a new filter. |
Response rules for Kaspersky Endpoint Detection and Response
You can configure response rules to automatically trigger response actions on Kaspersky Endpoint Detection and Response assets. For example, you can configure automatic asset network isolation.
When creating and editing response rules for Kaspersky Endpoint Detection and Response, you need to define values for the following settings.
Response rule settings
Setting |
Description |
---|---|
Event field |
Required setting. Specifies the event field for the asset for which response actions must be performed. Possible values:
|
Task type |
Response action to be performed when data is received that matches the filter. The following types of response actions are available:
At least one of the above fields must be completed.
All of the listed operations can be performed on assets that have Kaspersky Endpoint Agent for Windows. On assets that have Kaspersky Endpoint Agent for Linux, only starting a program is supported. At the software level, nothing prevents you from creating prevention rules and network isolation rules for assets with Kaspersky Endpoint Agent for Linux, but Kaspersky Unified Monitoring and Analysis Platform and Kaspersky Endpoint Detection and Response do not notify you if these rules fail to apply. |
Handlers |
The number of handlers that the service can run simultaneously to process response rules in parallel. By default, the number of handlers is the same as the number of virtual processors on the server where the service is installed. |
Description |
Description of the response rule. You can add up to 4,000 Unicode characters. |
Filter |
Used to define the conditions for the events to be processed using the response rule. You can select an existing filter from the drop-down list or create a new filter. |
Active Directory response rules
Active Directory response rules define the actions to be applied to an account if a rule is triggered.
When creating and editing response rules using Active Directory, specify the values for the following settings.
Response rule settings
Setting |
Description |
---|---|
Name |
Required setting. Unique name of the resource. Must contain 1 to 128 Unicode characters. |
Tenant |
Required setting. The name of the tenant that owns the resource. |
Type |
Required setting. Response rule type, Response via Active Directory. |
Source of the user account ID |
Event field from which the Active Directory account ID value is taken. Possible values:
|
AD command |
Command that is applied to the account when the response rule is triggered. Available values:
If your Active Directory domain allows selecting the User cannot change password check box, resetting the user account password as a response will result in a conflict of requirements for the user account: the user will not be able to authenticate. The domain administrator will need to clear one of the check boxes for the affected user account: User cannot change password or User must change password at next logon.
|
Group DN |
The DistinguishedName of the domain group in fields for each role. The users of this domain group must be able to authenticate with their domain user accounts. Example of entering a group: OU=KUMA users,OU=users,DC=example,DC=domain |
Handlers |
The number of handlers that the service can run simultaneously to process response rules in parallel. By default, the number of handlers is the same as the number of virtual processors on the server where the service is installed. |
Filter |
Used to define the conditions for the events to be processed using the response rule. You can select an existing filter from the drop-down list or create a new filter. |
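A DistinguishedName such as the Group DN example above is a comma-separated list of attribute=value pairs. A simplified Python sketch of splitting one (it deliberately does not handle escaped commas inside values):

```python
def parse_dn(dn: str) -> list:
    """Split a DistinguishedName into (attribute, value) pairs.
    Simplified: does not handle escaped commas (\\,) inside values."""
    pairs = []
    for rdn in dn.split(","):
        attr, _, value = rdn.partition("=")
        pairs.append((attr.strip(), value.strip()))
    return pairs

print(parse_dn("OU=KUMA users,OU=users,DC=example,DC=domain"))
# [('OU', 'KUMA users'), ('OU', 'users'), ('DC', 'example'), ('DC', 'domain')]
```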
Connectors
Connectors are used for establishing connections between Kaspersky Unified Monitoring and Analysis Platform services and for receiving events actively and passively.
You can specify connector settings on the Basic settings and Advanced settings tabs. The available settings depend on the selected type of connector.
Connectors can have the following types:
- internal – Used for receiving data from KUMA services using the 'internal' protocol.
- tcp – Used for passively receiving events over TCP when working with Windows and Linux agents.
- udp – Used for passively receiving events over UDP when working with Windows and Linux agents.
- netflow – Used for passively receiving events in the NetFlow format.
- sflow – Used for passively receiving events in the sFlow format. For sFlow, only structures described in sFlow version 5 are supported.
- nats-jetstream – Used for interacting with a NATS message broker when working with Windows and Linux agents.
- kafka – Used for communicating with the Apache Kafka data bus when working with Windows and Linux agents.
- http – Used for receiving events over HTTP when working with Windows and Linux agents.
- sql – Used for querying databases. KUMA supports multiple types of databases. When creating a connector of the sql type, you must specify general connector settings and individual database connection settings.
- file – Used for getting data from text files when working with Windows and Linux agents. One line of a text file is considered to be one event. \n is used as the newline character.
- 1c-log – Used for getting data from 1C technology logs when working with Linux agents. \n is used as the newline character. The connector accepts only the first line from a multi-line event record.
- 1c-xml – Used for getting data from 1C registration logs when working with Linux agents. When the connector handles multi-line events, it converts them into single-line events.
- diode – Used for unidirectional data transmission in industrial ICS networks using data diodes.
- ftp – Used for getting data over File Transfer Protocol (FTP) when working with Windows and Linux agents.
- nfs – Used for getting data over Network File System (NFS) when working with Windows and Linux agents.
- wmi – Used for getting data using Windows Management Instrumentation when working with Windows agents.
- wec – Used for getting data using Windows Event Forwarding (WEF) and Windows Event Collector (WEC), or local operating system logs of a Windows host when working with Windows agents.
- etw – Used for getting extended logs of DNS servers.
- snmp – Used for getting data over Simple Network Management Protocol (SNMP) when working with Windows and Linux agents. To process events received over SNMP, you must use the json normalizer. Supported SNMP protocol versions:
- snmpV1
- snmpV2
- snmpV3
- snmp-trap – Used for passively receiving events using SNMP traps when working with Windows and Linux agents. The connector receives snmp-trap events and prepares them for normalization by mapping SNMP object IDs to temporary keys. Then the message is passed to the JSON normalizer, where the temporary keys are mapped to the KUMA fields and an event is generated. To process events received over SNMP, you must use the json normalizer. Supported SNMP protocol versions:
- snmpV1
- snmpV2
- kata/edr – Used for getting KEDR data via the API.
- vmware – Used for getting VMware vCenter data via the API.
- elastic – Used for getting Elasticsearch data. Elasticsearch version 7.0.0 is supported.
- office365 – Used for receiving Microsoft 365 (Office 365) data via the API.
Some connector types (such as tcp, sql, wmi, wec, and etw) support TLS encryption. KUMA supports TLS 1.2 and 1.3. When TLS mode is enabled for these connectors, the connection is established according to the following algorithm:
- If KUMA is being used as a client:
- KUMA sends a connection request to the server with a ClientHello message specifying the maximum supported TLS version (1.3), as well as a list of supported ciphersuites.
- The server responds with the preferred TLS version and a ciphersuite.
- Depending on the TLS version in the server response:
- If the server responds to the request with TLS 1.3 or 1.2, KUMA establishes a connection with the server.
- If the server responds to the request with TLS 1.1, KUMA terminates the connection with the server.
- If KUMA is being used as a server:
- The client sends a connection request to KUMA with the maximum supported TLS version, as well as a list of supported ciphersuites.
- Depending on the TLS version in the client request:
- If the ClientHello message of the client request specifies TLS 1.1, KUMA terminates the connection.
- If the client request specifies TLS 1.2 or 1.3, KUMA responds to the request with the preferred TLS version and a ciphersuite.
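The same policy (accept TLS 1.2 and 1.3, reject older versions) can be expressed with Python's ssl module; this sketch is an analogy to the behavior described above, not KUMA code:

```python
import ssl

# Sketch of the negotiation policy described above: offer up to TLS 1.3,
# refuse TLS 1.1 and below. The ssl module enforces the bounds during the
# handshake, terminating connections outside the allowed range.
def make_client_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()            # client-side defaults
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.1 and below
    ctx.maximum_version = ssl.TLSVersion.TLSv1_3  # offer up to TLS 1.3
    return ctx

ctx = make_client_context()
print(ctx.minimum_version == ssl.TLSVersion.TLSv1_2)  # True
```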
Viewing connector settings
To view connector settings:
- In the web interface of Kaspersky Unified Monitoring and Analysis Platform, go to the Resources → Connectors section.
- In the folder structure, select the folder containing the relevant connector.
- Select the connector whose settings you want to view.
The settings of connectors are displayed on two tabs: Basic settings and Advanced settings. For a detailed description of the settings of each connector type, refer to the Connector settings section.
Adding a connector
You can enable the display of non-printing characters for all entry fields except the Description field.
To add a connector:
- In the web interface of Kaspersky Unified Monitoring and Analysis Platform, go to the Resources → Connectors section.
- In the folder structure, select the folder in which you want the connector to be located.
Root folders correspond to tenants. To make a connector available to a specific tenant, the resource must be created in the folder of that tenant.
If the required folder is absent from the folder tree, you need to create it.
By default, added connectors are created in the Shared folder.
- Click the Add connector button.
- Define the settings for the selected connector type.
The settings that you must specify for each type of connector are provided in the Connector settings section.
- Click the Save button.
Connector settings
This section contains the description of all connector types supported by Kaspersky Unified Monitoring and Analysis Platform.
Connector, internal type
Connectors of the internal type are used for receiving data from KUMA services using the 'internal' protocol. For example, you must use such a connector to receive the following data:
- Internal data, such as event routes.
- File attributes. If, while creating the collector, you specified a connector of the file, 1c-xml, or 1c-log type at the Transport step of the installation wizard, then at the Event parsing step, in the Mapping table, you can pass the name of the file being processed by the collector, or the path to that file, in a KUMA event field. To do this, in the Source column, specify one of the following values:
  - $kuma_fileSourceName to pass the name of the file being processed by the collector in the KUMA event field.
  - $kuma_fileSourcePath to pass the path to the file being processed by the collector in the KUMA event field.
When you use a file, 1c-xml, or 1c-log connector, the new variables in the normalizer will only work with destinations of the internal type.
- Events to the event router. The event router can only receive events over the 'internal' protocol, therefore you can only use internal destinations when sending events to the event router.
Settings for a connector of the internal type are described in the following tables.
Basic settings tab
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
Type |
Connector type: internal. Required setting. |
Tags |
Tags for resource search. Optional setting. |
URL |
The URL and port that the connector is listening on. You can enter a value in one of the following formats:
You can specify IPv6 addresses in the following format: You can add multiple values or delete values. To add a value, click the + Add button. To delete a value, click the delete Required setting. |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
Advanced settings tab
Setting |
Description |
---|---|
Debug |
This toggle switch enables resource logging. The toggle switch is turned off by default. |
Connector, tcp type
Connectors of the tcp type are used for passively receiving events over TCP when working with Windows and Linux agents. Settings for a connector of the tcp type are described in the following tables.
Basic settings tab
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
Type |
Connector type: tcp. Required setting. |
Tags |
Tags for resource search. Optional setting. |
URL |
URL that you want to connect to. You can enter a URL in one of the following formats:
Required setting. |
Auditd |
This toggle switch enables the auditd mechanism to group auditd event lines received from the connector into an auditd event. If you enable this toggle switch, you cannot select a value in the Delimiter drop-down list because \n is automatically selected for the auditd mechanism. If you enable this toggle switch in the connector settings of the agent, you need to select \n in the Delimiter drop-down list in the connector settings of the collector to which the agent sends events. The maximum size of a grouped auditd event is approximately 4,174,304 characters. |
Delimiter |
The character that marks the boundary between events:
If you do not select a value in this drop-down list, \n is selected by default. |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
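The delimiter's role can be sketched as follows: received bytes accumulate in a buffer, and every complete chunk before the delimiter becomes one event. A minimal Python illustration (not KUMA code):

```python
# Minimal sketch: a TCP byte stream arrives in arbitrary chunks, and a
# delimiter such as \n marks event boundaries. Bytes accumulate in a buffer;
# each complete chunk before the delimiter is emitted as one event.
def split_stream(chunks, delimiter=b"\n"):
    """Yield complete events from an iterable of received byte chunks."""
    buffer = b""
    for chunk in chunks:
        buffer += chunk
        while delimiter in buffer:
            event, buffer = buffer.split(delimiter, 1)
            yield event
    # Anything left in the buffer is an incomplete event.

received = [b"event one\nev", b"ent two\nevent th", b"ree\n"]
print(list(split_stream(received)))
# [b'event one', b'event two', b'event three']
```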
Advanced settings tab
Setting |
Description |
---|---|
Debug |
This toggle switch enables resource logging. The toggle switch is turned off by default. |
Buffer size |
Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB). |
Character encoding |
Character encoding. The default is UTF-8. |
Event buffer TTL |
Buffer lifetime for auditd event lines, in milliseconds. Auditd event lines enter the KUMA collector and accumulate in the buffer. This allows multiple auditd event lines to be grouped into a single auditd event. The buffer lifetime countdown begins when the first auditd event line is received or when the previous buffer lifetime expires. Possible values: from 50 to 30,000. The default value is This field is available if you have enabled the Auditd toggle switch on the Basic settings tab. The auditd event lines accumulated in the buffer are kept in the RAM of the server. We recommend caution when increasing the buffer size because memory usage by the KUMA collector may become excessive. You can see how much server RAM the KUMA collector is using in KUMA metrics. If you want a buffer lifetime to exceed 30,000 milliseconds, we recommend using a different auditd event transport. For example, you can use an agent or pre-accumulate auditd events in a file, and then process this file with the KUMA collector. |
Transport header |
Regular expression for auditd events, which is used to identify auditd event lines. You can use the default value or edit it. The regular expression must contain the You can revert to the default regular expression for auditd events by clicking Reset to default value. |
TLS mode |
TLS encryption mode. When using TLS encryption, you cannot specify an IP address in the URL field on the Basic settings tab. Available values:
|
Compression |
Drop-down list for configuring Snappy compression:
|
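The auditd grouping idea described above can be sketched as follows: each auditd line carries a msg=audit(&lt;timestamp&gt;:&lt;serial&gt;) marker, and lines with the same marker belong to one logical event. The regular expression below is illustrative, not KUMA's default Transport header value:

```python
import re
from collections import OrderedDict

# Sketch of auditd grouping: every auditd line carries a
# msg=audit(<timestamp>:<serial>) marker, and lines sharing the same marker
# form one logical event. The regular expression is illustrative only.
AUDIT_ID = re.compile(r"msg=audit\((\d+\.\d+:\d+)\):")

def group_auditd(lines):
    """Group auditd lines into events keyed by their audit record ID."""
    events = OrderedDict()
    for line in lines:
        m = AUDIT_ID.search(line)
        if m:
            events.setdefault(m.group(1), []).append(line)
    return events

lines = [
    "type=SYSCALL msg=audit(1700000000.123:42): arch=c000003e syscall=59",
    'type=EXECVE msg=audit(1700000000.123:42): argc=2 a0="cat"',
    "type=SYSCALL msg=audit(1700000001.456:43): arch=c000003e syscall=2",
]
grouped = group_auditd(lines)
print(len(grouped["1700000000.123:42"]))  # 2
```

In KUMA the equivalent buffering is bounded by the Event buffer TTL setting, so lines that arrive after the buffer lifetime expires start a new grouped event.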
Connector, udp type
Connectors of the udp type are used for passively receiving events over UDP when working with Windows and Linux agents. Settings for a connector of the udp type are described in the following tables.
Basic settings tab
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
Type |
Connector type: udp. Required setting. |
URL |
URL that you want to connect to. You can enter a URL in one of the following formats:
Required setting. |
Auditd |
This toggle switch enables the auditd mechanism to group auditd event lines received from the connector into an auditd event. If you enable this toggle switch, you cannot select a value in the Delimiter drop-down list because \n is automatically selected for the auditd mechanism. If you enable this toggle switch in the connector settings of the agent, you need to select \n in the Delimiter drop-down list in the connector settings of the collector to which the agent sends events. The maximum size of a grouped auditd event is approximately 4,174,304 characters. |
Delimiter |
The character that marks the boundary between events:
If you do not select a value in this drop-down list, \n is selected by default. |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
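The grouping that the Auditd toggle switch performs can be illustrated with a short sketch (a simplified model, not KUMA's actual implementation; the sample lines are made up): auditd lines that belong to one logical event share the same msg=audit(&lt;timestamp&gt;:&lt;serial&gt;) key, so grouping by that key assembles one multi-line auditd event.

```python
import re

# auditd lines carry a shared key in the form msg=audit(<timestamp>:<serial>);
# lines with the same key belong to the same logical event.
AUDIT_KEY = re.compile(r"msg=audit\((\d+\.\d+:\d+)\):")

def group_auditd_lines(lines):
    """Group raw auditd lines into events keyed by timestamp:serial."""
    events = {}
    for line in lines:
        m = AUDIT_KEY.search(line)
        if not m:
            continue  # skip lines without an audit key
        events.setdefault(m.group(1), []).append(line.rstrip("\n"))
    # one grouped event = the joined lines that share one key
    return ["\n".join(v) for v in events.values()]

lines = [
    'type=SYSCALL msg=audit(1700000000.123:42): arch=c000003e syscall=59\n',
    'type=EXECVE msg=audit(1700000000.123:42): argc=2 a0="ls" a1="-l"\n',
    'type=SYSCALL msg=audit(1700000001.456:43): arch=c000003e syscall=2\n',
]
grouped = group_auditd_lines(lines)
```

Here the first two lines collapse into one grouped event and the third starts a new one, which mirrors why the \n delimiter is forced when the toggle is enabled.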
Advanced settings tab
Setting |
Description |
---|---|
Debug |
This toggle switch enables resource logging. It is turned off by default. |
Buffer size |
Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB). |
Number of handlers |
Number of handlers that the service can run simultaneously to process response rules in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2. The value must be a positive integer. |
Character encoding |
Character encoding. The default is UTF-8. |
Event buffer TTL |
Buffer lifetime for auditd event lines, in milliseconds. Auditd event lines enter the KUMA collector and accumulate in the buffer. This allows multiple auditd event lines to be grouped into a single auditd event. The buffer lifetime countdown begins when the first auditd event line is received or when the previous buffer lifetime expires. Possible values: from 50 to 30,000. The default value is This field is available if you have enabled the Auditd toggle switch on the Basic settings tab. The auditd event lines accumulated in the buffer are kept in the RAM of the server. We recommend caution when increasing the buffer size because memory usage by the KUMA collector may become excessive. You can see how much server RAM the KUMA collector is using in KUMA metrics. If you want a buffer lifetime to exceed 30,000 milliseconds, we recommend using a different auditd event transport. For example, you can use an agent or pre-accumulate auditd events in a file, and then process this file with the KUMA collector. |
Transport header |
Regular expression for auditd events, which is used to identify auditd event lines. You can use the default value or edit it. The regular expression must contain the You can revert to the default regular expression for auditd events by clicking Reset to default value. |
Compression |
Drop-down list for configuring Snappy compression:
|
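The formula given for the Number of handlers setting, (&lt;number of CPUs&gt; / 2) + 2, can be computed directly; a small illustrative sketch (the function name is ours, not KUMA's):

```python
import os

def suggested_handlers(cpus=None):
    """Apply the documented formula (<number of CPUs> / 2) + 2,
    using integer division and at least one handler."""
    if cpus is None:
        cpus = os.cpu_count() or 1  # fall back to 1 CPU if undetectable
    return max(1, cpus // 2 + 2)

# e.g. an 8-CPU collector host suggests 8 // 2 + 2 = 6 handlers
```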
Connector, netflow type
Connectors of the netflow typeUsed for passively receiving events in the NetFlow format. Settings for a connector of the netflow type are described in the following tables.
Basic settings tab
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
Type |
Connector type: netflow. Required setting. |
Tags |
Tags for resource search. Optional setting. |
URL |
URL that you want to connect to. The following URL formats are supported:
You can add multiple URLs or remove a URL. To add a URL, click the + Add button. To remove a URL, click the delete icon. Required setting. |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
Advanced settings tab
Setting |
Description |
---|---|
Debug |
This toggle switch enables resource logging. It is turned off by default. |
Buffer size |
Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB). |
Number of handlers |
Number of handlers that the service can run simultaneously to process response rules in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2. The value must be a positive integer. |
Character encoding |
Character encoding. The default is UTF-8. |
Connector, sflow type
Connectors of the sflow type are used for passively receiving events in the sFlow format. For sFlow, only structures described in sFlow version 5 are supported. Settings for a connector of the sflow type are described in the following tables.
Basic settings tab
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
Type |
Connector type: sflow. Required setting. |
URL |
URL that you want to connect to. You can enter a URL in one of the following formats:
Required setting. |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
Advanced settings tab
Setting |
Description |
---|---|
Debug |
This toggle switch enables resource logging. It is turned off by default. |
Buffer size |
Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB). |
Number of handlers |
Number of handlers that the service can run simultaneously to process response rules in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2. The value must be a positive integer. |
Character encoding |
Character encoding. The default is UTF-8. |
Connector, nats-jetstream type
Connectors of the nats-jetstream type are used for interacting with a NATS message broker when working with Windows and Linux agents. Settings for a connector of the nats-jetstream type are described in the following tables.
Basic settings tab
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
Type |
Connector type: nats-jetstream. Required setting. |
Tags |
Tags for resource search. Optional setting. |
URL |
URL that you want to connect to. The following URL formats are supported:
You can add multiple URLs or remove a URL. To add a URL, click the + Add button. To remove a URL, click the delete icon. Required setting. |
Authorization |
Type of authorization when connecting to the URL specified in the URL field:
|
Subject |
The topic of NATS messages. Characters are entered in Unicode encoding. Required setting. |
GroupID |
The value of the |
Delimiter |
The character that marks the boundary between events:
If you do not select a value in this drop-down list, \n is selected by default. |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
Advanced settings tab
Setting |
Description |
---|---|
Debug |
This toggle switch enables resource logging. It is turned off by default. |
Buffer size |
Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB). |
Number of handlers |
Number of handlers that the service can run simultaneously to process response rules in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2. The value must be a positive integer. |
Character encoding |
Character encoding. The default is UTF-8. |
TLS mode |
TLS encryption mode. When using TLS encryption, you cannot specify an IP address in the URL field on the Basic settings tab. Available values:
|
Compression |
Drop-down list for configuring Snappy compression:
|
Connector, kafka type
Connectors of the kafka type are used for communicating with the Apache Kafka data bus when working with Windows and Linux agents. Settings for a connector of the kafka type are described in the following tables.
Basic settings tab
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
Type |
Connector type: kafka. Required setting. |
Tags |
Tags for resource search. Optional setting. |
URL |
URL that you want to connect to. The following URL formats are supported:
You can add multiple URLs or remove a URL. To add a URL, click the + Add button. To remove a URL, click the delete icon. Required setting. |
Authorization |
Type of authorization when connecting to the URL specified in the URL field:
|
Topic |
The topic of Kafka messages. The maximum length of the topic name is 255 characters. You can use the following characters: a–z, A–Z, 0–9, ".", "_", "-". Required setting. |
GroupID |
The value of the |
Delimiter |
The character that marks the boundary between events:
If you do not select a value in this drop-down list, \n is selected by default. |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
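The Topic constraints above (at most 255 characters, drawn from a–z, A–Z, 0–9, ".", "_", "-") can be checked with a simple regular expression before creating the connector; a hedged sketch (the helper name is ours):

```python
import re

# Allowed characters and length per the Topic field description:
# a-z, A-Z, 0-9, ".", "_", "-", at most 255 characters.
TOPIC_RE = re.compile(r"[A-Za-z0-9._-]{1,255}")

def is_valid_topic(name: str) -> bool:
    """Return True if the whole name matches the documented constraints."""
    return TOPIC_RE.fullmatch(name) is not None
```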
Advanced settings tab
Setting |
Description |
---|---|
Debug |
This toggle switch enables resource logging. It is turned off by default. |
Number of handlers |
Number of handlers that the service can run simultaneously to process response rules in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2. The value must be a positive integer. |
Character encoding |
Character encoding. The default is UTF-8. |
TLS mode |
TLS encryption mode. When using TLS encryption, you cannot specify an IP address in the URL field on the Basic settings tab. Available values:
|
Size of message to fetch |
Size of one message in the request, in bytes. The default value of 16 MB is applied if no value is specified or 0 is specified. |
Maximum fetch wait time |
Timeout for one message in seconds. The default value of 5 seconds is applied if no value is specified or 0 is specified. |
Connection timeout |
Kafka broker connection timeout in seconds. Maximum possible value: 2147483647. The default value is 30 seconds. |
Read timeout |
Read operation timeout in seconds. Maximum possible value: 2147483647. The default value is 30 seconds. |
Write timeout |
Write operation timeout in seconds. Maximum possible value: 2147483647. The default value is 30 seconds. |
Group status update interval |
Group status update interval in seconds. The value cannot exceed the session time. The recommended value is 1/3 of the session time. Maximum possible value: 2147483647. The default value is 30 seconds. |
Session time |
Session time in seconds. Maximum possible value: 2147483647. The default value is 30 seconds. |
Maximum time to process one message |
Maximum time to process one message by a single thread, in milliseconds. Maximum possible value: 2147483647. The default value is 100 milliseconds. |
Enable autocommit |
Enabled by default. |
Autocommit interval |
Autocommit interval in seconds. The default value is 1 second. Maximum possible value: 18446744073709551615. Any positive number can be specified. |
Connector, http type
Connectors of the http type are used for receiving events over HTTP when working with Windows and Linux agents. Settings for a connector of the http type are described in the following tables.
Basic settings tab
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
Type |
Connector type: http. Required setting. |
Tags |
Tags for resource search. Optional setting. |
URL |
URL that you want to connect to. You can enter a URL in one of the following formats:
Required setting. |
Delimiter |
The character that marks the boundary between events:
If you do not select a value in this drop-down list, \n is selected by default. |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
Advanced settings tab
Setting |
Description |
---|---|
Debug |
This toggle switch enables resource logging. It is turned off by default. |
Character encoding |
Character encoding. The default is UTF-8. |
TLS mode |
TLS encryption mode. When using TLS encryption, you cannot specify an IP address in the URL field on the Basic settings tab. Available values:
|
Connector, sql type
Connectors of the sql type are used for querying databases. KUMA supports multiple types of databases. When creating a connector of the sql type, you must specify general connector settings and individual database connection settings. Settings for a connector of the sql type are described in the following tables.
Basic settings tab
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
Type |
Connector type: sql. Required setting. |
Tags |
Tags for resource search. Optional setting. |
Default query |
SQL query that is executed when connecting to the database. Required setting. |
Reconnect to the database every time a query is sent |
This toggle enables reconnection of the connector to the database every time a query is sent. This toggle switch is turned off by default. |
Poll interval, sec |
Interval for executing SQL queries in seconds. The default value is 10 seconds. |
Connection |
Database connection settings:
You can add multiple connections or delete a connection. To add a connection, click the + Add connection button. To remove a connection, click the delete icon. |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
Advanced settings tab
Setting |
Description |
---|---|
Debug |
This toggle switch enables resource logging. It is turned off by default. |
Character encoding |
Character encoding. The default is UTF-8. KUMA converts SQL responses to UTF-8 encoding. You can configure the SQL server to send responses in UTF-8 encoding or change the encoding of incoming messages on the KUMA side. |
Within a single connector, you can create a connection for multiple supported databases. If a collector with a connector of the sql type cannot be started, check whether the /opt/kaspersky/kuma/collector/<collector ID>/sql/state-<file ID> state file is empty. If the state file is empty, delete it and restart the collector.
Supported SQL types and their specific usage features
The following SQL types are supported:
- MSSQL. For example:
sqlserver://{user}:{password}@{server:port}/{instance_name}?database={database}
We recommend using this URL variant.
sqlserver://{user}:{password}@{server}?database={database}
The characters @p1 are used as a placeholder in the SQL query. If you want to connect using domain account credentials, specify the account name in <domain>%5C<user> format. For example: sqlserver://domain%5Cuser:password@ksc.example.com:1433/SQLEXPRESS?database=KAV.
- MySQL/MariaDB. For example:
mysql://{user}:{password}@tcp({server}:{port})/{database}
The characters ? are used as placeholders in the SQL query.
- PostgreSQL. For example:
postgres://{user}:{password}@{server}/{database}?sslmode=disable
The characters $1 are used as a placeholder in the SQL query.
- CockroachDB. For example:
postgres://{user}:{password}@{server}:{port}/{database}?sslmode=disable
The characters $1 are used as a placeholder in the SQL query.
- SQLite3. For example:
sqlite3://file:{file_path}
A question mark (?) is used as a placeholder in the SQL query. When querying SQLite3, if the initial value of the ID is in datetime format, you must add a date conversion with the sqlite datetime function to the SQL query. For example:
select * from connections where datetime(login_time) > datetime(?, 'utc') order by login_time
In this example, connections is the SQLite table, and the value of the variable ? is taken from the Identity seed field; it must be specified in the {<date>}T{<time>}Z format, for example, 2021-01-01T00:10:00Z.
- Oracle DB. Example URLs of a secret with the 'oracle' driver:
oracle://{user}:{password}@{server}:{port}/{service_name}
oracle://{user}:{password}@{server}:{port}/?SID={SID_VALUE}
If the query execution time exceeds 30 seconds, the oracle driver aborts the SQL request, and the following error appears in the collector log: user requested cancel of current operation. To increase the execution time of an SQL query, specify the value of the timeout parameter in seconds in the connection string, for example:
oracle://{user}:{password}@{server}:{port}/{service_name}?timeout=300
The :val variable is used as a placeholder in the SQL query. When querying Oracle DB, if the identity seed is in the datetime format, you must consider the type of the field in the database and, if necessary, add conversions of the time string in the SQL query to make sure the SQL connector works correctly. For example, if the Connections table in the database has a login_time field, the following conversions are possible:
  - If the login_time field has the TIMESTAMP type, then depending on the configuration of the database, the login_time field may contain a value in the YYYY-MM-DD HH24:MI:SS format, for example, 2021-01-01 00:00:00. In this case, you need to specify 2021-01-01T00:00:00Z in the Identity seed field, and in the SQL query, perform the conversion using the to_timestamp function, for example:
select * from connections where login_time > to_timestamp(:val, 'YYYY-MM-DD"T"HH24:MI:SS"Z"')
  - If the login_time field has the TIMESTAMP WITH TIME ZONE type, then depending on the configuration of the database, the login_time field may contain a value in the YYYY-MM-DD"T"HH24:MI:SSTZH:TZM format, for example, 2021-01-01T00:00:00+03:00. In this case, you need to specify 2021-01-01T00:00:00+03:00 in the Identity seed field, and in the SQL query, perform the conversion using the to_timestamp_tz function, for example:
select * from connections_tz where login_time > to_timestamp_tz(:val, 'YYYY-MM-DD"T"HH24:MI:SSTZH:TZM')
For details about the to_timestamp and to_timestamp_tz functions, please refer to the official Oracle documentation.
To interact with Oracle DB, you must install the libaio1 Astra Linux package.
- Firebird SQL. For example:
firebirdsql://{user}:{password}@{server}:{port}/{database}
A question mark (?) is used as a placeholder in the SQL query. If a problem occurs when connecting to Firebird on Windows, use the full path to the database file, for example:
firebirdsql://{user}:{password}@{server}:{port}/C:\Users\user\firebird\db.FDB
- ClickHouse
If TLS encryption is not used, by default, the connector works with ClickHouse only on port 9000. When using TLS encryption, by default, the connector works with ClickHouse only on port 9440. If TLS encryption mode is configured on the ClickHouse server but you have selected Disabled in the TLS mode drop-down list in the connector settings, or vice versa, the database connection cannot be established.
If you want to connect to the KUMA ClickHouse, in the SQL connector settings, specify the PublicPki secret type, which contains the base64-encoded PEM private key and the public key.
In the parameters of the SQL connector for the ClickHouse connection type, you need to select Disabled in the TLS mode drop-down list. This value must not be specified if a certificate is used for authentication. If in the TLS mode drop-down list, you select Custom CA, you need to specify the ID of a secret of the 'certificate' type in the Identity column field. You also need to select one of the following values in the Authorization type drop-down list:
- Disabled. If you select this value, you need to leave the Identity column field blank.
- Plain. Select this value if the Secret separately check box is selected and the ID of a secret of the 'credentials' type is specified in the Identity column field.
- PublicPki. Select this value if the Secret separately check box is selected and the ID of a secret of the 'PublicPki' type is specified in the Identity column field.
The Secret separately check box lets you specify the URL separately, not as part of the secret.
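The SQLite3 datetime conversion shown in the list above can be tried end to end with Python's built-in sqlite3 module (an illustrative sketch; the table contents are made up):

```python
import sqlite3

# In-memory stand-in for the 'connections' table from the SQLite3 example.
conn = sqlite3.connect(":memory:")
conn.execute("create table connections (user text, login_time text)")
conn.executemany(
    "insert into connections values (?, ?)",
    [("alice", "2020-12-31 23:00:00"), ("bob", "2021-06-01 12:30:00")],
)

# The identity seed, in the {date}T{time}Z format the documentation requires.
seed = "2021-01-01T00:10:00Z"
rows = conn.execute(
    "select * from connections "
    "where datetime(login_time) > datetime(?, 'utc') order by login_time",
    (seed,),
).fetchall()
```

Only the row logged after the seed timestamp is returned, which is exactly the behavior the SQL connector relies on to read new records.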
A sequential request for database information is supported in SQL queries. For example, if in the Query field you enter select * from <name of data table> where id > <placeholder>, the value of the Identity seed field is used as the placeholder value the first time you query the table. In addition, the service that utilizes the SQL connector saves the ID of the last read entry, and the ID of this entry will be used as the placeholder value in the next query to the database.
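The sequential request mechanism described above can be modeled in a few lines (a simplified sketch using SQLite; KUMA's actual implementation differs): the first query uses the identity seed as the placeholder, and each later query uses the last ID read.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table events (id integer primary key, msg text)")
conn.executemany("insert into events (msg) values (?)", [("a",), ("b",), ("c",)])

def poll(conn, last_id):
    """Run the query with the current placeholder; return rows and the new last ID."""
    rows = conn.execute(
        "select id, msg from events where id > ? order by id", (last_id,)
    ).fetchall()
    return rows, (rows[-1][0] if rows else last_id)

last_id = 0                            # identity seed used for the first query
batch1, last_id = poll(conn, last_id)  # reads all three existing rows
conn.execute("insert into events (msg) values ('d')")
batch2, last_id = poll(conn, last_id)  # reads only the newly inserted row
```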
Connector, file type
Connectors of the file type are used for getting data from text files when working with Windows and Linux agents. One line of a text file is considered to be one event. \n is used as the newline character.
If, when creating the collector, you specified a connector of the file type at the Transport step of the installation wizard, then at the Event parsing step, in the Mapping table, you can pass the name of the file being processed by the collector, or the path to the file, in a KUMA event field. To do this, in the Source column, specify one of the following values:
- $kuma_fileSourceName to pass the name of the file being processed by the collector in the KUMA event field.
- $kuma_fileSourcePath to pass the path to the file being processed by the collector in the KUMA event field.
When you use a file connector, these variables in the normalizer will only work with destinations of the internal type.
To read Windows files, you need to create a connector of the file type and manually install the agent on Windows. The Windows agent must not read its files in the folder where the agent is installed. The connector will work even with a FAT file system; if the disk is defragmented, the connector re-reads all files from scratch because all inodes of files are reset.
We do not recommend running the agent under an administrator account; read permissions for folders/files must be configured for the user account of the agent. We do not recommend installing the agent on important systems; it is preferable to send the logs and read them on dedicated hosts with the agent.
For each file that the connector of the file type interacts with, a state file (states.ini) is created with the offset, dev, inode, and filename parameters. The state file allows the connector to resume reading from the position where it last stopped instead of starting over when rereading the file. Some special considerations are involved in rereading files:
- If the inode parameter in the state file changes, the connector rereads the corresponding file from the beginning. When the file is deleted and recreated, the inode setting in the associated state file may remain unchanged. In this case, when rereading the file, the connector resumes reading in accordance with the offset parameter.
- If the file has been truncated or its size has become smaller, the connector starts reading from the beginning.
- If the file has been renamed, when rereading the file, the connector resumes reading from the position where it last stopped.
- If the directory with the file has been remounted, when rereading the file, the connector resumes reading from the position where it last stopped. You can specify the path to the files with which the connector interacts when configuring the connector in the File path field.
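The resume behavior described above (offset plus inode tracking) can be sketched in a few lines (an illustration only; KUMA's states.ini handling also tracks the dev and filename parameters):

```python
import os
import tempfile

def read_new_lines(path, state):
    """Resume reading from the saved offset; reread from the start if the
    inode changed or the file shrank (the cases described above)."""
    st = os.stat(path)
    if state.get("inode") != st.st_ino or st.st_size < state.get("offset", 0):
        state = {"inode": st.st_ino, "offset": 0}  # reread from the beginning
    with open(path) as f:
        f.seek(state["offset"])
        lines = f.readlines()
        state["offset"] = f.tell()  # remember where we stopped
    return lines, state

# demonstrate resuming: the second call returns only the appended line
path = os.path.join(tempfile.mkdtemp(), "app.log")
with open(path, "w") as f:
    f.write("one\n")
first, state = read_new_lines(path, {})
with open(path, "a") as f:
    f.write("two\n")
second, state = read_new_lines(path, state)
```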
Settings for a connector of the file type are described in the following tables.
Basic settings tab
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
Type |
Connector type: file. Required setting. |
Tags |
Tags for resource search. Optional setting. |
Path to the file |
The full path to the file that the connector interacts with. For example,
File and folder mask templates Limitations when using prefixes in file paths Limiting the number of files for watching by mask Required setting. |
Modification timeout, sec |
The time in seconds for which the file must not be updated for KUMA to apply the action specified in the Action after timeout drop-down list to the file. Default value: The entered value must not be less than the value that you entered on the Advanced settings in the Poll interval, sec field. |
Action after timeout |
The action that KUMA applies to the file after the time specified in the Modification timeout, sec:
|
Auditd |
This toggle switch enables the auditd mechanism to group auditd event lines received from the connector into an auditd event. If you enable this toggle switch, you cannot select a value in the Delimiter drop-down list because \n is automatically selected for the auditd mechanism. If you enable this toggle switch in the connector settings of the agent, you need to select \n in the Delimiter drop-down list in the connector settings of the collector to which the agent sends events. The maximum size of a grouped auditd event is approximately 4,174,304 characters. |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
Advanced settings tab
Setting |
Description |
---|---|
Debug |
This toggle switch enables resource logging. It is turned off by default. |
Buffer size |
Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB). |
Number of handlers |
Number of handlers that the service can run simultaneously to process response rules in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2. The value must be a positive integer. |
File/folder polling mode |
Specifies how the connector rereads files in the directory:
|
Poll interval, ms |
The interval in milliseconds at which the connector rereads files in the directory. Default value: The entered value must not be less than the value that you entered on the Basic settings tab in the Modification timeout, sec field. We recommend entering a value less than the value in the Event buffer TTL field; otherwise, auditd event processing performance may be adversely affected. |
Character encoding |
Character encoding. The default is UTF-8. |
Event buffer TTL |
Buffer lifetime for auditd event lines, in milliseconds. Auditd event lines enter the KUMA collector and accumulate in the buffer. This allows multiple auditd event lines to be grouped into a single auditd event. The buffer lifetime countdown begins when the first auditd event line is received or when the previous buffer lifetime expires. Possible values: 700 to 30,000. The default value is This field is available if you have enabled the Auditd toggle switch on the Basic settings tab. The auditd event lines accumulated in the buffer are kept in the RAM of the server. We recommend caution when increasing the buffer size because memory usage by the KUMA collector may become excessive. You can see how much server RAM the KUMA collector is using in KUMA metrics. If you want a buffer lifetime to exceed 30,000 milliseconds, we recommend using a different auditd event transport. For example, you can use an agent or pre-accumulate auditd events in a file, and then process this file with the KUMA collector. |
Transport header |
Regular expression for auditd events, which is used to identify auditd event lines. You can use the default value or edit it. The regular expression must contain the You can revert to the default regular expression for auditd events by clicking Reset to default value. |
Connector, 1c-log type
Connectors of the 1c-log type are used for getting data from 1C technology logs when working with Linux agents. \n is used as the newline character. The connector accepts only the first line from a multi-line event record.
Settings for a connector of the 1c-log type are described in the following tables.
Basic settings tab
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
Type |
Connector type: 1c-log. Required setting. |
Tags |
Tags for resource search. Optional setting. |
Directory path |
The full path to the directory with the files that you want to interact with, for example, Limitations when using prefixes in file paths Required setting. |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
Advanced settings tab
Setting |
Description |
---|---|
Debug |
This toggle switch enables resource logging. It is turned off by default. |
Buffer size |
Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB). |
File/folder polling mode |
Specifies how the connector rereads files in the directory:
|
Poll interval, ms |
The interval in milliseconds at which the connector rereads files in the directory. Default value: |
Character encoding |
Character encoding. The default is UTF-8. |
Connector operation diagram:
- All 1C technology log files are searched. Log file requirements:
- Files with the LOG extension are created in the log directory (/var/log/1c/logs/ by default) within a subdirectory for each process.
- Events are logged to a file for an hour; after that, the next log file is created.
- The file names have the following format: <YY><MM><DD><HH>.log. For example, 22111418.log is a file created in 2022, in the 11th month, on the 14th at 18:00.
- Each event starts with the event time in the following format: <mm>:<ss>.<microseconds>-<duration in microseconds>.
- The processed files are discarded. Information about processed files is stored in the file /<collector working directory>/1c_log_connector/state.json.
- Processing of the new events starts, and the event time is converted to the RFC3339 format.
- The next file in the queue is processed.
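The time conversion in step 3 follows from the two formats described above: the file name (&lt;YY&gt;&lt;MM&gt;&lt;DD&gt;&lt;HH&gt;.log) fixes the hour, and the event prefix (&lt;mm&gt;:&lt;ss&gt;.&lt;microseconds&gt;-&lt;duration&gt;) fixes the rest. A hedged sketch of combining them into RFC3339 (the helper is ours, and it assumes UTC, while real 1C logs are typically in the server's local time):

```python
import re
from datetime import datetime, timezone

# Event lines start with <mm>:<ss>.<microseconds>-<duration in microseconds>.
EVENT_TIME = re.compile(r"^(\d{2}):(\d{2})\.(\d+)-(\d+)")

def event_time_rfc3339(log_filename, event_line):
    """Combine the hour encoded in the file name (YYMMDDHH.log) with the
    minutes/seconds prefix of an event line into an RFC3339 timestamp."""
    base = datetime.strptime(log_filename[:8], "%y%m%d%H")
    m = EVENT_TIME.match(event_line)
    ts = base.replace(
        minute=int(m.group(1)),
        second=int(m.group(2)),
        microsecond=int(m.group(3)[:6].ljust(6, "0")),
        tzinfo=timezone.utc,  # assumption: treat log time as UTC
    )
    return ts.isoformat().replace("+00:00", "Z")

# "22111418.log" encodes 2022-11-14, hour 18; the event prefix adds 05:07.123456
stamp = event_time_rfc3339("22111418.log", "05:07.123456-42,CALL,...")
```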
Connector limitations:
- Installation of a collector with a 1c-log connector is not supported in a Windows operating system. To set up transfer of 1C log files for processing by the KUMA collector:
- On the Windows server, grant read access over the network to the folder with the 1C log files.
- On the Linux server, mount the shared folder with the 1C log files on the Windows server (see the list of supported operating systems).
- On the Linux server, install the collector that you want to process 1C log files from the mounted shared folder.
- Only the first line from a multi-line event record is processed.
- The normalizer processes only the following types of events:
- ADMIN
- ATTN
- CALL
- CLSTR
- CONN
- DBMSSQL
- DBMSSQLCONN
- DBV8DBENG
- EXCP
- EXCPCNTX
- HASP
- LEAKS
- LIC
- MEM
- PROC
- SCALL
- SCOM
- SDBL
- SESN
- SINTEG
- SRVC
- TLOCK
- TTIMEOUT
- VRSREQUEST
- VRSRESPONSE
Connector, 1c-xml type
Connectors of the 1c-xml type are used for getting data from 1C registration logs when working with Linux agents. When the connector handles multi-line events, it converts them into single-line events.
Settings for a connector of the 1c-xml type are described in the following tables.
Basic settings tab
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
Type |
Connector type: 1c-xml. Required setting. |
Tags |
Tags for resource search. Optional setting. |
Directory path |
The full path to the directory with the files that you want to interact with, for example, Limitations when using prefixes in file paths Required setting. |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
Advanced settings tab
Setting |
Description |
---|---|
Debug |
This toggle switch enables resource logging. It is turned off by default. |
Buffer size |
Buffer size in bytes for accumulating events in the RAM of the server before sending them for further processing or storage. The value must be a positive integer. Default buffer size: 1,048,576 bytes (1 MB). Maximum buffer size: 67,108,864 bytes (64 MB). |
File/folder polling mode |
Specifies how the connector rereads files in the directory:
|
Poll interval, ms |
The interval in milliseconds at which the connector rereads files in the directory. Default value: |
Character encoding |
Character encoding. The default is UTF-8. |
Connector operation diagram:
- The files containing 1C logs with the XML extension are searched within the specified directory. Logs are placed in the directory either manually or using an application written in the 1C language, for example, using the ВыгрузитьЖурналРегистрации() function. The connector only supports logs received this way. For more information on how to obtain 1C logs, see the official 1C documentation.
- Files are sorted by the last modification time in ascending order. All the files modified before the last read are discarded.
Information about processed files is stored in the file /<collector working directory>/1c_xml_connector/state.ini and has the following format: "offset=<number>\ndev=<number>\ninode=<number>".
- Events are defined in each unread file.
- Events from the file are processed one by one. Multi-line events are converted to single-line events.
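The state file format mentioned above can be read with a short sketch (the field values shown are hypothetical; the function name is illustrative, not a KUMA API):

```python
def parse_state_ini(text):
    """Parse the 1c-xml connector state file contents, which have the
    format 'offset=<number>\\ndev=<number>\\ninode=<number>'."""
    state = {}
    for line in text.strip().splitlines():
        key, sep, value = line.partition("=")
        if sep:
            state[key] = int(value)
    return state

# Hypothetical state file contents:
sample = "offset=4096\ndev=2049\ninode=131077"
# parse_state_ini(sample) → {"offset": 4096, "dev": 2049, "inode": 131077}
```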
Connector limitations:
- Installation of a collector with a 1c-xml connector is not supported in a Windows operating system. To set up transfer of 1C log files for processing by the KUMA collector:
- On the Windows server, grant read access over the network to the folder with the 1C log files.
- On the Linux server, mount the shared folder with the 1C log files on the Windows server (see the list of supported operating systems).
- On the Linux server, install the collector that you want to process 1C log files from the mounted shared folder.
- Files with an incorrect event format are not read. For example, if event tags in the file are in Russian, the collector does not read such events.
- If new events are appended to a file that the connector has already read, and this file is not the last file read in the directory, all events from the file are processed again.
Connector, diode type
Connectors of the diode type are used for unidirectional data transmission in industrial control system (ICS) networks using data diodes. Settings for a connector of the diode type are described in the following tables.
Basic settings tab
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
Type |
Connector type: diode. Required setting. |
Tags |
Tags for resource search. Optional setting. |
Directory with events from the data diode |
Full path to the directory on the KUMA collector server, into which the data diode moves files with events from the isolated network segment. After the connector has read these files, the files are deleted from the directory. Maximum length of the path: 255 Unicode characters. Limitations when using prefixes in paths Required setting. |
Delimiter |
The character that marks the boundary between events:
If you do not select a value in this drop-down list, \n is selected by default. You must select the same value in the Delimiter drop-down list in the settings of the connector and the destination being used to transmit events from the isolated network segment using a data diode. |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
Advanced settings tab
Setting |
Description |
---|---|
Debug |
This toggle switch enables resource logging. It is turned off by default. |
Number of handlers |
Number of handlers that the service can run simultaneously to process response rules in parallel. To determine the number of handlers, you can use the following formula: (<number of CPUs> / 2) + 2. The value must be a positive integer. |
Poll interval, sec |
Interval at which the files are read from the directory containing events from the data diode. The default value is 2 seconds. |
Character encoding |
Character encoding. The default is UTF-8. |
Compression |
Drop-down list for configuring Snappy compression:
You must select the same value in the Snappy drop-down list in the settings of the connector and the destination being used to transmit events from the isolated network segment using a data diode. |
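The sizing formula for the Number of handlers setting above can be expressed as a quick calculation (a minimal Python sketch; the function name is illustrative, not a KUMA API):

```python
import os

def recommended_handlers(cpu_count=None):
    # Documentation formula: (<number of CPUs> / 2) + 2
    cpus = cpu_count if cpu_count is not None else (os.cpu_count() or 1)
    return cpus // 2 + 2

# For example, on an 8-CPU server: recommended_handlers(8) → 6
```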
Connector, ftp type
Connectors of the ftp type are used for getting data over File Transfer Protocol (FTP) when working with Windows and Linux agents. Settings for a connector of the ftp type are described in the following tables.
Basic settings tab
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
Type |
Connector type: ftp. Required setting. |
Tags |
Tags for resource search. Optional setting. |
URL |
URL of file or file mask that begins with the If the URL does not contain the port number of the FTP server, port Required setting. |
Secret |
|
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
Advanced settings tab
Setting |
Description |
---|---|
Debug |
This toggle switch enables resource logging. It is turned off by default. |
Character encoding |
Character encoding. The default is UTF-8. |
Connector, nfs type
Connectors of the nfs type are used for getting data over Network File System (NFS) when working with Windows and Linux agents. Settings for a connector of the nfs type are described in the following tables.
Basic settings tab
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
Type |
Connector type: nfs. Required setting. |
Tags |
Tags for resource search. Optional setting. |
URL |
Path to the remote directory in the Required setting. |
File name mask |
A mask used to filter files containing events. The following wildcards are acceptable " |
Poll interval, sec |
Poll interval in seconds. The time interval after which files are re-read from the remote system. The default value is |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
Advanced settings tab
Setting |
Description |
---|---|
Debug |
This toggle switch enables resource logging. It is turned off by default. |
Character encoding |
Character encoding. The default is UTF-8. |
Connector, wmi type
Connectors of the wmi type are used for getting data using Windows Management Instrumentation when working with Windows agents. Settings for a connector of the wmi type are described in the following tables.
Basic settings tab
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
Type |
Connector type: wmi. Required setting. |
Tags |
Tags for resource search. Optional setting. |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
URL |
URL of the collector that you created to receive data using Windows Management Instrumentation, for example, When a collector is created, an agent is automatically created that will get data on the remote device and forward it to the collector service. If you know which server the collector service will be installed on, the URL is known in advance. You can enter the URL of the collector in the URL field after completing the installation wizard. To do so, you first need to copy the URL of the collector in the Resources → Active services section. Required setting. |
Default credentials |
No value. You need to specify credentials for connecting to hosts in the Remote hosts table. |
Remote hosts |
Settings of remote Windows devices to connect to.
You can add multiple remote Windows devices or remove a remote Windows device. To add a remote Windows device, click +Add. To remove a remote Windows device, select the check box next to it and click Delete. |
Advanced settings tab
Setting |
Description |
---|---|
Debug |
This toggle switch enables resource logging. It is turned off by default. |
Character encoding |
Character encoding. The default is UTF-8. |
TLS mode |
TLS encryption mode. When using TLS encryption, you cannot specify an IP address in the URL field on the Basic settings tab. Available values:
|
Compression |
Drop-down list for configuring Snappy compression:
|
If you edit a connector of this type, the TLS mode and Compression settings are visible and available both in the connector resource and in the collector. If you use a connector of this type in a collector, the values of the TLS mode and Compression settings are passed to the destination of automatically created agents.
Receiving events from a remote device
Conditions for receiving events from a remote Windows device hosting a KUMA agent:
- To start the KUMA agent on the remote device, you must use an account with the “Log on as a service” permission.
- To receive events from the KUMA agent, you must use an account with Event Log Readers permissions. For domain servers, one such user account can be created so that a group policy can be used to distribute its rights to read logs to all servers and workstations in the domain.
- TCP ports 135, 445, and 49152–65535 must be opened on the remote Windows devices.
- You must run the following services on the remote machines:
- Remote Procedure Call (RPC)
- RPC Endpoint Mapper
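Before deploying an agent, you can verify the port requirements above with a quick TCP reachability check (an illustrative Python sketch; `port_open` is not part of KUMA, and the dynamic RPC range must also be reachable):

```python
import socket

# Static ports required for WMI connections; the dynamic RPC range
# 49152-65535 must also be open on the remote Windows device.
WMI_STATIC_PORTS = (135, 445)

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```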
Connector, wec type
Connectors of the wec type are used for getting data using Windows Event Forwarding (WEF) and Windows Event Collector (WEC), or local operating system logs of a Windows host when working with Windows agents. Settings for a connector of the wec type are described in the following tables.
Basic settings tab
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
Type |
Connector type: wec. Required setting. |
Tags |
Tags for resource search. Optional setting. |
URL |
URL of the collector that you created to receive data using Windows Event Collector, for example, When a collector is created, an agent is automatically created that will get data on the remote device and forward it to the collector service. If you know which server the collector service will be installed on, the URL is known in advance. You can enter the URL of the collector in the URL field after completing the installation wizard. To do so, you first need to copy the URL of the collector in the Resources → Active services section. Required setting. |
Windows logs |
The names of the Windows logs that you want to get. By default, this drop-down list includes only preconfigured logs, but you can add custom logs to the list. To do so, enter the names of the custom logs in the Windows logs field, then press ENTER. KUMA service and resource configurations may require additional changes in order to process custom logs correctly. Preconfigured logs:
If the name of at least one log is specified incorrectly, the agent using the connector does not receive events from any log, even if the names of other logs are correct. |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
Advanced settings tab
Setting |
Description |
---|---|
Debug |
This toggle switch enables resource logging. It is turned off by default. |
Character encoding |
Character encoding. The default is UTF-8. |
TLS mode |
TLS encryption mode. When using TLS encryption, you cannot specify an IP address in the URL field on the Basic settings tab. Available values:
|
Compression |
Drop-down list for configuring Snappy compression:
|
If you edit a connector of this type, the TLS mode and Compression settings are visible and available both in the connector resource and in the collector. If you use a connector of this type in a collector, the values of the TLS mode and Compression settings are passed to the destination of automatically created agents.
To start the KUMA agent on the remote device, you must use a service account with the “Log on as a service” permission. To receive events from the operating system log, the service user account must also have Event Log Readers permissions.
You can create one user account with “Log on as a service” and “Event Log Readers” permissions, and then use a group policy to extend the rights of this account to read the logs to all servers and workstations in the domain.
We recommend that you disable interactive logon for the service account.
Connector, etw type
Connectors of the etw type are used for getting extended logs of DNS servers. Settings for a connector of the etw type are described in the following tables.
Basic settings tab
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
Type |
Connector type: etw. Required setting. |
Tags |
Tags for resource search. Optional setting. |
URL |
URL of the DNS server. Required setting. |
Session name |
Session name that corresponds to the ETW provider: Microsoft-Windows-DNSServer {EB79061A-A566-4698-9119-3ED2807060E7}. If the session name is specified incorrectly in a connector of the etw type, if an incorrect provider is specified in the session, or if an incorrect method is specified for sending events (for events to be sent correctly, you must specify the "Real time" or "File and Real time" mode on the Windows Server side), events do not arrive from the agent, an error is recorded in the agent log on Windows, and the status of the agent remains green. At the same time, no attempt is made to get events every 60 seconds. If you modify session settings on the Windows side, you must restart the etw agent and/or the session for the changes to take effect. For details about specifying session settings on the Windows side to receive DNS server events, see the Configuring receipt of DNS server events using the ETW agent section. Required setting. |
Extract event information |
This toggle switch enables the extraction of the minimum set of event information that can be obtained without having to download third-party metadata from the disk. This method helps conserve CPU resources on the computer with the agent. By default, this toggle switch is enabled and all event data is extracted. |
Extract event properties |
This toggle switch enables the extraction of event properties. If this toggle switch is disabled, event properties are not extracted, which helps save CPU resources on the machine with the agent. By default, this toggle switch is enabled and event properties are extracted. You can enable the Extract event properties switch only if the Extract event information toggle switch is enabled. |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
Advanced settings tab
Setting |
Description |
---|---|
Debug |
This toggle switch enables resource logging. It is turned off by default. |
Character encoding |
Character encoding. The default is UTF-8. |
TLS mode |
TLS encryption mode. When using TLS encryption, you cannot specify an IP address in the URL field on the Basic settings tab. Available values:
|
Compression |
Drop-down list for configuring Snappy compression:
|
If you edit a connector of this type, the TLS mode and Compression settings are visible and available both in the connector resource and in the collector. If you use a connector of this type in a collector, the values of the TLS mode and Compression settings are passed to the destination of automatically created agents.
Connector, snmp type
Connectors of the snmp type are used for getting data over Simple Network Management Protocol (SNMP) when working with Windows and Linux agents. To process events received over SNMP, you must use the json normalizer. Supported SNMP versions:
- snmpV1
- snmpV2
- snmpV3
Only one snmp connector created in the agent settings can be used in an agent. If you need to use multiple snmp connectors, you must create each snmp connector as a separate resource and select it in the connection settings.
Available settings for a connector of the snmp type are described in the following tables.
Basic settings tab
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
Type |
Connector type: snmp. Required setting. |
Tags |
Tags for resource search. Optional setting. |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
SNMP resource |
Settings for connecting to an SNMP resource:
You can add multiple connections to SNMP resources or delete an SNMP resource connection. To create a connection to an SNMP resource, click the + SNMP resource button. To delete a connection to an SNMP resource, click the delete |
Settings |
Rules for naming the received data, according to which OIDs (object identifiers) are converted to the keys with which the normalizer can interact. Available settings:
You can do the following with rules:
|
Advanced settings tab
Setting |
Description |
---|---|
Debug |
This toggle switch enables resource logging. It is turned off by default. |
Character encoding |
Character encoding. The default is UTF-8. |
Connector, snmp-trap type
Connectors of the snmp-trap type are used for passively receiving events using SNMP traps when working with Windows and Linux agents. The connector receives snmp-trap events and prepares them for normalization by mapping SNMP object IDs to temporary keys. Then the message is passed to the JSON normalizer, where the temporary keys are mapped to the KUMA fields and an event is generated. To process events received over SNMP, you must use the json normalizer. Supported SNMP versions:
- snmpV1
- snmpV2
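The OID-to-temporary-key mapping described above can be sketched as follows (the rules, OIDs, and function name are hypothetical examples, not KUMA defaults):

```python
# Hypothetical naming rules: OIDs received in a trap are converted to
# temporary keys that a json normalizer can then map to KUMA fields.
OID_RULES = {
    "1.3.6.1.2.1.1.3.0": "sysUpTime",
    "1.3.6.1.2.1.1.5.0": "sysName",
}

def map_varbinds(varbinds):
    """varbinds: iterable of (oid, value) pairs from an SNMP trap.
    OIDs without a naming rule keep the OID string as the key."""
    return {OID_RULES.get(oid, oid): value for oid, value in varbinds}
```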
Settings for a connector of the snmp-trap type are described in the following tables.
Basic settings tab
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
Type |
Connector type: snmp-trap. Required setting. |
Tags |
Tags for resource search. Optional setting. |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
SNMP resource |
Connection settings for receiving snmp-trap events:
You can add multiple connections or delete a connection. To add a connection, click the + SNMP resource button. To remove an SNMP resource, click the delete |
Settings |
Rules for naming the received data, according to which OIDs (object identifiers) are converted to the keys with which the normalizer can interact. Available settings:
You can do the following with rules:
|
Advanced settings tab
Setting |
Description |
---|---|
Debug |
This toggle switch enables resource logging. It is turned off by default. |
Character encoding |
Character encoding. The default is UTF-8. When receiving snmp-trap events from Windows with Russian localization, if you encounter invalid characters in the event, we recommend changing the character encoding in the snmp-trap connector to Windows 1251. |
Configuring a Windows device to send SNMP trap messages to the KUMA collector proceeds in stages:
- Configuring and starting the SNMP and SNMP trap services
- Configuring the Event to Trap Translator service
Events from the source of SNMP trap messages must be received by the KUMA collector, which uses a connector of the snmp-trap type and a json normalizer.
To configure and start the SNMP and SNMP trap services in Windows 10:
- Open Settings → Apps → Apps and features → Optional features → Add feature → Simple Network Management Protocol (SNMP) and click Install.
- Wait for the installation to complete and restart your computer.
- Make sure that the SNMP service is running. If any of the following services are not running, enable them:
- Services → SNMP Service.
- Services → SNMP Trap.
- Right-click Services → SNMP Service, and in the context menu select Properties. Specify the following settings:
- On the Log On tab, select the Local System account check box.
- On the Agent tab, fill in the Contact (for example, specify User-win10) and Location (for example, specify detroit) fields.
- On the Traps tab:
- In the Community Name field, enter community public and click Add to list.
- In the Trap destination field, click Add, specify the IP address or host of the KUMA server on which the collector that waits for SNMP events is deployed, and click Add.
- On the Security tab:
- Select the Send authentication trap check box.
- In the Accepted community names table, click Add, enter Community Name public and specify READ WRITE as the Community rights.
- Select the Accept SNMP packets from any hosts check box.
- Click Apply and confirm your selection.
- Right-click Services → SNMP Service and select Restart.
To configure and start the SNMP and SNMP trap services in Windows XP:
- Open Start → Control Panel → Add or Remove Programs → Add / Remove Windows Components → Management and Monitoring Tools → Details.
- Select Simple Network Management Protocol and WMI SNMP Provider, and then click OK → Next.
- Wait for the installation to complete and restart your computer.
- Make sure that the SNMP service is running. If any of the following services are not running, enable them by setting the Startup type to Automatic:
- Services → SNMP Service.
- Services → SNMP Trap.
- Right-click Services → SNMP Service, and in the context menu select Properties. Specify the following settings:
- On the Log On tab, select the Local System account check box.
- On the Agent tab, fill in the Contact (for example, specify User-win10) and Location (for example, specify detroit) fields.
- On the Traps tab:
- In the Community Name field, enter community public and click Add to list.
- In the Trap destination field, click Add, specify the IP address or host of the KUMA server on which the collector that waits for SNMP events is deployed, and click Add.
- On the Security tab:
- Select the Send authentication trap check box.
- In the Accepted community names table, click Add, enter Community Name public and specify READ WRITE as the Community rights.
- Select the Accept SNMP packets from any hosts check box.
- Click Apply and confirm your selection.
- Right-click Services → SNMP Service and select Restart.
Changing the port for the SNMP trap service
You can change the SNMP trap service port if necessary.
To change the port of the SNMP trap service:
- Open the C:\Windows\System32\drivers\etc folder.
- Open the services file in Notepad as an administrator.
- In the service name section of the file, specify the snmp-trap connector port added to the KUMA collector for the SNMP trap service.
- Save the file.
- Open the Control Panel and select Administrative Tools → Services.
- Right-click SNMP Service and select Restart.
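For illustration, the edited entry in the services file might look like this (port 5162 is a hypothetical snmp-trap connector port; the default snmptrap entry uses 162/udp):

```
# C:\Windows\System32\drivers\etc\services (excerpt)
# Default entry:
#   snmptrap  162/udp  snmp-trap
# Changed to match the snmp-trap connector port added to the collector:
snmptrap  5162/udp  snmp-trap
```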
To configure the Event to Trap Translator service that translates Windows events to SNMP trap messages:
- In the command line, type evntwin and press Enter.
- Under Configuration type, select Custom, and click the Edit button.
- In the Event sources group of settings, use the Add button to find and add the events that you want to send to KUMA collector with the SNMP trap connector installed.
- Click the Settings button, in the opened window, select the Don't apply throttle check box, and click OK.
- Click Apply and confirm your selection.
Connector, kata/edr type
Connectors of the kata/edr type are used for getting KEDR data via the API. Settings for a connector of the kata/edr type are described in the following tables.
Basic settings tab
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
Type |
Connector type: kata/edr. Required setting. |
Tags |
Tags for resource search. Optional setting. |
URL |
URL that you want to connect to. The following URL formats are supported:
You can add multiple URLs or remove a URL. To add a URL, click the + Add button. To remove a URL, click the delete Required setting. |
Secret |
Secret that stores the credentials for connecting to the KATA/EDR server. You can select an existing secret or create a new secret. To create a new secret, select Create new. If you want to edit the settings of an existing secret, click the pencil Required setting. |
External ID |
Identifier for external systems. KUMA automatically generates an ID and populates this field with it. |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
Advanced settings tab
Setting |
Description |
---|---|
Debug |
This toggle switch enables resource logging. It is turned off by default. |
Character encoding |
Character encoding. We only recommend configuring a conversion if you find invalid characters in the fields of the normalized event. By default, no value is selected. |
Number of events |
Maximum number of events in one request. By default, the value set on the KATA/EDR server is used. |
Events fetch timeout |
The time in seconds to wait for receipt of events from the KATA/EDR server. Default value: |
Client timeout |
Time in seconds to wait for a response from the KATA/EDR server. Default value: |
KEDRQL filter |
Filter of requests to the KATA/EDR server. For more details on the query language, please refer to the KEDR Help. |
Connector, vmware type
Connectors of the vmware type are used for getting VMware vCenter data via the API. Settings for a connector of the vmware type are described in the following tables.
Basic settings tab
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
Type |
Connector type: vmware. Required setting. |
Tags |
Tags for resource search. Optional setting. |
URL |
URL of the VMware API. You need to include the hostname and port number in the URL. You can only specify one URL. Required setting. |
VMware credentials |
Secret that stores the user name and password for connecting to the VMware API. You can select an existing secret or create a new secret. To create a new secret, select Create new. If you want to edit the settings of an existing secret, click the pencil Required setting. |
Client timeout |
Time to wait after a request that did not return events before making a new request. The default value is 5 seconds. If you specify |
Maximum number of events |
Number of events requested from the VMware API in one request. The default value is |
Start timestamp |
Starting date and time from which you want to read events from the VMware API. By default, events are read from the VMware API from the time when the collector was started. If the collector is started again after being stopped, events are read from the last saved date. |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
Advanced settings tab
Setting |
Description |
---|---|
Debug |
This toggle switch enables resource logging. It is turned off by default. |
Character encoding |
Character encoding. The default is UTF-8. |
TLS mode |
TLS encryption mode. When using TLS encryption, you cannot specify an IP address in the URL field on the Basic settings tab. Available values:
|
Connector, elastic type
Connectors of the elastic type are used for getting Elasticsearch data. Elasticsearch version 7.0.0 is supported. Settings for a connector of the elastic type are described in the following tables.
Basic settings tab
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
Type |
Connector type: elastic. Required setting. |
Tags |
Tags for resource search. Optional setting. |
Connection |
Elasticsearch server connection settings:
You can add multiple Elasticsearch server connections or delete a connection. To add an Elasticsearch server connection, click the + Add connection button. To delete an Elasticsearch server connection, click the delete |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
Advanced settings tab
Setting |
Description |
---|---|
Debug |
This toggle switch enables resource logging. It is turned off by default. |
Character encoding |
Character encoding. The default is UTF-8. |
Connector, office365 type
Connectors of the office365 type are used for receiving Microsoft 365 (Office 365) data via the API.
Available settings for a connector of the office365 type are described in the following tables.
Basic settings tab
Setting |
Description |
---|---|
Name |
Unique name of the resource. The maximum length of the name is 128 Unicode characters. Required setting. |
Tenant |
The name of the tenant that owns the resource. Required setting. |
Type |
Connector type: office365. Required setting. |
Tags |
Tags for resource search. Optional setting. |
Office365 content types |
Content types that you want to receive in KUMA. The following content types are available, providing information about actions and events in Microsoft 365, grouped by information source:
You can find detailed information about the properties of the available content types and related events in the schema on the Microsoft website. Required setting. You can select one or more content types. |
Office365 tenant ID |
Unique ID that you get after registering an account with Microsoft 365. If you do not have one, contact your administrator or Microsoft. Required setting. |
Office365 client ID |
Unique ID that you get after registering an account with Microsoft 365. If you do not have one, contact your administrator or Microsoft. Required setting. |
Authorization |
Authorization method for connecting to Microsoft 365. The following authorization methods are available:
For more information, see the section on secrets. |
Office365 credentials |
The field becomes available after selecting the authorization method. You can select one of the available authorization secrets or create a new secret of the selected type. Required setting. |
Description |
Description of the resource. The maximum length of the description is 4000 Unicode characters. |
Advanced settings tab
Setting |
Description |
---|---|
Debug |
This switch enables resource logging. The toggle switch is turned off by default. |
Character encoding |
Character encoding. The default is UTF-8. |
Authentication host |
The URL that is used for connection and authorization. By default, a connection is made to https://login.microsoftonline.com. |
Resource host |
URL from which the events are to be received. The default address is https://manage.office.com. |
Retrospective analysis interval, hours |
The period for which all new events are requested, in hours. To avoid losing some events, it is important to set overlapping event reception intervals, because some types of Microsoft 365 content may be sent with a delay. In this case, previously received events are not duplicated. By default, all new events for the last 12 hours are requested. |
Request timeout, sec |
Time to wait for a response to a request to get new events, in seconds. The default response timeout is 30 seconds. |
Repeat interval, sec |
The time in seconds after which a failed request to get new events must be repeated. By default, a request to get new events is repeated 10 seconds after getting an error or no response within the specified timeout. |
Clear interval, sec |
How often obsolete data is deleted, in seconds. The minimum value is 300 seconds. By default, obsolete data is deleted every 1800 seconds. |
Poll interval, min |
How often requests for new events are sent, in minutes. By default, requests are sent every 10 minutes. |
Proxy server |
Proxy settings, if necessary to connect to Microsoft 365. You can select one of the available proxy servers or create a new proxy server. |
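The interplay between the poll interval and the retrospective analysis interval can be sketched in a few lines. The following is an illustrative model, not KUMA code: each request covers the past `lookback_hours`, so consecutive request windows overlap, and deduplication by event ID is what keeps the overlap from producing duplicates. Function and variable names are invented for the example.

```python
from datetime import datetime, timedelta

def poll_windows(start, poll_interval_min, lookback_hours, polls):
    """Yield (window_start, window_end) for each request.

    Every request asks for events over the past `lookback_hours`
    (the retrospective analysis interval), so consecutive windows
    overlap and Microsoft 365 content delivered with a delay is
    still picked up by a later poll.
    """
    for i in range(polls):
        now = start + timedelta(minutes=poll_interval_min * i)
        yield now - timedelta(hours=lookback_hours), now

# Overlap alone would duplicate events, so ingestion deduplicates
# by event ID, matching the documented "previously received events
# are not duplicated" behavior.
seen = set()

def ingest(events):
    new = [e for e in events if e["id"] not in seen]
    seen.update(e["id"] for e in new)
    return new
```

With the defaults (poll every 10 minutes, look back 12 hours), adjacent windows overlap by almost the full 12 hours, which is why delayed content types are not lost.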
Predefined connectors
The connectors listed in the table below are included in the KUMA distribution kit.
Predefined connectors
Connector name |
Comment |
[OOTB] Continent SQL |
Obtains events from the database of the Continent hardware and software encryption system. To use it, you must configure the settings of the corresponding secret type. |
[OOTB] InfoWatch Trafic Monitor SQL |
Obtains events from the database of the InfoWatch Traffic Monitor system. To use it, you must configure the settings of the corresponding secret type. |
[OOTB] KSC MSSQL |
Obtains events from the MS SQL database of the Open Single Management Platform system. To use it, you must configure the settings of the corresponding secret type. |
[OOTB] KSC MySQL |
Obtains events from the MySQL database of the Open Single Management Platform system. To use it, you must configure the settings of the corresponding secret type. |
[OOTB] KSC PostgreSQL |
Obtains events from the PostgreSQL database of the Open Single Management Platform 15.0 system. To use it, you must configure the settings of the corresponding secret type. |
[OOTB] Oracle Audit Trail SQL |
Obtains audit events from the Oracle database. To use it, you must configure the settings of the corresponding secret type. |
[OOTB] SecretNet SQL |
Obtains events from the SecretNet SQL database. To use it, you must configure the settings of the corresponding secret type. |
Secrets
Secrets are used to securely store sensitive information, such as user names and passwords, that KUMA needs to interact with external services. Note that if a secret stores account credentials, then when the collector connects to the event source, the account specified in the secret may be blocked in accordance with the password policy configured in the event source system.
Secrets can be used in the following KUMA services and features:
- Collector (when using TLS encryption).
- Connector (when using TLS encryption).
- Destinations (when using TLS encryption or authorization).
- Proxy servers.
Available settings:
- Name (required)—a unique name for this type of resource. Must contain 1 to 128 Unicode characters.
- Tenant (required)—name of the tenant that owns the resource.
- Type (required)—the type of secret.
When you select the type in the drop-down list, the parameters for configuring this secret type also appear. These parameters are described below.
- Description—up to 4,000 Unicode characters.
Depending on the secret type, different fields are available. You can select one of the following secret types:
- credentials—this type of secret is used to store account credentials required to connect to external services, such as SMTP servers. If you select this type of secret, you must fill in the User and Password fields. If the Secret resource uses the 'credentials' type to connect the collector to an event source, for example, a database management system, the account specified in the secret may be blocked in accordance with the password policy configured in the event source system.
- token—this secret type is used to store tokens for API requests. Tokens are used when connecting to IRP systems, for example. If you select this type of secret, you must fill in the Token field.
- ktl—this secret type is used to store Kaspersky Threat Intelligence Portal account credentials. If you select this type of secret, you must fill in the following fields:
- User and Password (required fields)—user name and password of your Kaspersky Threat Intelligence Portal account.
- PFX file (required)—lets you upload a Kaspersky Threat Intelligence Portal certificate key.
- PFX password (required)—the password for accessing the Kaspersky Threat Intelligence Portal certificate key.
- urls—this secret type is used to store URLs for connecting to SQL databases and proxy servers. In the Description field, you must provide a description of the connection for which you are using the secret of urls type.
You can specify URLs in the following formats: hostname:port, IPv4:port, IPv6:port, :port.
- pfx—this type of secret is used for importing a PFX file containing certificates. If you select this type of secret, you must fill in the following fields:
- PFX file (required)—this is used to upload a PFX file. The file must contain a certificate and key. PFX files may include CA-signed certificates for server certificate verification.
- PFX password (required)—this is used to enter the password for accessing the certificate key.
- kata/edr—this type of secret is used to store the certificate file and private key required when connecting to the Kaspersky Endpoint Detection and Response server. If you select this type of secret, you must upload the following files:
- Certificate file—KUMA server certificate.
The file must be in PEM format. You can upload only one certificate file.
- Private key for encrypting the connection—KUMA server RSA key.
The key must be without a password and with the PRIVATE KEY header. You can upload only one key file.
You can generate certificate and key files by clicking the button.
- snmpV1—this type of secret is used to store the Community access values (for example, public or private) required for interaction over the Simple Network Management Protocol.
- snmpV3—this type of secret is used for storing data required for interaction over the Simple Network Management Protocol. If you select this type of secret, you must fill in the following fields:
- User—user name indicated without a domain.
- Security Level—security level of the user.
- NoAuthNoPriv—messages are forwarded without authentication and without ensuring confidentiality.
- AuthNoPriv—messages are forwarded with authentication but without ensuring confidentiality.
- AuthPriv—messages are forwarded with authentication and ensured confidentiality.
You may see additional settings depending on the selected level.
- Password—SNMP user authentication password. This field becomes available when the AuthNoPriv or AuthPriv security level is selected.
- Authentication Protocol—the following protocols are available: MD5, SHA, SHA224, SHA256, SHA384, SHA512. This field becomes available when the AuthNoPriv or AuthPriv security level is selected.
- Privacy Protocol—protocol used for encrypting messages. Available protocols: DES, AES. This field becomes available when the AuthPriv security level is selected.
- Privacy password—encryption password that was set when the SNMP user was created. This field becomes available when the AuthPriv security level is selected.
- certificate—this secret type is used for storing certificate files. Files are uploaded to a resource by clicking the Upload certificate file button. X.509 certificate public keys in Base64 are supported.
- fingerprint—this type of secret is used to store the Elastic fingerprint value that can be used when connecting to the Elasticsearch server.
- PublicPKI—this type of secret is used to connect a KUMA collector to ClickHouse. If you select this option, you must specify the secret containing the base64-encoded PEM private key and the public key.
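The accepted address formats for the urls secret type (hostname:port, IPv4:port, IPv6:port, :port) can be checked with a short validator. This is an illustrative sketch, not KUMA's own validation: the regular expression is deliberately loose, and the assumption that IPv6 addresses are written in brackets is mine, not the documentation's.

```python
import re

# Loose check for the documented urls-secret formats:
# hostname:port, IPv4:port, IPv6:port, :port.
# Bracketed IPv6 is an assumption for this sketch.
_URL_RE = re.compile(
    r"^("
    r"[A-Za-z0-9.-]*"        # hostname or dotted IPv4; empty allowed for :port
    r"|\[[0-9A-Fa-f:]+\]"    # bracketed IPv6 literal
    r"):(\d{1,5})$"
)

def is_valid_secret_url(value: str) -> bool:
    m = _URL_RE.match(value)
    if not m:
        return False
    return 0 < int(m.group(2)) <= 65535  # port must be 1..65535
```

Such a check is useful when preparing connection lists in bulk before pasting them into a urls secret.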
Predefined secrets
The secrets listed in the table below are included in the KUMA distribution kit.
Predefined secrets
Secret name |
Description |
[OOTB] Continent SQL connection |
Stores confidential data and settings for connecting to the APKSh Kontinent database. To use it, you must specify the login name and password of the database. |
[OOTB] KSC MSSQL connection |
Stores confidential data and settings for connecting to the MS SQL database of Open Single Management Platform (KSC). To use it, you must specify the login name and password of the database. |
[OOTB] KSC MySQL Connection |
Stores confidential data and settings for connecting to the MySQL database of Open Single Management Platform (KSC). To use it, you must specify the login name and password of the database. |
[OOTB] Oracle Audit Trail SQL Connection |
Stores confidential data and settings for connecting to the Oracle database. To use it, you must specify the login name and password of the database. |
[OOTB] SecretNet SQL connection |
Stores confidential data and settings for connecting to the MS SQL database of the SecretNet system. To use it, you must specify the login name and password of the database. |
Context tables
A context table is a container for a data array that is used by KUMA correlators for analyzing events in accordance with correlation rules. You can create context tables in the Resources section. The context table data is stored only in the correlator to which it was added using filters or actions in correlation rules.
You can populate context tables automatically using correlation rules of 'simple' and 'operational' types or import a file with data for the context table.
You can add, copy, and delete context tables, as well as edit their settings.
Context tables can be used in the following KUMA services and features:
The same context table can be used in multiple correlators. However, a separate entity of the context table is created for each correlator. Therefore, the contents of the context tables used by different correlators are different even if the context tables have the same name and ID.
Only data based on correlation rules of the correlator are added to the context table.
You can add, edit, delete, import, and export records in the context table of the correlator.
When records are deleted from context tables after their lifetime expires, service events are generated in the correlators. These events only exist in the correlators, and they are not redirected to other destinations. Service events are sent for processing by correlation rules of that correlator which uses the context table. Correlation rules can be configured to track these events so that they can be used to process events and identify threats.
Service event fields for deleting an entry from a context table are described below.
Event field |
Value or comment |
|
Event ID |
|
Time when the expired entry was deleted |
|
Correlator ID |
|
Correlator name |
|
Context table ID |
|
Key of the expired entry |
|
Number of updates for the deleted entry, incremented by one |
|
Name of the context table. |
|
Depending on the type of the entry that dropped out of the context table, the dropped-out entry is recorded in the corresponding type of event field, for example: S.<context table field> = <context table field value>, SA.<context table field> = <array of context table field values>.
Context table records of the boolean type have the following format: S.<context table field> = true/false, SA.<context table field> = false,true,false |
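The S./SA. naming convention in these service events can be handled generically. The following sketch splits an event's fields into scalar and array context table values; it is based only on the format described above, and the parsing logic is an assumption rather than the KUMA event schema.

```python
def parse_context_fields(event: dict) -> dict:
    """Split a correlator service event into scalar (S.) and
    array (SA.) context table fields.

    Scalar fields arrive as 'S.<field> = <value>'; array fields
    as 'SA.<field>' with comma-separated values, e.g.
    'false,true,false' for a boolean list.
    """
    scalars, arrays = {}, {}
    for key, value in event.items():
        if key.startswith("SA."):
            arrays[key[3:]] = value.split(",")  # array field: split on commas
        elif key.startswith("S."):
            scalars[key[2:]] = value            # scalar field: keep as-is
    return {"scalar": scalars, "array": arrays}
```

A correlation rule tracking expired entries would typically filter on these reconstructed field names.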
Viewing the list of context tables
To view the context table list of the correlator:
- In the KUMA Console, select the Resources section.
- In the Services section, click the Active services button.
- In the context menu of the correlator for which you want to view context tables, select Go to context tables.
The Correlator context tables list is displayed.
The table contains the following data:
- Name—name of the context table.
- Size on disk—size of the context table.
- Directory—path to the context table on the KUMA correlator server.
Adding a context table
To add a context table:
- In the KUMA Console, select the Resources section.
- In the Resources section, click Context tables.
- In the Context tables window, click Add.
This opens the Create context table window.
- In the Name field, enter a name for the context table.
- In the Tenant drop-down list, select the tenant that owns the resource.
- In the TTL field, specify the time for which a record added to the context table is stored in it.
When the specified time expires, the record is deleted. The time is specified in seconds. The maximum value is 31536000 (1 year). The default value is 0. If the value of the field is 0, the record is stored indefinitely.
- In the Description field, provide any additional information.
You can use up to 4,000 Unicode characters.
This field is optional.
- In the Schema section, specify which fields the context table has and the data types of the fields.
Depending on the data type, a field may or may not be a key field. At least one field in the table must be a key field. The names of all fields must be unique.
To add a table row, click Add and fill in the table fields:
- In the Name field, enter the name of the field. The maximum length is 128 characters.
- In the Type drop-down list, select the data type for the field.
- If you want to make a field a key field, select the Key field check box.
A table can have multiple key fields. Key fields are chosen when the context table is created, uniquely identify a table entry and cannot be changed.
If a context table has multiple key fields, each table entry is uniquely identified by multiple fields (composite key).
- Add the required number of context table rows.
After saving the context table, the schema cannot be changed.
- Click the Save button.
The context table is added.
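The schema, composite key, and TTL semantics described above can be modeled in a few lines. This is a minimal in-memory sketch for illustration; class and method names are invented, and the behavior is inferred from the documentation, not from KUMA internals.

```python
import time

class ContextTable:
    """Minimal model of a context table: records are keyed by the
    combination of key fields (composite key) and expire after
    `ttl` seconds; ttl == 0 means the record is stored indefinitely."""

    def __init__(self, key_fields, ttl=0):
        self.key_fields = key_fields
        self.ttl = ttl
        self._rows = {}  # composite key tuple -> (record, expires_at)

    def put(self, record, now=None):
        now = time.time() if now is None else now
        # The composite key uniquely identifies the record.
        key = tuple(record.get(f, "") for f in self.key_fields)
        expires = None if self.ttl == 0 else now + self.ttl
        self._rows[key] = (record, expires)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        row = self._rows.get(tuple(key))
        if row is None:
            return None
        record, expires = row
        if expires is not None and now >= expires:
            del self._rows[tuple(key)]  # expired record is dropped
            return None
        return record
```

Note that, as in KUMA, the key fields are fixed when the table is created: changing them would change every record's identity, which is why the schema cannot be edited after saving.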
Viewing context table settings
To view the context table settings:
- In the KUMA Console, select the Resources section.
- In the Resources section, click Context tables.
- In the list in the Context tables window, select the context table whose settings you want to view.
This opens the context table settings window. It displays the following information:
- Name—unique name of the resource.
- Tenant—the name of the tenant that owns the resource.
- TTL—the record added to the context table is stored in it for this duration. This value is specified in seconds.
- Description—any additional information about the resource.
- Schema is an ordered list of fields and their data types, with key fields marked.
Editing context table settings
To edit context table settings:
- In the KUMA Console, select the Resources section.
- In the Resources section, click Context tables.
- In the list in the Context tables window, select the context table whose settings you want to edit.
- Specify the values of the following parameters:
- Name—unique name of the resource.
- TTL—the record added to the context table is stored in it for this duration. This value is specified in seconds.
- Description—any additional information about the resource.
- Schema is an ordered list of fields and their data types, with key fields marked. If the context table is not used in a correlation rule, you can edit the list of fields.
If you want to edit the schema in a context table that is already being used in a correlation rule, follow the steps below.
The Tenant field is not editable.
- Click Save.
To edit the settings of the context table previously used by the correlator:
- Export data from the table.
- Copy and save the path to the file with the data of the table on the disk of the correlator. This path is specified in the Directory column in the Correlator context tables window. You will need this path later to delete the file from the disk of the correlator.
- Delete the context table from the correlator.
- Edit context table settings as necessary.
- Delete the file with data of the table on the disk of the correlator at the path from step 2.
- To apply the changes (delete the table), update the configuration of the correlator: in the Resources → Active services section, in the list of services, select the check box next to the relevant correlator and click Update configuration.
- Add the context table in which you edited the settings to the correlator.
- To apply the changes (add a table), update the configuration of the correlator: in the Resources → Active services section, in the list of services, select the check box next to the relevant correlator and click Update configuration.
- Adapt the fields in the exported table (see step 1) so that they match the fields of the table that you uploaded to the correlator at step 7.
- Import the adapted data to the context table.
The configuration of the context table is updated.
Duplicating context table settings
To copy a context table:
- In the KUMA Console, select the Resources section.
- In the Resources section, click Context tables.
- Select the check box next to the context table that you want to copy.
- Click Duplicate.
- Specify the necessary settings.
- Click the Save button.
The context table is copied.
Deleting a context table
You can delete only those context tables that are not used in any of the correlators.
To delete a context table:
- In the KUMA Console, select the Resources section.
- In the Resources section, click Context tables.
- Select the check boxes next to the context tables that you want to delete.
To delete all context tables, select the check box next to the Name column.
At least one check box must be selected.
- Click the Delete button.
- Click OK.
The context tables are deleted.
Viewing context table records
To view a list of context table records:
- In the KUMA Console, select the Resources section.
- In the Services section, click the Active services button.
- In the context menu of the correlator for which you want to view the context table, select Go to context tables.
This opens the Correlator context tables window.
- In the Name column, select the relevant context table.
The list of records for the selected context table is displayed.
The list contains the following data:
- Key is the composite key of the record. It is composed of the values of one or more key fields, separated by the "|" character. If one of the key field values is absent, the separator character is still displayed.
For example, a record key consists of three fields: DestinationAddress, DestinationPort, and SourceUserName. If the last two fields do not contain values, the record key is displayed as follows: 43.65.76.98| |.
- Record repetitions is the total number of times the record was mentioned in events and identical records were downloaded when importing context tables to KUMA.
- Expiration date – date and time when the record must be deleted.
If the TTL field had the value of 0 when the context table was created, the records of this context table are retained for 36,000 days (approximately 100 years).
- Updated is the date and time when the context table was updated.
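The composite key rendering described above can be expressed as a one-line join. In this sketch the field names mirror the documentation's example, and rendering an absent value as a single space is an assumption about the UI's exact whitespace.

```python
KEY_FIELDS = ["DestinationAddress", "DestinationPort", "SourceUserName"]

def record_key(record, key_fields):
    # Join key field values with "|"; an absent value still
    # contributes its separator, rendered here as a single space
    # (the exact whitespace shown in the UI is an assumption).
    return "|".join(str(record[f]) if record.get(f) not in (None, "")
                    else " " for f in key_fields)
```

This mirrors the documented display: a record with only DestinationAddress set renders as `43.65.76.98| | `, while a fully populated record renders as `1.2.3.4|443|bob`.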
Searching context table records
To find a record in the context table:
- In the KUMA Console, select the Resources section.
- In the Services section, click the Active services button.
- In the context menu of the correlator in whose context table you want to find a record, select Go to context tables.
This opens the Correlator context tables window.
- In the Name column, select your context table.
This opens a window with the records of the selected context table.
- In the Search field, enter the record key value or several characters from the key.
The list of context table records displays only the records whose key contains the entered characters.
If your search query matches records with empty key values, the text <Nothing found> is displayed in the widget on the Dashboard. We recommend refining your search query.
Adding a context table record
To add a record to the context table:
- In the KUMA Console, select the Resources section.
- In the Services section, click the Active services button.
- In the context menu of the correlator to whose context table you want to add a record, select Go to context tables.
This opens the Correlator context tables window.
- In the Name column, select the relevant context table.
The list of records for the selected context table is displayed.
- Click Add.
The Create record window opens.
- In the Value field, specify the values for fields in the Field column.
KUMA takes field names from the correlation rules with which the context table is associated. These names are not editable. The list of fields cannot be edited.
If you do not specify some of the field values, the missing fields, including key fields, are populated with default values. The key of the record is determined from the full set of fields, and the record is added to the table. If an identical key already exists in the table, an error is displayed.
- Click the Save button.
The record is added.
Editing a context table record
To edit a record in the context table:
- In the KUMA Console, select the Resources section.
- In the Services section, click the Active services button.
- In the context menu of the correlator for which you want to edit the context table, select Go to context tables.
This opens the Correlator context tables window.
- In the Name column, select the relevant context table.
The list of records for the selected context table is displayed.
- Click on the row of the record that you want to edit.
- Specify your values in the Value column.
- Click the Save button.
The record is overwritten.
Restrictions when editing a record:
- The value of the key field of the record is not available for editing. You can change it by exporting and importing a record.
- Field names in the Field column are not editable.
- The values in the Value column must meet the following requirements:
- greater than or equal to 0 for fields of the Timestamp and Timestamp list types.
- IPv4 or IPv6 format for fields of the IP address and IP list types.
- is true or false for a Boolean field.
Deleting a context table record
To delete records from a context table:
- In the KUMA Console, select the Resources section.
- In the Services section, click the Active services button.
- In the context menu of the correlator from whose context table you want to delete a record, select Go to context tables.
This opens the Correlator context tables window.
- In the Name column, select the relevant context table.
The list of records for the selected context table is displayed.
- Select the check boxes next to the records you want to delete.
To delete all records, select the check box next to the Key column.
At least one check box must be selected.
- Click the Delete button.
- Click OK.
The records are deleted.
Importing data into a context table
To import data to a context table:
- In the KUMA Console, select the Resources section.
- In the Services section, click the Active services button.
- In the context menu of the correlator to whose context table you want to import data, select Go to context tables.
This opens the Correlator context tables window.
- Select the check box next to your context table and click Import.
This opens the context table data import window.
- Click Add and select the file that you want to import.
- In the Format drop-down list, select the format of the file:
- csv
- tsv
- internal
- Click the Import button.
The data from the file is imported into the context table. Records that previously existed in the context table are preserved.
When importing, KUMA checks the uniqueness of each record's key. If a record already exists, its fields are populated with new values obtained by merging the previous values with the field values of the imported record.
If no record existed in the context table, a new record is created.
Data imported from a file is not checked for invalid characters. If you use this data in widgets, widgets are displayed incorrectly if invalid characters are present in the data.
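The merge-on-key behavior described above can be sketched with the standard csv module. This is an illustrative model of the documented semantics (existing records are preserved, and fields of an imported record with the same key overwrite the previous values); it does not reproduce KUMA's internal format, and the function name and table layout are invented.

```python
import csv
import io

def import_rows(table, csv_text, key_fields):
    """Merge CSV rows into an in-memory context table, a dict
    keyed by the composite key tuple. Records with a new key are
    created; for an existing key, imported field values overwrite
    the previous ones while untouched fields are kept."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        key = tuple(row[f] for f in key_fields)
        # Merge old and new fields; new values win on conflict.
        table[key] = {**table.get(key, {}), **row}
    return table
```

For example, importing a row whose key already exists updates only the columns present in the file, leaving other previously stored fields of that record intact.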
Exporting data from a context table
To export data from a context table:
- In the KUMA Console, select the Resources section.
- In the Services section, click the Active services button.
- In the context menu of the correlator whose context table you want to export, select Go to context tables.
This opens the Correlator context tables window.
- Select the check box next to your context table and click Export.
The context table is downloaded to your computer in JSON format. The name of the downloaded file reflects the name of the context table. The order of the fields in the file is not defined.