Kaspersky Machine Learning for Anomaly Detection

Scenario: working with ML models

This section describes the sequence of actions required to work with ML models.

This functionality becomes available after a license key is added.

The scenario for working with ML models consists of the following steps:

  1. Adding markups

    If you need to select specific time intervals for the data that ML models must use for training or inference, create markups.
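A markup is essentially a set of time intervals that restricts which samples an ML model sees during training or inference. The idea can be sketched in Python as follows; the interval representation and helper names here are assumptions for illustration, not Kaspersky MLAD's actual data format:

```python
from datetime import datetime

# Hypothetical sketch: a markup as a list of (start, end) intervals
# selecting which telemetry samples participate in training.
markup = [
    (datetime(2024, 1, 1, 0, 0), datetime(2024, 1, 1, 6, 0)),
    (datetime(2024, 1, 2, 0, 0), datetime(2024, 1, 2, 6, 0)),
]

def in_markup(timestamp, intervals):
    """Return True if the sample falls inside any selected interval."""
    return any(start <= timestamp <= end for start, end in intervals)

samples = [
    (datetime(2024, 1, 1, 3, 0), 10.5),   # inside the first interval
    (datetime(2024, 1, 1, 12, 0), 11.2),  # outside all intervals
    (datetime(2024, 1, 2, 1, 0), 9.8),    # inside the second interval
]

training_data = [value for ts, value in samples if in_markup(ts, markup)]
print(training_data)  # [10.5, 9.8]
```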

  2. Adding an ML model

    You can add an ML model to Kaspersky MLAD in one of the following ways: import a previously prepared ML model, create an ML model from a template of an imported ML model, or clone an imported ML model.

  3. Training ML model elements

    The ML model needs to be trained before you can run inference on it. To do this, all predictive elements and elliptic envelope-based elements within the ML model must be pretrained. ML model elements based on diagnostic rules do not need to be trained, so they are considered to be pretrained.

    An ML model imported to Kaspersky MLAD has been previously trained by Kaspersky Lab experts or a certified integrator. ML models that are created from a template of an imported ML model or by cloning an imported ML model are also considered to be already trained. If necessary, you can change their training settings and retrain the elements.

    To generate a learning indicator, specify the created markup in the element training settings.

    After training the elements, examine the training results, adjust the training settings and retrain the elements, if necessary.
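Conceptually, training an element means fitting it to the normal behavior of a signal so that deviations can later be flagged. The following is a minimal stand-in, not Kaspersky MLAD's implementation: the class, its method names, and the 3-sigma rule are illustrative assumptions, and the product's predictive and elliptic envelope-based elements are far more elaborate.

```python
from statistics import mean, pstdev

# Minimal stand-in for a "predictive element": learn the typical level
# and spread of a signal from training data, then flag values whose
# deviation exceeds a threshold.
class PredictiveElement:
    def train(self, history):
        self.mu = mean(history)
        self.sigma = pstdev(history)

    def is_anomaly(self, value, n_sigmas=3.0):
        return abs(value - self.mu) > n_sigmas * self.sigma

element = PredictiveElement()
element.train([10.0, 10.2, 9.9, 10.1, 9.8, 10.0])
print(element.is_anomaly(10.1))  # False: within the normal band
print(element.is_anomaly(25.0))  # True: far outside the learned range
```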

  4. ML model inference

    Run historical or streaming inference on the ML model. Examine the inference results in the History and Monitoring sections, as well as the incidents registered by the ML model.

    To improve ML model performance, adjust the parameters of the model and/or the markups, retrain the elements of the ML model as needed, and run inference again. When you rerun inference on previously processed data, the previous inference results are deleted.
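Streaming inference can be pictured as comparing each incoming observation against the model's prediction and registering an incident when the error exceeds a threshold. A hedged sketch of that loop, in which the function names, the threshold, and the incident format are all assumptions:

```python
# Sketch of streaming inference: register an incident whenever the
# observed value deviates from the prediction by more than a threshold.
def run_inference(stream, predict, threshold=1.0):
    incidents = []
    for timestamp, observed in stream:
        expected = predict(timestamp)
        error = abs(observed - expected)
        if error > threshold:
            incidents.append({"time": timestamp, "error": round(error, 2)})
    return incidents

# Toy model: the signal is expected to stay at 10.0.
stream = [(0, 10.1), (1, 9.9), (2, 14.5), (3, 10.0)]
incidents = run_inference(stream, predict=lambda t: 10.0)
print(incidents)  # [{'time': 2, 'error': 4.5}]
```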

  5. Preparing an ML model for publication

    If you need to save the parameters of an ML model and its elements, prepare the ML model for publication after completing training and checking the inference results.

  6. Publishing an ML model

    After preparing the ML model for publication, notify the officer responsible for publishing the ML model that the ML model is ready, or publish the ML model if you have the required permissions. If necessary, the system administrator can create a role that has the right to publish ML models and assign this role to the relevant employee.

  7. Running inference on a published ML model

    Start inference of the ML model. During inference, the published ML model analyzes telemetry data and logs incidents. Unlike incidents registered by unpublished ML models, these incidents require response actions and reporting in production.
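One way to picture the distinction between incidents from published and unpublished models is a flag that routes an incident into the production response workflow. This is purely illustrative; the function and field names are assumptions:

```python
# Illustrative assumption: incidents from a published model carry a flag
# that routes them into the production response workflow, while incidents
# from unpublished (test) models do not.
def register_incident(time, detail, published):
    return {"time": time, "detail": detail, "requires_response": published}

prod = register_incident("2024-01-01T03:00Z", "tag out of range", published=True)
test = register_incident("2024-01-01T03:00Z", "tag out of range", published=False)
print(prod["requires_response"], test["requires_response"])  # True False
```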