Scenario: working with ML models
This section describes the sequence of actions required to work with ML models.
The functionality is available after a license key is added.
The scenario for working with ML models consists of the following steps:
- Adding markups
If you need to select specific time intervals for the data that ML models must use for training or inference, create markups.
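Kaspersky MLAD manages markups through its interface, but conceptually a markup is a set of labeled time intervals that selects which telemetry samples an ML model sees. A minimal sketch of that idea (all names and data here are hypothetical, not part of the product):

```python
from datetime import datetime
import pandas as pd

# Hypothetical markup: time intervals selecting data for training.
markup = [
    (datetime(2024, 1, 1), datetime(2024, 1, 10)),
    (datetime(2024, 2, 1), datetime(2024, 2, 5)),
]

# Example telemetry series indexed by timestamp (one sample per day).
idx = pd.date_range("2024-01-01", "2024-03-01", freq="D")
telemetry = pd.Series(range(len(idx)), index=idx)

# Keep only the samples that fall inside a marked interval.
mask = pd.Series(False, index=telemetry.index)
for start, end in markup:
    mask |= (telemetry.index >= start) & (telemetry.index <= end)
training_data = telemetry[mask]
```

The product applies the equivalent selection internally when a markup is specified in the element training or inference settings.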
- Adding an ML model
You can add an ML model to Kaspersky MLAD in one of the following ways:
- Import an ML model created by Kaspersky specialists or by a certified integrator as part of the Kaspersky MLAD Model-building and Deployment Service. If the ML model uses markups, they are included in the same asset as the model and imported together with it. After an ML model is imported, it must be activated.
- Manually create an ML model. Add predictive elements, elliptic envelope-based elements, and/or diagnostic rule-based elements to the new ML model.
- Create an ML model from a template. Create a template based on the relevant ML model in advance. If the original ML model used for the template was created manually, you can add predictive elements, elliptic envelope-based elements, and/or diagnostic rule-based elements to the new ML model.
- Clone a previously added ML model. After cloning an ML model that was created manually or from a template based on a manually created ML model, you can add predictive elements, elliptic envelope-based elements, and/or diagnostic rule-based elements to the new ML model.
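An elliptic envelope-based element detects anomalies by fitting an ellipse around normal multivariate data and flagging points that fall far outside it. Kaspersky MLAD implements this internally; the general technique can be sketched with scikit-learn's `EllipticEnvelope` (the sensor data and `contamination` value below are illustrative assumptions):

```python
import numpy as np
from sklearn.covariance import EllipticEnvelope

rng = np.random.default_rng(0)
# Normal operating data: two correlated sensor readings.
normal = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=500)

# Fit an elliptic envelope to the normal data; `contamination` is the
# expected fraction of outliers (an assumed tuning value here).
detector = EllipticEnvelope(contamination=0.01, random_state=0).fit(normal)

# predict() returns 1 for inliers and -1 for points outside the envelope.
labels = detector.predict(np.array([[0.1, 0.0], [6.0, -6.0]]))
```

A point consistent with the learned correlation structure is accepted, while a point far outside the fitted ellipse is reported as anomalous.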
- Training ML model elements
The ML model needs to be trained before you can run inference on it. To do this, all predictive elements and elliptic envelope-based elements within the ML model must be pretrained. ML model elements based on diagnostic rules do not need to be trained, so they are considered to be pretrained.
An ML model imported to Kaspersky MLAD has been previously trained by Kaspersky specialists or a certified integrator. ML models created from a template of an imported ML model, or by cloning an imported ML model, are also considered to be already trained. If necessary, you can change their training settings and retrain the elements.
To generate a learning indicator, specify the created markup in the element's training settings.
After training the elements, examine the training results, adjust the training settings and retrain the elements, if necessary.
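Training a predictive element generally means fitting a model that forecasts each telemetry value from recent history, then deriving an error threshold from the training residuals. The sketch below illustrates this pattern with a simple lag-based regressor; the data, lag count, and 3-sigma threshold are assumptions for illustration, not the product's actual training procedure:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
# Synthetic telemetry: a sine wave with small measurement noise.
t = np.arange(300)
signal = np.sin(t / 10) + rng.normal(0, 0.05, size=t.size)

# Build lagged features: predict sample i from the previous 5 samples.
lags = 5
X = np.stack([signal[i:i + lags] for i in range(len(signal) - lags)])
y = signal[lags:]

model = LinearRegression().fit(X, y)

# A prediction-error threshold (here 3 standard deviations of the
# training residuals) later decides when an observation is anomalous.
residuals = y - model.predict(X)
threshold = 3 * residuals.std()
```

Examining training results in practice amounts to checking that residuals on normal data stay well below the threshold; if they do not, the training settings are adjusted and the element is retrained.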
- ML model inference
Run historical or streaming inference on the ML model. Examine the resulting artifacts in the History and Monitoring sections, as well as the incidents registered by the ML model during inference.
For better ML model performance, adjust the parameters of the model and/or markups, retrain the elements of the ML model as needed, and then rerun inference on the ML model. If you rerun inference on previously processed data, the previous inference results are deleted.
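During streaming inference, each incoming observation is compared against the model's prediction, and an incident is registered when the error exceeds the trained threshold. A self-contained sketch of that loop (the function, the constant-prediction "model", and the data are hypothetical, not Kaspersky MLAD's API):

```python
# Hypothetical streaming-inference loop: flag an incident for every
# observation whose prediction error exceeds the trained threshold.
def infer_stream(observations, predict, threshold):
    """Yield (index, observation) for each sample that breaches the threshold."""
    for i, obs in enumerate(observations):
        if abs(obs - predict(i)) > threshold:
            yield i, obs

# Toy example: the "model" predicts a constant 0.0; one sample spikes.
data = [0.01, -0.02, 0.9, 0.0]
incidents = list(infer_stream(data, lambda i: 0.0, threshold=0.5))
```

Historical inference follows the same comparison, but replays stored telemetry instead of a live stream, which is why rerunning it over the same data replaces the earlier results.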
- Preparing an ML model for publication
If you need to save the parameters of an ML model and its elements, prepare the ML model for publication after completing training and checking the inference results.
- Publishing an ML model
After preparing the ML model for publication, notify the officer responsible for publishing the ML model that the ML model is ready, or publish the ML model if you have the required permissions. If necessary, the system administrator can create a role that has the right to publish ML models and assign this role to the relevant employee.
- Inferencing a published ML model
Start inference of the published ML model. During inference, the published ML model analyzes telemetry data and registers incidents. Unlike incidents registered by unpublished ML models, these incidents require response actions and reporting in production.