
Data and model monitoring

Continual comes with many tools built-in for monitoring your data, models, and prediction jobs.

Data monitoring

Data profiling

When model versions or batch prediction jobs are created, Continual runs data profiling on the associated data. The resulting profiles are viewable in the Web UI and can help you identify and diagnose problems with your data or models. For models, profiling is performed separately on the training, validation, and test sets.

Continual provides a summary of every feature used, as shown below:

This can be useful for identifying features that have many null values, large outliers, or unexpected distributions. Issues here are often due to upstream data problems, so it's a good idea to review these summaries periodically.
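
For intuition, the sketch below computes the kind of summary statistics such a profile covers, using pandas. The column names, values, and the IQR-based outlier rule are illustrative assumptions, not Continual's implementation.

import numpy as np
import pandas as pd

# Hypothetical training data; column names and values are illustrative only.
df = pd.DataFrame({
    "tenure_days": [10.0, 250.0, 400.0, np.nan, 90_000.0],  # one suspicious value
    "plan": ["basic", "pro", "pro", None, "basic"],
})

# Null counts and null share per column -- features with many nulls often
# point to upstream data problems.
nulls = df.isna().sum().to_frame("null_count")
nulls["null_pct"] = df.isna().mean() * 100

# Basic distribution statistics for the numeric columns.
numeric = df.select_dtypes("number")
stats = numeric.describe().T

# Count values outside the usual 1.5 * IQR fences as outliers.
q1, q3 = numeric.quantile(0.25), numeric.quantile(0.75)
iqr = q3 - q1
outliers = ((numeric < q1 - 1.5 * iqr) | (numeric > q3 + 1.5 * iqr)).sum()

print(nulls, stats, outliers, sep="\n\n")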

Correlation matrix

Continual also calculates a correlation matrix for model versions. This can help determine whether any features are highly correlated with the target variable and thus represent "data leakage". Additionally, correlation matrices can help identify multicollinearity in the data, which can make feature importances more difficult to interpret.
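
As a rough illustration of how a correlation matrix can surface leakage, consider the pandas sketch below; the column names, generated data, and the 0.95 threshold are invented for the example.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical training data: `spend` is the target, and `leaked_spend`
# is a near-copy of the target that slipped in among the features.
sessions = rng.normal(size=200)
tickets = rng.normal(size=200)
spend = 2 * sessions + rng.normal(size=200)
df = pd.DataFrame({
    "sessions": sessions,
    "tickets": tickets,
    "leaked_spend": spend + rng.normal(scale=0.01, size=200),
    "spend": spend,
})

corr = df.corr()  # full correlation matrix

# Features whose correlation with the target is suspiciously high are
# candidates for data leakage; similarly, highly correlated feature pairs
# indicate multicollinearity.
target_corr = corr["spend"].drop("spend")
print(target_corr[target_corr.abs() > 0.95])  # `leaked_spend` stands out at ~1.0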

Data Checks

During the train step, Continual performs input data validation on the data that will be used to train models. This consists of a series of checks to ensure the data is valid and ready to train on. Some of the checks that Continual currently performs are:

  • Checks for duplicate columns
  • Checks for duplicate indices
  • Checks for type mismatches between the query and data
  • Size limit checks
  • Null value checks

By default, these checks are enabled. They can, however, be disabled by setting the disable_data_checks flag in the feature set configuration YAML file:

train:
  disable_data_checks: True

List of Checks

Index or Time Index Present

Pass: If either an index or time index is specified in the query.

Fail: Error if an index or time index is specified in the query but not present in the dataset.

No Duplicates in Index or Time Index

Pass: If there are no duplicates in the index or time index columns (if specified).

Warn: Warnings if there are duplicates in the index or time index (if specified).
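
For intuition, a check along these lines might look like the following pandas sketch; customer_id is a made-up index column, and this is not Continual's code.

import pandas as pd

# Hypothetical training data with a duplicated index value.
df = pd.DataFrame({"customer_id": [1, 2, 2, 3], "value": [10, 20, 21, 30]})

# A duplicated index usually means the query that produced the data fanned
# out; each index value should identify exactly one row.
dupes = df[df.duplicated(subset=["customer_id"], keep=False)]
if not dupes.empty:
    print(f"warning: {dupes['customer_id'].nunique()} index value(s) duplicated")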

No Duplicate Column Names or Identical Columns

Pass: If there are no duplicate column names or columns with completely identical values.

Fail: Error if there are duplicated column names, and warning if there are columns with completely identical values.
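
The identical-columns part of this check can be pictured roughly as follows; the pandas sketch and its column names are illustrative, not Continual's code.

import pandas as pd

# Hypothetical data where "a" and "b" carry exactly the same values.
df = pd.DataFrame({"a": [1, 2, 3], "b": [1, 2, 3], "c": [4, 5, 6]})

# Duplicate column *names* are an error; columns with identical *values*
# under different names only warrant a warning, since one is redundant.
assert df.columns.is_unique, "duplicated column names"

cols = list(df.columns)
for i, left in enumerate(cols):
    for right in cols[i + 1:]:
        if df[left].equals(df[right]):
            print(f"warning: columns {left!r} and {right!r} are identical")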

No Null Values in Index or Time Index

Pass: If there are no null values in the index or time index columns (if specified).

Fail: Error if there are null values in the index or time index (if specified).

Target Column is not Excluded

Pass: If the target column is not excluded in the query.

Fail: Error otherwise.

Query Features Exist in Dataset

Pass: If all features specified in the query exist in the dataset.

Fail: Error if a feature in the query does not exist in the dataset.

Column Types Match Query

Pass: If a best attempt at inferring column types matches those specified by the query.

Fail: Error if column specified as a NUMBER is not a numerical dtype. Warnings if there are other type mismatches.
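
As a sketch of the idea, assume a query that declares a column as NUMBER; the declared-types mapping below is hypothetical.

import pandas as pd
from pandas.api.types import is_numeric_dtype

# The column arrived as strings even though the query declares it numeric.
df = pd.DataFrame({"age": ["34", "51", "untracked"]})
declared = {"age": "NUMBER"}  # hypothetical column types taken from the query

for column, declared_type in declared.items():
    if declared_type == "NUMBER" and not is_numeric_dtype(df[column]):
        raise TypeError(
            f"column {column!r} is declared NUMBER but has dtype {df[column].dtype}"
        )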

Not Too Many Null Features

Pass: If the share of null values across feature columns is below a fixed threshold.

Warn: Warning reporting the percentage of null values across feature columns.
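
A minimal sketch of such a null-share check; the 50% threshold is invented for illustration and need not match Continual's actual limit.

import numpy as np
import pandas as pd

df = pd.DataFrame({"x": [1.0, np.nan, np.nan, np.nan], "y": [1, 2, 3, 4]})

THRESHOLD = 0.5  # hypothetical; the real limit may differ
null_share = df.isna().mean()  # fraction of nulls per column
for column, share in null_share.items():
    if share > THRESHOLD:
        print(f"warning: column {column!r} is {share:.0%} null")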

Timestamp Ranges Have Sufficient Overlap

Pass: If the range of each column specified to have timestamps overlaps sufficiently with the spine column (time index).

Warn: Warning if a column's timestamp range overlaps too little with the spine.
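
To make the idea concrete, the sketch below compares a feature column's timestamp range against the spine's; the date ranges and the 50% cutoff are invented for illustration.

import pandas as pd

# Hypothetical spine (time index) covering a full year, and a feature
# column whose timestamps only cover the last quarter.
spine = pd.date_range("2023-01-01", "2023-12-31", freq="D").to_series()
feature_ts = pd.date_range("2023-10-01", "2023-12-31", freq="D").to_series()

# Fraction of the spine's time range covered by the feature's range.
start = max(spine.min(), feature_ts.min())
end = min(spine.max(), feature_ts.max())
overlap = max((end - start).days, 0) / (spine.max() - spine.min()).days

if overlap < 0.5:  # hypothetical cutoff
    print(f"warning: feature timestamps cover only {overlap:.0%} of the spine")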

Dataset is not Too Large

Pass: If the memory usage of the dataset is below the size limit specified in the feature set configuration YAML.

Warn: Warning if the dataset size exceeds the size limit.

Classes are Reasonably Balanced

Pass: If there are at least 2 unique non-null values in the column to be predicted (target).

Fail: If the model is a classifier and there is only 1 unique non-null value in the target column.
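
In rough pandas terms, the condition this check enforces looks like the following; churned is a made-up target column.

import pandas as pd

# Hypothetical target column where every non-null value is the same class.
df = pd.DataFrame({"churned": [False, False, False, None]})

# A classifier needs at least two distinct non-null target values; with a
# single class, every prediction would be identical.
if df["churned"].dropna().nunique() < 2:
    raise ValueError("target column has fewer than 2 unique non-null values")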

Model monitoring

Performance monitoring

A key thing to understand about your models is how they perform over time. Continual tracks every model version in the system, and you can quickly view model performance over time across any of the performance metrics. This chart displays both the performance of your model versions as they are trained and the performance of the currently promoted model version.

This chart makes it easy to determine at a glance if there has been any substantial change to your model as it has been retrained over time.

Time index coverage

Another result of the data profiling step is an analysis of time index coverage. Since models may connect to many different entities and bring in many different feature sets, it can be difficult to determine whether the various feature sets actually overlap with respect to their time indices. Continual takes the guesswork out of this by displaying the time coverage of each time index in a graph, as shown below.

Any feature set that does not have full coverage along the model spine will have its features treated as null for the uncovered timestamps. This can inadvertently lead to a large number of null values and/or unexpected feature importances (for example, if a feature set is "missing" for 90% of the model spine, the model is unlikely to decide that any of its features are important in making predictions).
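
To see why low coverage matters, the sketch below left-joins a feature set that covers only the last quarter of the year onto a year-long model spine; the uncovered rows come back as nulls. The table and column names are hypothetical.

import pandas as pd

# Hypothetical model spine: one row per day for all of 2023.
spine = pd.DataFrame({"ts": pd.date_range("2023-01-01", "2023-12-31", freq="D")})

# Hypothetical feature set that only covers the last quarter of the year.
features = pd.DataFrame({"ts": pd.date_range("2023-10-01", "2023-12-31", freq="D")})
features["daily_logins"] = 1

joined = spine.merge(features, on="ts", how="left")

# About 75% of the spine has no matching feature rows, so `daily_logins`
# is null there -- and the model is unlikely to find it important.
print(f"{joined['daily_logins'].isna().mean():.0%} of rows are null")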
