Anomaly Detection Tests

Elementary data anomaly detection tests monitor a specific metric (such as row count, null rate, or average value) and compare recent values to historical values. This is done to detect significant changes and deviations that likely indicate data reliability issues.

What happens on each test?

Upon running a test, your data is split into time buckets based on the time_bucket field, and the full window is limited by the training_period var. The test then compares a metric (e.g. row count) of the buckets within the detection period (detection_period) to the same metric of all the previous time buckets within the training period. If there were any anomalies in the detection period, the test will fail. On each test, the elementary package executes the relevant monitors and searches for anomalies by comparing recent metrics to historical ones.
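The flow above can be sketched in Python. This is an illustrative model of the mechanism, not Elementary's actual implementation; the sample data is made up, and the variable names simply mirror the training_period and detection_period vars:

```python
from datetime import date, timedelta
from statistics import mean, stdev

# Hypothetical daily row counts keyed by time bucket (one bucket per day).
row_counts = {date(2024, 1, 1) + timedelta(days=i): 1000 + (i % 3) * 10
              for i in range(21)}
row_counts[date(2024, 1, 21)] = 5000  # a sudden volume spike on the last day

training_period = 21   # days: the full window of buckets considered
detection_period = 2   # days: the most recent buckets, tested for anomalies

buckets = sorted(row_counts)[-training_period:]
detection = buckets[-detection_period:]
training = buckets[:-detection_period]

train_values = [row_counts[b] for b in training]
avg, sd = mean(train_values), stdev(train_values)

# A bucket is anomalous if its value is 3+ standard deviations away from the
# training average (the default threshold, described in the method section).
anomalies = [b for b in detection
             if sd > 0 and abs(row_counts[b] - avg) / sd >= 3]
```

With this sample data, only the spiked last bucket falls outside the expected range, so only it is flagged.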

What does it mean when a test fails?

When a test fails, it means that an anomaly was detected on this metric and dataset. To learn more, refer to the anomaly detection method section.

Core concepts

Anomaly

A value in the detection set that is an outlier compared to the expected range calculated based on the training set.

Monitored data set

The data set we run the data monitor against; it includes both the training set values and the detection set values.

Data monitors

When we use anomaly detection tests, we can monitor different metrics to detect problems: freshness, volume, nullness, uniqueness, distribution, etc. Each metric we collect is a ‘data monitor’.

Training set

The set of values used as a reference point to calculate the expected range.

Detection set

The set of values that are compared to the expected range. If a value in the detection set is an outlier to the expected range, it will be flagged as an anomaly.

Expected range

Based on the values in the training set, we calculate an expected range for the monitor. Each data point in the detection period is compared to the expected range calculated based on its training set.
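Concretely, the expected range can be pictured as a band around the training-set average. In this sketch the band is ±3 standard deviations, matching Elementary's default score threshold; the sample values and helper name are illustrative only:

```python
from statistics import mean, stdev

# Hypothetical training-set values for a monitored metric (e.g. daily row counts).
training_set = [1000, 1010, 990, 1005, 995, 1002, 998]

avg, sd = mean(training_set), stdev(training_set)
expected_range = (avg - 3 * sd, avg + 3 * sd)  # band of ±3 standard deviations

def in_expected_range(value):
    """True if a detection-set value falls inside the expected range."""
    low, high = expected_range
    return low <= value <= high
```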

Training period

The period of time for which the training set is collected. As data changes over time, we don’t consider the entire history of the metric, just a recent period.

Detection period

The values in the detection period are compared to the expected range calculated from the training set. If a data point in the detection period falls outside the expected range, it is flagged as an anomaly.

Time bucket

To calculate how data changes over time and detect issues, we split the data into consistent time buckets. For example, if we use a daily time bucket and monitor for row count anomalies, we count new rows per day.
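For instance, daily bucketing of raw row timestamps might be sketched as follows (the timestamps are made up):

```python
from collections import Counter
from datetime import datetime

# Hypothetical row-creation timestamps from the monitored table.
timestamps = [
    datetime(2024, 1, 1, 9, 30),
    datetime(2024, 1, 1, 17, 5),
    datetime(2024, 1, 2, 8, 0),
]

# A daily time bucket: truncate each timestamp to its date, then count rows
# per bucket -- this per-day row count is the metric a volume monitor tracks.
rows_per_day = Counter(ts.date() for ts in timestamps)
```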

Data anomaly detection method

Elementary uses the “standard score”, also known as the “Z-score”, for anomaly detection. This score represents the number of standard deviations of a value from the average of a set of values.

According to the empirical rule, in a standard normal distribution:

  • ~68% of values have an absolute z-score of 1 or less.

  • ~95% of values have an absolute z-score of 2 or less.

  • ~99.7% of values have an absolute z-score of 3 or less.

Values with a standard score of 3 or above are considered outliers, and this is a recommended threshold for anomaly detection. This is the default Elementary uses as well, and it can be changed using the var anomaly_score_threshold in the global configuration.
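As a sketch, the score computation for a single value looks like this (the sample values and helper names are illustrative; anomaly_score_threshold mirrors the var above):

```python
from statistics import mean, stdev

# Hypothetical training-set values for a monitored metric (daily null rates).
training_set = [0.010, 0.020, 0.015, 0.012, 0.018, 0.011, 0.016]
avg, sd = mean(training_set), stdev(training_set)

anomaly_score_threshold = 3  # Elementary's default, configurable via this var

def z_score(value):
    """Number of standard deviations between a value and the training average."""
    return (value - avg) / sd

def is_anomaly(value):
    return abs(z_score(value)) >= anomaly_score_threshold
```

A value close to the training set scores near 0, while a sharp jump in the null rate scores far beyond the threshold and is flagged.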

You can use the anomaly_sensitivity model to see whether metric values from your last run would have been considered anomalies at different scores. This can help you decide if there is a need to adjust the sensitivity.
