# Custom Tests

While dbt comes with built-in tests, custom tests let you implement data quality checks tailored to your business logic. This guide covers how to create and use custom tests to ensure the quality and reliability of your data transformations.
## Types of Custom Tests

dbt supports two main types of custom tests:

- **Singular tests**: SQL files that return failing records; they live in the `tests/` directory and target a specific use case.
- **Generic tests**: reusable test definitions that can be applied to different models; they live in the `macros/` directory.
## Singular Tests

Singular tests are SQL queries that should return zero rows when the test passes:

```sql
-- tests/assert_total_payment_amount_matches_order_amount.sql
SELECT
    o.order_id,
    o.order_amount,
    p.payment_amount,
    ABS(o.order_amount - p.payment_amount) AS amount_diff
FROM {{ ref('orders') }} o
LEFT JOIN {{ ref('payments') }} p USING (order_id)
WHERE ABS(o.order_amount - p.payment_amount) > 0.01
```
This test identifies orders where the payment amount doesn't match the order amount within a small tolerance.
To run singular tests:

```bash
dbt test --select assert_total_payment_amount_matches_order_amount
```
### Organizing Singular Tests

For larger projects, you might want to organize singular tests by domain or purpose:

```
tests/
├── finance/
│   ├── assert_total_payment_amount_matches_order_amount.sql
│   └── assert_refund_amount_less_than_order_amount.sql
└── marketing/
    ├── assert_campaign_spend_within_budget.sql
    └── assert_conversion_rates_above_threshold.sql
```
## Creating Generic Custom Tests

Generic tests are more powerful because they can be applied to different models throughout your project. They are defined as macros with a special `{% test %}` block syntax:

```sql
{% test test_name(model, column_name, condition_parameter) %}

-- The test query should return failing records
SELECT
    {{ column_name }}
FROM {{ model }}
WHERE
    -- Placeholder condition that should match 0 rows when passing;
    -- real tests substitute concrete logic here
    {{ column_name }} NOT {{ condition_parameter }}

{% endtest %}
```
### Example: Positive Values Test

Here's a simple custom test that checks whether values in a column are positive:

```sql
{% test is_positive(model, column_name) %}

SELECT
    {{ column_name }}
FROM {{ model }}
WHERE {{ column_name }} <= 0

{% endtest %}
```
## Using Custom Tests in YAML Files

Once defined, you can reference these custom tests in your schema YAML files just like built-in tests:

```yaml
models:
  - name: orders
    columns:
      - name: order_amount
        tests:
          - is_positive
```
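Under the hood, dbt compiles the generic test into a plain SQL query against the model. Roughly, the `is_positive` test above compiles to something like this (a sketch; the exact output depends on your adapter, and the schema name here is hypothetical):

```sql
-- Approximate compiled form of is_positive on orders.order_amount
SELECT
    order_amount
FROM analytics.orders  -- hypothetical schema and relation name
WHERE order_amount <= 0
```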
## Parameterizing Custom Tests

You can make your custom tests more flexible by adding parameters:

```sql
{% test value_within_range(model, column_name, min_value, max_value) %}

SELECT
    {{ column_name }}
FROM {{ model }}
WHERE {{ column_name }} < {{ min_value }}
   OR {{ column_name }} > {{ max_value }}

{% endtest %}
```
In your YAML file:

```yaml
models:
  - name: orders
    columns:
      - name: order_amount
        tests:
          - value_within_range:
              min_value: 0
              max_value: 10000
```
## Combining Macros and Tests

You can use macros within your custom tests to create powerful, reusable testing frameworks:

```sql
{% macro get_valid_status_values() %}
    {% set valid_statuses = ['pending', 'shipped', 'delivered', 'cancelled'] %}
    {{ return(valid_statuses) }}
{% endmacro %}

{% test valid_status(model, column_name) %}

{% set valid_values = get_valid_status_values() %}

SELECT
    {{ column_name }}
FROM {{ model }}
WHERE {{ column_name }} NOT IN (
    {% for value in valid_values %}
    '{{ value }}'{% if not loop.last %},{% endif %}
    {% endfor %}
)

{% endtest %}
```
This approach lets you centralize business rules (like valid status values) and reuse them across tests.
## Advanced Test Configurations

You can configure how tests behave using additional properties:

```sql
{% test custom_complex_test(model, column_name) %}

{{ config(
    severity = 'warn',
    store_failures = true,
    limit = 100
) }}

-- Test query here

{% endtest %}
```
- `severity`: `'error'` (the default) or `'warn'`; use `'warn'` for tests that shouldn't block production runs.
- `store_failures`: when `true`, stores test failures in a table, useful for troubleshooting or monitoring over time.
- `limit`: maximum number of failing records to return, useful for large tables where full results aren't needed.
- `where`: applies additional filtering to the test query, useful for focusing tests on specific data subsets.
- `enabled`: a boolean that can conditionally disable the test, useful for environment-specific test configuration.
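These properties can also be set where a test is applied in the schema YAML rather than inside the test macro. A sketch, reusing a parameterized range test (the column name, bounds, and filter are illustrative):

```yaml
models:
  - name: orders
    columns:
      - name: order_amount
        tests:
          - value_within_range:
              min_value: 0
              max_value: 10000
              config:
                severity: warn
                store_failures: true
                where: "order_date >= '2024-01-01'"
```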
## Testing Data Quality with Packages

Popular packages like dbt-expectations extend dbt's testing capabilities with advanced data validation:

```yaml
# packages.yml
packages:
  - package: calogica/dbt_expectations
    version: 0.8.5
```
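After adding the entry to `packages.yml`, install the package so its tests become available in your project:

```bash
dbt deps
```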
### Example: Advanced Data Validation

Using dbt-expectations to implement sophisticated data quality checks:

```yaml
models:
  - name: customer_orders
    tests:
      - dbt_expectations.expect_table_row_count_to_be_between:
          min_value: 1
          max_value: 1000000
    columns:
      - name: order_amount
        tests:
          - dbt_expectations.expect_column_values_to_be_between:
              min_value: 0
              max_value: 50000
              severity: warn
      - name: customer_email
        tests:
          - dbt_expectations.expect_column_values_to_match_regex:
              regex: '^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$'
```
### Popular Testing Packages

- **dbt-expectations**: data quality tests inspired by Great Expectations; advanced schema validation and statistical tests.
- **dbt-audit-helper**: compares query results between models; model comparison and reconciliation tests.
- **dbt-utils**: common test utilities; equal rowcounts, relationships, and cardinality checks.
- **elementary**: anomaly detection and data validation; historical test comparisons and metrics monitoring.
## Real-World Custom Test Examples

### Date Range Validation

```sql
{% test date_between_project_dates(model, column_name) %}

SELECT
    {{ column_name }}
FROM {{ model }}
WHERE {{ column_name }} NOT BETWEEN
    (SELECT min_valid_date FROM {{ ref('project_date_settings') }})
    AND
    (SELECT max_valid_date FROM {{ ref('project_date_settings') }})

{% endtest %}
```
### Numeric Distribution Test

```sql
{% test standard_deviation_within_range(model, column_name, min_stddev, max_stddev) %}

WITH stats AS (
    SELECT
        STDDEV({{ column_name }}) AS std_dev
    FROM {{ model }}
    WHERE {{ column_name }} IS NOT NULL
)

SELECT *
FROM stats
WHERE std_dev < {{ min_stddev }} OR std_dev > {{ max_stddev }}

{% endtest %}
```
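A distribution test like this is applied with its thresholds in the schema YAML; for example (the model name and bounds below are illustrative):

```yaml
models:
  - name: daily_revenue
    columns:
      - name: revenue_amount
        tests:
          - standard_deviation_within_range:
              min_stddev: 10
              max_stddev: 5000
```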
### Referential Integrity with Exceptions

```sql
{% test foreign_key_with_exceptions(model, column_name, to, field, exceptions) %}

{% set exceptions_list = [] %}
{% for exception in exceptions %}
    {% do exceptions_list.append("'" ~ exception ~ "'") %}
{% endfor %}

SELECT
    {{ column_name }}
FROM {{ model }}
WHERE {{ column_name }} NOT IN (
    SELECT {{ field }}
    FROM {{ to }}
)
-- Skip the exception clause when no exceptions are passed,
-- since an empty IN () list is invalid SQL
{% if exceptions_list %}
AND {{ column_name }} NOT IN ({{ exceptions_list | join(', ') }})
{% endif %}

{% endtest %}
```
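In the schema YAML, `to` and `field` point at the parent relation, and `exceptions` lists values to ignore; a sketch (relation, column, and exception values are illustrative):

```yaml
models:
  - name: orders
    columns:
      - name: customer_id
        tests:
          - foreign_key_with_exceptions:
              to: ref('customers')
              field: customer_id
              exceptions: ['UNKNOWN', 'GUEST']
```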
## Best Practices for Custom Tests

- **Test critical data first**: focus on key business metrics and join keys, and identify high-risk areas for data quality issues.
- **Make tests descriptive**: name tests clearly to indicate what they verify, and add comments explaining the purpose and expectations.
- **Balance coverage and performance**: consider the runtime impact of extensive testing, and use selective testing for large tables.
- **Group related tests**: organize tests that verify related business rules, and use consistent naming conventions.
- **Handle edge cases**: test boundary conditions, and consider null handling and empty tables.
- **Monitor test results**: track test failures over time, and establish alerting for critical test failures.
## Pro Tip: Troubleshooting Failed Tests

When tests fail, use these strategies to diagnose issues:

- Check error messages in the dbt logs
- Examine a sample of failing records with `store_failures: true`
- Verify test logic by inspecting the compiled SQL in `target/compiled/`
- Use `--vars` to test with different parameters
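For instance, a test whose threshold reads a project variable via `{{ var(...) }}` can be rerun against a different value without editing any files (the variable name below is hypothetical):

```bash
dbt test --select orders --vars '{"max_order_amount": 20000}'
```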
By implementing custom tests, you can ensure your transformations meet business requirements and maintain high data quality standards throughout your dbt project.