Paradime Help Docs
Custom Tests

While dbt ships with built-in generic tests such as unique, not_null, accepted_values, and relationships, custom tests let you implement data quality checks tailored to your own business logic. This guide explains how to create and use custom tests to ensure the quality and reliability of your data transformations.

Types of Custom Tests

dbt supports two main types of custom tests:

| Test Type | Description | Where Defined | Scope |
| --- | --- | --- | --- |
| Singular Tests | SQL files returning failing records | tests/ directory | Specific use case |
| Generic Tests | Reusable test definitions applicable to different models | macros/ directory | Reusable |

Singular Tests

Singular tests are SQL queries that should return zero rows when the test passes:

-- tests/assert_total_payment_amount_matches_order_amount.sql

SELECT
    order_id,
    order_amount,
    payment_amount,
    ABS(order_amount - payment_amount) as amount_diff
FROM {{ ref('orders') }} o
LEFT JOIN {{ ref('payments') }} p USING (order_id)
WHERE ABS(order_amount - payment_amount) > 0.01

This test identifies orders where the payment amount doesn't match the order amount within a small tolerance.

To run singular tests:

dbt test --select assert_total_payment_amount_matches_order_amount

Organizing Singular Tests

For larger projects, you might want to organize singular tests by domain or purpose:

tests/
  ├── finance/
  │   ├── assert_total_payment_amount_matches_order_amount.sql
  │   └── assert_refund_amount_less_than_order_amount.sql
  └── marketing/
      ├── assert_campaign_spend_within_budget.sql
      └── assert_conversion_rates_above_threshold.sql
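With this layout, you can run a single domain's tests using dbt's path-based node selection (assuming dbt 1.x selector syntax):

```
dbt test --select tests/finance
```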

Creating Generic Custom Tests

Generic tests are more powerful because they can be applied to any model or column throughout your project. They are defined as Jinja macros using the special {% test %} block syntax and typically live in your macros/ directory (dbt also supports a tests/generic/ directory):

{% test test_name(model, column_name, condition_parameter) %}

    -- Return the failing records; the test passes when this query returns 0 rows
    SELECT
        {{ column_name }}
    FROM {{ model }}
    WHERE NOT ({{ condition_parameter }})

{% endtest %}

Example: Positive Values Test

Here's a simple custom test that checks if values in a column are positive:

{% test is_positive(model, column_name) %}

    SELECT
        {{ column_name }}
    FROM {{ model }}
    WHERE {{ column_name }} <= 0
    
{% endtest %}

Using Custom Tests in YAML Files

Once defined, you can reference these custom tests in your schema YAML files just like built-in tests:

models:
  - name: orders
    columns:
      - name: order_amount
        tests:
          - is_positive

Parameterizing Custom Tests

You can make your custom tests more flexible by adding parameters:

{% test value_within_range(model, column_name, min_value, max_value) %}

    SELECT
        {{ column_name }}
    FROM {{ model }}
    WHERE {{ column_name }} < {{ min_value }} OR {{ column_name }} > {{ max_value }}
    
{% endtest %}

In your YAML file:

models:
  - name: orders
    columns:
      - name: order_amount
        tests:
          - value_within_range:
              min_value: 0
              max_value: 10000
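Under the hood, dbt passes those YAML parameters into the macro. For the configuration above, the compiled test is roughly the following (the relation name analytics.orders is illustrative):

```sql
-- Approximate compiled SQL for value_within_range on orders.order_amount
SELECT
    order_amount
FROM analytics.orders
WHERE order_amount < 0 OR order_amount > 10000
```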

Combining Macros and Tests

You can use macros within your custom tests to create powerful, reusable testing frameworks:

{% macro get_valid_status_values() %}
    {% set valid_statuses = ['pending', 'shipped', 'delivered', 'cancelled'] %}
    {{ return(valid_statuses) }}
{% endmacro %}

{% test valid_status(model, column_name) %}

    {% set valid_values = get_valid_status_values() %}
    
    SELECT
        {{ column_name }}
    FROM {{ model }}
    WHERE {{ column_name }} NOT IN (
        {% for value in valid_values %}
            '{{ value }}'{% if not loop.last %},{% endif %}
        {% endfor %}
    )
    
{% endtest %}

This approach lets you centralize business rules (like valid status values) and reuse them across tests.
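For reference, when applied to a status column the valid_status test above compiles to roughly the following (the relation name analytics.orders is illustrative):

```sql
-- Approximate compiled SQL for valid_status on orders.status
SELECT
    status
FROM analytics.orders
WHERE status NOT IN ('pending', 'shipped', 'delivered', 'cancelled')
```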


Advanced Test Configurations

You can configure how tests behave using additional properties:

{% test custom_complex_test(model, column_name) %}

    {{ config(
        severity = 'warn',
        store_failures = true,
        limit = 100
    ) }}

    -- Test query here
    
{% endtest %}

| Configuration | Description | Example Use Case |
| --- | --- | --- |
| severity | Can be 'error' (default) or 'warn' | For tests that shouldn't block production runs |
| store_failures | When true, stores test failures in a table | For troubleshooting or monitoring over time |
| limit | Maximum number of failing records to return | For large tables where full results aren't needed |
| where | Apply additional filtering to the test query | For focusing tests on specific data subsets |
| enabled | Boolean that can conditionally disable the test | For environment-specific test configuration |
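These configurations can also be set from your schema YAML instead of inside the test block; a sketch (the model, column, and filter values are illustrative):

```yaml
models:
  - name: orders
    columns:
      - name: status
        tests:
          - valid_status:
              config:
                severity: warn
                store_failures: true
                where: "order_date >= '2024-01-01'"
```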


Testing Data Quality with Packages

Popular packages like dbt-expectations extend dbt's built-in testing capabilities with advanced data validation. Add the package to your packages.yml, then run dbt deps to install it:

packages:
  - package: calogica/dbt_expectations
    version: 0.8.5

Example: Advanced Data Validation

Using dbt-expectations to implement sophisticated data quality checks:

models:
  - name: customer_orders
    tests:
      - dbt_expectations.expect_table_row_count_to_be_between:
          min_value: 1
          max_value: 1000000
    columns:
      - name: order_amount
        tests:
          - dbt_expectations.expect_column_values_to_be_between:
              min_value: 0
              max_value: 50000
              severity: warn
      - name: customer_email
        tests:
          - dbt_expectations.expect_column_values_to_match_regex:
              regex: '^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$'

Popular Testing Packages

| Package | Purpose | Key Features |
| --- | --- | --- |
| dbt-expectations | Data quality tests inspired by Great Expectations | Advanced schema validation, statistical tests |
| dbt-audit-helper | Compare query results between models | Model comparison, reconciliation tests |
| dbt-utils | Common test utilities | Equal rowcounts, relationships, cardinality checks |
| elementary | Anomaly detection and data validation | Historical test comparisons, metrics monitoring |

Real-World Custom Test Examples

Date Range Validation

{% test date_between_project_dates(model, column_name) %}
    SELECT
        {{ column_name }}
    FROM {{ model }}
    WHERE {{ column_name }} NOT BETWEEN 
        (SELECT min_valid_date FROM {{ ref('project_date_settings') }})
        AND
        (SELECT max_valid_date FROM {{ ref('project_date_settings') }})
{% endtest %}

Numeric Distribution Test

{% test standard_deviation_within_range(model, column_name, min_stddev, max_stddev) %}
    WITH stats AS (
        SELECT 
            STDDEV({{ column_name }}) AS std_dev
        FROM {{ model }}
        WHERE {{ column_name }} IS NOT NULL
    )
    
    SELECT *
    FROM stats
    WHERE std_dev < {{ min_stddev }} OR std_dev > {{ max_stddev }}
{% endtest %}

Referential Integrity with Exceptions

{% test foreign_key_with_exceptions(model, column_name, to, field, exceptions) %}
    
    {% set exceptions_list = [] %}
    {% for exception in exceptions %}
        {% do exceptions_list.append("'" ~ exception ~ "'") %}
    {% endfor %}
    
    SELECT
        {{ column_name }}
    FROM {{ model }}
    WHERE {{ column_name }} NOT IN (
        SELECT {{ field }}
        FROM {{ to }}
    )
    AND {{ column_name }} NOT IN ({{ exceptions_list | join(', ') }})
    
{% endtest %}
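When applying this test from YAML, the macro's extra arguments become keys under the test name (the model, column, and exception values here are illustrative):

```yaml
models:
  - name: orders
    columns:
      - name: customer_id
        tests:
          - foreign_key_with_exceptions:
              to: ref('customers')
              field: customer_id
              exceptions: ['UNKNOWN', 'SYSTEM']
```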

Best Practices for Custom Tests

| Best Practice | Description |
| --- | --- |
| Test Critical Data First | Focus on testing key business metrics and join keys. Identify high-risk areas for data quality issues. |
| Make Tests Descriptive | Name tests clearly to indicate what they verify. Add comments explaining the purpose and expectations. |
| Balance Coverage and Performance | Consider the runtime impact of extensive testing. Use selective testing for large tables. |
| Group Related Tests | Organize tests that verify related business rules. Use consistent naming conventions. |
| Handle Edge Cases | Test boundary conditions. Consider null handling and empty tables. |
| Monitor Test Results | Track test failures over time. Establish alerting for critical test failures. |

Pro Tip: Troubleshooting Failed Tests

When tests fail, use these strategies to diagnose issues:

  1. Check error messages in the dbt logs

  2. Examine a sample of failing records with store_failures: true

  3. Verify test logic by inspecting the compiled SQL in target/compiled/

  4. Use --vars to test with different parameters

By implementing custom tests, you can ensure your transformations meet business requirements and maintain high data quality standards throughout your dbt project.


Last updated 2 months ago
