Testing Data Quality

Ensuring data quality is critical for any analytics pipeline. dbt provides built-in testing capabilities that help catch issues early, enforce data integrity, and maintain confidence in your transformations. This guide explains how to implement tests in your dbt project.

Why Testing Matters in dbt

Data tests serve several essential purposes:

  • Validate assumptions about your data

  • Catch errors before they impact downstream consumers

  • Document expectations about data properties

  • Ensure consistency across transformations

Without testing, issues can creep into your data pipeline, potentially leading to incorrect business decisions or loss of trust in your analytics.

Benefits of dbt Testing

  • Ensures Data Integrity – Prevents duplicates, null values, and referential mismatches.

  • Validates Business Logic – Confirms that data meets expected criteria.

  • Catches Issues Early – Detects errors before they affect downstream analytics.

  • Automates Quality Checks – Reduces the need for manual data validation.

  • Supports Collaboration – Helps teams align on data expectations.


Types of dbt Tests

dbt supports two main types of tests:

1. Generic Tests (Built-in)

Generic tests are reusable test definitions that can be applied to multiple models and columns. dbt includes four built-in generic tests:

| Test | Purpose | Example Use |
| --- | --- | --- |
| unique | Ensures a column has no duplicate values | Primary keys, email addresses |
| not_null | Ensures a column contains no NULL values | Required fields, join keys |
| accepted_values | Validates that column values are within a specified list | Status fields, categories |
| relationships | Ensures referential integrity between tables | Foreign keys, dimensional references |

2. Singular Tests

Singular tests are one-off SQL queries that encode custom test logic. Each test is a query that returns the records that fail the check; the test passes when the query returns no rows.


Adding Tests to Your Models

Tests in dbt are typically defined in YAML files alongside your models.

Generic Tests Example

# models/schema.yml
version: 2

models:
  - name: customers
    columns:
      - name: customer_id
        tests:
          - unique
          - not_null
      - name: email
        tests:
          - unique
          - not_null
      - name: status
        tests:
          - accepted_values:
              values: ['active', 'inactive', 'pending']
      - name: country_id
        tests:
          - relationships:
              to: ref('countries')
              field: id

This YAML configuration:

  • Tests that customer_id values are unique and not null

  • Tests that email values are unique and not null

  • Tests that status values are only 'active', 'inactive', or 'pending'

  • Tests that each country_id exists in the countries table

Singular Test Example

Singular tests are SQL files in the tests/ directory:

-- tests/assert_total_payment_amount_matches_order_amount.sql
-- This test checks that payment amounts sum to order amounts
SELECT
  orders.order_id,
  orders.amount AS order_amount,
  SUM(COALESCE(payments.amount, 0)) AS payment_amount
FROM {{ ref('orders') }} AS orders
LEFT JOIN {{ ref('payments') }} AS payments
  ON orders.order_id = payments.order_id
GROUP BY orders.order_id, orders.amount
HAVING ABS(orders.amount - SUM(COALESCE(payments.amount, 0))) > 0.01

This test returns orders whose summed payments differ from the order amount by more than a small rounding tolerance; any returned rows cause the test to fail.


Running Tests

dbt makes it easy to run tests as part of your workflow.

| Operation | Command | Description |
| --- | --- | --- |
| Running all tests | dbt test | Runs all tests in your dbt project |
| Testing specific models | dbt test --select customers | Runs all tests associated with a specific model |
| Running a single test | dbt test --select test_name | Runs a specific test by name |
| Testing critical models only | dbt test --select tag:critical | Runs tests only for models tagged as 'critical' |
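
Selection syntax composes, so you can target precise slices of your project. A few illustrative invocations:

# Run tests for the customers model and everything downstream of it
dbt test --select customers+

# Run only generic (schema) tests, excluding work-in-progress models
dbt test --select test_type:generic --exclude tag:wip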


Test Configuration Options

You can configure how tests behave using additional parameters.

Setting Severity Levels

Tests can be configured to raise warnings instead of errors:

models:
  - name: orders
    columns:
      - name: status
        tests:
          - accepted_values:
              values: ['completed', 'shipped', 'returned']
              severity: warn  # Won't cause pipelines to fail
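
Severity can also be conditional. The warn_if and error_if configs (standard dbt test configs) set thresholds on the number of failing records:

models:
  - name: orders
    columns:
      - name: order_id
        tests:
          - not_null:
              config:
                warn_if: ">10"    # warn once more than 10 records fail
                error_if: ">100"  # error once more than 100 records fail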

Store Test Failures

Save test failures for analysis:

models:
  - name: large_table
    columns:
      - name: id
        tests:
          - unique:
              config:
                store_failures: true  # Saves failures to a table
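
With store_failures enabled, dbt writes the failing rows to your warehouse, by default into an audit schema named after your target schema with a dbt_test__audit suffix. The identifiers below are illustrative, so check your run logs for the exact relation name:

-- Illustrative: inspect stored failures for the unique test on large_table.id
select *
from analytics_dbt_test__audit.unique_large_table_id
limit 50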

Limiting Failure Volume

Control how many failures are reported:

models:
  - name: events
    columns:
      - name: event_id
        tests:
          - unique:
              config:
                limit: 100  # Only show first 100 failures

Creating Custom Generic Tests

You can extend dbt's testing capabilities by writing custom generic tests. In dbt 1.x these are defined as {% test %} blocks, conventionally stored under tests/generic/:

-- tests/generic/is_valid_email.sql
{% test is_valid_email(model, column_name) %}

select *
from {{ model }}
where
    {{ column_name }} is not null
    -- regexp_like is Snowflake/Oracle syntax; substitute your warehouse's regex function
    and not regexp_like(
        {{ column_name }},
        '^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}$'
    )

{% endtest %}

Then use it just like built-in tests:

models:
  - name: customers
    columns:
      - name: email
        tests:
          - is_valid_email
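
Custom generic tests can also take arguments beyond model and column_name. A hypothetical sketch with a configurable row-count threshold:

-- tests/generic/at_least_n_rows.sql (hypothetical example)
{% test at_least_n_rows(model, n=1) %}

-- Returns one failing row when the model has fewer than n rows,
-- and zero rows (a passing test) otherwise
select row_count
from (select count(*) as row_count from {{ model }}) as counts
where row_count < {{ n }}

{% endtest %}

Applied at the model level:

models:
  - name: customers
    tests:
      - at_least_n_rows:
          n: 100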

Test Organization Strategies

As your project grows, organizing tests becomes important:

Test by Business Domain

Group tests alongside the models they validate:

models/
├── marketing/
│   ├── schema.yml      # Contains tests for marketing models
│   ├── campaigns.sql
│   └── ad_performance.sql
└── finance/
    ├── schema.yml      # Contains tests for finance models
    ├── transactions.sql
    └── accounts.sql

Centralized Tests

Maintain all tests in a dedicated location:

models/
└── ...
tests/
├── generic/            # Custom generic tests
│   └── is_valid_email.sql
└── singular/           # Singular tests
    ├── marketing/
    │   └── campaign_consistency.sql
    └── finance/
        └── transaction_reconciliation.sql

Troubleshooting Failed Tests

When tests fail, dbt provides information to help diagnose the issue:

  1. Review test SQL: dbt outputs the SQL for the failing test

  2. Examine failing records: Look at examples of failing records

  3. Check compiled SQL: Review the compiled test SQL in target/compiled/

  4. Store failures: Use store_failures: true to analyze failure patterns
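
For step 3, the compiled SQL can be copied straight into your warehouse console to inspect failing rows. The path below is illustrative; it mirrors your project name and YAML file layout:

# Illustrative path to a compiled generic test
cat target/compiled/my_project/models/schema.yml/not_null_orders_order_id.sql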


Test Output Examples

Passing Tests

18:42:10 | 1 of 4 START test not_null_orders_order_id ............................ [RUN]
18:42:10 | 1 of 4 PASS not_null_orders_order_id ................................. [PASS in 0.08s]
18:42:10 | 2 of 4 START test unique_orders_order_id ............................. [RUN]
18:42:10 | 2 of 4 PASS unique_orders_order_id ................................... [PASS in 0.10s]

Failing Tests

18:42:11 | 3 of 4 START test accepted_values_orders_status_completed__shipped__returned ... [RUN]
18:42:11 | 3 of 4 FAIL accepted_values_orders_status_completed__shipped__returned ... [FAIL in 0.15s]
18:42:11 | 4 of 4 START test relationships_orders_customer_id__customer_id__ref_customers_ ... [RUN]
18:42:11 | 4 of 4 FAIL relationships_orders_customer_id__customer_id__ref_customers_ ... [FAIL in 0.19s]

For failing tests, dbt shows details about the failures:

Failure in test relationships_orders_customer_id__customer_id__ref_customers_
Got 2 results, expected 0.

compiled SQL at target/compiled/.../relationships_orders_customer_id__customer_id__ref_customers_.sql

Best Practices for dbt Testing

Test Coverage Strategy

  • Test primary keys with unique and not_null

  • Test foreign keys with relationships

  • Test categorical fields with accepted_values

  • Test business logic with singular tests

  • Focus on testing critical data first (see the tagging sketch after this list)
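
A minimal sketch of the tagging referenced above, using standard dbt YAML config so that dbt test --select tag:critical picks up the model:

models:
  - name: fct_orders    # hypothetical model name
    config:
      tags: ['critical']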

Test Organization

  • Use a consistent naming convention for singular tests

  • Group related tests together

  • Document what each test validates

Test Execution

  • Run tests before finalizing model changes

  • Include tests in CI/CD pipelines (a minimal sketch follows this list)

  • Alert on test failures in production
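
A minimal GitHub Actions sketch of the CI step referenced above; the adapter, Python version, and profiles location are assumptions to adapt to your stack:

# .github/workflows/dbt-ci.yml -- minimal sketch, not a drop-in config
name: dbt-ci
on: pull_request
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install dbt-snowflake   # swap for your warehouse adapter
      - run: dbt deps
      - run: dbt build                   # builds and tests; a test failure fails the check
        env:
          DBT_PROFILES_DIR: ./ci         # assumes a CI profiles.yml committed at ./ci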

Test Maintainability

  • Prefer generic tests for common validations

  • Create custom generic tests for repeated patterns

  • Use macros to generate complex test logic

By implementing a robust testing strategy in dbt, you can ensure your data transformations maintain high quality and reliability, building trust in your analytics data among stakeholders.
