Unit Testing

Unit testing in dbt allows you to validate your SQL transformation logic using controlled input data. Unlike traditional data tests that verify the quality of existing data, unit tests help you catch logical errors during development, bringing the test-driven development approach to data transformations.


Understanding dbt Unit Tests

Unit tests in dbt help you verify that your transformations produce expected outputs given specific input data. This ensures your business logic is correct before you deploy it to production.

Unit testing is available in dbt Core v1.8 and above.


Why Unit Testing Matters

Traditional data tests (like not_null, unique) validate the quality of data that already exists. Unit tests serve a different and complementary purpose:

| Unit Tests | Data Tests |
| --- | --- |
| Validate transformation logic | Validate data quality |
| Run before building models | Run after models are built |
| Use controlled test data | Use actual production data |
| Focus on business logic correctness | Focus on data integrity |
| Help with test-driven development | Help with data quality assurance |

Unit tests provide several benefits:

  • Test before building: Validate logic without materializing models

  • Verify transformations: Ensure your SQL logic handles edge cases correctly

  • Support test-driven development: Write tests first, then implement the model

  • Catch errors early: Find bugs before they reach production

  • Improve code reliability: Maintain confidence during refactoring


When to Use Unit Tests

Unit tests are particularly valuable when your models include:

  • Complex transformations: Regular expressions, date calculations, window functions

  • Critical business logic: Calculations that impact important metrics

  • Known edge cases: Scenarios that have caused issues in the past

  • Models undergoing refactoring: Changes to existing transformation logic

  • Frequently used models: Core models that many others depend on


Unit Test Structure

A dbt unit test consists of three essential parts:

  1. Mock inputs: Sample data for source tables and referenced models

  2. Model under test: The model whose logic you want to validate

  3. Expected outputs: The exact results you expect after transformation

Basic Example

Here's a simple unit test defined in YAML:

unit_tests:
  - name: test_orders_status_counts
    model: order_status_summary
    given:
      # Define input data for the model's dependencies
      - input: ref('stg_orders')
        rows:
          - {order_id: 1, status: 'completed'}
          - {order_id: 2, status: 'pending'}
          - {order_id: 3, status: 'completed'}
          - {order_id: 4, status: 'shipped'}
    expect:
      # Define expected output data
      rows:
        - {status: 'completed', count: 2}
        - {status: 'pending', count: 1}
        - {status: 'shipped', count: 1}
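For reference, a model consistent with this test might look like the following (a hypothetical sketch; your actual `order_status_summary` logic may differ):

```sql
-- models/order_status_summary.sql (illustrative)
select
  status,
  count(*) as count
from {{ ref('stg_orders') }}
group by status
```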

Creating Unit Tests

Unit tests are defined in YAML files within your models directory, typically alongside the model they're testing.

Defining Input Data

Mock input data can be provided in several formats:

Dictionary Format (default)

given:
  - input: ref('stg_customers')
    rows:
      - {customer_id: 1, name: 'John Doe', status: 'active'}
      - {customer_id: 2, name: 'Jane Smith', status: 'inactive'}

CSV Format

given:
  - input: ref('stg_customers')
    format: csv
    rows: |
      customer_id,name,status
      1,"John Doe",active
      2,"Jane Smith",inactive

SQL Format

given:
  - input: ref('stg_customers')
    format: sql
    rows: |
      select 1 as customer_id, 'John Doe' as name, 'active' as status
      union all
      select 2 as customer_id, 'Jane Smith' as name, 'inactive' as status

Defining Expected Output

You can specify expected outputs in different ways:

Rows (most common)

expect:
  rows:
    - {customer_id: 1, status: 'active'}
    - {customer_id: 2, status: 'inactive'}

Column Values

expect:
  columns:
    - name: status
      values: ['active', 'inactive']

Row Count

expect:
  row_count: 2

Example: Testing a Customer Classification Model

Here's a complete example for a model that classifies customers based on spending:

The model being tested:

-- models/customer_segments.sql
SELECT
  customer_id,
  name,
  total_spend,
  CASE
    WHEN total_spend >= 1000 THEN 'high'
    WHEN total_spend >= 500 THEN 'medium'
    ELSE 'low'
  END as spending_segment
FROM {{ ref('stg_customers') }}

The unit test:

# models/customer_segments_tests.yml
unit_tests:
  - name: test_customer_segments_classification
    model: customer_segments
    given:
      - input: ref('stg_customers')
        rows:
          - {customer_id: 1, name: 'Customer A', total_spend: 1200}
          - {customer_id: 2, name: 'Customer B', total_spend: 750} 
          - {customer_id: 3, name: 'Customer C', total_spend: 300}
    expect:
      rows:
        - {customer_id: 1, name: 'Customer A', total_spend: 1200, spending_segment: 'high'}
        - {customer_id: 2, name: 'Customer B', total_spend: 750, spending_segment: 'medium'}
        - {customer_id: 3, name: 'Customer C', total_spend: 300, spending_segment: 'low'}

Running Unit Tests

To run unit tests, use the dbt test command with appropriate selectors:

# Run all unit tests
dbt test --select test_type:unit

# Run unit tests for a specific model
dbt test --select my_model,test_type:unit

# Run a specific unit test
dbt test --select my_specific_unit_test

Testing Special Cases

Incremental Models

When testing incremental models, you can override the is_incremental() macro to test both full refresh and incremental scenarios:

unit_tests:
  - name: test_incremental_full_refresh
    model: my_incremental_model
    overrides:
      macros:
        is_incremental: false
    # Test data here...

  - name: test_incremental_update
    model: my_incremental_model
    overrides:
      macros:
        is_incremental: true
    # Test data here including 'this' input...

For the incremental update test, you need to provide mock data for the existing table:

given:
  - input: this  # Special reference to the current model
    rows:
      - {id: 1, value: 'existing', updated_at: '2023-01-01'}
      - {id: 2, value: 'existing', updated_at: '2023-01-01'}
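Expanding the `test_incremental_update` stub above, a complete incremental update test might look like this sketch. It assumes a hypothetical model that selects rows from `ref('stg_events')` newer than the latest `updated_at` in `{{ this }}`; model, column, and source names are illustrative:

```yaml
unit_tests:
  - name: test_incremental_update
    model: my_incremental_model
    overrides:
      macros:
        is_incremental: true
    given:
      - input: this  # mock of the existing table
        rows:
          - {id: 1, value: 'existing', updated_at: '2023-01-01'}
      - input: ref('stg_events')
        rows:
          - {id: 1, value: 'updated', updated_at: '2023-02-01'}
          - {id: 2, value: 'new', updated_at: '2023-02-01'}
    expect:
      # Only rows newer than the mocked max(updated_at) should be returned
      rows:
        - {id: 1, value: 'updated', updated_at: '2023-02-01'}
        - {id: 2, value: 'new', updated_at: '2023-02-01'}
```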

Ephemeral Models

To test models that depend on ephemeral models, use SQL format for the input:

unit_tests:
  - name: test_model_with_ephemeral_dependency
    model: my_model
    given:
      - input: ref('ephemeral_model')
        format: sql
        rows: |
          select 1 as id, 'test' as name
    # Expected output here...

Testing Macros

You can override macro implementations for testing:

unit_tests:
  - name: test_with_custom_macro
    model: my_model
    overrides:
      macros:
        get_current_timestamp: return('2023-09-15')
    # Test data here...

Limitations & Best Practices

dbt's unit testing framework has some limitations to be aware of:

{% hint style="warning" %}
**Current Limitations**

  • Only supports SQL models (not Python models)

  • Can only test models in your current project

  • Doesn't support materialized views

  • Doesn't support recursive SQL or introspective queries

  • Requires all table names to be aliased in join logic
{% endhint %}

Best Practices for Effective Unit Testing

| Practice | Description |
| --- | --- |
| Focus on logic, not functions | Test your business logic rather than built-in database functions |
| Use descriptive test names | Clearly explain what each test is verifying |
| Test edge cases | Include unusual scenarios your logic needs to handle |
| Only mock what's needed | Define only the columns relevant to your test |
| Run in development | Use unit tests during development, not in production |
| Use in CI/CD | Integrate unit tests into your CI/CD pipeline |
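To apply the "Use in CI/CD" practice, you could run unit tests as an early pipeline step so logic errors fail fast, before any models are built. A minimal GitHub Actions sketch (the workflow structure, adapter, and profile setup are assumptions; adapt them to your project):

```yaml
# .github/workflows/dbt-ci.yml (illustrative)
jobs:
  dbt-ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install dbt
        run: pip install dbt-core dbt-snowflake  # swap in your adapter
      - name: Install dbt packages
        run: dbt deps
      - name: Run unit tests first
        run: dbt test --select test_type:unit
      - name: Build and test models
        run: dbt build
```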


Test-Driven Development with dbt

Unit tests enable a test-driven development (TDD) workflow for your data transformations:

  1. Write a test: Define the expected behavior

  2. Run the test: It should fail because the model doesn't exist or doesn't handle the case yet

  3. Implement the model: Create or modify the model to pass the test

  4. Verify: Run the test again to confirm it passes

  5. Refactor: Clean up your implementation while keeping the tests passing

This approach helps ensure your models correctly implement business requirements from the start.
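To illustrate step 1, you could extend the earlier `customer_segments` example with a test for behavior the model doesn't yet handle, such as a NULL spend mapping to a hypothetical 'unknown' segment. The test fails until you add the corresponding CASE branch to the model:

```yaml
# Step 1: write the test first - it fails until the model handles NULLs
unit_tests:
  - name: test_null_spend_is_unknown
    model: customer_segments
    given:
      - input: ref('stg_customers')
        rows:
          - {customer_id: 4, name: 'Customer D', total_spend: null}
    expect:
      rows:
        - {customer_id: 4, name: 'Customer D', total_spend: null, spending_segment: 'unknown'}
```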

By adopting unit testing in your dbt workflow, you can catch issues earlier, document model behavior, and build more confidence in your data transformations.

