Variables and Parameters

Variables make your dbt project more dynamic and configurable by letting you pass values at runtime or set them in configuration files. They enable flexible data transformations that adapt to different environments and use cases.

Understanding dbt Variables

Variables in dbt serve two primary purposes:

  1. Make code reusable - Define values once and reference them throughout your project

  2. Enable flexibility - Change behavior without modifying code

There are several ways to define and use variables in dbt:

  • Project variables - Defined in dbt_project.yml

  • Command-line variables - Passed at runtime

  • Environment variables - Accessed via Jinja macros


Defining Variables in dbt_project.yml

The simplest way to define variables is in your dbt_project.yml file:

vars:
  # Simple scalar values
  start_date: '2020-01-01'
  end_date: '2022-12-31'
  
  # Lists
  excluded_countries: ['test', 'demo', 'internal']
  
  # Dictionaries
  partner_sales_targets: {
    'tier1': 1000000,
    'tier2': 500000,
    'tier3': 100000
  }
  
  # Environment-specific variables
  dev:
    row_limit: 100
    debug_mode: true
  prod:
    row_limit: null
    debug_mode: false

These variables become available throughout your project via the var() function.
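Lists and dictionaries defined this way can be indexed with ordinary Jinja. A short illustrative sketch using the variables above (the model and column names are assumptions):

```sql
-- Hypothetical model using the list and dictionary variables defined above
SELECT *
FROM {{ ref('stg_orders') }}
WHERE country NOT IN (
  {% for country in var('excluded_countries') %}
    '{{ country }}'{% if not loop.last %},{% endif %}
  {% endfor %}
)
  AND sales_target >= {{ var('partner_sales_targets')['tier2'] }}
```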


Using Variables in Models

Once defined, you can reference variables in your models using the var() function:

-- models/reporting/monthly_sales.sql
SELECT
  date_trunc('month', order_date) as month,
  SUM(amount) as monthly_sales
FROM {{ ref('stg_orders') }}
WHERE 
  order_date >= '{{ var("start_date") }}'
  AND order_date <= '{{ var("end_date") }}'
  {% if var('row_limit') %}
  LIMIT {{ var('row_limit') }}
  {% endif %}

The var() function has two parameters:

  1. The variable name

  2. An optional default value that's used if the variable isn't defined

-- Using a default value
SELECT * FROM {{ ref('stg_users') }}
WHERE status = '{{ var("user_status", "active") }}'
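To override the default for a single invocation, pass the variable at the command line:

```shell
dbt run --vars '{"user_status": "inactive"}'
```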

Variable Behavior

When you use the var() function:

  • Command-line variables passed with --vars take the highest precedence

  • Otherwise, dbt falls back to the value defined in dbt_project.yml

  • If no value is found and no default is specified, dbt raises a compilation error

  • Note that dbt scopes nested vars blocks by project or package name rather than by environment, so keys like dev and prod in the example above are not applied automatically; a common pattern is to branch on target.name or pass environment-specific values with --vars


Passing Variables at Runtime

For maximum flexibility, pass variables at runtime using the --vars flag:

dbt run --vars '{"start_date": "2023-01-01", "end_date": "2023-03-31"}'

You can pass complex structures too:

dbt run --vars '{"regions": ["north", "south"], "include_test_data": false}'

Runtime variables override any variables defined in dbt_project.yml.
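The --vars flag accepts YAML as well as JSON, so simple cases can use the shorter YAML form; the two commands below are equivalent:

```shell
# JSON form
dbt run --vars '{"start_date": "2023-01-01"}'

# Equivalent YAML shorthand
dbt run --vars 'start_date: 2023-01-01'
```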


Working with Environment Variables

You can access environment variables using the env_var Jinja function:

-- Configuring a model to use environment variables
{{ 
  config(
    schema=env_var('DBT_SCHEMA', 'analytics')
  ) 
}}

SELECT * FROM {{ ref('stg_orders') }}

This is particularly useful for sensitive information (like API keys) or values that vary by environment.

Security Note

Never use env_var() for credentials that should remain secret. These values could be exposed in compiled SQL or logs. Instead, use your platform's secure environment variable handling for credentials.
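If a value must still flow through env_var() (for example a warehouse password in profiles.yml), dbt scrubs environment variables whose names begin with the DBT_ENV_SECRET_ prefix from logs and compiled artifacts; secret-prefixed variables are only permitted in certain contexts such as profiles and package specifications. An illustrative sketch (profile and credential names are assumptions):

```yaml
# profiles.yml (illustrative)
my_profile:
  target: prod
  outputs:
    prod:
      type: snowflake
      # Redacted from dbt logs because of the DBT_ENV_SECRET_ prefix
      password: "{{ env_var('DBT_ENV_SECRET_WAREHOUSE_PASSWORD') }}"
```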


Advanced Variable Techniques

Conditional Logic with Variables

Variables allow you to implement conditional logic in your models:

{% if var('data_source', 'warehouse') == 'api' %}
  SELECT * FROM {{ ref('stg_api_data') }}
{% else %}
  SELECT * FROM {{ ref('stg_warehouse_data') }}
{% endif %}
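You can then choose the code path at runtime:

```shell
# Read from the API staging model for this run
dbt run --vars '{"data_source": "api"}'
```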

Dynamic Filtering

Create flexible filtering based on variable values:

SELECT
  *
FROM {{ ref('stg_transactions') }}
WHERE 1=1
  {% if var('filter_by_date', false) %}
  AND transaction_date BETWEEN '{{ var("start_date") }}' AND '{{ var("end_date") }}'
  {% endif %}
  
  {% if var('filter_by_country', false) %}
  AND country IN (
    {% for country in var('countries', []) %}
      '{{ country }}'{% if not loop.last %},{% endif %}
    {% endfor %}
  )
  {% endif %}
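Enabling both filters for a single run might look like:

```shell
dbt run --vars '{"filter_by_date": true, "start_date": "2023-01-01", "end_date": "2023-03-31", "filter_by_country": true, "countries": ["US", "CA"]}'
```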

Date/Time Variables

A common pattern for incremental models is using variables for date ranges:

{% set run_date = var('run_date', modules.datetime.date.today().strftime('%Y-%m-%d')) %}

SELECT 
  *
FROM {{ source('events', 'daily_events') }}
WHERE 
  event_date = '{{ run_date }}'
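Because run_date defaults to today's date, a scheduled run needs no extra flags, while a backfill simply pins the date:

```shell
# Backfill a specific day
dbt run --vars '{"run_date": "2023-06-01"}'
```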

Best Practices for Variables

  • Set meaningful defaults: Provide sensible default values to make your code more robust

  • Use descriptive names: Choose clear, explicit variable names that explain their purpose

  • Document variables: Add comments in dbt_project.yml to explain each variable's purpose

  • Consistent formatting: Maintain consistent casing and naming conventions

  • Avoid hardcoding: Use variables instead of hardcoding values that might change

Example: Well-Structured Variables

vars:
  # Analysis date range - used for filtering transaction data
  # Format: YYYY-MM-DD
  analysis_start_date: '2023-01-01'  # Inclusive
  analysis_end_date: '2023-12-31'    # Inclusive
  
  # Revenue recognition settings
  rev_rec_delay_days: 14             # Days to delay revenue recognition
  include_refunds: false             # Set to true to include refunded transactions
  
  # Environment-specific settings
  dev:
    debug_mode: true                 # Enables additional logging
    data_sample_pct: 10              # Only process 10% of data in dev
  prod:
    debug_mode: false
    data_sample_pct: 100             # Process all data in prod

Common Use Cases

Environment-Specific Configuration

Define different behavior based on your deployment environment:

# dbt_project.yml
vars:
  dev:
    schema_prefix: 'dev_'
    row_limit: 1000
  prod:
    schema_prefix: ''
    row_limit: null
-- models/model.sql
{{ 
  config(
    schema=var('schema_prefix', 'dev_') ~ 'marketing'
  ) 
}}

SELECT * FROM {{ ref('stg_data') }}
{% if var('row_limit') %}
LIMIT {{ var('row_limit') }}
{% endif %}

Parameterized Reporting

Create reports with customizable parameters:

-- models/daily_sales_report.sql
{% set date_column = var('date_column', 'order_date') %}
{% set granularity = var('granularity', 'day') %}

SELECT
  DATE_TRUNC('{{ granularity }}', {{ date_column }}) as period,
  SUM(amount) as sales
FROM {{ ref('fct_orders') }}
GROUP BY 1
ORDER BY 1

Then run with different settings:

dbt run --select daily_sales_report --vars '{"granularity": "month", "date_column": "shipped_date"}'

By effectively using variables in your dbt project, you create more flexible, maintainable, and reusable data transformations that can easily adapt to different needs and environments without code changes.
