Paradime Help Docs
Macros

Learn how to use dbt™ macros to create reusable SQL logic and build modular transformations. This guide covers creating macros, implementing advanced techniques, and leveraging dbt packages to extend your project's capabilities.

Macros are powerful features that allow you to create reusable code patterns and implement dynamic SQL generation in your dbt projects. This guide will help you understand how to use macros to make your dbt projects more maintainable, consistent, and flexible.

What Are Macros?

Macros are reusable pieces of code that let you eliminate repetition, create project-wide standards, and abstract complex logic. Think of macros as functions in traditional programming languages that can be called from other macros, models, or schema files.

Macros enable you to:

  • Abstract complex SQL logic into reusable functions

  • Create project-wide standards for common calculations

  • Implement conditional logic in your SQL code

  • Generate SQL dynamically based on parameters
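
As a minimal sketch of the idea (the macro and column names here are hypothetical), a macro can replace an expression you would otherwise repeat across models:

-- macros/cents_to_dollars.sql
{% macro cents_to_dollars(column_name, decimal_places=2) %}
    round({{ column_name }} / 100.0, {{ decimal_places }})
{% endmacro %}

-- In any model:
SELECT
    order_id,
    {{ cents_to_dollars('amount_cents') }} AS amount_usd
FROM {{ ref('stg_payments') }}

If the rounding logic ever changes, you update it once in the macro rather than in every model that uses it.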


Creating Your First Macro

Macros are defined in .sql files within the macros directory of your dbt project. A basic macro follows this structure:

{% macro macro_name(parameter1, parameter2, ...) %}
    
    -- SQL code and Jinja logic goes here
    
    {% if parameter1 > 0 %}
        SELECT {{ parameter1 }} + {{ parameter2 }}
    {% else %}
        SELECT {{ parameter2 }}
    {% endif %}
    
{% endmacro %}

Example: Creating a Date Dimension Macro

Here's a practical example of a macro that generates a date dimension table:

{% macro generate_date_dimension(start_date, end_date) %}

    WITH date_spine AS (
        {{ dbt_utils.date_spine(
            datepart="day",
            start_date="cast('" ~ start_date ~ "' as date)",
            end_date="cast('" ~ end_date ~ "' as date)"
        ) }}
    ),
    
    dates AS (
        SELECT
            cast(date_day as date) as date_day,
            extract(year from date_day) as year,
            extract(month from date_day) as month,
            extract(day from date_day) as day_of_month,
            extract(dayofweek from date_day) as day_of_week, -- note: this datepart name varies by warehouse (e.g. dow in PostgreSQL)
            extract(quarter from date_day) as quarter
        FROM date_spine
    )
    
    SELECT * FROM dates

{% endmacro %}

This macro leverages the date_spine utility from dbt_utils to create a complete date dimension table with various date attributes.

Using Macros in Your Models

To use a macro in a model, you simply call it using the Jinja templating syntax:

-- models/dim_date.sql
{{
    config(
        materialized='table'
    )
}}

{{ generate_date_dimension('2020-01-01', '2025-12-31') }}

When dbt runs this model, it will replace the macro call with the SQL generated by the macro, creating a date dimension table for the specified date range.


Advanced Macro Techniques

Macro Organization

For larger projects, organizing macros in subdirectories helps maintain a clean structure:

macros/
  ├── date_utils/
  │   ├── generate_date_dimension.sql
  │   └── fiscal_year_dates.sql
  ├── string_utils/
  │   ├── clean_string.sql
  │   └── standardize_phone.sql
  └── metrics/
      ├── calculate_revenue.sql
      └── customer_lifetime_value.sql

Using Control Structures

Macros support Jinja's control structures for advanced logic:

{% macro dynamic_pivot(table_name, group_by_columns, pivot_column, value_column) %}

    {% set group_by_str = group_by_columns | join(', ') %}
    
    {% set query %}
        SELECT DISTINCT {{ pivot_column }} 
        FROM {{ table_name }}
        ORDER BY 1
    {% endset %}
    
    {% if execute %}
        {% set results = run_query(query) %}
        {% set pivot_values = results.columns[0].values() %}
    {% else %}
        {% set pivot_values = [] %}
    {% endif %}
    
    SELECT
        {{ group_by_str }},
        {% for value in pivot_values %}
            SUM(CASE WHEN {{ pivot_column }} = '{{ value }}' THEN {{ value_column }} ELSE 0 END) AS "{{ value }}"
            {% if not loop.last %},{% endif %}
        {% endfor %}
    FROM {{ table_name }}
    GROUP BY {{ group_by_str }}
    
{% endmacro %}

This advanced macro dynamically creates a pivot table based on values found in your data at runtime.
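
As a usage sketch (assuming a hypothetical stg_orders model with customer_id, status, and amount columns):

-- models/orders_by_status.sql
{{ dynamic_pivot(
    table_name=ref('stg_orders'),
    group_by_columns=['customer_id'],
    pivot_column='status',
    value_column='amount'
) }}

Each distinct value of status becomes its own summed column in the compiled SQL.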

Using Macros Effectively

  • Keep macros focused – Each macro should do one thing well

  • Document your macros – Add comments explaining parameters and usage

  • Use Jinja's execute flag – Code inside {% if execute %} runs only when dbt can execute queries against the warehouse, not during the initial parse of your project

  • Test macros thoroughly – Create models specifically for testing macro functionality

  • Use default parameters – Make macros flexible while providing sensible defaults
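
For instance, a hypothetical macro with a default parameter lets callers override behavior only when they need to:

{% macro safe_divide(numerator, denominator, fallback=0) %}
    coalesce({{ numerator }} / nullif({{ denominator }}, 0), {{ fallback }})
{% endmacro %}

-- Uses the default fallback of 0:
{{ safe_divide('revenue', 'order_count') }}

-- Overrides the default:
{{ safe_divide('revenue', 'order_count', fallback='null') }}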


Working with dbt Packages

dbt packages let you leverage pre-built macros created by the community. They're an excellent way to avoid reinventing the wheel.

Installing Packages

To use packages, define them in a packages.yml file in your project root:

packages:
  - package: dbt-labs/dbt_utils
    version: 1.1.1
  - package: calogica/dbt_expectations
    version: 0.8.5

Then install the packages using:

dbt deps

Popular Packages for Macros

  • dbt-utils – General utility macros: string operations, date handling, cross-database functions

  • dbt-date – Date and calendar functionality: date spines, fiscal periods, date utilities

  • dbt-ml – Machine learning functionality: feature engineering, model scoring

  • dbt-codegen – Code generation tools: auto-generate models, sources, base models

Using Package Functions

Once installed, you can use package functions in your models:

-- Example using dbt_utils generate_surrogate_key
SELECT
    {{ dbt_utils.generate_surrogate_key(['customer_id', 'order_date']) }} as order_key,
    customer_id,
    order_date,
    amount
FROM {{ ref('stg_orders') }}

Real-World Examples

Financial Calculations

{% macro calculate_margin(revenue, cost) %}
    ({{ revenue }} - {{ cost }}) / NULLIF({{ revenue }}, 0)
{% endmacro %}
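
In a model, this might be called as follows (the column and model names are illustrative):

SELECT
    order_id,
    revenue,
    cost,
    {{ calculate_margin('revenue', 'cost') }} AS margin
FROM {{ ref('stg_orders') }}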

Surrogate Key Generation

{% macro generate_surrogate_key(field_list) %}
    {% set fields = [] %}
    {% for field in field_list %}
        {% do fields.append("coalesce(cast(" ~ field ~ " as string), '')") %}
    {% endfor %}
    md5({{ fields|join(" || '-' || ") }})
{% endmacro %}

Environment-Based Configuration

{% macro get_schema_prefix() %}
    {%- if target.name == 'prod' -%}
        prod_
    {%- elif target.name == 'dev' -%}
        dev_{{ target.schema }}_
    {%- else -%}
        {{ target.schema }}_
    {%- endif -%}
{% endmacro %}

Best Practices for Macros

  • Keep Macros Focused – Each macro should do one thing well; avoid overly complex macros with many responsibilities

  • Document Your Macros – Add comments explaining purpose, parameters, and return values, and include examples of how to use the macro

  • Test Your Macros – Create models specifically for testing macro functionality, and use assertions to verify macro outputs

  • Handle Edge Cases – Ensure your macros handle null values appropriately, and account for empty tables and edge conditions

  • Use Default Parameters – Make macros flexible with sensible defaults that can be overridden when needed

  • Leverage Return Values – Use return() to pass values back from macros, and chain macros together for complex operations
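
The return() pattern can look like this (a hypothetical list-returning macro):

{% macro get_payment_methods() %}
    {{ return(['credit_card', 'paypal', 'bank_transfer']) }}
{% endmacro %}

-- In another macro or model:
{% for method in get_payment_methods() %}
    SUM(CASE WHEN payment_method = '{{ method }}' THEN amount END) AS {{ method }}_amount
    {% if not loop.last %},{% endif %}
{% endfor %}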

Pro Tip: Debugging Macros

When troubleshooting macros:

  1. Use dbt compile to see the generated SQL without running it

  2. Check the compiled SQL in the target/compiled/ directory

  3. Add {{ log("Debug message") }} within macros for debugging

  4. Use {% if execute %} to separate parse-time from execution-time logic
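
For example, log() with info=True prints to the console as well as the log file, and the execute flag guards any query-dependent logic (the macro name here is illustrative):

{% macro debug_row_count(relation) %}
    {% if execute %}
        {% set result = run_query("SELECT count(*) FROM " ~ relation) %}
        {{ log("Row count for " ~ relation ~ ": " ~ result.columns[0].values()[0], info=True) }}
    {% endif %}
{% endmacro %}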

By mastering macros, you can create more maintainable, consistent, and flexible data transformations throughout your dbt project.


Last updated 3 months ago
