Models and Transformations

Models are the core building blocks of your dbt project. They define the transformations that turn raw data into analytics-ready datasets using SQL. This guide covers how to create models, manage dependencies, and leverage dbt's templating capabilities.


What Are Models?

In dbt, a model is a SQL file that defines a transformation. When you run dbt, it compiles these SQL files into executable queries and runs them against your data warehouse, creating views or tables.

Models serve three key purposes:

  1. Transform data into useful analytics structures

  2. Document transformations with clear SQL

  3. Create dependencies between different data assets

Each model typically results in a single table or view in your data warehouse.


Creating Your First Model

A model is simply a .sql file in your models/ directory. Let's start with a basic example:

-- models/staging/stg_customers.sql
SELECT
  id as customer_id,
  first_name,
  last_name,
  email,
  date_joined
FROM {{ source('jaffle_shop', 'customers') }}

When you run dbt run, this SQL gets compiled and executed in your data warehouse, creating a view called stg_customers with transformed customer data.
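For illustration, the compiled query dbt generates for this model might look roughly like the following. The database and schema names here (analytics.dbt_dev, raw) are assumed for the sketch; the real names come from your connection profile.

```sql
-- Approximate compiled SQL (database/schema names are illustrative)
CREATE VIEW analytics.dbt_dev.stg_customers AS
SELECT
  id AS customer_id,
  first_name,
  last_name,
  email,
  date_joined
FROM raw.jaffle_shop.customers;
```

The {{ source(...) }} call has been resolved to a concrete table name, and dbt has wrapped your SELECT in the DDL needed to create the view.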


Using Common Table Expressions (CTEs)

CTEs make your models more readable and maintainable by breaking complex queries into logical building blocks. They're a powerful way to structure your transformations:

-- models/marts/customer_orders.sql
WITH customers AS (
    SELECT * FROM {{ ref('stg_customers') }}
),

orders AS (
    SELECT * FROM {{ ref('stg_orders') }}
),

customer_orders AS (
    SELECT
        customer_id,
        COUNT(order_id) as order_count,
        SUM(amount) as total_spent
    FROM orders
    GROUP BY customer_id
)

SELECT
    customers.customer_id,
    customers.first_name,
    customers.last_name,
    customers.email,
    COALESCE(customer_orders.order_count, 0) as order_count,
    COALESCE(customer_orders.total_spent, 0) as total_spent
FROM customers
LEFT JOIN customer_orders USING (customer_id)

CTEs offer several benefits:

  • Improve readability by breaking complex logic into named sections

  • Allow you to reuse intermediate calculations

  • Make troubleshooting easier by separating transformation steps
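One practical consequence of that last point: while developing, you can temporarily point the final SELECT at any single CTE to inspect that step in isolation. A debugging sketch (not something to commit):

```sql
WITH orders AS (
    SELECT * FROM {{ ref('stg_orders') }}
),

customer_orders AS (
    SELECT
        customer_id,
        COUNT(order_id) as order_count,
        SUM(amount) as total_spent
    FROM orders
    GROUP BY customer_id
)

-- Swap the final SELECT to verify the intermediate step
SELECT * FROM customer_orders
```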


Model Dependencies with ref()

The ref() function is one of dbt's most powerful features. It allows you to reference other models, automatically creating dependencies:

-- models/marts/customer_lifetime_value.sql
WITH customer_orders AS (
    SELECT * FROM {{ ref('customer_orders') }}
)

SELECT
    customer_id,
    total_spent,
    total_spent * 0.15 as estimated_future_value,
    total_spent * 1.15 as lifetime_value
FROM customer_orders

When you use ref():

  1. dbt automatically determines the correct schema and table name

  2. dbt builds a dependency graph, ensuring models run in the correct order

  3. dbt creates lineage documentation for your project

A key benefit is that references are resolved at compile time: when your models move between schemas, databases, or environments, dbt updates every reference for you with no changes to your SQL.


Model Configuration with config()

The config() function lets you control how a model is materialized and other settings:

-- models/marts/large_summary_table.sql
{{ 
  config(
    materialized='table',
    sort='date_day',
    dist='customer_id'
  ) 
}}

SELECT
  date_trunc('day', created_at) as date_day,
  customer_id,
  sum(amount) as total_amount
FROM {{ ref('stg_orders') }}
GROUP BY 1, 2

Common configuration options include:

  • materialized: How the model should be created ('view', 'table', 'incremental', 'ephemeral')

  • schema: Which schema the model should be created in

  • tags: Labels to organize and select models

  • Database-specific options (like sort, dist, cluster_by, etc.)
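The same options can also be set for whole directories in dbt_project.yml instead of per-model config() blocks. A sketch, assuming a project named jaffle_shop with staging/ and marts/ folders:

```yaml
# dbt_project.yml (illustrative folder-level configuration)
models:
  jaffle_shop:
    staging:
      +materialized: view
    marts:
      +materialized: table
      +tags: ['daily']
```

A config() block inside a model file takes precedence over these project-level defaults, so you can set broad defaults per folder and override individual models as needed.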


Using Jinja for Dynamic SQL

Jinja is a templating language that allows you to generate dynamic SQL. dbt uses Jinja to make your transformations more flexible and reusable.

Conditional Logic

Use if/else statements to adapt your SQL based on conditions:

SELECT
  order_id,
  order_date,
  {% if target.name == 'prod' %}
    amount
  {% else %}
    amount * 100 as amount_in_cents
  {% endif %}
FROM {{ ref('stg_orders') }}

Looping

Generate repetitive SQL using for loops:

SELECT
  order_id,
  {% for i in range(1, 5) %}
  item_{{ i }}_id,
  item_{{ i }}_quantity{% if not loop.last %},{% endif %}
  {% endfor %}
FROM {{ ref('stg_order_items') }}

Note that range(1, 5) yields 1 through 4, and the loop.last guard prevents a trailing comma after the final column.

Variables

Use variables to make your models configurable:

-- Using a variable defined in dbt_project.yml or passed via --vars
SELECT *
FROM {{ ref('stg_orders') }}
WHERE order_date >= '{{ var("start_date", "2020-01-01") }}'
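The second argument to var() ("2020-01-01" above) is the default, used when no value is supplied. You can set a project-wide value in dbt_project.yml; a sketch, assuming the variable is named start_date:

```yaml
# dbt_project.yml (illustrative)
vars:
  start_date: "2020-01-01"
```

A value passed on the command line, such as dbt run --vars '{"start_date": "2024-06-01"}', takes precedence over the project-level value.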

Model Organization Best Practices

Organize your models to reflect their purpose in your analytics pipeline:

Staging Models

Staging models clean and standardize source data:

  • Naming and datatype standardization

  • Simple filtering

  • One-to-one relationship with source tables

  • Typically materialized as views

-- models/staging/stg_customers.sql
SELECT
  id as customer_id,
  first_name,
  last_name,
  email,
  -- Parse the string into a proper DATE (BigQuery syntax)
  PARSE_DATE('%Y-%m-%d', date_joined) as date_joined
FROM {{ source('jaffle_shop', 'customers') }}
WHERE id IS NOT NULL
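The source('jaffle_shop', 'customers') calls in these staging models assume a matching source definition in a YAML file. A minimal sketch (the raw schema name is assumed):

```yaml
# models/staging/sources.yml (illustrative)
version: 2

sources:
  - name: jaffle_shop
    schema: raw
    tables:
      - name: customers
      - name: orders
```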

Intermediate Models

Intermediate models combine and transform staging models:

  • Join related data sources

  • Apply business logic

  • Create reusable building blocks

  • Typically materialized as views

-- models/intermediate/int_customer_orders.sql
SELECT
  o.order_id,
  o.customer_id,
  c.email,
  o.order_date,
  o.status,
  o.amount
FROM {{ ref('stg_orders') }} o
JOIN {{ ref('stg_customers') }} c ON o.customer_id = c.customer_id

Mart Models

Mart models prepare data for business consumption:

  • Oriented around business entities (customers, products, etc.)

  • Optimized for specific use cases

  • Include calculated metrics

  • Often materialized as tables for performance

-- models/marts/finance/order_payment_summary.sql
{{
  config(
    materialized='table'
  )
}}

SELECT
  date_trunc('month', o.order_date) as order_month,
  p.payment_method,
  count(distinct o.order_id) as order_count,
  sum(o.amount) as total_amount
FROM {{ ref('int_customer_orders') }} o
JOIN {{ ref('stg_payments') }} p ON o.order_id = p.order_id
GROUP BY 1, 2

Troubleshooting Models

When your models have issues, use these strategies to troubleshoot:

Compiling Without Running

Use dbt compile to see the generated SQL without running it:

dbt compile --models customer_lifetime_value

Then check the compiled SQL in target/compiled/[project_name]/models/...

Execute Specific Models

Run only the model you're working on:

dbt run --models staging.stg_customers

Or run a model together with everything downstream that depends on it:

dbt run --models customer_lifetime_value+
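The + graph operator works in both directions, so it helps to keep the patterns straight (model names here are illustrative):

```
dbt run --models stg_customers             # just this model
dbt run --models +customer_lifetime_value  # the model plus its upstream parents
dbt run --models customer_lifetime_value+  # the model plus its downstream children
dbt run --models staging.*                 # every model in the staging folder
```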

Check Logs

Detailed logs are available in the logs/ directory and often contain helpful error information.


Advanced Model Techniques

Custom Schemas

Generate custom schemas to separate models by team or environment:

{{ 
  config(
    schema='marketing_' ~ target.name
  ) 
}}

SELECT * FROM {{ ref('stg_marketing_campaigns') }}

Post-Hooks

Execute SQL after a model is created, such as granting permissions:

{{ 
  config(
    post_hook='GRANT SELECT ON {{ this }} TO ROLE analytics_readers'
  ) 
}}

SELECT * FROM {{ ref('stg_customers') }}

Documentation in Models

Add descriptions to your models using YAML files:

version: 2

models:
  - name: customer_orders
    description: "One record per customer with order summary data"
    columns:
      - name: customer_id
        description: "The primary key of the customers table"
      - name: order_count
        description: "Count of orders placed by this customer"
        tests:
          - not_null
      - name: total_spent
        description: "Total amount spent on all orders"

These descriptions will appear in your automatically generated documentation.

By mastering models and transformations in dbt, you can build a reliable, maintainable analytics pipeline that transforms raw data into valuable business insights.


Last updated 2 months ago
