Paradime Help Docs
Setting up your dbt_project.yml

The dbt_project.yml file is the core configuration file for any dbt project. It defines essential settings such as the project name, version, and model configurations, ensuring your project runs correctly and consistently.


Why dbt_project.yml Matters

The dbt_project.yml file serves several important functions:

  • Identifies the root of your dbt project

  • Configures project-wide settings

  • Sets default materializations for your models

  • Defines model-specific configurations

  • Controls directory paths and behaviors

A well-configured project file ensures consistent behavior across environments and team members.


Core Components of dbt_project.yml

Here's a breakdown of the key sections and their purposes:

Project Metadata

name: 'my_dbt_project'  # The unique name of your dbt project
version: '1.0.0'        # Optional versioning for project tracking
config-version: 2       # The version of dbt's config schema

This section defines:

  • name: A unique identifier for your project (used in compiled SQL)

  • version: Optional versioning for tracking project changes

  • config-version: The version of dbt's configuration schema (should be 2 for current projects)

Profile Configuration

profile: 'my_profile'  # Specifies the profile to use from profiles.yml

This tells dbt which profile to use from your profiles.yml file. Profiles define database connections and credentials.

| Setting | Purpose | Example |
| --- | --- | --- |
| profile | Specifies which connection profile to use | profile: 'snowflake_analytics' |
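For reference, the profile you name here must exist in your profiles.yml. A minimal sketch of a matching Snowflake profile — the account, user, database, and warehouse values below are placeholders, not part of this guide's project:

```yaml
# profiles.yml — lives outside the repo, typically in ~/.dbt/
my_profile:
  target: dev
  outputs:
    dev:
      type: snowflake
      account: my_account        # placeholder
      user: my_user              # placeholder
      password: "{{ env_var('SNOWFLAKE_PASSWORD') }}"  # keep secrets out of the file
      database: analytics
      warehouse: transforming
      schema: dbt_dev
      threads: 4
```

The target key picks which output dbt uses by default; additional outputs (for example a prod target) can sit alongside dev.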

Directory Paths

# Paths for different dbt components
model-paths: ["models"]
analysis-paths: ["analyses"]
test-paths: ["tests"]
seed-paths: ["seeds"]
macro-paths: ["macros"]
snapshot-paths: ["snapshots"]

These settings define where dbt should look for different types of files:

| Path Setting | Default | Purpose |
| --- | --- | --- |
| model-paths | ["models"] | Where your SQL models are stored |
| seed-paths | ["seeds"] | Where your CSV files are stored |
| test-paths | ["tests"] | Where singular tests are stored |
| analysis-paths | ["analyses"] | Where analytical queries are stored |
| macro-paths | ["macros"] | Where macros are stored |
| snapshot-paths | ["snapshots"] | Where snapshot definitions are stored |
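Each of these settings accepts a list, so dbt can search more than one directory of the same type. For example, if a project kept shared models in a second folder (the shared_models name below is hypothetical):

```yaml
model-paths: ["models", "shared_models"]  # dbt discovers models in both folders
```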

Model Configuration

This section defines how your models should be materialized and configured:

models:
  my_dbt_project:  # Must match your project name
    +materialized: view   # Default materialization for all models
    
    # Configure specific directories
    staging:
      +materialized: view  # Staging models as views
    
    marts:
      +materialized: table # Mart models as tables
      
      # Configure specific subdirectories
      marketing:
        +schema: marketing_schema  # Custom schema

Key points about model configuration:

  • Configuration is hierarchical - lower levels inherit from higher levels

  • The top-level key under models: must match the name value from your project metadata

  • The + prefix indicates a dbt configuration property

  • You can override configurations at any level
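The inheritance rule above can be sketched in plain Python. This is a toy resolver, not dbt's actual implementation: it walks the nested models: mapping from the project root down to a model's folder, letting deeper settings override shallower ones.

```python
def resolve_config(models_config, path):
    """Walk the nested models: config from the project root down the given
    folder path, collecting '+'-prefixed settings; deeper levels win."""
    resolved = {}
    node = models_config
    for key in [None] + list(path):
        if key is not None:
            node = node.get(key, {})
        for k, v in node.items():
            if isinstance(k, str) and k.startswith("+"):
                resolved[k.lstrip("+")] = v
    return resolved

# Mirrors the example above: project default is view, marts override to table.
project = {
    "+materialized": "view",
    "staging": {"+materialized": "view"},
    "marts": {
        "+materialized": "table",
        "marketing": {"+schema": "marketing_schema"},
    },
}

print(resolve_config(project, ["staging"]))            # inherits the view default
print(resolve_config(project, ["marts", "marketing"])) # table, plus the custom schema
```

Running it shows staging models resolving to materialized: view, while marts/marketing models resolve to materialized: table with schema: marketing_schema — the same outcome dbt produces from the YAML above.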

Seed Configuration

For controlling how CSV files are loaded into your database:

seeds:
  my_dbt_project:
    +schema: raw_data     # Default schema for seed files
    +quote_columns: false # Whether to quote column names
    
    # Configuration for specific seeds
    country_codes:
      +column_types:
        country_code: varchar(2)
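Once dbt seed has loaded the CSV, the seed can be referenced like any model. A hypothetical staging query joining against the country_codes seed above (the orders model is an assumed example, not part of this guide):

```sql
select
    orders.order_id,
    codes.country_code
from {{ ref('orders') }} as orders              -- hypothetical model
left join {{ ref('country_codes') }} as codes   -- the seed configured above
    on orders.country = codes.country_code
```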

Variables

Define project-wide variables that can be used in models:

vars:
  start_date: '2020-01-01'  # Available as {{ var('start_date') }}
  countries: ['US', 'CA', 'UK']
  # Nested keys are read back as dictionaries, e.g. {{ var('dev')['debug_mode'] }}
  dev:
    debug_mode: true
  prod:
    debug_mode: false
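Inside a model, these variables are read with the var() macro. A hypothetical filter using start_date (the events model is an assumed example):

```sql
select *
from {{ ref('events') }}  -- hypothetical model
where event_date >= '{{ var("start_date") }}'
-- var() also accepts a fallback used when the variable is undefined:
-- where event_date >= '{{ var("start_date", "2000-01-01") }}'
```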

On-Run Hooks

Execute SQL before or after your dbt runs:

on-run-start:
  - "create schema if not exists {{ target.schema }}_staging"
  
on-run-end:
  - "grant usage on schema {{ target.schema }} to role reporter"
  - "grant select on all tables in schema {{ target.schema }} to role reporter"

Cleaning Up Artifacts

Define which directories should be cleaned by dbt clean:

clean-targets:
  - "target"
  - "dbt_packages"
  - "logs"

Complete Example

Here's a complete example of a dbt_project.yml file:

name: 'ecommerce'
version: '1.0.0'
config-version: 2

profile: 'snowflake_analytics'

model-paths: ["models"]
seed-paths: ["seeds"]
test-paths: ["tests"]
analysis-paths: ["analyses"]
macro-paths: ["macros"]
snapshot-paths: ["snapshots"]
docs-paths: ["docs"]

target-path: "target"
clean-targets:
  - "target"
  - "dbt_packages"
  - "logs"

vars:
  start_date: '2020-01-01'
  include_test_accounts: false

models:
  ecommerce:
    +materialized: view
    
    staging:
      +materialized: view
      +schema: staging
      
    intermediate:
      +materialized: view
      +schema: intermediate
    
    marts:
      +materialized: table
      +schema: analytics
      
      finance:
        +schema: analytics_finance
        +tags: ["finance", "daily"]
      
      marketing:
        +schema: analytics_marketing
        +tags: ["marketing"]

seeds:
  ecommerce:
    +schema: reference_data

snapshots:
  ecommerce:
    +target_schema: snapshots

on-run-end:
  - "grant select on all tables in schema {{ target.schema }} to role analyst"

Best Practices for dbt_project.yml

Use Meaningful Names and Structure
✅ Group models logically by function or business domain
✅ Use consistent naming patterns for schemas
✅ Document non-obvious configurations with comments

Set Sensible Defaults
✅ Define default materializations for different model types
✅ Use views for staging/intermediate models and tables for final models
✅ Configure schemas to match your data warehouse organization

Optimize for Team Collaboration
✅ Use environment-specific variables where needed
✅ Set appropriate permissions with on-run hooks
✅ Document variables and their purposes

Maintain and Evolve
✅ Review your project configuration regularly
✅ Update as your project grows and changes
✅ Document changes to configuration in version control


Common Issues and Solutions

| Issue | Solution |
| --- | --- |
| Models building in wrong schema | Check schema configuration and target profile |
| Incorrect materialization | Verify hierarchy of materialization settings |
| Variable not available | Ensure variable is defined at the correct level |
| Path not found | Verify directory paths match actual project structure |
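When a model lands in an unexpected schema or with the wrong materialization, it can help to inspect the configuration dbt actually resolved for it. One way (my_model is a hypothetical model name):

```
dbt ls --select my_model --output json   # prints the resolved node, including its config block
```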

Your dbt_project.yml file is a living document that will evolve with your project. Taking the time to configure it correctly will lead to a more maintainable and consistent dbt implementation.


Last updated 3 months ago
