Paradime Help Docs
Working with Tags

Tags in dbt are powerful metadata labels that can be applied to various resources in your project. They enable flexible model selection, improved workflow management, and better project organization.

What Are Tags?

Tags are simple text labels you can assign to models, sources, snapshots, and other dbt resources. They help you organize, categorize, and select specific subsets of your project for execution or documentation.

By leveraging tags, you can:

  • Group models logically – Categorize models based on refresh schedule, function, or ownership

  • Control execution – Run or exclude specific sets of models

  • Optimize CI/CD pipelines – Target models for incremental builds and tests

  • Improve project maintainability – Standardize workflows across teams


How to Apply Tags

Tags can be applied in two primary ways: directly in model files or in your project configuration file.

1. Defining Tags in a Model File

Tags can be assigned directly within SQL models using the config() function:

{{ config(
    tags=["finance", "daily_refresh"]
) }}

SELECT *
FROM {{ ref('stg_transactions') }}

This assigns both the "finance" and "daily_refresh" tags to this specific model.

2. Defining Tags in dbt_project.yml

Tags can also be applied at the project level, affecting entire folders or groups of models:

models:
  my_project:
    +tags: "core"  # Assigns a single tag

    staging:
      +tags: ["staging", "raw_data"]  # Multiple tags

    marts:
      +tags:
        - "mart"
        - "business_logic"

Tag Inheritance

Models inside a folder inherit the parent folder's tags unless overridden. This creates a hierarchical tagging system that is easy to maintain.

Example:

models/
  staging/
    customers.sql   → Tags: ["staging", "raw_data"]
    orders.sql      → Tags: ["staging", "raw_data"]
  marts/
    dim_customer.sql → Tags: ["mart", "business_logic"]

Individual models can add to inherited tags:

-- models/staging/special_orders.sql
{{ config(
    tags=["critical"]  # Adds to inherited ["staging", "raw_data"]
) }}

SELECT * FROM {{ ref('stg_orders') }}

This model would have tags: ["staging", "raw_data", "critical"]
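The additive merge described above can be sketched as a small simulation. This is not dbt's actual implementation, just an illustration of the observable behavior: folder-level tags from dbt_project.yml and model-level tags from config() are combined, with duplicates removed.

```python
# Illustrative sketch of dbt's additive tag inheritance (not dbt internals).
def effective_tags(folder_tags, model_tags):
    """Union of inherited folder tags and model-level tags, order-preserving."""
    combined = list(folder_tags)
    for tag in model_tags:
        if tag not in combined:
            combined.append(tag)
    return combined

# special_orders.sql lives in staging/ (inherits ["staging", "raw_data"])
# and adds ["critical"] via its own config() block:
print(effective_tags(["staging", "raw_data"], ["critical"]))
# → ['staging', 'raw_data', 'critical']
```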


Applying Tags to Different Resource Types

Tags can be applied to various dbt resource types:

Snapshots

snapshots:
  my_project:
    +tags: ["historical_data"]

Seeds

seeds:
  my_project:
    +tags: ["seed_data"]

Sources

sources:
  - name: external_source
    tags: ['external']

Using Tags for Selection

Once tags are defined, you can use them with dbt commands to select specific resources.

Selection Examples

  • dbt run --select tag:daily_refresh – Run only models with the daily_refresh tag

  • dbt run --select tag:daily_refresh tag:critical – Run models with either the daily_refresh OR critical tag

  • dbt run --select tag:daily_refresh --exclude tag:deprecated – Run models with daily_refresh but exclude those tagged deprecated

  • dbt run --select staging,tag:finance – Run models in the staging folder that are also tagged finance (comma means intersection)

  • dbt run --select tag:critical+ – Run critical-tagged models and their downstream dependencies

Tag Selection Patterns

  • tag:name – All resources with this tag (e.g. dbt run --select tag:nightly)

  • tag:name1 tag:name2 – Resources with either tag (e.g. dbt run --select tag:nightly tag:critical)

  • tag:name+ – Tagged resources and their downstream dependencies (e.g. dbt run --select tag:base+)

  • +tag:name – Tagged resources and their upstream dependencies (e.g. dbt run --select +tag:reporting)

  • --exclude tag:name – Everything except resources with this tag (e.g. dbt run --exclude tag:deprecated)
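The selection semantics above can be modeled in a few lines. This is an illustrative sketch, not dbt's real selector engine: space-separated tag selectors form a union (OR), and --exclude removes matches from the selected set.

```python
# Illustrative model of tag selection: union for --select, subtraction for --exclude.
def select_by_tags(models, include_tags, exclude_tags=()):
    """models: dict mapping model name -> set of tags."""
    selected = {name for name, tags in models.items()
                if tags & set(include_tags)}   # any matching tag selects the model
    excluded = {name for name, tags in models.items()
                if tags & set(exclude_tags)}
    return selected - excluded

models = {
    "stg_orders":   {"staging", "daily_refresh"},
    "dim_customer": {"mart", "daily_refresh", "deprecated"},
    "fct_sales":    {"mart", "critical"},
}

# Equivalent of: dbt run --select tag:daily_refresh tag:critical --exclude tag:deprecated
print(sorted(select_by_tags(models, ["daily_refresh", "critical"], ["deprecated"])))
# → ['fct_sales', 'stg_orders']
```

Here dim_customer matches daily_refresh but is dropped by the deprecated exclusion, mirroring how the CLI flags compose.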


Best Practices for Using Tags

Use Consistent Naming Conventions

Standardized naming improves clarity and prevents confusion.

# Good
+tags: ["daily_refresh", "finance_data"]

# Avoid inconsistent casing or spacing
+tags: ["Daily", "Finance", "financial-data"]

Document Your Tagging Strategy

Clearly define tag meanings in your project's documentation.

# Tag Definitions
- `daily_refresh`: Models refreshed daily.
- `finance_data`: Contains financial-related tables.
- `pii`: Includes personally identifiable information.

Use Granular Tags

Avoid broad, generic tags. Instead, use precise labels for better control.

# Good
+tags: ["customer_metrics", "daily_refresh"]

# Too broad
+tags: ["metrics", "regular"]

Tag Models by Layer

Use tags to represent data modeling layers in your project.

models:
  staging:
    +tags: ["bronze_layer"]
  intermediate:
    +tags: ["silver_layer"]
  marts:
    +tags: ["gold_layer"]

Common Use Cases for Tags

Refresh Scheduling

Define tags based on refresh frequency for better execution control:

models:
  my_project:
    hourly:
      +tags: ["hourly_refresh"]
    daily:
      +tags: ["daily_refresh"]

Then in your orchestration tool, schedule different runs:

# Morning run for daily models
dbt run --select tag:daily_refresh

# Every hour for hourly models
dbt run --select tag:hourly_refresh

Data Classification

Differentiate datasets based on sensitivity or access level:

models:
  my_project:
    +tags: ["contains_pii"]
    public:
      +tags: ["public_data"]

This helps implement appropriate security controls and auditing.

Testing Strategy

Prioritize critical models in testing workflows:

models:
  my_project:
    critical:
      +tags: ["critical_path", "requires_alert"]

Run critical tests more frequently:

dbt test --select tag:critical_path

Troubleshooting Tag Issues

If dbt isn't selecting resources correctly based on tags, consider these troubleshooting steps:

Tag Inheritance Issues

Verify parent folder configurations in dbt_project.yml:

# List models in the staging folder together with their resolved tags
dbt ls --select my_project.staging --output json --output-keys "name tags"

Selection Syntax Errors

Ensure tag names match exactly (case-sensitive):

# Try with exact case
dbt run --select tag:daily_refresh  # Not tag:Daily_Refresh

Using dbt ls to Validate Tags

Use the dbt ls command to check which models have specific tags:

# List all models with the "finance" tag
dbt ls --select tag:finance
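For a project-wide tag audit, one option is to post-process the JSON that `dbt ls --output json` emits (one JSON object per line) and group models by tag. The sample lines below are hypothetical stand-ins for real dbt ls output:

```python
# Group model names by tag from `dbt ls --output json`-style lines (one JSON object per line).
import json
from collections import defaultdict

def tags_by_model(dbt_ls_json_lines):
    by_tag = defaultdict(list)
    for line in dbt_ls_json_lines:
        node = json.loads(line)
        for tag in node.get("tags", []):
            by_tag[tag].append(node["name"])
    return dict(by_tag)

# Hypothetical sample lines in the shape dbt ls --output json produces:
sample = [
    '{"name": "stg_orders", "tags": ["staging", "raw_data"]}',
    '{"name": "dim_customer", "tags": ["mart"]}',
]
print(tags_by_model(sample))
# → {'staging': ['stg_orders'], 'raw_data': ['stg_orders'], 'mart': ['dim_customer']}
```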

By effectively implementing a tagging strategy, you can organize your dbt project more efficiently, streamline your workflows, and gain better control over how your transformations are executed.
