Paradime Help Docs
Methods

Selector methods let you filter dbt resources by specific properties using the method:value syntax. While it's advisable to state the method explicitly, you can omit it, in which case dbt defaults to one of path, file, or fqn.

Most selector methods below support unix-style wildcards:

Wildcard   Description                                            Example
*          matches any number of characters (including none)      dbt list --select "*.folder_name.*"
?          matches any single character                           dbt list --select "model_?.sql"
[abc]      matches one character listed in the brackets           dbt list --select "model_[abc].sql"
[a-z]      matches one character from the range in the brackets   dbt list --select "model_[a-z].sql"
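These wildcards follow standard Unix shell globbing, so you can preview what a pattern will match with Python's fnmatch module before handing it to dbt. A quick sketch (the file names below are hypothetical, used only to illustrate the rules):

```python
from fnmatch import fnmatch

# Hypothetical model file names, used only to illustrate the wildcard rules
files = ["model_a.sql", "model_b.sql", "model_1.sql", "staging_x.sql"]

# '?' matches exactly one character
print([f for f in files if fnmatch(f, "model_?.sql")])
# ['model_a.sql', 'model_b.sql', 'model_1.sql']

# '[abc]' matches one character from the listed set
print([f for f in files if fnmatch(f, "model_[abc].sql")])
# ['model_a.sql', 'model_b.sql']

# '*' matches any number of characters, including none
print([f for f in files if fnmatch(f, "*.sql")])
# ['model_a.sql', 'model_b.sql', 'model_1.sql', 'staging_x.sql']
```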

Below are examples of several popular selector methods:

"tag" Method

Use the tag: method to select models with a specified tag.

# Run all models with the 'hourly' tag
dbt run --select "tag:hourly"    

"source" Method

Use the source: method to select models that reference a specified source.

# Runs all models that reference the fivetran source
dbt run --select "source:fivetran+"

"resource_type" method

Use the resource_type method to select nodes of a specific type (e.g., model, test, exposure).

# Selects all exposure nodes
dbt run --select "resource_type:exposure"
# Lists all tests in your project
dbt list --select "resource_type:test"  

"path" method

Use the path method to select models/sources defined at or under a specific path.

# Runs all models in the "models/marts" path
dbt run --select "path:models/marts"
# Runs a specific model, "customers.sql", in the "models/marts" path. 
dbt run --select "path:models/marts/customers.sql"

"file" method

Use the file method to select a model by its filename.

# Runs the model defined in 'model_name.sql'
dbt run --select "file:model_name.sql"

# Note: Adding the file extension (e.g., ".sql") is optional.
dbt run --select "file:model_name"

"fqn" method

Use the fqn method to select nodes based on their "fully qualified name" (FQN). The default FQN format includes the dbt project name, subdirectories, and the file name.

# Runs the model named 'example_model'
dbt run --select "fqn:example_model"

# Runs 'example_model' in the project 'project_name'
dbt run --select "fqn:project_name.example_model"

# Runs 'example_model' in 'package_name'
dbt run --select "fqn:package_name.example_model"

# Runs 'example_model' in 'example_path'
dbt run --select "fqn:example_path.example_model"

# Runs 'example_model' in 'example_path' within 'project_name'
dbt run --select "fqn:project_name.example_path.example_model"

"package" method

Use the package method to select models defined within the root project or an installed dbt package.

# Runs all models in the 'fivetran' package
dbt run --select "package:fivetran"

# Note: The "package:" prefix is optional. The following commands are equivalent to the one above:
dbt run --select "fivetran"
dbt run --select "fivetran.*"

"config" method

Use the config method to select models that match a specified node config.

# Runs all models that are materialized as tables
dbt run --select "config.materialized:table"

# Runs all models created in the 'staging' schema
dbt run --select "config.schema:staging"

# Runs all models clustered by 'zip_code'
dbt run --select "config.cluster_by:zip_code"

Note: The config method also works with non-string values, such as booleans, dictionary keys, and values in arrays.

Suppose you have a model with the following configurations:

{{ config(
  materialized = 'view',
  unique_key = ['customer_id', 'order_id'],
  grants = {'insert': ['sales_team', 'marketing_team']},
  transient = false
) }}

select ...

You can use the config method to select the following:

# Lists all models materialized as views
dbt ls -s config.materialized:view

# Lists all models with 'customer_id' as a unique key
dbt ls -s config.unique_key:customer_id

# Lists all models with insert grants for the sales team
dbt ls -s config.grants.insert:sales_team

# Lists all models that are not transient
dbt ls -s config.transient:false

"test_type" method

Use test_type to select tests based on their type (singular or generic).

# Runs all generic tests
dbt test --select "test_type:generic"

# Runs all singular tests
dbt test --select "test_type:singular"

"test_name" method

Use the test_name method to select tests based on the name of the test defined.

# Runs all instances of the 'not_null' test
dbt test --select "test_name:not_null"

# Runs all instances of the 'not_accepted_values' test (from dbt_utils)
dbt test --select "test_name:not_accepted_values"

"state" method

When using the "state" method in a Bolt schedule of type Deferred or Turbo-CI, you don't need to pass the --state path/to/project/artifacts to your dbt command. Paradime points to the correct artifacts based on the Bolt schedule configuration:

  • Deferred schedule

  • Last run type

The state method selects nodes by comparing them against a previous version of the same project, represented by a manifest. The file path of the comparison manifest must be specified using the --state flag or the DBT_STATE environment variable.

  • state:new: Indicates there is no node with the same unique_id in the comparison manifest.

  • state:modified: Includes all new nodes and any changes to existing nodes.

# Runs all tests on new models, plus any new tests on old models
dbt test --select "state:new" --state path/to/artifacts

# run all models that have been modified
dbt run --select "state:modified" --state path/to/artifacts

# list all modified nodes (not just models)
dbt ls --select "state:modified" --state path/to/artifacts

Because state comparison is complex, and everyone's project is different, dbt supports subselectors that include a subset of the full modified criteria:

  • state:modified.body: Changes to node body (e.g. model SQL, seed values)

  • state:modified.configs: Changes to any node configs, excluding database/schema/alias

  • state:modified.relation: Changes to database/schema/alias (the database representation of this node), irrespective of target values or generate_x_name macros

  • state:modified.persisted_descriptions: Changes to relation- or column-level description, if and only if persist_docs is enabled at each level

  • state:modified.macros: Changes to upstream macros (whether called directly or indirectly by another macro)

  • state:modified.contract: Changes to a model's contract, which currently include the name and data_type of columns. Removing or changing the type of an existing column is considered a breaking change, and will raise an error.

Remember that state:modified includes all of the criteria above, as well as some extra resource-specific criteria, such as modifying a source's freshness or quoting rules or an exposure's maturity property.

There are two additional state selectors that complement state:new and state:modified by representing the inverse of those functions:

  • state:old — A node with the same unique_id exists in the comparison manifest

  • state:unmodified — All existing nodes with no changes

These selectors can help you shorten run times by excluding unchanged nodes. No subselectors are currently available for them, but that might change as use cases evolve.
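Conceptually, state:new and state:old reduce to a set comparison of unique_ids between the current project and the comparison manifest. A minimal illustrative sketch, not dbt's actual implementation (the node ids below are made up):

```python
# Made-up unique_ids, for illustration only
current_manifest = {"model.proj.stg_orders", "model.proj.stg_customers", "model.proj.fct_sales"}
comparison_manifest = {"model.proj.stg_orders", "model.proj.stg_customers"}

# state:new -> no node with the same unique_id in the comparison manifest
state_new = current_manifest - comparison_manifest

# state:old -> a node with the same unique_id exists in the comparison manifest
state_old = current_manifest & comparison_manifest

print(sorted(state_new))  # ['model.proj.fct_sales']
print(sorted(state_old))  # ['model.proj.stg_customers', 'model.proj.stg_orders']
```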

"exposure" method

Use the exposure method to select the parent resources of an exposure.

# Tests all models that feed into the monthly_reports exposure
dbt test --select "exposure:monthly_reports"

# Runs all upstream resources of all exposures
dbt run --select "+exposure:*"

# Lists all upstream models of all exposures
dbt ls --select "+exposure:*" --resource-type model  

"metric" method

Use the metric method to select the parent resources of a metric.

# Runs all upstream resources of the monthly_qualified_leads metric
dbt run --select "+metric:monthly_qualified_leads"

# Builds all upstream models of all metrics
dbt build --select "+metric:*" --resource-type model

"results" method

When using the "results" method in a Bolt schedule of type Deferred or Turbo-CI, you don't need to pass the --state path/to/project/artifacts to your dbt command. Paradime points to the correct artifacts based on the Bolt schedule configuration:

  • Deferred schedule

  • Last run type

Use the result method to select resources based on their result status from a previous execution.

# Runs all models that successfully ran on the previous execution of dbt run
dbt run --select "result:success" --state path/to/project/artifacts

# Runs all tests that issued warnings on the previous execution of dbt test
dbt test --select "result:warn" --state /path/to/project/artifacts

# Runs all seeds that failed on the previous execution of dbt seed
dbt seed --select "result:fail" --state /path/to/project/artifacts

Note: This method only works if a dbt command (e.g., run, test, seed, build) was run previously, since it relies on the artifacts that command produced.
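Under the hood, dbt records each node's status in the run_results.json artifact, and the result method filters against those recorded statuses. As a rough sketch of the idea, not dbt's actual code, using a made-up minimal artifact:

```python
# Minimal, made-up run_results-style structure, for illustration only
run_results = {
    "results": [
        {"unique_id": "model.proj.customers", "status": "success"},
        {"unique_id": "model.proj.orders",    "status": "error"},
        {"unique_id": "test.proj.not_null_x", "status": "warn"},
    ]
}

def select_by_result(artifact, wanted):
    """Return unique_ids whose previous status matches `wanted` (e.g. 'error')."""
    return [r["unique_id"] for r in artifact["results"] if r["status"] == wanted]

print(select_by_result(run_results, "error"))  # ['model.proj.orders']
print(select_by_result(run_results, "warn"))   # ['test.proj.not_null_x']
```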

"source_status" method

When using the "source_status" method in a Bolt schedule of type Deferred or Turbo-CI, you don't need to pass the --state path/to/project/artifacts to your dbt command. Paradime points to the correct artifacts based on the Bolt schedule configuration:

  • Deferred schedule

  • Last run type

Another element of job state is the source_status from a prior dbt invocation. For instance, after running dbt source freshness, dbt generates the sources.json artifact, which includes execution times and max_loaded_at dates for dbt sources.

The following dbt commands produce sources.json artifacts whose results can be referenced in subsequent dbt invocations:

  • dbt source freshness

After running one of the above commands, you can reference the source freshness results by adding a selector to a subsequent command as follows:

# Note: You can set the DBT_STATE environment variable instead of passing the --state flag.
# 'dbt source freshness' must be run again to compare the current state to the previous one.
dbt source freshness
dbt build --select "source_status:fresher+" --state path/to/prod/artifacts

"group" method

Use the group method to select models defined within a specified group.

# Runs all models that belong to the marketing group
dbt run --select "group:marketing"

"access" Method

Use the access method to select models based on their access property.

# List all public models
dbt list --select "access:public"

"version" Method

Use version to select versioned models based on the following:

  • Version identifier: a relative version label (old, prerelease, latest)

  • Latest version: the most recent version of a model

# Lists versions older than the 'latest' version
dbt list --select "version:old"

# Lists versions newer than the 'latest' version
dbt list --select "version:prerelease"

# Lists the 'latest' version
dbt list --select "version:latest"

"semantic_model" Method

Use the semantic_model method to select semantic models.

# Runs the semantic model named "sales" and all its dependencies
dbt run --select "semantic_model:sales"

# Builds the semantic model "customer_orders", as well as all upstream resources
dbt build --select "+semantic_model:customer_orders"  

# Lists all semantic models
dbt ls --select "semantic_model:*"  

"saved_query" method

Use the saved_query method to select saved queries.

# Lists all saved queries
dbt list --select "saved_query:*"                    

# Lists your saved query named "customers_queries" and all upstream resources
dbt list --select "+saved_query:customers_queries"  

"unit_test" method

Use the unit_test method to select dbt™️ unit tests.

# Lists all unit tests
dbt list --select "unit_test:*"

# Lists your unit test named "orders_with_zero_items" and all upstream resources
dbt list --select "+unit_test:orders_with_zero_items"
Last updated 5 months ago
