Paradime Help Docs
Working with Sources

Sources in dbt represent the raw data tables in your warehouse that serve as the foundation for your transformations. Rather than referencing raw tables directly, dbt allows you to define sources in a centralized way, improving maintainability and enabling powerful features like freshness checking.

What Are Sources?

In dbt, sources represent raw data tables from external systems such as operational databases, CRMs, or third-party APIs. Instead of referencing these tables directly in models, dbt lets you define them once in a centralized file (sources.yml) for better organization, maintainability, and documentation.

The sources.yml file is a crucial component in dbt projects, centralizing metadata about raw data tables. This ensures consistency, maintainability, and automatic documentation.

Why Use Sources?

  • Centralizes raw table definitions – Avoids hardcoded table names across multiple models.

  • Improves maintainability – If raw table locations change, you only need to update sources.yml.

  • Enables freshness checks – dbt can monitor source data latency.

  • Enhances documentation – Automatically generates lineage graphs and model dependencies.
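
To make the maintainability point concrete, compare a hard-coded table reference with a `source()` call (this uses a hypothetical jaffle_shop source; the compiled name depends on your sources.yml):

```sql
-- Hard-coded: every model written this way must change if the table moves
SELECT * FROM raw.jaffle_shop.orders

-- Via source(): resolved from sources.yml, and captured in the lineage graph
SELECT * FROM {{ source('jaffle_shop', 'orders') }}
```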


Defining Sources in YAML

Sources are defined in .yml files under the sources: key. Here's a typical example:

version: 2

sources:
  - name: jaffle_shop  # Logical name of the source
    database: raw      # The database where the source is stored (optional)
    schema: jaffle_shop  # Schema containing the source tables
    tables:
      - name: orders
        columns:
          - name: id
            tests:
              - unique
              - not_null
          - name: status
            tests:
              - accepted_values:
                  values: ['placed', 'shipped', 'completed', 'returned']
      - name: customers

In this example:

  • We've defined a source named jaffle_shop whose tables live in the jaffle_shop schema of the raw database

  • We've defined two tables: orders and customers

  • We've added column-level tests to the orders table


Using Sources in Models

Once sources are defined, you can reference them using the source() function in your dbt models:

-- models/staging/stg_orders.sql
SELECT
  order_id,
  customer_id,
  order_date,
  status,
  amount
FROM {{ source('jaffle_shop', 'orders') }}

This offers several advantages:

  • Consistency: Source references are standardized across your project

  • Refactoring: If a source table moves, you only need to update one place

  • Documentation: dbt automatically builds lineage from sources to models

  • Testing: You can apply tests to sources for early validation
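
Given the definition above, the `source()` call compiles to a fully qualified table name built from the database and schema in sources.yml (exact quoting varies by adapter):

```sql
-- {{ source('jaffle_shop', 'orders') }} compiles to roughly:
SELECT ... FROM raw.jaffle_shop.orders
```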


Best Practices for Source Organization

Group Related Sources

Organize sources by system or domain:

sources:
  - name: stripe       # Payment processing
    schema: raw_stripe
    tables:
      - name: charges
      - name: customers

  - name: shopify      # E-commerce platform
    schema: raw_shopify
    tables:
      - name: orders
      - name: products

Document Your Sources

Add descriptions to help your team understand the data:

sources:
  - name: google_analytics
    description: "Web analytics data from our marketing site"
    tables:
      - name: sessions
        description: "User sessions with UTM parameters"
        columns:
          - name: session_id
            description: "Unique identifier for the session"

Apply Tests to Sources

Find data quality issues early by testing your sources:

sources:
  - name: crm
    tables:
      - name: customers
        columns:
          - name: customer_id
            tests:
              - unique
              - not_null
          - name: email
            tests:
              - unique
              - not_null

Source Freshness

One of the most powerful features of sources is the ability to check data freshness: verifying that your source data is up to date before you build models on top of it.

Configuring Freshness Checks

Add a freshness block and specify a loaded_at_field in your sources definition:

sources:
  - name: sales_data
    schema: raw_sales
    freshness:
      warn_after: {count: 12, period: hour}
      error_after: {count: 24, period: hour}
    loaded_at_field: updated_at
    tables:
      - name: transactions

This configuration:

  • Uses the updated_at column to determine when data was last loaded

  • Warns if data is more than 12 hours old

  • Errors if data is more than 24 hours old

Running Freshness Checks

Check freshness with:

dbt source freshness

The output will show the status of each source:

16:35:31 | Freshness of jaffle_shop.orders: PASS (0 seconds)
16:35:32 | Freshness of jaffle_shop.customers: WARN (13 hours)

Table-Specific Freshness

You can override source-level freshness settings for specific tables:

sources:
  - name: inventory
    freshness:
      warn_after: {count: 12, period: hour}
      error_after: {count: 24, period: hour}
    loaded_at_field: last_updated
    tables:
      - name: daily_stock
      - name: real_time_stock
        freshness:
          warn_after: {count: 15, period: minute}
          error_after: {count: 30, period: minute}
        loaded_at_field: timestamp

In this example, real_time_stock has stricter freshness requirements than other tables in the source.
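
Conversely, you can opt a table out of freshness checks entirely by setting its freshness to null (the static_reference_table name here is hypothetical, used for illustration):

```yaml
sources:
  - name: inventory
    freshness:
      warn_after: {count: 12, period: hour}
    loaded_at_field: last_updated
    tables:
      - name: daily_stock
      - name: static_reference_table
        freshness: null  # never checked; useful for rarely-changing lookup tables
```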


Advanced Source Configurations

Source Overrides by Environment

You can point a source at a different database or schema per environment by using Jinja conditionals on target.name:

sources:
  - name: marketing
    database: "{% if target.name == 'prod' %}analytics{% else %}raw_data{% endif %}"
    schema: "{% if target.name == 'prod' %}production{% else %}{{ target.schema }}{% endif %}"
    tables:
      - name: ad_campaigns

Filtering Source Data

For large source tables, you can add a filter so that freshness checks only scan recent data:

sources:
  - name: logs
    tables:
      - name: application_logs
        external:
          location: "s3://my-bucket/logs/"
          options:
            format: parquet
        freshness:
          filter: "date_column >= dateadd('day', -3, current_date)"

Generating Source Definitions

If you're using Paradime, you can automatically generate source definitions:

paradime sources generate

This command:

  • Scans your data warehouse

  • Identifies tables in raw schemas

  • Generates sources.yml files with the correct structure

Paradime Source Generation Benefits

✅ Scans your data warehouse and auto-generates the correct table definitions.

✅ Prevents manual errors in source definitions.

✅ Keeps sources up to date with your evolving data warehouse schema.


Common Source Patterns

Staging Models for Sources

A common pattern is to create staging models that select from sources. These provide a clean interface between raw data and your transformations:

-- models/staging/stg_customers.sql
SELECT
  customer_id,
  first_name,
  last_name,
  email,
  created_at,
  updated_at
FROM {{ source('crm', 'customers') }}
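
Downstream models then reference the staging model with ref() rather than the source directly, so raw tables are touched in exactly one place (the marts model name below is hypothetical):

```sql
-- models/marts/customer_summary.sql
SELECT
  customer_id,
  first_name,
  last_name,
  email
FROM {{ ref('stg_customers') }}
```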

Testing Complex Source Relationships

You can test relationships between source tables:

sources:
  - name: application
    tables:
      - name: users
        columns:
          - name: user_id
            tests:
              - unique
              - not_null
      - name: orders
        columns:
          - name: user_id
            tests:
              - relationships:
                  to: source('application', 'users')
                  field: user_id

By effectively managing sources in dbt, you build a strong foundation for your analytics pipeline, making it easier to maintain, test, and document the origin of your data.
