Migrating dbt™ jobs from GitHub Actions to Paradime Bolt
Overview
This guide walks you through migrating your dbt™ jobs from GitHub Actions to Paradime's Bolt orchestration platform. Paradime offers a purpose-built solution for dbt™ orchestration with features like deferred runs, smart scheduling, and integrated monitoring.
Part 1: Understanding Your Current GitHub Actions Setup
Example GitHub Actions Workflow
Here's a typical GitHub Actions workflow for running dbt™ jobs:
# .github/workflows/dbt_production.yml
name: dbt Production Run

on:
  schedule:
    # Run daily at 6 AM UTC
    - cron: '0 6 * * *'
  workflow_dispatch: # Allow manual triggers
  push:
    branches:
      - main

jobs:
  dbt_run:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'

      - name: Install dbt
        run: |
          pip install dbt-core dbt-snowflake==1.7.0

      - name: Install dependencies
        run: |
          dbt deps
        env:
          DBT_PROFILES_DIR: .

      - name: Run dbt seed
        run: dbt seed --target prod
        env:
          DBT_SNOWFLAKE_ACCOUNT: ${{ secrets.SNOWFLAKE_ACCOUNT }}
          DBT_SNOWFLAKE_USER: ${{ secrets.SNOWFLAKE_USER }}
          DBT_SNOWFLAKE_PASSWORD: ${{ secrets.SNOWFLAKE_PASSWORD }}
          DBT_SNOWFLAKE_ROLE: ${{ secrets.SNOWFLAKE_ROLE }}
          DBT_SNOWFLAKE_WAREHOUSE: ${{ secrets.SNOWFLAKE_WAREHOUSE }}
          DBT_SNOWFLAKE_DATABASE: ${{ secrets.SNOWFLAKE_DATABASE }}
          DBT_PROFILES_DIR: .

      - name: Run dbt models
        run: dbt run --target prod
        env:
          DBT_SNOWFLAKE_ACCOUNT: ${{ secrets.SNOWFLAKE_ACCOUNT }}
          DBT_SNOWFLAKE_USER: ${{ secrets.SNOWFLAKE_USER }}
          DBT_SNOWFLAKE_PASSWORD: ${{ secrets.SNOWFLAKE_PASSWORD }}
          DBT_SNOWFLAKE_ROLE: ${{ secrets.SNOWFLAKE_ROLE }}
          DBT_SNOWFLAKE_WAREHOUSE: ${{ secrets.SNOWFLAKE_WAREHOUSE }}
          DBT_SNOWFLAKE_DATABASE: ${{ secrets.SNOWFLAKE_DATABASE }}
          DBT_PROFILES_DIR: .

      - name: Run dbt tests
        run: dbt test --target prod
        env:
          DBT_SNOWFLAKE_ACCOUNT: ${{ secrets.SNOWFLAKE_ACCOUNT }}
          DBT_SNOWFLAKE_USER: ${{ secrets.SNOWFLAKE_USER }}
          DBT_SNOWFLAKE_PASSWORD: ${{ secrets.SNOWFLAKE_PASSWORD }}
          DBT_SNOWFLAKE_ROLE: ${{ secrets.SNOWFLAKE_ROLE }}
          DBT_SNOWFLAKE_WAREHOUSE: ${{ secrets.SNOWFLAKE_WAREHOUSE }}
          DBT_SNOWFLAKE_DATABASE: ${{ secrets.SNOWFLAKE_DATABASE }}
          DBT_PROFILES_DIR: .

      - name: Notify on failure
        if: failure()
        run: echo "Job failed - send notification"
What This Workflow Does
Triggers on a daily schedule, manual dispatch, or push to main
Installs dbt™ and dependencies
Runs dbt™ seed, run, and test commands
Uses GitHub Secrets for warehouse credentials
Notifies on failure (basic)
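For reference, a workflow like this normally pairs with a profiles.yml committed at the repository root (hence DBT_PROFILES_DIR: .) that resolves those secrets with dbt's env_var() function. A minimal sketch is shown below; the profile name, schema, and thread count are assumptions you should adapt to your own project:
# profiles.yml (minimal sketch, names are placeholders)
my_dbt_project:
  target: prod
  outputs:
    prod:
      type: snowflake
      account: "{{ env_var('DBT_SNOWFLAKE_ACCOUNT') }}"
      user: "{{ env_var('DBT_SNOWFLAKE_USER') }}"
      password: "{{ env_var('DBT_SNOWFLAKE_PASSWORD') }}"
      role: "{{ env_var('DBT_SNOWFLAKE_ROLE') }}"
      warehouse: "{{ env_var('DBT_SNOWFLAKE_WAREHOUSE') }}"
      database: "{{ env_var('DBT_SNOWFLAKE_DATABASE') }}"
      schema: analytics   # assumed default schema
      threads: 4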
Part 2: Prerequisites for Paradime Migration
Before migrating, ensure you have:
1. Paradime Workspace Setup
Active Paradime account
Workspace created and configured
Access to the Bolt application
2. Data Warehouse Connection
Production connection configured in Paradime
This connection should have the same permissions as your GitHub Actions credentials
Navigate to: Settings → Connections to set this up
3. Git Repository Connected
Your dbt™ project repository connected to Paradime
Git credentials configured
Navigate to: Settings → Git Integration
4. GitHub App Integration (for Native CI/CD)
Paradime GitHub App installed in your GitHub organization
This enables native Turbo CI and Continuous Deployment without manual GitHub Actions
Installation Guide: docs.paradime.io/app-help/documentation/integrations/ci-cd/github
5. dbt™ Project
Your dbt™ project accessible in Paradime IDE
Models materialized and tested in development
Part 3: Creating Your First Paradime Schedule
Method 1: Using the Bolt UI (Recommended for Beginners)
Step 1: Access Bolt
Log into your Paradime workspace
Navigate to Bolt from the left sidebar
Click Create Schedule
Step 2: Configure Schedule Settings
Fill in the basic settings:
Type: Choose Standard (equivalent to a basic GitHub Actions job)
Name: daily_production_run (a descriptive name for your schedule)
Git Branch: main (or your production branch)
Owner Email: Your email address
Step 3: Add Commands
Add your dbt™ commands in sequence (equivalent to the steps in your GitHub Action):
dbt seed --target prod
dbt run --target prod
dbt test --target prod
Pro Tip: Each command runs sequentially. If one fails, subsequent commands won't execute.
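If you would rather run a single step, dbt build executes seeds, models, snapshots, and tests together in DAG order, so the three commands above can optionally be collapsed into one (a matter of preference, not a Paradime requirement):
dbt build --target prod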
Step 4: Configure Trigger
Select your trigger type:
Scheduled Run: For time-based execution (like cron)
Choose a Cron Schedule such as 0 6 * * * (daily at 6 AM UTC), or use presets like @daily, @hourly, and @weekly
On Merge: Trigger when PR is merged to specified branch
On Run Completion: Chain jobs together
Step 5: Set Up Notifications
Configure alerts to know when jobs succeed or fail:
Slack Notifications:
Toggle Slack Notify On: select failed and/or passed
Enter a channel, e.g. #data-team-alerts
Email Notifications:
Toggle Email Notify On: select failed and/or passed
Enter email addresses
Step 6: Deploy
Click Create Schedule to deploy your job.
Method 2: Using Schedules as Code (YAML)
For teams that prefer infrastructure-as-code, Paradime supports YAML-based schedules.
Step 1: Create paradime_schedules.yml
In your dbt™ project root, create or edit paradime_schedules.yml:
# paradime_schedules.yml
- name: daily_production_run
  schedule: "0 6 * * *" # Daily at 6 AM UTC
  environment: production
  git_branch: main
  commands:
    - dbt seed --target prod
    - dbt run --target prod
    - dbt test --target prod
  slack_on:
    - failed
    - passed
  slack_notify:
    - "#data-team-alerts"
  email_on:
    - failed
  email_notify:
    - "[email protected]"
Step 2: Commit and Push
git add paradime_schedules.yml
git commit -m "Add production schedule"
git push
Step 3: Sync in Paradime
Paradime will automatically detect and sync your schedule configuration.
Part 4: Migration Mapping Guide
Here's how GitHub Actions concepts map to Paradime Bolt:
GitHub Actions | Paradime Bolt | Notes
on.schedule.cron | Cron Schedule trigger | Same cron syntax
on.push.branches | On Merge trigger | Triggers on merge to branch
on.workflow_dispatch | Manual run button | Available in Bolt UI
jobs.steps | Commands list | Sequential execution
env / secrets | Connection credentials | Managed in Paradime settings
Notifications | Slack/Email notifications | Built into Bolt
runs-on | Paradime environment | Managed infrastructure
uses: actions/checkout | Automatic | Paradime handles git checkout
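To make the first row concrete: the cron trigger from the Part 1 workflow collapses to a single schedule field, and the entire env/secrets block disappears because credentials come from the production connection configured in Paradime. A minimal illustration:
# GitHub Actions trigger (before)
on:
  schedule:
    - cron: '0 6 * * *'

# Equivalent field in paradime_schedules.yml (after) - no credentials block needed
- name: daily_production_run
  schedule: "0 6 * * *"
  git_branch: main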
Part 5: Advanced Features in Paradime
Native GitHub Integration Benefits
With the Paradime GitHub App installed, you get:
Automatic PR checks: Turbo CI runs automatically on every pull request
Merge triggers: Deploy changes instantly when PRs are merged
Lineage Diff comments: Automated comments showing downstream impact on Looker, Tableau, ThoughtSpot, and dbt™ mesh
No GitHub Actions needed: Paradime handles triggering and status reporting
Centralized logs: All CI/CD runs visible in Bolt dashboard
Installation: Follow the guide at docs.paradime.io/app-help/documentation/integrations/ci-cd/github
Deferred Runs (Optimization)
Paradime offers Deferred Schedules that only run changed models:
- name: incremental_run
  schedule: "@hourly"
  environment: production
  git_branch: main
  deferred_schedule:
    enabled: true
    deferred_schedule_name: daily_production_run
    successful_run_only: true
  commands:
    - dbt run -s state:modified+ --target prod
    - dbt test -s state:modified+ --target prod
Benefits:
Faster execution (only modified models)
Lower compute costs
Smart state comparison
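For context on what the deferred_schedule block is doing for you: with plain dbt, state-based selection requires downloading the previous production manifest yourself and pointing dbt at it. A rough equivalent outside Bolt (the artifacts path is an assumption) would look like:
# Manual state comparison without Bolt's deferred schedules
dbt run -s state:modified+ --defer --state ./prod_run_artifacts --target prod
dbt test -s state:modified+ --state ./prod_run_artifacts --target prod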
Turbo CI for Pull Requests
🎉 Paradime supports native GitHub integration for Turbo CI! No need to write GitHub Actions workflows manually.
Option 1: Native GitHub App Integration (Recommended)
The easiest way to enable Turbo CI is through Paradime's native GitHub app, which automatically triggers CI checks when you open a pull request.
Setup Steps:
Install the Paradime GitHub App:
Navigate to Settings > Integrations in Paradime
Click Connect next to GitHub Integration
Follow the authentication flow and select your repositories
Click Install and authorize
Complete the user-level OAuth by going to Profile > Profile Settings
Create a Turbo CI Schedule in Bolt:
- name: turbo_ci_run
  schedule: "OFF" # Only runs on PR
  environment: development
  git_branch: main
  deferred_schedule:
    enabled: true
    deferred_schedule_name: daily_production_run
  commands:
    - dbt build -s state:modified+ --target ci
  slack_on:
    - failed
  slack_notify:
    - "#dev-team"
That's it! When you open a pull request, Paradime will automatically:
Trigger the Turbo CI schedule
Build modified models in a temporary schema (paradime_turbo_ci_pr_<commit_sha>)
Run tests on changed models
Post status checks directly to your PR
📚 Documentation: GitHub Integration Setup | Turbo CI Guide
Option 2: Manual GitHub Actions (Alternative)
If you prefer to manage your own GitHub Actions workflow or need custom logic:
# .github/workflows/paradime_turbo_ci.yml
name: Paradime Turbo CI

on:
  pull_request:
    branches: [main]

jobs:
  turbo_ci:
    runs-on: ubuntu-latest
    steps:
      - name: Run Paradime Turbo CI
        run: |
          pip install paradime-io
          paradime bolt run "turbo_ci_run" --branch ${{ github.sha }} --wait
        env:
          PARADIME_API_KEY: ${{ secrets.PARADIME_API_KEY }}
          PARADIME_API_SECRET: ${{ secrets.PARADIME_API_SECRET }}
          PARADIME_API_ENDPOINT: ${{ secrets.PARADIME_API_ENDPOINT }}
Continuous Deployment on Merge
🎉 Paradime supports native GitHub integration for Continuous Deployment! Automatically deploy when PRs are merged.
Option 1: Native GitHub App Integration (Recommended)
With the Paradime GitHub app installed (see Turbo CI setup above), you can enable automatic deployments on merge without any GitHub Actions configuration.
Setup in Bolt UI:
Create or edit a schedule in Bolt
Set Schedule Type to Deferred
Enable Trigger on Merge
Select your production branch (e.g., main)
Configure your commands
Or via YAML:
- name: continuous_deployment
  schedule: "OFF"
  environment: production
  git_branch: main
  trigger_on_merge: true
  deferred_schedule:
    enabled: true
    deferred_schedule_name: daily_production_run
    successful_run_only: true
  commands:
    - dbt run -s state:modified+ --target prod
    - dbt test -s state:modified+ --target prod
  slack_on:
    - failed
    - passed
  slack_notify:
    - "#deployment-alerts"
How it works:
When a PR is merged to your production branch, Paradime automatically triggers the schedule
Only modified models are deployed (using state comparison)
Status updates are posted to Slack
No GitHub Actions workflow needed!
📚 Documentation: GitHub Native CD Guide
Option 2: Manual GitHub Actions (Alternative)
If you need custom deployment logic or prefer managing workflows yourself:
# .github/workflows/paradime_cd.yml
name: Paradime Continuous Deployment

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to Production
        run: |
          pip install paradime-io
          paradime bolt run "continuous_deployment" --wait
        env:
          PARADIME_API_KEY: ${{ secrets.PARADIME_API_KEY }}
          PARADIME_API_SECRET: ${{ secrets.PARADIME_API_SECRET }}
          PARADIME_API_ENDPOINT: ${{ secrets.PARADIME_API_ENDPOINT }}
Part 6: Step-by-Step Migration Process
Phase 1: Parallel Run (Week 1)
Keep GitHub Actions running as-is
Create equivalent schedules in Paradime Bolt
Monitor both for consistency
Compare results and timing
Phase 2: Validation (Week 2)
Verify all jobs run successfully in Paradime
Test notifications (Slack/email)
Check monitoring and logs
Validate data quality remains consistent
Phase 3: Cutover (Week 3)
Disable GitHub Actions schedules (comment out cron; see the snippet after this list)
Keep GitHub Actions files for emergency fallback
Monitor Paradime closely for first few days
Update documentation and runbooks
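One low-risk way to handle that first step is to comment out only the schedule trigger in the Part 1 workflow while keeping workflow_dispatch, so the old job remains available as a manual fallback:
# .github/workflows/dbt_production.yml (during cutover)
name: dbt Production Run
on:
  # schedule:
  #   - cron: '0 6 * * *'   # disabled - now handled by Paradime Bolt
  workflow_dispatch: # keep as an emergency manual trigger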
Phase 4: Cleanup (Week 4+)
Archive GitHub Actions workflows
Update team documentation
Train team on Bolt interface
Optimize schedules using deferred runs
Part 7: Troubleshooting Common Issues
Issue 1: Schedule Not Running
Check:
Schedule is not paused (look for pause icon)
Cron syntax is correct
Branch exists and is accessible
Production connection is active
Issue 2: Commands Failing
Check:
Connection credentials are valid
Target profile exists in profiles.yml
Models exist in specified branch
Check logs in Bolt → Run History
Issue 3: Notifications Not Sending
Check:
Slack workspace is connected (Settings → Integrations)
Channel names are correct (include the #)
Email addresses are valid
Notification toggles are enabled
Issue 4: Git Sync Issues
Check:
Git credentials are valid
Branch exists
Repository is accessible
Try manual sync in Settings
Part 8: Best Practices
1. Naming Conventions
Use descriptive, consistent names:
✅ daily_full_refresh
✅ hourly_incremental_models
❌ schedule1
❌ test
2. Start Simple
Begin with Standard schedules, then adopt:
Deferred runs for optimization
Turbo CI for PR validation
On Run Completion for dependencies
3. Monitor Proactively
Set up Slack notifications for all production jobs
Review Run History weekly
Check SLA compliance in analytics
4. Use Version Control
Keep paradime_schedules.yml in git
Review changes in PRs
Document major changes
5. Leverage Deferred Runs
After establishing baseline schedules:
Identify frequently-modified models
Create deferred schedules for efficiency
Monitor compute savings
Part 9: Quick Reference
Common Cron Schedules
@hourly # Every hour at minute 0
@daily # Every day at midnight UTC
@weekly # Every Sunday at midnight UTC
0 6 * * * # Daily at 6 AM UTC
0 */4 * * * # Every 4 hours
0 9 * * 1-5 # Weekdays at 9 AM UTC
Essential Commands
# Run all models
dbt run
# Run specific model
dbt run -s my_model
# Run modified models only
dbt run -s state:modified+
# Test all models
dbt test
# Full refresh
dbt run --full-refresh
Conclusion
Migrating from GitHub Actions to Paradime Bolt offers:
Simplified management: No infrastructure to maintain
Better observability: Built-in monitoring and analytics
Cost optimization: Deferred runs and smart scheduling
Native dbt™ integration: Purpose-built for dbt™ workflows
Team collaboration: Centralized scheduling and monitoring
Start with a simple schedule migration, validate in parallel, then gradually adopt advanced features like deferred runs and Turbo CI for maximum efficiency.
Questions or Issues?
Check the Paradime Help Center
Review the Bolt Documentation
Contact Paradime support for assistance