Paradime Help Docs
Workspaces and data mesh

A Paradime Workspace is a self-contained unit that comes with its own repo, users, data warehouse connections, production schedules, alerting, and notifications.


Last updated 3 months ago


What is a workspace?

A Paradime Workspace is a self-contained unit where teams can do their analytics work. Each workspace comes with its own repo, users, data warehouse connections, production schedules, alerting, notifications, dbt™* version, and integrations.

A workspace maps loosely to an analytics team and its daily work. Everything done inside a workspace remains ring-fenced within it. Each workspace can have its own set of users, but users can also be shared across workspaces with different permission levels. For example, someone can be an admin in one workspace, a developer in another, and a read-only business user in a third.

All workspaces within a company account share the same data residency. So if a company has chosen London (eu-west-2) as its data residency, all of its workspaces and associated data will be located in London. In the future, we will scope all products on the Paradime platform at the workspace level.

Why did we build this?

At Paradime, we work backwards from the customer and shape the product so that it fits seamlessly within our users' daily work.

From an organizational perspective, we have seen many of our customers moving, or considering moving, from a monolithic analytics team to a distributed one. This team structure is often called the hub-and-spoke model or the domain-based team model. As companies grow, this is also how BI teams tend to get organized. If we were to draw the organization structure around analytics with, say, two business units, Sales and Product, it would look something like the diagram below.

As a result, analytics teams tend to get broken down to better align with the needs of business stakeholders.

Analytics work includes providing dashboard-level insights, adding/removing/updating metrics, dbt™* models, and data sources, running jobs, and more. To align with this shift in how teams are re-organizing and how analytics work is carried out, we built Paradime Workspaces.

Workspaces allow companies to go from a single, central analytics team to a distributed analytics team where the work is distributed too. The equivalent software engineering analogy is going from a monolith to a microservices architecture.

Customers can then configure each of their workspaces independently, be it users, dbt™* repo and models, production schedules, warehouse connections, or integrations.

This level of flexibility is a game-changer for companies looking to deploy an analytics platform that is future-proof and aligned to business goals. Data leaders can deploy distributed analytics teams faster than ever, with minimal resources and no maintenance overhead.

How does this work?

As a customer, when you first create your account, you start with a default workspace. If your plan allows multiple workspaces, you can see all the workspaces you are a member of from the drop-down in the navigation bar.

If you are an admin, you can add and manage workspaces by clicking the Manage Workspaces menu item.

Use-cases

With Workspaces, we are unlocking multiple approaches to how analytics teams can work. We have outlined some of the use cases we see among our customers today, and we would love to hear from you if you think we are missing something.

Enterprises operating across multiple continents (e.g. EU and US), or mid-market companies adopting data mesh

  • Enterprises with operations in multiple countries can set up workspaces located in the EU and US to meet security and local privacy laws, e.g. GDPR. With Paradime workspaces, organizations can have accounts set up in both the EU and US. Customers get full flexibility to organize their dbt™* projects, warehouse connections, users, etc. in line with how teams and business stakeholders are structured.

  • Each workspace can be independently set up and tuned, giving data leaders the freedom and flexibility to build a global analytics platform.

  • Unlike a dbt Cloud™* project, a Paradime workspace can be connected to multiple data warehouse connections for development and production.

Connected data pipelines across workspaces

  • In a data mesh, people often refer to data products: outputs of one workspace that are consumed as inputs in another. Organizations need to model the dependency between workspaces within their scheduling and orchestration pipelines. Each Paradime workspace can have its own production dbt™* schedules.

  • Each workspace also comes with its own API key and API secret. With this abstraction in place, data engineers can model the dependencies between workspaces as DAGs in their favorite orchestration platform, then use each workspace's API key and secret to trigger a schedule run in one workspace when an upstream workspace has completed.

  • Platform teams can control how the pipeline between connected workspaces functions, while analytics teams set up production dbt™* schedules within their own workspace.
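The cross-workspace trigger described above can be sketched in Python. Note this is a hypothetical sketch: the endpoint URL and header names below are assumptions, not the real Bolt API contract; consult the Bolt API and Python SDK documentation for the actual interface.

```python
import json

# Hypothetical endpoint -- placeholder only, not the documented Bolt API URL.
BOLT_TRIGGER_URL = "https://api.paradime.io/bolt/trigger"


def build_trigger_request(api_key: str, api_secret: str, schedule_name: str) -> dict:
    """Assemble headers and body to trigger a Bolt schedule in a downstream
    workspace, using that workspace's own API key and secret."""
    return {
        "headers": {
            "X-API-KEY": api_key,        # assumed header name
            "X-API-SECRET": api_secret,  # assumed header name
            "Content-Type": "application/json",
        },
        "body": json.dumps({"schedule_name": schedule_name}),
    }


# An orchestrator task that runs on upstream success would POST this request,
# so the downstream workspace's schedule starts only after the upstream
# workspace completes.
req = build_trigger_request("finance-key", "finance-secret", "daily_finance_marts")
```

In Airflow, for example, this request would live in a task placed downstream of the sensor or task that confirms the upstream workspace's run finished.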

Multiple environments for test/staging and live

  • For organizations with many dbt™* projects, upgrading dbt™* versions without impacting live projects is often a priority for platform teams. Platform teams also like to test different data warehouse configurations, dbt™* versions, macros, and packages before rolling them out to the rest of the analytics teams.

  • With Paradime Workspaces, it's possible to have one workspace where analytics teams work day-to-day, and another where platform teams test changes before rolling them out to the live workspace.

Migrating from a monolith to data-mesh architecture

  • Many organizations are now considering moving from a monolith to a data mesh architecture. During the migration, they want zero impact and downtime for their data analytics teams.

  • A Paradime Workspace is dedicated to the monolith project where the majority of the analytics team works. Without disrupting their flow, the platform team can spin up additional workspaces and migrate to a data mesh. Once migration is complete, they can deprecate the monolith workspace and move the entire team to the mesh workspaces with zero downtime.

Package development

Private packages: Customers can do their daily work in a live dbt™* project while also building and maintaining private packages. With Paradime Workspaces, it's possible to have one workspace for analytics and another for dbt™* packages.

Open source package development: If you are an OSS enthusiast who develops open source dbt™* packages for the community, you likely maintain many dbt™* repos. With Paradime Workspaces, each dbt™* package can live inside its own workspace. Since workspaces support multiple warehouse connections, developers can test their package against each warehouse before every release. With a unified development environment, package developers can bring even more utility on top of dbt™*.
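The pre-release check mentioned above can be sketched as a small driver script. This is a dry-run sketch under assumptions: the target names and the `integration_tests` project directory are hypothetical, and the commands are printed rather than executed (pass each list to `subprocess.run` to actually run the builds).

```python
# Dry-run sketch: build the dbt command for each warehouse target the
# package should be verified against before a release. Target names and
# the project directory are hypothetical.
targets = ["snowflake", "bigquery", "redshift", "duckdb"]

commands = [
    ["dbt", "build", "--target", t, "--project-dir", "integration_tests"]
    for t in targets
]

# Print the commands instead of executing them; swap this loop for
# subprocess.run(cmd, check=True) to run the builds for real.
for cmd in commands:
    print(" ".join(cmd))
```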

Comparison with dbt Cloud™*

The primary differences with dbt Cloud™ are as follows:

Warehouses connections:

  • In dbt Cloud™*, the fundamental unit is a project. Each project has its own dbt™* repo linked to a single data warehouse connection. This works well for a single repo and single warehouse connection, typically when teams are just getting started on their dbt™* journey. But we increasingly see modern teams using warehouses with different compute sizes in Snowflake to support small and large data sets and optimize cost. In that scenario, dbt Cloud™* customers have to keep creating projects and pay for Enterprise pricing. This platform limitation leads to unnecessary project duplication even when the projects share the same git repo.

  • In Paradime, the fundamental unit is a workspace. Each workspace supports multiple warehouse connections, making Paradime workspaces much more feature-rich and versatile. Teams don't need unnecessary duplicate projects.
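In plain dbt™* terms, multiple connections per project look like a single profile with several targets. Paradime manages connections through the workspace UI rather than a local file, but the equivalent `profiles.yml` sketch below illustrates the idea; the profile name, warehouses, and credentials are all illustrative.

```yaml
# Illustrative dbt profiles.yml: one project, several warehouse targets.
# All names and credentials here are hypothetical.
analytics:
  target: dev_small
  outputs:
    dev_small:            # small compute for day-to-day development
      type: snowflake
      account: my_account
      user: dev_user
      warehouse: WH_XS
      database: ANALYTICS_DEV
      schema: dbt_dev
      threads: 4
    prod_large:           # larger compute for production runs
      type: snowflake
      account: my_account
      user: prod_user
      warehouse: WH_L
      database: ANALYTICS
      schema: marts
      threads: 8
```

Switching compute is then `dbt run --target prod_large` instead of duplicating the whole project.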

Alerting and Notifications

  • In dbt Cloud™*, all alerts and notifications for schedules are set at the project level. Notifications from all schedules in a project go to the same Slack channel, which leads to alert fatigue, confusion, and improper triaging of issues.

  • In Paradime, alerting and notifications are set on a per-schedule basis, offering much more granular control and faster actioning. Alerts from each schedule go to the relevant Slack channel and the relevant people, leading to more effective action.
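In schedules-as-code form, per-schedule routing means each schedule entry carries its own notification target. The sketch below is an approximation: the field names follow typical Bolt schedules-as-code usage but are not guaranteed, and the channels and selectors are invented; check the Bolt "Schedules as Code" documentation for the exact schema.

```yaml
# Illustrative schedules-as-code fragment: each schedule notifies its
# own team's channel. Field names approximate; channels hypothetical.
schedules:
  - name: finance_daily
    schedule: "0 6 * * *"            # cron: 06:00 every day
    commands:
      - dbt run --select finance
    slack_on: ["failed"]
    slack_notify: ["#finance-data-alerts"]

  - name: marketing_hourly
    schedule: "0 * * * *"            # cron: top of every hour
    commands:
      - dbt run --select marketing
    slack_on: ["failed"]
    slack_notify: ["#marketing-data-alerts"]
```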

Integrations

  • In dbt Cloud™*, customers have to use the administrative and metadata APIs to build integrations. dbt Cloud™* does not offer out-of-the-box integrations with MDS apps like Looker, Tableau, Fivetran, Hightouch, etc.

  • In Paradime, each workspace has its own integrations. For example, a finance analytics workspace can have end-to-end lineage across Fivetran, dbt™*, and Looker, giving that team all the tools to work independently. Through Paradime workspaces, analytics teams can function end-to-end on their own.

Pricing

  • In dbt Cloud™*, only Enterprise customers can have more than one dbt™* project. This forces cash- and resource-strapped teams to move to local VSCode setups that are hard to build, develop, and maintain. For teams without the resources to maintain a local setup, it increases their dbt Cloud™* cost by 4-6 times or more. This is unfair.

  • In Paradime, workspace limits across pricing tiers are rational, fair, and in line with how typical teams are structured.

Conclusion

Paradime Workspaces unlock a wealth of use cases for our customers. Organizations of any size can now implement data mesh and reach a high level of technical maturity in their analytics platform with comparatively little effort.
