# Run and Test all your dbt™ Models

This template creates a schedule to execute `dbt run` and `dbt test` across your entire dbt™ project, ensuring comprehensive model execution and validation. It serves as a foundational production schedule, automatically updating and validating all your models on a regular basis to maintain data freshness and quality.

{% hint style="success" %}

### Key Benefits

* Ensures comprehensive model execution and validation for downstream usage (e.g., business reporting and dashboards that need current data)
* Automatically updates and validates all models on a regular basis
* Maintains data freshness and quality across your dbt™ data pipeline
  {% endhint %}

{% hint style="warning" %}

### Prerequisites

* [Scheduler Environment](https://docs.paradime.io/app-help/documentation/settings/connections/scheduler-environment) is connected to your data warehouse provider.
  {% endhint %}

{% @arcade/embed flowId="5gPkeoXWy5IS0RKCSfaV" url="https://app.arcade.software/share/5gPkeoXWy5IS0RKCSfaV" %}

### Default Configuration

#### Schedule Settings

<table><thead><tr><th>Setting</th><th width="262">Value</th><th>Explanation</th></tr></thead><tbody><tr><td><strong>Schedule Type</strong></td><td><code>Standard</code></td><td>Ensures consistent execution for production workloads in a single environment. Best for regular data pipeline runs</td></tr><tr><td><strong>Schedule Name</strong></td><td><code>run all models</code> </td><td>Descriptive name that indicates purpose</td></tr><tr><td><strong>Git Branch</strong></td><td><code>main</code></td><td>Uses your default production branch to ensure you're always running the latest approved code</td></tr></tbody></table>

#### Command Settings

The template uses two sequential commands that work together to build and validate your data pipeline:

* `dbt run`: Executes SQL transformations to build or update all models in your dbt™ project, following your defined model dependencies
* `dbt test`: After models are built, runs your configured data tests to ensure data quality, including schema tests (unique, not null) and custom data quality tests

This sequence ensures that all your models are not only built but also validated before being used by downstream consumers.
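The fail-fast behavior of this two-command sequence can be sketched as a simple loop: each command runs only if the previous one succeeded. This is a hypothetical illustration of the sequencing concept, not Bolt's actual internals.

```python
# Sketch of fail-fast sequential command execution, as a scheduler
# might chain `dbt run` and `dbt test`. Hypothetical illustration only;
# it does not reflect Bolt's actual implementation.
import subprocess

def run_sequence(commands):
    """Run each command in order; stop at the first failure."""
    for cmd in commands:
        if subprocess.run(cmd).returncode != 0:
            return False  # e.g. don't test models that failed to build
    return True

# The template's two commands, in order (requires dbt installed to run):
# run_sequence([["dbt", "run"], ["dbt", "test"]])
```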

{% hint style="info" %}
For custom command configurations, see [Command Settings](https://docs.paradime.io/app-help/documentation/bolt/creating-schedules/command-settings) documentation.
{% endhint %}

#### Trigger Type

* **Type**: Scheduled Run (Cron)
* **Cron Schedule**: `0 */2 * * *` (every 2 hours at minute 0, balancing frequent data updates with reasonable resource usage)
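To make the cron step syntax concrete: the hour field `*/2` expands to every second hour starting at 0, so combined with minute `0` the schedule fires twelve times per day. A quick illustration using Python's `range`:

```python
# `0 */2 * * *` = minute 0, every 2nd hour (0, 2, ..., 22), every day.
# The `*/2` step value expands exactly like a range with step 2.
fire_hours = list(range(0, 24, 2))
fire_times = [f"{h:02d}:00" for h in fire_hours]
print(fire_times)  # 12 daily runs: 00:00, 02:00, ..., 22:00
```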

{% hint style="info" %}
For custom trigger configurations, see [Trigger Types](https://docs.paradime.io/app-help/documentation/bolt/creating-schedules/trigger-types) documentation.
{% endhint %}

#### Notification Settings

* **Email Alerts**:
  * **Success**: Confirms all models were built and tested successfully, letting you know your data pipeline is healthy
  * **Failure**: Immediately alerts you when models fail to build or tests fail, allowing quick response to issues
  * **SLA Breach**: Alerts when runs take longer than the set duration (default: 2 hours), helping identify performance degradation

{% hint style="info" %}
For custom notification configurations, see [Notification Settings](https://docs.paradime.io/app-help/documentation/bolt/creating-schedules/notification-settings) documentation.
{% endhint %}

### Use Cases

#### Primary Use Cases

* **Regular Production Updates**: Keep production data fresh by regularly rebuilding models based on upstream changes. Essential for business reporting and dashboards that need current data.
* **Continuous Data Validation**: Catch data quality issues early by running tests after every model build. Prevents bad data from flowing to downstream consumers.
* **Initial Project Setup**: Get started quickly with a proven production schedule configuration that follows dbt™ best practices.

### When to Customize

Tailor this template to your specific needs:

* **Adjust trigger type** based on data freshness requirements:
  * Hourly updates for critical models (`0 * * * *`)
  * Daily updates for standard reporting (`0 0 * * *`)
  * Weekly updates for historical analysis (`0 0 * * 0`)
* **Modify command settings** to control what gets built and tested:
  * Build specific models:
    * `dbt run --select finance.*+` (finance models and their downstream dependents)
    * `dbt run --select state:modified+` (changed models and dependents)
  * Test specific models:
    * `dbt test --select tag:critical` (test critical models)
    * `dbt test --select config.severity:error` (run error-level tests)
* **Add notification destinations** (Slack, MS Teams) for team collaboration
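The trailing `+` in selectors like `state:modified+` walks the dependency graph downstream, selecting each matched model plus everything that depends on it. A toy sketch of that traversal (the model names and dependency graph here are hypothetical, not part of the template):

```python
# Toy sketch of dbt's trailing-`+` selector semantics: select a model
# plus all of its downstream dependents. Graph and names are made up.
deps = {  # parent -> direct downstream children
    "stg_payments": ["fct_revenue"],
    "fct_revenue": ["finance_dashboard"],
    "finance_dashboard": [],
}

def downstream(model):
    """Return the model plus every transitive downstream dependent."""
    selected = {model}
    stack = [model]
    while stack:
        for child in deps[stack.pop()]:
            if child not in selected:
                selected.add(child)
                stack.append(child)
    return selected

print(sorted(downstream("stg_payments")))
# ['fct_revenue', 'finance_dashboard', 'stg_payments']
```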
