# Re-execute the last dbt™ command from the point of failure

This template outlines how to configure Paradime's scheduler to retry failed dbt™ models or tests. By implementing this solution, you can build a resilient data pipeline that reruns failed models and tests from previous executions.

{% hint style="success" %}

### Key Benefits

* Ensures your data pipeline continues functioning despite occasional failures
* Less time spent checking for and manually rerunning failed models
* Quick recovery from transient issues that cause model failures
{% endhint %}

{% hint style="warning" %}

### Prerequisites

* [Scheduler Environment](https://docs.paradime.io/app-help/documentation/settings/connections/scheduler-environment) is connected to your data warehouse provider.
{% endhint %}

{% @arcade/embed flowId="ugVw2rvvUBWa2RpNRMRf" url="https://app.arcade.software/share/ugVw2rvvUBWa2RpNRMRf" %}

### Default Configuration

#### Schedule Settings

<table><thead><tr><th width="243">Setting</th><th width="262">Value</th><th>Explanation</th></tr></thead><tbody><tr><td><strong>Schedule Type</strong></td><td><code>Deferred</code></td><td>Defers to the state of another schedule so this run can pick up from a previous run's results</td></tr><tr><td><strong>Schedule Name</strong></td><td><code>dbt retry</code></td><td>Descriptive name that indicates purpose</td></tr><tr><td><strong>Deferred schedule</strong></td><td><code>hourly run</code></td><td>Specifies the production schedule you want to re-run from the last point of failure</td></tr><tr><td><strong>Git Branch</strong></td><td><code>main</code></td><td>Uses your default production branch to ensure you're always running the latest approved code</td></tr><tr><td><strong>Last run type (for comparison)</strong></td><td><code>Last Run</code></td><td>Ensures the schedule defers to the last run, including runs that completed with errors</td></tr></tbody></table>
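If you manage Bolt schedules as code, the settings above can be sketched as a `paradime_schedules.yml` entry. The field names below are illustrative assumptions, not a verified schema — confirm them against your schedule settings documentation before use:

```yaml
# Illustrative sketch only -- key names are assumptions, not the
# verified paradime_schedules.yml schema.
schedules:
  - name: dbt retry            # Schedule Name
    git_branch: main           # always run the latest approved production code
    deferred_schedule:         # Schedule Type: Deferred
      name: hourly run         # production schedule to re-run from failure
      run_on: last run         # defer to the last run, even if it errored
    commands:
      - dbt retry              # re-execute from the last point of failure
    trigger: off               # no cron; run on demand
```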

#### Command Settings

The template uses a single command to re-run models and tests from the last point of failure:

* `dbt retry`: Executes dbt™ models and tests from the last point of failure for the deferred schedule name set in the configuration.

This command ensures that all your models and tests that failed in the last run are re-executed.
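Under the hood, `dbt retry` inspects the `run_results.json` artifact written by the previous invocation and re-executes only the nodes that did not succeed. The Python sketch below illustrates that selection logic on a simplified artifact; the node names are made up for illustration, and the real artifact carries many more fields:

```python
# Statuses that warrant a retry: errors, failed tests, and nodes
# skipped because an upstream dependency failed.
RETRY_STATUSES = {"error", "fail", "skipped"}

def nodes_to_retry(run_results: dict) -> list[str]:
    """Select node ids from a run_results.json-style payload that a
    retry should re-execute. Successful nodes are left alone."""
    return [
        r["unique_id"]
        for r in run_results["results"]
        if r["status"] in RETRY_STATUSES
    ]

# A trimmed-down example of the artifact dbt writes after a run.
previous_run = {
    "results": [
        {"unique_id": "model.shop.stg_orders", "status": "success"},
        {"unique_id": "model.shop.fct_orders", "status": "error"},
        {"unique_id": "model.shop.orders_summary", "status": "skipped"},
    ]
}

print(nodes_to_retry(previous_run))
# -> ['model.shop.fct_orders', 'model.shop.orders_summary']
```

Because the successful `stg_orders` model is excluded, the retry run is faster and cheaper than re-running the whole pipeline from scratch.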

{% hint style="info" %}
For custom command configurations, see [Command Settings](https://docs.paradime.io/app-help/documentation/bolt/creating-schedules/command-settings) documentation.
{% endhint %}

**Trigger Type**

* **Type**: Scheduled Run (Cron)
* **Cron Schedule**: `OFF` (This schedule is run on demand rather than on a cron schedule)

{% hint style="info" %}
For custom Trigger configurations, see [Trigger Types](https://docs.paradime.io/app-help/documentation/bolt/creating-schedules/trigger-types) documentation.
{% endhint %}

#### Notification Settings

* **Email Alerts**:
  * **Success**: Confirms all models were re-built and tested successfully, letting you know your data pipeline is healthy
  * **Failure**: Immediately alerts you when models fail to build or tests fail, allowing quick response to issues
  * **SLA Breach**: Alerts when runs take longer than the set duration (default: 2 hours), helping identify performance degradation

{% hint style="info" %}
For custom notification configurations, see [Notification Settings](https://docs.paradime.io/app-help/documentation/bolt/creating-schedules/notification-settings) documentation.
{% endhint %}

### Use Cases

#### Primary Use Cases

* **Handling Intermittent Connection Issues**: Automatically retry models that failed due to temporary connection problems with your data warehouse
* **Recovering from Resource Constraints**: Retry models that failed due to memory limitations or query timeouts during peak usage times
* **Addressing Data Availability Delays**: Recover from failures caused by source data not being available at the expected time
* **Managing Dependencies**: Ensure downstream models get rebuilt after their dependencies are successfully retried
* **Production Recovery**: Quickly restore production data pipelines after unexpected failures without manual intervention
