# Datadog alerts

{% hint style="info" %}
Paradime integrates natively with Elementary CLI to enable you to generate reports and/or send alerts using the Bolt scheduler out of the box. **No additional installation required.**
{% endhint %}

Elementary sends alerts to Datadog by creating incidents with detailed information about data quality issues, test failures, and model errors. Each alert becomes a structured incident in your Datadog dashboard with appropriate severity levels and metadata.

***

#### 1. Get Datadog API Credentials

To send incidents to Datadog, you'll need both an API key and an Application key.

**Get your API Key**

1. Log in to your Datadog account
2. Navigate to **Organization Settings** → **API Keys**
3. Click **+ New Key** to create a new API key
4. Give it a descriptive name like "Elementary Integration"
5. Copy the API key — you'll need this later

**Get your Application Key**

1. In **Organization Settings**, go to **Application Keys**
2. Click **+ New Key** to create a new application key
3. Give it a descriptive name like "Elementary Integration"
4. Copy the application key — you'll need this later

**Ensure the Application key has the following permissions**

1. `incident_notification_settings_read`
2. `incident_read`
3. `incident_write`
4. `teams_read`
5. `user_access_read`

**Identify your Datadog Site**

Your Datadog site depends on your region. You can check your site by looking at your Datadog URL when logged in.

| Region        | Site                |
| ------------- | ------------------- |
| US1 (default) | `datadoghq.com`     |
| US3           | `us3.datadoghq.com` |
| US5           | `us5.datadoghq.com` |
| EU1           | `datadoghq.eu`      |
| AP1           | `ap1.datadoghq.com` |
| GOV           | `ddog-gov.com`      |
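If your jobs run in more than one region, a small shell helper can keep the region-to-site mapping in one place. This is an illustrative sketch only (the function name is made up); the values follow the table above:

```shell
# Illustrative helper: map a Datadog region code to its site value.
# Region codes and sites follow the table above.
datadog_site_for_region() {
  case "$1" in
    US1) echo "datadoghq.com" ;;
    US3) echo "us3.datadoghq.com" ;;
    US5) echo "us5.datadoghq.com" ;;
    EU1) echo "datadoghq.eu" ;;
    AP1) echo "ap1.datadoghq.com" ;;
    GOV) echo "ddog-gov.com" ;;
    *)   echo "datadoghq.com" ;;  # fall back to the US1 default
  esac
}

datadog_site_for_region EU1   # → datadoghq.eu
```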

***

#### 2. Configure the Integration

Pass your credentials directly when running `edr monitor`. For secrets, use environment variables in the Bolt command, as [described here](https://docs.paradime.io/app-help/documentation/bolt/special-environment-variables/runtime-environment-variables).

```shell
edr monitor \
  --datadog-api-key <your_api_key> \
  --datadog-application-key <your_application_key> \
  --datadog-site <your_site> \
  --datadog-default-severity SEV-3
```
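In practice, the placeholders above would be replaced with environment variable references so secrets never appear in the command itself. The variable names below are assumptions; use whatever names you configure as Bolt environment variables:

```shell
# Assumed variable names (DATADOG_API_KEY, DATADOG_APP_KEY, DATADOG_SITE);
# set them as Bolt secret environment variables rather than hard-coding values.
edr monitor \
  --datadog-api-key "$DATADOG_API_KEY" \
  --datadog-application-key "$DATADOG_APP_KEY" \
  --datadog-site "${DATADOG_SITE:-datadoghq.com}" \
  --datadog-default-severity SEV-3
```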

**Available CLI options:**

| Option                       | Short flag | Description                                    |
| ---------------------------- | ---------- | ---------------------------------------------- |
| `--datadog-api-key`          | `-dak`     | Your Datadog API key                           |
| `--datadog-application-key`  | `-dapp`    | Your Datadog Application key                   |
| `--datadog-site`             | `-ds`      | Your Datadog site (e.g., `datadoghq.com`)      |
| `--datadog-default-severity` | `-dsev`    | Default incident severity (`SEV-1` to `SEV-5`) |

***

#### 3. Test your Integration

Run the following command to create a test incident in your Datadog account and verify the integration is configured correctly:

```shell
edr monitor --test-datadog
```

If successful, you'll see a test incident created in your Datadog dashboard under **Incidents**, including sample error details, metadata, and all configured notification settings.

***

#### 4. Execute the CLI

Once configured, run the following command after your dbt runs and tests:

```shell
edr monitor \
  --datadog-api-key <your_api_key> \
  --datadog-application-key <your_app_key> \
  --datadog-site <your_site> \
  --group-by [table | alert]
```

***

#### 5. Per-Alert Customization via dbt YAML

{% hint style="info" %}
You can override Datadog incident settings on a per-model or per-test basis directly in your dbt project YAML files. These settings take precedence over the global CLI defaults.
{% endhint %}

**Where to add these settings**

Per-alert Datadog settings live inside the `alerts_config` block under `meta` in your dbt YAML files. They can be applied at:

* **Model level** — affects all alerts from that model's tests
* **Test level** — affects only that specific test (overrides model-level if both are set)

```yaml
# models/schema.yml

models:
  - name: my_model
    meta:
      alerts_config:
        datadog_severity: "SEV-2"               # model-level default
        datadog_notification_handle: "@team-data-quality"

    columns:
      - name: user_id
        tests:
          - not_null:
              meta:
                alerts_config:
                  datadog_severity: "SEV-1"      # test-level override — takes precedence
                  datadog_commander_uuid: "abc123-uuid-here"
```

**Available per-alert parameters**

| Parameter                     | Type   | Description                                                                                                                                                                                                                                                             |
| ----------------------------- | ------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `datadog_severity`            | string | Overrides the incident severity for this alert. Accepted values: `SEV-1`, `SEV-2`, `SEV-3`, `SEV-4`, `SEV-5`. Takes precedence over the CLI `--datadog-default-severity` flag and any status-based severity mapping.                                                    |
| `datadog_notification_handle` | string | Adds a Datadog notification handle to the incident (e.g. `@team-first-response` or `@user@example.com`). This sends a notification to the handle but does **not** assign them as a responder. The `@` prefix is optional — Elementary adds it automatically if missing. |
| `datadog_commander_uuid`      | string | Sets the incident commander for this alert. Must be a valid Datadog **user UUID** (not a handle). To find a user's UUID, go to **Organization Settings** → **Users** in Datadog. Overrides the global commander configured via the CLI.                                 |
| `datadog_incident_type_uuid`  | string | Sets a custom incident type for this alert. Must be a valid Datadog **incident type UUID**. To find incident type UUIDs, go to **Incidents** → **Settings** → **Incident Types** in Datadog.                                                                            |

{% hint style="warning" %}
`datadog_commander_uuid` and `datadog_incident_type_uuid` require **UUIDs**, not handles or display names. Using the wrong format will cause the incident creation to fail.
{% endhint %}

**How to find the required UUIDs**

**User UUID (`datadog_commander_uuid`)**

1. In Datadog, navigate to **Organization Settings** → **Users**
2. Click on the user you want to assign as commander
3. The UUID is visible in the URL: `app.datadoghq.com/organization-settings/users/<uuid>`

**Incident Type UUID (`datadog_incident_type_uuid`)**

1. In Datadog, navigate to **Incidents** → **Settings** → **Incident Types**
2. Click on the incident type you want to use
3. The UUID is visible in the URL or in the incident type details panel
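As a sanity check, the UUID is simply the last path segment of the page URL. A quick shell one-liner extracts it (the URL below reuses the fabricated example UUID from this page):

```shell
# Example URL with a fabricated UUID — the UUID is the final path segment.
URL="https://app.datadoghq.com/organization-settings/users/f47ac10b-58cc-4372-a567-0e02b2c3d479"
UUID="${URL##*/}"   # strip everything up to and including the last "/"
echo "$UUID"        # → f47ac10b-58cc-4372-a567-0e02b2c3d479
```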

**Full example**

```yaml
# models/schema.yml

models:
  - name: payments
    meta:
      alerts_config:
        datadog_severity: "SEV-2"
        datadog_notification_handle: "@team-payments"
        datadog_commander_uuid: "f47ac10b-58cc-4372-a567-0e02b2c3d479"
        datadog_incident_type_uuid: "f0524b6b-9328-403a-a533-f701a175dff5"

    tests:
      - elementary.volume_anomaly
      - elementary.freshness_anomaly:
          meta:
            alerts_config:
              datadog_severity: "SEV-1"   # escalate freshness issues specifically
```

**Severity precedence order**

When determining the severity of a Datadog incident, Elementary applies the following precedence (highest to lowest):

1. **Test-level** `datadog_severity` in `meta.alerts_config`
2. **Model-level** `datadog_severity` in `meta.alerts_config`
3. **Tag-based severity rules** (if configured via CLI)
4. **Status-based mapping** (`error` → SEV-1, `fail` → SEV-2, `warn` → SEV-3)
5. **CLI default** `--datadog-default-severity` (fallback, default: `SEV-3`)
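The precedence above can be sketched as a small resolver. This is a hypothetical helper for illustration only, not part of the `edr` CLI; an empty argument means "not configured at that level":

```shell
# Hypothetical helper illustrating the severity precedence order; not part of edr.
# Arguments: test-level, model-level, tag-rule, dbt status, CLI default.
resolve_severity() {
  test_level="$1"; model_level="$2"; tag_rule="$3"; status="$4"; cli_default="${5:-SEV-3}"
  if [ -n "$test_level" ];  then echo "$test_level";  return; fi
  if [ -n "$model_level" ]; then echo "$model_level"; return; fi
  if [ -n "$tag_rule" ];    then echo "$tag_rule";    return; fi
  case "$status" in
    error) echo "SEV-1" ;;
    fail)  echo "SEV-2" ;;
    warn)  echo "SEV-3" ;;
    *)     echo "$cli_default" ;;  # fall back to the CLI default
  esac
}

resolve_severity "" "SEV-2" "" error   # → SEV-2: model-level beats the error → SEV-1 mapping
```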

***

#### Continuous Alerting

To monitor continuously, use your orchestrator to run `edr monitor` on a regular schedule. We recommend running it right after your dbt job ends to catch the latest data updates as quickly as possible.
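Outside of Bolt, the same schedule can be expressed in any orchestrator. As a generic illustration (the path, timing, and log location are placeholders), a crontab entry might look like:

```
# Run edr monitor at :15 every hour, shortly after an on-the-hour dbt job.
15 * * * * cd /path/to/dbt/project && edr monitor >> /var/log/edr_monitor.log 2>&1
```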

***

#### Deduplication

Elementary automatically deduplicates Datadog incidents. Before creating a new incident, it checks whether an active incident already exists for the same alert. If one is found, no duplicate is created.

This means:

* Re-running `edr monitor` without resolving the underlying issue will **not** create duplicate incidents.
* Once an incident is resolved in Datadog, the next failing run will create a fresh incident.

***

#### Alert on Source Freshness Failures

{% hint style="warning" %}
Not supported in dbt Cloud.
{% endhint %}

To alert on source freshness failures, run `edr run-operation upload-source-freshness` immediately after each execution of `dbt source freshness`. This operation uploads the results to a table, and the subsequent `edr monitor` execution will send the alert as a Datadog incident.

Keep the following in mind:

* `dbt source freshness` and `upload-source-freshness` must run from the same machine.
* `upload-source-freshness` requires the `--project-dir` argument to be passed.
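The required ordering can be sketched as a single job script. This is a dry-run sketch: the `run` helper only prints each command so the ordering is visible; in a real job you would call the commands directly, and the project path is a placeholder:

```shell
# Dry-run sketch of the step order; `run` prints instead of executing.
run() { echo "+ $*"; }

run dbt source freshness
run edr run-operation upload-source-freshness --project-dir /path/to/dbt/project
run edr monitor
```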
