# SQL Execution Tool

{% hint style="warning" icon="unlock-keyhole" %}
**Currently in Private Preview.** To get access, reach out to the Paradime team at <support@paradime.io>
{% endhint %}

The SQL Execution Tool allows DinoAI to run SQL queries directly against your connected data warehouse, making it the fastest way to validate assumptions, inspect results, and debug issues without leaving the Paradime IDE.

This tool bridges the gap between writing code and verifying it, enabling DinoAI to execute queries and reason over the results — from profiling a dataset to generating dbt tests based on what the data actually contains.

{% hint style="info" %}
**This tool is for query execution.** If you only need to explore warehouse metadata such as schemas, tables, or column definitions, use the [Warehouse Tools](https://docs.paradime.io/app-help/documentation/dino-ai/tools-and-features/warehouse-tool) instead.
{% endhint %}

### Capabilities

The SQL Execution Tool executes SQL and returns the results directly to DinoAI. Specifically, it:

* Runs any SQL statement against your connected data warehouse
* Automatically limits result sets to a maximum of 1,000 rows
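To make the cap concrete, even an unbounded query such as the following comes back truncated at 1,000 rows rather than streaming the entire table (the table name is illustrative):

```sql
-- Even without a LIMIT clause, at most 1,000 rows are returned
SELECT * FROM analytics.orders;
```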

### Using the SQL Execution Tool

1. Open DinoAI in the right panel of the Code IDE
2. Provide the SQL query you want to run, either directly or as part of a broader prompt
3. Add any additional instructions for how DinoAI should handle or interpret the results
4. Grant permission when DinoAI asks to execute the query
5. Review the results and implement DinoAI's suggested actions

{% @arcade/embed flowId="Di2wyCkR9FuUScjrqT3G" url="https://app.arcade.software/share/Di2wyCkR9FuUScjrqT3G" %}

### Example Use Cases

#### Profiling a Dataset

**Prompt:**

{% code overflow="wrap" %}

```
Profile analytics.orders — check row count, null rates, and min/max for numeric columns.
```

{% endcode %}

**Result:** DinoAI writes and executes the necessary queries to produce a data profile of the table, summarizing key statistics per column so you can quickly assess data quality before building models on top of it.
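For reference, the profile DinoAI produces might be driven by a query along these lines; the column names `order_id` and `amount` are illustrative, since the real query depends on the table's schema:

```sql
-- Sketch of a single-row profile; column names are hypothetical
SELECT
    COUNT(*)                                              AS row_count,
    AVG(CASE WHEN order_id IS NULL THEN 1.0 ELSE 0.0 END) AS order_id_null_rate,
    AVG(CASE WHEN amount   IS NULL THEN 1.0 ELSE 0.0 END) AS amount_null_rate,
    MIN(amount)                                           AS amount_min,
    MAX(amount)                                           AS amount_max
FROM analytics.orders;
```

Because this aggregates down to a single row, it stays comfortably within the 1,000-row result limit.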

#### Finding Distinct Values and Configuring a Test

**Prompt:**

{% code overflow="wrap" %}

```
Find all distinct values in the status column of analytics.orders and configure an accepted values test for it.
```

{% endcode %}

**Result:** DinoAI queries the column for all unique values, presents the results, and generates the corresponding `accepted_values` test in your `schema.yml` based on what it finds in the data.
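The generated test might resemble the following `schema.yml` entry; the listed `values` are placeholders that DinoAI replaces with whatever the distinct-values query actually returns:

```yaml
models:
  - name: orders
    columns:
      - name: status
        tests:
          - accepted_values:
              values: ['placed', 'shipped', 'completed', 'returned'] # placeholders
```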

#### Analyzing Query Results

**Prompt:**

{% code overflow="wrap" %}

```
Run this query and analyze the output — flag any anomalies or unexpected patterns.
```

{% endcode %}

**Result:** DinoAI executes the SQL, interprets the returned data, and highlights anything unusual such as unexpected nulls, outlier values, or distributions that don't match common assumptions — giving you a starting point for investigation.
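As a sketch of one anomaly check DinoAI might layer on top of your query, a grouped count makes rare or unexpected categories easy to spot (table and column names are illustrative):

```sql
-- A category with a surprisingly low count, or one you didn't expect at all,
-- stands out immediately in this distribution
SELECT status, COUNT(*) AS n
FROM analytics.orders
GROUP BY status
ORDER BY n DESC;
```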

### Working with Other Tools

The SQL Execution Tool works well alongside DinoAI's other capabilities to support your full development workflow:

* Combine with the **Warehouse Tool** to explore table and column metadata before writing and executing a query
* Combine with the **Terminal Tool** to run dbt commands and then validate the output by querying the resulting models directly
* Use alongside **Git Lite** to commit model changes after query execution confirms the results look correct

### Best Practices

* **Validate early and often** — Run queries against your models as you build to catch issues before they reach production
* **Let DinoAI interpret results** — Ask DinoAI to summarize or analyze query output rather than just returning raw rows, especially for large or complex result sets
* **Use it to drive test generation** — Query for distinct values, null rates, or value ranges and ask DinoAI to turn the findings directly into dbt tests
* **Be mindful of row limits** — Results are capped at 1,000 rows. If your query returns exactly 1,000 rows, there may be additional data not shown — add filters or aggregations to work within the limit effectively
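For example, rather than selecting raw rows from a large table, a daily aggregation keeps full coverage of the data while staying under the cap (column names are illustrative, and `DATE_TRUNC` syntax varies slightly by warehouse):

```sql
-- One row per day instead of one row per order: covers all the data within the cap
SELECT
    DATE_TRUNC('day', ordered_at) AS order_day,
    COUNT(*)                      AS order_count,
    SUM(amount)                   AS revenue
FROM analytics.orders
GROUP BY 1
ORDER BY 1;
```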
