# BigQuery Query Analysis Tool

The BigQuery Query Analysis Tool lets DinoAI take a deep dive into a specific BigQuery job by its ID, surfacing detailed performance and execution statistics so you can diagnose slow queries and slot contention without leaving the Paradime IDE.

This tool bridges the gap between noticing a performance issue and understanding its root cause, enabling DinoAI to analyze execution metrics, surface bottlenecks, and suggest concrete fixes — from query rewrites to reservation configuration changes.

{% hint style="info" %}
This tool is built for query-level debugging. If you need to explore dataset metadata or run exploratory SQL, use the [Warehouse Tool](https://docs.paradime.io/app-help/documentation/dino-ai/tools-and-features/warehouse-tool/bigquery-tools) or [SQL Query Tool](https://docs.paradime.io/app-help/documentation/dino-ai/tools-and-features/warehouse-tool/sql-execution-tool) instead.
{% endhint %}

### Capabilities

The BigQuery Query Analysis Tool fetches detailed execution data for a given BigQuery job ID from `INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION` and returns it as a structured report DinoAI can reason over. Specifically, it:

* Breaks down query duration and phase timings
* Reports slot consumption and resource usage
* Surfaces cache efficiency and partition usage
* Includes bytes processed and bytes billed
* Highlights queue time vs. execution time
* Defaults to a 7-day search window, narrowable with explicit start and end dates
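
For orientation, the derived numbers in that report boil down to simple arithmetic over the raw job statistics. A minimal sketch (field names mirror the `INFORMATION_SCHEMA` job columns; the actual report format DinoAI produces may differ):

```python
from datetime import datetime, timezone

def summarize_job(creation_time, start_time, end_time,
                  total_slot_ms, total_bytes_processed,
                  total_bytes_billed, cache_hit):
    """Derive headline metrics from raw BigQuery job statistics."""
    queue_s = (start_time - creation_time).total_seconds()  # waiting for slots
    exec_s = (end_time - start_time).total_seconds()        # actually executing
    # Average concurrent slots = slot-milliseconds / wall-clock execution ms.
    avg_slots = total_slot_ms / (exec_s * 1000) if exec_s else 0.0
    return {
        "queue_seconds": queue_s,
        "execution_seconds": exec_s,
        "avg_slots": round(avg_slots, 1),
        "gib_processed": round(total_bytes_processed / 2**30, 2),
        "gib_billed": round(total_bytes_billed / 2**30, 2),
        "cache_hit": cache_hit,
    }

# Example: a job that waited 2s, ran 30s, and used 600,000 slot-ms (~20 slots)
t0 = datetime(2024, 5, 1, 12, 0, 0, tzinfo=timezone.utc)
stats = summarize_job(t0, t0.replace(second=2), t0.replace(second=32),
                      total_slot_ms=600_000,
                      total_bytes_processed=5 * 2**30,
                      total_bytes_billed=5 * 2**30,
                      cache_hit=False)
```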

### Using the BigQuery Query Analysis Tool

1. Open DinoAI in the right panel of the Code IDE
2. Provide the BigQuery job ID you want to analyze (e.g. `bquxjob_28b5b82e_19c066dea29`)
3. Add your prompt describing what you want DinoAI to focus on — bottlenecks, slot usage, cache efficiency, partition pruning, and so on
4. Grant permission when DinoAI asks to access the query data
5. Review DinoAI's analysis and apply the suggested optimizations

### Example Use Cases

#### Diagnosing a Slow Query

**Prompt**

```
Analyze BigQuery job bquxjob_28b5b82e_19c066dea29. Suggest optimizations.
```

**Result:** DinoAI fetches the performance statistics for the job, identifies the most expensive phase, and explains what is causing the slowdown in plain language — along with concrete suggestions such as query rewrites, clustering key changes, or partition pruning improvements.
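
A quick follow-up check after applying a rewrite is translating bytes billed into cost. A rough sketch, assuming on-demand pricing at roughly $6.25 per TiB (rates vary by region and change over time, so confirm against current BigQuery pricing):

```python
def estimated_on_demand_cost(total_bytes_billed, usd_per_tib=6.25):
    """Approximate on-demand query cost from bytes billed."""
    return total_bytes_billed / 2**40 * usd_per_tib

before = estimated_on_demand_cost(4 * 2**40)  # 4 TiB scanned: $25.00
after = estimated_on_demand_cost(2**38)       # 0.25 TiB after pruning: $1.5625
```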

***

#### Investigating Slot Contention

**Prompt**

```
Compare this job's queue time vs execution time. Suggest reservation changes.
```

**Result:** DinoAI analyzes the queue and execution time breakdown, assesses whether the query is being held back by slot contention or under-provisioning, and recommends specific configuration changes such as reservation adjustments or a switch between on-demand and capacity pricing.
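
The kind of assessment described above can be reduced to a simple queue-to-total ratio. A sketch with illustrative thresholds (the cutoffs here are assumptions for demonstration, not fixed rules):

```python
def contention_assessment(queue_seconds, execution_seconds):
    """Rough heuristic: a high queue-to-total ratio suggests slot contention."""
    total = queue_seconds + execution_seconds
    ratio = queue_seconds / total if total else 0.0
    if ratio > 0.5:
        return "queue-dominated: likely slot contention or under-provisioning"
    if ratio > 0.2:
        return "noticeable queueing: consider more reservation headroom"
    return "execution-dominated: optimize the query itself"
```

For example, a job that queued for 30s but executed for only 10s is queue-dominated, while one that queued for 1s and ran for 30s points back at the query.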

***

#### Checking Cache Efficiency

**Prompt**

```
Was this query hitting the cache? How can I improve cache reuse?
```

**Result:** DinoAI checks the cache hit rate for the job and explains whether the query was able to reuse cached results. If not, it surfaces the likely reasons — such as non-deterministic functions or volatile filters — and suggests changes to improve cache efficiency on subsequent runs.
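
One common reason a result is not cached is a non-deterministic function in the SQL text. A minimal check along those lines (the function list is an illustrative subset; see BigQuery's cached-results documentation for the full set of cache-exemption rules):

```python
import re

# Functions that make a query non-deterministic and therefore uncacheable
# (illustrative subset, not an exhaustive list).
CACHE_BUSTERS = ("CURRENT_TIMESTAMP", "CURRENT_DATE", "CURRENT_TIME",
                 "RAND", "SESSION_USER", "GENERATE_UUID")

def cache_busting_functions(sql: str):
    """Return the non-deterministic functions referenced in a query's SQL."""
    upper = sql.upper()
    return [fn for fn in CACHE_BUSTERS if re.search(rf"\b{fn}\b", upper)]
```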

### Working with Other Tools

The BigQuery Query Analysis Tool works well alongside DinoAI's other capabilities to support your full debugging and optimization workflow:

* Combine with the **SQL Execution Tool** to re-run an optimized version of the query and validate the performance improvement against a new job ID
* Combine with the **Warehouse Tool** to cross-reference query performance with table metadata, clustering keys, and partition information
* Use alongside **Git Lite** to commit query or model changes made as a result of the performance analysis

### Best Practices

**Use the exact BigQuery job ID** — Find the job ID in the BigQuery console under **Job history** or from your query logs. Copy it directly to ensure DinoAI analyzes the right execution.

**Be specific about what to focus on** — Tell DinoAI whether you're investigating slow execution, high bytes billed, long queue times, or poor cache efficiency to get the most targeted analysis rather than a generic summary.

**Iterate on fixes** — After applying a suggested change, re-run the query and analyze the new job ID to validate the improvement and confirm the optimization had the intended effect.

**Narrow the date window for large organizations** — If your organization runs many jobs, provide explicit start and end dates in your prompt. Wider windows are slower and may surface more results than needed.
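
The default 7-day window with explicit bounds is just date arithmetic; a hypothetical helper for illustration (the tool itself accepts dates in your prompt, not code):

```python
from datetime import datetime, timedelta, timezone

def search_window(days=7, end=None):
    """Build (start, end) ISO timestamps; defaults to the last 7 days."""
    end = end or datetime.now(timezone.utc)
    start = end - timedelta(days=days)
    return start.isoformat(), end.isoformat()

# Narrow the scan to a specific two-day window:
start_iso, end_iso = search_window(days=2,
                                   end=datetime(2024, 5, 8, tzinfo=timezone.utc))
```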
