BigQuery Query Analysis Tool
The BigQuery Query Analysis Tool allows DinoAI to deep-dive into a specific BigQuery job by its ID, giving you detailed performance and execution statistics to diagnose slow queries and slot contention without leaving the Paradime IDE.
This tool bridges the gap between noticing a performance issue and understanding its root cause, enabling DinoAI to analyze execution metrics, surface bottlenecks, and suggest concrete fixes, from query rewrites to reservation configuration changes.
This tool is built for query-level debugging. If you need to explore dataset metadata or run exploratory SQL, use the Warehouse Tool or SQL Query Tool instead.
Capabilities
The BigQuery Query Analysis Tool fetches detailed execution data for a given BigQuery job ID from INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION and returns it as a structured report DinoAI can reason over. Specifically, it:
Breaks down query duration and phase timings
Reports slot consumption and resource usage
Surfaces cache efficiency and partition usage
Includes bytes processed and bytes billed
Highlights queue time vs. execution time
Defaults to a 7-day search window, narrowable with explicit start and end dates
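The lookup and the derived metrics above can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: the column list comes from BigQuery's documented `JOBS` view schema, and the sample row is hypothetical.

```python
from datetime import datetime, timedelta

# Sketch of the kind of lookup the tool performs against the JOBS view.
# The exact columns selected by the tool are an assumption.
LOOKUP_SQL = """
SELECT creation_time, start_time, end_time,
       total_slot_ms, total_bytes_processed, total_bytes_billed, cache_hit
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION
WHERE job_id = @job_id
  AND creation_time BETWEEN @window_start AND @window_end  -- default: last 7 days
"""

def summarize_job(row: dict) -> dict:
    """Derive the headline metrics the report surfaces from one JOBS row."""
    queue_s = (row["start_time"] - row["creation_time"]).total_seconds()
    exec_s = (row["end_time"] - row["start_time"]).total_seconds()
    return {
        "queue_seconds": queue_s,
        "execution_seconds": exec_s,
        # Average slots held across the execution phase.
        "avg_slots": row["total_slot_ms"] / (exec_s * 1000) if exec_s else 0.0,
        "gb_processed": row["total_bytes_processed"] / 1e9,
        "gb_billed": row["total_bytes_billed"] / 1e9,
        "cache_hit": row["cache_hit"],
    }

# Hypothetical job row, for illustration only.
t0 = datetime(2024, 1, 1, 12, 0, 0)
sample = {
    "creation_time": t0,
    "start_time": t0 + timedelta(seconds=4),
    "end_time": t0 + timedelta(seconds=64),
    "total_slot_ms": 1_200_000,
    "total_bytes_processed": 5_000_000_000,
    "total_bytes_billed": 5_000_000_000,
    "cache_hit": False,
}
print(summarize_job(sample))
```

Here the job waited 4 seconds in the queue, executed for 60 seconds, and averaged 20 slots, which is the kind of breakdown the report presents.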
Using the BigQuery Query Analysis Tool
Open DinoAI in the right panel of the Code IDE
Provide the BigQuery job ID you want to analyze (e.g. bquxjob_28b5b82e_19c066dea29)
Add your prompt describing what you want DinoAI to focus on: bottlenecks, slot usage, cache efficiency, partition pruning, and so on
Grant permission when DinoAI asks to access the query data
Review DinoAI's analysis and apply the suggested optimizations
Example Use Cases
Diagnosing a Slow Query
Prompt
Result: DinoAI fetches the performance statistics for the job, identifies the most expensive phase, and explains what is causing the slowdown in plain language, along with concrete suggestions such as query rewrites, clustering key changes, or partition pruning improvements.
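"Identifying the most expensive phase" boils down to ranking per-stage timings. A minimal sketch, where the stage names and timings are hypothetical:

```python
# Hypothetical per-stage wait/compute times (ms) for one job, for illustration.
stages = {
    "Input":  {"wait_ms": 120, "compute_ms": 900},
    "Join+":  {"wait_ms": 40,  "compute_ms": 18_500},
    "Sort":   {"wait_ms": 10,  "compute_ms": 2_300},
    "Output": {"wait_ms": 5,   "compute_ms": 150},
}

def most_expensive_stage(stages: dict) -> tuple[str, int]:
    """Return the stage with the highest total (wait + compute) time."""
    totals = {name: t["wait_ms"] + t["compute_ms"] for name, t in stages.items()}
    name = max(totals, key=totals.get)
    return name, totals[name]

name, total_ms = most_expensive_stage(stages)
print(f"Bottleneck: {name} ({total_ms} ms)")
```

A dominant join stage like this one would point toward fixes such as better join ordering, clustering on the join key, or pre-aggregating one side of the join.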
Investigating Slot Contention
Prompt
Result: DinoAI analyzes the queue and execution time breakdown, assesses whether the query is being held back by slot contention or under-provisioning, and recommends specific configuration changes such as reservation adjustments or a switch between on-demand and capacity pricing.
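The queue-versus-execution judgment can be approximated with a simple heuristic. The threshold below is an illustrative assumption, not a figure from the tool:

```python
def classify_contention(queue_seconds: float, execution_seconds: float,
                        queue_fraction_threshold: float = 0.3) -> str:
    """Rough heuristic: if a large share of wall-clock time was spent
    waiting for slots rather than executing, suspect slot contention."""
    total = queue_seconds + execution_seconds
    if total == 0:
        return "no data"
    queue_fraction = queue_seconds / total
    if queue_fraction >= queue_fraction_threshold:
        return f"likely slot contention ({queue_fraction:.0%} of time queued)"
    return f"execution-bound ({queue_fraction:.0%} of time queued)"

print(classify_contention(45.0, 15.0))  # mostly queued
print(classify_contention(2.0, 58.0))   # mostly executing
```

A query that spends most of its wall-clock time queued is a candidate for reservation changes; one that is execution-bound needs query-level optimization instead.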
Checking Cache Efficiency
Prompt
Result: DinoAI checks the cache hit rate for the job and explains whether the query was able to reuse cached results. If not, it surfaces the likely reasons, such as non-deterministic functions or volatile filters, and suggests changes to improve cache efficiency on subsequent runs.
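A cache check of this kind can be sketched from the job row's `cache_hit` flag (a documented column of the BigQuery `JOBS` view). The pattern list below is illustrative, not exhaustive:

```python
def explain_cache(cache_hit: bool, query: str) -> str:
    """Report cache reuse and, if missed, flag common cache-busting patterns."""
    if cache_hit:
        return "Result served from cache: no bytes processed or billed."
    # Non-deterministic functions prevent BigQuery from reusing cached results.
    volatile = [fn for fn in ("CURRENT_TIMESTAMP", "RAND", "SESSION_USER")
                if fn in query.upper()]
    if volatile:
        return f"Cache miss: query uses non-deterministic function(s) {volatile}."
    return "Cache miss: results were computed fresh; check for changed tables or filters."

print(explain_cache(False, "SELECT * FROM sales WHERE ts > CURRENT_TIMESTAMP()"))
```

Replacing a volatile expression with a fixed parameter (for example, a date computed once in your orchestration layer) is the typical fix that makes subsequent runs cacheable.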
Working with Other Tools
The BigQuery Query Analysis Tool works well alongside DinoAI's other capabilities to support your full debugging and optimization workflow:
Combine with the SQL Execution Tool to re-run an optimized version of the query and validate the performance improvement against a new job ID
Combine with the Warehouse Tool to cross-reference query performance with table metadata, clustering keys, and partition information
Use alongside Git Lite to commit query or model changes made as a result of the performance analysis
Best Practices
Use the exact BigQuery job ID: Find the job ID in the BigQuery console under Job history or from your query logs. Copy it directly to ensure DinoAI analyzes the right execution.
Be specific about what to focus on: Tell DinoAI whether you're investigating slow execution, high bytes billed, long queue times, or poor cache efficiency to get the most targeted analysis rather than a generic summary.
Iterate on fixes: After applying a suggested change, re-run the query and analyze the new job ID to validate the improvement and confirm the optimization had the intended effect.
Narrow the date window for large organizations: If your organization runs many jobs, provide explicit start and end dates in your prompt. Wider windows are slower and may surface more results than needed.