Tests Dashboard
The Tests Dashboard, part of Paradime's Radar suite, offers comprehensive insights into your dbt™ test outcomes. This tool enables teams to monitor, analyze, and improve their data testing practices, ensuring the integrity and reliability of their data pipelines.
Prerequisites
Completed dbt™ Monitoring setup in Radar's Get Started guide.
The Tests Dashboard is divided into two main sections:
Overview: Provides a high-level summary of all test results.
Detailed: Offers in-depth analytics for individual models and their tests.
The Overview section gives you a broad perspective on your dbt™ test results, allowing you to uncover key insights, including:
Value: Monitor the current status and trends of your dbt™ tests.
How to use:
Track the number of tests in each status category (pass, warn, error, fail); the sketch after this list shows how a test's severity configuration determines whether failures surface as warn or fail.
Monitor daily test results to identify trends or recurring issues.
Investigate days with higher rates of non-passing tests to address potential data quality problems.
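The warn and fail statuses come from dbt™ itself: a test's severity config controls how failing rows are reported. As a minimal sketch, assuming a hypothetical orders model in a schema.yml file:

```yaml
# models/schema.yml -- hypothetical model and column names
version: 2

models:
  - name: orders
    columns:
      - name: order_id
        tests:
          - unique            # failing rows mark the test as fail (default severity: error)
          - not_null:
              config:
                severity: warn   # failing rows mark the test as warn instead
```

Tests configured with severity: warn should land in the dashboard's warn count rather than failed, which suits checks worth monitoring without blocking a run.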
Value: Identify specific models and tests that are failing or raising warnings.
How to use:
Review details of non-passing tests, including associated models and impacted row counts (see the sketch after this list for a way to persist and inspect the failing rows).
Prioritize addressing tests with the highest impact or those affecting critical models.
Use this information to guide your data quality improvement efforts.
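The dashboard reports how many rows each non-passing test flagged; to inspect those rows themselves, dbt™ can persist them with the store_failures config. A minimal sketch, with hypothetical names:

```yaml
# models/schema.yml -- hypothetical model and column names
version: 2

models:
  - name: customers
    columns:
      - name: email
        tests:
          - not_null:
              config:
                store_failures: true   # persist failing rows to a table in your warehouse
```

With this config, each test run writes its failing rows to the warehouse (by default in a schema suffixed dbt_test__audit), so you can query the offending records directly when triaging a failure.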
Value: Identify dbt™ models with the highest number of problematic tests.
How to use:
Focus on models with the most warn or errored tests for immediate attention.
Investigate common causes of test failures across problematic models.
Prioritize these models for code review, refactoring, or additional testing.
The Detailed section allows you to dive deep into test results for individual models, providing comprehensive insights such as:
Value: Analyze test outcomes for a specific dbt™ model.
How to use:
Select a specific model to view its test results composition.
Monitor the pass rate to gauge the overall health of the model's tests.
Use this information to prioritize models for quality improvement efforts.
Value: Understand how test results for a model have evolved over time.
How to use:
Observe trends in test pass rates and execution status over time.
Identify any recurring patterns or improvements in test outcomes.
Correlate changes in test results with code updates or data source changes.
Value: Gain insights into test results at the column level.
How to use:
Identify columns with the highest number of failing or warning tests.
Prioritize data quality efforts on problematic columns.
Consider reviewing or adjusting tests for columns with consistent failures; a sketch of column-level test definitions follows this list.
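Column-level results map back to the tests declared on each column in your dbt™ project. A minimal sketch of column-level tests, using hypothetical model and column names:

```yaml
# models/schema.yml -- hypothetical model and column names
version: 2

models:
  - name: payments
    columns:
      - name: payment_id
        tests:
          - unique
          - not_null
      - name: status
        tests:
          - accepted_values:
              values: ['pending', 'paid', 'refunded']
      - name: customer_id
        tests:
          - relationships:         # every customer_id must exist in customers.id
              to: ref('customers')
              field: id
```

A column that keeps failing an accepted_values test, for example, often signals a new upstream value the test list has not caught up with; updating the values list or fixing the source are both reasonable responses.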
Value: Understand the scope of impact from failing or warning tests.
How to use:
Monitor the number of rows impacted by non-passing tests over time.
Assess the potential downstream effects of test failures on data consumers.
Prioritize addressing tests with the highest number of impacted rows; the sketch after this list shows how to tie test severity to row-count thresholds.
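When impact is best measured in rows, dbt™ can set thresholds so a test only warns or errors once the failing-row count crosses a limit, keeping test status proportional to the scale of impact. A minimal sketch, with hypothetical names and thresholds:

```yaml
# models/schema.yml -- hypothetical model, column, and thresholds
version: 2

models:
  - name: orders
    columns:
      - name: status
        tests:
          - not_null:
              config:
                warn_if: ">10"     # warn when more than 10 rows fail
                error_if: ">100"   # error when more than 100 rows fail
```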
In the Overview section, use the "Select date range" dropdown to choose your desired time frame.
In the Detailed section, use both the "Select date range" and "Choose a model" dropdowns to focus on specific time periods and models.
The dashboard will automatically update to reflect your selections, allowing for focused analysis of test results.
By using the Tests Dashboard and following these guidelines, you can improve the reliability and quality of your data through effective testing practices and targeted fixes.