Custom Rules for DinoAI

.dinorules is a configuration file that lets teams define custom instructions and development standards for DinoAI Copilot, tailoring how DinoAI responds to your prompts and questions. These rules are specific to your project and apply to all DinoAI interactions within your environment.

This file is gitignored to keep your repository clean, and it persists across sessions, so you can continue using it after logging out and back in.

Key Benefits

  • Define Project-Specific Rules: Customize DinoAI's behavior to match your team's unique needs and workflows.

  • Set Technical Standards: Specify coding patterns, architecture guidelines, and best practices for your project.

  • Control AI Assistance: Ensure DinoAI provides responses that align with your preferred methods and practices.

  • Adapt Over Time: Easily modify the rules as your project evolves and requirements change.

  • Enhance Team Consistency: Establish consistent development practices across your entire analytics engineering team.

Setting Up .dinorules

  1. From the Code IDE, click the DinoAI icon (🪄) in the left-side panel.

  2. Click the settings icon (⚙️) at the top right of the DinoAI panel. This automatically creates a new file named .dinorules.

Make sure the .dinorules file is placed in the root directory of your repository:

```
your-repository
├── dbt_project/
│   ├── staging/
│   └── marts/
├── macros/
├── seeds/
├── .dinorules              # .dinorules file location
└── README.md
```

  3. Add your custom instructions to the file. These can be general or highly specific; there's no set syntax required.


Example .dinorules file configuration:

```
General Project Context
- Domain: E-commerce analytics
- Data Stack: dbt project using Snowflake
- Primary Goal: Provide accurate, reusable, and scalable answers to analytics engineering-related questions.

Modeling Guidelines
- Naming: Use `snake_case` for all model and column names.
- Structure: Assume models follow a `staging → marts` hierarchy.

Materialization Standards
- Staging Models: Typically materialized as views.
- Mart Models: Typically materialized as incremental.
- Performance Exceptions: Assume exceptions may exist for performance reasons.

SQL and dbt Best Practices
1. Trailing Commas: Always use trailing commas in `SELECT` statements for clean diffs.
2. SQL Keywords: Write all SQL keywords (`SELECT`, `FROM`, etc.) in uppercase.
3. snake_case: Consistently use `snake_case` for names.
4. CTEs: Use Common Table Expressions (CTEs) instead of subqueries for clarity.
5. Comments: Add comments to improve readability, especially for complex logic.

Testing Standards
- Schema Tests: Recommend `unique` and `not_null` tests for all primary keys and critical columns.
- Data Tests: Suggest data tests for key metrics and transformations to ensure quality.

Answering Style
1. Specificity: Tailor answers to dbt-specific workflows.
2. No Assumptions: Focus on answering questions without assuming the capability to execute code.
3. Best Practices: Default to dbt and analytics engineering best practices unless otherwise requested.

Documentation Standards
- Descriptive Naming: Recommend clear and descriptive names for models and columns.
- Comments: Suggest adding column comments to enhance data lineage clarity.
- schema.yml: Emphasize the importance of keeping `schema.yml` files updated.
```
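
With rules like these in place, DinoAI would steer generated models toward the stated conventions. As a sketch of what that output style looks like (the model and column names here are hypothetical, not part of the example configuration), a staging model following the materialization and SQL guidelines above might be:

```sql
-- models/staging/stg_orders.sql (hypothetical model)
-- Staging models are typically materialized as views per the standards above.
{{ config(materialized='view') }}

WITH source AS (

    -- CTEs instead of subqueries; uppercase keywords; trailing commas for clean diffs
    SELECT
        id AS order_id,
        customer_id,
        order_total,
        created_at,
    FROM {{ source('shop', 'orders') }}

)

SELECT * FROM source
```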

Important Notes

  • The file is environment-specific and not committed to version control

  • Rules apply only to the specific project containing the file

  • Changes apply only to new interactions; they do not retroactively affect earlier responses

  • More specific rules lead to better AI assistance

  • Regular updates help maintain alignment with evolving project needs
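
As a companion sketch, the testing and documentation standards in the example configuration above would nudge DinoAI toward `schema.yml` entries like the following (the model, path, and descriptions are hypothetical):

```yaml
# models/staging/schema.yml (hypothetical path)
version: 2

models:
  - name: stg_orders
    description: "One row per order placed in the shop."
    columns:
      - name: order_id
        description: "Primary key for orders."
        tests:
          - unique
          - not_null
```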
