Get Started Playbook
What “Context Engineering” Means in Wren AI
In Wren AI, you shape the AI’s behavior mainly through:
- Semantics (data modeling & MDL) – how tables, columns, joins, and metrics are described and categorized so the AI understands what the data means and how to join/aggregate it correctly.
- Instructions – reusable rules that tell the AI how to think about SQL generation, charts, and summaries (e.g., which statuses to exclude, how to format money, how to define “late delivery”).
- Question-SQL Pairs – “gold-standard” examples that pin specific natural-language questions to exact SQL, for complex metrics and error-prone logic.
Think of it like this:
- Semantics = dictionary + schema
- Instructions = house rules
- Question-SQL Pairs = worked examples
Your job as a context engineer is to push as much knowledge as possible into these three layers, so that answers to ad-hoc questions are safe, consistent, and predictable.
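For instance, a Question-SQL Pair pins a plain-English question to the exact SQL you want back, while the semantics layer supplies table meaning and the Instructions layer supplies house rules. A minimal sketch, assuming a hypothetical orders table and Postgres/Snowflake-style date functions:

```sql
-- Question: “What was total revenue last month?”
-- Gold SQL (hypothetical schema; table and column names are assumptions).
SELECT SUM(total_amount) AS revenue
FROM orders
WHERE status NOT IN ('cancelled', 'refunded')  -- house rule from the Instructions layer
  AND order_date >= DATE_TRUNC('month', CURRENT_DATE - INTERVAL '1 month')
  AND order_date <  DATE_TRUNC('month', CURRENT_DATE);
```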
Project Scoping & Domain Boundaries (Avoid the “One Giant Graph” Trap)
Create separate Wren AI projects per business use case / domain, instead of building one huge, mixed graph for the entire company.
Why
- Different domains often have different definitions for the same term
  - e.g., “Revenue” in Finance vs. “Revenue” in Marketing vs. “GMV” in Ops
- A single giant graph with mixed semantics makes the AI far more likely to:
  - Pick the wrong tables/metrics
  - Mix conflicting definitions
  - Hallucinate or produce subtly wrong answers
- Smaller, focused projects give you:
  - Cleaner semantic layers
  - Clearer Instructions and Question-SQL Pairs per domain
  - Easier governance and testing
How to scope projects
Use separate projects for things like:
- Marketing & Growth analytics (campaigns, channels, attribution, CAC, ROAS)
- Product analytics (events, feature usage, funnels, retention)
- Sales & Revenue (pipeline, bookings, MRR, ARR)
- Operations / Supply chain / Manufacturing (lead times, yield, utilization, defects)
If a domain:
- Has its own owners/stakeholders, and
- Has its own metric definitions / dashboards,
...it almost always deserves its own Wren AI project.
Rule of thumb
If you start seeing:
- Long lists of tables no one understands,
- Multiple “Revenue” or “LTV” definitions colliding,
- Instructions that only apply to some teams,
...you’ve probably made the project too big.
Split by business use case, keep each project coherent and opinionated, and the AI will hallucinate less and behave more like a domain expert.
First-Time User Checklist
Use this as a mini playbook for your first Wren AI project:
Day 1–2: Connect & Model
- Connect your main warehouse (Snowflake / BigQuery / etc.).
- Select 5–10 key tables (orders, customers, subscriptions, events, marketing, etc.).
- Use the Modeling AI Assistant to generate initial semantics and relationships.
- Manually refine descriptions (see the sketch after this list) for:
  - Targets vs. actuals,
  - Key metrics and entity IDs,
  - All main date fields.
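If your descriptions also live in the warehouse rather than only in Wren AI’s modeling UI, column comments are one way to record them. A minimal sketch, assuming hypothetical table and column names and Postgres/Snowflake-style syntax (BigQuery uses column OPTIONS(description = ...) instead):

```sql
-- Hypothetical names; the point is the level of detail, not the schema.
COMMENT ON COLUMN orders.order_date IS
  'Date the order was placed (UTC). Default date field for time-based questions.';
COMMENT ON COLUMN orders.status IS
  'Order lifecycle state: pending, paid, shipped, cancelled, refunded.';
COMMENT ON COLUMN sales_targets.target_revenue IS
  'Planned revenue (a target, not actuals); compare against orders.total_amount.';
```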
Learn more: Best Practice “Build a Semantic Layer That Speaks Your Business Language”
Day 3: Configure Instructions
- Add Global Instructions for:
  - Status exclusions,
  - Default date field and time ranges,
  - Numeric and percentage formatting,
  - Basic chart preferences,
  - Summary phrasing.
- Add 2–3 Question-Matching Instructions (e.g., “late delivery”, “YoY comparison”); the “late delivery” case is sketched below.
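A Question-Matching Instruction pairs a trigger phrase with a rule, e.g. “When the user asks about late deliveries, count an order as late if it was delivered more than 3 days after the promised date.” Here is a sketch of the SQL shape that rule should produce; the deliveries table, column names, and 3-day threshold are all assumptions:

```sql
-- “Late delivery” per the instruction above (threshold and names are assumptions).
SELECT
  AVG(CASE WHEN delivered_at > promised_at + INTERVAL '3 days'
           THEN 1.0 ELSE 0.0 END) AS late_delivery_rate
FROM deliveries
WHERE delivered_at IS NOT NULL;
```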
Learn more: Best Practice “Add Instructions as Your ‘Universal Rules’”
Day 4–5: Seed Question-SQL Pairs
- Identify 5–10 critical metrics:
  - AOV, LTV, MRR, churn, retention, utilization, yield rate, etc.
- Import their “gold” SQL from your existing reports.
- Map each metric to a straightforward natural-language question, as in the sketch below.
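For example, a seeded pair for AOV might look like the following. The orders schema and status filter are assumptions; import the exact gold SQL from your own trusted report instead:

```sql
-- Question: “What is our average order value (AOV) this quarter?”
-- Gold SQL (hypothetical schema; keep filters consistent with your Instructions).
SELECT
  SUM(total_amount) / NULLIF(COUNT(DISTINCT order_id), 0) AS aov
FROM orders
WHERE status NOT IN ('cancelled', 'refunded')
  AND order_date >= DATE_TRUNC('quarter', CURRENT_DATE);
```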
Learn more: Best Practice “Capture Gold-Standard Queries as Question-SQL Pairs”
Day 6: Iterate with Real Questions (Human-in-the-Loop)
Now let real users drive context improvement.
- Ask real stakeholder questions (collected from Slack or asked directly in the Wren AI UI).
- For each Wren AI answer:
  - Correct & useful
    - Save it as a Question-SQL Pair (if not already saved).
    - Tag/note it as a “golden question” for this customer.
  - Correct but not ideal UX (too many columns, odd chart, awkward summary)
    - Add or refine Instructions (chart defaults, summary style, grouping rules).
  - Wrong or incomplete
    - Check where it broke: semantics vs. instructions vs. missing example.
    - Fix the root cause:
      - Semantics for table/column meaning or relationships,
      - Instructions for rules/filters/definitions,
      - A new Question-SQL Pair if it’s a complex metric.
- Encourage users to give lightweight feedback:
  - “This is correct” → save it.
  - “This should exclude X / group by Y” → turn that into an Instruction (see the sketch after this list).
- Treat Day 6 as the first feedback loop day: content in Knowledge should change based on what users actually ask.
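To make that loop concrete, here is a sketch of one piece of feedback becoming a durable rule. The instruction wording and the is_test flag column are assumptions for illustration:

```sql
-- Feedback: “Revenue numbers should exclude internal test accounts.”
-- Captured as a Global Instruction (plain text in Wren AI), e.g.:
--   “Always exclude test accounts (customers.is_test = TRUE) from revenue metrics.”
-- Subsequent revenue queries should then carry the filter automatically:
SELECT SUM(o.total_amount) AS revenue
FROM orders AS o
JOIN customers AS c ON c.customer_id = o.customer_id
WHERE NOT c.is_test;  -- hypothetical flag column
```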
Learn more: Best Practice “Capture Gold-Standard Queries as Question-SQL Pairs”