Get Started Playbook
What “Context Engineering” Means in Wren AI
In Wren AI, you shape the AI’s behavior mainly through:
- Semantics (data modeling & MDL) – how tables, columns, joins, and metrics are described and categorized so the AI understands what the data means and how to join/aggregate it correctly.
- Instructions – reusable rules that tell the AI how to think about SQL generation, charts, and summaries (e.g., which statuses to exclude, how to format money, how to define “late delivery”).
- Question-SQL Pairs – “gold-standard” examples that pin specific natural-language questions to exact SQL, for complex metrics and error-prone logic.
Think of it like this:
- Semantics = dictionary + schema
- Instructions = house rules
- Question-SQL Pairs = worked examples
Your job as a context engineer is to push as much knowledge as possible into these three layers, so ad-hoc questions become safe, consistent, and predictable.
First-Time User Checklist
Use this as a mini playbook for your first Wren AI project:
Day 1–2: Connect & Model
- Connect your main warehouse (Snowflake / BigQuery / etc.).
- Select 5–10 key tables (orders, customers, subscriptions, events, marketing, etc.).
- Use the Modeling AI Assistant to generate initial semantics and relationships.
- Manually refine descriptions for:
  - Targets vs. actuals,
  - Key metrics and entity IDs,
  - All main date fields.
Learn more in the best practice “Build a Semantic Layer That Speaks Your Business Language.”
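To make the manual-refinement step concrete, here is a minimal sketch of the distinctions your column descriptions should capture. The dict layout and every table/column name (`orders`, `revenue_usd`, `ordered_at`, and so on) are illustrative assumptions, not Wren AI's actual MDL format:

```python
# Illustrative only: Wren AI stores semantics in its own MDL format.
# This dict just shows the distinctions worth spelling out by hand:
# targets vs. actuals, entity IDs for joins, and every main date field.
orders_semantics = {
    "table": "orders",
    "description": "One row per customer order (actuals, not targets).",
    "columns": {
        "order_id":     {"role": "entity_id", "description": "Primary key; joins to order_items.order_id."},
        "customer_id":  {"role": "entity_id", "description": "Joins to customers.customer_id."},
        "ordered_at":   {"role": "date",      "description": "Default date field for time-series questions."},
        "delivered_at": {"role": "date",      "description": "NULL until delivery; used for late-delivery logic."},
        "revenue_usd":  {"role": "metric",    "description": "Actual revenue in USD (compare with targets.revenue_target_usd)."},
    },
}

# Quick self-check: every main date field is explicitly described.
date_fields = [name for name, meta in orders_semantics["columns"].items()
               if meta["role"] == "date"]
print(date_fields)  # → ['ordered_at', 'delivered_at']
```

The point of the roles is that an ambiguous date field (ordered vs. delivered) is the single most common cause of silently wrong time-series answers, so every date column gets its own description.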
Day 3: Configure Instructions
- Add Global Instructions for:
  - Status exclusions,
  - Default date field and time ranges,
  - Numeric and percentage formatting,
  - Basic chart preferences,
  - Summary phrasing.
- Add 2–3 Question-Matching Instructions (e.g., “late delivery”, “YoY comparison”).
Learn more in the best practice “Add Instructions as Your ‘Universal Rules.’”
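The two instruction types can be sketched as data. In Wren AI you enter these as plain-language rules in the UI, not as Python; the list below is a hypothetical illustration of the level of specificity that works, and the field names it mentions (`ordered_at`, `delivered_at`, `promised_at`) are assumptions:

```python
# Illustrative only: Wren AI manages Instructions through its UI, not a Python list.
# Global Instructions apply to every question.
GLOBAL_INSTRUCTIONS = [
    "Exclude orders with status in ('cancelled', 'test') from all metrics.",
    "Unless the user names a date field, use orders.ordered_at.",
    "Default time range: the last 12 complete months.",
    "Format money as USD with two decimals; rates as percentages with one decimal.",
    "Prefer line charts for time series, bar charts for categorical breakdowns.",
    "Summaries: one sentence on the headline number, one on the trend.",
]

# Question-Matching Instructions fire only when a trigger phrase appears.
QUESTION_MATCHING = {
    "late delivery": "Late = delivered_at > promised_at; NULL delivered_at counts as late after 14 days.",
    "yoy comparison": "Compare the same calendar months year over year; show absolute and % change.",
}

question = "What is our late delivery rate by region?"
matched = [rule for phrase, rule in QUESTION_MATCHING.items()
           if phrase in question.lower()]
print(len(matched))  # → 1 (only the "late delivery" rule applies)
```

Notice that the question-matching rules encode a business definition, not SQL: the AI still writes the query, but the ambiguous term is pinned down.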
Day 4–5: Seed Question-SQL Pairs
- Identify 5–10 critical metrics:
  - AOV, LTV, MRR, churn, retention, utilization, yield rate, etc.
- Import their “gold” SQL from your existing reports.
- Map each metric to a straightforward natural-language question.
Learn more in the best practice “Capture Gold-Standard Queries as Question-SQL Pairs.”
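A single pair, sketched below for AOV, shows the shape of the import. The schema (`orders`, `revenue_usd`, `status`, `ordered_at`) and the date-arithmetic dialect are assumptions for illustration; in practice you lift the SQL verbatim from an existing, trusted report rather than rewriting it:

```python
# Hypothetical gold-standard Question-SQL Pair for AOV.
# Table/column names and SQL dialect are assumptions for illustration.
aov_pair = {
    "question": "What is our average order value (AOV) over the last 12 months?",
    "sql": (
        "SELECT SUM(revenue_usd) / COUNT(DISTINCT order_id) AS aov "
        "FROM orders "
        "WHERE status NOT IN ('cancelled', 'test') "
        "AND ordered_at >= CURRENT_DATE - INTERVAL '12 months'"
    ),
}

# The question side should read the way a stakeholder would actually phrase it.
print("AOV" in aov_pair["question"])  # → True
```

The pair works best when the question is phrased naturally (including the acronym stakeholders actually use), because matching happens against real user wording.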
Day 6: Iterate with Real Questions (Human-in-the-Loop)
Now let real users drive context improvement.
- Ask actual stakeholder questions (collected from Slack or entered directly in the Wren AI UI).
- For each Wren AI answer:
  - Correct & useful:
    - Save as a Question-SQL Pair (if not already).
    - Tag/note it as a “golden question” for this customer.
  - Correct but not ideal UX (too many columns, odd chart, awkward summary):
    - Add or refine Instructions (chart defaults, summary style, grouping rules).
  - Wrong or incomplete:
    - Check where it broke: semantics vs. instructions vs. missing example.
    - Fix the root cause:
      - Semantics for table/column meaning or relationships,
      - Instructions for rules/filters/definitions,
      - New Question-SQL Pair if it’s a complex metric.
- Encourage users to give lightweight feedback:
  - “This is correct” → save it.
  - “This should exclude X / group by Y” → turn that into an Instruction.
- Treat Day 6 as the first feedback-loop day: the content in Knowledge should change based on what users actually ask.
Learn more in the best practice “Capture Gold-Standard Queries as Question-SQL Pairs.”
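The Day 6 triage above can be sketched as a routing function. The category names and return strings are hypothetical; the point is that each feedback outcome maps to exactly one of the three context layers:

```python
# A sketch of the Day 6 feedback-loop triage. Category names are assumptions;
# each outcome routes to one of the three context layers described above.
def triage(answer_quality: str) -> str:
    """Route feedback on a Wren AI answer to the right context layer."""
    routes = {
        "correct_and_useful": "Save as a Question-SQL Pair and tag it as a golden question.",
        "correct_but_poor_ux": "Add or refine an Instruction (chart defaults, summary style, grouping).",
        "wrong_or_incomplete": "Find the root cause: fix Semantics, add an Instruction, or add a new Question-SQL Pair.",
    }
    return routes.get(answer_quality, "Ask the user to clarify what went wrong.")

print(triage("correct_but_poor_ux"))
# → Add or refine an Instruction (chart defaults, summary style, grouping).
```

Running every answer through the same three-way decision keeps fixes at the root cause instead of accumulating one-off patches.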