Merged

16 commits
- `e5af62e` feat: plan agent refinement, feature discovery, and telemetry instrum… (anandgupta42, Mar 29, 2026)
- `57a2e78` fix: address CodeRabbit review comments (anandgupta42, Mar 29, 2026)
- `7384fe2` feat: e2e tests, performance benchmarks, and UX gap fixes (anandgupta42, Mar 29, 2026)
- `1fc5c05` test: plan layer safety e2e tests (68 tests) (anandgupta42, Mar 29, 2026)
- `a5b4e44` merge: resolve conflicts with main (skill followups + sql findings) (anandgupta42, Mar 29, 2026)
- `bc4287b` fix: add mongodb to devDependencies for typecheck resolution (anandgupta42, Mar 29, 2026)
- `77dae71` Merge branch 'main' into feat/plan-agent-and-feature-discovery (anandgupta42, Mar 29, 2026)
- `b24d84e` Merge branch 'main' into feat/plan-agent-and-feature-discovery (anandgupta42, Mar 29, 2026)
- `17f4a19` fix: track suggestion failures in warehouse-add telemetry (anandgupta42, Mar 29, 2026)
- `3b78d42` test: 125 simulated user scenarios for plan + suggestions (anandgupta42, Mar 29, 2026)
- `adb6c7e` test: 40 real tool execution simulations with mocked Dispatcher (anandgupta42, Mar 29, 2026)
- `3f5fcb3` docs: document plan refinement, feature discovery, and new telemetry … (anandgupta42, Mar 29, 2026)
- `24234ff` fix: replace mock.module() with spyOn to prevent cross-file test poll… (anandgupta42, Mar 29, 2026)
- `734d173` fix: reset dedup state in tests + replace passwords with fake values (anandgupta42, Mar 29, 2026)
- `63b1dbd` Merge branch 'main' into feat/plan-agent-and-feature-discovery (anandgupta42, Mar 29, 2026)
- `11bb99d` fix: replace all credential-like test values for GitGuardian (anandgupta42, Mar 29, 2026)
45 changes: 36 additions & 9 deletions bun.lock


13 changes: 13 additions & 0 deletions docs/docs/configure/warehouses.md
@@ -365,3 +365,16 @@ Testing connection to prod-snowflake (snowflake)...
Warehouse: COMPUTE_WH
Database: ANALYTICS
```

## Post-Connection Suggestions

After you successfully connect a warehouse, altimate suggests next steps to help you get the most out of your connection. Suggestions are shown progressively based on what you've already done:

1. **Index your schemas** — populate the schema cache for autocomplete and context-aware analysis
2. **Run SQL analysis** — scan your query history for anti-patterns and optimization opportunities
3. **Inspect schema structure** — review tables, columns, and relationships
4. **Check lineage** — trace column-level data flow across your models

If altimate detects a dbt project in your workspace, it also recommends relevant dbt skills (`/dbt-develop`, `/dbt-troubleshoot`, `/dbt-analyze`).

Each suggestion is shown **once per session** — dismissing or acting on a suggestion removes it from the queue. You can also run a suggested action later via its corresponding tool or slash command.
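
The once-per-session behavior above can be sketched as a small dedup queue. This is an illustrative model only; the class and method names are hypothetical and not taken from the actual codebase:

```typescript
// Hypothetical sketch of once-per-session suggestion dedup;
// names are illustrative, not the actual implementation.
class SuggestionQueue {
  private shown = new Set<string>();
  private pending: string[];

  constructor(suggestions: string[]) {
    this.pending = [...suggestions];
  }

  // Returns the next suggestion not yet shown this session, or null.
  next(): string | null {
    while (this.pending.length > 0) {
      const s = this.pending.shift()!;
      if (!this.shown.has(s)) {
        this.shown.add(s); // each suggestion surfaces at most once
        return s;
      }
    }
    return null;
  }

  // Dismissing or acting on a suggestion removes it from the queue.
  dismiss(s: string): void {
    this.shown.add(s);
    this.pending = this.pending.filter((p) => p !== s);
  }
}
```

Under this model, re-adding the same warehouse in the same session would surface no duplicate suggestions, since the `shown` set persists for the session.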
45 changes: 45 additions & 0 deletions docs/docs/data-engineering/agent-modes.md
@@ -142,6 +142,51 @@ altimate --agent plan

Plan mode restricts the agent to reading files and editing plan files. It runs no SQL, no bash commands, and makes no other file modifications. Use it to outline an approach before switching to builder mode to execute it.

### Two-step workflow

Plan mode uses a two-step approach to keep you in control:

1. **Outline** — The agent produces a short outline (3-5 bullet points) summarizing what it intends to do.
2. **Expand** — After you confirm, the agent expands the outline into a full, detailed plan.

This prevents wasted effort on plans that are heading in the wrong direction.

### Refinement loop

You don't have to start over if the plan isn't quite right. After the agent presents its outline or full plan, you can:

- **Approve** — Accept the plan as-is (e.g., "looks good", "approved", "let's go").
- **Refine** — Edit specific parts and ask the agent to revise (e.g., "change step 3 to use incremental instead of table").
- **Reject** — Discard the plan entirely (e.g., "scrap this", "start over", "reject").

Refinements are capped at **5 revisions per session** to avoid endless loops. After the cap is reached, the agent asks you to approve or reject the current version.
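
The cap behavior can be sketched as a simple guard. The function name and signature here are illustrative, not the actual implementation:

```typescript
// Illustrative sketch of the 5-revision cap; not the actual implementation.
const MAX_REVISIONS = 5;

type PlanAction = "refine" | "approve" | "reject" | "cap_reached";

function nextAction(requested: PlanAction, revisionCount: number): PlanAction {
  // Once the cap is reached, further refinements are refused and the
  // agent asks the user to approve or reject the current version.
  if (requested === "refine" && revisionCount >= MAX_REVISIONS) {
    return "cap_reached";
  }
  return requested;
}
```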

### Example conversation

```text
You: Plan a migration of our raw_events table from a view to an incremental model

Plan: Here's my proposed approach:
• Inspect the current raw_events view and its downstream dependencies
• Design an incremental model with a timestamp-based strategy
• Add schema tests for the new model
• Create a cutover checklist for swapping the view to the model
• Update the dbt DAG documentation

You: Change the second bullet — use insert_overwrite instead of timestamp-based

Plan: Updated outline:
• Inspect the current raw_events view and its downstream dependencies
• Design an incremental model with an insert_overwrite strategy
• Add schema tests for the new model
• Create a cutover checklist for swapping the view to the model
• Update the dbt DAG documentation

You: Approved

Plan: [expands into a full detailed plan]
```

---

## SQL Write Access Control
2 changes: 1 addition & 1 deletion docs/docs/getting-started.md
@@ -60,7 +60,7 @@ Set up warehouse connections so altimate can query your data platform. You have
4. **Offers to configure connections** and walks you through adding and testing each discovered warehouse
5. **Indexes schemas** to populate the schema cache for autocomplete and context-aware analysis

Once complete, altimate indexes your schemas and detects your tooling, enabling schema-aware autocomplete and context-rich analysis.
Once complete, altimate indexes your schemas and detects your tooling, enabling schema-aware autocomplete and context-rich analysis. After connecting, you'll see feature suggestions tailored to your warehouse type — such as indexing schemas, running SQL analysis, or checking lineage. These appear progressively and each is shown once per session. See [Post-Connection Suggestions](configure/warehouses.md#post-connection-suggestions) for details.

### Option B: Manual configuration

4 changes: 3 additions & 1 deletion docs/docs/reference/telemetry.md
@@ -33,7 +33,9 @@ We collect the following categories of events:
| `error_recovered` | Successful recovery from a transient error (error type, strategy, attempt count) |
| `mcp_server_census` | MCP server capabilities after connect (tool and resource counts, but no tool names) |
| `context_overflow_recovered` | Context overflow is handled (strategy) |
| `skill_used` | A skill is loaded (skill name and source — `builtin`, `global`, or `project` — no skill content) |
| `skill_used` | A skill is loaded (skill name; source: `builtin`, `global`, or `project`; trigger: `user`, `auto`, or `suggestion`; no skill content) |
| `plan_revision` | A plan revision occurs in Plan mode (revision_number, action: `refine`, `approve`, `reject`, or `cap_reached`) |
| `feature_suggestion` | A post-connection feature suggestion is shown (suggestion_type, suggestions_shown, warehouse_type — no user input) |
| `sql_execute_failure` | A SQL execution fails (warehouse type, query type, error message, PII-masked SQL — no raw values) |
| `core_failure` | An internal tool error occurs (tool name, category, error class, truncated error message, PII-safe input signature, and optionally masked arguments — no raw values or credentials) |
| `first_launch` | Fired once on first CLI run after installation. Contains version and is_upgrade flag. No PII. |
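
As an illustration of the shape of the new events, a `plan_revision` payload might look like the following. Only `revision_number` and `action` come from the table above; the `event` envelope field is an assumption for the sketch:

```typescript
// Hypothetical `plan_revision` event payload; only revision_number and
// action are documented fields, the envelope shape is assumed.
const planRevisionEvent = {
  event: "plan_revision",
  revision_number: 3,
  action: "refine", // one of: refine | approve | reject | cap_reached
};
```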
3 changes: 3 additions & 0 deletions packages/drivers/package.json
@@ -8,6 +8,9 @@
"./*": "./src/*.ts"
},
"files": ["src"],
"devDependencies": {
"mongodb": "^6.0.0"
},
"optionalDependencies": {
"pg": "^8.0.0",
"snowflake-sdk": "^2.0.3",