Commit d19a3ff

committed
docs(ai-chat): add user-initiated compaction pattern
Show how to wire a "Summarize conversation" button or slash command via actionSchema + onAction. The backend summarizes and replaces history with chat.history.set(); run() short-circuits when trigger === "action" so no LLM response is generated. Includes a progress-feedback variant using chat.stream.append(). Resolves TRI-8268.
1 parent 530126e

1 file changed: docs/ai-chat/compaction.mdx (106 additions, 0 deletions)
@@ -190,6 +190,112 @@ export const myChat = chat.agent({
});
```

## User-initiated compaction

Sometimes you want the user to decide when to compact — a "Summarize conversation" button, a `/compact` slash command, or a settings toggle. Wire this up with [actions](/ai-chat/backend#actions): the frontend sends a typed action, `onAction` runs the summary, and `chat.history.set()` replaces the conversation.

### Backend

Define a `compact` action that reuses your existing `summarize` function:

```ts
import { chat } from "@trigger.dev/sdk/ai";
import {
  streamText,
  generateText,
  generateId,
  convertToModelMessages,
  type ModelMessage,
} from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

// Reusable summarize fn — also used by the automatic compaction config.
async function summarize(messages: ModelMessage[]) {
  const result = await generateText({
    model: openai("gpt-4o-mini"),
    messages: [...messages, { role: "user", content: "Summarize this conversation concisely." }],
  });
  return result.text;
}

export const myChat = chat.agent({
  id: "my-chat",

  // Automatic compaction still runs on threshold.
  compaction: {
    shouldCompact: ({ totalTokens }) => (totalTokens ?? 0) > 80_000,
    summarize: async ({ messages }) => summarize(messages),
  },

  // User-initiated: the frontend sends { type: "compact" }.
  actionSchema: z.discriminatedUnion("type", [
    z.object({ type: z.literal("compact") }),
  ]),

  onAction: async ({ action, uiMessages }) => {
    if (action.type !== "compact") return;

    const summary = await summarize(convertToModelMessages(uiMessages));

    // Replace the full history with a single summary message.
    chat.history.set([
      {
        id: generateId(),
        role: "assistant",
        parts: [{ type: "text", text: `[Conversation summary]\n\n${summary}` }],
      },
    ]);
  },

  run: async ({ messages, trigger, signal }) => {
    // Compact action doesn't need an LLM response — just exit.
    if (trigger === "action") return;

    return streamText({ model: openai("gpt-4o"), messages, abortSignal: signal });
  },
});
```

Actions fire `onAction`, apply any `chat.history.*` mutations, then call `run()`. For compaction there's no new user message to respond to, so `run()` returns early when `trigger === "action"`. `onTurnComplete` still fires with the compacted `uiMessages` — use it to persist the new state.
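
If your storage layer appends turns by default, `onTurnComplete` needs a way to tell a compaction apart from a normal turn so it can overwrite instead. A minimal sketch, assuming the summary message keeps the `[Conversation summary]` prefix from the example above; `isCompactionSummary` is an illustrative helper, not an SDK export:

```typescript
// Illustrative guard (not part of the SDK): detect whether the completed
// turn's history is a single compaction summary, so persistence can
// overwrite the stored conversation rather than append to it.
type UIPart = { type: string; text?: string };
type StoredMessage = { id: string; role: string; parts: UIPart[] };

function isCompactionSummary(messages: StoredMessage[]): boolean {
  // Assumes compaction replaces history with exactly one assistant message
  // whose text starts with the "[Conversation summary]" marker.
  if (messages.length !== 1) return false;
  const [only] = messages;
  return (
    only.role === "assistant" &&
    only.parts.some(
      (p) => p.type === "text" && (p.text ?? "").startsWith("[Conversation summary]")
    )
  );
}
```

Inside `onTurnComplete`, you might branch on this to choose between an upsert and an append in your own database.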

### Frontend

Call `transport.sendAction()` from a button or slash command:

```tsx
import { useTriggerChatTransport } from "@trigger.dev/react-hooks";
import { useChat } from "@ai-sdk/react";

function ChatView({ chatId, accessToken }: { chatId: string; accessToken: string }) {
  const transport = useTriggerChatTransport({ task: "my-chat", accessToken });
  const { messages } = useChat({ id: chatId, transport });

  return (
    <>
      <button onClick={() => transport.sendAction(chatId, { type: "compact" })}>
        Summarize conversation
      </button>
      {messages.map(/* ... */)}
    </>
  );
}
```

The call returns as soon as the backend accepts the action. Because `onAction` replaces the history with the summary, `useChat` receives the new state via the normal turn-complete flow — the UI updates automatically.
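
For the `/compact` slash command, one option is to route the raw input before deciding which transport call to make. `routeInput` below is a hypothetical helper; only the `{ type: "compact" }` shape comes from the backend's `actionSchema`, the rest is illustrative:

```typescript
// Hypothetical input router: "/compact" becomes the typed action,
// anything else stays a normal chat message.
type RoutedInput =
  | { kind: "action"; action: { type: "compact" } }
  | { kind: "message"; text: string };

function routeInput(raw: string): RoutedInput {
  const text = raw.trim();
  if (text === "/compact") {
    return { kind: "action", action: { type: "compact" } };
  }
  return { kind: "message", text };
}
```

In a submit handler you would then call `transport.sendAction(chatId, routed.action)` on the action branch and your normal send-message path otherwise.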

### Indicating compaction in the UI

For "Compacting..." feedback while the summary generates, append a transient data part from `onAction` via `chat.stream.append()`:

```ts
onAction: async ({ action, uiMessages }) => {
  if (action.type !== "compact") return;

  chat.stream.append({ type: "data-compaction", data: { status: "compacting" } });
  const summary = await summarize(convertToModelMessages(uiMessages));
  chat.stream.append({ type: "data-compaction", data: { status: "complete" } });

  chat.history.set([ /* ... */ ]);
},
```

See [Raw streaming with chat.stream](/ai-chat/features#raw-streaming-with-chatstream) for the full API.
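
On the client, the streamed parts can be reduced to a single status for the indicator. A minimal sketch, assuming parts arrive with the `data-compaction` type and `data.status` payload shown above; `latestCompactionStatus` is an illustrative helper, not an SDK export:

```typescript
// Illustrative reducer over streamed parts: the most recent
// "data-compaction" part wins, so the UI shows "compacting"
// until the "complete" part arrives.
type StreamPart = { type: string; data?: { status?: string } };

function latestCompactionStatus(parts: StreamPart[]): string | null {
  for (let i = parts.length - 1; i >= 0; i--) {
    const part = parts[i];
    if (part.type === "data-compaction") return part.data?.status ?? null;
  }
  return null;
}
```

A component could then render a spinner while the status is `"compacting"` and hide it once it flips to `"complete"`.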

## Using with chat.createSession()

Pass the same `compaction` config to `chat.createSession()`. The session handles outer-loop compaction automatically inside `turn.complete()`: