Chat API

The Chat API powers Stagent’s conversational interface. Conversations are scoped to agent runtimes and optionally linked to projects. Responses to user messages are streamed back as Server-Sent Events with real-time deltas, permission requests, and structured questions from the executing agent.

Quick Start

Create a conversation, send a message with SSE streaming, handle a permission request, and discover available models:

// 1. Create a conversation linked to a project
const conversation: Conversation = await fetch('/api/chat/conversations', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    runtimeId: 'claude-code',
    projectId: 'proj-8f3a-4b2c',
    title: 'Debug auth flow',
    modelId: 'sonnet',
  }),
}).then(r => r.json());
// → { id: "conv-d4e2-7b1a", runtimeId: "claude-code", status: "active", ... }

// 2. Send a message and stream the response via SSE
const res: Response = await fetch(`/api/chat/conversations/${conversation.id}/messages`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ content: 'Why is the login endpoint returning 403?' }),
});

const reader = res.body!.getReader();
const decoder = new TextDecoder();
let pendingPermission: string | null = null;
let buffer = '';

while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  // An SSE line can be split across reads; keep the partial tail in the buffer
  buffer += decoder.decode(value, { stream: true });
  const lines = buffer.split('\n');
  buffer = lines.pop() ?? '';

  for (const line of lines) {
    if (!line.startsWith('data: ')) continue;
    const event = JSON.parse(line.slice(6));

    switch (event.type) {
      case 'delta':
        process.stdout.write(event.content);
        break;
      case 'permission_request':
        // Agent wants to read a file — save the requestId to respond
        pendingPermission = event.requestId;
        console.log(`\nPermission: ${event.toolName}(${JSON.stringify(event.toolInput)})`);
        break;
      case 'done':
        console.log(`\nMessage ID: ${event.messageId}`);
        break;
    }
  }
}

// 3. If the agent requested permission, approve it
if (pendingPermission) {
  await fetch(`/api/chat/conversations/${conversation.id}/respond`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      requestId: pendingPermission,
      behavior: 'allow',
      alwaysAllow: true,
    }),
  });
}

// 4. Discover available models for the model picker
const models: ChatModelOption[] = await fetch('/api/chat/models').then(r => r.json());
models.forEach(m => console.log(`${m.label} (${m.provider}) — ${m.costLabel}`));

Base URL

/api/chat

Endpoints

List Conversations

GET /api/chat/conversations

Retrieve conversations with optional filtering by status, project, and result limit. Results are ordered by most recent first.

Query Parameters

Param      Type    Req  Description
status     enum         Filter by conversation status: active or archived
projectId  string       Filter conversations by project UUID
limit      number       Maximum number of conversations to return

Response 200 — Array of conversation objects

Conversation Object

Field      Type           Req  Description
id         string (UUID)  *    Conversation identifier
projectId  string (UUID)       Associated project
title      string              Conversation title
runtimeId  enum           *    Agent runtime: claude-code or openai-codex-app-server
modelId    string              Model used for responses (e.g., haiku, sonnet, gpt-5.4)
status     enum           *    active or archived
createdAt  ISO 8601       *    Creation timestamp
updatedAt  ISO 8601       *    Last modification timestamp
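The examples in this document refer to a Conversation TypeScript type. A minimal sketch derived from the table above (field optionality inferred from the Req column; an illustration, not a generated type):

```typescript
// Sketch of the Conversation object described by the table above
interface Conversation {
  id: string;                                           // UUID
  projectId?: string;                                   // UUID of the associated project
  title?: string;
  runtimeId: 'claude-code' | 'openai-codex-app-server';
  modelId?: string;                                     // e.g. 'haiku', 'sonnet'
  status: 'active' | 'archived';
  createdAt: string;                                    // ISO 8601
  updatedAt: string;                                    // ISO 8601
}

const example: Conversation = {
  id: 'conv-d4e2-7b1a',
  runtimeId: 'claude-code',
  status: 'active',
  createdAt: '2026-04-03T10:00:00.000Z',
  updatedAt: '2026-04-03T10:45:00.000Z',
};
```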

Fetch recent active conversations for a project — useful for displaying a conversation sidebar:

// Fetch active conversations for a project
const conversations: Conversation[] = await fetch(
  '/api/chat/conversations?status=active&projectId=proj-8f3a-4b2c&limit=20'
).then(r => r.json());

conversations.forEach(c => {
  const age: number = Math.round((Date.now() - new Date(c.updatedAt).getTime()) / 3600000);
  console.log(`${c.title || 'Untitled'} (${c.runtimeId}) — ${age}h ago`);
});

Example response:

[
  {
    "id": "conv-d4e2-7b1a",
    "projectId": "proj-8f3a-4b2c",
    "title": "Debug auth flow",
    "runtimeId": "claude-code",
    "modelId": "sonnet",
    "status": "active",
    "createdAt": "2026-04-03T10:00:00.000Z",
    "updatedAt": "2026-04-03T10:45:00.000Z"
  }
]

Create Conversation

POST /api/chat/conversations

Start a new conversation. Requires an agent runtime. Optionally link to a project (triggers an automatic environment scan) and select a model.

Request Body

Field      Type           Req  Description
runtimeId  enum           *    Agent runtime: claude-code or openai-codex-app-server
projectId  string (UUID)       Project to associate with (triggers auto environment scan)
title      string              Conversation title
modelId    string              Model ID for responses

Response 201 Created — The created conversation object

Errors: 400 — Missing or invalid runtimeId

Start a conversation linked to a project — the agent receives the project’s working directory and environment context automatically:

// Create a conversation with a specific model
const conversation: Conversation = await fetch('/api/chat/conversations', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    runtimeId: 'claude-code',
    projectId: 'proj-8f3a-4b2c',
    title: 'Debug auth flow',
    modelId: 'sonnet',
  }),
}).then(r => r.json());

console.log(conversation.id);     // "conv-d4e2-7b1a"
console.log(conversation.status); // "active"

Example response:

{
  "id": "conv-d4e2-7b1a",
  "projectId": "proj-8f3a-4b2c",
  "title": "Debug auth flow",
  "runtimeId": "claude-code",
  "modelId": "sonnet",
  "status": "active",
  "createdAt": "2026-04-03T10:00:00.000Z",
  "updatedAt": "2026-04-03T10:00:00.000Z"
}

Get Conversation

GET /api/chat/conversations/{id}

Retrieve a single conversation with its message count.

Response 200 — Conversation object with messageCount field

Additional Fields

Field         Type    Req  Description
messageCount  number  *    Total messages in the conversation

Errors: 404 — Conversation not found

// Get conversation with message count
const conv: Conversation & { messageCount: number } = await fetch('/api/chat/conversations/conv-d4e2-7b1a')
  .then(r => r.json());

console.log(`${conv.title}: ${conv.messageCount} messages`);

Update Conversation

PATCH /api/chat/conversations/{id}

Update conversation title, status, model, or runtime.

Request Body (all fields optional)

Field      Type    Req  Description
title      string       Updated conversation title
status     enum         New status: active or archived
modelId    string       Change model for future messages
runtimeId  string       Change agent runtime

Errors: 400 — Invalid status value, 404 — Not found

Archive a completed conversation or switch to a different model mid-conversation:

// Switch to a faster model for quick follow-up questions
await fetch('/api/chat/conversations/conv-d4e2-7b1a', {
  method: 'PATCH',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ modelId: 'haiku' }),
});
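The archive case mentioned above follows the same pattern. A sketch using the documented status value (the helper name is illustrative):

```typescript
// Archive a conversation by PATCHing its status to 'archived'
// (endpoint and field per the request body table above)
async function archiveConversation(conversationId: string): Promise<void> {
  await fetch(`/api/chat/conversations/${conversationId}`, {
    method: 'PATCH',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ status: 'archived' }),
  });
}
```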

Delete Conversation

DELETE /api/chat/conversations/{id}

Permanently delete a conversation and all its messages.

Response 204 No Content

Errors: 404 — Conversation not found

// Permanently delete a conversation and its messages
await fetch('/api/chat/conversations/conv-d4e2-7b1a', { method: 'DELETE' });

Get Messages

GET /api/chat/conversations/{id}/messages

Fetch message history for a conversation. Supports cursor-based pagination for reconnection scenarios.

Query Parameters

Param  Type    Req  Description
after  string       Message ID cursor — return messages after this ID
limit  number       Maximum number of messages to return

Response 200 — Array of message objects

Errors: 404 — Conversation not found
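This section only exercises a message’s id field; the full message shape is not specified here, so the pagination example below can be typed with a deliberately minimal sketch:

```typescript
// Minimal placeholder type — only `id` is documented in this section;
// the remaining fields are intentionally left open
interface Message {
  id: string;
  [field: string]: unknown;
}

const m: Message = { id: 'msg-a8f3-4c2e' };
```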

Fetch message history with pagination — use the after cursor to resume from where you left off:

// Fetch the last 50 messages
const messages: Message[] = await fetch(
  '/api/chat/conversations/conv-d4e2-7b1a/messages?limit=50'
).then(r => r.json());

// Use cursor-based pagination to load more
if (messages.length === 50) {
  const lastId: string = messages[messages.length - 1].id;
  const older: Message[] = await fetch(
    `/api/chat/conversations/conv-d4e2-7b1a/messages?after=${lastId}&limit=50`
  ).then(r => r.json());
}

Send Message (SSE Stream)

POST /api/chat/conversations/{id}/messages (SSE response)

Send a user message and receive the assistant response as a Server-Sent Events stream. Supports @-mentions to inject entity context. The stream emits deltas, status updates, permission requests, and a final done event.

Request Body (POST)

Field     Type      Req  Description
content   string    *    User message text
mentions  object[]       Array of @-mention references to inject as context

Response — text/event-stream with JSON event objects

Stream Event Types

Event               Description
delta               Incremental text content from the assistant
status              Phase update (e.g., thinking, tool_use) with a human-readable message
permission_request  Agent is requesting permission to use a tool — includes requestId, toolName, toolInput
question            Agent is asking structured questions — includes requestId and questions array
screenshot          Screenshot attachment with documentId, thumbnailUrl, dimensions
done                Stream complete — includes final messageId and quickAccess entity links
error               Error message — stream terminates after this event

Errors: 400 — Missing or invalid content, 404 — Conversation not found
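The event shapes above can be sketched as a TypeScript discriminated union. Field names are taken from this section; the screenshot dimensions are assumed to be width and height per the Stream Event Reference table, and this is an illustration rather than an official type definition:

```typescript
// Sketch of the SSE event payloads described in this section
type ChatStreamEvent =
  | { type: 'delta'; content: string }
  | { type: 'status'; phase: string; message: string }
  | { type: 'permission_request'; requestId: string; toolName: string; toolInput: Record<string, unknown> }
  | { type: 'question'; requestId: string; questions: unknown[] }
  | { type: 'screenshot'; documentId: string; thumbnailUrl: string; width: number; height: number }
  | { type: 'done'; messageId: string; quickAccess: { entityType: string; entityId: string; label: string }[] }
  | { type: 'error'; message: string };

// The `type` field narrows the union inside a switch
function describeEvent(e: ChatStreamEvent): string {
  switch (e.type) {
    case 'delta': return `text: ${e.content}`;
    case 'error': return `error: ${e.message}`;
    default: return e.type;
  }
}
```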

Send a message and handle all SSE event types — the stream contains text deltas, status updates, permission requests, and a final completion event:

// Send a message and process the SSE stream
const res: Response = await fetch('/api/chat/conversations/conv-d4e2-7b1a/messages', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ content: 'What tasks are running right now?' }),
});

const reader = res.body!.getReader();
const decoder = new TextDecoder();
let buffer = '';

while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  // An SSE line can be split across reads; keep the partial tail in the buffer
  buffer += decoder.decode(value, { stream: true });
  const lines = buffer.split('\n');
  buffer = lines.pop() ?? '';

  for (const line of lines) {
    if (!line.startsWith('data: ')) continue;
    const event = JSON.parse(line.slice(6));

    switch (event.type) {
      case 'delta':
        process.stdout.write(event.content);
        break;
      case 'status':
        console.log(`[status] ${event.phase}: ${event.message}`);
        break;
      case 'permission_request':
        console.log(`[permission] ${event.toolName}: ${JSON.stringify(event.toolInput)}`);
        // Respond via the /respond endpoint
        break;
      case 'done':
        console.log(`\nComplete: ${event.messageId}`);
        break;
      case 'error':
        console.error(`Error: ${event.message}`);
        break;
    }
  }
}

Example stream events:

data: {"type":"status","phase":"thinking","message":"Analyzing the request..."}

data: {"type":"delta","content":"Let me check the running tasks. "}

data: {"type":"status","phase":"tool_use","message":"Querying tasks API..."}

data: {"type":"delta","content":"There are 3 tasks currently running:\n\n1. **Analyze Q4 revenue trends** — started 5 minutes ago\n2. **Code review for auth module** — started 2 minutes ago\n3. **Generate test fixtures** — started 1 minute ago"}

data: {"type":"done","messageId":"msg-a8f3-4c2e","quickAccess":[{"entityType":"task","entityId":"task-9d4e-a1b2","label":"Analyze Q4 revenue trends"}]}

Respond to Permission Request

POST /api/chat/conversations/{id}/respond

Allow or deny a pending permission or question request from an active chat turn. Resolves the in-memory promise that blocks the agent SDK's tool callback. Optionally save an always-allow rule.

Request Body

Field              Type     Req  Description
requestId          string   *    ID of the pending permission request
behavior           enum     *    allow or deny
messageId          string        Message ID to update status in the UI
updatedInput       object        Modified tool input (only applied on allow)
message            string        Message back to the agent (used on deny)
alwaysAllow        boolean       Persist as a permanent permission rule
permissionPattern  string        Pattern for the always-allow rule
toolName           string        Tool name for auto-building the permission pattern
toolInput          object        Tool input for auto-building the permission pattern

Response 200 — { "ok": true, "stale": false }

The stale field is true if the in-memory request had already expired (timeout, HMR restart). The DB and UI are still updated regardless.

Errors: 400 — Missing requestId or behavior, 500 — Failed to resolve

Allow a tool use request and save it as a permanent rule so the agent won’t ask again:

// Approve a permission request and save a permanent rule
await fetch('/api/chat/conversations/conv-d4e2-7b1a/respond', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    requestId: 'req-b7c3-2e4f',
    behavior: 'allow',
    alwaysAllow: true,
    toolName: 'Read',
    toolInput: { file_path: '/src/auth/login.ts' },
  }),
});

// Deny a permission with an explanation
await fetch('/api/chat/conversations/conv-d4e2-7b1a/respond', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    requestId: 'req-c8d4-3f5a',
    behavior: 'deny',
    message: 'Do not modify production config files',
  }),
});
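The stale flag described above can be surfaced to the caller so the UI can note that the agent is no longer waiting. A sketch (the helper name is illustrative):

```typescript
// Respond to a pending request and report whether it had already expired
async function respondToPermission(
  conversationId: string,
  requestId: string,
  behavior: 'allow' | 'deny',
): Promise<{ ok: boolean; stale: boolean }> {
  const res = await fetch(`/api/chat/conversations/${conversationId}/respond`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ requestId, behavior }),
  });
  const result: { ok: boolean; stale: boolean } = await res.json();
  if (result.stale) {
    // In-memory request expired (timeout, HMR restart); DB and UI were still updated
    console.warn(`Request ${requestId} had already expired`);
  }
  return result;
}
```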

List Models

GET /api/chat/models

Return available chat models discovered from configured SDKs. Falls back to a hardcoded catalog if SDKs are unreachable.

Response 200 — Array of model objects

ChatModelOption

Field      Type    Req  Description
id         string  *    Model identifier (e.g., haiku, sonnet, gpt-5.4)
label      string  *    Display name
provider   enum    *    anthropic, openai, or ollama
tier       string  *    Performance tier: Fast, Balanced, or Best
costLabel  string  *    Relative cost: $, $$, $$$, or Free
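A TypeScript sketch of the ChatModelOption shape, with the enum values from the table above (an illustration, not a generated type):

```typescript
// Sketch of the ChatModelOption object described by the table above
interface ChatModelOption {
  id: string;                                   // e.g. 'haiku', 'sonnet'
  label: string;
  provider: 'anthropic' | 'openai' | 'ollama';
  tier: 'Fast' | 'Balanced' | 'Best';
  costLabel: '$' | '$$' | '$$$' | 'Free';
}

const haiku: ChatModelOption = {
  id: 'haiku',
  label: 'Haiku',
  provider: 'anthropic',
  tier: 'Fast',
  costLabel: '$',
};
```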

Fetch available models to populate a model selector — shows capabilities and pricing tier:

// Build a model picker grouped by provider
// Build a model picker grouped by provider
// (Object.groupBy requires ES2024 / Node 21+)
const models: ChatModelOption[] = await fetch('/api/chat/models').then(r => r.json());

const byProvider = Object.groupBy(models, m => m.provider);
for (const [provider, providerModels] of Object.entries(byProvider)) {
  console.log(`${provider}:`);
  providerModels?.forEach(m => {
    console.log(`  ${m.label} [${m.tier}] ${m.costLabel}`);
  });
}

Example response:

[
  { "id": "haiku", "label": "Haiku", "provider": "anthropic", "tier": "Fast", "costLabel": "$" },
  { "id": "sonnet", "label": "Sonnet", "provider": "anthropic", "tier": "Balanced", "costLabel": "$$" },
  { "id": "opus", "label": "Opus", "provider": "anthropic", "tier": "Best", "costLabel": "$$$" },
  { "id": "gpt-5.4-mini", "label": "GPT-5.4 Mini", "provider": "openai", "tier": "Fast", "costLabel": "$" },
  { "id": "gpt-5.3-codex", "label": "Codex 5.3", "provider": "openai", "tier": "Balanced", "costLabel": "$$" },
  { "id": "gpt-5.4", "label": "GPT-5.4", "provider": "openai", "tier": "Best", "costLabel": "$$$" }
]

Suggested Prompts

GET /api/chat/suggested-prompts

Return context-aware prompt categories with expandable sub-prompts for the chat input.

Response 200 — Array of prompt categories

PromptCategory

Field    Type               Req  Description
id       string             *    Category identifier
label    string             *    Category display name
icon     string             *    Lucide icon name
prompts  SuggestedPrompt[]  *    Array of prompts in this category

SuggestedPrompt

Field   Type    Req  Description
label   string  *    Short display text (~40 chars)
prompt  string  *    Full detailed prompt text
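A TypeScript sketch of these two shapes, derived from the tables above (the sample category values are hypothetical):

```typescript
// Sketches of the PromptCategory and SuggestedPrompt objects above
interface SuggestedPrompt {
  label: string;  // short display text (~40 chars)
  prompt: string; // full detailed prompt text
}

interface PromptCategory {
  id: string;
  label: string;
  icon: string;   // Lucide icon name
  prompts: SuggestedPrompt[];
}

// Hypothetical category, for illustration only
const sample: PromptCategory = {
  id: 'debugging',
  label: 'Debugging',
  icon: 'bug',
  prompts: [
    { label: 'Find the failing test', prompt: 'Run the test suite and explain the first failure.' },
  ],
};
```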

Fetch suggested prompts to display quick-action buttons in the chat input area:

// Populate the chat input with suggested prompt categories
const categories: PromptCategory[] = await fetch('/api/chat/suggested-prompts')
  .then(r => r.json());

categories.forEach(cat => {
  console.log(`${cat.label} (${cat.icon}):`);
  cat.prompts.forEach(p => console.log(`  ${p.label}`));
});

Stream Event Reference

The Send Message endpoint emits these SSE event types:

Event Type          Key Fields                               Description
delta               content                                  Incremental assistant text
status              phase, message                           Phase transition (thinking, tool_use, etc.)
permission_request  requestId, toolName, toolInput           Agent needs tool approval
question            requestId, questions[]                   Agent asking structured questions
screenshot          documentId, thumbnailUrl, width, height  Screenshot attachment
done                messageId, quickAccess[]                 Stream complete with entity links
error               message                                  Terminal error — stream closes

Default Models

Provider   Model ID       Label         Tier      Cost
Anthropic  haiku          Haiku         Fast      $
Anthropic  sonnet         Sonnet        Balanced  $$
Anthropic  opus           Opus          Best      $$$
OpenAI     gpt-5.4-mini   GPT-5.4 Mini  Fast      $
OpenAI     gpt-5.3-codex  Codex 5.3     Balanced  $$
OpenAI     gpt-5.4        GPT-5.4       Best      $$$