# Workflows
A workflow is a directed graph of agent steps connected by labelled edges. Each step runs an agent against the current input and produces output; edges define what happens next — including conditional branches based on what the previous step said.
Workflows are built on a visual canvas and executed via the API or triggered by an adapter.
## Core concepts
| Concept | Description |
|---|---|
| Step | A node in the graph — runs an agent or a standalone prompt |
| Edge | A directed connection from one step to the next |
| Branch | Multiple edges out of one step, each with a condition label |
| Router | The LLM logic that reads a step’s output and picks which branch to follow |
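Conceptually, a workflow definition reduces to a small graph structure. The following TypeScript sketch is illustrative only; the type and field names are assumptions, not the platform's actual schema:

```ts
// Illustrative types only; not the platform's actual schema.
type StepType = "agent" | "custom";

interface Step {
  id: string;
  type: StepType;
  agentId?: string;   // set for agent steps
  prompt?: string;    // set for custom steps
  resources?: {       // per-step attachments (see "Step resources" below)
    skills?: string[];
    integrations?: string[];
    knowledgeBases?: string[];
  };
}

interface Edge {
  from: string;       // source step id
  to: string;         // target step id
  condition?: string; // plain-language label; used when branching
}

interface Workflow {
  name: string;
  steps: Step[];
  edges: Edge[];
}
```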
## Step types
| Type | Description |
|---|---|
| Agent step | Runs a specific agent you’ve already created. The step uses the agent’s model, system prompt, and all its attached capabilities. |
| Custom step | Runs a standalone prompt with its own model and resource attachments — no pre-existing agent required. Useful for lightweight transformation or classification steps. |
Both step types support attaching skills, integrations, and knowledge bases directly on the step — independently of whatever is attached to the underlying agent.
## Building a workflow
1. Go to Workflows → New workflow, give the workflow a name, and open the canvas.
2. Add steps by clicking + Add step and choosing the type (agent or custom).
3. Connect steps by dragging from one step's output handle to another's input handle.
4. When you want branching, draw multiple edges out of the same step and give each edge a condition label.
5. Save. The workflow is ready to execute.
## Branching and routing
When a step has more than one outgoing edge, the workflow enters a branch point after that step executes.
Each edge has a plain-language label describing the condition under which it should be followed — for example:
"the caller has a technical issue""the user wants to cancel their subscription""the question is about billing"
At runtime, after the step produces its output, the router sends the output plus all the edge labels to the LLM and asks it to pick the best match. The chosen edge’s target step runs next.
This means routing logic is expressed in natural language, not code. You don’t write if/else — you write conditions the model can reason about.
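To make the mechanism concrete, here is a minimal sketch of what such a router could do, assuming a generic `llmComplete` helper (a placeholder, not a real API) and a simple numbered-choice prompt; the platform's actual router prompt and fallback behaviour may differ:

```ts
// Hypothetical router sketch; llmComplete is an assumed helper, not a real API.
async function routeBranch(
  stepOutput: string,
  edges: { to: string; condition: string }[],
  llmComplete: (prompt: string) => Promise<string>
): Promise<string> {
  const options = edges.map((e, i) => `${i + 1}. ${e.condition}`).join("\n");

  const prompt =
    `A workflow step produced this output:\n${stepOutput}\n\n` +
    `Which condition best matches it? Reply with the number only.\n${options}`;

  const reply = await llmComplete(prompt);
  const index = parseInt(reply.trim(), 10) - 1;

  // Fall back to the first edge if the model's reply isn't a valid choice.
  const chosen = edges[index] ?? edges[0];
  return chosen.to; // id of the step to run next
}
```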
## Example: support triage
```
Start
 └── Identify Issue (agent step — greets user, asks what help is needed)
       ├── "technical issue"  → Troubleshoot (agent step — methodical debugging)
       │                             │
       └── "billing question" → Account & Billing (agent step — verify identity first)
                                     │
                                     ▼
                        Resolve or Escalate (agent step)
                                     └── End
```
Both branches converge to a shared resolution step before the workflow ends.
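Using the illustrative types sketched under Core concepts, this triage graph could be written down as data; the step ids and agent ids here are hypothetical:

```ts
// Hypothetical definition of the triage workflow, using the illustrative types above.
const triage: Workflow = {
  name: "Support triage",
  steps: [
    { id: "identify",     type: "agent", agentId: "identify-issue" },
    { id: "troubleshoot", type: "agent", agentId: "troubleshooter" },
    { id: "billing",      type: "agent", agentId: "account-billing" },
    { id: "resolve",      type: "agent", agentId: "resolver" },
  ],
  edges: [
    { from: "identify",     to: "troubleshoot", condition: "technical issue" },
    { from: "identify",     to: "billing",      condition: "billing question" },
    { from: "troubleshoot", to: "resolve" }, // single outgoing edge: no condition needed
    { from: "billing",      to: "resolve" },
  ],
};
```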
## Step resources
Each step can have its own:
| Resource | Effect |
|---|---|
| Skills | Additional instructions injected at this step only |
| Integrations | MCP tools the agent can call during this step |
| Knowledge bases | Documents searched to provide context at this step |
This lets you give focused context to each stage without polluting every other step with irrelevant tools.
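For instance, a troubleshooting step might carry only the tools and documents relevant to debugging; the resource names below are placeholders:

```ts
// Hypothetical step with focused, step-level resources (names are placeholders).
const troubleshootStep: Step = {
  id: "troubleshoot",
  type: "agent",
  agentId: "troubleshooter",
  resources: {
    skills: ["debugging-checklist"],      // extra instructions for this step only
    integrations: ["log-search"],         // MCP tools callable during this step
    knowledgeBases: ["runbook-articles"], // documents searched for context here
  },
};
```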
## Executing a workflow
### Via the API

```http
POST /api/workflows/:workflowId/execute
Authorization: Bearer <api-token>
Content-Type: application/json

{
  "prompt": "Hi, I can't log into my account"
}
```
The response contains the final step's output as `text`, along with each intermediate step's output:
```json
{
  "text": "I can help with that. First, let me verify your identity…",
  "steps": [
    { "stepId": "identify", "output": "Technical issue — login problem" },
    { "stepId": "troubleshoot", "output": "…" },
    { "stepId": "resolve", "output": "…" }
  ]
}
```
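For reference, a minimal client call might look like the sketch below, assuming Node 18+ with the built-in `fetch`; the base URL, workflow id, and token handling are placeholders:

```ts
// Minimal sketch of calling the execute endpoint; base URL and ids are placeholders.
async function executeWorkflow(workflowId: string, prompt: string) {
  const res = await fetch(`https://example.com/api/workflows/${workflowId}/execute`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ prompt }),
  });
  if (!res.ok) throw new Error(`Execute failed: ${res.status}`);
  return res.json(); // { text, steps: [{ stepId, output }, …] }
}

// Usage:
// const result = await executeWorkflow("wf_123", "Hi, I can't log into my account");
// console.log(result.text);
```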
### Via an adapter
Attach a Telegram or Discord adapter to the workflow directly (rather than to an individual agent). Incoming messages are run through the full workflow graph, and the final step’s reply is sent back to the user.
## Step ordering
The execution engine traverses steps topologically — it respects dependencies (a step runs only after all its upstream steps have completed). Parallel branches that don’t depend on each other are eligible to run concurrently.
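As a rough mental model, wave-based topological execution can be sketched as follows (Kahn's algorithm with one `Promise.all` per wave). Here `runStep` is an assumed helper, and branch routing, which would prune the edges actually followed, is omitted:

```ts
// Sketch of topological execution with per-wave concurrency (Kahn's algorithm).
// runStep is an assumed helper; branch routing is omitted for brevity.
async function executeGraph(
  steps: string[],
  edges: { from: string; to: string }[],
  runStep: (id: string) => Promise<void>
): Promise<void> {
  // Count incoming edges for every step.
  const indegree = new Map<string, number>();
  for (const s of steps) indegree.set(s, 0);
  for (const e of edges) indegree.set(e.to, (indegree.get(e.to) ?? 0) + 1);

  // Steps with no unmet dependencies form the first wave.
  let wave = steps.filter((s) => indegree.get(s) === 0);

  while (wave.length > 0) {
    // Steps in the same wave don't depend on each other, so run them concurrently.
    await Promise.all(wave.map((id) => runStep(id)));

    // Unlock downstream steps whose dependencies are now complete.
    const next: string[] = [];
    for (const done of wave) {
      for (const e of edges) {
        if (e.from !== done) continue;
        const remaining = (indegree.get(e.to) ?? 0) - 1;
        indegree.set(e.to, remaining);
        if (remaining === 0) next.push(e.to);
      }
    }
    wave = next;
  }
}
```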