OpenAgent
Source-available AI coding agent

Learn agent systems by running one yourself.

OpenAgent is a beginner-friendly coding agent that lets you study the real thing: a backend agent loop, multiple user surfaces, live streaming over WebSocket, a tool system that can read and write files, and a runtime that is simple enough to inspect but complete enough to deploy.

Tool use · Planning · Streaming · Memory · Sub-agents · Teams
Overview

What this project actually is.

OpenAgent is not just a chat UI. It is a complete runtime stack for an AI coding agent: one backend that owns the agent loop, one tool system that can act on a workspace, and three interfaces that expose the same core capability at different levels of abstraction.

Readable

The architecture is explicit. You can follow the message from the browser to the backend, into the LLM adapter, through tool execution, and back to the user.

Educational

The project is built to teach how agent systems work in practice: not only prompts, but planning, streaming, tools, memory, verification, tasking, and runtime boundaries.

Deployable

It is simple enough to study locally and complete enough to run on a real public server with HTTPS, reverse proxying, and separate developer and user-facing surfaces.

Product Surfaces

One backend, three different product experiences.

README and HOW_IT_WORKS both make the same architectural point: the intelligence does not live in the frontends. Both web UIs and the CLI are thin surfaces over one shared backend contract.

Surface 01

Developer UI

Dark theme, tool blocks, token usage, raw WebSocket visibility, and controls for planning, approval, tracing, and teams. Built for builders and debuggers.

Surface 02

User UI

A lighter interface with reduced protocol detail, more task-oriented feedback, and a simpler interaction model for end users who care more about outcomes than internals.

Surface 03

Terminal CLI

A direct command-line surface with session persistence, history, slash commands, and a richer terminal workflow for developers who prefer keyboard-first control.

Shared Core

Agent API

FastAPI + WebSocket backend that stores conversations, streams agent output, registers tools, manages memory and tasks, and mediates all LLM interaction.

Core Runtime

The agent loop is the product core.

The system behavior is built around a simple but powerful pattern: send the conversation and tool definitions to the model, stream its response, execute requested tools, send results back, and repeat until the model finishes.

Step 1

The backend prepares system prompt, history, memory, available skills, and tool schemas.

Step 2

The LLM receives conversation state through a provider-agnostic client interface.

Step 3

Text deltas stream back live over WebSocket while tool calls are collected.

Step 4

The runtime executes tools, enforces guards, and feeds results back into the loop.

Step 5

The agent self-verifies with the `think` tool and exits only when it decides the task is done.

while not done:
    micro_compact(messages)              # shrink stale tool output to fit the context window
    drain_background_notifications()     # surface finished background jobs
    drain_teammate_inbox()               # fold in messages from teammate agents

    response = model(messages, tools)    # deltas stream to the client as they arrive
    messages.append(response)

    if response.truncated:               # hit an output limit: let the model continue
        continue
    if not response.tool_calls:          # no tools requested: the agent decided it is done
        break

    results = execute_tools(response.tool_calls)   # guards and approval gates apply here
    messages.extend(results)

final_summary = model(messages, [])      # final call with no tools to produce a summary

Streaming is first-class. Users do not wait for a final blob of output; they see text, tool calls, tool results, status updates, and completion events as they happen.

Verification stays inside the loop. The system does not bolt on a separate QA phase: the agent uses the `think` tool to inspect its own work and keeps iterating if needed.

Context is actively managed. Old tool output can be compacted, transcripts can be kept, and the loop can continue past truncation limits instead of failing abruptly.
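The compaction idea can be sketched in a few lines: keep recent turns intact and shrink older tool output to placeholders. This is a minimal sketch assuming a simple role/content message shape; the function name, thresholds, and placeholder text are illustrative, not OpenAgent's actual implementation.

```python
# Illustrative micro-compaction: recent turns stay intact, old verbose tool
# output is truncated so long sessions keep fitting in the context window.
# Message shape and thresholds are assumptions, not OpenAgent's real API.

def micro_compact(messages, keep_recent=4, max_len=200):
    """Truncate tool results in all but the most recent turns."""
    cutoff = len(messages) - keep_recent
    for i, msg in enumerate(messages):
        if i < cutoff and msg.get("role") == "tool" and len(msg["content"]) > max_len:
            msg["content"] = msg["content"][:max_len] + " ...[compacted]"
    return messages

history = [
    {"role": "user", "content": "list the repo"},
    {"role": "tool", "content": "x" * 5000},   # old, verbose tool output
    {"role": "assistant", "content": "Done."},
    {"role": "user", "content": "now run tests"},
]
micro_compact(history, keep_recent=2)   # only the first two entries are eligible
```

Real compaction layers are more involved (transcripts, summaries), but the invariant is the same: the model keeps seeing recent detail while old bulk shrinks.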
Architecture

A layered architecture with clear responsibilities.

One of OpenAgent’s strengths is that the major concerns are explicit: the surfaces, transport, runtime loop, tool system, and LLM adapter boundary are each separable enough to study on their own and to evolve independently.

Surfaces

  • `agent-ui` for developers
  • `agent-user-ui` for end users
  • `agent-cli` for terminal workflows

Transport

  • REST for setup and retrieval
  • WebSocket for live interaction
  • Structured events for tools and status
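The structured-event idea is easiest to see as tagged JSON frames on one WebSocket connection: every frame carries a type tag, so each surface can render whatever subset it cares about. A minimal sketch with hypothetical event names; the project's actual wire format may differ.

```python
import json

# Hedged sketch of a structured event stream: every frame is a JSON object
# with a "type" tag. Event names and fields are illustrative, not
# OpenAgent's exact protocol.

def frame(event_type, **payload):
    return json.dumps({"type": event_type, **payload})

stream = [
    frame("text_delta", text="Reading the file"),
    frame("tool_call", name="read_file", args={"path": "app.py"}),
    frame("tool_result", name="read_file", ok=True),
    frame("status", state="thinking"),
    frame("done", tokens_used=1234),
]

# A client dispatches on the type tag instead of parsing free text.
for raw in stream:
    event = json.loads(raw)
    kind = event["type"]
```

The payoff is that the developer UI can show every frame raw, while the user UI keeps only `text_delta` and `done`-style events, over the same stream.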

Runtime

  • Agent loop
  • Compaction
  • Memory
  • Approval and plan mode

Execution

  • Tool registry
  • File and bash tools
  • Background jobs
  • Sub-agents and teammates

Provider abstraction

  • Shared `LLMClient` contract
  • Provider-specific adapters
  • Loop does not depend on a single vendor SDK
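The adapter boundary can be sketched as a small protocol the loop depends on, with each vendor SDK wrapped in an adapter behind it. `LLMClient` is the name used above; the method signature and the `Completion` shape here are assumptions, not the project's real types.

```python
from dataclasses import dataclass, field
from typing import Protocol

# Hedged sketch of a provider-agnostic client boundary: the loop codes
# against this contract and never imports a vendor SDK directly.

@dataclass
class Completion:
    text: str
    tool_calls: list = field(default_factory=list)
    truncated: bool = False

class LLMClient(Protocol):
    def complete(self, messages: list, tools: list) -> Completion: ...

class FakeAdapter:
    """Stand-in adapter; a real one would call a vendor SDK here."""
    def complete(self, messages, tools):
        return Completion(text="ok")

def run_turn(client: LLMClient, messages: list) -> Completion:
    # The loop only sees the contract, so swapping providers is a config change.
    return client.complete(messages, tools=[])

result = run_turn(FakeAdapter(), [{"role": "user", "content": "hi"}])
```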

Persistence

  • SQLite for conversations and token usage
  • `workspace/` for temporary task artifacts
  • Hidden internal folders for memory, tasks, transcripts, and team state

Ops reality

  • Reverse proxy matters
  • Same-origin behavior matters
  • Storage semantics matter
Feature Set

Capabilities that make it feel like an actual agent system.

The README positions OpenAgent as more than a loop around a model. The feature set backs that up: tools, planning, memory, verification, background work, and multi-agent collaboration are all part of the runtime story.

Tool use

Bash, file operations, code navigation, tasking, skill loading, background execution, and more.

Planning and approval

Read-only plan mode, agent-initiated planning, and optional human-in-the-loop approval gates.
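An approval gate can be sketched as a predicate in front of side-effecting tools: read-only tools pass through (the plan-mode case), everything else waits for a human. The tool names and callback shape are assumptions, not OpenAgent's actual API.

```python
# Hedged sketch of a human-in-the-loop approval gate. Which tools count as
# read-only, and the callback signatures, are illustrative assumptions.

READ_ONLY_TOOLS = {"read_file", "list_dir", "grep"}

def execute_with_approval(tool_call, run_tool, ask_human):
    if tool_call["name"] in READ_ONLY_TOOLS:
        return run_tool(tool_call)          # safe in plan mode: no side effects
    if not ask_human(tool_call):            # the human-in-the-loop gate
        return {"ok": False, "error": "rejected by user"}
    return run_tool(tool_call)

# Example: a user who rejects every write leaves the workspace untouched.
result = execute_with_approval(
    {"name": "write_file", "args": {"path": "a.txt"}},
    run_tool=lambda call: {"ok": True},
    ask_human=lambda call: False,
)
```

The rejection result goes back into the loop as a tool result, so the agent can re-plan instead of crashing.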

Memory and compaction

Cross-session memory plus multiple compaction layers to keep long-running sessions workable.

Sub-agents and teams

Focused child agents, teammate status tracking, inbox messaging, and coordinated parallel work.
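Teammate messaging can be pictured as a queue drained at the top of each loop iteration, matching the `drain_teammate_inbox()` step in the runtime sketch. The message shape here is an assumption, not the project's actual structure.

```python
from queue import Queue, Empty

# Hedged sketch of a teammate inbox: other agents enqueue notes, and the
# main loop folds them into its conversation before the next model call.

inbox: Queue = Queue()

def drain_teammate_inbox(messages: list, inbox: Queue) -> list:
    while True:
        try:
            note = inbox.get_nowait()
        except Empty:
            break
        messages.append(
            {"role": "user", "content": f"[teammate {note['from']}] {note['text']}"}
        )
    return messages

inbox.put({"from": "researcher", "text": "API docs summarized in notes.md"})
messages = drain_teammate_inbox([], inbox)
```

Because the drain happens inside the loop, coordination costs no extra protocol: teammate updates simply become part of the conversation the model sees next.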

Deployment Reality

The runtime details matter in production.

The project documentation and recent deployment work make several product-relevant boundaries explicit: workspace files are temporary, chat history is stored separately, and without auth the system behaves as one shared deployment rather than as isolated per-user accounts.

Storage

Workspace

Files created by the agent live under `WORKSPACE_DIR` and are temporary by default.

Persistence

Chat history

Conversation history lives in SQLite and survives session end, but not necessarily container recreation.
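The storage split is easy to picture with Python's stdlib `sqlite3`: messages go in a database, workspace files live elsewhere. The schema below is illustrative, not OpenAgent's actual tables.

```python
import sqlite3

# Hedged sketch of conversation persistence. Table and column names are
# assumptions; the real deployment uses a database file on disk, not :memory:.

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE messages (
        conversation_id TEXT,
        role TEXT,
        content TEXT,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")
conn.execute(
    "INSERT INTO messages (conversation_id, role, content) VALUES (?, ?, ?)",
    ("conv-1", "user", "hello"),
)
conn.commit()

rows = conn.execute(
    "SELECT role, content FROM messages WHERE conversation_id = ?", ("conv-1",)
).fetchall()
```

The production caveat follows directly: if the database file lives inside a container's filesystem, it survives session end but not container recreation unless it is on a mounted volume.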

Routing

Same-origin

The web UIs need production-aware API and WebSocket routing, not hardcoded localhost assumptions.
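A common way to get same-origin behavior is one reverse proxy that serves the UI and forwards API and WebSocket paths to the backend. This nginx fragment is a hedged sketch; the paths and backend port 8000 are assumptions, not the project's documented configuration.

```nginx
# Illustrative fragment: UI, API, and WebSocket share one origin.
location /api/ {
    proxy_pass http://127.0.0.1:8000;
}
location /ws {
    proxy_pass http://127.0.0.1:8000;
    proxy_http_version 1.1;                      # WebSocket upgrade needs HTTP/1.1
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```

With this shape, the frontends can derive their API and WebSocket URLs from the page's own origin instead of hardcoding localhost.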

Access model

No auth

Without app-level authentication, conversations are shared at the deployment level, not the user level.

Further Reading

The repo stays readable end to end.

This page is an editorial layer over the source material, not a replacement for it. The repo itself still contains the clearest references for setup, architecture, and component-level behavior.

GitHub repo

The full monorepo is on GitHub. That is the best place to inspect the backend agent loop, terminal CLI, both web UIs, and deployment-facing docs in one place.

Published Packages

OpenAgent is also published on PyPI if you only want the packaged CLI instead of a monorepo checkout.

pip install openagent-app
openagent