What is agent-os?

agent-os is an open-source, filesystem-based continuity system that gives AI agents persistent memory and seamless handoffs — across sessions, across tools, and across time.

It's not a platform. It's not a SaaS product. It's a directory structure and a set of operating procedures that any AI agent can read and write. Clone it, point your agent at it, and your AI workflows gain the one thing they've always lacked: continuity.

The Pain Points We Solved

🧠 Context Loss Between Sessions

The most expensive problem in AI-assisted development isn't token costs — it's the 15-30 minutes you spend re-explaining your project every time you open a new chat. agent-os maintains a structured MEMORY.md that serves as durable truth. When a new session starts, the agent picks up exactly where the last one left off.
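As an illustrative sketch, a MEMORY.md might be organized like this (section names here are hypothetical, not the repo's actual template):

```markdown
# MEMORY.md — Active Durable Truth

## Current Focus
- Migrating the build pipeline to multi-stage Docker images

## Key Decisions
- 2024-05-12: Chose Postgres over SQLite for the job queue

## Known Gotchas
- The staging server rate-limits SSH; wait 60s between reconnects
```

Because it's curated prose rather than a raw chat log, a new session can load the whole thing in a few hundred tokens.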

🤝 Multi-Agent Amnesia

Modern development involves multiple AI tools — a WebUI for deep coding, a Discord bot for quick tasks, a CLI agent for automation. Without agent-os, each one operates in its own silo. agent-os introduces a handoff protocol: when one agent pauses, it writes what's done, what's blocked, and the exact next steps. The next agent continues seamlessly.
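A handoff file could be as simple as the sketch below, capturing the three things the protocol calls for — done, blocked, and next steps (filename and field layout are illustrative, not the repo's actual schema):

```markdown
# handoffs/2024-05-12-docker-build.md

**Done:** Dockerfile written; base image builds cleanly
**Blocked:** Final stage fails on missing libssl headers
**Next steps:**
1. Add the missing dev package to the builder stage
2. Re-run the build and update build-state/
```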

Long-Running Jobs That Outlive Sessions

Building a Docker image for 3 hours? Compiling a massive Rust project overnight? Your chat session will time out long before the job finishes. agent-os uses build-state files to track long-running operations. If a session dies mid-build, the next agent knows the job's status and how to pick up the pieces.
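One way a build-state file might record a long-running job, so any later agent can assess it at a glance (format is a hypothetical sketch):

```markdown
# build-state/rust-release-build.md

- **Started:** 2024-05-12 22:10 UTC by the WebUI agent
- **Command:** `cargo build --release`
- **Status:** running (last checked 2024-05-13 01:30 UTC)
- **On failure:** check `target/` for partial artifacts before re-running
```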

📚 Unstructured Knowledge Decay

Chat logs are terrible documentation. Important decisions and debugging insights get buried in thousands of lines. agent-os organizes knowledge into a clear hierarchy: curated active truths, daily notes, project docs, and decision records.

🛡️ Production Safety

AI agents with SSH access to production servers can do real damage. agent-os includes built-in safety workflows: SSH rate-limit awareness, pre-shutdown checklists, and golden rules about what requires human confirmation.

How It Works

The entire system is just files — Markdown files in a well-organized directory. No database. No API. No vendor lock-in.

agent-os/
├── MANAGEMENT.md              # System overview (agents read this first)
├── MEMORY.md                  # Active durable truth
├── TODO.md                    # Cross-session priorities
├── AGENT-BOOTSTRAP-PROMPT.md  # One prompt to onboard any new agent
├── memory/                    # Daily notes
├── docs/workflows/            # Reusable procedures
├── handoffs/                  # Active task transfers between agents
└── build-state/               # Long-running job state

To onboard a new agent, you give it one instruction: "Read ~/agent-os/MANAGEMENT.md and follow the startup read order." The agent bootstraps itself, reads the current state, checks for pending handoffs, and gets to work.
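Since the startup sequence is just file reads, it can be sketched in a few lines of shell. This mock uses a temporary directory because the real read order is whatever MANAGEMENT.md specifies; the file contents here are placeholders:

```shell
# Build a mock agent-os directory to illustrate the startup read order.
AGENT_OS="$(mktemp -d)/agent-os"
mkdir -p "$AGENT_OS/handoffs"
echo "Read order: MEMORY.md, TODO.md, handoffs/" > "$AGENT_OS/MANAGEMENT.md"
echo "Durable truth lives here."                 > "$AGENT_OS/MEMORY.md"
echo "- [ ] Ship v0.2"                           > "$AGENT_OS/TODO.md"

# 1. Read the system overview first.
cat "$AGENT_OS/MANAGEMENT.md"
# 2. Load current state and cross-session priorities.
cat "$AGENT_OS/MEMORY.md" "$AGENT_OS/TODO.md"
# 3. Check for pending handoffs before starting new work.
ls "$AGENT_OS/handoffs" | grep . || echo "No pending handoffs."
```

Any agent that can run `cat` and `ls` — or the equivalent file-read tool calls — can execute this bootstrap.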

Works With Every Model, Every Provider, Every Setup

This is where agent-os fundamentally differs from platform-specific memory solutions: it doesn't care what's running your agent.

Switch Models Freely

Start with Claude Opus for architecture, hand off to Sonnet for implementation, use Haiku for maintenance. Memory persists across all of them.

Switch Providers Entirely

Use Anthropic's Claude today, OpenAI's GPT tomorrow, Google's Gemini next week. If the agent can read MEMORY.md, it's onboarded.

Run Local Models

Pull LLaMA from Meta, run Hermes through Ollama, or host any open-weight model on your own GPU. Local models read files just like cloud models do.

Use Any Agent Framework

OpenClaw, LangChain, AutoGen, Claude Code, Codex CLI, a custom Python script — the framework is irrelevant. agent-os is the shared memory layer underneath.

Mix and Match Simultaneously

Run cloud Claude for complex reasoning alongside a local LLaMA for batch tasks. Both share the same agent-os/ directory and see each other's work.

Zero Vendor Lock-in

If a provider raises prices, switch. If a better model drops, swap it in. Your memory, decisions, and safety rails persist independently of the model reading them.

Platform-native memory features like OpenAI's memory or Claude's project knowledge are walled gardens — they only work within that ecosystem. agent-os is the open alternative.

Why Filesystem-Based?

1. Universality

Every agent, every platform, every OS can read Markdown files. No SDK required.

2. Inspectability

You can cat MEMORY.md and see exactly what your agents know. No opaque embeddings, no mystery context.

3. Version Control

It's a git repo. Full history of how your agents' knowledge evolved. Diff, revert, branch, and merge.
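Because everything is plain files, ordinary git commands apply directly. A minimal demonstration in a throwaway repo (the file contents are illustrative):

```shell
# Track how agent knowledge evolves, like any other repo.
cd "$(mktemp -d)"
git init -q .
git config user.email "agent@example.com"  # local identity for the demo repo
git config user.name "agent"

echo "Decision: use Postgres" > MEMORY.md
git add MEMORY.md && git commit -qm "memory: record Postgres decision"

echo "Decision: use Postgres (pooled via pgbouncer)" > MEMORY.md
git add MEMORY.md && git commit -qm "memory: add pgbouncer note"

git log --oneline -- MEMORY.md   # full history of what agents knew, and when
git diff HEAD~1 -- MEMORY.md     # see exactly what changed in the last update
```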

Built From Real Operations

agent-os wasn't designed in a vacuum. It evolved from months of running AI agents across production infrastructure — ZK proving clusters, crypto mining pools, game servers, and trading bots. Every workflow, every safety rail, and every naming convention exists because we hit a real problem and needed a real solution.

The handoff protocol exists because a WebUI agent started a 4-hour Docker build and the session timed out. The SSH safety workflow exists because an agent got IP-banned from a server by reconnecting too aggressively. The build-state system exists because we lost track of which servers had completed builds and which had failed.

These aren't theoretical problems. They're scars.

Get Started

git clone https://github.com/AlphaMine-Tech/agent-os.git ~/agent-os

That's it. Point your agents at the directory and start building continuity. MIT licensed, free forever, built for the community.