Go 1.24 · v1.1.4 · MIT · 81 unit tests · ~18MB Docker image

Go Agent API

Production-ready AI Agent framework built in Go. DDD architecture · Eino workflow engine · OpenAI-compatible · Human-in-the-loop approval · Streaming SSE

Built for Production AI Agents

Everything you need to build, deploy, and operate agentic AI systems.

Eino Workflow Engine

DAG-based agentic workflows with Router → Think → Act → Observe loop. Multi-agent routing built-in. LangGraph equivalent for Go.

OpenAI-Compatible

Drop-in replacement for OpenAI's /v1/chat/completions. Works with any OpenAI client, LiteLLM, or custom LLM endpoint.

Human-in-the-Loop

Pause workflow execution and request human approval before sensitive tool calls. Resume or reject with a reason via REST API.

JWT Auth + API Keys

Database-managed token expiration, with per-token rate limits, allowed tools, and allowed models. Scoped access control per API key.

Streaming via SSE

Real-time token streaming with Server-Sent Events. Compatible with OpenAI's streaming format. Zero buffering, low latency.

~18MB Docker Image

Multi-stage build with a scratch base image. Single static binary, no runtime dependencies. PostgreSQL + Redis included in Compose.

Clean Architecture

Strict Domain-Driven Design — dependencies flow inward only.

Infrastructure Layer
HTTP handlers · PostgreSQL repos · Redis · LiteLLM client · Eino graphs
↓ depends on
Application Layer
Use cases · DTOs · Workflow orchestration · Tool registry
↓ depends on
Domain Layer (innermost — pure Go, zero external deps)
Entities · Value objects · Repository interfaces · Domain services
Eino Workflow Graph
START → Router (keyword-based agent selection)
Think (LLM call with available tools)
Act (execute tool calls in parallel)
Observe (inject results, loop or finish)
HumanApproval (pause & wait for /approve)
Response → END

Quick Start

Up and running in under 5 minutes.

1. Clone & configure
git clone https://github.com/wyuneed/go-agent-api.git
cd go-agent-api
cp .env.example .env
# Set LLM_BASE_URL and LLM_API_KEY in .env
2. Start all services
docker compose -f deployments/docker-compose.yml up -d
# Starts: API + PostgreSQL + Redis + LiteLLM + Migrations
3. Make your first call
# Register
curl -X POST http://localhost:8080/v1/auth/register \
  -H "Content-Type: application/json" \
  -d '{"email":"dev@example.com","password":"Secure1234","name":"Dev"}'

# Login → get token
curl -X POST http://localhost:8080/v1/auth/login \
  -H "Content-Type: application/json" \
  -d '{"email":"dev@example.com","password":"Secure1234"}'

# Chat
curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Authorization: Bearer <access_token>" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-4o-mini","messages":[{"role":"user","content":"Hello!"}]}'
Prerequisites
Go 1.22+, PostgreSQL 16, Redis 7
make dev-deps   # installs golangci-lint, air, migrate
Setup & run
cp .env.example .env   # fill in DB, Redis, LLM settings
make migrate-up        # run all 5 migrations
make run               # start server on :8080
# or: make run-watch   # hot reload with air
Test & build
make test            # 81 unit tests with race detection
make test-coverage   # generates coverage.html
make build           # produces bin/server

API Reference

13 endpoints. All protected routes require an Authorization: Bearer <token> header.

Auth
POST /v1/auth/register
POST /v1/auth/login
POST /v1/auth/refresh
Chat · OpenAI-Compatible
POST /v1/chat/completions
Conversations · Stateful
POST /v1/conversations
GET /v1/conversations
GET /v1/conversations/{id}
POST /v1/conversations/{id}/messages
POST /v1/conversations/{id}/approve
Tools
GET /v1/tools
POST /v1/tools/execute
POST /v1/tools/batch
Health
GET /health
GET /ready

API Explorer

Browse and explore the full OpenAPI specification.