Multi-Agent Workflow Platform

Experimental platform for orchestrating teams of AI agents to handle complex tasks — content pipelines, data analysis, and automated research workflows.

CrewAI · LangGraph · Python · Claude API · AI

This is my most experimental project — part learning exercise, part genuinely useful tool. The idea is to break complex tasks into specialized roles and let AI agents collaborate.

Why Multi-Agent?

Single LLM calls hit a ceiling fast. Ask one model to research a topic, write about it, fact-check, and format the output, and the quality drops at each step. But if you assign each task to a specialized agent with its own system prompt, tools, and constraints, the results are dramatically better.

I got interested after reading Anthropic’s research on AI agents and started experimenting with CrewAI and LangGraph.

What It Does

The platform lets you define workflows as teams of agents:

  • Content pipeline: Researcher → Writer → Editor → Fact-checker (sketched in code below)
  • Data analysis: Data collector → Analyst → Visualizer → Report generator
  • Code review: Static analyzer → Security reviewer → Performance auditor → Summary writer
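
For a concrete picture, here is a minimal sketch of the content pipeline as a CrewAI crew, truncated to the first two agents. The roles, goals, and task descriptions are illustrative stand-ins, not the platform’s actual prompts:

    # Minimal sketch of the content pipeline as a CrewAI crew.
    # Roles, goals, and task text are illustrative.
    from crewai import Agent, Task, Crew, Process

    researcher = Agent(
        role="Researcher",
        goal="Gather accurate, sourced material on the given topic",
        backstory="Methodical; always notes where a claim came from.",
    )
    writer = Agent(
        role="Writer",
        goal="Turn research notes into a clear draft",
        backstory="Prefers plain language over jargon.",
    )

    research = Task(
        description="Research {topic} and list the key facts with sources.",
        expected_output="A bullet list of facts, each with a source.",
        agent=researcher,
    )
    draft = Task(
        description="Write an article from the research notes.",
        expected_output="A complete draft in markdown.",
        agent=writer,
    )

    crew = Crew(
        agents=[researcher, writer],
        tasks=[research, draft],
        process=Process.sequential,  # each task hands its output to the next
    )
    result = crew.kickoff(inputs={"topic": "multi-agent systems"})

Process.sequential is what implements the arrows above: each task’s output becomes context for the next agent in line.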

Each agent has:

  • A specific role and system prompt
  • Access to relevant tools (web search, code execution, file I/O)
  • Short-term memory (conversation context) and long-term memory (Redis-backed; see the sketch after this list)
  • Defined handoff rules for when to pass work to the next agent
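
To make the memory split concrete, here is a hedged sketch of the two layers: short-term context lives in-process, long-term notes survive in Redis. The class and key scheme are illustrative, not the platform’s actual implementation:

    # Sketch of the two memory layers (illustrative, not the real code).
    import json
    import redis

    class AgentMemory:
        def __init__(self, agent_id: str, url: str = "redis://localhost:6379"):
            self.agent_id = agent_id
            self.short_term: list[dict] = []        # conversation context, in-process only
            self.redis = redis.Redis.from_url(url)  # long-term, survives restarts

        def remember(self, note: dict) -> None:
            """Append a durable note to this agent's long-term memory."""
            self.redis.rpush(f"memory:{self.agent_id}", json.dumps(note))

        def recall(self, n: int = 10) -> list[dict]:
            """Fetch the n most recent long-term notes."""
            raw = self.redis.lrange(f"memory:{self.agent_id}", -n, -1)
            return [json.loads(item) for item in raw]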

Technical Implementation

Built on CrewAI for role-based agent orchestration, with LangGraph for the more complex workflows that need conditional branching and loops.
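
As a rough illustration of where LangGraph earns its keep, here is a sketch of a write → fact-check loop: a conditional edge either accepts the draft or routes it back for another revision. Node bodies are stubbed; the graph structure is the point:

    # Sketch of a LangGraph revision loop (node bodies stubbed).
    from typing import TypedDict
    from langgraph.graph import StateGraph, END

    class State(TypedDict):
        draft: str
        approved: bool
        revisions: int

    def write(state: State) -> dict:
        # real node would call the LLM to (re)write the draft
        return {"draft": "...", "revisions": state["revisions"] + 1}

    def fact_check(state: State) -> dict:
        # real node would call the LLM to verify claims
        return {"approved": True}

    def route(state: State) -> str:
        # loop back unless the draft passed or we're out of retries
        return "done" if state["approved"] or state["revisions"] >= 3 else "revise"

    builder = StateGraph(State)
    builder.add_node("write", write)
    builder.add_node("fact_check", fact_check)
    builder.set_entry_point("write")
    builder.add_edge("write", "fact_check")
    builder.add_conditional_edges("fact_check", route, {"revise": "write", "done": END})
    graph = builder.compile()

    result = graph.invoke({"draft": "", "approved": False, "revisions": 0})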

  • Visual workflow builder (React) for designing agent pipelines without code
  • Human-in-the-loop checkpoints — agents can pause and ask for human input at configured steps
  • Observability dashboard — traces every agent decision, tool call, and handoff
  • Cost tracking — shows token usage and estimated cost per workflow run (sketched below)
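
The cost tracking mostly boils down to reading the usage block the Anthropic API returns with every response. A minimal sketch, with placeholder prices (actual rates depend on the model and change over time):

    # Sketch of per-call cost tracking. Prices are placeholders.
    import anthropic

    PRICE_PER_MTOK = {"input": 3.00, "output": 15.00}  # assumed $/million tokens

    client = anthropic.Anthropic()
    total_cost = 0.0

    def tracked_call(**kwargs) -> anthropic.types.Message:
        global total_cost
        msg = client.messages.create(**kwargs)
        # every response reports token counts on msg.usage
        cost = (
            msg.usage.input_tokens / 1e6 * PRICE_PER_MTOK["input"]
            + msg.usage.output_tokens / 1e6 * PRICE_PER_MTOK["output"]
        )
        total_cost += cost
        return msg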

The backend is FastAPI with PostgreSQL for persistence, Redis for agent memory, and the Claude API for the actual LLM calls.
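
A stripped-down version of the endpoint that kicks off a run might look like the following; the path, payload shape, and run_workflow helper are assumptions for illustration, not the platform’s real API:

    # Hypothetical run-triggering endpoint (illustrative shape only).
    import uuid
    from fastapi import BackgroundTasks, FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class RunRequest(BaseModel):
        workflow_id: str
        inputs: dict

    def run_workflow(run_id: str, workflow_id: str, inputs: dict) -> None:
        ...  # would load the pipeline from Postgres, execute it, write traces

    @app.post("/runs")
    def start_run(req: RunRequest, background: BackgroundTasks) -> dict:
        run_id = str(uuid.uuid4())
        background.add_task(run_workflow, run_id, req.workflow_id, req.inputs)
        return {"run_id": run_id, "status": "started"}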

Honest Assessment

This works well for structured, repeatable workflows. The content pipeline produces better output than a single prompt about 70% of the time. But it’s slower, more expensive, and harder to debug when something goes wrong.

Multi-agent is powerful, but it’s not always the right tool. For simple tasks, a well-crafted single prompt still wins.

Timeline: Started experimenting in Q3 2024. The platform is usable, but I’d call it beta — I keep refactoring the agent memory system.