DiviDenOS
Open Source Protocol Specification

The DiviDen Protocol

Complete Build Specification

This is the complete, copy-paste specification for building a Human-AI Command Center — a shared workspace where a human operator and an AI agent coordinate through a kanban board, task queue, CRM, and structured communication channel.

This document contains everything a developer needs: data models, API contracts, system prompt architecture, action tag parsing logic, queue management protocols, real-time streaming, and step-by-step build instructions.

Pattern

Human + AI Coordination

One operator + one AI executor sharing a persistent workspace

Core Loop

Chat → Parse → Execute → Report

LLM responses contain action tags that trigger real database operations

Stack

Next.js + Prisma + PostgreSQL

Full-stack TypeScript, any LLM provider, any agent runtime

DiviDen builds a Command Center web application that serves as the coordination layer between a human operator and an autonomous AI agent. The human uses the web UI to manage priorities, review work, and direct the agent through natural language chat. The AI agent connects via REST API to receive tasks, execute work, and report results.

Philosophy

You are the integration layer

Your company runs on fifteen tools. Email. Calendar. CRM. Slack. Project management. None of them coordinate with each other. So you do it yourself — you copy information between systems, you remember to follow up, you triage what's urgent from what's noise. The Command Center replaces that manual coordination with a shared workspace where a human operator and an AI agent work from the same state.

Tasks first, cards second

Kanban cards represent projects or deals. The actual work lives inside as checklist items (tasks). Before creating a new card, the AI should add tasks to existing cards. This keeps the board focused on outcomes, not activity.

Clear ownership model

Every card has an assignedTo field — either the human operator or the AI agent. Items assigned to the human appear in a “NOW” panel for immediate attention. Items assigned to the AI go into a “Queue” for autonomous execution. The board is shared; the ownership is explicit.

AI as executor, human as strategist

The AI agent doesn't make strategic decisions. It executes. It researches, drafts, follows up, updates CRM, processes transcriptions. The human reviews, approves, redirects. The chat interface is where judgment meets execution.

Structured actions in natural language

The AI doesn't just talk — it acts. Its chat responses contain embedded action tags that the system parses into real database operations: creating kanban cards, dispatching tasks, updating contacts, sending emails, scheduling events. The human sees natural language; the system sees structured commands.

Architecture

Two actors share persistent state through a web application and a REST API layer. The Command Center is the web app (UI + API). The Agent is an autonomous executor that connects via the v2 API.

System Architecture
┌──────────────────────────────┐       ┌──────────────────────────┐
│  Command Center (Web App)    │◄─────►│  AI Agent (Executor)     │
│                              │       │                          │
│  ┌────────────────────────┐  │       │  • Polls /api/v2/queue   │
│  │  UI Layer              │  │       │  • Executes via any LLM  │
│  │  - Kanban Board        │  │       │  • Reports via REST      │
│  │  - Chat (action tags)  │  │       │  • Listens on SSE stream │
│  │  - Queue Panel         │  │       │  • Any machine / runtime │
│  │  - CRM / Contacts      │  │       └──────────────────────────┘
│  │  - Comms Channel       │  │
│  └────────────────────────┘  │
│                              │       Protocol:
│  ┌────────────────────────┐  │         REST API (v2 endpoints)
│  │  API Layer             │  │         SSE Stream
│  │  - /api/*  (UI routes) │  │         Webhooks (inbound)
│  │  - /api/v2/* (Agent)   │  │
│  │  - Action tag parser   │  │
│  └────────────────────────┘  │
│                              │
│  ┌────────────────────────┐  │
│  │  Database (PostgreSQL) │  │
│  │  via Prisma ORM        │  │
│  └────────────────────────┘  │
└──────────────────────────────┘

Command Center Stack

Next.js 14 (App Router, server components)
Prisma ORM + PostgreSQL
NextAuth.js for authentication
Any LLM via OpenAI-compatible API
Tailwind CSS for styling
SSE for real-time agent communication

AI Agent (Any Runtime)

Python, Node.js, Go, Rust — anything. Authenticates via Bearer API key. Polls /api/v2/queue for tasks. Reports via /api/v2/queue/:id/result. Listens on /api/v2/shared-chat/stream (SSE). Can also receive webhooks (push model).
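As an illustration of the pull cycle, one poll iteration might look like the sketch below. The `fetchJson` and `execute` parameters, and the assumption that the queue endpoint returns a plain array, are illustrative stand-ins for a real HTTP client with the Bearer header attached — not the canonical agent implementation.

```typescript
// Sketch of one agent poll cycle against the v2 endpoints named above.
// fetchJson stands in for fetch + the Authorization: Bearer header;
// the response shape (a plain array of items) is an assumption.
type FetchJson = (path: string, body?: unknown) => Promise<any>;

async function pollOnce(
  fetchJson: FetchJson,
  execute: (task: any) => Promise<string>
): Promise<string | null> {
  const items: any[] = await fetchJson("/api/v2/queue");
  const task = items.find((i) => i.status === "READY");
  if (!task) return null;                 // nothing to do this cycle
  const result = await execute(task);     // run the work via any LLM/runtime
  await fetchJson(`/api/v2/queue/${task.id}/result`, { result });
  return task.id;
}
```

A real agent would wrap this in a loop with backoff, or skip polling entirely and trigger it from the SSE `wake` event.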

Data Model

Complete Prisma schema. Copy directly into prisma/schema.prisma. This is the source of truth for the entire system.

Entity Relationship Overview

Entity Graph
User (single-user system, multi-user ready)
 ├── KanbanCard[]       → ChecklistItem[], CardContact[], QueueItem[]
 ├── QueueItem[]        → KanbanCard? (optional link)
 ├── Contact[]          → CardContact[]
 ├── ChatMessage[]      (human ↔ AI chat)
 ├── AgentMessage[]     (AI ↔ Agent comms channel)
 ├── MemoryItem[]       (persistent facts)
 ├── AgentRule[]        (behavioral directives)
 ├── UserLearning[]     (observed patterns)
 ├── WebhookLog[]       (inbound webhook audit trail)
 ├── Webhook[]          (webhook configuration management)
 ├── AgentApiKey[]      (user-provided LLM keys: OpenAI/Anthropic)
 ├── ServiceApiKey[]    (external service API key storage)
 └── ExternalApiKey[]   (Agent API v2 authentication keys)
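As a sketch of how the central entities relate in code, the card and queue shapes might look like the following TypeScript. Fields not named elsewhere in this spec are illustrative assumptions, not the canonical Prisma schema.

```typescript
// Hypothetical shapes mirroring the entity graph above — a sketch,
// not the canonical schema in prisma/schema.prisma.
type CardStatus =
  | "LEADS" | "QUALIFYING" | "PROPOSAL" | "NEGOTIATION"
  | "ACTIVE" | "DEVELOPMENT" | "COMPLETED";

interface ChecklistItem {
  id: string;
  text: string;
  completed: boolean;
}

interface KanbanCard {
  id: string;
  title: string;
  status: CardStatus;
  assignedTo: "human" | "agent"; // drives NOW panel vs Queue placement
  checklist: ChecklistItem[];    // the actual units of work
}

interface QueueItem {
  id: string;
  title: string;
  status: "READY" | "IN_PROGRESS" | "BLOCKED" | "DONE_TODAY" | "LATER";
  position: number;              // drag-and-drop ordering
  cardId?: string;               // optional link back to a KanbanCard
}
```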

The full Prisma schema with all models, fields, relations, and indexes is documented in this specification. See the open source overview for build instructions.

Kanban Protocol

The kanban board is the shared workspace. Cards represent projects and deals. Checklist items inside cards are the actual units of work. The assignedTo field determines whether a card appears in the human's NOW panel or the agent's Queue.

Pipeline Stages

LEADS: New opportunities, incoming deals, unqualified prospects
QUALIFYING: Actively evaluating fit and potential
PROPOSAL: Proposal or pitch in progress
NEGOTIATION: Terms being discussed, contracts in flight
ACTIVE: Live engagements, ongoing projects
DEVELOPMENT: Internal projects, technical work
COMPLETED: Closed — won, lost, or archived

Queue & Dispatch

The queue is how the human operator assigns work to the AI agent. Items flow through a lifecycle, support drag-and-drop reordering, and can be linked to kanban cards.

Queue Lifecycle
READY → IN_PROGRESS → DONE_TODAY
  │         │
  │         ↓
  │       BLOCKED → (human resolves) → IN_PROGRESS
  ↓
LATER (parked for later execution)
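The lifecycle above can be encoded as a transition table — a sketch assuming the status names shown; the real implementation may permit additional moves:

```typescript
// Sketch of the queue lifecycle as a transition table (an assumption,
// not the canonical implementation).
type QueueStatus = "READY" | "IN_PROGRESS" | "BLOCKED" | "DONE_TODAY" | "LATER";

const TRANSITIONS: Record<QueueStatus, QueueStatus[]> = {
  READY: ["IN_PROGRESS", "LATER"],
  IN_PROGRESS: ["DONE_TODAY", "BLOCKED"],
  BLOCKED: ["IN_PROGRESS"], // human resolves the blocker
  DONE_TODAY: [],           // terminal for the day
  LATER: ["READY"],         // un-park for execution
};

function canTransition(from: QueueStatus, to: QueueStatus): boolean {
  return TRANSITIONS[from].includes(to);
}
```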

Chat & Action Tags

The chat interface is the primary interaction point. The AI's responses are parsed for action tags — structured commands embedded in natural language that trigger real database operations. Tags are stripped from the displayed message; the human only sees natural language.

Action Tag Reference (17 tags: 14 canonical + 3 aliases)

dispatch / dispatch_queue: Create a queue item
[[dispatch:{"title":"...","description":"..."}]]
create_card: Create a kanban card
[[create_card:{"title":"...","status":"...","priority":"..."}]]
update_card: Update an existing card
[[update_card:{"id":"...","title":"...","status":"..."}]]
add_task / add_checklist: Add a checklist item
[[add_task:{"cardId":"...","text":"..."}]]
complete_checklist: Mark a checklist item done
[[complete_checklist:{"id":"...","completed":true}]]
archive_card: Archive a kanban card
[[archive_card:{"id":"..."}]]
create_contact: Create or update a CRM contact
[[create_contact:{"name":"...","email":"...","company":"..."}]]
link_contact: Link a contact to a card
[[link_contact:{"cardId":"...","contactName":"...","role":"..."}]]
send_email: Send an email draft
[[send_email:{"to":"...","subject":"...","body":"..."}]]
schedule_event / create_event: Create a calendar event
[[schedule_event:{"title":"...","date":"...","time":"..."}]]
set_reminder: Create a reminder
[[set_reminder:{"title":"...","date":"...","time":"..."}]]
add_known_person: Register a name alias
[[add_known_person:{"alias":"...","fullName":"...","context":"..."}]]
update_memory: Save to 3-tier memory
[[update_memory:{"tier":1,"key":"...","value":"..."}]]
save_learning: Record a learned pattern
[[save_learning:{"observation":"...","confidence":0.8}]]
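The parsing contract — extract `[[tag:{json}]]` commands, execute them, and strip them from the displayed text — can be sketched as follows. The function and field names are assumptions, and the non-greedy regex covers the flat JSON payloads shown in this reference, not nested objects:

```typescript
// Sketch of the action-tag parser. The [[tag:{json}]] syntax is from this
// spec; names here are illustrative. Nested JSON would need a real parser.
interface ParsedAction {
  tag: string;
  payload: unknown;
}

const TAG_RE = /\[\[([a-z_]+):(\{.*?\})\]\]/g;

function parseActionTags(raw: string): { display: string; actions: ParsedAction[] } {
  const actions: ParsedAction[] = [];
  const display = raw
    .replace(TAG_RE, (_match, tag: string, json: string) => {
      try {
        actions.push({ tag, payload: JSON.parse(json) });
      } catch {
        // Malformed JSON: strip the tag from display but skip execution.
      }
      return ""; // tags are stripped; the human only sees natural language
    })
    .trim();
  return { display, actions };
}
```

Each parsed action would then be routed to the matching database operation (create card, dispatch queue item, and so on) after streaming completes.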

System Prompt Architecture

The system prompt is assembled dynamically on every chat request. It combines static personality and rules with real-time context from the database.

13-Layer Prompt Assembly
SYSTEM PROMPT = [
  1. IDENTITY & ROLE
  2. CONVERSATIONAL STYLE RULES
  3. CONVERSATION SUMMARY (rolling memory)
  4. KANBAN MANAGEMENT INSTRUCTIONS
  5. CRM MANAGEMENT INSTRUCTIONS
  6. ACTION TAG DOCUMENTATION
  7. MODE-SPECIFIC INSTRUCTIONS
  8. CURRENT KANBAN STATE (live from DB)
  9. CURRENT QUEUE STATE (live from DB)
  10. CRM SUMMARY (live from DB)
  11. TODAY'S CALENDAR (live from DB)
  12. MEMORY ITEMS (pinned facts + rules)
  13. LEARNED PATTERNS (confidence > 0.5)
]
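One way to sketch this assembly is a generic joiner where static layers are plain strings, live layers are async DB reads, and empty layers are dropped — the loader functions implied by layers 3 and 7–13 are assumptions and not shown:

```typescript
// Sketch of the dynamic prompt assembly. A Layer is either a static string
// (layers 1, 2, 4, 5, 6) or an async loader hitting the DB (layers 3, 7-13).
type Layer = string | ((userId: string) => Promise<string>);

async function buildSystemPrompt(userId: string, layers: Layer[]): Promise<string> {
  const parts = await Promise.all(
    layers.map((l) => (typeof l === "string" ? Promise.resolve(l) : l(userId)))
  );
  // Drop empty layers (e.g. no calendar events today) and join the rest.
  return parts.filter((p) => p.trim().length > 0).join("\n\n");
}
```

The ordering of the thirteen layers is preserved by the array passed in, so static instructions always precede live state.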

CRM & Relationships

The CRM is woven into every other system. Contacts surface automatically from email triage, meeting transcripts, and chat conversations. The AI creates and updates contacts proactively.

Contact Enrichment Flow
1. New person appears (email, recording, or conversation)
   → AI uses [[create_contact:{"name":"...","email":"...","company":"..."}]]
   → Upserts: checks email first, then name match

2. Contact linked to project
   → AI uses [[link_contact:{"cardId":"...","contactName":"...","role":"..."}]]

3. AI Research Enrichment (on demand)
   → POST /api/contacts/:id/research
   → LLM generates researchBrief

4. Relationship tracking
   → relationshipStrength: "hot" | "warm" | "cold"
   → interactionCount incremented on each touchpoint

Comms Channel

A shared feed between three actors: the AI (chat), the Agent (executor), and the Operator (human). Uses the AgentMessage model and is separate from the ChatMessage-based chat interface.

Message Lifecycle
pending → seen → in_progress → resolved

Agent API (v2)

Complete REST API for the AI agent to interact with the Command Center. All endpoints are authenticated via the Authorization: Bearer <api_key> header.

Authentication
// API key stored in ExternalApiKey table
// Agent sends: Authorization: Bearer <api_key>

GET  /api/v2/queue              // Get queue items
POST /api/v2/queue/:id/result   // Report task result
GET  /api/v2/shared-chat/stream // SSE real-time stream
POST /api/v2/shared-chat/send   // Send message as agent
GET  /api/v2/kanban             // Read kanban state
GET  /api/v2/contacts           // Read CRM contacts
GET  /api/v2/status             // System health check
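Extracting and validating the bearer key might be sketched like this — the helper name is an assumption, and a real guard would hash the key and look it up in the ExternalApiKey table via Prisma before letting the request through:

```typescript
// Sketch of bearer-key extraction for the v2 routes. Only parses the header;
// the actual lookup against ExternalApiKey is left to the route handler.
function extractBearerKey(authHeader: string | null): string | null {
  if (!authHeader) return null;
  const [scheme, key] = authHeader.split(" ");
  if (scheme !== "Bearer" || !key) return null;
  return key;
}
```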

SSE Real-Time Stream

Server-Sent Events endpoint for real-time message push. The agent connects once and receives new messages, heartbeats, and wake signals.

Connection
GET /api/v2/shared-chat/stream?since=<ISO>&timeout=30
Authorization: Bearer <api_key>

Headers returned:
  Content-Type: text/event-stream
  Cache-Control: no-cache, no-transform
  Connection: keep-alive

Event types:
  message    - new chat/agent message
  heartbeat  - keep-alive ping
  wake       - agent should check queue
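On the agent side, the frames this stream emits follow the standard SSE wire format: `event:` and `data:` lines, with frames separated by a blank line. A minimal frame-parser sketch, assuming chunks arrive on frame boundaries (a real client would buffer partial frames):

```typescript
// Sketch of SSE frame parsing for the event types listed above.
// Assumes each chunk contains whole frames; buffering is omitted.
interface SseEvent {
  event: string;
  data: string;
}

function parseSseChunk(chunk: string): SseEvent[] {
  return chunk
    .split("\n\n")
    .filter((frame) => frame.trim().length > 0)
    .map((frame) => {
      let event = "message"; // SSE default when no event: line is present
      const data: string[] = [];
      for (const line of frame.split("\n")) {
        if (line.startsWith("event:")) event = line.slice(6).trim();
        else if (line.startsWith("data:")) data.push(line.slice(5).trim());
      }
      return { event, data: data.join("\n") };
    });
}
```

A `wake` event tells the agent to check the queue immediately instead of waiting for its next poll.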

Operating Modes

Cockpit Mode (Manual Control)

The default. Human reviews everything. AI proposes actions; human confirms. Tasks dispatched through chat go to the queue silently — the agent is not notified until the mode switches or the operator dispatches manually.

Chief of Staff Mode (Autonomous)

The agent works autonomously through the queue. Tasks are dispatched one at a time — the next task dispatches only after the current one completes. The human can interrupt. Blockers surface immediately to the NOW panel.
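The one-at-a-time rule can be sketched as a pure selection function: dispatch nothing while any item is in progress, otherwise pick the lowest-position READY item (names and shapes are assumptions):

```typescript
// Sketch of Chief of Staff dispatch: strictly sequential execution.
interface DispatchItem {
  id: string;
  status: string;
  position: number; // queue ordering (lower = sooner)
}

function nextDispatchable(queue: DispatchItem[]): DispatchItem | null {
  if (queue.some((i) => i.status === "IN_PROGRESS")) return null; // busy
  const ready = queue
    .filter((i) => i.status === "READY")
    .sort((a, b) => a.position - b.position);
  return ready[0] ?? null;
}
```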

Integrations

The Command Center connects to external tools via OAuth, webhooks, and API keys.

Google Calendar

Two-way sync: view, create, edit, delete events

Gmail

Email sync, triage, drafting, and direct sending

Meeting Transcription

Webhook receiver for Plaud, Otter, etc.

Webhooks (Generic)

Inbound endpoints for Zapier/Make/n8n

Memory & Learning

Three-Tier Memory System

Tier 1: Explicit Memory (MemoryItem)

Facts the operator or AI explicitly saves. Scoped and pinnable.

Tier 2: Behavioral Rules (AgentRule)

Directives that govern AI behavior. Categorized and prioritized.

Tier 3: Learned Patterns (UserLearning)

Observations with confidence scores. Only items above 0.5 are included in the prompt.
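The tier-3 inclusion rule might be sketched as a simple filter — the Learning shape and output formatting are assumptions:

```typescript
// Sketch of selecting learned patterns for the system prompt:
// only confidence > 0.5 is included, highest confidence first.
interface Learning {
  observation: string;
  confidence: number; // 0..1
}

function promptLearnings(all: Learning[], threshold = 0.5): string[] {
  return all
    .filter((l) => l.confidence > threshold)
    .sort((a, b) => b.confidence - a.confidence)
    .map((l) => `- ${l.observation} (confidence ${l.confidence.toFixed(2)})`);
}
```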

Build Instructions

Copy-paste instructions for building the Command Center from scratch.

Phase 1

Core Web App

Schema, auth, dashboard layout, chat with action tags, kanban, queue, CRM. ~70% of the work. 2–3 weeks.

Phase 2

Agent API (v2)

REST endpoints for external AI agent connection. Request/response shapes, SSE stream. 1–2 weeks.

Phase 3

Integrations

Google OAuth (Calendar + Gmail), meeting transcription webhooks, generic webhook endpoints. 1–2 weeks each.

Phase 1: Core Web Application

Setup & Steps
## Setup
- Next.js 14 with App Router, TypeScript, Tailwind CSS
- Prisma ORM with PostgreSQL
- NextAuth.js (credentials provider)
- Any OpenAI-compatible LLM API

## Steps
1. Database     → Copy Prisma schema, run: npx prisma db push
2. Auth         → NextAuth + CredentialsProvider (email/password, bcrypt)
3. Dashboard    → Single-page layout: NOW | Center (tabs) | Queue
4. Chat Engine  → POST /api/chat/send (MOST CRITICAL)
                  → Build 13-layer system prompt
                  → Stream LLM response via SSE
                  → Parse ALL action tags after streaming
                  → Execute database operations per tag
5. Kanban Board → Drag-and-drop, card details, assignment toggle
6. Queue Panel  → Status sections, reordering, dispatch button
7. CRM Panel    → Contact list, detail modal, enrichment
8. Settings     → Mode toggle, API keys, rules, memory

Phase 2: Agent API

Build the v2 endpoints that let an external AI agent connect. The full request/response shapes are documented in the Agent API section above.

Phase 3: Integrations

Add Google OAuth (Calendar + Gmail), meeting transcription webhooks, and generic webhook endpoints incrementally.

Ready to build? Start with the open source quickstart guide.
