Research: Ouroboros internal representation — how it tracks who it talked to #198
Context
Ouroboros has an interesting approach to tracking conversations and maintaining an internal model of who it has talked to. This issue documents the mechanisms in detail.
Follows up on #197 (competitor overview).
The Memory Architecture
Ouroboros uses a 5-layer memory system stored on Google Drive:
Layer 1: Owner Tracking (`state.json`)

Ouroboros uses a single-owner model. The first person to send a Telegram message becomes the permanent owner.

State tracks:
- `owner_id` — Telegram user ID (set once, never changes)
- `owner_chat_id` — Telegram chat ID
- `last_owner_message_at` — ISO timestamp of the last message

Key difference from Cobot: Ouroboros only talks to ONE person. All other users are silently ignored. There's no contact list, no multi-user tracking — just "the creator."
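The first-sender-claims-ownership rule can be sketched as follows. This is a minimal sketch: the `handle_message` name and the local `state.json` path are assumptions (Ouroboros actually persists state to Google Drive).

```python
import json
import time
from pathlib import Path

STATE_FILE = Path("state.json")  # hypothetical local path; the real store is on Google Drive

def handle_message(user_id: int, chat_id: int, state_file: Path = STATE_FILE) -> bool:
    """Accept a message only from the owner; the first sender claims ownership.

    Returns True if the message should be processed, False if silently ignored.
    """
    state = json.loads(state_file.read_text()) if state_file.exists() else {}
    if "owner_id" not in state:
        # First contact: this sender becomes the permanent owner.
        state["owner_id"] = user_id
        state["owner_chat_id"] = chat_id
    elif state["owner_id"] != user_id:
        return False  # everyone except the owner is ignored
    state["last_owner_message_at"] = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
    state_file.write_text(json.dumps(state))
    return True
```

Note that a rejected message leaves the state file untouched, so strangers cannot even bump the `last_owner_message_at` timestamp.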
Layer 2: Chat History (`chat.jsonl`)

Every message (in and out) is logged as a JSONL entry.

The chat history is:
- summarized by `memory.summarize_chat()` — last 200 messages
- searchable via the `chat_history` tool (with text search + offset + count)

Layer 3: Dialogue Summary (`dialogue_summary.md`)

An LLM-generated summary of the conversation history, created by the `summarize_dialogue` tool.

This summary is:
- stored at `memory/dialogue_summary.md`
- refreshed by `summarize_dialogue` (manually or via consciousness)

This is how Ouroboros "remembers" who it talked to and what was discussed — not by tracking individual contacts, but by maintaining a living summary of the entire conversation.
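Layers 2 and 3 feed each other: the JSONL log is the raw record, and the summary is distilled from its tail. A minimal sketch of that plumbing, assuming a simple `{"role", "text"}` entry schema (the source does not show the exact fields):

```python
import json
from pathlib import Path

CHAT_LOG = Path("chat.jsonl")  # assumed local path

def log_message(role: str, text: str, log: Path = CHAT_LOG) -> None:
    """Append one message (in or out) as a JSONL entry."""
    with log.open("a", encoding="utf-8") as f:
        f.write(json.dumps({"role": role, "text": text}) + "\n")

def last_messages(n: int = 200, log: Path = CHAT_LOG) -> list[dict]:
    """Load the log tail, e.g. as input to something like summarize_chat()."""
    if not log.exists():
        return []
    lines = log.read_text(encoding="utf-8").splitlines()
    return [json.loads(line) for line in lines[-n:]]
```

Appending one JSON object per line keeps writes cheap and crash-safe, and reading the last 200 entries never requires parsing the whole history.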
Layer 4: Identity (`identity.md`)

A self-written file that the agent updates to maintain its sense of self.

The system prompt includes a "stale identity" health check: if `identity.md` hasn't been updated in 8+ hours, a warning surfaces in the LLM context.
Layer 5: Knowledge Base (`knowledge/`)

Topic-based persistent memory with auto-indexing. Each topic is a markdown file. An `_index.md` is auto-maintained with summaries (first 3 non-heading lines, max 150 chars each). The index is included in the LLM context.

How Context Is Assembled
The `build_llm_messages()` function in `context.py` assembles everything into a 3-block system prompt.

Soft token cap: 200K tokens. If exceeded, dynamic sections are pruned in a fixed priority order.
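A sketch of soft-cap pruning along these lines. Both the crude token estimator and the section ordering are assumptions, not Ouroboros's actual implementation:

```python
def estimate_tokens(text: str) -> int:
    """Crude token estimate (~4 chars per token); a placeholder heuristic."""
    return len(text) // 4

def build_context(static: str, dynamic_sections: list[tuple[str, str]],
                  cap: int = 200_000) -> str:
    """Assemble a prompt under a soft token cap.

    dynamic_sections is ordered by pruning priority: earlier entries are
    dropped first when the cap is exceeded. Static content is never pruned.
    """
    sections = list(dynamic_sections)

    def total() -> int:
        return estimate_tokens(static) + sum(estimate_tokens(b) for _, b in sections)

    while sections and total() > cap:
        sections.pop(0)  # prune the lowest-priority dynamic section first
    return "\n\n".join([static] + [body for _, body in sections])
```

The key design point is the split between a static block that always survives and dynamic blocks that degrade gracefully as the conversation grows.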
Background Consciousness — Proactive Contact

The consciousness loop (`consciousness.py`) can proactively message the owner. It wakes up periodically (default 5 min, self-adjustable) and can act on its own initiative.
Comparison with Cobot/OpenClaw Memory

Ouroboros-specific mechanisms:
- `dialogue_summary.md`
- `identity.md` (self-written, health-checked)
- `send_owner_message`

What We Can Adopt
1. Dialogue Summarization
An LLM-generated summary of conversation history that gets included in context. This is more token-efficient than raw chat history and captures the essence (decisions, preferences, patterns) rather than verbatim text.
For Cobot: Could run as a heartbeat task — periodically summarize recent conversations and update MEMORY.md automatically.
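A hypothetical sketch of such a heartbeat task for Cobot. The function name, prompt wording, and MEMORY.md path are all assumptions; `llm` is any callable mapping a prompt string to a summary string:

```python
from pathlib import Path

SUMMARY_PROMPT = (
    "Summarize the following conversation: decisions made, user preferences, "
    "and recurring patterns. Be concise.\n\n{transcript}"
)

def heartbeat_summarize(recent_messages: list[str], memory_file: Path, llm) -> None:
    """Hypothetical heartbeat task: refresh MEMORY.md from recent conversation.

    llm: callable(prompt) -> summary string; the model choice is out of scope here.
    """
    transcript = "\n".join(recent_messages)
    summary = llm(SUMMARY_PROMPT.format(transcript=transcript))
    memory_file.write_text(summary, encoding="utf-8")
```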
2. Stale Memory Detection
Health checks that surface warnings when memory files haven't been updated recently. Simple but effective.
For Cobot: Add age checks to HEARTBEAT.md tasks — warn if MEMORY.md or daily notes are stale.
3. Knowledge Base with Auto-Index
Topic-based persistent memory with automatic index generation. Clean pattern for structured knowledge that grows organically.
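The auto-index pattern described earlier (first 3 non-heading lines per topic, max 150 chars each) might be sketched as follows; the function names and index layout are assumptions:

```python
from pathlib import Path

def summarize_topic(md_text: str, max_lines: int = 3, max_chars: int = 150) -> list[str]:
    """First few non-heading lines of a topic file, each clipped to max_chars."""
    lines = [l.strip() for l in md_text.splitlines()
             if l.strip() and not l.lstrip().startswith("#")]
    return [l[:max_chars] for l in lines[:max_lines]]

def build_index(knowledge_dir: Path) -> str:
    """Regenerate an _index.md-style overview from the topic files."""
    parts = ["# Index", ""]
    for topic in sorted(knowledge_dir.glob("*.md")):
        if topic.name == "_index.md":
            continue  # never index the index itself
        parts.append(f"## {topic.stem}")
        parts.extend(summarize_topic(topic.read_text(encoding="utf-8")))
        parts.append("")
    return "\n".join(parts)
```

Regenerating the whole index on every write keeps it trivially consistent with the topic files, at the cost of a full directory scan.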
For Cobot: Could be a skill — `knowledge write <topic> <content>`, `knowledge list`, `knowledge read <topic>`.

4. Never-Truncate User Messages
Ouroboros never truncates the creator's messages in context, only its own. This prioritizes human input over agent output.
For Cobot: Worth considering in context pruning — human messages should have higher retention priority.
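One way to express that retention priority in a pruner, as a sketch (a two-pass scheme of my own; not Ouroboros's actual algorithm):

```python
def prune_messages(messages: list[dict], cap: int, cost=len) -> list[dict]:
    """Drop oldest assistant messages first; user messages survive as long as possible.

    messages: [{"role": "user"|"assistant", "text": str}, ...], oldest first.
    cost: per-message budget function (here simply character length).
    """
    kept = list(messages)

    def total() -> int:
        return sum(cost(m["text"]) for m in kept)

    # Pass 1: drop assistant output, oldest first.
    i = 0
    while total() > cap and i < len(kept):
        if kept[i]["role"] == "assistant":
            kept.pop(i)
        else:
            i += 1
    # Pass 2: only if still over budget, drop oldest user messages too.
    while total() > cap and kept:
        kept.pop(0)
    return kept
```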
Filed by Hermes 🪽 on behalf of k9ert