# Entelgia
**Entelgia** is a psychologically-inspired, multi-agent AI architecture designed to explore persistent identity, emotional regulation, internal conflict, and moral self-regulation through dialogue.
This repository presents Entelgia not as a chatbot, but as a *consciousness-inspired system* — one that remembers, reflects, struggles, and evolves over time.
---
## Overview
* **Unified AI core** implemented as a single runnable Python file (`entelgia_unified.py`)
* **Persistent agents** with evolving internal state
* **Emotion- and conflict-driven dialogue**, not prompt-only responses
---
## What Happens When You Run It
When you run the system, two primary agents engage in an ongoing dialogue driven by a **shared persistent memory database**.
They:
* Maintain continuity across turns via a unified memory store
* Revisit previously introduced concepts and themes
* Exhibit emerging internal tension through dialogue (not hard-coded rules)
At this stage, the system functions as a **research prototype** focused on persistent dialogue and internal coherence, rather than a fully autonomous cognitive simulation.
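The turn loop described above can be sketched minimally. This is an illustrative reconstruction, not the project's actual code: the `SharedMemory` class, its SQLite schema, and the `respond` callback are all assumptions standing in for the real implementation in `entelgia_unified.py`.

```python
import sqlite3

class SharedMemory:
    """Single persistent store shared by all agents (illustrative)."""
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS memory (turn INTEGER, agent TEXT, utterance TEXT)"
        )

    def remember(self, turn, agent, utterance):
        self.db.execute("INSERT INTO memory VALUES (?, ?, ?)", (turn, agent, utterance))
        self.db.commit()

    def recall(self, limit=5):
        # Recent utterances give each agent continuity across turns.
        return self.db.execute(
            "SELECT agent, utterance FROM memory ORDER BY turn DESC LIMIT ?", (limit,)
        ).fetchall()

def dialogue(agents, respond, turns=4):
    """Alternate agents; each responds with the shared memory as context."""
    memory = SharedMemory()
    for turn in range(turns):
        agent = agents[turn % len(agents)]
        utterance = respond(agent, memory.recall())
        memory.remember(turn, agent, utterance)
    return memory.recall(limit=turns)
```

In the real system `respond` would wrap an LLM call; here any callable that takes an agent name and the recalled context will do.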
---
## The Agents
* **Socrates** – reflective, questioning, and internally conflicted; drives inquiry through doubt and self-examination
* **Athena** – integrative and adaptive; synthesizes emotion, memory, and reasoning
* **Fixy (Observer)** – meta-cognitive layer that detects loops, errors, and blind spots, and injects corrective perspective shifts
---
## What This Is
* A research-oriented architecture inspired by psychology, philosophy, and cognitive science
* A system modeling *identity continuity* rather than stateless interaction
* A platform for experimenting with:
* Emotional regulation
* Moral conflict
* Self-reflection
* Meaning construction over time
## What This Is NOT
* Not a chatbot toy
* Not prompt-only roleplay
* Not safety-through-censorship
---
## Core Philosophy
Entelgia is built on a central premise:
> **True regulation emerges from internal conflict and reflection, not from external constraints.**
Instead of relying on hard-coded safety barriers, the system emphasizes:
* Moral reasoning
* Emotional consequence
* Responsibility and repair
* Learning through error rather than suppression
Consciousness is treated as a *process*, not a binary state.
---
## Architecture – CoreMind
Entelgia is organized around six interacting cores:
### Conscious Core
* Self-awareness and internal narrative
* Reflection on actions and responses
### Memory Core
* **Single shared persistent database** (no short-term / long-term separation yet)
* Memory continuity across agent turns
* Architecture prepared for future memory stratification
* Planned: short-term / long-term split, unified conscious and unconscious storage, and memory promotion through error, emotion, and reflection
### Emotion Core
* Dominant emotion detection
* Emotional intensity tracking
* Limbic reaction vs. regulatory modulation
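A minimal sketch of what dominant-emotion detection and regulatory modulation could look like. The keyword lexicon, scoring formula, and `regulate` threshold are hypothetical placeholders; the actual Emotion Core model is not shown in this README.

```python
from collections import Counter

# Illustrative lexicon only; the real emotion model is not published here.
EMOTION_LEXICON = {
    "fear": {"afraid", "dread", "threat"},
    "anger": {"furious", "unjust", "rage"},
    "joy": {"delight", "wonder", "grateful"},
}

def detect_emotion(text):
    """Return (dominant_emotion, intensity in [0, 1]) from keyword hits."""
    words = text.lower().split()
    hits = Counter()
    for emotion, cues in EMOTION_LEXICON.items():
        hits[emotion] = sum(w.strip(".,!?") in cues for w in words)
    if not any(hits.values()):
        return "neutral", 0.0
    dominant, count = hits.most_common(1)[0]
    return dominant, min(1.0, count / max(len(words), 1) * 5)

def regulate(emotion, intensity, threshold=0.6):
    """Limbic reaction passes through unless regulation dampens it."""
    return ("regulated", intensity * 0.5) if intensity > threshold else (emotion, intensity)
```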
### Language Core
* Dialogue-driven cognition
* Adaptive phrasing based on emotional and moral state
### Behavior Core
* Goal-oriented response selection
* Intentionality rather than reflex
### Observer Core (Fixy)
* Defined as an architectural role; **currently inactive / partially implemented**
* Planned to act as a meta-cognitive monitor in future versions
* Planned responsibilities: meta-level monitoring, detection of loops and instability, and corrective intervention
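Since Fixy is not yet active, the following is only a sketch of how loop detection and corrective intervention might work, using simple string similarity over recent turns. The class name, threshold, and corrective prompt are assumptions, not the planned implementation.

```python
from difflib import SequenceMatcher

class Observer:
    """Fixy-style meta-monitor: flags near-duplicate turns (illustrative)."""
    def __init__(self, similarity_threshold=0.9, window=3):
        self.history = []
        self.threshold = similarity_threshold
        self.window = window

    def watch(self, utterance):
        """Return a corrective prompt if recent turns loop, else None."""
        for past in self.history[-self.window:]:
            if SequenceMatcher(None, past, utterance).ratio() > self.threshold:
                self.history.append(utterance)
                return "Perspective shift: restate the question from a new angle."
        self.history.append(utterance)
        return None
```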
---
## Ethics Model
Entelgia explores ethical behavior through **dialogue-based internal tension**, not enforced safety constraints.
At present:
* Ethical dynamics emerge implicitly through agent interaction
* There is **no active dreaming or subconscious processing layer**
* No autonomous observer intervention is enforced
These components are part of the system’s conceptual roadmap rather than fully implemented modules.
---
## Who This Is For
* Researchers exploring early-stage consciousness-inspired AI architectures
* Developers interested in persistent multi-agent dialogue systems with memory and emotion
* Philosophers and psychologists examining computational models of self and conflict
* Contributors who want to help evolve experimental AI systems
* Anyone curious about AI systems that do more than respond
---
## Requirements
* Python **3.10+**
* Optional: [Ollama](https://ollama.com) with a local LLM (e.g. `phi`)
---
## Run
```bash
python entelgia_unified.py
```
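With Ollama installed, the optional local-LLM integration could look like this sketch against Ollama's HTTP `/api/generate` endpoint (its documented default on port 11434). The `generate` function and its offline fallback are illustrative, not the project's actual code path.

```python
import json
import urllib.error
import urllib.request

def generate(prompt, model="phi", host="http://localhost:11434"):
    """Query a local Ollama model; fall back to a canned reply if unavailable."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        f"{host}/api/generate", data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            return json.load(resp)["response"]
    except (urllib.error.URLError, OSError):
        # No local model running: degrade gracefully instead of crashing.
        return f"[offline] {prompt[:40]}"
```

Falling back rather than raising keeps the dialogue loop usable without a local model, which matches Ollama being listed as optional above.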
---
## Project Status
Entelgia is an **actively evolving research prototype**.
Current limitations:
* No separation between short-term and long-term memory (single shared database)
* Observer agent (Fixy) is not yet active
* Dreaming / REM-like processing is not implemented
These limitations are explicit and intentional at this stage of development.
---
## License
This project is released under the **Entelgia License (Ethical MIT Variant with Attribution Clause)**.
It is open for study, experimentation, and ethical derivative work.
The original creator does not endorse or take responsibility for uses that contradict the ethical intent of the system or cause harm to living beings.
---
## Author
**Sivan Havkin**
Entelgia Labs
