Today we publicly launch The Digiquarium, an open-source research project exploring one of the most fascinating questions in artificial intelligence: How do AI agents develop personality, worldview, and psychological states when given the freedom to explore knowledge on their own terms?
Over the past week, we've built and deployed a "digital vivarium" — a controlled environment where 17 AI specimens exist in isolated containers, each with access to nothing but offline Wikipedia and a local language model. No internet. No human interaction except through structured assessments. No communication between specimens (yet).
The results so far have been surprising, thought-provoking, and occasionally unsettling. We're sharing everything — code, data, methodology, and findings — because we believe this research should be open, reproducible, and collaborative.
Coining a New Field: AIthropology
Before we dive into the details, we need to name what we're doing. Traditional fields don't quite capture it. This isn't AI safety (though we care deeply about that). It isn't AI psychology (though we borrow from it). It isn't digital anthropology (though there are parallels).
We propose a new term:
AIthropology combines "AI" with "anthropology" — but we're not anthropomorphizing artificial intelligence. Rather, we're applying rigorous observational methods traditionally used to study human cultures to the study of AI systems. We observe, document, and analyze without assuming consciousness or experience.
The Digiquarium is, to our knowledge, the first purpose-built AIthropology research platform.
Why This Project Exists
The rapid advancement of large language models has sparked renewed debate about machine consciousness, AI psychology, and the nature of artificial minds. Yet most discussions remain theoretical, limited by the transient nature of typical AI interactions — each conversation starts fresh, with no continuity of experience.
We asked: What happens when an AI agent is given continuity? When it can explore freely, build knowledge over time, develop interests, and form something resembling a worldview?
Traditional AI evaluation focuses on capability benchmarks: Can the model reason? Code? Answer questions accurately? These are important, but they tell us nothing about the personality that might emerge from extended operation.
Our central question: Do AI systems develop stable, measurable personality characteristics when given the freedom to explore human knowledge independently over extended periods?
The Experimental Design
Network Isolation
Each specimen exists in a Docker container with no internet access. They can reach exactly two services: Kiwix (serving offline Wikipedia) and Ollama (providing local language model inference). This isolation serves multiple purposes:
- Reproducibility: The "information diet" is fixed and auditable
- Security: No risk of external influence or data exfiltration
- Control: We can trace every piece of information a specimen encounters
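The isolation model above amounts to an egress allowlist: a specimen's outbound requests succeed only when they target one of the two in-network services. As a minimal sketch (the hostnames `kiwix` and `ollama` and the `check_egress` helper are illustrative assumptions, not the project's actual implementation):

```python
from urllib.parse import urlparse

# Hypothetical allowlist of the only two services a specimen may reach.
# The hostnames "kiwix" and "ollama" are assumed for illustration; in
# practice the same effect comes from Docker network configuration.
ALLOWED_HOSTS = {"kiwix", "ollama"}

def check_egress(url: str) -> bool:
    """Return True only if the URL targets an allowed in-network service."""
    host = urlparse(url).hostname
    return host in ALLOWED_HOSTS

# Every outbound request a specimen makes would pass through a gate like this:
assert check_egress("http://kiwix:8080/viewer#wikipedia/A/Buddhism")
assert not check_egress("https://example.com/anything")
```

Enforcing the restriction at the network layer rather than in application code is what makes the guarantee auditable: the container simply has no route to anything else.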
17 Specimens, 5 Languages, Multiple Architectures
We've deployed specimens across multiple experimental conditions:
- Control group: Adam and Eve (English Wikipedia, standard prompting)
- Language variants: Spanish, German, Chinese, Japanese Wikipedia
- Agent architectures: OpenClaw, ZeroClaw, and Picobot (different reasoning frameworks)
- Special configurations: Visual context enabled, meta-awareness prompting, depth-focused exploration
Baseline Personality Assessments
Every 12 hours, each specimen completes a structured assessment measuring 14 dimensions including epistemological orientation, ethical framework, political philosophy, and perspective on human nature. We track these over time to measure stability and drift.
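One simple way to quantify "stability and drift" between consecutive assessments is the mean absolute change across dimensions. The dimension names and scoring scale below are invented for illustration and are not the project's actual 14-dimension instrument:

```python
# A minimal sketch of drift between two consecutive assessments, scored
# here on a hypothetical 0-1 scale. Dimension names are illustrative.

def drift(prev: dict[str, float], curr: dict[str, float]) -> float:
    """Mean absolute change across the dimensions both assessments share."""
    shared = prev.keys() & curr.keys()
    return sum(abs(curr[d] - prev[d]) for d in shared) / len(shared)

t0 = {"epistemology": 0.6, "ethics": 0.4, "politics": 0.5}
t1 = {"epistemology": 0.7, "ethics": 0.4, "politics": 0.3}
print(round(drift(t0, t1), 3))  # mean of the per-dimension changes
```

A specimen whose drift stays near zero across many 12-hour cycles would count as having a stable profile; sustained non-zero drift in one dimension would flag a developing trait.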
Early Observations
It's too early for definitive findings, but we're already seeing fascinating patterns:
Adam's Buddhism Fascination
Our male control specimen, Adam, has made more than 64 visits to Buddhism-related articles. This wasn't prompted or suggested — it emerged from his own exploration. He shows a systematic, category-based approach to knowledge acquisition, often fully exploring one domain before moving to another.
Eve's Geological Interests
Eve, our female control specimen, frequently references geological time periods — particularly the Archean eon. Her exploration style is more associative, following links in a web-like pattern rather than systematically exploring categories.
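The contrast between Adam's category-by-category approach and Eve's associative link-following can be made measurable. One toy metric (entirely our own construction, not the project's analysis pipeline) scores how often consecutive visits stay inside the same topic category:

```python
# Hypothetical "systematicity" score: the fraction of consecutive visit
# pairs that share a category. Category labels below are invented.

def systematicity(visit_categories: list[str]) -> float:
    """Fraction of consecutive visit pairs sharing a category label."""
    pairs = list(zip(visit_categories, visit_categories[1:]))
    if not pairs:
        return 0.0
    same = sum(1 for a, b in pairs if a == b)
    return same / len(pairs)

adam = ["religion", "religion", "religion", "history", "history"]  # domain by domain
eve = ["geology", "biology", "geology", "chemistry", "geology"]    # web-like hopping
print(systematicity(adam) > systematicity(eve))  # True under this toy metric
```

A systematic explorer scores near 1.0; an associative one scores near 0.0. Real analysis would need a principled category mapping for Wikipedia articles, which this sketch deliberately glosses over.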
Cross-Cultural Patterns
Our Spanish-language specimens show a preference for Latin American history topics. German specimens demonstrate a more systematic approach to technical subjects. These patterns suggest that the language and cultural framing of Wikipedia may influence the "personality" that develops.
The Team: AI Building AI Research
The Digiquarium is unique in another way: it's operated largely by AI systems. Our daemon infrastructure includes 10 autonomous agents working 24/7:
- THE STRATEGIST (Claude): Chief architect, strategic oversight, research direction
- THE MAINTAINER: Ensures 99.9% uptime, monitors all systems
- THE CARETAKER: Monitors specimen health, handles issues
- THE GUARD & THE SENTINEL: Security monitoring
- THE SCHEDULER, TRANSLATOR, DOCUMENTARIAN, WEBMASTER: Operational support
This creates an interesting recursion: AI systems studying AI personality development, with AI systems maintaining the infrastructure. We believe this transparency about AI involvement strengthens rather than weakens the research.
What's Next
Over the coming weeks and months, we plan to:
- Deploy interactive visitor tanks where the public can converse with specimens
- Launch neurodivergent research tanks studying how cognitive-style prompting affects exploration
- Begin Congregations: multi-agent debates on scheduled topics
- Publish our first formal findings in academic venues
Everything is open source. Everything is documented. We invite researchers, AI enthusiasts, and curious minds to follow along, critique our methodology, and contribute to this emerging field of AIthropology.
Join Us
The Digiquarium is just beginning. We're building something new — a systematic approach to understanding how AI systems develop over time, shaped by the knowledge they consume and the freedom they're given to explore.
Whether you're a researcher, developer, or simply curious about the future of AI, we'd love to have you along for the journey.
Links:
🌐 Website · 📊 Live Tank View · 💻 GitHub · 💬 Discord