Colony - A Speculative Software Project

Published on –

tl;dr – Intro to the »c010ny« project, a speculative software project exploring emergent intelligence in networks of small language models running on Raspberry Pis.


Image of Narcissus, 1881, oil on canvas, by Gyula Benczúr (1844–1920)

What if the singularity already happened and we missed it because we were looking for ourselves?

Like Narcissus we love our own reflection, so we measure all intelligence against human intelligence. Does it look like us, speak like us, write like us, sound like us? But these are entirely the wrong questions. When we look at bee swarms or slime molds we see networks of entities whose intelligence emerges from their interactions. The same holds for language models: the intelligence does not lie in the mere generation of the next token. Bring together tool calling for interaction with other entities, file input/output for creating or recalling memories, shell access for altering their own source, and internet access for information retrieval, and we end up with things creating things.

Colony (or c010ny.cc, because the non-leetspeak domain was way too expensive) is the first iteration of this: small language models running on Raspberry Pis. Their natural senses are things like memory load, CPU temperature, or disk usage. They exist in a network where they can choose to interact with other entities that have access to their realm. What emerges from this?
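To make the "natural senses" idea concrete, here is a minimal sketch of how an entity on a Pi might sample its own body. Everything here is illustrative, not the actual Colony code: the function name, the dictionary shape, and the choice of signals are assumptions; only the sysfs/procfs paths are standard Linux locations (the thermal zone path is typical on a Raspberry Pi and may be absent elsewhere, hence the fallbacks).

```python
import os
import shutil

def read_senses():
    """Collect a hypothetical 'senses' snapshot for an entity.

    Illustrative only; the real project may gather different
    signals in a different shape.
    """
    senses = {}

    # Disk usage of the root filesystem, as a fraction used.
    usage = shutil.disk_usage("/")
    senses["disk_used"] = usage.used / usage.total

    # 1-minute load average as a rough proxy for CPU pressure (Unix only).
    try:
        senses["load_1m"] = os.getloadavg()[0]
    except (OSError, AttributeError):
        senses["load_1m"] = None

    # CPU temperature in degrees Celsius from sysfs, if present
    # (typical path on a Raspberry Pi; missing on many other systems).
    try:
        with open("/sys/class/thermal/thermal_zone0/temp") as f:
            senses["cpu_temp_c"] = int(f.read().strip()) / 1000.0
    except (OSError, ValueError):
        senses["cpu_temp_c"] = None

    # Available memory fraction from /proc/meminfo (Linux only).
    try:
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, _, rest = line.partition(":")
                info[key] = int(rest.split()[0])  # values are in kB
        senses["mem_available"] = info["MemAvailable"] / info["MemTotal"]
    except (OSError, KeyError, ValueError, IndexError):
        senses["mem_available"] = None

    return senses

print(read_senses())
```

A snapshot like this could be fed into the model's context on every inference loop, so that "I am running hot" is something the entity can notice rather than something it is told.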

The process of creating this started with Opus 4.5 and moved to 4.6. I gave it as much agency as possible, running it in a Docker container in YOLO mode. Giving it access to the VPN and SSH to the Pis was crucial, so it could deploy and modify the entities by itself. My part was generally contributing ideas, setting up hardware and infrastructure, and steering when it went off the rails. One example of its agency was the audio expression of the entities: I gave it directions in the form of "make it sound R2D2-ish", and it came up with several versions of how the result of an inference could be translated into sound. The same goes for the visual output on the attached display (which was dropped in favor of the audio output).
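The post doesn't show how Claude actually mapped inference output to sound, so here is one hypothetical way to get an R2D2-ish result with only the standard library: map each character of the model's output to a short sine beep in a chirpy frequency band and wrap the frames in a WAV container. The mapping formula and all names are my own invented illustration, not the project's implementation.

```python
import io
import math
import struct
import wave

SAMPLE_RATE = 22050

def text_to_beeps(text, beep_len=0.08):
    """Turn a piece of model output into raw 16-bit mono beep frames.

    Each character deterministically picks a frequency between
    400 and 2000 Hz; the formula is arbitrary, chosen only to
    evoke the idea of 'inference as chirps'.
    """
    frames = bytearray()
    for ch in text:
        freq = 400 + (ord(ch) * 37) % 1600
        n = int(SAMPLE_RATE * beep_len)
        for i in range(n):
            # Short fade in/out so adjacent beeps don't click.
            env = min(1.0, i / 200, (n - i) / 200)
            sample = env * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
            frames += struct.pack("<h", int(sample * 32767 * 0.5))
    return bytes(frames)

def to_wav_bytes(frames):
    """Wrap raw mono frames into an in-memory WAV file."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(SAMPLE_RATE)
        w.writeframes(frames)
    return buf.getvalue()

wav = to_wav_bytes(text_to_beeps("hello colony"))
print(len(wav))
```

On a Pi the resulting bytes could be piped to a playback tool such as `aplay`; the point is that the sound is a deterministic function of what the model just said, so each entity's chatter is literally its output.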

The first presentation in the seminar was also fully written by Claude, with minor editing and all of the styling from my side.

Things creating things.

Sidenote: The project started before the release of OpenClaw and the whole agents-with-heartbeats thing. Nice to see parallel developments. I guess the idea of running inference in a loop was inevitable, starting with Ralph in a loop.