Developer Cagdas Ucar has released Robot Brain, an open-source Node.js implementation of a self-organizing hierarchical temporal neural network that operates without backpropagation, training epochs, or labeled data. The project, available on GitHub under an Apache 2.0 license, departs from mainstream deep learning architectures by drawing on neuroscience, specifically cortical column mechanics, rather than gradient descent optimization. Neurons form, compete, decay, and die based on prediction error, and higher-order abstraction layers emerge when lower-level predictions consistently fail.
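The lifecycle the article describes can be illustrated with a minimal sketch. Everything here (the `Neuron` class, the decay constants, the `maybeAbstract` rule) is hypothetical and assumes nothing about Robot Brain's actual code; it only shows the general pattern of error-driven formation, decay, and abstraction:

```javascript
// Illustrative only; names and constants are invented, not Robot Brain's API.
class Neuron {
  constructor(level = 0) {
    this.level = level;      // hierarchy level (0 = closest to raw input)
    this.strength = 1.0;     // competitive fitness; the neuron dies at 0
    this.failStreak = 0;     // consecutive prediction failures
  }
  observe(predictionCorrect) {
    if (predictionCorrect) {
      this.strength = Math.min(2.0, this.strength + 0.1); // reinforce
      this.failStreak = 0;
    } else {
      this.strength -= 0.25; // decay on prediction error
      this.failStreak += 1;
    }
  }
  get alive() { return this.strength > 0; }
}

// When a lower-level neuron's predictions consistently fail, spawn a
// higher-order abstraction neuron one level up.
function maybeAbstract(neuron, population, streakLimit = 3) {
  if (neuron.failStreak >= streakLimit) {
    population.push(new Neuron(neuron.level + 1));
    neuron.failStreak = 0;
  }
}

const pop = [new Neuron()];
for (const outcome of [false, false, false]) {
  pop[0].observe(outcome);
  maybeAbstract(pop[0], pop);
}
console.log(pop.length, pop[1].level); // prints "2 1"
```

After three straight failures, a level-1 neuron has emerged above the struggling level-0 one, with no gradient or labeled target involved.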

The architecture encodes time structurally rather than statistically: connections carry not just the relationship between two events but the precise temporal distance between them, making sequential data a first-class citizen of the model. Predictions come from a vote across all currently active neurons, weighted by hierarchy level and recency, with no central controller. When a pattern activates on a parent neuron, it suppresses that parent's raw connection-based predictions, allowing the system to self-correct. Ucar reports building and testing the system for over a year before the public release.
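A rough sketch of the two ideas above, temporally tagged connections and a decentralized weighted vote, might look like the following. The data shapes, the `dt` field, and the `(1 + level) * recency` weighting are assumptions for illustration, not the project's actual formula:

```javascript
// A connection asserts: "event `to` follows event `from` after `dt` ticks."
const connections = [
  { from: 'A', to: 'B', dt: 2 },
  { from: 'X', to: 'C', dt: 5 },
];

// Each active neuron votes for the events its connections predict.
// Weights favor higher hierarchy levels and more recent activations;
// the winner emerges from the tally, with no central controller.
function predict(activeNeurons, now) {
  const votes = new Map();
  for (const n of activeNeurons) {
    for (const c of connections.filter((c) => c.from === n.id)) {
      const recency = 1 / (1 + (now - n.activatedAt)); // newer = stronger
      const weight = (1 + n.level) * recency;           // assumed weighting
      votes.set(c.to, (votes.get(c.to) ?? 0) + weight);
    }
  }
  return [...votes.entries()].sort((a, b) => b[1] - a[1])[0]?.[0];
}

const active = [
  { id: 'A', level: 1, activatedAt: 10 }, // high-level, just fired
  { id: 'X', level: 0, activatedAt: 6 },  // low-level, stale
];
console.log(predict(active, 10)); // prints "B"
```

The recent, higher-level neuron's vote for `B` outweighs the stale, low-level vote for `C`, which is the qualitative behavior the article attributes to the voting mechanism.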

Two demos ship with the release. A stock trading demo, run against five years of historical data across 100 stocks sourced via Alpaca's market data API, reports a 1016% ROI from a $15,000 starting capital using 3-hour timeframe data. The author acknowledges that base-level price prediction accuracy sits at roughly 50%, consistent with market noise; profitability emerges from reward-weighted action selection that learns which contextual patterns precede better outcomes. A text sequence learning demo separately demonstrates 100% character-level prediction accuracy within five training episodes.
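How a system with coin-flip directional accuracy can still be profitable comes down to the reward-weighted action selection the author describes. A hedged sketch of that idea, with invented context labels and a simple running-average reward table rather than the demo's actual logic:

```javascript
// Average reward observed per (context, action) pair; all names illustrative.
const stats = new Map(); // key "context|action" -> { total, n }

function recordReward(context, action, reward) {
  const key = `${context}|${action}`;
  const s = stats.get(key) ?? { total: 0, n: 0 };
  s.total += reward;
  s.n += 1;
  stats.set(key, s);
}

// Pick the action whose past outcomes in this context paid best.
// Even at ~50% directional accuracy, skewing toward contexts that
// historically preceded larger wins can yield positive expectancy.
function chooseAction(context, actions) {
  let best = actions[0];
  let bestAvg = -Infinity;
  for (const a of actions) {
    const s = stats.get(`${context}|${a}`);
    const avg = s ? s.total / s.n : 0; // unseen actions default to neutral
    if (avg > bestAvg) { best = a; bestAvg = avg; }
  }
  return best;
}

// Hypothetical history: buys during an "uptrend" context netted out positive.
recordReward('uptrend', 'buy', 3);
recordReward('uptrend', 'buy', -1);
recordReward('uptrend', 'sell', -2);
console.log(chooseAction('uptrend', ['buy', 'sell', 'hold'])); // prints "buy"
```

The point is that profitability depends on the asymmetry of rewards across contexts, not on beating 50% at the raw prediction level.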

Ucar notes that the current Node.js codebase is intended as a reference implementation and that a high-performance C++ core with Python and Node.js bindings is in development. Transformer-based architectures dominate production agent systems, but alternatives have been gaining research traction. Predictive coding, the idea that the brain minimizes prediction error rather than optimizing a loss function, and Numenta's Hierarchical Temporal Memory framework share philosophical DNA with Robot Brain's approach, though neither has solved scalability in production settings. The trading benchmarks are in-sample only and say nothing about live performance. The architectural claims around unsupervised hierarchy formation and temporal encoding are a different matter: those are worth researchers' time.