A developer publishing under the GitHub handle "stupid-genius" has released a full multi-layer perceptron written in JavaScript, available at github.com/stupid-genius/Perceptron. The project surfaced on Hacker News as a "Show HN" submission and is technically notable for one specific design choice: it uses dual numbers for automatic differentiation rather than the reverse-mode AD that underpins PyTorch, TensorFlow, and JAX.
Dual numbers are a mathematical construct that carries both a real value and an infinitesimal derivative component simultaneously, enabling exact gradient computation in a single forward pass per input dimension. This is forward-mode automatic differentiation, and it contrasts with the reverse mode used by PyTorch's autograd, TensorFlow's GradientTape, and JAX's grad() API — and the difference matters at scale. For a function mapping n inputs to m outputs, forward mode requires O(n) passes while reverse mode requires O(m). Neural network training involves a scalar loss (m=1) and potentially millions of parameters (large n), so reverse mode wins by a wide margin for production use. ForwardDiff.jl in Julia occupies the forward-mode niche in scientific computing, where the tradeoff flips: few inputs, many outputs.
The dual-number approach does have one concrete advantage: it's easier to follow. Because derivatives are carried inline with values, there's no separate computation graph to build or store, no gradient checkpointing problem, and no opaque backward pass — the chain rule is directly visible in the code. The project covers the full training loop: forward pass, backpropagation, gradient updates, five built-in activation functions (RELU, SIGMOID, TANH, STEP, IDENTITY), and four loss functions (MSE, MAE, HUBER, CROSS_ENTROPY). The project was inspired by the Welch Labs "Neural Networks Demystified" series, and metrics and visualization remain on the TODO list, so the developer appears to be actively extending it.