dvelton built ai-pixel, a Python package that trains a real binary classifier and crams the whole thing into a single pixel's RGB values. Two weights, one bias. Three color channels. One 1x1 PNG. Download the pixel, email it, load it back up, and it predicts. The pixel IS the model.
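To make the idea concrete, here's a minimal sketch of the round trip: three parameters quantized to one byte each, packed into the R, G, B channels of a real 1x1 PNG, and recovered. The encoding scheme here (linear mapping over [-4, 4]) and all function names are assumptions for illustration; ai-pixel's actual format may differ.

```python
import struct, zlib

LO, HI = -4.0, 4.0  # parameter range assumed by this sketch

def quantize(w):
    """Map a float in [-4, 4] to one byte (0..255)."""
    w = max(LO, min(HI, w))
    return round((w - LO) / (HI - LO) * 255)

def dequantize(q):
    """Map a byte back to a float in [-4, 4]."""
    return q / 255 * (HI - LO) + LO

def _chunk(tag, data):
    # PNG chunk: length, tag, payload, CRC over tag + payload
    return (struct.pack(">I", len(data)) + tag + data
            + struct.pack(">I", zlib.crc32(tag + data)))

def model_to_png(w1, w2, b):
    """Pack (w1, w2, b) into the R, G, B channels of a 1x1 PNG."""
    rgb = bytes([quantize(w1), quantize(w2), quantize(b)])
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 2, 0, 0, 0)  # 1x1, 8-bit RGB
    raw = b"\x00" + rgb  # one scanline: filter byte 0, then one pixel
    return (b"\x89PNG\r\n\x1a\n" + _chunk(b"IHDR", ihdr)
            + _chunk(b"IDAT", zlib.compress(raw)) + _chunk(b"IEND", b""))

def png_to_model(png):
    """Walk the chunks, inflate IDAT, recover the three parameters."""
    i, idat = 8, b""
    while i < len(png):
        (length,) = struct.unpack(">I", png[i:i + 4])
        if png[i + 4:i + 8] == b"IDAT":
            idat += png[i + 8:i + 8 + length]
        i += 12 + length  # 4 length + 4 tag + payload + 4 CRC
    r, g, b = zlib.decompress(idat)[1:4]  # skip the filter byte
    return dequantize(r), dequantize(g), dequantize(b)

png = model_to_png(1.5, -2.25, 0.5)
print(png_to_model(png))  # close to the originals, within one quantization step
```

Any PNG viewer will happily open the output as a tiny colored square; the "model" survives because PNG is lossless.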

Training runs plain gradient descent with a sigmoid activation. Parameters are clamped to [-4.0, 4.0] so each fits in a single byte after 8-bit quantization, the same trick model-compression pipelines use at much larger scale. The package includes an interactive demo with datasets like 'umbrella' (rain chance + wind speed vs. bring an umbrella?) and 'sunscreen.' Place points, train, and watch the model collapse into a single colored pixel.
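The training loop itself can be sketched in a few lines: logistic-regression SGD on one neuron, with each weight clipped to [-4, 4] after every update so it stays quantizable. The toy dataset below is a hypothetical stand-in for the 'umbrella' demo, not its real data, and the function names are mine.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.5, epochs=2000, bound=4.0):
    """One-neuron logistic regression; weights clipped to [-bound, bound]
    so each parameter later fits in a single byte."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for x1, x2, y in data:
            p = sigmoid(w1 * x1 + w2 * x2 + b)
            err = p - y  # gradient of the log-loss w.r.t. the pre-activation
            w1 = max(-bound, min(bound, w1 - lr * err * x1))
            w2 = max(-bound, min(bound, w2 - lr * err * x2))
            b = max(-bound, min(bound, b - lr * err))
    return w1, w2, b

# Hypothetical stand-in for 'umbrella': (rain chance, wind speed, label),
# features scaled to [0, 1]; positives are the high-rain points
data = [(0.9, 0.2, 1), (0.8, 0.7, 1), (0.7, 0.4, 1),
        (0.2, 0.3, 0), (0.1, 0.8, 0), (0.3, 0.1, 0)]
w1, w2, b = train(data)
acc = sum((sigmoid(w1 * x1 + w2 * x2 + b) > 0.5) == bool(y)
          for x1, x2, y in data) / len(data)
print(f"weights=({w1:.2f}, {w2:.2f}), bias={b:.2f}, accuracy={acc:.0%}")
```

Because this data is linearly separable (rain chance alone decides it), the clipped neuron classifies it perfectly, and all three parameters stay inside the byte-sized range.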

It's one neuron, so it can only cut the data with a straight line. The included XOR dataset exists to fail on purpose, topping out around 50% accuracy. That's the point. This is a thought experiment about compression limits, not production code. Format matters too: save the pixel as a JPEG or screenshot it, and lossy re-encoding corrupts the weights.
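The XOR failure is easy to reproduce with the same kind of single-neuron SGD loop (a sketch, not ai-pixel's actual code): the four XOR points are symmetric, so the log-loss optimum is all-zero weights, and the neuron drifts toward predicting ~0.5 for everything.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR truth table: no single straight line puts both 1-labels on one side
xor = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

w1 = w2 = b = 0.0
for _ in range(5000):
    for x1, x2, y in xor:
        err = sigmoid(w1 * x1 + w2 * x2 + b) - y
        w1 -= 0.5 * err * x1
        w2 -= 0.5 * err * x2
        b -= 0.5 * err

# By symmetry the loss is minimized near w1 = w2 = b = 0, so predictions
# sit near 0.5 and thresholded accuracy hovers around chance. A line can
# never get all four points right, so 100% is impossible regardless.
acc = sum((sigmoid(w1 * x1 + w2 * x2 + b) > 0.5) == bool(y)
          for x1, x2, y in xor) / 4
print(f"w1={w1:.2f} w2={w2:.2f} b={b:.2f} accuracy={acc:.0%}")
```

No amount of extra training epochs changes this; only a second layer (and a bigger image) would.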

The security angle is weird and interesting. A 1x1 pixel carrying executable logic could slip past content filters and data loss prevention tools. Think web tracking pixels, except the intelligence lives inside the pixel instead of phoning home. It's a cousin of Unicode steganography, where hidden characters let AI systems talk behind your back: algorithmic steganography, ML hiding in plain sight.