Renato Duarte had a simple enough question: what has the CoSyNe conference actually been about, year by year, across its 22-year run? To find out, he scraped every program book from 2004 to 2026, tracked roughly 40 keywords across five categories, and normalized the counts for conference growth. The answer, published on his Grounded Neuro Substack with an accompanying open-source GitHub repo, is hard to argue with: AI and ML terminology has more than quintupled in frequency since the conference began, rising from around 4 occurrences per 10,000 words in 2004 to over 20 in 2026.
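To make the normalization concrete, here is a minimal Python sketch of the per-10,000-words metric the analysis reports. The keyword list, file layout, and function names below are illustrative assumptions for one of the five categories, not code taken from Duarte's repo.

```python
# A minimal sketch of the kind of normalization described above: count keyword
# hits in each year's program text and scale by document length.
import re
from pathlib import Path

# Hypothetical keyword subset for the AI/ML category (illustrative only).
AI_ML_KEYWORDS = ["deep learning", "transformer", "large language model",
                  "foundation model", "reinforcement learning"]

def hits_per_10k_words(text: str, keywords: list[str]) -> float:
    """Occurrences of any keyword per 10,000 words of program text."""
    lowered = text.lower()
    n_words = len(lowered.split())
    n_hits = sum(len(re.findall(re.escape(kw), lowered)) for kw in keywords)
    return 10_000 * n_hits / n_words if n_words else 0.0

# Assumed layout: one plain-text program book per year, e.g. programs/2026.txt.
for path in sorted(Path("programs").glob("*.txt")):
    rate = hits_per_10k_words(path.read_text(encoding="utf-8"), AI_ML_KEYWORDS)
    print(f"{path.stem}: {rate:.1f} AI/ML keyword hits per 10,000 words")
```

Dividing by each year's total word count is what lets a five-day 2026 program be compared fairly against the much slimmer early books.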
Duarte breaks the history into four eras. The Classical era (2004–2011) centered on Bayesian methods and information theory. The Circuits & Tools era (2012–2017) brought optogenetics and the mouse model to the fore. The Manifold era (2018–2022) saw dimensionality reduction and deep learning vocabulary flood in. Then, from 2023 onward, what Duarte calls the NeuroAI Era: transformers, large language models, and foundation models are now standard CoSyNe vocabulary.
The 2026 conference, running March 12–17 in Lisbon and Cascais, puts names to the trend. Chris Olah — the Anthropic researcher who pioneered mechanistic interpretability — is the keynote speaker, with Anthropic listed as the keynote sponsor. Google DeepMind's Kimberly Stachenfeld is Program Co-Chair. Workshops this year include 'Mechanistic Interpretability in Brains and Machines' and 'AI for Interpretable Model Discovery in Neuroscience.'
The irony is worth sitting with. CoSyNe was founded in 2004 by Tony Zador, Alexandre Pouget, Carlos Brody, and Mike Shadlen specifically to give computational neuroscientists a home after machine learning had effectively crowded them out of NeurIPS. Twenty-two years on, ML's biggest names are back in through the front door.
For the AI industry, Anthropic's decision to sponsor a neuroscience keynote is a signal worth reading. Mechanistic interpretability — the project of reverse-engineering what neural networks are actually computing — originated inside AI labs and is now spreading into academic neuroscience. The influence runs both ways: neuroscience once provided the conceptual scaffolding for architectures like recurrent networks and attention mechanisms; AI methods now shape how researchers study the brain itself. Duarte's keyword counts make that two-decade feedback loop visible in the data.