Research
Computational Neuroscience · Dynamical Systems · Machine Learning · Reservoir Computing

Dynamical Structure-Preserving Manifolds (dSPM)

A framework for analytically programming reservoir computers using physics-based representations, eliminating the need for traditional training while maintaining interpretability.

A framework for hypothesis-driven identification of neural algorithms

The Wisdom of Precocial Intelligence

Within minutes of birth, a wildebeest calf can run from predators across complex terrain. A newly hatched sea turtle navigates toward the ocean it has never seen. A foal stands and walks within an hour, already computing the physics of balance and locomotion. These precocial behaviors demand sophisticated real-time physics—predicting trajectories, computing interception paths, anticipating collisions—yet they emerge with virtually no opportunity for learning.

How does biology encode such rich computational machinery directly into neural circuitry?

This question points to a deep puzzle at the intersection of cognitive science and neuroscience—one that challenges the dominant paradigm of modern artificial intelligence, where sophisticated behavior emerges only after extensive training on massive datasets. Nature seems to know something we don't: how to install algorithms rather than learn them.

The Gap Between Mind and Brain

We have compelling theories suggesting that brains build structure-preserving representations of the world—internal models that mirror the causal structure of physics, encoding objects, their properties, and the rules governing their interactions. These "physics engines in the head" elegantly explain how we predict that a ball will bounce, a tower will topple, or a liquid will pour. The cognitive science literature is rich with evidence for such representations.

But these theories are typically implemented in Python or MATLAB, using high-level programming constructs that bear no resemblance to biological neural circuits. They tell us what the brain might be computing at an abstract functional level, but offer no account of how neurons could actually realize such computations. The algorithm remains trapped in silicon, disconnected from the wetware it purports to explain.

Meanwhile, deep neural networks achieve remarkable fits to neural data. We can train recurrent networks to predict ball trajectories, and their internal representations often correlate with recorded neural activity. But the computation itself remains opaque—a black box shaped by gradient descent across millions of parameters. We can ask what training objective was used, what architecture was chosen, what data was provided—but we cannot directly specify or test hypotheses about what algorithm the network implements.

This is the central tension: interpretable cognitive theories that lack neural grounding, versus neurally-plausible models that lack algorithmic transparency.

Dynamics as the Bridge Language

Dynamical Structure-Preserving Manifolds (dSPM) resolves this tension by recognizing that dynamical systems provide the natural language for expressing neural algorithms. The key insight is deceptively simple: a symbolic, structure-preserving representation of the world—complete with objects, collision logic, and physical laws—can be exactly translated into a system of coupled differential equations. These equations define a dynamical manifold whose geometry corresponds to the represented domain.

And here is the crucial step: this dynamical system can be analytically embedded into the connectivity of a recurrent neural network, without any training whatsoever.

The mathematics here is not approximate curve-fitting or optimization. We solve, in closed form, for the precise pattern of synaptic weights that will cause a reservoir computer to implement our specified computation. The network's dynamics become topologically semi-conjugate to the dynamical algorithm—meaning the high-dimensional neural trajectory faithfully tracks the low-dimensional computation we designed.

No gradient descent. No training data. No loss functions. The algorithm is not learned—it is installed.

The Analytical Programming Framework

The technical foundation of dSPM builds on recent work in the physics of dynamical systems, particularly methods for "programming" reservoir computers. (This is all thanks to the brilliant work of my collaborator Jason Kim, who invented this analytical programming framework—I encourage you to read the original paper in Nature Machine Intelligence.) Traditional reservoir computing sets the recurrent weights randomly and trains only the output layer. The analytical programming framework inverts this paradigm entirely: rather than learning a readout on top of random dynamics, we solve directly for the connectivity that produces the dynamics we want.
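For readers unfamiliar with the traditional recipe, it fits in a few lines of NumPy. The following is a generic echo state network of our own devising (the input signal, leak rate, and target are illustrative choices, not the reservoir used in the paper): the recurrent weights are drawn once at random and frozen, and only a linear readout is fit by ridge regression.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 2000

# Recurrent weights: drawn once at random, rescaled, then FROZEN
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9
w_in = rng.normal(0.0, 1.0, N)

# Drive the untrained reservoir with a sine input and record its states
t = np.arange(T) * 0.01
u = np.sin(2 * np.pi * t)
states = np.empty((T, N))
x = np.zeros(N)
alpha = 0.3                                       # leak rate
for k in range(T):
    x = (1 - alpha) * x + alpha * np.tanh(W @ x + w_in * u[k])
    states[k] = x

# Train ONLY the linear readout, by ridge regression, to a target the
# reservoir's fading memory can supply: a phase-shifted copy of the input
wash = 200                                        # discard the transient
S, y = states[wash:], np.cos(2 * np.pi * t)[wash:]
w_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ y)
rmse = np.sqrt(np.mean((S @ w_out - y) ** 2))
print("readout RMSE:", rmse)
```

Everything upstream of `w_out` is untrained; all of the adaptation lives in one least-squares solve over the readout.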

We begin by expressing a cognitive hypothesis as a dynamical algorithm: a set of coupled ordinary differential equations whose variables encode the relevant representational quantities—positions, velocities, collision states—and whose dynamics implement the necessary computations. For physical prediction in a Pong-like environment, this means:

  • Integration equations that update position from velocity
  • Pitchfork bifurcations that detect proximity to walls—a single stable attractor splits into two stable attractors when the ball approaches a boundary
  • Hysteretic switches that flip velocity signs through bistable NAND-gate dynamics, implementing elastic collisions
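A minimal toy version of these primitives, with gains and thresholds of our own choosing rather than the paper's equations (and a simplified stand-in for the pitchfork/NAND construction), shows how bounces can emerge from smooth dynamics instead of if/else collision logic:

```python
import numpy as np

# Toy 1-D "Pong" ball bouncing between walls near x = +/-1, built from
# smooth dynamical primitives. All gains here are ad hoc illustrative choices.
dt, T = 1e-3, 20000
x, s = 0.0, 1.0        # position, and a bistable velocity-sign variable
xs = np.empty(T)
for k in range(T):
    # Wall detector: mu > 0 only inside the detection zone |x| > 0.9
    mu = x * x - 0.9 ** 2
    # Hysteretic switch: s - s**3 is bistable (attractors at +/-1); near a
    # wall, a strong push toward the inward-pointing sign flips the state
    ds = s - s ** 3 - 20.0 * max(mu, 0.0) * np.sign(x)
    s += dt * 50.0 * ds
    # Integration equation: position follows the signed, saturated velocity
    x += dt * np.tanh(5.0 * s)
    xs[k] = x
print("ball stayed within:", xs.min(), xs.max())
```

The bistability of `s` supplies the hysteresis: once the switch flips near a wall, it stays at its new attractor after the ball leaves the detection zone, so the reversal persists. That memory is what makes the collision "stick."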

The mathematical core of our method then analytically determines the connectivity matrix of a reservoir computer such that the network's population dynamics implement this dynamical algorithm. We decompose the reservoir's state into a basis of its inputs and their time derivatives, expand via multivariate Taylor series, then solve a least-squares problem to align this basis with the target dynamics.
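The full construction involves the Taylor-series machinery just described, but the closed-form flavor of the final step can be conveyed by a toy version. Here the target system (a harmonic oscillator), the random tanh basis, and all scales are our illustrative choices, not the paper's: sample states of the target dynamics, evaluate a fixed neural basis there, and solve one least-squares problem so the basis reproduces the target vector field.

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 300, 2

def f(x):                     # target dynamical algorithm: dx/dt = f(x)
    return np.stack([x[..., 1], -x[..., 0]], axis=-1)

A = rng.normal(0.0, 1.0, (N, d))   # fixed random encoding of the state
b = rng.normal(0.0, 0.5, N)

# Sample states in the region of interest and align the basis with f
X = rng.uniform(-1, 1, (5000, d))
R = np.tanh(X @ A.T + b)                       # basis responses, (5000, N)
W, *_ = np.linalg.lstsq(R, f(X), rcond=None)   # one closed-form solve

# Closed loop: integrate dx/dt = W.T @ r(x) for one full period; it
# should track the oscillator and return near its starting point
x, dt = np.array([1.0, 0.0]), 1e-3
for _ in range(int(2 * np.pi / dt)):
    x = x + dt * (np.tanh(A @ x + b) @ W)
print(x)
```

One linear solve replaces training; the closed-loop network then runs the specified dynamics on its own.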

The result is a neural network that is, by construction, a white-box at the algorithmic level. We know exactly what computation it performs because we specified that computation directly. The reservoir becomes a physical instantiation of our hypothesis—falsifiable by neural data.

Single-State Sufficiency: A Striking Prediction

We applied dSPM to investigate physical prediction in the dorsomedial frontal cortex (DMFC) of macaques performing a ball interception task. Previous work had shown that DMFC encodes information about ball position even when occluded—but the nature of this computation remained unclear.

The dSPM framework enabled us to test a specific hypothesis: that DMFC implements a physics-based, structure-preserving representation of the scene encoded in the geometry of a low-dimensional neural manifold. This hypothesis makes a striking prediction we call single-state sufficiency: if the manifold geometry truly encodes a physics-based representation, then the initial configuration of the scene should bias neural activity toward a region of state space from which the entire future trajectory—not just the next time step—is deterministically specified.

Think about what this means. On a properly configured manifold, every point encodes not just "where is the ball now" but "where will the ball be at every moment until interception." The future is not computed step-by-step through simulation—it is geometrically implicit in the current state.

Remarkably, this is exactly what we found.

Confirmation in Neural Data

Within approximately 250 milliseconds of trial onset—roughly the time required for visual information to reach frontal cortex—the complete future trajectory of the ball becomes linearly decodable from DMFC population activity. Not just the endpoint where the ball will be intercepted, but its position at every moment along the way.
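The logic of that decoding analysis can be sketched on synthetic data. None of what follows is the recorded DMFC activity; the latent states, embedding, and noise level are invented for illustration. The point is the structure of the analysis: one ridge decoder per future time step, all reading out from the same early snapshot of population activity.

```python
import numpy as np

rng = np.random.default_rng(2)
trials, neurons, future_steps = 200, 100, 50

# Each trial's latent state (position, velocity) fixes its whole trajectory
latent = rng.uniform(-1, 1, (trials, 2))
t = np.linspace(0, 1, future_steps)
traj = latent[:, [0]] + latent[:, [1]] * t          # (trials, future_steps)

# Synthetic "population activity" at one early time point: a noisy
# linear embedding of the latent state into the neurons
E = rng.normal(0, 1, (2, neurons))
activity = latent @ E + 0.1 * rng.normal(0, 1, (trials, neurons))

# One ridge decoder per future time step, all from the SAME early snapshot
G = activity.T @ activity + 1e-2 * np.eye(neurons)
decoders = np.linalg.solve(G, activity.T @ traj)    # (neurons, future_steps)
pred = activity @ decoders
ss_res = ((pred - traj) ** 2).sum(axis=0)
ss_tot = ((traj - traj.mean(axis=0)) ** 2).sum(axis=0)
r2 = 1 - ss_res / ss_tot
print("worst R^2 over future steps:", r2.min())
```

When the early state determines the whole trajectory, as in this toy, every future time point decodes well from the same snapshot, which is the signature of single-state sufficiency.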

Task-optimized recurrent neural networks, despite being trained to predict ball trajectories and achieving good behavioral performance, fail to exhibit this rapid whole-trajectory encoding. They show gradually improving predictions over time, consistent with step-by-step simulation rather than manifold-based computation.

The dSPM model, by contrast, matches the temporal profile of DMFC encoding with high fidelity. Using representational similarity analysis, we found that dSPM explains substantial variance in the neural data—variance that alternative models cannot account for. When we statistically control for what task-optimized networks explain, dSPM maintains its correlation with neural activity. The reverse is not true: after controlling for dSPM, the task-optimized networks explain almost nothing additional.
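That control analysis has the flavor of a partial correlation. A toy version with synthetic similarity vectors (not the actual RDMs or neural data) makes the asymmetry concrete:

```python
import numpy as np

def partial_corr(neural, model_b, model_a):
    """Correlation of neural with model_b after regressing model_a out of both."""
    X = np.column_stack([np.ones_like(model_a), model_a])
    resid = lambda y: y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    rn, rb = resid(neural), resid(model_b)
    return (rn @ rb) / np.sqrt((rn @ rn) * (rb @ rb))

rng = np.random.default_rng(3)
b = rng.normal(size=500)                  # stand-in for one model's similarities
a = 0.5 * b + rng.normal(size=500)        # a second model that partly overlaps b
neural = b + 0.2 * rng.normal(size=500)   # "data" dominated by b

print(partial_corr(neural, b, a))   # stays high: b carries unique variance
print(partial_corr(neural, a, b))   # collapses toward zero
```

The asymmetry is the diagnostic: a model that only inherits its fit from shared variance loses its correlation once the other model is controlled for.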

Perhaps most tellingly, we tested whether DMFC might be using a simple heuristic: a linear mapping from current position and velocity to final position. Such a heuristic should perform well in this simplified task, because the only nonlinearity is a reflection off the wall, leaving the trajectory piecewise linear. Crucially, the heuristic predicts perfect generalization across conditions with different numbers of wall bounces. But DMFC, like dSPM, shows condition-specific nonlinear dynamics that do not generalize across bounce conditions. The brain implements trajectory-specific physics, not task-specific shortcuts.
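To see why such a heuristic would generalize, consider a toy 1-D version with walls we place at y = 0 and y = 1 (our simplification, not the task's actual geometry). The endpoint is the free linear path folded by reflections, so it is linear in the initial state within any fixed bounce count:

```python
import numpy as np

# Walls at y = 0 and y = 1 (illustrative). The endpoint after time T is
# the free linear path y0 + vy*T folded by reflections, hence piecewise
# linear in (y0, vy) with one linear piece per bounce count.
def endpoint(y0, vy, T=1.0):
    y = np.mod(y0 + vy * T, 2.0)          # fold onto one reflection period
    return np.where(y > 1.0, 2.0 - y, y)  # reflect off the far wall

print(endpoint(0.2, 0.5))  # no bounce: the plain linear map gives 0.7
print(endpoint(0.2, 1.5))  # one bounce off y = 1 gives 0.3
```

Within each bounce count the map is exactly linear, so a decoder fit on one condition would transfer cleanly to another. The fact that DMFC does not show this transfer is evidence against the heuristic.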

Implications for AI and the Science of Intelligence

The success of dSPM in explaining DMFC activity suggests a different paradigm for understanding—and perhaps building—intelligent systems.

The dominant approach in modern AI relies on task optimization: specify an objective, provide data, and let gradient descent discover whatever computational strategy minimizes the loss. This has proven remarkably effective, but it produces systems whose internal computations resist interpretation and whose capabilities remain tightly bound to their training distribution. We can make networks larger and train them longer, but we cannot directly specify what algorithm they should implement.

Biological intelligence appears to work differently. Precocial species demonstrate that sophisticated computation can be directly encoded in neural connectivity, presumably through evolutionary processes that have "solved" for the appropriate circuit structure over phylogenetic time. This work takes the stance that evolution had access to something like the dSPM framework: a way to translate computational requirements into connectivity patterns.

This suggests a complementary approach to AI development—one rooted in control theory and dynamical systems rather than statistical learning. Instead of training networks to approximate desired input-output mappings, we might analytically construct networks that implement desired dynamical computations. The resulting systems would be:

  • Interpretable by design—we know what algorithm runs because we specified it
  • Data-efficient—no training corpus required, just mathematical specification
  • Compositional—dynamical primitives can be combined to build complex computations
  • Verifiable—algorithmic properties can be analyzed mathematically

This is not to say that learning is unimportant—clearly, adaptive systems must learn from experience. But perhaps the foundation should be analytically programmed structure that learning then refines, rather than blank-slate networks that must discover everything from scratch. Biology seems to have reached this conclusion long ago.

Looking Forward

The dSPM framework opens new avenues for the science of intelligence. By unifying symbolic cognitive theories with neural dynamics, it enables hypothesis-driven exploration of how biological brains build and manipulate the rich mental representations that make adaptive behavior possible.

We see immediate applications in understanding spatial cognition (where toroidal manifolds for grid cells suggest similar analytical constructions), mental navigation, and online perception-action loops. More speculatively, the framework may extend to structure-preserving representations beyond physics—representations of agents, social dynamics, and abstract conceptual spaces.

The broader vision is a science of neural algorithms where hypotheses are expressed as dynamical systems, compiled into network connectivity, and tested against neural data. Not black boxes trained to perform tasks, but glass boxes designed to implement computations.

The wildebeest calf running from lions does not have time to learn physics by gradient descent. Neither, perhaps, should our machines.

Manuscript

The full manuscript, including detailed methods and supplementary analyses, is available below.

Calbick, D., Kim, J.Z., Sohn, H., & Yildirim, I. (2025). Hypothesis-Driven Identification of Neural Algorithms With Dynamical Structure-Preserving Manifolds. Under review.