Writing
PhD Qualification Musings · Computational Neuroscience · Dynamical Systems

The Analog Brain: Harmonic Analyzers and Neural Computation

What mechanical tide predictors reveal about the computational differences between cortical and subcortical structures—and the shared oscillatory language of neural information flow.

This essay was written in response to a question from Professor Steve Chang for my doctoral qualifying examination. The question asked what I thought was a major difference between neural computations occurring in higher-level association cortices versus subcortical structures.

My answer begins in a seemingly unrelated place: the machine rooms of nineteenth and early twentieth century science, where spinning brass discs and steel spheres computed the tides.

Lord Kelvin's harmonic analyzers and Vannevar Bush's differential analyzers represent a profound idea in the history of computation—that physical systems can be made to embody mathematical relationships, not merely calculate them. Kelvin's tide-predicting machines strung together dozens of oscillating components, each tuned to a different tidal frequency, their outputs summed by a single pen tracing predictions onto paper. Bush's differential analyzers extended this principle, using rotating discs and wheel integrators to solve arbitrary systems of differential equations through continuous mechanical analog.

These machines did not simulate every water molecule in the ocean. They captured the relational essence of tidal dynamics—the structure of the governing equations—through physical operations that mirrored mathematical ones. The isomorphism between machine and phenomenon was the computation.

I argue that the nervous system operates on the same principle: neurons as oscillators, circuits as analyzers, the brain as an analog computer that preserves the dynamical structure of the world it models. But unlike a purpose-built analyzer, the brain exhibits a striking division of labor—flexible reconfiguration in cortical association areas, crystallized specialization in subcortical structures. Tracing information through the visual pathway reveals how this division emerges from a shared computational substrate.

The Essay

The Analog Brain

Harmonic Analyzers and Neural Computation

TL;DR: I think that the key difference between computations in these regions is that association cortices implement highly plastic, high-dimensional dynamical models, whereas subcortical structures implement lower-dimensional, task-specialized controllers.

To address this question in full, I would like to start in a seemingly unrelated place: differential and harmonic analyzers. In the early 20th century, engineers faced a computational challenge that would come to shape our understanding of computation through continuous dynamics, a paradigm known as analog computation. Digital computers did not yet exist, and yet we needed efficient ways to compute and model the physical world; doing so meant solving complex systems of differential equations. The solution—mechanical devices built from physical analog elements—offers a compelling lens through which to examine the fundamental question of neural computation in cortical versus subcortical structures. Using spinning spheres resting on rotating discs, these machines performed continuous integration, with multiple units chained together to solve sophisticated problems like Fourier decompositions. Their most celebrated application was perhaps in oceanography, where many individual oscillatory units were strung together; by tracing noisy real-world data (akin to the stochastic signals arriving from neural sensors), they performed a multi-frequency Fourier decomposition to produce the most accurate tide-prediction tables of their era. Crucially, this was not done by simulating every water molecule, but by capturing the relational essence of tidal dynamics through mechanical operations that mirrored the underlying differential equations.

The key insight is that these analog computers achieved their computational power through structure-preserving representations: the physical network of oscillators shares the dynamical structure of the data it analyzes. They create a computational isomorphism between mechanical operations and physical phenomena, embodying mathematical relationships directly in their architecture. The analyzer didn't merely compute numerical solutions; it instantiated the dynamical structure of the systems it modeled.
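To make the analyzer's principle concrete, here is a minimal Python sketch of harmonic tide analysis (the constituent periods, amplitudes, and noise level are illustrative, not historical values): projecting a noisy record onto sinusoidal bases, which is exactly what Kelvin's machine did with mechanical integrators.

```python
import numpy as np

# Two hypothetical tidal constituents (illustrative periods in hours,
# loosely inspired by the lunar/solar semidiurnal pair).
freqs = np.array([2 * np.pi / 12.42, 2 * np.pi / 12.00])
true_amp = np.array([1.5, 0.6])
true_phase = np.array([0.3, 1.1])

t = np.arange(0, 24 * 30, 0.5)  # half-hourly samples over 30 days
height = sum(a * np.cos(w * t + p)
             for a, w, p in zip(true_amp, freqs, true_phase))
height = height + 0.1 * np.random.default_rng(0).normal(size=t.size)  # noise

# Harmonic analysis: least-squares projection onto cosine/sine columns.
design = np.column_stack([f(w * t) for w in freqs for f in (np.cos, np.sin)])
coef, *_ = np.linalg.lstsq(design, height, rcond=None)
amp_est = np.hypot(coef[0::2], coef[1::2])  # recovered amplitudes
print(amp_est)  # ~ [1.5, 0.6]
```

The recovered amplitudes match the ones buried in the noisy record: the machine never "knows" the ocean, only the oscillatory basis in which the ocean's dynamics are expressed.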

This principle illuminates a fundamental aspect of neural computation. Like differential analyzers, the nervous system functions as an analog computer where computational elements—neurons—adjust their properties of frequency (firing rate, λ), phase (timing and synchrony, φ), amplitude (signal strength A), and sign (excitation or inhibition ±w) to create structure-preserving models of the physical world. However, the brain's computational architecture reveals a striking division of labor: flexible, reconfigurable processing in cortical association areas versus specialized, often crystallized computations in subcortical structures.

Indeed, even the atomic constituents of the network are oscillators, as shown by the Hodgkin-Huxley model. This seminal model of neural dynamics can be analyzed as a four-dimensional dynamical system with time-varying state variables (V, m, h, n) that exhibits limit-cycle oscillations under appropriate conditions (Hodgkin & Huxley, 1952). When coupled in networks, these oscillating units can be approximated using phase-reduction techniques, allowing analysis via Kuramoto-type models (Kuramoto, 1997; Strogatz, 2000). Such coupled-oscillator networks have been shown to exhibit complex collective dynamics, including synchronization transitions and, under specific conditions, critical phenomena.
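A minimal numerical sketch of this phase-reduced picture, assuming mean-field (all-to-all) coupling and illustrative parameter values, shows the synchronization transition directly:

```python
import numpy as np

def kuramoto_order(K, N=200, steps=4000, dt=0.01, seed=0):
    """Euler-integrate the mean-field Kuramoto model and return the final
    order parameter r in [0, 1] (r = 1 means full phase synchrony)."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 1.0, N)        # natural frequencies
    theta = rng.uniform(0, 2 * np.pi, N)   # initial phases
    for _ in range(steps):
        # each oscillator is pulled toward the population's mean phase
        z = np.exp(1j * theta).mean()
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    return np.abs(np.exp(1j * theta).mean())

r_weak = kuramoto_order(K=0.5)    # below critical coupling: incoherent
r_strong = kuramoto_order(K=4.0)  # above critical coupling: synchronized
print(r_weak, r_strong)
```

Below the critical coupling the phases drift incoherently and r stays near zero; above it, a macroscopic fraction of oscillators lock together—the kind of collective transition referenced above.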

Just as harmonic analyzers use relatively simple oscillators as building blocks—maintaining their fundamental computational principles while remaining configurable for different problems—cortical association areas exhibit remarkable flexibility in their dynamical configurations. Meanwhile, subcortical structures implement more specialized transformations—akin to dedicated modules in an analog computing system, such as a governor that smooths stochastic, uneven inputs into regulated outputs. Understanding this distinction requires examining how different brain regions transform neural signals into behaviorally relevant outputs, and how computational strategies differ between the adaptive processing of cortical networks and the specialized algorithms of subcortical circuits.

To explore the computational structures across different brain areas, I would like to follow the path of information flow in a single sensory pathway, the visual stream. The goal here is to create a shared definition of what we mean by "information" and "computation" by understanding the dynamical properties of the underlying computational substrate. By starting at the sensors themselves, I aim to explore (1) the shared computational language of information flow, (2) how subcortical, early cortical, and association regions of the brain differ in their computations, and (3) how they resemble and interface with one another.

Photoreceptors are often caricatured as binary "pixel" detectors that count photons, but biophysically they are leaky standing-wave resonators. Opsin molecules modulate cyclic-nucleotide-gated channels whose conductance integrates incident photons over tens of milliseconds; dense lateral gap-junction coupling lets neighboring outer segments share current, smoothing over single-photon noise. The light field being continuously sampled carries not just amplitude but phase information as well, at multiple scales. The information processed at the sensor level is therefore:

1. Amplitude (A)—photon flux (brightness).
2. Intrinsic frequency / time constant (λ)—the outer-segment membrane sets a ~30 Hz low-pass; cones run faster (~100 Hz) than rods (~10 Hz), embedding scale information.
3. Phase (φ)—at the smallest scale, optical interference fringes across the ~5 µm receptor spacing create sub-pixel phase shifts in the photocurrent of the fluctuating field; continuous sampling must then sustain coherent amplitude and phase information to trigger the discretization of information into bipolar cells as sign-preserving hyper- and de-polarizations. At slightly larger scales, phase information can be gathered from saccades that rapidly sample the scene. At even larger scales, binocular convergence at the optic chiasm brings the left and right visual fields of both eyes together, so phase information can interfere across interocular distances—relationships preserved within the ocular dominance columns of primary visual cortex.
4. Sign (±w)—the ON vs OFF biochemical cascades hard-wire polarity, with center-surround dynamical computations beginning at the sensor level.
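The time-constant story above can be sketched as a first-order leaky integrator—a deliberate simplification of phototransduction, with the ~100 Hz and ~10 Hz cutoffs taken from the figures in the text:

```python
import numpy as np

def leaky_photocurrent(photon_flux, tau, dt=0.001):
    """Low-pass filter a photon-flux signal with time constant tau (s):
    dI/dt = (flux - I) / tau. A first-order sketch, not a biophysical model."""
    current = np.zeros_like(photon_flux)
    for i in range(1, len(photon_flux)):
        current[i] = current[i - 1] + dt * (photon_flux[i] - current[i - 1]) / tau
    return current

t = np.arange(0, 1.0, 0.001)
flicker = 1.0 + np.sin(2 * np.pi * 50 * t)  # 50 Hz flicker stimulus

cone = leaky_photocurrent(flicker, tau=1 / (2 * np.pi * 100))  # ~100 Hz cutoff
rod = leaky_photocurrent(flicker, tau=1 / (2 * np.pi * 10))    # ~10 Hz cutoff

# The cone current tracks the 50 Hz flicker; the rod smooths it away,
# so the same stimulus is represented at two different temporal scales.
print(np.ptp(cone[500:]), np.ptp(rod[500:]))
```

The two time constants carve the same light field into different frequency bands—the "embedded scale information" described above.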

Retinal ganglion cells then quantize this information through spatiotemporal filters. Bipolar → amacrine microcircuits implement the classical center-surround and direction-selective filters, and importantly their kernels take Gaussian-derivative or Gabor forms—computational units that live in a Fourier-like basis. Ganglion cells did not "invent" the Fourier code; rather, their phase-space evolution unfolds in the same computational language, preserving the structure of the light-field dynamics from which they discretely sample.
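A Gabor kernel of the kind described can be written down directly. This sketch (sizes, wavelengths, and the test grating are arbitrary choices) shows why such a kernel behaves like a local Fourier-basis unit, responding strongly only to structure matched to its carrier:

```python
import numpy as np

def gabor_kernel(size=21, wavelength=6.0, theta=0.0, sigma=3.0, phase=0.0):
    """A 2D Gabor filter: a sinusoidal carrier under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate coordinates into the filter's preferred orientation
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength + phase)
    return envelope * carrier

# Orientation tuning: a grating matched to the kernel's carrier drives it
# strongly; the same grating rotated 90 degrees barely registers.
xgrid = np.mgrid[-10:11, -10:11][1]
grating = np.cos(2 * np.pi * xgrid / 6.0)
k = gabor_kernel()
r_pref = abs((k * grating).sum())
r_orth = abs((k * grating.T).sum())
print(r_pref, r_orth)
```

The filter's inner product with the scene is a local Fourier coefficient—amplitude, phase, and sign in one operation.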

From the optic nerve, interocular phase interference begins mixing at the optic chiasm—also one of the first places where we encounter a more atavistic subcortical computation in the CNS, as information interacts with regions such as the hypothalamus to calibrate the body's hormonal cycles and circadian rhythm. After decussating along the left/right visual-field axes, the retinally preprocessed information is further organized in the thalamus, which acts as a governor regulating the flow of information across the six layers that form the koniocellular, parvocellular, and magnocellular routes. These packets become ready for cortical processing according to eye of origin (ocular dominance), ON/OFF sign, and spatial-frequency band. This LGN switchboard is also where information begins to bifurcate toward the hippocampus, basal ganglia, and other subcortical structures.

As information reaches the primary visual cortex (V1), cortical columns decompose the already "cleaned and organized" oscillatory signal into spatial frequencies tiling the visual field. V1 hypercolumns are arranged so that orientation and ocular dominance vary smoothly across the cortical sheet, and each neuron's sub-threshold integration window (its dendritic and lateral spread) realizes a graph convolution over the LGN packet field, with time-constant diversity setting receptive-field size. Fast spiny stellates act on center-frequency packets; slower star pyramids average over broader scales.
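A toy version of this graph-convolution reading, assuming Gaussian lateral weights on a flat sheet (with sigma standing in for a neuron's dendritic/lateral spread; all values illustrative):

```python
import numpy as np

def sheet_convolution(signal, positions, sigma):
    """One 'graph convolution' step on a cortical sheet: each unit takes a
    Gaussian-weighted average of activity, weighted by lateral distance."""
    d2 = ((positions[:, None, :] - positions[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma**2))
    W /= W.sum(axis=1, keepdims=True)  # normalize each unit's inputs
    return W @ signal

# A 20x20 sheet with a single active "packet" near the middle.
xs, ys = np.meshgrid(np.arange(20), np.arange(20))
pos = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
packet = np.zeros(400)
packet[210] = 1.0

narrow = sheet_convolution(packet, pos, sigma=1.0)  # small integration window
broad = sheet_convolution(packet, pos, sigma=4.0)   # large integration window

# The wider spread dilutes the same packet over a larger neighbourhood.
print(narrow.max(), broad.max())
```

Varying sigma is the whole trick: the same convolution equation, run with different lateral spreads, yields receptive fields of different sizes—exactly the time-constant diversity described above.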

Notice how computation has transformed: from the fixed filtering operations of retinal circuits and the specialized routing of the LGN, we now see the beginning of flexible, learnable transformations in V1's graph convolutions—a computational strategy that becomes increasingly dominant as we ascend to association areas. As information moves up the hierarchy of the dorsal stream (V2, V4, MT), feed-forward drive comes from neurons with progressively longer time constants, so the same equations yield larger convolutional kernels. Meanwhile, feedback from inferotemporal and parietal areas can inject prediction errors that—together with the outputs of subcortical processing—stabilize and tune the oscillatory network by adjusting and biasing its phases and amplitudes.

Finally, at the level of association cortex—prefrontal, parietal, temporal, retrosplenial—the signals arriving from every sensory stream have already been re-expressed in a shared oscillatory code: amplitude and phase envelopes lie on comparable time-scales, carrier frequencies have been band-limited, and polarity is marked in complementary ON/OFF channels. The cochlea, for example, delivers a true Fourier decomposition along its tonotopic axis, the vestibular nuclei send semicircular-canal phase signals that match those frequency bands, and somatosensory afferents enter S1 already centre-surround filtered by dorsal-column nuclei. What reaches the association cortices, therefore, is a commensurate set of basis functions that can now be mixed, superposed, and remapped with minimal additional conversion effort.

Consider visually guided reaching. Fronto-parietal cortex (IPS ↔ PMd) continuously recomputes trajectory predictions under changing goals and dynamics; its state can be reconfigured within a single learning session. In contrast, the basal-ganglia–brain-stem loop selects whether the reach is even permitted (Go/No-Go) and supplies a single scalar urgency signal—a dedicated controller with little representational versatility. Flexibility therefore resides in corticocortical manifolds, while subcortical circuits supply streamlined, policy-specific computations. I think of the computations in these subcortical hubs as fast, evolution-hardened transforms. The basal ganglia compress fronto-parietal state vectors into a low-dimensional "action policy" via dopaminergic temporal-difference learning; once cortico-striatal weights stabilize, downstream pallidal gating becomes quasi-reflexive. The superior colliculus fuses retinotopic and auditory space in a hard-wired map that can trigger saccades in under 70 ms—a map that is (to my knowledge) virtually untouched by adult plasticity.
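The dopaminergic temporal-difference learning mentioned here has a standard tabular form. A minimal TD(0) sketch on a toy chain of states (the reward layout is purely illustrative) shows how a scalar prediction error sculpts values:

```python
import numpy as np

def td_values(rewards, n_episodes=2000, alpha=0.1, gamma=0.9):
    """Tabular TD(0) on a fixed left-to-right chain of states:
    V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s)).
    A single dopamine-like scalar error drives every update."""
    n = len(rewards)
    V = np.zeros(n + 1)  # extra terminal state with V = 0
    for _ in range(n_episodes):
        for s in range(n):
            delta = rewards[s] + gamma * V[s + 1] - V[s]  # prediction error
            V[s] += alpha * delta
    return V[:n]

# Reward only at the end of the chain; earlier states inherit
# exponentially discounted value through the error signal alone.
V = td_values([0.0, 0.0, 0.0, 1.0])
print(np.round(V, 3))  # ~ [0.729, 0.81, 0.9, 1.0]
```

The entire "policy" is compressed into one scalar teaching signal per transition—representationally poor, but fast and cheap, which is exactly the point of the subcortical controller.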

Another subcortical structure, the cerebellum, breaks the simple flexible-cortex/rigid-subcortex dichotomy. Its granule cell layer fans inputs into a 1000-fold higher-dimensional space; parallel fibres then drive Purkinje cells whose plasticity is supervised by climbing-fibre error signals. This architecture resembles a massively parallel error-correction co-processor—like a dedicated biological GPU—that can adapt almost as quickly as association cortex yet remains specialized for timing- and gain-adjustment tasks. Critically, cerebellar outputs are routed back to cortex via thalamus, inserting rapid micro-corrections into the very predictive loops discussed above.
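The granule-layer expansion plus climbing-fibre-supervised readout can be caricatured as a random high-dimensional projection trained with a delta rule. All dimensions, thresholds, and learning rates here are illustrative, and the nonlinear target is an arbitrary stand-in for a timing/gain correction:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Mossy fibre" inputs (4-D) expanded into a 1000-D "granule layer"
# via a fixed random projection with a threshold nonlinearity.
n_in, n_granule, n_samples = 4, 1000, 200
G = rng.normal(size=(n_granule, n_in))

def granule_layer(x):
    return np.maximum(G @ x - 1.0, 0.0)  # sparse thresholded expansion

# Target: an arbitrary nonlinear function standing in for a motor correction.
X = rng.normal(size=(n_samples, n_in))
y = np.sin(X[:, 0]) * X[:, 1]

# "Purkinje" readout trained by a climbing-fibre-like error signal.
w = np.zeros(n_granule)
for _ in range(100):
    for x, target in zip(X, y):
        h = granule_layer(x)
        error = target - w @ h                  # supervised error
        w += 0.5 * error * h / (h @ h + 1e-8)   # normalized delta rule

pred = np.array([w @ granule_layer(x) for x in X])
corr = np.corrcoef(pred, y)[0, 1]
print(round(corr, 3))
```

The expansion makes a nonlinear mapping linearly learnable at the readout—one plausible reading of why a specialized structure can nonetheless adapt almost as quickly as association cortex.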

The cortex thus operates as a flexible, reconfigurable front-end capable of arbitrarily rich internal modelling, while subcortical circuits—and cerebellum acting as its high-throughput coprocessor—supply the domain-specific, resource-efficient algorithms that keep the organism adaptive on behaviorally relevant time-scales. As information converges within the thalamocortical "switchboard" and interfaces with the frontal/pre-frontal association cortices, we get neural algorithms and computations (through dynamics and latent dimensional manifolds) commensurate with the generalized multimodal predictions we see in sophisticated animal behavior.

Afterword

The differential analyzer framing emerged from a conviction that the brain's computational principles are ancient—not in the sense of being primitive, but in the sense of being fundamental. The same mathematics that governs spinning brass discs governs spiking neurons. Structure-preserving representation is not a design choice; it is the only way analog systems can compute.

What surprised me in writing this essay was how naturally the visual pathway reveals the cortical/subcortical division. Following information from photoreceptor to association cortex, you can watch the transition happen: from fixed filtering to flexible remapping, from hardwired routing to learnable transformation, from specialized modules to general-purpose manifolds.

The cerebellum remains the most fascinating exception to the clean dichotomy. It has the plasticity of cortex but the specialization of subcortex—a high-dimensional error-correction coprocessor that somehow bridges both worlds. I suspect understanding how the cerebellum achieves this will be key to understanding how biological systems balance pre-programmed structure with adaptive flexibility.

I plan to write more extensively about Lord Kelvin's harmonic analyzers and Vannevar Bush's differential analyzers in a future piece. There is a rich intellectual history here—connecting nineteenth-century physics, early twentieth-century engineering, and contemporary neuroscience—that deserves its own treatment. The tide predictor is not just a metaphor; it is a proof of concept for structure-preserving computation that predates digital computers by nearly a century.