Through-Lines: Insights from my Qualification Process & The Eigenvectors of Thought
Reflections on my doctoral qualifying examination—how forty-eight papers, four mentors, and three essays revealed the invariant structures underlying my approach to neuroscience, physics, and computation.
The qualifying examination is a strange ritual. You spend months reading—deeply, obsessively—across four different domains with four different mentors, accumulating papers like sedimentary layers. Then you sit down to answer questions that force you to synthesize, to find the threads, to discover what you actually think.
I didn't fully understand what I thought until I wrote these essays.
The Reading
The Interdisciplinary Neuroscience Program (INP) structures qualification around breadth: four professors, roughly twelve papers each, spanning whatever territory you and your mentors decide is essential to your formation as a scientist. My readings ranged across dynamical systems theory, network neuroscience, computational models of learning, and the philosophy of representation.
What strikes me now, looking back at those forty-eight papers stacked in my mind, is how they weren't forty-eight separate things. They were forty-eight perspectives on the same set of questions, viewed from different angles. Kuramoto on coupled oscillators. Barabási on scale-free networks. Hopfield on attractor dynamics. Gardner on grid cells and toroidal manifolds. Sharpee on hyperbolic geometry for memory. Friston on free energy. Each paper was a window into the same room. Gazing through that window is what first beckoned me to pursue a PhD in neuroscience, and this process helped me organize what had previously felt like ethereal intuition into a structured thesis, forcing these turbulent ideas into laminar flow as words on the page.
The qualification reading list isn't just preparation. It's installation. The papers become part of your cognitive architecture, shaping how you see problems before you're consciously aware of seeing them at all. Or at least that's my inner poet's take on the platonic goal of the INP's qualification process.
The Writing
The written exam gives you a set of questions from each professor—open-ended, expansive, the kind of questions that could serve as structural girders for the bridges of our dissertations and eventual careers. We each pick three and have two days to write them up.
I remember staring at the questions, feeling the familiar vertigo of too much possibility. Then something shifted. I stopped trying to answer the questions as posed and started listening for what I actually wanted to say. The questions became occasions, not constraints.
Professor McCarthy asked about developmental programming—whether the brain might "compile" computational primitives before birth. I found myself thinking about megapode chicks, those impossible birds that fly and hunt on their first day of life. No matched filter explains that. Something deeper must be pre-installed.
Professor Chang asked about the difference between cortical and subcortical computation. I reached for differential analyzers—those beautiful analog machines that solve equations by embodying them in spinning brass. The brain, I argued, operates on the same principle: computation as structural correspondence, not symbol manipulation.
Professor Lynn asked about preferential attachment in networks and its relationship to Hebbian plasticity. I started with a party metaphor—the popular get more popular—and ended somewhere near the foundations of statistical physics, watching the same mathematics appear in magnets and forest fires and synaptic weight distributions.
Three questions. Three essays. But as I wrote, I kept noticing echoes. The same ideas kept appearing in different costumes.
The Through-Lines
Here is what I learned about my own thinking:
I believe computation is structure-preservation. The differential analyzer doesn't calculate tide tables by manipulating symbols; it embodies the tidal equations in its physical structure. The brain doesn't represent the world by storing descriptions; it builds dynamical models that share the relational structure of what they represent. This conviction—that computation is isomorphism, not symbol-shuffling—runs through everything I wrote.
I believe the boundary between structure and dynamics is illusory. Development installs attractor geometries. Learning refines them. The network shapes the correlations; the correlations reshape the network. Structure and dynamics aren't separate categories—they're two views of the same continuous process of self-organization.
I believe scale invariance is telling us something fundamental. Power laws appear everywhere: synaptic weights, network degree distributions, earthquake magnitudes, forest fire sizes. Systems tune themselves to criticality—the boundary between order and chaos where sensitivity and stability coexist. The brain may be one of these systems, operating at the edge because that's where computation is most powerful.
I believe the same mathematics keeps appearing because reality has deep structure. The Ising model describes magnets. It also describes neural networks. Preferential attachment describes the growth of the internet. It also describes Hebbian plasticity. These aren't analogies—they're identities. The universe is more unified than our disciplinary boundaries suggest.
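The "rich get richer" rule behind both network growth and Hebbian strengthening can be made concrete in a few lines. Below is a minimal sketch of preferential attachment in the Barabási-Albert style; the function `grow_network` and its parameters are my own illustrative choices, not code from the essays. The trick is that sampling uniformly from a list of edge endpoints selects nodes with probability proportional to their degree.

```python
import random
from collections import Counter

def grow_network(n_nodes, m=2, seed=42):
    """Grow a network where each new node attaches m edges to existing
    nodes, chosen with probability proportional to their current degree.
    Sampling uniformly from the list of edge endpoints implements the
    degree-proportional ('the popular get more popular') rule."""
    rng = random.Random(seed)
    endpoints = []          # every node appears here once per incident edge
    degree = Counter()
    # Seed network: a small clique of m + 1 nodes.
    for i in range(m + 1):
        for j in range(i + 1, m + 1):
            endpoints += [i, j]
            degree[i] += 1
            degree[j] += 1
    for new in range(m + 1, n_nodes):
        targets = set()
        while len(targets) < m:             # m distinct targets
            targets.add(rng.choice(endpoints))
        for t in targets:
            endpoints += [new, t]
            degree[new] += 1
            degree[t] += 1
    return degree

degrees = grow_network(2000)
# Heavy tail: a handful of hubs accumulate far more links than the typical node.
print(max(degrees.values()), min(degrees.values()))
```

Plotting the degree distribution of the result on log-log axes shows the straight-line signature of a power law—the same statistical shape the essays find in synaptic weights and earthquake magnitudes.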
At its core, mathematics is the language that unites fields that seem, on the surface, far removed from one another. When we ask "how does energy flow through a crystal lattice as heat?" or "how do thoughts arise from microvolt fluctuations across billions of interconnected neurons?"—we are asking questions that hark back to Shannon's information theory and, in their most fundamental form, to the nature of symbols and numbers themselves.
This is a guiding question for me, one that nods to Wigner's famous essay "The Unreasonable Effectiveness of Mathematics in the Natural Sciences." If all of these processes are unified under the same language of energy and information conservation, if they all exhibit criticality and local stochastic dynamics, then perhaps the answers we seek lie not within any single discipline but at the intersection of all these reference frames—once we realize we are all looking at the same fundamental laws.
The Eigenvectors
In linear algebra, eigenvectors are the directions that remain invariant under transformation—the axes along which a system stretches or compresses without rotating. If you want to understand a transformation, find its eigenvectors.
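The invariance described above can be seen directly with power iteration: repeatedly applying a matrix to a vector and renormalizing converges to the dominant eigenvector, the direction the transformation stretches without rotating. This is a self-contained toy example (the matrix and function names are mine, chosen only for illustration).

```python
def power_iteration(matrix, n_steps=100):
    """Repeatedly apply a 2x2 matrix to a vector and renormalize.
    The direction converges to the dominant eigenvector: the axis
    the transformation stretches without rotating."""
    v = [1.0, 0.0]
    for _ in range(n_steps):
        w = [matrix[0][0] * v[0] + matrix[0][1] * v[1],
             matrix[1][0] * v[0] + matrix[1][1] * v[1]]
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = [w[0] / norm, w[1] / norm]
    return v

# A symmetric transformation whose dominant eigenvector is [1, 1] / sqrt(2):
A = [[2.0, 1.0], [1.0, 2.0]]
v = power_iteration(A)
print(v)  # -> approximately [0.7071, 0.7071]
```

Whatever direction you start from (other than an exactly orthogonal one), iteration pulls you onto the same invariant axis—which is precisely the sense in which these essays' recurring themes are eigenvectors of my thinking.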
These essays revealed the eigenvectors of my thinking. No matter what question I was asked, my answers kept aligning along the same directions: toward dynamics over static description, toward structure-preservation over symbol manipulation, toward universality over domain-specificity, toward the mathematics of criticality and phase transitions.
This isn't something I chose. It's something I discovered. The qualification process didn't teach me what to think—it showed me how I already think, made explicit the implicit architecture that was shaping my engagement with every paper I read.
The Thesis Beneath the Thesis
My dissertation research focuses on programming reservoir computers—on developing analytical frameworks for specifying the dynamics of recurrent neural networks without iterative training. But that's the surface description. The deeper question, the one these essays helped me articulate, is this:
What is the relationship between the structure of a dynamical system and the computations it can perform?
The megapode chick suggests that evolution has solved this problem—that development can compile sophisticated computations into neural connectivity. The differential analyzer suggests that analog systems solve it through structural correspondence. The preferential attachment model suggests that simple local rules can generate the global architectures that support computation.
My thesis is an attempt to make these intuitions rigorous. To develop a mathematical language for the structure-preserving representations that brains and machines might share. To understand how dynamics can be programmed, not just trained. And, most importantly, to bring these insights into the world.
What Qualification Taught Me
The ritual worked, at least for me. Not because passing some institutional hurdle matters in itself, but because the process—the reading, the discussions, the synthesizing, the pressure of having to articulate what you think under time constraint—forced a crystallization.
Before qualification, I had intuitions. After, I had a framework. The same ideas, but now I could see their connections. The same commitments, but now I could name them.
The three essays in this collection are artifacts of that crystallization. They're responses to specific questions, but they're also windows into the way I think, not only about neuroscience but about the deep connections between the biological and physical worlds. I share them here not as finished arguments but as snapshots of a mind in the process of discovering its own shape.
The eigenvectors are still the same. The transformations continue.