The greatest neuroscientist you’ve never heard of

Sai Gaddam
Jul 26, 2021

(This is a review of Conscious Mind, Resonant Brain: How Each Brain Makes a Mind)

On October 27th 1864, a Scottish laird and polymath delivered a scientific paper to the venerable Royal Society in London. In keeping with tradition, it was read to the society’s distinguished members at a meeting convened a month later. The contents of this paper would forever change our understanding of the world, unifying electricity and magnetism. It would, more than any other idea before or after, also unify the world, unshackling communication from earth-bound wires and suffusing the skies with thoughts, hopes, desires, and TikTok videos ricocheting at the speed of light itself. But on the day of its reading, and for nearly twenty years after, its impact went virtually unnoticed. The math was considered complex and difficult, the equations an impenetrable thicket of all known notation. For an era steeped in Newtonian mathematics and in the metaphors of “one-fluid” and “two-fluid” theories, and of storing charge in Leyden jars, it was simply too far ahead of its time.

It was only two decades later, when electromagnetic waves were detected for the very first time, finally confirming his predictions, that James Clerk Maxwell’s equations would win universal acclaim and acceptance.

We are likely in a similar epochal interregnum now. Just as electric motors and electromagnetic generators were invented and put to use before the underlying principles were really understood, we are now at the dawn of an AI age, with astonishingly powerful artificially intelligent machines being put to use without any real understanding of how and why they work.

The unification of mind and brain holds the promise of unshackling intelligence and consciousness from their earth-bound biological vessels and paving the path to a new world of soaring superintelligences that currently only inhabit the realms of science-fiction. And just as in Clerk Maxwell’s time, the unifying framework may already be here with us, unappreciated and far ahead of its time.

Stephen Grossberg at age 27 in 1967 (as an assistant professor at MIT where he created and taught a course about neural networks), and at age 76 in 2016. credit

To neuroscientists and AI researchers of recent vintage, the name Stephen Grossberg may not ring a bell. Failing to recognize the name of one of the greatest researchers in the field ought to be heresy, but it is perhaps understandable in neuroscience, considering the subject’s staggering complexity, vastness, and the innumerable trails leading into it. But those in the know, know.

Here is the British neuroscientist Karl Friston (a Fellow of the aforementioned Royal Society) speaking of Grossberg’s many contributions:

“Whenever you claim to be “the first to do” this or that in artificial intelligence, it is customary — and correct — to add “with the exception of Stephen Grossberg”. Quite simply, Stephen is a living giant and foundational architect of the field.”

And here is cognitive scientist Margaret Boden discussing Grossberg’s early research in her fantastic book, Mind as Machine: A History of Cognitive Science:

“…one case in point is the early work of the highly creative cognitive scientist Stephen Grossberg. He was perhaps the first to formulate three ideas that are influential today under the names of other people…Grossberg also pioneered many more notions — including back propagation — that are commonly attributed to others, if not actually named after them.”

(Those in the know have also called Grossberg the “Newton and Einstein of the Mind.” Comparisons to the three greatest physicists ever are warranted, as we’ll see, but when measuring his contributions for their world-transforming ability, Maxwell edges ahead. Incidentally, Newton, Maxwell, and Einstein have all reconfigured our notions of what it means to measure. And Grossberg continues this tradition.)

Over 60 years (starting in his freshman year in 1957!), Grossberg has pioneered and developed foundational intuitions and mathematics that have become invisible tramlines in the field. The equations and architectures he formulated offer a rare and breathtaking unifying framework that can model and explain a vast number of brain processes. Many of his predictions have since been confirmed by experimental results “5–30 years after they were first published.” How, one might ask, does such seminal work go unrecognized for so long? In Grossberg’s own words, it might have been because the path he chose was “lonely and independent” and whenever he had an idea, he “usually had it too far ahead of its time” or “would develop it too mathematically for most readers.”
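To give a flavor of the mathematics involved, here is one representative example, reconstructed from Grossberg’s published papers rather than quoted from the book: his “shunting” on-center off-surround network, in which the activity x_i of the i-th cell obeys

```latex
\frac{dx_i}{dt} = -A x_i + (B - x_i)\, I_i - (x_i + C) \sum_{k \neq i} I_k
```

where I_i is the excitatory input to cell i, the sum over neighbors is its inhibitory input, A is a passive decay rate, and B and -C bound the activity above and below. The multiplicative (“shunting”) terms keep activities bounded no matter how intense the inputs, which is one reason such networks stay stable where simpler additive models would saturate.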

Fortunately for us, Stephen Grossberg has finally published his magnum opus, which brings together decades of his work in its most accessible form. (Having read Grossberg’s papers first as a Ph.D. student nearly two decades ago, and having revisited his body of work over the past two years for a book I am co-authoring, I can vouch for the “most accessible” descriptor. Update: Journey of the Mind is out in stores on March 8th, 2022; here’s a review.)

Birds, Frogs, and the Great Mountain

Young AI practitioners and computational neuroscientists used to snazzy deep learning frameworks and mammoth models with hundreds of layers and hundreds of millions of parameters might wonder if some of the decades-old material in Grossberg’s 750-page treatise is dated. The British-American polymath Freeman Dyson once likened mathematicians to birds and frogs.

Birds fly high in the air and survey broad vistas of mathematics out to the far horizon. They delight in concepts that unify our thinking and bring together diverse problems from different parts of the landscape. Frogs live in the mud below and see only the flowers that grow nearby. They delight in the details of particular objects, and they solve problems one at a time.

Grossberg’s genius lies in embodying both, and in recognizing that “obvious hypotheses, with which no one would disagree, together imply conclusions about deep properties of brain organization.” Can a deep understanding of boundaries and surfaces of simple, toy objects lead to the most profound insights into the nature of learning and attention? Can an understanding of how silence flows across time and alters the perception of words uttered before it, help explain consciousness itself? Grossberg shows how.

Mission: acquire, gnaw, try not to destroy

To appreciate how such a leap might even be possible, consider the following scene: Our 15-month-old sees his older sister playing with what looks like a new thing she’s just unboxed. His shrewd sister, fully aware of his annoying tendency to find, acquire, gnaw, and invariably destroy all that is new, is attempting to play behind a rocking chair, out of his sight. No luck. He has spotted something of interest, and he initiates his mission to seek it out, tottering across a floor dappled by the evening sunlight, much to his sister’s audible dismay.

Here’s a non-exhaustive list of what his mind must do to help him succeed at this:

  • Disregard varying sunlight intensity to recognize familiar objects and surfaces in the room (discounting the illuminant)
  • Detect the presence of a potential new thing in a cluttered scene of familiar objects (novelty detection)
  • Detect an unknown thing even if it is partially hidden (recognizing occluded objects)
  • Learn what this new thing looks like while not knowing what it is, and without any explicit instruction (unsupervised learning)
  • Refocus his interest from his toy car to this new thing (attentional shift)
  • Translate the increased motivation to acquire this potential new thing into a motor sequence (motor planning)
  • Convert the visual coordinates of this new thing into coordinates his still-growing, still-learning-to-walk, 15-month-old musculature can understand and reach for (coordinate transformation)
  • Keep the goal of reaching this new thing in mind while being distracted by his sister’s shouts (working memory)
  • Increase his tottering speed to match his urgency, but not so much as to destabilize his baby body (adaptive timing)
  • Learn that his sister’s shouts are not the label for this new about-to-be-acquired thing (attentional blocking)
  • Learn that the rustle of the wrapper that was heard a few moments before the spotting of the new thing was predictive of it, but not the evening bird calls streaming in from outside (associative learning)
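The unsupervised-learning and novelty-detection items on this list are the territory of Grossberg and Carpenter’s Adaptive Resonance Theory (ART). As a toy illustration only — real ART models are continuous-time differential-equation networks, and the class and parameter names below are mine — here is the core resonance-versus-reset logic for binary patterns, loosely in the spirit of ART 1:

```python
class ToyART:
    """Toy binary-pattern categorizer, loosely in the spirit of ART 1.

    A hypothetical sketch: it only illustrates the vigilance test that
    decides between refining a known category (resonance) and
    recruiting a new one (novelty detection).
    """

    def __init__(self, vigilance=0.6, beta=1.0):
        self.rho = vigilance   # how complete a match must be to "resonate"
        self.beta = beta       # choice-function tie-breaker
        self.categories = []   # learned binary prototype vectors

    @staticmethod
    def _overlap(inp, w):
        # Size of the featural intersection |input AND prototype|.
        return sum(i & x for i, x in zip(inp, w))

    def learn(self, inp):
        """Return the index of the resonating category, learning as we go."""
        assert any(inp), "patterns must contain at least one active feature"
        # Rank existing categories by the ART 1 choice function.
        order = sorted(
            range(len(self.categories)),
            key=lambda j: self._overlap(inp, self.categories[j])
            / (self.beta + sum(self.categories[j])),
            reverse=True,
        )
        for j in order:
            # Vigilance test: does the best match explain enough of the input?
            if self._overlap(inp, self.categories[j]) / sum(inp) >= self.rho:
                # Resonance: shrink the prototype to the shared features.
                self.categories[j] = [
                    i & x for i, x in zip(inp, self.categories[j])
                ]
                return j
        # Mismatch reset everywhere: the input is novel; recruit a category.
        self.categories.append(list(inp))
        return len(self.categories) - 1
```

Feeding it two similar patterns and one dissimilar one shows the behavior: the similar pair lands in one category, while the dissimilar pattern fails the vigilance test against every existing prototype and triggers the creation of a new category, all without labels or instruction.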

This single, seemingly trivial act requires the orchestration of such an astonishing number of cognitive, emotional, and learning competencies that modeling it computationally seems daunting. But Grossberg shows how only a few equations and models, “suitably specialized,” can explain all of these competencies. Moreover, he also shows why the mind must be built that way, and not as a bag of disparate tricks, because that is the only path to the holy grail: our well-integrated, continuously updated, uniquely-me sense of Self. By building on a shared set of fundamentals, and by allowing all sensory, cognitive, and emotional modalities to interact seamlessly, our mind makes a Self cohere, stabilize, and emerge. The bird’s-eye view emerges from the diligence of many a frog croaking in synchrony. Such an integrated, interconnected, interchangeable architecture, while allowing for the miracle that is our Self, occasionally also allows for something like this to happen:

Hear this on a loop while switching attention between the phrases

(Grossberg’s models can explain these viral perceptual illusions as well; this is a feature, not a bug.)

We have leapfrogged to the emergence of a Self without actually defining what a Mind is, so now is an opportune time:

The Mind is what allows an individual to autonomously adapt in real-time to a changing world that is filled with unexpected events.

The blooming, buzzing confusion of an infant’s world only gives way to a chaotic, nonlinear, nonstationary world of inherent ambiguities. The Self, then, is the illusion that emerges from the Mind’s continuous tussle with the irresolvably indeterminate. Note that these twinned definitions of the Mind and Self do not invoke any biological substrate. A logical extension is that it is entirely possible to see both emerge in sufficiently sophisticated synthetic bodies. Indeed, Grossberg hints at this future when he states the following:

the development of increasingly autonomous adaptive intelligent agents will become the most important technological revolution of our age, and that insights from biological intelligence will play an increasingly vital role in these developments during the next generation

The new world order needs Gregorian AIs

The philosopher Daniel Dennett might characterize current-generation AI as Skinnerian creatures that, to paraphrase his definition, can adjust their behavior in reaction to the “reinforcement” of backpropagated errors, but only do so by blindly testing random new behaviors, and might perish before learning anything useful. This limitation has been overcome by engorging these creatures with billions of data points about the world, allowing for a simulacrum of competence. But lacking the capability for building an integrated, continuously updated, and testable model of the world they inhabit, they break down in brittle, unexplainable ways. Like the most advanced self-driving car in the world mistaking an orange-tinged moon for a yellow light. Dennett’s Gregorian creatures would avoid these mistakes.

The Gregorian creature’s Umwelt is well stocked with thinking tools, both abstract and concrete: arithmetic and democracy and double-blind studies, and microscopes, maps, and computers. A bird in a cage may see as many words every day (on newspaper lining the cage floor) as a human being does, but the words are not thinking tools in the bird’s Umwelt.

— Daniel Dennett, From Bacteria to Bach and Back

The first draft blueprint for these Gregorian creatures might well lie in Stephen Grossberg’s work, summarized in his book.

— — — — — — — — — — — — — — — — — — — — — — — — — — — —

Update (04/08/2021)

Update (15/09/2022):

Journey of the Mind: How Thinking Emerged from Chaos

Our book, Journey of the Mind, is now out in stores. Many great reviews, but I think this one from a kind reader calling it the “most fascinating read of my life so far” has to be my favorite :)


Sai Gaddam

Co-Founder @ Comini Learning; Co-Author: Journey of the Mind (2022), A Billion Wicked Thoughts (2011); PhD, Computational Neuroscience