Brain and Mathematics
(Distinguished Research Professor Cognitive Neuroscience, Department of Psychology, Georgetown University; and School of Computational Sciences, George Mason University
Professor Emeritus Stanford and Radford Universities)
To be published in "Brain and Being: At the Boundary between Science, Philosophy, Language and Art", John Benjamins, 2004
It sometimes appears that the resistance to accepting the evidence that cortical cells are responding to the two dimensional Fourier components of stimuli [is due] to a general unease about positing that a complex mathematical operation similar to Fourier analysis might take place in a biological structure like cortical cells. It is almost as if this evoked for some, a specter of a little man sitting in a corner of the cell huddled over a calculator. Nothing of the sort is of course implied: the cells carry out their processing by summation and inhibition and other physiological interactions within their receptive fields. There is no more contradiction between a functional description of some electronic component being a multiplier and its being made up of transistors and wired in a certain fashion. The one level describes the process, the other states the mechanism. (DeValois & DeValois, 1988, p. 288)
“The fact that the formalism describing the brain microprocess is identical with the physical microprocess allows two interpretations: (a) the neural microprocess is in fact based on relations among microphysical quantum events, and (b) the laws describing quantum physics are applicable to certain macrophysical interactions when these attain some special characteristics” (p. 270). The formalism referred to describes the receptive fields of sensory neurons in the brain cortex. These were mapped in terms of Gabor wavelets or, more generally, “four dimensional information hyperspaces based on Jacobi functions (Atick and Redlich, 1989) or Wigner distributions (Wechsler, 1991)”. (Pribram, 1991, Epilogue)
A PERSONAL ROAD OF DISCOVERY
The story of how, as a non-mathematician, my interest was engaged in Gabor-like mathematics is worth repeating. Why would I follow such a path, when so many neurophysiologists and experimental psychologists shun mathematical expressions (one could say, mathematical metaphors), with the exception of statistical analyses, in attempts to understand brain/mind transactions?
The story begins in the late 1930s, when I was working in Ralph Gerard's laboratory at the University of Chicago. Gerard showed us that a cut separating two parts of the brain cortex did not abolish transmission of an electrical stimulus across the separation as long as the parts were in some sort of contact. Meanwhile, I discussed these observations with my physics professor. I argued with both Gerard and the physicist that such large scale phenomena could not account for the brain processes that allowed us to perceive, think and act. Gerard, of course, agreed but insisted that more than simple neuronal connections were important in understanding brain function. My physics professor also agreed but had nothing to offer. He may have mentioned quantum physics but was not versed in it.
At about the same time, Walter Miles, Lloyd Beck and I were pondering the neural substrate of vision. I was writing an undergraduate thesis on retinal processing in color sensation under the supervision of Polyak, making the point that beyond the receptors, the bipolar cells seemed to differentiate the three color bands to which the receptors were sensitive into a greater number of more restricted bandwidths. We bemoaned our inability to come up with some similar understanding for form vision. I distinctly recall saying: “Wouldn't it be wonderful if we had a spectral explanation for brain processing of black and white patterns?”
By 1948 I had my own laboratory at Yale University and began a collaboration with Wolfgang Koehler. Koehler told me of his Direct Current hypothesis as the basis for cortical processing in vision and demonstrated to me and my laboratory PhD students, Mort Mishkin and Larry Weiskrantz, just how the anatomy of the auditory system would explain how the scalp auditory response at the apex of the skull was transmitted by the brain's tissue: no neural connections needed. Shades of my experience with Gerard.
This time I set to work to test Koehler's hypothesis. We worked with monkeys and humans, displaying a white cardboard in front of their eyes while recording from their visual cortex. (It was easy in those days to do such experiments with awake humans, with their permission. Surgery had been done for clinical purposes with local anesthesia of the scalp; touching the brain itself is not felt by the patient.) Indeed, we found a Direct Current (DC) shift during the display. One of my students and I then repeated the experiment using auditory stimulation in monkeys and obtained the same result in recording from the auditory cortex. (See Pribram 1971, Lecture 6, for review.)
In addition, I created minute epileptogenic foci in the visual cortex of monkeys and tested their ability to distinguish very fine horizontal from vertical lines. Once electrical seizures commenced, as shown by recordings from their visual cortex, I expected their ability to distinguish the lines to be impaired or even totally lost. The recordings showed large slow waves and total disruption of the normally patterned electroencephalogram (EEG).
Contrary to expectation, the monkeys performed the task without any deficiency. Koehler exclaimed: “Now that you have disproved not only my theory of cortical function in perception but everyone else's, as well, what are you going to do?” I answered: “I'll keep my mouth shut”. In fact, I refused to teach a course on brain mechanisms in sensation and perception when I transferred to Stanford University (in 1958) shortly thereafter.
I did not come up empty-handed, however. What did occur was that the epileptic seizures delayed the monkeys' learning of the task some sevenfold. This led to another series of experiments in which we imposed a DC current across the cortex from surface to depth and found that a cathodal current delayed learning while an anodal current enhanced it. There is more to this story but that has to wait for another occasion.
Once at Stanford I turned to other experiments that demonstrated cortical control of sensory input in the visual and auditory systems, feedback processes that were important to the conceptions Miller, Galanter and I had put forward in “Plans and the Structure of Behavior” (1960).
Some years into my tenure at Stanford, Ernest Hilgard and I were discussing an update of his introductory psychology text when he asked me about the status of our knowledge regarding brain physiology in perception. I answered that I was dissatisfied with what we knew: I and others had disproved Koehler's (1958) suggestion that perception could be ascribed to direct current brain electrical fields shaped like (isomorphic with) envisioned patterns. Hubel and Wiesel (1968) had just shown that elongated stimuli such as lines and edges were the best shapes to stimulate neurons in the primary visual receiving cortex, and that perception followed from putting together something like stick figures from these elementary sensitivities. As much of our perception depends on shadings and texture, the stick figure approach failed, for this and other reasons, to be satisfactory. I was stumped. Hilgard, ordinarily a very kind and patient person, seemed peeved and declared, on a second encounter, that he did not have the luxury of procrastination as he had to have something to say in the text. So he asked me once again to come up with some viable alternative to the ones I had so summarily dismissed.
I took the problem to my laboratory group and told them about Hilgard's problem and my dissatisfaction with the two extant proposals. I added that there was one other suggestion on offer, which had the advantage that neither I nor anyone else knew how it might work, either neurologically or with regard to perception: Lashley (1942) had proposed that interference patterns among wave fronts in brain electrical activity could serve as the substrate of perception and memory as well. This suited my earlier intuitions, but Lashley and I had discussed this alternative repeatedly without coming up with any idea of what wave fronts would look like in the brain. Nor could we figure out how, if they were there, they could account for anything at the behavioral level. These discussions, taking place between 1946 and 1948, became somewhat uncomfortable in regard to the book Don Hebb (1949) was writing at the time we were all together in the Yerkes Laboratory for Primate Biology in Florida. Lashley didn't like Hebb's formulation but could not express his reasons for this opinion: “Hebb is correct in all his details but he's just oh so wrong”.
Within a few days of my second encounter with Hilgard, Nico Spinelli, a postdoctoral fellow in my laboratory, brought in a paper written by John Eccles (Scientific American, 1958) in which he stated that although we could only examine synapses one by one, presynaptic branching axons set up synaptic wavefronts; functionally, it is these wavefronts that must be taken into consideration. I immediately realized (see Fig. 1-14, Languages of the Brain, 1971) that axons entering the synaptic domain from different directions would set up interference patterns. (It was one of those occasions when one feels an utter fool. The answer to Lashley's and my first question, as to where the waves in the brain were, had been staring us in the face, and we did not have the wit to see it during all those years of discussion.)
Within another few days I received the current issue of Scientific American, in which Emmett Leith and Juris Upatnieks (1965) described how the recording of interference patterns on film tremendously enhanced storage and processing capability. Images could readily be recovered from the store by appropriate procedures that had been described by Dennis Gabor (1946) almost two decades earlier. Gabor called his mathematical formulation a hologram.
Using the mathematical holographic process as a metaphor seemed like a miraculous answer to Hilgard's question. Shading, detail, texture, everything in a pattern that we perceive, can be accomplished with ease. Russell and Karen DeValois's (1988) book “Spatial Vision” and my (1991) book “Brain and Perception” provide detailed reviews of experimental results that support the conjecture that holography is a useful metaphor in coming to understand the brain/mind relation with regard to perception. Here I want to explore some further thoughts engendered by this use of a mathematical formulation to understand the brain/mind relation.
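The core of the metaphor can be put in computational form. What follows is a minimal numerical sketch (an editorial illustration, not Leith and Upatnieks' optical procedure; all values are arbitrary): a one-dimensional "object" interferes with a tilted reference wave, only the intensity of that interference is stored, as on film, and inverse-transforming the stored intensity recovers the object, displaced from the zero-order term.

```python
import cmath

def dft(x):
    """Plain discrete Fourier transform, O(N^2), sufficient for a demo."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

N = 64
f = [0.0] * N
f[0:4] = [4.0, 3.0, 2.0, 1.0]        # a small "object" pattern

F = dft(f)                            # object wave in the Fourier plane
n0 = 20                               # tilt of the reference wave
R = [cmath.exp(2j * cmath.pi * k * n0 / N) for k in range(N)]

# The hologram stores intensity only: |F + R|^2. The cross terms of this
# square carry the phase of the object wave; that is the whole trick.
holo = [abs(F[k] + R[k]) ** 2 for k in range(N)]

# "Illuminating" the hologram: inverse-transform the intensity pattern.
g = idft(holo)

# The object reappears shifted to n0, beside its twin image and a
# zero-order (autocorrelation) term near n = 0.
recovered = [abs(g[n0 + j]) for j in range(4)]
print(recovered)    # approximately [4.0, 3.0, 2.0, 1.0]
```

Note that every stored point of the hologram receives contributions from every point of the object, which is why local damage to a hologram degrades the whole image gracefully rather than deleting a part of it, the property that made the metaphor attractive for distributed memory.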
Some years later, in Paris, during a conference sponsored by UNESCO at which both Gabor and I were speakers, we had a wonderful dinner together. I told him about the holographic metaphor for brain processing and we discussed its Fourier basis. Gabor was pleased in general but stated that “brain processing [of the kind we were discussing] was Fourier-like but not exactly Fourier.” I asked what, then, such a relation might look like, and Gabor had no answer. Rather, we got onto a step-wise process that could compose the Fourier transform, an explanation that I later used to trace the development of the brain process from retina to cortex. Gabor never, then or later, told me about his 1946 contribution to communication theory and practice: he had developed a formalism to determine the maximum compressibility of a telephone message that leaves it still intelligible. He used the same mathematics that Heisenberg had used to describe processes in quantum physics and therefore called his “unit” a quantum of information. It took me several years to locate this contribution, which is referred to in Licklider's article on acoustics in Stevens's 1951 Handbook of Experimental Psychology.
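Gabor's quantum of information rests on the uncertainty relation between time and frequency, Δt · Δf ≥ 1/(4π): a signal cannot be arbitrarily concentrated in both domains at once, and the Gaussian-windowed signal achieves the minimum. The sketch below (an editorial illustration with arbitrarily chosen sampling parameters) checks this numerically for a sampled Gaussian:

```python
import cmath, math

# Sampled Gaussian envelope g(t) = exp(-t^2 / (2 sigma^2)), the waveform
# for which Gabor's time-frequency uncertainty product reaches its minimum.
sigma, dt = 1.0, 0.05
ts = [(n - 100) * dt for n in range(201)]            # t from -5 to +5
g = [math.exp(-t * t / (2 * sigma * sigma)) for t in ts]

# Effective duration: standard deviation of the energy density |g(t)|^2.
E = sum(v * v for v in g)
dt_eff = math.sqrt(sum(t * t * v * v for t, v in zip(ts, g)) / E)

# Spectrum via a plain DFT; bins above N/2 correspond to negative frequencies.
N = len(g)
G = [sum(g[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
     for k in range(N)]
fs = [(k if k <= N // 2 else k - N) / (N * dt) for k in range(N)]

# Effective bandwidth: standard deviation of the spectral density |G(f)|^2.
P = [abs(c) ** 2 for c in G]
df_eff = math.sqrt(sum(f * f * p for f, p in zip(fs, P)) / sum(P))

# The product approximates Gabor's limit 1/(4*pi), about 0.0796.
print(dt_eff * df_eff, 1 / (4 * math.pi))
```

Because no waveform does better than this limit, Gabor could tile the time-frequency plane into minimal cells, his elementary signals, and treat each cell as one quantum of information.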
Does this application indicate that the formalism of quantum physics applies more generally to other scales of inquiry? Alternatively, for brain function, at what scale does actual quantum physical processing take place? At what anatomical scale(s) do we find quantum coherence, and at what scale does decoherence occur? What relevance does this scale have for our experience and behavior?
To summarize: the formalisms that describe the holographic process and those that describe quanta of information apparently DO extend to scales other than the quantum. Today we use quantum holography to produce images with the technique of functional Magnetic Resonance Imaging (fMRI). The quantities described by terms of the formalisms, such as Planck's constant, will of course vary, but the formulations will to a large extent be self-similar. The important philosophical implications for the brain/mind issue have been addressed in depth by Henry Stapp on several occasions (e.g., 2003, “The Mindful Universe”) as well as by many others, including myself (e.g., Pribram, 1997, What is mind that the brain may order it?).
Deep and surface processing scales:
Brain, being material, has at some scale a quantum physical composition. The issue is whether the grain of this scale is pertinent to providing insights into those brain processes that organize experience and behavior. In my book “Languages of the Brain” (1971) I identify two very different scales at which brain systems operate. One such scale, familiar to most students of the nervous system, is composed of circuits made up of large fibers usually called axons. These circuits operate by virtue of nerve impulses that are propagated along the fibers by neighborhood depolarization of their membranes.
But other, less well popularized, operations take place in the fine branches of neurons. The connections between neurons (synapses) take place for the most part within these fine fibers. Pre-synaptically, the fine fibers are the terminal branches of axons that used to be called teledendrons. Both their existence and their name have more recently been largely ignored. Postsynaptically, the fine fibers are dendrites that compose a feltwork within which connections (synapses and electrical ephapses) are made in every direction. This feltwork acts as a processing web.
The mathematical descriptions of processing in the brain's circuits need to be different from those that describe processing in fine fibers. The problem that needs to be addressed with regard to circuits is that the connecting fibers are of different lengths and diameters, which can distort the conduction of a pattern. The problem that needs addressing with regard to fine fiber processing is that, practically speaking, there are no propagated impulses within the fine fibers, so conduction has to be accomplished passively. Roberto Llinas (2000; Pellionisz and Llinas 1979; 1985) has provided a tensor theory that addresses the propagation in circuits, and my holonomic (quantum holographic) theory models processing in the fine fibered web.
For me it has been useful to compare Llinas's theory with mine in order to detail their complementarity. The primary difference between the theories rests on the difference between the neural bases each refers to: Llinas is modeling neural circuits, what I (Pribram, 1997; Pribram and Bradley, 1998) have called a surface processing structure. Holonomic theory models what is going on in the fine fibered parts of these circuits, what I have referred to as deep processing. (The terms were borrowed from Noam Chomsky's analysis of linguistic structure and may perhaps be able to provide a neurological account of these aspects of linguistic processing.)
Despite the different scales of these anatomical substrates, both Llinas and I emphasize that the processing spacetime in the brain is not the same as the spacetime within which we ordinarily get about. Llinas developed a tensor theory that begins, as does holonomic theory, with oscillators made up of groups of neurons or their fine fibered parts. Next, both theories delineate frames of reference that can be described in terms of vectors. Llinas uses the covariance (and contravariance) among vectors to describe tensor matrices, whereas holonomic theory uses vectors in Hilbert phase space to express the covariance. Llinas's tensor metric is not limited to orthogonal coordinates, as is holonomic theory. (Llinas indicates that if the frame of reference is thought to be orthogonal, proof must be provided. I have provided such evidence in “Brain and Perception” and indicated when orthogonality must be abandoned in favor of non-linearity.)
In keeping with his caveat, Llinas does use the Fourier transform to describe covariation for the input, that is, the sensory driven vectors: “[There are] two different kinds of vectorial expressions both assigned to one and the same physical location P, an invariant. The components v_i of the input vector are covariant (they are obtained by the orthogonal projection method) while the components v^j of the output vector are contravariant (obtained by the parallelogram method)” (Pellionisz and Llinas 1985, p. 2953). As in the holonomic theory, the tensor theory needs to establish entities and targets, and it does this (as in the holonomic theory; see Pribram 1991, Lectures 5 and 6) by using the motor output to create contravariant vectors. The covariant-contravariant relationship is combined into a higher level invariant tensor metric.
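The projection-versus-parallelogram distinction in this quotation can be made concrete with a small numerical sketch (an editorial illustration; the basis and vector are arbitrary). In a non-orthogonal two-dimensional frame, the covariant components of a vector are its dot products with the basis vectors (orthogonal projection), the contravariant components solve the parallelogram decomposition, and the metric tensor g_ij converts one set into the other, yielding a coordinate-free invariant:

```python
import math

# A deliberately non-orthogonal ("skewed") frame of reference.
theta = math.pi / 3
e1 = (1.0, 0.0)
e2 = (math.cos(theta), math.sin(theta))

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

v = (2.0, 1.0)   # the invariant physical vector itself

# Covariant components: orthogonal projection onto each basis vector.
cov = (dot(v, e1), dot(v, e2))

# Contravariant components: solve v = c1*e1 + c2*e2 (the parallelogram
# rule) as a 2x2 linear system, here via Cramer's rule.
det = e1[0] * e2[1] - e1[1] * e2[0]
c1 = (v[0] * e2[1] - v[1] * e2[0]) / det
c2 = (e1[0] * v[1] - e1[1] * v[0]) / det
contra = (c1, c2)

# The metric tensor g_ij = e_i . e_j links the two descriptions:
# cov_i = sum_j g_ij * contra_j, and cov_i * contra_i is an invariant.
g = [[dot(e1, e1), dot(e1, e2)], [dot(e2, e1), dot(e2, e2)]]
lowered = (g[0][0] * c1 + g[0][1] * c2, g[1][0] * c1 + g[1][1] * c2)
invariant = cov[0] * contra[0] + cov[1] * contra[1]

print(cov, contra)
print(invariant, dot(v, v))   # both 5.0: the squared length, frame-independent
```

In an orthogonal frame the two sets of components coincide, which is why the distinction only surfaces once skewed frames of reference, such as those imposed by real musculature and receptor sheets, are admitted.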
Thus Llinas states that “sensory systems in the CNS are using expressions of covariant type while motor systems use components of a contravariant type” (p. 2953). This is similar to the use of motor systems in “Brain and Perception” to form Lie groups that produce the perception of invariants basic to object perception. Llinas's theory is more specific in that it spells out contravariant properties of the motor process. On the other hand, holonomic theory is more specific in specifying the neural substrate produced by nystagmoid and other such oscillating movements (which result in co-ordination of pixels moving together against a background of more randomly moving pixels).
Another advantage of the holonomic theory is that it can explain the fact that the processes that form the experiencing of objects, project them away from the processing medium. “Projection” can be experienced by viewing a transmission hologram. Georg von Bekesy (1967) demonstrated this attribute of visual and auditory processing by arranging a set of vibrators on the skin of the forearm. Changing the phase relations among the vibrators resulted in feeling a point stimulus moving up and down the skin. Bekesy then placed two such arrays of vibrators, one on each forearm. Now, with appropriate adjustments of phase, the sensation obtained was a point in space in front of and between the arms. A similar phenomenon occurs in stereophonic sound: adjusting the phase of the sound coming out of the two or more speakers projects the sound away from the speakers (and, of course the receiver where the processing is actually occurring).
There is more to the rich yield obtained by comparing the Tensor theory to the Holonomic theory. For instance, Pellionisz and Llinas develop a look-ahead module via Taylor-assemblies that are practically the same as the anticipatory functions based on Fourier series (Pribram 1997).
The two theories also converge in that Tensor Theory is based on “a coincidence of events in which both the target and interceptor merge into a single event point. This is an invariant known in physical sciences as a four dimensional Minkowski-point or world-point” (Pellionisz and Llinas, p. 2950). Holonomic Theory also requires a high-dimensional position-time manifold. “As originally implied by Hoffman (1966) and elaborated by Caelli, et al. (1978), the perceptual representation of motion should be subject to laws resembling the Lorentz transformations of relativity theory.” This means that the Poincare group (Dirac, 1930; Wigner, 1939) is relevant, requiring a manifold of as many as ten dimensions. In the context of modeling the brain process involved in the perception of Shepard figures, what needs to be accomplished “is replacing the Euclidian group [that ordinarily describes geodesics] with the Poincare group of space time isometries, the relativistic analogues of geodesics …” (Pribram 1991, p. 117)
Both theories handle the fundamental issue as to “how can coordinates be assigned to an entity which is, by its nature, invariant to coordinate systems” (Pellionisz and Llinas, p. 2950). The very term “holonomy” was chosen to portray this issue.
It is fitting that surface structure tensor circuit theory uses insights from relativity theory while deep structure holonomy regards quantum-like processing. As physicists struggle to tie together relativity and quantum field theory in terms of quantum gravity, perhaps further insights will be obtained for understanding brain processing (Hameroff and Penrose, 1995; Smolin, 2004; Ostriker and Steinhardt, 2001).
The main practical difference between the theories is that in the Tensor Theory, time synchrony among brain systems (which means correlation of their amplitudes) is all that is required. Holonomic theory indicates that a richer yield is obtained when phase coherence is manifest. Principal component analysis will get you correlations, but it takes Independent Component Analysis (equivalent to 4th-order statistics) to capture the detail (e.g. texture) represented in the phase of a signal (King, Xie, Zheng, and Pribram, 2000).
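The difference between second-order and fourth-order statistics can be shown with a short sketch (an editorial illustration; the signals are synthetic and the parameters arbitrary). Two signals are built from identical amplitude spectra, one with its phases locked together and one with its phases scrambled. Their power spectra, a second-order description, are indistinguishable, while kurtosis, a fourth-order statistic, immediately separates them:

```python
import cmath, math, random

random.seed(3)
N, K = 256, 40

def signal(phases):
    # Sum of K unit-amplitude cosines; only the phases differ.
    return [sum(math.cos(2 * math.pi * k * n / N + phases[k - 1])
                for k in range(1, K + 1)) for n in range(N)]

aligned = signal([0.0] * K)                       # phases locked: one sharp peak
scrambled = signal([random.uniform(0, 2 * math.pi) for _ in range(K)])

def power_spectrum(x):
    return [abs(sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n in range(N))) ** 2 for k in range(N)]

def kurtosis(x):
    m = sum(x) / len(x)
    var = sum((v - m) ** 2 for v in x) / len(x)
    return sum((v - m) ** 4 for v in x) / len(x) / var ** 2

# Identical second-order description ...
pa, ps = power_spectrum(aligned), power_spectrum(scrambled)
print(max(abs(a - s) for a, s in zip(pa, ps)))    # near zero: spectra agree

# ... but a fourth-order statistic exposes the phase structure.
print(kurtosis(aligned), kurtosis(scrambled))     # phase-locked signal is far more kurtotic
```

Phase alignment across frequencies is exactly what creates sharp, localized structure such as edges and texture, and it is invisible to any purely correlation-based measure.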
Some of the relationships between the theories are being implemented in the production of functional Magnetic Resonance Imaging (fMRI). Heisenberg matrices (representations of the Heisenberg group) are used and combined, in what is called quantum holography (that is, holonomy), with the tensor geometry of relativity (Schempp, 2000).
Llinas, in a book called “I of the Vortex” (2001), spells out in detail the primacy of the Motor Systems not only in generating behavior but also in thinking (conceptualized as internal movement) and the experience of the self. This is an important perspective for the psychological and neurosciences (see e.g. Pribram, in press) but addresses issues beyond the scope of this essay.