Brains on Fire: Dr. Steven Novella Explains, "The Mind Is the Fire of the Brain"

Dr. Steven Novella, clinical neurologist at Yale University School of Medicine, has posted two responses to my posts on memory and neuroscience. His first post was scattered, and not worth a reasoned response, but he collects himself in his second post and provides an opportunity for discussion.

Dr. Novella, with my commentary:

As far as I can tell, Egnor has two actual points to make. The first is that memories (or representations of memories — whatever that means) cannot be stored in the brain, because then the brain would need to know the location of each memory, and the location of the location, with infinite regression. Therefore the notion of physically storing memories collapses under its own logical impossibility.

I’ll recap my argument:

The difference between a memory and a representation of a memory is obvious. Right now I remember that I have an appointment at noon. I’m writing down "appointment at noon" on my calendar.

My memory is my thought that I have an appointment at noon.

The representation of my memory is the written note on my calendar.

A thought differs from a note. A thought is something I experience; a note is something I write. My memory is a psychological thing. My note is a physical thing. My memory is represented in my note. My memory is not the same thing as my note. A memory is not the same thing as a representation of a memory.

I hope that’s clear. I’m not sure how I can be clearer.

Does my brain contain a representation of my memory that I have an appointment at noon?

We all agree that my memory is associated in some way with a brain state. Obviously something is happening in my brain when I have a memory, and likely the brain state correlates with my memory and likely the brain state is necessary for my memory.

But is that brain state a representation of my memory? A representation is something that maps to the thing it represents. A map of a city is a representation of the city. For each point on the map, there is a corresponding point in the city. A drawing of a face is a representation of a real face. For each point on the drawing there is a corresponding point on the real face.
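To put that in concrete terms, a representation can be modeled in software as an explicit point-to-point mapping. The sketch below is only an analogy, not a claim about the brain; the coordinates and names are made up for illustration.

```python
# A loose illustration of "representation as mapping" (an analogy only):
# a city map modeled as an explicit correspondence between points on the
# map and points in the city. All coordinates here are hypothetical.

# Each key is a point on the map (inches from the map's corner); each value
# is the corresponding point in the city (meters from a survey marker).
city_map = {
    (1.0, 2.0): (150.0, 300.0),   # map point -> city point
    (1.5, 2.0): (225.0, 300.0),
    (1.0, 2.5): (150.0, 375.0),
}

def locate_in_city(map_point):
    """Return the city location that a given map point represents."""
    # The mapping itself is inert data; something outside it must do the lookup
    # and interpret what the answer means.
    return city_map[map_point]

if __name__ == "__main__":
    print(locate_in_city((1.0, 2.0)))  # -> (150.0, 300.0)
```

The point of the sketch is only that a representation consists of correspondences between two domains; the mapping does nothing on its own.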

If my brain state is a representation of my memory that I have an appointment at noon, what in the brain state maps to what in the memory? More incisively, what in the brain state could map to anything in the memory? How could my brain state represent my memory of my appointment? An actual written note in my cortex? A little calendar in my hippocampus? A tiny alarm set to go off in my auditory area? How, pray tell, could a brain state map to a thought, especially a thought that is not an image?

For most of our memories and thoughts — those that are not pure images — the concept of representation in the brain simply doesn’t make sense.

But let’s imagine via some materialist miracle (materialism is shot through with miracles) that I can map my memory of my appointment at noon to a brain state. But of course the map in my brain would need two things: it would have to be located and it would need to be read (maps don’t read themselves). So I would have to have some (unconscious) memory of where the brain map was and some (unconscious) memory of how to read the map, as well as some entity (homunculus?) who would read the map.

So even if my memory that I have an appointment at noon were represented in a brain state, I still have not solved the problem of memory. It still remains unexplained how the representation is accessed, decoded, and read.

Note: hand waving about "integrated… overlapping… massive parallel processing" and other neurobabble won’t do. Any map needs to be accessed and read, or it can’t be a map.
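The same constraint holds even in the simplest software: stored bytes do nothing until something that already knows where they are, and how they are encoded, retrieves and decodes them. Here is a minimal Python sketch; the file name and encoding are arbitrary choices made for illustration, not anything in the brain.

```python
# Minimal sketch (analogy only): retrieving a stored "record" requires
# (1) knowing where it is stored and (2) knowing the convention for decoding
# it -- both supplied from outside the stored data itself.

STORE_PATH = "memory_store.bin"    # something must know where the record lives
ENCODING = "utf-8"                 # something must know how to read it

def write_record(text: str) -> None:
    with open(STORE_PATH, "wb") as f:
        f.write(text.encode(ENCODING))   # encoding chosen by the writer

def read_record() -> str:
    with open(STORE_PATH, "rb") as f:
        raw = f.read()                   # access requires knowing the location
    return raw.decode(ENCODING)          # reading requires knowing the code

if __name__ == "__main__":
    write_record("appointment at noon")
    print(read_record())                 # meaningful only to whoever interprets it
```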

The concept of representation of memory in the brain is unintelligible.

Novella:

Also, as many commenters have pointed out, if Egnor were correct then computers could not function.

Computers function just fine. Computers contain representations of human memory and knowledge and information, in the same way that books and calendars do. Human beings put information into the computer and the software that human beings wrote maps the input to the output, which is read by human beings. The computer is a tool that we use, from beginning to end. It leverages our thought by manipulating representations of our thought.
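A toy example makes the division of labor plain. In the Python sketch below (the note and the times are hypothetical), the program only stores and compares symbols; the meaning of "appointment at noon" comes from the human who typed it in and the human who reads the reminder.

```python
# Hedged sketch: a calendar program manipulates representations that humans
# put in and humans read out. The entry and times below are made up.
import datetime

# A human types this in; to the program it is just a string paired with a time.
calendar = [
    {"note": "appointment at noon",
     "when": datetime.datetime(2025, 1, 15, 12, 0)},
]

def due_reminders(now):
    """Return notes whose times have arrived -- a purely mechanical comparison."""
    return [entry["note"] for entry in calendar if entry["when"] <= now]

if __name__ == "__main__":
    # The program outputs the stored string; only the human reader takes it
    # to mean "I have an appointment at noon."
    for note in due_reminders(datetime.datetime(2025, 1, 15, 12, 0)):
        print("Reminder:", note)
```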

But we are not computers. So you can’t explain human thought by reference to pure manipulation of representations, which is what computers do. Computation presupposes human thought. So computation can’t explain human thought.

You can’t explain human thought by appeal to human thought.

Novella:

Also, in all Egnor’s various follow ups and side posts, he has not addressed the key factual objection to his position — that the brain stores information in multiple overlapping patterns of neurons. A prominent feature of brain function is that it learns largely by association, so you can get to a memory through the many other associated memories, without the need for the brain knowing the physical location of the memory. Memories are not isolated discrete things, but integrated, overlapping, analog things in the brain.

Neurobabble. The representation of the memory in the brain still has to be accessed, decoded, and read, which is unintelligible. Elaborating on plain nonsense by invoking "multiple overlapping patterns of neurons… association…integrated…overlapping…analog…" is still nonsense. The unintelligibility of memory stored as representation is not ameliorated by invoking mass unintelligibility. This is the pleonastic fallacy — the notion that an unbridgeable explanatory gap can be crossed by multiplying the things that aren’t explainable. Example: "The claim that a memory can be a brain representation may be nonsense, but it makes sense if you have lots of memories and representations that can’t be explained…"

Novella chastises me for dualism:

Egnor’s second point is the tired old dualist claim that mental phenomena are not physical, therefore dualism. I have already pointed out that this amounts to little more than assuming one’s conclusion. It is pure circular reasoning…

A few paragraphs later, Novella writes:

The mind is the fire of the brain… As I have pointed out numerous times myself — mental phenomena are functional active things. They are based in the physical substance of the brain, but [mental phenomena] are not just the physical substance — they are what results from the function of the physical substance. The mind is what the brain does. To use another metaphor, the mind is like fire and the brain its fuel. The fire is not the fuel, it is a chemical process happening to the fuel. [Emphasis added.]

In materialist neuroscience, metaphors pass for metaphysics. Novella’s metaphors, to the extent that they are intelligible, are dualist.

Neuroscience is a conceptual morass.

Image credit: jerry ferreria/Flickr.

Michael Egnor

Professor of Neurosurgery and Pediatrics, State University of New York, Stony Brook
Michael R. Egnor, MD, is a Professor of Neurosurgery and Pediatrics at State University of New York, Stony Brook, has served as the Director of Pediatric Neurosurgery, and is an award-winning brain surgeon. He was named one of New York’s best doctors by New York Magazine in 2005. He received his medical education at Columbia University College of Physicians and Surgeons and completed his residency at Jackson Memorial Hospital. His research on hydrocephalus has been published in journals including Journal of Neurosurgery, Pediatrics, and Cerebrospinal Fluid Research. He is on the Scientific Advisory Board of the Hydrocephalus Association in the United States and has lectured extensively throughout the United States and Europe.

Tags: Continuing Series, Mind and Technology, Science, Views