Wednesday, October 15, 2014

Memory, Learning, and Modularity: An Interview with Randy Gallistel

Have you ever asked yourself how the brain stores a number in memory? 

It might surprise you to hear that the neuroscience community doesn’t really have a story about how this basic operation happens. Or so one fellow by the name of Gallistel is saying. 

I caught up with Randy Gallistel in May to chat about his new book, Memory & the Computational Brain: Why Cognitive Science Will Transform Neuroscience.

The reason that I think linguists ought to be paying attention to this chap is that he’s been working out the details of symbolic/representational/computational theories of learning & perception for decades now, and the results seem promising (from a generative grammar point of view). 


This is a longer interview and it covers a lot of ground, but I’d like to pick at a couple of things that stood out for me in the text. (A long interview deserves an equally long, ranty post?)

Thing the first: the relationship between learning & memory 

Under the associationistic view, learning and memory are of a kind: learning is forming and strengthening associations; memory is a reflection of the strengths of those associations. You can gloss the above in your contemporary associationistic jargon of choice (e.g. connection strengths). I think that this way of dividing the labour is deeply confused. I won’t go into it here, though stand by for a post devoted to the topic. 

Under the computational view, learning is the extracting of information from experience. And memory is the carrying forward of information through time. This way of carving up the apple seems sound to me. But not without a couple of caveats. 

Fair Warning: if you continue past this point, you’re in for it.  

Caveat about “experience”: the concept of experience should be approached with caution. This is because there’s a difference between an organism merely being exposed to some stimuli, and an exposure to stimuli that strikes an organism as an experience. That is to say, the sort of stimulation that a particular creature will treat as an experience depends on the kind of creature it is, and the kind of perceiving and thinking it is capable of. 

Maybe the most common example of this distinction in linguistics is that of synthetic speech. It appears that if you expose a human to synthetic speech (when they are under the impression that what they are hearing is not speech), they report hearing whistles and squeaks. In other words, they do not have a linguistic experience. Conversely, if you tell the subject that they are listening to speech, they will hear the same stimuli as speech. Now they have a linguistic experience.1 

Or maybe you’d prefer the following example: supposedly, the ape auditory system is attuned to the same distinctive features as humans. But observe that even though apes organize the acoustic stimuli along featural lines, the data don’t seem to result in anything; apes do not have a linguistic experience per se.

Anyway, if you allow yourself to be puzzled by how it is that infants identify language-related data in the environment to begin with, you will see why this caveat is neat. To put it another way: why is it that my pet cat (or monkey, or seal) doesn’t have a “linguistic experience” from the same stimuli as my niece? Or to put it yet another way: if you expose your eyeballs to acoustic data, we can say that this constitutes a type of stimulation, but it is not a visual experience (which is what eyes are all about, isn’t it?). Only visual stimuli of the proper kind will cause us to have a visual experience from which we can then draw information. 

More importantly, this observation isn’t limited to perceptual systems such as vision. It can be extended to encompass the conceptual apparatus (as Fodor has been trying to do for the past million years or so). This is a timely issue, especially as work in generative grammar shifts to investigating the Conceptual-Intentional Interface. To re-gloss: a conceptual repertoire depends on how things in the world strike a particular creature (and often on what intentional history a thing strikes them as having). Anyway, look out for more on this in the nebulous future. 

Caveat about learning: at some point in the interview, Gallistel says that learning is contiguous with perception, but it seems to me that this is not always the case. First, let’s say, for the sake of argument, that learning is the process by which we fix our beliefs about the world. Well, there are patently at least two kinds of beliefs to be fixed: perceptual beliefs about the state of the world, and conceptual-intentional beliefs that are fixed by comparing new candidate beliefs against your current belief system. The whole belief system. 

The former kind of belief fixation is mediated by modules. Modules are fast, dumb, topic-specific, informationally encapsulated, and generally quite badass. Consider for instance human perceptual performance under laboratory conditions. Reportedly, even when visual and auditory stimuli are presented at many times the speed at which humans would generally encounter them in natural experience, recall and analysis remain exceedingly reliable. That is, if you have a subject listen to a stream of speech produced at speeds that aren’t physically possible for a human speaker, the subject is still able to parse and understand the utterance. Mutatis mutandis for visual scenes. I haven’t the references on hand, but I will likely edit this post to include them in the near future. 

Digression for B.D. Mitchell: The kind of learning that modules mediate (of course) reflects the innate architecture of the modules, and so the information they can extract is contingent and peculiar, providing us with richly structured perspectives from which to view the world (as Noam is wont to put it). 

Returning to the main point, this kind of learning is contiguous with perception. Your language module demands, all things being equal, that you hear the sentence you’re paying attention to. You can’t not hear the sentence. Mutatis mutandis, your language module demands that you learn that all human grammars have the property of being structure-dependent. Your perception and your learning, in these instances, are reflexive, virtually instantaneous, and uniform across the species. 

The latter kind of belief fixation is mediated by...who-knows-what?  
It is slow(er?) and holistic. I think Lila Gleitman once called this kind of learning hell on wheels. Consider for instance the act of doing science. Hypothesis testing and confirmation of this variety is long, arduous, and, generally speaking, can be quite lame. Or consider, if you prefer, the far less glamorous example of the popular distinction between a simple linguistic joke & an intellectual joke. This is something that occurred to me at the last linguistics salon that we hosted here in the wonderful city of Toronto. 

I observe that people generally make a distinction between linguistic jokes that rely on a tacit knowledge of one’s grammar (such as those that play on homophony) and intellectual jokes. The former aren’t taken to reflect on one’s intellectual prowess; rather, they are taken to reflect on one’s linguistic aptitude. The latter require you to check the information presented in the joke against your entire belief system. What you’ve got in your central belief box, and how fast you can search it, seems to be what makes ‘getting’ an intellectual joke impressive compared with a simple linguistic joke, which any human can get with ease. 

Compare for example: 
  1. Linguistic joke (Aarons, 121):
    "If it ducks like a quack it probably is one." 
  2. Intellectual joke:
    “Werner Heisenberg, Kurt Gödel, and Noam Chomsky walk into a bar. Heisenberg turns to the other two and says, ‘Clearly this is a joke, but how can we figure out if it's funny or not?’ Gödel replies, ‘We can't know that because we're inside the joke.’ Chomsky says, ‘Of course it's funny. You're just telling it wrong.’”

Thing the second: memory (& attention) & modularity 

There are two ways of thinking about the relationship between memory & modularity (if you buy the modularity story). The first is that modules carry out their proprietary business by drawing on general resources of memory and attention (they all draw on one and the same memory mechanism). 

The second is that modules carry out their proprietary business by deploying proprietary mechanisms of memory & attention. As Gallistel rightly points out in the interview, the evidence strongly suggests that there is a single memory mechanism at the neurological level. This leaves open, in my opinion, the possibility that there are different deployment conditions (or what have you) at the computational level. Franz Joseph Gall held something similar, and Fodor’s Modularity of Mind has a great discussion of this topic. 

The importance of this distinction becomes apparent when we consider syntactic theory. I suppose that just about everybody in the house believes that the language faculty builds mental representations. And that these mental representations are bona fide mental particulars with all the rights and responsibilities accorded to such things (they have causal powers & they are subject to peculiar conditions on well-formedness). 

One such condition is locality (think displacement in all its various instantiations: binding, raising, control, feature checking, etc.). This condition is interesting because a number of people have attempted to explain its presence by appealing to facts about the memory mechanism. (I think maybe I read a paper by Gary Marcus that was trying to do this?) 

I’m generally in favour of this approach, but I think that the facts about the memory limitations of the language faculty (and the conditions it imposes, such as the kind of locality we find in natural language) are going to turn out to be specific to that faculty. I think this is so not only because there isn’t any a priori reason to think otherwise, occasional empiricist kink notwithstanding, but also because it seems to me that different modules build, store, and address different varieties of mental representations. I’m quite open to having my mind changed about this, but I have a strong intuition that the visual system builds mental representations whose forms reflect the demands of the task at hand. 

To be clear: I am not saying that there are fundamentally different species of mental representations. Maybe all mental representations are formed by the same operations (Merge, Label?), and all share the notion of locality that is implicit in all computational-symbolic representational systems at play today. But any additional constraints on well-formedness (like maybe having uninterpretable features) are, I think, likely to be peculiar to their domains. And thus, paying attention to and keeping a record of those various structures when all of your modules are firing at once seems to me to suggest a variegated memory/attention mechanism. At least at the computational level. 

Okay... that’s all for now. Feel free to share your thoughts about all this in the comments section below. 

Notes, Admissions, Qualifications, Apologies: 
  1. Fodor, Jerry (1983). Modularity of Mind. p. 49.
  2. The post above draws mostly on my recent reading of Gallistel, Hornstein, Chomsky, and Fodor. Though I doubt they'd approve of their ideas being run together quite like this.  
  3. Apologies for some unintended noise in the recording. I’m not sure where it’s coming from, or how to get rid of it. 
  4. The linguistic joke above is taken from Jokes and the Linguistic Mind by Debra Aarons (2014)
  5. If you'd like to hear more Gallistel, check out this bloggingheads video interview.

Wednesday, October 1, 2014

The Fodorgraph Redux

Hi.

In this post we bring you a brief addendum to our interview with JFo.

After we ended our chat about linguistics, the conversation took a turn for the political. Since we enjoyed the outcome quite a bit, we've decided to make a habit of asking all our interviewees about their beliefs regarding the politics of doing science.



If the value of this topic isn't self-evident to you, consider the following:
  • science is done by people
  • science is done by people working within massive institutions 
  • massive institutions are closely networked (read: funded) with states & industries
  • (therefore) the structure of these institutions is going to reflect the interests of the states & industries in which they are embedded, at least in part 
This way of organizing our science world not only shapes the values of our universities, but also narrows down which individuals get to participate, by filtering out those persons who don't share those values. 

Of course, the political economy of the university system is not solely determined by institutional pressures from outside; for all sorts of historical and economic reasons it is still the case that a significant part of the House of Higher Education is run by those who work in it. 

The practical outcomes of this arrangement (for science) aren't unambiguously negative. For instance, our state system still reflects (some shred of) a commitment to the public welfare. Thus our universities produce scholars that are often civic minded. One could easily imagine (and in a number of cases witness) an arrangement in which universities are places where scholars are totally divorced from a concern about the popular applications of their work. (Needless to say, what it means to be civic minded is hotly contested; see Noam's Liberal Scholarship and Objectivity for an in-depth discussion of the topic...sort of). 

On the other hand, this arrangement often leads to attenuating the deep questions into superficial engineering questions. Typically, this happens under pressure from industry for new sources of profit. For those of us interested in brain-y things, the most egregious case of this is probably the study of artificial intelligence. That is, at some point in its development as a field, A.I. went from being concerned with understanding organic intelligence by studying the nature of computational, representational theories of mind to being concerned with building better iPods. If you'd like to hear someone with more clout on the topic say this, check out this (fairly) recent symposium on artificial intelligence held at MIT, which included the field's founding figures. Specifically, check out what Patrick H. Winston has to say (time index 1:39:00).

More specifically, the era of the corporate university has had some dubious effects on linguistics. For example, the field has been inundated with quantitative studies wherein the motive for descriptive analysis of language overrides any interest in the big questions, such as: what the hell is it that we're studying? What is language? (See here for a more personal account of this sort of change by Sascha Felix.)

As JFo says in the interview: the corollary to the desire for short-term profit in industry is the desire for discoveries at the 0.05 significance level in the academy. In other words, the political climate inclines people to take fewer risks. (You can find Sydney Brenner saying something analogous in this recent interview.)

Maybe where this whole big-picture discussion meets the everyday life of the researcher is at the laboratory. It is now common to hear that a researcher or research institute will spend half their time writing grant proposals. It is equally common to hear serious grumblings about the peer-review system, and about the publish-or-perish era as a whole. In sum, there seems to be a general crapification of academic life that has measurable consequences. 

Homework Question: what happens when you build an enormous infrastructure that is dependent on state and industry funding and then you take away the state?


Side-notes:
  1. Apologies for some glitches in the recording. Stand by for new recording equipment.
  2. Tune in two weeks from today to hear our interview with C. Randy Gallistel.