Friday, October 23, 2015

The Queen: An Interview

In this episode we bring you our interview with Robin Queen. Based at the University of Michigan, Queen works at the intersection of language, gender, media, and cognition. In a nutshell, she endeavours to explicate the details of how our “mental representations of the social world” crisscross with our “mental representations of language.” Our discussion largely focused on the issues surrounding the use of media data for linguistic inquiry.
Aperitifs: As usual, I’ll point out a few gems that surfaced in the course of our conversation.

First, the practical: Recently I’ve come across a fair amount of research that makes use of media data to answer questions that - in my humble opinion - are not answerable using this source. As such, I was keen to pick Robin’s brain on effective use of the media as a body of data. (Check out her recent book on the topic, "Vox Popular: The Surprising Life of Language in the Media".) This part of the conversation contains some of the most practically (as well as conceptually) useful tidbits for the working linguist. Principally, Robin emphasizes that linguists must be honest with themselves about their priorities, as well as the scope and limits of their approach. Put simply, they must ask whether their research question can be appropriately answered using media data, recognize the limits of this type of source, and refrain from pushing beyond what is answerable. They must also ask themselves why some particular piece of media has grabbed their attention: does it faithfully address the topic they are pursuing? Both of these lines of thought depend on having first framed a scientifically tractable question that reasonably captures their interest.

Second, the conceptual: We then explored how the media, as a commercial enterprise, contrive to portray a version of reality that reflects their commercial interests. Even the most cursory consideration of this fact invites a critical assessment of the quantity and quality of variation portrayed by the media: what kinds of linguistic variation are being presented by today’s media, and what exposure do children receive to linguistic variation? On the latter point, Robin referenced a chapter from Rosina Lippi-Green’s English with an Accent, which explores how heroes in Disney films use standard American English whereas non-heroes use non-standard varieties.
Robin points out that although there are some online discussions pertaining to this topic, they are anecdotal in nature, and that in fact very little experimental work has been done on what (if any) effect this has on our perceptions of non-standard varieties of English.

Third, the question: Finally, I leave you to mull over this idea at the intersection of language and cognition: How does the media’s portrayal of linguistic variation reflect, or affect, our interpretation of the personality traits of characters (e.g. introversion/extroversion, mental health, optimism/pessimism)? À la prochaine!



Note: This interview / post was conducted & composed by Selena Phillips-Boyle.

Thursday, May 7, 2015

The Phoneme: An Interview With Elan Dresher

In this episode, we're speaking with Elan Dresher, Professor Emeritus at the University of Toronto.



Two things stand out from this interview like a sore thumb. The first is something I said which was at best controversial, and at worst just plain wrongheaded. The second is something which it seems to me was left tragically underaddressed.

(1) The Structure of the Phoneme
  
The basic idea is simple (and has its origins in Fodor's Hume Variations): the notion of “contrast” is of significant vintage in phonology, tracing back to such sources as Sapir, Jakobson, and the Prague school. And yet it has also remained a central notion in generative grammar, where a great many other structuralist notions have been eschewed.

In some ways, the idea of individuating (psychological) entities in virtue of their contrasts within a given schema is more in tune with the Pragmatist tradition, and indeed within linguistic theory it seems largely confined to phonology and lexical semantics.

Thus, the obvious question: why should we continue to act on the belief that the content of a mental particular is just whatever contrasts it can sustain between itself and every other mental particular in a given system?
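The contrastive picture can be made concrete with a toy sketch (my own illustration, not drawn from the interview; the three phonemes and their feature values are invented for the example): units are bundles of binary features, and a unit's "content" is exhausted by its pattern of contrasts with the other units in the system.

```python
# A toy illustration of individuation by contrast. The inventory and
# feature values are invented for the example.

PHONEMES = {
    "p": {"voice": False, "labial": True, "nasal": False},
    "b": {"voice": True,  "labial": True, "nasal": False},
    "m": {"voice": True,  "labial": True, "nasal": True},
}

def contrasts(a, b):
    """Return the features on which two phonemes differ."""
    return {f for f in PHONEMES[a] if PHONEMES[a][f] != PHONEMES[b][f]}

def identity_by_contrast(p):
    """On the contrastive view, a unit's content is nothing over and above
    its pattern of differences from every other unit in the system."""
    return {q: contrasts(p, q) for q in PHONEMES if q != p}

print(contrasts("p", "b"))        # {'voice'}
print(identity_by_contrast("p"))  # /p/'s identity references every other unit
```

Notice that computing a single unit's identity requires consulting the entire inventory -- which is the holism worry made vivid.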

As far as I know (and I know very little), it’s just not a question that bothers the phonologists in my circles -- even phonologists who are committed to explicating their corner of the language faculty in terms of a naturalist, realist psychology of language, and who would otherwise consider themselves anti-Pragmatist.

To respond to this question in good faith would mean not just arguing convincingly that if phonology were to pivot around the notion of contrast then the data of externalization would be explained by a neat formalism, but also that the concept has some independent theoretical motivation.

As has been noted elsewhere, a number of questions confront the concept of contrast. Two of them are:

a) Holism: If a unit in a schema is defined solely by its relationship to every other unit in the schema, then how can it be learned individually -- since grasping its character is a matter of grasping the relationships it has with all of the other units?

b) Substance: Concretely speaking, what is it that speakers contrast -- abstract mental symbols? acoustic features? articulatory features? 

(2) Phonetics-Phonology Interface

Although we briefly discuss the relationship between the two domains in this interview, we really don’t do the topic justice. For those listeners interested in an in-depth analysis, a good place to start might be Thomas Purnell's (2009) Phonetic Influence on Phonological Operations.

One issue left unconsidered in this interview is that of the relationship between acoustic cues and the mental symbols they typically token in speakers. As Purnell observes, the relationship between these two entities is neither direct nor predictable simply on the basis of the acoustic information. To see this, consider the observation that acoustic properties are just the physical properties of a stream of sound -- they are layered, continuous, multitudinous -- while the mental symbols they token are in important respects atomistic and categorical. Acoustic cues are quite variable from speaker to speaker, as well as within a speaker, yet the behaviour of interpretation is astoundingly robust. 

Moreover, a single mental symbol (phonological feature) may be tokened by different acoustic cues. This suggests that the ground separating phonetics from phonology has some depth worth considering. 
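The many-to-one point can be put in deliberately oversimplified toy form (the single cue, the numbers, and the decision rule are all invented for illustration; real cue integration is far messier): a continuous, highly variable acoustic cue gets collapsed into a categorical symbol.

```python
# Purely illustrative: a continuous, variable acoustic cue (voice onset
# time, in ms) is mapped onto a categorical phonological symbol. The
# boundary value and the tokens are invented for the example.

def voicing_category(vot_ms, boundary=25.0):
    """Collapse a continuous cue into a discrete category."""
    return "[+voice]" if vot_ms < boundary else "[-voice]"

# Tokens varying across (and within) speakers...
tokens = [5.0, 12.3, 18.9, 40.0, 55.2, 71.8]

# ...yet interpretation comes out robustly categorical:
for vot in tokens:
    print(f"{vot:5.1f} ms -> {voicing_category(vot)}")
```

Of course the real mapping is nothing like a single threshold on a single cue -- multiple, layered cues token the same feature -- which is precisely the depth worth considering.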

In essence, this just recapitulates a very old argument about the origin of structured knowledge. Empiricists hold that the origin of structured knowledge is situated in the environment, which the mind reflects; rationalists hold that the origin of structured knowledge is situated in the mind itself, and whose final form is the product of an ongoing interaction between innate schemas and the raw stuff of the world. Doubtless the distinction as described above is whiggish, but it's enough for present purposes.

That's all for now.... 

....

Next Episode: We're speaking with the University of Michigan's Robin Queen. Stay tuned. 

Wednesday, February 11, 2015

sociolx intersects the pb

This week on the pb we dive into the fabulous realm of sociolinguistics. We sit down with Naomi Nagy, professor at the University of Toronto. Her work within the variationist paradigm seeks to understand how languages do and don’t change over time, primarily by looking at languages in contact situations.

For me, two things stand out from this interview:

(1)
Throughout our conversation, Naomi places great emphasis on the value of interdisciplinary work. She discusses how the aim of her dissertation work was not only to collect data on a small, minority language, but also to make contributions to greater questions of language structure and function. She suggests that more dissertations should span different domains, supervised by professors with different areas of expertise (for example, sociolinguistics & syntax). Naomi explores the notion of “hybrid fieldwork”, and the strengths that differing methodologies bring to addressing the same questions. Many within the field would agree with the idea of working towards more integration and collaboration across linguistic disciplines. However, serious discussions remain to be had about how to implement these kinds of projects in a sustainable way that would enable us to address the bigger questions of language and mind.

(2)
Every field has starting assumptions from which it works, and sociolinguistics is no different. Current sociolinguistic work is challenging several of these fundamental underlying assumptions. For instance, in this interview Naomi challenges the idea of viewing prototypical speakers as monolingual: most people speak multiple languages, and are therefore simultaneously members of different linguistic communities. As such, a single speaker could be progressive in their use of one language, and conservative in another. An oft-cited example of this is women’s language: a great deal of recent variationist work shows that young women are linguistic innovators in, for example, English (see this article by Chi Luu for a popular description of this phenomenon). Conversely, sociolinguists who work on language contact varieties observe that it often falls to women to maintain the heritage language and culture in the home. This leads to a tension between our conception of women as language innovators (in, for example, the monolingual, English world), and women as language conservators (in, for example, the homes of heritage language speakers). Thus follows Naomi's research question: “Are they going to innovate in one language and be conservative in the other?” This tension begins to be resolved when we take into account the bilingual nature of individual speakers.

Note: however counterintuitive it may seem to some, this work reflects another point of agreement between generative grammar and sociolinguistics: as Noam Chomsky has stressed, the idea that a speaker/hearer instantiates a single grammar is an idealization for the purposes of particular kinds of inquiry. In point of fact, a number of researchers within the generative framework maintain that real speakers extract numerous grammars from the primary linguistic data.



So if this blog is about language & mind, how does all this talk of language variation and contact intersect with cognition? Given the sociolinguistic effects that the field has catalogued over the past 50 years, it is possible, for example, to inquire whether individuals with (acquired or innate) cognitive disorders display the same sensitivity to sociolinguistic factors as neurotypical speakers, and how that sensitivity is expressed. Such an enterprise would be a natural extension of Eric Lenneberg’s foundational foray into language development under conditions of adversity. Stay tuned for more on this topic in upcoming posts, including our interview with Robin Queen wherein we delve into her work on language contact and variation, and explore how cognition plays a role at this intersection.

Thursday, January 22, 2015

[yourdiscipline] is really just [mydiscipline]

If you’re involved in any discipline concerned with the nature of language and mind, this line probably sounds familiar. If you’ve been in the scene for essentially any length of time, a thick, bony cartilage has probably developed around any part of your psyche that may have ever taken such pronouncements (which are re-issued virtually every year in some form or another) at face value.

Internal to linguistics, something like this sort of logic usually concerns the passing of particular phenomena between (e.g.) syntax, phonology, semantics, and pragmatics. (I recall a number of students at the 2013 Linguistic Institute who were thoroughly scandalized by Sam Epstein's observation that word order is evidence about articulation, not syntax. Alternatively, I’ve got at least a couple of phonologists in my circles who are always trying to explain to me how this, that, and the other syntactic process is really just derivative of prosodic considerations.) This kind of topic-shuffling can be highly productive and much of the time it is an indicator that the field still has a pulse. However, too often it is a reflection of flash-in-the-pan trends and academic politics.

External to linguistics, something like this sort of logic concerns the division of labour between neuroscientific and psychological inquiry.

Today, we’re posting our interview with one computational neuroscientist who seems to make both sides of the aisle sit a little easier, all the while maintaining a substantive proposal for integrating linguistics and neuroscience. Most refreshingly, he's challenging the [yourdiscipline] is really just [mydiscipline] rhetoric that's been so pervasive in the neuroscience of language. Below is a chat we recorded with David Poeppel back in December 2014.





A couple of things to note:
  • You can find more Poeppel & Co. over at the spectacular blog, talkingbrains
  • Poeppel argues that the right level of abstraction for the basic unit of computation is the neural circuit (see for instance his Towards a Computational(ist) Neurobiology of Language). This would seem to be, at least prima facie, in contradiction to Gallistel's recent sermons in which he argues that the basic unit of computation is intraneuronal. Perhaps there's no contradiction for these two researchers and these differently sized units of computation are complementary -- however I didn't catch this difference in time for the interview. Perhaps you have some thoughts on this?



Notes, Admissions, Qualifications, and Apologies: 
  1. the title of this post is lifted from Laura Howes's tweet under the briefly (but thoroughly) trendy hashtag #ruinadatewithanacademicinfivewords
  2. I have no idea whether I am pronouncing "Poeppel" correctly, having neglected to confirm that during the interview. If I've screwed it up entirely, all apologies.
  3. I mispronounce the word "incommensurable" for the first third of the interview. I can live with it.

Saturday, November 15, 2014

Touring the Language Faculty: An Interview with Norbert Hornstein

did october happen? it seemed to careen right past me into mid-november. despite this unforgivable betrayal by one of my favourite months of the year I did manage to pull off a mighty fun interview with syntactician, philosopher, and fellow-blogger, Norbert Hornstein.

I originally met this chap when I attended his syntax course at the LSA summer school some years back. listening to him speak on the topic, one is apt to get the feeling that generative grammar is building a cool mad max death truck out of scrap metal and wishes (to borrow a phrase from my flatmate). this is something that is often missing from the average lecture on the topic of generative syntax, wherein one couldn't be faulted for getting the impression that the field is trying to do philology with both hands tied behind its back (methodologically and theoretically). Norbert is no philologist though, neither on his blog, the Faculty of Language, nor in this interview.

as usual, I'll take a quick dip into something raised in the interview that caught my attention.

during the latter half of the interview (about 48m) Norbert mentions the distinction between linguistics and philology. elsewhere in his writings, the distinction is made by appeal to such notions as explanation and description. I think that Norbert is right to point out that often enough the concerns motivating a programme of research aimed at a faithful description of a language are orthogonal to those motivating a programme of research aimed at discovering the organizing principles which underlie language tout court. nevertheless, there can also be a palpable tension between the two. consider for instance the levels of theoretical adequacy demarcated in Radford (1982):

"a grammar of a language is observationally adequate if it correctly predicts which sentences are (and are not) syntactically, semantically and phonologically well-formed in the language.

a grammar of a language is descriptively adequate if it correctly predicts which sentences are (and are not) syntactically, semantically and phonologically well-formed in the language, and also correctly describes the syntactic, semantic and phonological structure of the sentences in the language in such a way as to provide a principled account of the native speaker’s intuitions about this structure.

a grammar attains explanatory adequacy just in case it correctly predicts which sentences are and are not well-formed in the language, correctly describes their structure, and also does so in terms of a highly restricted set of optimally simple, universal, maximally general principles of mental computation, and is ‘learnable’ by the child in a limited period of time, and given access to limited data."

notice that there is a conflict of interest between descriptive adequacy and explanatory adequacy. the former is in a permanently taxonomic mood, and is primarily driven to record, sort, and occasionally predict particular language forms (and meanings); whereas the latter is in a mood to gloss, to provide the rules in virtue of which languages contain the forms and meanings that they do, and the specific pairings between form and meaning that they do.
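the lowest rung of this ladder can be put in toy form (my own sketch, not Radford's; the two-word fragment and word lists are invented for the example): an observationally adequate grammar merely sorts strings into the well-formed and the ill-formed, assigning no structure and saying nothing about learnability.

```python
# a toy "observationally adequate" grammar for an invented two-word
# fragment of English word order: it predicts which strings are
# well-formed, but assigns them no structure.

SUBJECTS = {"she", "he"}
VERBS = {"runs", "sleeps"}

def well_formed(sentence):
    """Predict well-formedness for Subject-Verb strings only."""
    words = sentence.split()
    return len(words) == 2 and words[0] in SUBJECTS and words[1] in VERBS

print(well_formed("she runs"))   # True
print(well_formed("runs she"))   # False
```

even this trivial sorter illustrates the taxonomic mood: it records and predicts forms without a word about the principles in virtue of which they are well-formed.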

the conflict arises when we try to map a chaotic, constantly changing world in which accidental and principled variation are observationally indistinguishable onto the world of intelligible theory in which consistency and evaluability are supreme values. to my knowledge, the conflict between the two was first noticed by the ancients. for them, a crucial problem was how to relate the sophisticated geometrical and mathematical models of the time to the chaos of worldly phenomena such as motion. Galileo was really the first (again, to my knowledge) to show the possibility of applying the concepts of geometry to the highly variable phenomenon of motion. relatedly, Bacon was the first to be recognized for proposing a mode of inquiry for, inter alia, discerning accidental variation from principled variation: namely, carrying out experiments which contrive experiences. the virtue of carrying out laboratory experiments is that it is possible to discover crucial discrepancies in theoretical prediction which can be used to home in on the essential nature of a thing.

(returning to language & Radford's levels of adequacy)
feel free to substitute whatever variable concerns you besides the syntactic, and whatever metric by which you'd like to evaluate well-formedness. but notice that the problem of marrying your descriptive analyses with your characterization of the abstract grammar doesn't go away (whether it be a grammar of gesture, or social relations, or morals). this is because psychological (to say nothing of theoretical) objects, grammars among them, are necessarily normative while the data are decidedly not. that is, grammars characterize a set of things which a given speaker (or speech community, if you really insist) will find well-formed with respect to form and meaning. so even a sociolect (a dialect in which linguistic varieties are correlated with sociological factors) is a kind of grammar in virtue of which speakers sort sociolinguistic forms and meanings into the well-formed and the ill-formed.

ultimately then I suppose it wouldn't be too off the mark to encapsulate the tension between linguistics and philology as a tension between accounting for forms (and meanings), which are quite varied and diverse, and accounting for the sense of well-formedness, which is largely stable and shared commonly amongst all humans.

caveat: the centrifugal force between philology and linguistics is, as any sensible researcher would acknowledge, quite often counter-balanced by a centripetal force between the two disciplines. specifically, philological projects set a baseline which any linguistic theory must meet if it is to be observationally and descriptively adequate. symmetrically, theoretical work provides the intellectual scaffolding by which philological work can proceed (think metrics of simplicity; criteria of sorting words into classes, languages into families, and the like; the very decisions about what is important to put in your taxonomy and what is not).


The End.


Notes, Admissions, Qualifications, and Apologies:
  1. Radford, Andrew. 1982. Transformational Syntax. Cambridge: Cambridge University Press. 
  2. Apologies for the odd clicking that starts at about 35 minutes. we are working on making sure that stops happening. if you have any insight as to where this clicking is coming from or how to get rid of it we would be very grateful.


Saturday, November 1, 2014

Science—Like The Shape of Bras—Changes Over Time

A few weeks ago, I had the good fortune to catch up with my good friend, and historian of science, Benjamin D. Mitchell. We oft carried on lengthy arguments about the politics of science-doing while he was working on his PhD at York University, and on this latest occasion I couldn’t resist making a brief transcription for the PB.

For some context: B.D. Mitchell is one of Canada’s foremost contemporary experts on Nietzsche and psychological controversies in the late nineteenth century; scholar of the periodical press & the popularization of science in the pre-WWII era; and lecturer at the University of King’s College (Halifax). He is also Editor-in-Chief of Beyond Borderlands, a critical journal of the weird, paranormal, and occult.

I think contemporary scholars of mind should be concerned with the history of the sciences not only because it offers us case studies about how and why progress & regression occur during the process of inquiry, but also because the political economy of science, which cannot be understood without a historical knowledge, is a monumental influence on our lives as brain-workers: from federal science policy, to the structure of our professional societies, down to the office politics that shape our teaching (and learning).

*Caveat for the Q/A: It is my perception that B.D. Mitchell’s perspective on science reflects the sensibilities of a historian, whereas my own sensibilities (and maybe yours) are those of a practitioner. In other words, Mitchell is often wont to bracket the truth/falseness of a particular belief system as part and parcel of his mode of historiography. This is all to the good within that domain. But the working linguist needs their "cheques" to cash at the end of the day, and if history can help make that happen, then good. If it’s not false, great; if it’s true, even better. (To put it in a less flowery way: the working scientist must un-bracket the truth/falseness of the belief systems that are available to them if they are to make progress in the sciences). This is often the cause of great tension between scientists and historians/philosophers of science, as you will likely experience in reading the Q/A below.

Embrace the tension; it will enrich you. happy reading:

~mb~

most scientists-in-training aren't obligated to study the philosophy or history of the sciences. this has led to a number of issues in science-doing that we've chatted about before. if you could give one piece of advice from the history or philosophy of science to the contemporary working scientist, what would it be?


~bdm~

Keep your doors open, physically and metaphorically. Recognize that there is a social element to your science. The best scientists have historically been those who were the best at listening in to the larger discussions going on around them, and seeing how their own specialties could be productively applied within these larger discussions. The “reclusive scientific genius”, from Galileo, to Newton, to Darwin, Tesla, and Einstein, is more of a rhetorical device that devotees use to surround their intellectual heroes with an air of worship than an actual condition of their thought and work. They were not alone, just as you are not alone. They do great things because they are greatly interested in the world, both in its most mundane sense, and in its most exalted. That is all.

~mb~


how has scientific discourse changed over time?

~bdm~

I think that one of the biggest changes between the scientific discourse of the 19th and 20th centuries has been in terms of how the changing bureaucratic structure of financial rewards that scientists received for their work influenced the teaching, style, and intended audience of scientific writing.

The less prestige science had, and the more informal the teaching of science was, the more those proposing controversial scientific theories had to write well and for a mixed audience, appealing to both the specialists in their fields, and potentially high profile public backers and policy makers.

Thomas Henry Huxley wanted scientists to be both financially rewarded specialists and the new cultural elites capable of shaping public opinion. Yet arguably, the development of funding bodies and formalized teaching institutions throughout the nineteenth century led scientists to gain greater internal prestige and monetary incentives at the cost of sequestering themselves away from the very public that Huxley saw as the basis of securing the financial freedom and cultural importance of the scientist.

His victory was a partial one that would have profound implications for the relationship between science and the media. The varied interests of popular journals, newspapers, radio, and television have remained more or less steady; what changed was how the scientists themselves interacted with these forms of media.

While there were many important scientific popularisers in the 19th century, there were also plenty of practicing scientists whose professional writings were also targeted at a popular audience. It’s not that the popular media itself has changed; what changed were the reasons for scientists to actively participate in broader discussions about science, and the range of venues in which such discussions happened.

~mb~

how has the scholarly/popular perspective about the relationship between language & thought changed over time?


~bdm~

I think that in the history of the study of language we see several interpenetrating traditions that circle around some fairly fundamental questions: do we create language or does language create us? Are the limits of language the limits of thought? How does language relate to the world? Where does language come from? What is common about language? Where are the differences?

I say that the various philosophical, religious, cultural, etc. traditions that have thought about language are interpenetrating because no society has ever just had one answer to these questions. They’re not dichotomies so much as they are continuums. Because of this there is no one arrow of change, but a web of interrelated changes.

Despite this, starting around the time of the modern research university, disciplinary trends seem to be increasingly set on turning these questions into dichotomies for the purposes of teaching them in a formalized manner that could be used to process an ever growing number of students. In this regard many of the problems facing the study of language are the problems of modern professionalisation more broadly. That makes it difficult to talk about how the discussions differ between scholarly and popular perspectives, for part of these discussions is what makes this dichotomy in the first place. Here I’ll refer readers to Tuska Benes' In Babel's Shadow: Language, Philology, and the Nation in Nineteenth-Century Germany and William Clark's Academic Charisma and the Origins of the Research University.

One of the most important consequences of this is that questions of the relationship between language and thought are caught up in the problems that plague debates about the relationship between the subjective and objective more broadly in science and society. This is one particular point at which the study of language stands to gain the most from observing trends in the history and philosophy of science, which has been trying to wrestle with these issues for a very long time. See, for instance, Lorraine Daston's and Peter Galison's work Objectivity.

~mb~

what ought to be the division of labour between metaphysics and epistemology in the study of mind?

~bdm~

I think that epistemology is what allows us to understand our limits, while metaphysics is how we act creatively within those limits. Anything deserving the name of knowledge requires both. The error, and the conflation of the two, comes from thinking that we can use epistemology to come to any one certain and specific answer about the structure of the world, or, in this instance, of the mind. What epistemology can do is bring us consistently to a place where we can realize the necessity of having a metaphysics, but not the content of those metaphysics.

We can lament and gnash our teeth at the uncertainties of our finite existence, or see ourselves as skilled and living artists capable of producing whole ecologies of knowledge and meaning. This needn’t lead us to the bogeyman of an “anything goes” style relativism, but to a more refined relativism that can show us how there are still many important and shared structures and forms of evaluating the world that are common to the human, even if we can never prove that they are transcendental absolutes. And this is a good thing, for the absolute is inimical to life; it’s incapable of motion or growth. Epistemically, an ecology of absolutes is a monocultural wasteland, and no ecology at all.

~mb~

often in our conversations you invoke the voice of Nietzsche. what do you think it entails about the nature of our minds (language & thought) that it is possible to reliably adopt the style of reasoning and language of another person?


~bdm~

*laughs* I can’t claim that it’s ever reliable to adopt the style of reasoning and language of another person. Indeed, I would warn against thinking that, or of adopting any one other person’s ideas too completely, but it is productive to study some things, or people, deeply. You have to be aware though that what you study changes you. It can be incredibly enriching, but also limiting in its way. I think of it as a process much akin to aging, or at least aging well. Again, you’re finite, so you have to make choices, and those choices leave their mark, because you have a history.

I guess what I am trying to say is: be careful what you research!

.

.

.

The End.

Wednesday, October 15, 2014

Memory, Learning, and Modularity: An Interview with Randy Gallistel

Have you ever asked yourself, how does the brain store a number in memory? 

It might surprise you to hear that the neuroscience community doesn’t really have a story about how this basic operation happens. Or so one fellow by the name of Gallistel is saying. 

I caught up with Randy Gallistel in May to chat about his new book Memory and the Computational Brain: Why Cognitive Science Will Transform Neuroscience.

The reason that I think linguists ought to be paying attention to this chap is that he’s been working out the details of symbolic/representational/computational theories of learning & perception for decades now, and the results seem promising (from a generative grammar point of view). 


This is a longer interview and it covers a lot of ground, but I’d like to pick at a couple of things that stood out for me in text. (A long interview deserves an equally long ranty post?)

Thing the first: the relationship between learning & memory 

Under the associationist view, learning and memory are of a kind: learning is the forming and strengthening of associations; memory is a reflection of the strengths of those associations. You can gloss the above in your contemporary associationist jargon of choice (e.g. connection strengths). I think that this way of dividing the labour is deeply confused. I won’t go into it here, though stand by for a post devoted to the topic.

Under the computational view, learning is the extraction of information from experience, and memory is the carrying forward of information through time. This way of carving up the apple seems sound to me, but not without a couple of caveats.

Fair Warning: if you continue past this point, you’re in for it.  

Caveat about “experience”: the concept of experience should be approached with caution, because there’s a difference between an organism merely being exposed to some stimuli, and an exposure to stimuli that strikes the organism as an experience. That is to say, the sort of stimulation that a particular creature will treat as an experience depends on the kind of creature it is, and the kind of perceiving and thinking it is capable of.

Maybe the most common example of this distinction in linguistics is that of synthetic speech. It appears that if you expose a human to synthetic speech (when they are under the impression that what they are hearing is not speech), they report hearing whistles and squeaks. In other words, they do not have a linguistic experience. Alternatively, if you tell the subject that they are listening to speech, they will hear the same stimuli as speech. Now they have a linguistic experience.1

Or maybe you’d prefer the following example: supposedly, the ape auditory system is attuned to the same distinctive features as the human one. But observe that even though apes organize the acoustic stimuli along the lines of those features, the data doesn’t seem to amount to anything; they do not have a linguistic experience per se.

Anyway, if you allow yourself to be puzzled by how it is that infants identify language-related data in the environment to begin with, you will see why this caveat is neat. To put it another way, why is it that my pet cat (or monkey, or seal) doesn’t have a “linguistic experience” from the same stimuli as my niece? Or to put it yet another way: if you expose your eyeballs to acoustic data, we can say that this constitutes a type of stimulation, but it is not a visual experience (which is what eyes are all about, isn’t it?). Only visual stimuli of the proper kind will cause us to have a visual experience from which we can then draw information.

More importantly, this observation isn’t limited to perceptual systems such as vision. It can be extended to encompass the conceptual apparatus (as Fodor has been trying to do for the past million years or so). This is a timely issue, especially as work in generative grammar shifts to investigating the Conceptual-Intentional Interface. To re-gloss: a conceptual repertoire depends on how things in the world strike a particular creature (and often on what intentional history a thing strikes them as having). Anyway, look out for more on this in the nebulous future.

Caveat about learning: at some point in the interview, Gallistel says that learning is continuous with perception, but it seems to me that this is not always the case. First, let’s say, for the sake of argument, that learning is the process by which we fix our beliefs about the world. Well, there are patently at least two kinds of beliefs to be fixed: perceptual beliefs about the state of the world, and conceptual-intentional beliefs that are fixed by comparing new potential beliefs with your current belief system. The whole belief system.

The former kind of belief fixation is mediated by modules. Modules are fast, dumb, topic-specific, informationally encapsulated, and generally quite badass. Consider for instance human perceptual performance under laboratory conditions. Reportedly, even when visual and auditory stimuli are presented at many times the speed at which humans would generally encounter them in natural experience, recall and analysis are exceedingly reliable. That is, if you have a subject listen to a stream of speech produced at speeds that aren’t physically possible for a human to produce, the subject is able to parse and understand the utterance. Mutatis mutandis for visual scenes. I haven’t the references on hand, but I will likely edit this post to include them in the near future.

Digression for B.D. Mitchell: The kind of learning that modules mediate (of course) reflects the innate architecture of those modules, and so the information they can extract is contingent and peculiar, providing us with richly structured perspectives from which to view the world (as Noam is wont to put it).

Returning to the main point, this kind of learning is continuous with perception. Your language module demands, all things being equal, that you hear the sentence you’re paying attention to. You can’t not hear the sentence. Mutatis mutandis, your language module demands that you learn that all human grammars have the property of being structure-dependent. Your perception and your learning, in these instances, are reflexive, virtually instantaneous, and general across the species.

The latter kind of belief fixation is mediated by... who knows what? It is slow(er?) and holistic. I think Lila Gleitman once called this kind of learning hell on wheels. Consider for instance the act of doing science. Hypothesis testing and confirmation of this variety is long, arduous, and generally speaking can be quite lame. Or consider, if you prefer, the far less glamorous example of the popular distinction between a simple linguistic joke and an intellectual joke. This is something that occurred to me at the last linguistics salon that we host here in the wonderful city of Toronto.

I observe that people generally make a distinction between linguistic jokes, which rely on a tacit knowledge of one’s grammar (such as those that play on homophony), and intellectual jokes. The former aren’t taken to reflect one’s intellectual prowess but rather one’s linguistic aptitude. The latter require you to check the information presented in the joke against your entire belief system. What you’ve got in your central belief box, and how fast you can search it, seems to be what makes ‘getting’ an intellectual joke impressive compared with a simple linguistic joke, which any human can get with ease.

Compare for example: 
  1. Linguistic joke (Aarons, 121):
    "If it ducks like a quack it probably is one." 
  2. Intellectual joke:
    “Werner Heisenberg, Kurt Gödel, and Noam Chomsky walk into a bar. Heisenberg turns to the other two and says, ‘Clearly this is a joke, but how can we figure out if it's funny or not?’ Gödel replies, ‘We can't know that because we're inside the joke.’ Chomsky says, ‘Of course it's funny. You're just telling it wrong.’”
Thing the second: memory (& attention) & modularity 

There are two ways of thinking about the relationship between memory & modularity (if you buy the modularity story). The first is that modules carry out their proprietary business by drawing on general resources of memory and attention (they all draw on one and the same memory mechanism). 

The second is that modules carry out their proprietary business by deploying proprietary mechanisms of memory & attention. As Gallistel rightly points out in the interview, the neurological evidence strongly suggests that there is one memory mechanism at the neurological level. Although this leaves open, in my opinion, the possibility that there are different deployment conditions (or what have you) at the computational level. Franz Joseph Gall held something similar, and Fodor’s Modularity of Mind has a great discussion of this topic. 

The importance of this distinction becomes apparent when we consider syntactic theory. I suppose that just about everybody in the house believes that the language faculty builds mental representations. And that these mental representations are bona fide mental particulars with all the rights and responsibilities accorded to such things (they have causal powers & they are subject to peculiar conditions on well-formedness). 

One such condition is locality (think displacement in all its various instantiations: binding, raising, control, feature checking, etc.). This condition is interesting because a number of people have attempted to explain its presence by appealing to facts about the memory mechanism. (I think maybe I read a paper by Gary Marcus that was trying to do this?)

I’m generally in favour of this approach, but I think that the facts about the memory limitations of the language faculty (and the conditions it imposes, such as the kind of locality we find in natural language) are going to turn out to be specific to that faculty. I think this is so not only because there isn’t any a priori reason to think otherwise, occasional empiricist kink notwithstanding, but also because it seems to me that different modules build, store, and address different varieties of mental representations. I’m quite open to having my mind changed about this, but I have a strong intuition that the visual system builds mental representations whose forms reflect the demands of the task at hand.

To be clear: I am not saying that there are fundamentally different species of mental representations. Maybe all mental representations are formed by the same operations (Merge, Label?), and all share the notion of locality that is implicit in all the computational-symbolic representational systems at play today. But any additional constraints on well-formedness (like maybe having uninterpretable features) are, I think, likely to be peculiar to their domains. And thus, paying attention to and keeping a record of those various structures when all of your modules are firing at once seems to me to suggest a variegated memory/attention mechanism, at least at the computational level.

Okay... that’s all for now. Feel free to share your thoughts about all this in the comments section below. 

Notes, Admissions, Qualifications, Apologies: 
  1. Fodor, Jerry. (1983) Modularity of Mind, p. 49.
  2. The post above draws mostly on my recent reading of Gallistel, Hornstein, Chomsky, and Fodor, though I doubt they'd approve of their ideas being run together quite like this.
  3. Apologies for some unintended noise in the recording. I’m not sure where it’s coming from, or how to get rid of it.
  4. The linguistic joke above is taken from Jokes and the Linguistic Mind by Debra Aarons (2014).
  5. If you'd like to hear more Gallistel, check out this bloggingheads video interview.