Saturday, November 15, 2014

Touring the Language Faculty: An Interview with Norbert Hornstein

did october happen? it seemed to careen right past me into mid-november. despite this unforgivable betrayal by one of my favourite months of the year I did manage to pull off a mighty fun interview with syntactician, philosopher, and fellow-blogger, Norbert Hornstein.

I originally met this chap when I attended his syntax course at the LSA summer school some years back. listening to him speak on the topic, one is apt to get the feeling that generative grammar is building a cool mad max death truck out of scrap metal and wishes (to borrow a phrase from my flatmate). this is something that is often missing from the average lecture on the topic of generative syntax wherein one couldn't be faulted for getting the impression that the field is trying to do philology with both hands tied behind its back (methodologically and theoretically). Norbert is no philologist though, neither on his blog, the Faculty of Language, nor in this interview.

as usual, I'll take a quick dip into something raised in the interview that caught my attention.

during the latter half of the interview (about 48m) Norbert mentions the distinction between linguistics and philology. elsewhere in his writings, the distinction is made by appeal to such notions as explanation and description. I think that Norbert is right to point out that often enough the concerns motivating a programme of research aimed at a faithful description of a language are orthogonal to those motivating a programme of research aimed at discovering the organizing principles which underlie language tout court. nevertheless, there can also be a palpable tension between the two. consider for instance, the levels of theoretical adequacy demarcated in Radford (1982):

"a grammar of a language is observationally adequate if it correctly predicts which sentences are (and are not) syntactically, semantically and phonologically well-formed in the language.

a grammar of a language is descriptively adequate if it correctly predicts which sentences are (and are not) syntactically, semantically and phonologically well-formed in the language, and also correctly describes the syntactic, semantic and phonological structure of the sentences in the language in such a way as to provide a principled account of the native speaker’s intuitions about this structure.

a grammar attains explanatory adequacy just in case it correctly predicts which sentences are and are not well-formed in the language, correctly describes their structure, and also does so in terms of a highly restricted set of optimally simple, universal, maximally general principles of mental computation, and is ‘learnable’ by the child in a limited period of time, and given access to limited data."
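to make the first two levels concrete, here's a toy sketch of my own (nothing remotely like a serious grammar, and the particular rules and categories are invented for illustration): grammar A merely lists the well-formed strings, which is enough for observational adequacy; grammar B also assigns each well-formed string a constituent structure, which is what descriptive adequacy demands.

```python
# grammar A: a bare list of the well-formed strings.
# it sorts strings into well-formed and ill-formed, and nothing more
# (observational adequacy).
GRAMMAR_A = {"the cat sleeps", "a dog barks"}

def well_formed_a(sentence):
    return sentence in GRAMMAR_A

# grammar B: rules that both predict well-formedness and assign a
# labelled constituent structure (descriptive adequacy).
DETS = {"the", "a"}
NOUNS = {"cat", "dog"}
VERBS = {"sleeps", "barks"}

def parse_b(sentence):
    words = sentence.split()
    if (len(words) == 3 and words[0] in DETS
            and words[1] in NOUNS and words[2] in VERBS):
        # a labelled bracketing: [S [NP Det N] [VP V]]
        return ("S", ("NP", words[0], words[1]), ("VP", words[2]))
    return None  # ill-formed: no structure assigned

# both grammars agree on which strings are in the tiny language...
assert well_formed_a("the cat sleeps") and parse_b("the cat sleeps")
# ...but only grammar B has anything to say about structure
print(parse_b("a dog barks"))  # ('S', ('NP', 'a', 'dog'), ('VP', 'barks'))
```

explanatory adequacy is, of course, nowhere in sight here: nothing about grammar B is restricted, simple, universal, or learnable. that's rather the point of the gap between the levels.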

notice that there is a conflict of interest between descriptive adequacy and explanatory adequacy. the former is in a permanently taxonomic mood, and is primarily driven to record, sort, and occasionally predict particular language forms (and meanings); whereas the latter is in a mood to gloss, to provide the rules in virtue of which languages contain the forms and meanings that they do, and the specific pairings between form and meaning that they do.

the conflict arises when we try to map a chaotic, constantly changing world in which accidental and principled variation are observationally indistinguishable to the world of intelligible theory in which consistency and evaluability are supreme values. to my knowledge, the conflict between the two was first noticed by the ancients. for them, a crucial problem was how to relate the sophisticated geometrical and mathematical models of the time to the chaos of worldly phenomena such as motion. Galileo was really the first (again, to my knowledge) to show the possibility of applying the concepts of geometry to the highly variable phenomenon of motion. relatedly, Bacon was the first to be recognized for proposing a mode of inquiry for, inter alia, discerning accidental variation from principled variation. namely, to carry out experiments which contrive experiences. the virtue of carrying out laboratory experiments is that it is possible to discover crucial discrepancies in theoretical prediction which can be used to home in on the essential nature of a thing.

(returning to language & Radford's levels of adequacy)
feel free to substitute whatever variable concerns you besides the syntactic, and whatever metric by which you'd like to evaluate well-formedness. but notice that the problems of marrying your descriptive analyses with your characterization of the abstract grammar don't go away (whether it be a grammar of gesture, or social relations, or morals). this is because psychological (to say nothing of theoretical) objects, grammars among them, are necessarily normative while the data is decidedly not. that is, grammars characterize a set of things which a given speaker (or speech community, if you really insist) will find well-formed with respect to form and meaning. so even a sociolect (a dialect in which linguistic varieties are correlated with sociological factors) is a kind of grammar in virtue of which speakers sort sociolinguistic forms and meanings into the well-formed and the ill-formed.

ultimately then I suppose it wouldn't be too off the mark to encapsulate the tension between linguistics and philology as a tension between accounting for forms (and meanings), which are quite varied and diverse, and accounting for the sense of well-formedness, which is largely stable and shared commonly amongst all humans.

caveat: the centrifugal force between philology and linguistics is, as any sensible researcher would acknowledge, quite often counter-balanced by a centripetal force between the two disciplines. specifically, philological projects set a baseline which any linguistic theory must meet if it is to be observationally and descriptively adequate. symmetrically, theoretical work provides the intellectual scaffolding by which philological work can proceed (think metrics of simplicity; criteria of sorting words into classes, languages into families, and the like; the very decisions about what is important to put in your taxonomy and what is not).


The End.


Notes, Admissions, Qualifications, and Apologies:
  1. Radford, Andrew. 1982. Transformational Syntax. Cambridge: Cambridge University Press. 
  2. Apologies for the odd clicking that starts at about 35 minutes. we are working on making sure that stops happening. if you have any insight as to where this clicking is coming from or how to get rid of it we would be very grateful.


Saturday, November 1, 2014

Science—Like The Shape of Bras—Changes Over Time

A few weeks ago, I had the good fortune to catch up with my good friend, and historian of science, Benjamin D. Mitchell. We oft carried on lengthy arguments about the politics of science doing while he was working on his PhD at York University, and on this latest occasion I couldn’t resist making a brief transcription for the PB.

For some context: B.D. Mitchell is one of Canada’s foremost contemporary experts on Nietzsche and psychological controversies in the late nineteenth century; scholar of the periodical press & the popularization of science in the pre-WWII era; and lecturer at the University of King’s College (Halifax). He is also Editor-in-Chief of Beyond Borderlands, a critical journal of the weird, paranormal, and occult.

I think contemporary scholars of mind should be concerned with the history of the sciences not only because it offers us case studies about how and why progress & regression occur during the process of inquiry, but also because the political economy of science, which cannot be understood without a historical knowledge, is a monumental influence on our lives as brain-workers: from federal science policy, to the structure of our professional societies, down to the office politics that shape our teaching (and learning).

*Caveat for the Q/A: It is my perception that B.D. Mitchell’s perspective on science reflects the sensibilities of a historian, whereas my own sensibilities (and maybe yours) are that of a practitioner. In other words, Mitchell is often wont to bracket the truth/falseness of a particular belief system as part and parcel of his mode of historiography. This is all to the good within that domain. But the working linguist needs their "cheques" to cash at the end of the day, and if history can help make that happen, then good. If it’s not false, great; if it’s true, even better. (To put it in a less flowery way: the working scientist must un-bracket the truth/falseness of the belief systems that are available to them if they are to make progress in the sciences). This is often the cause of great tension between scientists and historians/philosophers of science, as you will likely experience in reading the Q/A below.

Embrace the tension; it will enrich you. happy reading:

~mb~

most scientists-in-training aren't obligated to study the philosophy or history of the sciences. this has led to a number of issues in science-doing that we've chatted about before. if you could give one piece of advice from the history or philosophy of science to the contemporary working scientist, what would it be?


~bdm~

Keep your doors open, physically and metaphorically. Recognize that there is a social element to your science. The best scientists have historically been those who were the best at listening in to the larger discussions going on around them, and seeing how their own specialties could be productively applied within these larger discussions. The “reclusive scientific genius”, from Galileo, to Newton, to Darwin, Tesla, and Einstein, is more of a rhetorical device that devotees use to surround their intellectual heroes with an air of worship than an actual condition of their thought and work. They were not alone, just as you are not alone. They do great things because they are greatly interested in the world, both in its most mundane sense, and in its most exalted. That is all.

~mb~


how has scientific discourse changed over time?

~bdm~

I think that one of the biggest changes between the scientific discourse of the 19th and 20th centuries has been in terms of how the changing bureaucratic structure of financial rewards that scientists received for their work influenced the teaching, style, and intended audience of scientific writing.

The less prestige science had, and the more informal the teaching of science was, the more those proposing controversial scientific theories had to write well and for a mixed audience, appealing to both the specialists in their fields, and potentially high profile public backers and policy makers.

Thomas Henry Huxley wanted scientists to be both financially rewarded specialists and the new cultural elites capable of shaping public opinion. Yet arguably, the development of funding bodies and formalized teaching institutions throughout the nineteenth century led scientists to gain greater internal prestige and monetary incentives at the cost of sequestering themselves away from the very public that Huxley saw as the basis of securing the financial freedom and cultural importance of the scientist.

His victory was a partial one that would have profound implications for the relationship between science and the media. The varied interests of popular journals, newspapers, radio, and television have remained more or less steady; what changed was how the scientists themselves interacted with these forms of media.

While there were many important scientific popularisers in the 19th century, there were also plenty of practicing scientists whose professional writings were also targeted at a popular audience. It’s not that the popular media itself has changed; what changed were the reasons for scientists to actively participate in broader discussions about science, and the range of venues in which such discussions happened.

~mb~

how has the scholarly/popular perspective about the relationship between language & thought changed over time?


~bdm~

I think that in the history of the study of language we see several interpenetrating traditions that circle around some fairly fundamental questions: do we create language or does language create us? Are the limits of language the limits of thought? How does language relate to the world? Where does language come from? What is common about language? Where are the differences?

I say that the various philosophical, religious, cultural, etc. traditions that have thought about language are interpenetrating because no society has ever just had one answer to these questions. They’re not dichotomies so much as they are continuums. Because of this there is no one arrow of change, but a web of interrelated changes.

Despite this, starting around the time of the modern research university, disciplinary trends seem to be increasingly set on turning these questions into dichotomies for the purposes of teaching them in a formalized manner that could be used to process an ever growing number of students. In this regard many of the problems facing the study of language are the problems of modern professionalisation more broadly. That makes it difficult to talk about how the discussions differ between scholarly and popular perspectives, for part of these discussions are what makes this dichotomy in the first place. Here I’ll refer readers to Tuska Benes' In Babel's Shadow: Language, Philology, and the Nation in Nineteenth-Century Germany and William Clark's Academic Charisma and the Origins of the Research University.

One of the most important consequences of this is that questions of the relationship between language and thought are caught up in the problems that plague debates about the relationship between the subjective and objective more broadly in science and society. This is one particular point at which the study of language stands to gain the most from observing trends in the history and philosophy of science, which has been trying to wrestle with these issues for a very long time. See, for instance, Lorraine Daston's and Peter Galison's work Objectivity.

~mb~

what ought to be the division of labour between metaphysics and epistemology in the study of mind?

~bdm~

I think that epistemology is what allows us to understand our limits, while metaphysics is how we act creatively within those limits. Anything deserving the name of knowledge requires both. The error, and the conflation of the two, comes from thinking that we can use epistemology to come to any one certain and specific answer about the structure of the world, or, in this instance, of the mind. What epistemology can do is bring us consistently to a place where we can realize the necessity of having a metaphysics, but not the content of those metaphysics.

We can lament and gnash our teeth at the uncertainties of our finite existence, or see ourselves as skilled and living artists capable of producing whole ecologies of knowledge and meaning. This needn’t lead us to the bogeyman of an “anything goes” style relativism, but to a more refined relativism that can show us how there are still many important and shared structures and forms of evaluating the world that are common to the human, even if we can never prove that they are transcendental absolutes. And this is a good thing, for the absolute is inimical to life; it’s incapable of motion or growth. Epistemically, an ecology of absolutes is a monocultural wasteland, and no ecology at all.

~mb~

often in our conversations you invoke the voice of Nietzsche. what do you think it entails about the nature of our minds (language & thought) that it is possible to reliably adopt the style of reasoning and language of another person?


~bdm~

*laughs* I can’t claim that it’s ever reliable to adopt the style of reasoning and language of another person. Indeed, I would warn against thinking that, or of adopting any one other person’s ideas too completely, but it is productive to study some things, or people, deeply. You have to be aware though that what you study changes you. It can be incredibly enriching, but also limiting in its way. I think of it as a process much akin to aging, or at least aging well. Again, you’re finite, so you have to make choices, and those choices leave their mark, because you have a history.

I guess what I am trying to say is: be careful what you research!

.

.

.

The End.

Wednesday, October 15, 2014

Memory, Learning, and Modularity: An Interview with Randy Gallistel

Have you ever asked yourself, how does the brain store a number in memory? 

It might surprise you to hear that the neuroscience community doesn’t really have a story about how this basic operation happens. Or so one fellow by the name of Gallistel is saying. 

I caught up with Randy Gallistel in May to chat about his new book Memory & the Computational Brain: Why cognitive science will transform neuroscience.

The reason that I think linguists ought to be paying attention to this chap is that he’s been working out the details of symbolic/representational/computational theories of learning & perception for decades now, and the results seem promising (from a generative grammar point of view). 


This is a longer interview and it covers a lot of ground, but I’d like to pick at a couple of things that stood out for me in text. (A long interview deserves an equally long ranty post?)

Thing the first: the relationship between learning & memory 

Under the associationistic view, learning and memory are of a kind: learning is forming and strengthening associations; memory is a reflection of the strengths of those associations. You can gloss the above in your contemporary associationistic jargon of choice (e.g. connection strengths). I think that this way of dividing the labour is deeply confused. I won’t go into it here, though stand by for a post devoted to the topic.

Under the computational view, learning is the extracting of information from experience. And memory is the carrying forward of information through time. This way of carving up the apple seems sound to me. But not without a couple of caveats. 
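the contrast can be made vivid with a toy sketch of my own devising (Gallistel bears no responsibility for it, and the numbers and class names are invented for illustration). the associative learner's "memory" is just a connection strength nudged upward on each exposure; the computational learner extracts a quantity from each experience and carries it forward, so it can be read back out later:

```python
# associationistic learner: learning = strengthening an association;
# memory = the current strength. the experienced quantity itself is lost.
class AssociativeLearner:
    def __init__(self, rate=0.1):
        self.strength = 0.0
        self.rate = rate

    def learn(self, interval):
        # each exposure bumps the strength toward 1.0;
        # note the interval's value plays no role and isn't stored
        self.strength += self.rate * (1.0 - self.strength)

# computational learner: learning = extracting information from
# experience; memory = carrying that information forward through time.
class ComputationalLearner:
    def __init__(self):
        self.intervals = []  # symbols carried forward through time

    def learn(self, interval):
        self.intervals.append(interval)

    def recall_mean(self):
        return sum(self.intervals) / len(self.intervals)

a, c = AssociativeLearner(), ComputationalLearner()
for seconds in [10.0, 12.0, 11.0]:  # e.g. experienced interval durations
    a.learn(seconds)
    c.learn(seconds)

# the computational learner can answer "how long was the interval?";
# the associative one can only report how strong the association is.
print(c.recall_mean())  # 11.0
print(a.strength)
```

this is roughly why the opening question (how does the brain store a number in memory?) is so awkward for the associative picture: there's nowhere in the strength for the number to live.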

Fair Warning: if you continue past this point, you’re in for it.  

Caveat about “experience”: the concept of experience should be approached with caution. This is because there’s a difference between an organism being exposed to some stimuli, and an exposure to stimuli that strikes an organism as an experience. That is to say, the sort of stimulation that a particular creature will treat as an experience depends on the kind of creature it is, and the kind of perceiving and thinking it is capable of.

Maybe the most common example of this distinction in linguistics is that of synthetic speech. It appears that if you expose a human to synthetic speech (when they are under the impression that what they are hearing is not speech) they report hearing whistles and squeaks. In other words, they do not have a linguistic experience. Alternatively, if you tell the subject that they are listening to speech, they will hear the same stimuli as speech. Now, they have a linguistic experience.1 

Or maybe you’d prefer the following example: supposedly, the ape auditory system is attuned to the same distinctive-features as humans. But observe that even though they organize the acoustic stimuli along the lines of features, the data doesn’t seem to result in anything; they do not have a linguistic experience per se.

Anyway, if you allow yourself to be puzzled by how it is that infants identify language-related data in the environment to begin with, you will see why this caveat is neat. To put it another way, why is it that my pet cat (or monkey, or seal) doesn’t have a “linguistic experience” from the same stimuli as my niece? Or to put it yet another way, if you expose your eyeballs to acoustic data, we can say that this constitutes a type of stimuli, but it is not a visual experience (which is what eyes are all about, isn’t it?). Only visual stimuli of the proper kind will cause us to have a visual experience from which we can then draw information. 

More importantly, this observation isn’t limited to perceptual systems such as vision. It can be extended to encompass the conceptual apparatus (as Fodor has been trying to do for the past million years or so.) This is a timely issue, especially as work in generative grammar shifts to investigating the Conceptual-Intentional Interface. To re-gloss: a conceptual repertoire depends on how things in the world strike a particular creature (and often what intentional history a thing strikes them as having). Anyway, look out for more on this in the nebulous future. 

Caveat about learning: at some point in the interview, Gallistel says that learning is contiguous with perception, but it seems to me that this is not always the case. First, let’s say, for the sake of argument, that learning is the process by which we fix our beliefs about the world. Well, there are patently at least two kinds of beliefs to be fixed: perceptual beliefs about the status of the world, and conceptual-intentional beliefs that are fixed by comparing new potential beliefs with your current belief system. The whole belief system. 

The former kind of belief fixation is mediated by modules. Modules are fast, dumb, topic-specific, informationally encapsulated, and generally quite badass. Consider for instance human perceptual performance under laboratory conditions. Reportedly, even when visual and auditory stimuli are presented at many times the speed at which humans would generally be exposed to them in natural experience, memory recall and analysis is exceedingly reliable. That is, if you have a subject listen to a stream of speech being produced at speeds that aren’t physically possible for a human to produce, the subject is able to parse and understand the utterance. Mutatis Mutandis visual scenes. I haven’t the references on hand, but I will likely edit this post to include them in the near future. 

Digression for B.D. Mitchell: The kind of learning that modules mediate (of course) reflects the innate architecture of the modules, and so the information they can extract is contingent, and peculiar; providing us with richly structured perspectives from which to view the world (as Noam is wont to put it). 

Returning to the main point, this kind of learning is contiguous with perception. Your language module demands, all things being equal, that you hear the sentence you’re paying attention to. You can’t not hear the sentence. Mutatis Mutandis, your language module demands that you learn that all human grammars have the property of being structure-dependent. Your perception and your learning, in these instances, are reflexive, virtually instantaneous, and generalizable among the species. 

The latter kind of belief fixation is mediated by...who-knows-what?  
It is slow(er?), and holistic. I think Lila Gleitman once called this kind of learning hell on wheels. Consider for instance the act of doing science. Hypothesis testing and confirmation of this variety is long, arduous, and generally speaking can be quite lame. Or consider, if you prefer, the far less glamorous example of the popular distinction between a simple linguistic joke & an intellectual joke. This is something that occurred to me at the last linguistics salon that we host here in the wonderful city of Toronto. 

I observe that people generally make a distinction between linguistic jokes that rely on a tacit knowledge of one’s grammar (such as those that play on homophony), and intellectual jokes. The former aren’t taken to reflect on one’s intellectual prowess but rather they are taken to reflect on one’s linguistic aptitude. The latter require you to check the information presented in the joke against your entire belief system. What you’ve got in your central belief box, and how fast you can search it, seems to be what makes ‘getting’ an intellectual joke impressive compared with a simple linguistic joke, which any human can do with ease. 

Compare for example: 
  1. linguistic joke (Aarons, 121):
    "If it ducks like a quack it probably is one." 
  2. Intellectual joke:
    “Werner Heisenberg, Kurt Gödel, and Noam Chomsky walk into a bar. Heisenberg turns to the other two and says, ‘Clearly this is a joke, but how can we figure out if it's funny or not?’ Gödel replies, ‘We can't know that because we're inside the joke.’ Chomsky says, ‘Of course it's funny. You're just telling it wrong.’”
Thing the second: memory (& attention) & modularity 

There are two ways of thinking about the relationship between memory & modularity (if you buy the modularity story). The first is that modules carry out their proprietary business by drawing on general resources of memory and attention (they all draw on one and the same memory mechanism). 

The second is that modules carry out their proprietary business by deploying proprietary mechanisms of memory & attention. As Gallistel rightly points out in the interview, the neurological evidence strongly suggests that there is one memory mechanism at the neurological level. Although this leaves open, in my opinion, the possibility that there are different deployment conditions (or what have you) at the computational level. Franz Joseph Gall held something similar, and Fodor’s Modularity of Mind has a great discussion of this topic. 

The importance of this distinction becomes apparent when we consider syntactic theory. I suppose that just about everybody in the house believes that the language faculty builds mental representations. And that these mental representations are bona fide mental particulars with all the rights and responsibilities accorded to such things (they have causal powers & they are subject to peculiar conditions on well-formedness). 

One such condition is that of locality (think displacement in all its various instantiations: binding, raising, control, feature checking, etc.). This condition is interesting because there are a number of people that have attempted to explain its presence by appealing to facts about the memory mechanism. (I think maybe I read a paper by Gary Marcus that was trying to do this?) 

I’m generally in favour of this approach, but I think that the facts about the memory limitations of the language faculty (and the conditions it imposes such as the kind of locality we find in natural language) are going to turn out to be specific to that faculty. I think this is so not only because there isn’t any a priori reason to think otherwise, occasional empiricist kink notwithstanding, but also because it seems to me that different modules build, store, and address different varieties of mental representations. I’m quite open to having my mind changed about this, but I have a strong intuition that the visual system builds mental representations that have forms which reflect the demands of the task at hand. 

To be clear: I am not saying that there are fundamentally different species of mental representations. Maybe all mental representations are formed by the same operations (Merge, Label?), and all share the notion of locality that is implicit in all computational-symbolic representational systems at play today. But any additional constraints on well-formedness, (like maybe having uninterpretable features) are, I think, likely to be peculiar to their domains. And thus, paying attention to and keeping a record of those various structures when all of your modules are firing at once seems to me to suggest a variegated memory/attention mechanism. At least at the computational level. 

Okay... that’s all for now. Feel free to share your thoughts about all this in the comments section below. 

Notes, Admissions, Qualifications, Apologies: 
  1. Fodor, Jerry. (1983) Modularity of Mind. Page 49
  2. The post above draws mostly on my recent reading of Gallistel, Hornstein, Chomsky, and Fodor. Though I doubt they'd approve of their ideas being run together quite like this.  
  3. Apologies for some unintended noise in the recording. I’m not sure where it’s coming from, or how to get rid of it. 
  4. The linguistic joke above is taken from Jokes and the Linguistic Mind by Debra Aarons (2014)
  5. If you'd like to hear more Gallistel, check out this bloggingheads video interview.

Wednesday, October 1, 2014

The Fodorgraph Redux

Hi.

In this post we bring you a brief addendum to our interview with JFo.

After we ended our chat about linguistics, the conversation took a turn for the political. Since we enjoyed the outcome quite a bit, we've decided to make a habit of asking all our interviewees about their beliefs regarding the politics of science doing.



If the value of this topic isn't self-evident to you, consider the following:
  • science is done by people
  • science is done by people working within massive institutions 
  • massive institutions are closely networked (read: funded) with states & industries
  • (therefore) the structure of these institutions is going to reflect the interests of the states & industries in which they are embedded, at least in part 
This way of organizing our science world not only has the consequence of shaping the values of our universities, but it also has the consequence of narrowing down which individuals get to participate by filtering out those persons that don't share those values. 

Of course, the political economy of the university system is not solely determined by institutional pressures from outside; for all sorts of historical and economic reasons it is still the case that a significant part of the House of Higher Education is run by those that work in it. 

The practical outcomes of this arrangement (for science) aren't unambiguously negative. For instance, our state system still reflects (some shred of) a commitment to the public welfare. Thus our universities produce scholars that are often civic minded. One could easily imagine (and in a number of cases witness) an arrangement in which universities are places where scholars are totally divorced from a concern about the popular applications of their work. (Needless to say, what it means to be civic minded is hotly contested; see Noam's Objectivity and Liberal Scholarship for an in-depth discussion of the topic...sort of). 

On the other hand, this arrangement often leads to attenuating the deep questions into superficial engineering questions. Typically, this happens under the pressure of industry for new sources of profit. For those of us interested in brain-y things, the most egregious case of this is probably the study of artificial intelligence. That is, at some point in its development as a field, A.I. went from being concerned with understanding organic intelligence by studying the nature of computational, representational theories of mind to being concerned with building better ipods. If you'd like to hear someone with more clout on the topic say this, check out this (fairly) recent symposium on artificial intelligence held at MIT which included the field's founding figures. Specifically, check out what Patrick H. Winston has to say (time index 1:39:00).

More specifically, the era of the corporate university has had some dubious effects on linguistics. For example, the field has been inundated with quantitative studies wherein the motive for descriptive analysis of language overrides any interest in the big questions: such as what the hell is it that we're studying? what is language? (see here for a more personal account of this sort of change by Sascha Felix). 

As JFo says in the interview: the corollary to the desire for short-term profit in industry is the desire for discoveries at the 0.05 significance levels in the academy. In other words, the political climate inclines people to take fewer risks. (You can find Sydney Brenner saying something analogous in this recent interview).

Maybe where this whole big-picture discussion meets the everyday life of the researcher is in the laboratory. It is now common to hear that a researcher or research institute will spend half their time writing grant proposals. It is equally common to hear serious grumblings about the peer-review system, and about the publish-or-perish era as a whole. In sum, there seems to be a general crapification of academic life that has measurable consequences.

Homework Question: what happens when you build an enormous infrastructure that is dependent on state and industry funding and then you take away the state?


Side-notes:
  1. Apologies for some glitches in the recording. Stand by for new recording equipment.
  2. Tune in two weeks from today to hear our interview with C. Randy Gallistel.   

Tuesday, September 16, 2014

The Fodorgraph

A fodorgraph is an explicit representation which is what is left when you take a literal physical image, subtract the spatial array of colored marks, and then throw away the paper.
The Philosophical Lexicon





Hello Again,

In this post, the PB team brings you this interview featuring both a thoroughly novice interviewer and the fabulous Jerry Fodor.

During this phone conversation, we chatted about Fodor's recent heretical forays into the debate about the notion of natural selection, among other such light topics as Bayesian & probabilistic models of mind, modularity, nativism, generative grammar, & neuroscience.

I'd like to highlight three aspects of the interview that stood out for me.

First, the words modularity & nativism aren't part of Google's legitimate word list. At present they are appearing on my screen with blaring, red squiggly lines beneath them (perhaps Peter Norvig had some say in this state of affairs?). These terms, which simultaneously exert a centripetal and a centrifugal force on people in the field of linguistics, are strongly associated with Fodor. And rightly so: he has spent a lifetime working out and defending a representational theory of mind that crucially presupposes them. For what it's worth, I think that they have been given a bad rap in some of the recent literature, and are due for a second fitting, which is exactly why I thought it relevant to invoke them during this interview.

I won't go into a full-blown defence of either, save to say that if you haven't directly read Fodor discuss these topics you are missing out. His style of argumentation is in and of itself so illuminating an example of what a good exposition of a set of arguments looks like that the effort of reading him is worth it on those grounds alone.

Second, Fodor thinks that neuroscience has achieved virtually zilch with respect to furthering our understanding of the mind. This may sound like an exotic viewpoint, but it does echo a stirring in linguistics with respect to what the proper orientation between neuroscience & cognitive science should be. This concern is, to varying degrees, noticeable in the work of people like Norbert Hornstein, Randy Gallistel, and Noam Chomsky. For an example of something in print on this topic that readily comes to mind, see this or this.

Third, a comment about history – which features heavily, though not by any means primarily, in Fodor's recent argument about natural selection (as the prototypical adaptationist thinks of the notion). History is not a scientific level of explanation. Why? If I follow Fodor's line of thinking, he claims something to the effect that it is impossible to specify a materialist-deterministic mechanism which, given a particular constellation of factors, determines why a person or thing ended up where they did and not some other place. And for better or for worse, materialist-deterministic stories are the only ones we presently accept in the sciences.

For instance, consider the notion of natural selection and behaviourist learning theory. As mentioned in the interview, both of these strands of thought have something deeply in common. In essence, if the only thing we've got in these narratives is a history of the selective pressures on an organism, then they will fail to be predictive for the same reasons that narrow behaviourist narratives fail to be anything but post hoc. That is, they present a history of, figuratively speaking, selective pressures. Fodor's argument on this topic runs quite a bit further. In any event, I won't spoil your fun in teasing apart the whole argument as it exists in print. Or, if you desire, in video-debate. The interesting idea that caught my attention was the claim in his book What Darwin Got Wrong that in the 19th century history was widely considered an adequate level of explanation. Intuitively, this contextualizes some of the explanatory inadequacies in Darwin, Marx, and the neogrammarians. More importantly, if you buy the argument, it militates against attempted revivals of neogrammarian doctrines (see for example Evolutionary Phonology by Juliette Blevins). For a specific treatment of the problem of dealing with historical explanations in linguistics, see Purnell (2009) in this book (chapter 17).

This post has become unwieldy, so I think I had better stop here. But at the risk of wearing out my welcome, I had better say one last thing. Why does all this jazz about natural selection matter? The answer is that inquiry into the evolution of language has become very popular these days. And also very polemical (for example see here; particularly the comments). I think a lot of the work in evolution of language is trying to get a free lunch from genetics, the adaptationist literature, and evolutionary biology. I also think that this is likely to lead to a lot of bruised egos and not much of a lunch.

The End.

Tuesday, September 2, 2014

what's within

Hello World,

This is officially the first post of the polemical brain (abbreviations are sure to appear hereafter).

On this web-space, Max Baru & Co. intend to post an open-ended series of recorded interviews with a variety of thinkers who think about minds/brains and language. As the mood strikes us, we will post additional articles about related topics. Expect, initially at least, a generous amount of experimentation and fairly impressionistic hosts. 

I (Max) am a lowly undergraduate linguist studying at York University, Toronto. I am not a professional journalist, public speaker, or even a particularly confident private speaker. As Uncle Noam is wont to note, this is probably for the best, as it is a disservice to oneself to be swayed by rhetoric and charisma. Since, in deploying my discursive capacities, I typically have little of either, you, listener/reader, are in no danger.

My primary partner-in-crime for this blog is Selena Phillips-Boyle, a linguist pursuing her MA at York University. All audio-visual material is prepared by her. No complaints about sound quality--we're on an instant-noodles budget, and besides, grainy audio builds character. Or something. 

The title of the blog is chosen to represent two things: (1) the topic of the blog, brains at various levels of abstraction, and (2) that we are aware that the field of linguistics is typically in a polemical mood. Part of our aim is thus to erode the animosity between the sects of the profession. We begin with ourselves: I am a card-carrying generativist, and Selena is a dyed-in-the-wool sociolinguist. Initial reservations aside, we have found our worldviews to be largely compatible once the misinformation is weeded out. In fact, we strongly believe that the primary epistemological and metaphysical commitments of generative grammar and sociolinguistics are compatible. Moreover, we feel that many of the topics these fields purport to cover are either orthogonal to each other or else complementary. There are, undoubtedly, important debates to be had, and irreconcilable theories to be chosen between, but the majority of disagreements in and out of the university seem to be of negligible substance once you shine a light on them.

To sum up: we seek to ignite in the linguistics community a feeling of being engaged in a Common Enterprise.

The End.