Linguistics and Language

-- Gennaro Chierchia

  1. Language and Cognition
  2. Language Structure
    2.1. Words and Sounds
    2.2. Phrases
    2.3. Interfaces
    2.4. Meaning
  3. Language Use
    3.1. Language in Context
    3.2. Language in Flux
    3.3. Language in the Mind
  4. Concluding Remarks

1  Language and Cognition

Why is the study of language central to cognition? The answer lies in the key properties of language as they manifest themselves in the way speakers use it. The best way to get a sense of the centrality of language in understanding cognitive phenomena is through some examples. In the rest of this introduction I illustrate some features of language that display surprising regularities. Among the many ways in which an efficient communication code could be designed, natural languages seem to choose quite peculiar ones. The question is why. We consider some of the answers that modern linguistics gives to this question, which lead us into a scenic (if necessarily brief) tour of its main problematics. In particular, section 2 is devoted to language structure and its main articulations. Section 3 is devoted to language use, its interplay with language structure, and the various disciplines that deal with these matters. We then close, in section 4, with a few short remarks on the place of linguistics within cognitive science.

Languages are made of words. How many words do we know? This is something that can be estimated quite accurately (see Pinker 1994: 149 ff.). To set a baseline, consider that Shakespeare in his works uses roughly 15,000 different words. One would think that the vocabulary of, say, a high school student is considerably poorer. Instead, it turns out that a high school senior reliably understands roughly 45,000 words out of a lexicon of 88,500 unrelated words. It might be worth mentioning how one arrives at this estimate. One randomly samples the target corpus of words and performs simple comprehension tests on the sample. The results are then statistically projected to the whole corpus. Now, the size of the vocabulary of a high school senior entails that from when the child starts learning words at a few months of age until the age of eighteen, he or she must be learning roughly a word every hour and a half when awake. We are talking here of learning arbitrary associations of sound patterns with meanings. Compare this with the effort it takes to learn even a short poem by heart, or the names of a handful of basketball players. The contrast is striking. We get to understand 45,000 words with incomparably less effort, to the point of not even being aware of it. This makes no sense without the assumption that our mind must be especially equipped with something, a cognitive device of some sort, that makes us so successful at the task of learning words. This cognitive device must be quite specialized for such a task, as we are not as good at learning poems or the names of basketball players (cf. WORD MEANING, ACQUISITION OF).

The world of sounds that make up words is similarly complex. We all find the sounds of our native language easy to distinguish. For example, to a native English speaker the i-sounds in "leave" and "live" are clearly different. And unless that person is in especially unfavorable conditions, he or she will not take one for the other. To a native English speaker, the difficulty that an Italian learning English (as an adult) encounters in mastering such distinctions looks a bit mysterious. Italians take revenge when English speakers try to learn the contrast between words like "fato" 'fate' vs. "fatto" 'fact.' The only difference between them is that the t-sound in fatto sounds to the Italian speaker slightly longer or tenser, a contrast that is difficult for a speaker of English to master. These observations are quite commonplace. The important point, however, is that a child exposed to the speech sounds of any language picks them up effortlessly. The clicks of Zulu (sounds similar to the "tsk-tsk" of disapproval) or the implosive sounds of Sindhi, spoken in India and Pakistan (sounds produced by sucking in air, rather than ejecting it -- see ARTICULATION) are not harder for the child to acquire than the occlusives of English. Adults, in contrast, often fail to learn to produce sounds not in their native repertoire. Figuring out the banking laws or the foods of a different culture is generally much easier. One would like to understand why.

Behind its quaint, everyday appearance, language seems to host many remarkable regularities of the sort just illustrated. Here is yet another example, taken from a different domain, that of pronouns and ANAPHORA. Consider the following sentence:

(1)
John promised Bill to wash him.
Any native speaker of English will agree that the pronoun "him" in (1) can refer to "Bill" (the object -- see GRAMMATICAL RELATIONS), but there is no way it can refer to "John" (the subject). If we want a pronoun that refers to "John" in a sentence like (1), we have to use a reflexive:
(2)
John promised Bill to wash himself.
The reflexive "himself" in (2) refers to "John." It cannot refer to "Bill." Compare now (1) with (3):
(3)
John persuaded Bill to wash him.
Here "him" can refer to the subject, but not to the object. If we want a pronoun that refers to "Bill," we have to use a reflexive:
(4)
John persuaded Bill to wash himself.
The reflexive "himself" in (4) must refer to the object. It cannot refer to the subject. By comparing (1) and (2) with (3) and (4), we see that the way pronouns work with verbs like "promise" appears to be the opposite of the way they work with verbs like "persuade." Yet the structure of these sentences appears to be identical. There must be a form of specialized, unconscious knowledge we have that makes us say "Yes, 'him' can refer to the subject in (3) but not in (1)." A very peculiar intuition we have grown to have.

What is common to these different aspects of language is the fact that our linguistic behavior reveals striking and complex regularities. This is true throughout the languages of the world. In fact, the TYPOLOGY of the world's languages reveals significant universal tendencies. For example, the patterns of word order are quite limited. The most common basic orders of the major sentence constituents are subject-verb-object (abbreviated as SVO) and SOV. Patterns in which the object precedes the subject are quite rare. Another language universal one might mention is that all languages have ways of using clauses to modify nouns (as in "the boy that you just met," where the relative clause "that you just met" modifies the noun "boy"). Now structural properties of this sort are not only common to all known spoken languages but in fact can be found even in SIGN LANGUAGES, that is, visual-gestural languages typically in use in populations with impaired verbal abilities (e.g., the deaf). It seems plausible to maintain that universal tendencies in language are grounded in the way we are; this must be so, for speaking is a cognitive capacity, the capacity in virtue of which we say that we "know" our native language. We exercise this capacity in using language. A term often used in this connection is "linguistic competence." The way we put such competence to use in interacting with our environment and with each other is called "performance."

The need to hypothesize a linguistic competence can also be seen from another point of view. Language is a dynamic phenomenon, dynamic in many senses. It changes across time and space (cf. LANGUAGE VARIATION AND CHANGE). It varies along social and gender dimensions (cf. LANGUAGE AND GENDER; LANGUAGE AND CULTURE). It also varies in sometimes seemingly idiosyncratic ways from speaker to speaker. Another important aspect of the dynamic character of language is the fact that a speaker can produce and understand an indefinite number of sentences, while having finite cognitive resources (memory, attention span, etc.). How is this possible? It must happen in a way analogous to how we, say, add two numbers we have never added before. We can do it because we have mastered a combinatorial device, an ALGORITHM. But the algorithm for adding we have learned through explicit training. The one for speaking appears to grow spontaneously in the child. Such an algorithm is constitutive of our linguistic competence.

The fact that linguistic competence does not develop through explicit training can be construed as an argument in favor of viewing it as a part of our genetic endowment (cf. INNATENESS OF LANGUAGE). This becomes all the more plausible if one considers how specialized the knowledge of a language is and how quickly it develops in the child. In a way, the child should be in a situation analogous to that of somebody who is trying to crack an unknown communication code. Such a code could in principle have very different features from those of a human language. It might lack a distinction between subjects and objects. Or it might lack the distinction between nouns and verbs. Many languages of practical use (e.g., many programming languages) are designed just that way. The range of possible communication systems is huge and highly differentiated. This is part of the reason why cracking a secret code is very hard -- as hard as learning an unfamiliar language as an adult. Yet the child does it without effort and without formal training. This seems hard to make sense of without assuming that, in some way, the child knows what to look for and knows what properties of natural speech he or she should attend to in order to figure out its grammar. This argument, based on the observation that language learning constitutes a specialized skill acquired quickly from minimal input, is known as the POVERTY OF THE STIMULUS ARGUMENT. It suggests that linguistic competence is a relatively autonomous computational device that is part of the biological endowment of humans and guides them through the acquisition of language. This is one of the planks of what has come to be known as GENERATIVE GRAMMAR, a research program started in the late 1950s by Noam Chomsky, which has proven to be quite successful and influential.

It might be useful to contrast this view with another one that a priori might be regarded as equally plausible (see CONNECTIONIST APPROACHES TO LANGUAGE). Humans seem to be endowed with a powerful all-purpose computational device that is very good at extracting regularities from the environment. Given that, one might hypothesize that language is learned the way we learn any kind of algorithm: through trial and error. All that language learning amounts to is simply applying our high-level computational apparatus to linguistic input. According to this view, the child acquires language much as she learns, say, how to do division, the main difference being in the nature of the input. Learning division is of course riddled with all sorts of mistakes that the child goes through (typical ones involve keeping track of remainders, misprocessing partial results, etc.). Consider, in this connection, the pattern of pronominalization in sentences (1) through (4). If we learn languages the way we learn division, the child ought to make mistakes in figuring out what can act as the antecedent of a reflexive and what cannot. In recent years there has been extensive empirical investigation of the behavior of pronominal elements in child language (see BINDING THEORY; SYNTAX, ACQUISITION OF; SEMANTICS, ACQUISITION OF). Such mistakes are not what was found; the evidence goes in the opposite direction. As soon as reflexives and nonreflexive pronouns make their appearance in the child's speech, they appear to be used in an adult-like manner (cf. Crain and McKee 1985; Chien and Wexler 1990; Grodzinsky and Reinhart 1993).

Many of the ideas we find in generative grammar have antecedents throughout the history of thought (cf. LINGUISTICS, PHILOSOPHICAL ISSUES). One finds important debates on the "conventional" versus "natural" origins of language already among the presocratic philosophers. And many ancient grammarians came up with quite sophisticated analyses of key phenomena. For example, the Indian grammarian Panini (fourth to third century B.C.) proposed an analysis of argument structure in terms of THEMATIC ROLES (like agent, patient, etc.), quite close in spirit to current proposals. The scientific study of language received a great impulse in the nineteenth century, when the historical links among the languages of the Indo-European family, at least in their general setup, were unraveled. A further fundamental development in our century was the structuralist approach, that is, the attempt to characterize in explicit terms language structure as it manifests itself in sound patterns and in distributional patterns. The structuralist movement started out in Europe, thanks to F. DE SAUSSURE and the Prague School (which included among its protagonists N. Trubeckoj and R. JAKOBSON), and then developed, in somewhat different forms, in the United States through the work of L. BLOOMFIELD, E. SAPIR, Z. Harris (who was Chomsky's teacher), and others. Structuralism, besides leaving us with an accurate description of many important linguistic phenomena, constituted the breeding ground for a host of concepts (like "morpheme," "phoneme," etc.) that have been taken up and developed further within the generative tradition. It is against this general background that recent developments should be assessed.

2  Language Structure

Our linguistic competence is made up of several components (or "modules," see MODULARITY AND LANGUAGE) that reflect the various facets of language, going from speech sounds to meaning. In this section we will review the main ones in a necessarily highly abbreviated form. Language can be thought of as a LEXICON plus a combinatorial apparatus. The lexicon is constituted by the inventory of words (or morphemes) out of which sentences and phrases are built up. The combinatorial apparatus is the set of rules and principles that enable us to put words together into well-formed strings, and to pronounce and interpret such strings. What we will see, as we go through the main branches of linguistics, is how the combinatorial machinery operates throughout the various components of grammar. Meanwhile, here is a rough road map of the major modules that deal with language structure.

Figure 1

2.1 Words and Sounds

We already saw that the number of words we know is quite remarkable. But what do we mean by a "word"? Consider the verb "walk" and its past tense "walked." Are these two different words? And how about "walk" versus "walker"? We can clearly detect some inner regular components to words like "walked," namely the stem "walk" (which is identical to the infinitival form) and the ending "-ed," which signals "past." These components are called "morphemes"; they constitute the smallest elements with an identifiable meaning we can recognize in a word. The internal structure of words is the object of the branch of linguistics known as MORPHOLOGY. Just as sentences are formed by putting words together, so words themselves are formed by putting together morphemes. Within the word, that is, as well as between words, we see a combinatorial machinery at work. English has a fairly simple morphological structure. Languages like Chinese have even greater morphological simplicity, while languages like Turkish or Japanese have a very rich morphological structure. POLYSYNTHETIC LANGUAGES are perhaps the most extreme cases of morphological complexity. The following, for example, is a single word of Mohawk, a polysynthetic North American Indian language (Baker 1996: 22):

(5)
ni-mic-tomi-maka
first person-second person-money-give
'I'll give you the money.'

Another aspect of morphology is compounding, which enables one to form complex words by "glomming" words together. This strategy is quite productive in English, for example, blackboard, blackboard design, blackboard design school, and so on. Compounds can be distinguished from phrases on the basis of a variety of converging criteria. For example, the main stress on compounds like "blackboard" is on "black," while in the phrase "black board" it is on "board" (cf. STRESS, LINGUISTIC; METER AND POETRY). Moreover, syntax treats compounds as units that cannot be separated by syntactic rules. Through morphological derivation and compounding the structure of the lexicon becomes quite rich.

So what is a word? At one level, it is what is stored in our mental lexicon and has to be memorized as such (a listeme). This is the sense in which we know 45,000 (unrelated) words. At another, it is what enters as a unit into syntactic processes. In this second sense (but not in the first) "walk" and "walked" count as two words. Words are formed by composing together smaller meaningful units (the morphemes) through specific rules and principles.

Morphemes are, in turn, constituted by sound units. Actually, speech forms a continuum not immediately analyzable into discrete units. When exposed to an unfamiliar language, we cannot tell where, for example, the word boundaries are, and we have difficulty in identifying the sounds that are not in our native inventory. Yet speakers classify their speech sound stream into units, the phonemes. PHONETICS studies speech sounds from an acoustic and articulatory point of view. Among other things, it provides an alphabet to notate all of the sounds of the world's languages. PHONOLOGY studies how the range of speech sounds is exploited by the grammars of different languages and the universal laws of the grammar of sounds. For example, we know from phonetics that back vowels (produced by lifting the rear of the tongue towards the palate) can be rounded (as in "hot") or unrounded (as in "but") and that this is so also for front vowels (produced by lifting the tongue toward the front of the vocal tract). The i-sound in "feet" is a high, front, unrounded vowel; the sound of the corresponding German word "Füsse" is also pronounced by raising the tongue towards the front, but is rounded. If a language has rounded front vowels, it also has rounded back vowels. To illustrate, Italian has back rounded vowels, but lacks altogether unrounded back vowels. English has both rounded and unrounded back vowels. Both English and Italian lack front rounded vowels. German and French, in contrast, have them. But there is no language that has in its sound inventory front rounded vowels without also having back rounded ones. This is the form that constraints on possible systems of phonemes often take.

As noted in section 1, the types of sounds one finds in the world's languages appear to be very varied. Some languages may have relatively small sound inventories constituted by a dozen phonemes (as, for example, Polynesian); others have quite large ones with about 140 units (Khoisan). And there are of course intermediate cases. One of the most important linguistic discoveries of this century has been that all of the wide variety of phonemes we observe can be described in terms of a small universal set of DISTINCTIVE FEATURES (i.e., properties like "front," "rounded," "voiced," etc.). For example, /p/ and /b/ (bilabial stops) have the same feature composition except for the fact that the former is voiceless (produced without vibration of the vocal cords) while the latter is voiced. By the same token, the phoneme /k/, as in "bake," and the final sound of the German word "Bach" are alike, except in one feature. In the former the air flux is completely interrupted (the sound is a stop) by lifting the back of the tongue up to the rear of the palate, while in the latter a small passage is left, which results in a turbulent continuous sound (a fricative, notated in the phonetic alphabet as /x/). So all phonemes can be analyzed as feature structures.

There is also evidence that features are not just a convenient way to classify phonemes but are actually part of the implicit knowledge that speakers have of their language. One famous experiment that provides evidence of this kind has to do with English plurals. In simplified terms, plurals are formed by adding a voiced alveolar fricative /z/ after a voiced sound (e.g., fad[z]) and its voiceless counterpart /s/ after a voiceless one (e.g., fat[s]). This is a form of assimilation, a very common phonological process (see PHONOLOGICAL RULES AND PROCESSES). If a monolingual English speaker is asked to form the plural of a word ending in a phoneme that is not part of his or her native inventory and has never been encountered before, that speaker will follow the rule just described; for example, the plural of the word "Bach" will be [baxs], not [baxz]. This means that in forming the plural speakers are actually accessing the featural makeup of the phonemes and analyzing phonemes into voiced versus voiceless sets. They have not just memorized after which sounds /s/ goes and after which /z/ goes (see Akmajian et al. 1990: chapter 3 and references therein).
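
To make this concrete, here is a minimal illustrative sketch (my own construction, not taken from the article or from any particular phonological framework) of a voicing-based plural rule. The segment sets and names are simplifying assumptions, and the sketch ignores the extra vowel English inserts after sibilants (as in "buses"):

```python
# Toy version of the voicing-assimilation rule described above (illustrative only).
VOICELESS = {"p", "t", "k", "f", "x"}                 # e.g., "fat", "Bach"
VOICED = {"b", "d", "g", "v", "m", "n", "l", "r",
          "a", "e", "i", "o", "u"}                    # e.g., "fad", "flea"

def plural_suffix(stem_final_segment: str) -> str:
    """Choose /s/ or /z/ purely from the voicing feature of the stem-final segment."""
    if stem_final_segment in VOICELESS:
        return "s"   # voiceless final sound -> voiceless /s/
    if stem_final_segment in VOICED:
        return "z"   # voiced final sound -> voiced /z/
    raise ValueError(f"segment not classified: {stem_final_segment}")

# A segment outside the native English inventory, such as the /x/ of "Bach",
# is still handled by its voicing feature alone: the rule yields [baxs], not [baxz].
print(plural_suffix("x"))   # s
print(plural_suffix("d"))   # z
```

The point of the sketch is simply that the rule consults a feature of the final segment, not a memorized list of the segments after which /s/ or /z/ happens to occur.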

Thus we see that even within sound units we find smaller elements, the distinctive features, combined according to certain principles. Features, organized in phonemes, are manipulated by rule systems. Phonemes are in turn structured into larger prosodic constituents (see PROSODY AND INTONATION), which constitute the domains over which stress and TONE are determined. On the whole we see that the world of speech sounds is extremely rich in structure, and its study has reached a level of remarkable theoretical sophistication (for recent important developments, see OPTIMALITY THEORY).

2.2 Phrases

The area where we perhaps most clearly see the power of the combinatorial machinery that operates in language is SYNTAX, the study of how words are composed into phrases. In constructing sentences, we don't merely put words into certain sequences; we actually build up a structure. Here is a simple illustration.

English is an SVO language, whose basic word order in simple sentences is the one in (6a).

(6)
a. Kim saw Lee
b. *saw Lee Kim    b'. Ha visto Lee Kim (Italian)
c. *Kim Lee saw    c'. Kim-ga Lee-o mita (Japanese)

Alternative orders, such as those in (6b - c), are ungrammatical in English. They are grammatical in other languages; thus (6b'), the word-by-word Italian translation of (6b), is grammatical in Italian; and so is (6c'), the Japanese translation of (6c). A priori, the words in (6a) could be put together in a number of different ways, which can be represented by the following tree diagrams:

Figure 2

The structure in (7a) simply says that "Kim," "Lee," and "saw" are put together all at once and that one cannot recognize any subunit within the clause. Structure (7b) says that there is a subunit within the clause constituted by the subject plus the verb; (7c) that the phrasing actually puts together the verb plus the object. The right analysis for English turns out to be (7c), where the verb and the object form a unit, a constituent called the verb phrase (VP), whose "center," or, in technical terms, whose "head" is the verb. Interestingly, such an analysis turns out to be right also for Japanese and Italian, and, it seems, universally. In all languages, the verb and the object form a unit. There are various ways of seeing that it must be so. A simple one is the following: languages have proforms, that is, elements that lack an inherent meaning and get their semantic value from a linguistic antecedent (or, in some cases, the extralinguistic context). Personal pronouns like "he" or "him" are a typical example:

(8)
A tall boy came in. Paul greeted him warmly.
Here the antecedent of "him" is most naturally construed as "a tall boy". "Him" is a noun phrase (NP), that is, it has the same behavior as things like "Kim" or "a tall boy," which can act as its antecedent. Now English, like many other languages, also has proforms that clearly stand for V+object sequences:
(9)
Kim saw Lee. Mary swears that Paul did too.
"Did" in (9) is understood as "saw Lee." This means that the antecedent of "did" in (9) is the verb+object sequence of the previous sentence. This makes sense if we assume that such sequences form a unit, the VP (just like "a tall boy" forms an NP).

Notice that English does not have a proform that stands for the subject plus a transitive verb. There is no construction of the following sort:

(10)
Kim saw Lee. Mary swears that PROed John too.
[meaning: "Mary swears that Kim saw John too"]
The hypothetical element "PROed" would be an overt morpheme standing for a subject+transitive verb sequence. From a logical point of view, a subject+verb proform doesn't look any more complex than a verb+object proform. From a practical point of view, such a proform could be as useful and as effective for communication as the proform "did." Yet there is nothing like "PROed," and not just in English. In no known language does such a proform appear. This makes sense if we assume that proforms must be constituents of some kind and that verb + object (in whatever order they come) forms a constituent. If, instead, the structure of the clause were (7a), there would be no reason to expect such an asymmetry. And if the structure were (7b), we would expect proforms such as "PROed" to be attested.

A particularly interesting case is constituted by VSO languages, such as Irish, Breton, and many African languages. Here is an Irish example (Chung and McCloskey 1987: 218):

(11)
Ni olan se bainne ariamh
Neg drink-PRES. he milk ever
'He never drinks milk.'
In this type of language the V surfaces next to the subject, separated from the object. If simple linear adjacency is what counts, one might well expect to find in some language of this form a verbal proform that stands for the verb plus the subject. Yet no VSO language has such a proform. This peculiar insistence on banning a potentially useful item even where one would expect it to be readily available can be understood if we assume that VSO structures are obtained by moving the verbal head out of a canonical VP as indicated in what follows:

Figure 3

The process through which (11) is derived is called HEAD MOVEMENT and is analogous to what one observes in English alternations of the following kind:

(13)
a. Kim has seen Lee.
b. Has Kim seen Lee?
In English, yes-no questions are formed by fronting the auxiliary. This process that applies in English to questions applies in Irish more generally, and is what yields the main difference in basic word order between these languages (see Chung and McCloskey 1987 for evidence and references).

Summing up, there is evidence that in sentences like (6a) the verb and the object are tied together by an invisible knot. This abstract constituent structure manifests itself in a number of phenomena, of which we have discussed one: the existence of VP proforms, in contrast with the absence of subject+verb proforms. The latter absence appears to be a universal property of languages and constitutes evidence in favor of the universality of the VP. Along the way, we have also seen how languages can vary and what mechanisms can be responsible for such variations (cf. X-BAR THEORY). Generally speaking, words are put together into larger phrases by a computational device that builds up structures on the basis of relatively simple principles (like "put a head next to its complement" or "move a head to the front of the clause"). Aspects of this computational device are universal and are responsible for the general architecture that all languages share; others can vary (in a limited way) and are responsible for the final form of particular languages.
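
Purely as an illustration (my own toy encoding, not a representation used in any particular syntactic framework), the following sketch builds the constituent structure in (7c) and derives a VSO order by fronting the verbal head, in the spirit of the head-movement analysis just described:

```python
# Illustrative sketch: a clause is a nested structure [S subject [VP verb object]],
# and VSO order is derived by fronting the head of VP.

Clause = tuple  # ("S", subject, ("VP", verb, object))

def svo_clause(subject: str, verb: str, obj: str) -> Clause:
    """Build the constituent structure in (7c): the verb and object form a VP."""
    return ("S", subject, ("VP", verb, obj))

def linearize(clause: Clause) -> str:
    """Read the terminal words off the tree in order (SVO)."""
    _, subject, (_, verb, obj) = clause
    return " ".join([subject, verb, obj])

def head_movement(clause: Clause) -> str:
    """Front the verbal head out of VP, yielding VSO order (cf. the Irish pattern)."""
    _, subject, (_, verb, obj) = clause
    return " ".join([verb, subject, obj])

c = svo_clause("Kim", "saw", "Lee")
print(linearize(c))       # Kim saw Lee   (English-style SVO)
print(head_movement(c))   # saw Kim Lee   (schematic VSO order)
```

Nothing in the sketch is meant as a serious grammar; it only makes vivid the idea that word-order differences can be derived from a shared constituent structure plus a simple displacement operation.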

There is converging evidence that confirms the psychological reality of constituent structure, that is, the idea that speakers unconsciously assign a structure in constituents to sequences of words. A famous case that shows this is a series of experiments known as the "click" experiments (cf. Fodor, Bever, and Garrett 1974). In these experiments, subjects were presented with a sentence through a headphone. At some stage during this process a click sound was produced in the headphone and subjects were then asked at which point of the presentation the click occurred. If the click occurred at major constituent breaks (such as the one between the subject and the VP), the subjects were accurate in recalling when it occurred. If, however, the click occurred within a constituent, subjects would make systematic mistakes in recalling the event. They would overwhelmingly displace the click to the closest constituent break. This behavior would be hard to explain if constituent structure were not actually computed by subjects in processing a sentence (see Clark and Clark 1977 for further discussion).

Thus, looking at the syntax of languages we discover a rich structure that reveals fundamental properties of the computational device that the speaker must be endowed with in order to be able to speak (and understand). There are significant disagreements as to the specifics of how these computational devices are structured. Some frameworks for syntactic analysis (e.g., CATEGORIAL GRAMMAR; HEAD-DRIVEN PHRASE STRUCTURE GRAMMAR; LEXICAL FUNCTIONAL GRAMMAR) emphasize the role of the lexicon in driving syntactic computations. Others, like MINIMALISM, put their emphasis on the economical design of the principles governing how sentences are built up (see also OPTIMALITY THEORY). Other kinds of disagreement concern the choice of primitives (e.g., RELATIONAL GRAMMAR and COGNITIVE LINGUISTICS). In spite of the liveliness of the debate and of the range of controversy, most, maybe all, of these frameworks share a great deal. For one thing, key empirical generalizations and discoveries can be translated from one framework to the next. For example, all frameworks encode a notion of constituency and ways of fleshing out the notion of "relation at a distance" (such as the one we have described above as head movement). All frameworks assign to grammar a universal structural core and dimensions along which particular languages may vary. Finally, all major modern frameworks share certain basic methodological tenets of formal explicitness, aimed at providing mathematical models of grammar (cf. FORMAL GRAMMARS).

2.3 Interfaces

Syntax interacts directly with all other major components of grammar. First, it draws from the lexicon the words to be put into phrases. The lexical properties of words (e.g. whether they are verbs or nouns, whether and how many complements they need, etc.) will affect the kind of syntactic structures that a particular selection of words can enter into. For example, a sentence like "John cries Bill" is ungrammatical because "cry" is intransitive and takes no complement. Second, syntax feeds into phonology. At some point of the syntactic derivation we get the words in the order that we want to pronounce them. And third, syntax provides the input to semantic interpretation.

To illustrate these interfaces further, consider the following set of sentences:

(14)
a. John ignores that Mary saw who.
b. John ignores who Mary saw t.
c. who does John ignore that Mary saw t.
Here we have three kinds of interrogative structures. Sentence (14a) is not acceptable as a genuine question. It is only acceptable as an "echo" question, for example in reaction to an utterance of the form "John ignores that Mary saw so and so" where we do not understand who "so and so" is. Sentence (14b) contains an embedded question. In it, the wh-pronoun appears in place of the complementizer "that;" in other terms, in (14b), the pronoun "who" has been dislocated to the beginning of the embedded clause and "t" marks the site that it was moved from. Finally, sentence (14c), with the wh-pronoun moved to the beginning, constitutes a canonical matrix question (see WH-MOVEMENT). Now, the interpretations of (14b) and (14c) can be given roughly as follows:
(15)
a. John ignores (the answer to the question) for which x Mary saw x.
b. (tell me the answer to the question) for which x John ignores that Mary saw x.
The interpretations in (15) are quite close in form to the overt structures of (14b) and (14c) respectively, while the "echo" question (14a) is interpreted roughly as (15b), modulo the special contexts to which it is limited. Thus it seems that the structure of English (non-echo) questions reflects quite closely its interpretation. Wh-pronouns are interpreted as question-forming operators. To make sense of such operators we need to know their scope (i.e., what is being asked). English marks the scope of wh-operators by putting them at the beginning of the clause on which they operate: the embedded clause in (14b), the matrix one in (14c). Now, it is quite telling to compare this with what happens in other languages. A particularly interesting case is that of Chinese (see, in particular, Huang 1982; Cheng 1991) where there is no visible wh-movement. Chinese only has the equivalent of (14a) (Huang 1992).
(16)
Zhangsan xian-zhidao [Lisi kanjian shei]
Zhangsan ignores [Lisi see who]
Sentence (16) in Chinese is ambiguous. It can be interpreted either as (15a) or as (15b). One way of making sense of this situation is along the following lines. Wh-pronouns must be assigned scope to be interpreted. One of the strategies that grammar makes available is placing the wh-pronoun at the beginning of the clause on which it operates. English uses such a strategy overtly. First the wh-word is fronted, then the result is fed to phonology (and hence pronounced) and to semantics (and hence interpreted). In Chinese, instead, one feeds the base structure (16) to phonology; then wh-movement applies, as a step toward the computation of meaning. This gives rise to two abstract structures, corresponding to (14b) and (14c) respectively:
(17)
a. Zhangsan xian-zhidao shei [ Lisi kanjian t ]
b. shei Zhangsan xian-zhidao [ Lisi kanjian t ]
The structures in (17) are what is fed to semantic interpretation. The process just sketched can be schematized as follows:

Figure 4

In rough terms, in Chinese one utters the sentence in its basic form (which is semantically ambiguous -- see AMBIGUITY), then one does scoping mentally. In English, one first applies scoping (i.e., one marks what is being asked), then utters the result. This way of looking at things enables us to see question formation in languages as diverse as English and Chinese in terms of a uniform mechanism. The only difference lies in the level at which scoping applies. Scope marking takes place overtly in English (i.e., before the chosen sequence of words is pronounced). In Chinese, by contrast, it takes place covertly (i.e., after having pronounced the base form). This is why sentence (16) is ambiguous in Chinese.

There are other elements that need to be assigned a scope in order to be interpreted. A prime case is constituted by quantified NPs like "a student" or "every advisor" (see QUANTIFIERS). Consider (19):

(19)
Kim introduced a new student to every advisor.
This sentence has roughly the following two interpretations:
(20)
a. There is a student such that Kim introduced him to every advisor.
b. Every advisor is such that Kim introduced a (possibly different) student to him.
With the help of variables, these interpretations can also be expressed as follows:
(21)
a. There is some new student y such that for every advisor x, Kim introduced y to x.
b. For every advisor x, there is some new student y such that Kim introduced y to x.

Now we have just seen that natural language marks scope in questions by overt or covert movement. If we assume that this is the strategy generally made available to us by grammar, then we are led to conclude that also in cases like (19) scope must be marked via movement. That is, in order to interpret (19), we must determine the scope of the quantifiers by putting them at the beginning of the clause they operate on. For (19), this can be done in two ways:

(22)
a. [[a new student]_i [every advisor]_j [Kim introduced t_i to t_j]]
b. [[every advisor]_j [a new student]_i [Kim introduced t_i to t_j]]

Both (22a) and (22b) are obtained out of (19). In (22a) we move "a new student" over "every advisor." In (22b) we do the opposite. These structures correspond to the interpretations in (21a) and (21b), respectively. In a more standard logical notation, they would be expressed as follows:

(23)
a. [∃x_i: x_i a new student] [∀x_j: x_j an advisor] [Kim introduces x_i to x_j]
b. [∀x_j: x_j an advisor] [∃x_i: x_i a new student] [Kim introduces x_i to x_j]

So in the interpretation of sentences with quantified NPs, we apply scoping to such NPs. Scoping of quantifiers in English is a covert movement, part of the mental computation of MEANING, much like scoping of wh-words in Chinese. The result of scoping (i.e., the structures in [22], which are isomorphic to [23]) is what gets semantically interpreted and is called LOGICAL FORM.
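
To make the difference between the two readings concrete, here is a small illustrative sketch (the model and names are my own toy assumptions, not part of the article) that evaluates the two logical forms in (23) over a sample situation in which (23b) is true but (23a) is false:

```python
# Toy situation (illustrative assumptions): who Kim introduced to whom.
students = {"s1", "s2"}
advisors = {"a1", "a2"}
introduced = {("s1", "a1"), ("s2", "a2")}   # Kim introduced s1 to a1 and s2 to a2

# (23a): some one student was introduced to every advisor.
exists_forall = any(
    all((y, x) in introduced for x in advisors) for y in students
)

# (23b): every advisor had some (possibly different) student introduced to him.
forall_exists = all(
    any((y, x) in introduced for y in students) for x in advisors
)

print(exists_forall)   # False: no single student meets every advisor
print(forall_exists)   # True:  each advisor meets some student or other
```

Since the two logical forms can differ in truth value, they are genuinely distinct readings, which is exactly what the scoping analysis predicts.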

What I just sketched in very rough terms constitutes one of several views currently being pursued. Much work has been devoted to the study of scope phenomena, in several frameworks. Such study has led to a considerable body of novel empirical generalizations. Some important principles that govern the behavior of scope in natural language have been identified (though we are far from a definitive understanding). Phenomena related to scope play an important role at the SYNTAX-SEMANTICS INTERFACE. In particular, according to the hypothesis sketched previously, surface syntactic representations are mapped onto an abstract syntactic structure as a first step toward being interpreted. Such an abstract structure, logical form, provides an explicit representation of scope, anaphoric links, and the relevant lexical information. These are all key factors in determining meaning. The hypothesis of a logical form onto which syntactic structure is mapped fits well with the idea that we are endowed with a LANGUAGE OF THOUGHT, as our main medium for storing and retrieving information, reasoning, and so on. The reason why this is so is fairly apparent. Empirical features of languages lead linguists to detect the existence of a covert level of representation with the properties that the proponents of the language of thought hypothesis have argued for on the basis of independent considerations. It is highly tempting to speculate that logical form actually is the language of thought. This idea needs, of course, to be fleshed out much more. I put it forth here in this "naive" form as an illustration of the potential of interaction between linguistics and other disciplines that deal with cognition.

2.4 Meaning

What is meaning? What is it to interpret a symbolic structure of some kind? This is one of the hardest questions in the whole history of thought and lies right at the center of the study of cognition. The particular form it takes within the picture we have so far is: How is logical form interpreted? A consideration that constrains the range of possible answers to these questions is that our knowledge of meaning enables us to interpret an indefinite number of sentences, including ones we have never encountered before. To explain this we must assume, it seems, that the interpretation procedure is compositional (see COMPOSITIONALITY). Given the syntactic structure to be interpreted, we start out by retrieving the meaning of words (or morphemes). Because the core of the lexicon is finite, we can memorize and store the meaning of the lexical entries. Then each mode of composing words together into phrases (i.e., each configuration in a syntactic analysis tree) corresponds to a mode of composing meanings. Thus, cycling through the syntactic structure we arrive eventually at the meaning of the sentence. In general, meanings of complex structures are composed by putting together word (or morpheme) meanings through a finite set of semantic operations that are systematically linked to syntactic configurations. This accounts, in principle, for our capacity to understand a potential infinity of sentences, in spite of the limits of our cognitive capacities.
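
As a purely illustrative sketch (a toy fragment of my own, not a proposal from the article), the following code pairs a tiny lexicon with one semantic operation per syntactic configuration and computes the meaning of a sentence by recursion on its tree:

```python
# Toy compositional fragment (illustrative assumptions only).
# Word meanings: names denote individuals, intransitive verbs denote sets.
lexicon = {
    "Pavarotti": "pavarotti",
    "Domingo": "domingo",
    "sings": {"pavarotti"},          # the set of singers in this toy situation
}

def interpret(tree):
    """Compute the meaning of a tree bottom-up: each way of combining
    constituents corresponds to a fixed semantic operation."""
    if isinstance(tree, str):                      # lexical item
        return lexicon[tree]
    if tree[0] == "S":                             # [S NP VP]: predication
        _, np, vp = tree
        return interpret(np) in interpret(vp)      # True/False
    if tree[0] == "or":                            # [S1 or S2]: disjunction
        _, s1, s2 = tree
        return interpret(s1) or interpret(s2)
    raise ValueError(f"unknown configuration: {tree}")

print(interpret(("S", "Pavarotti", "sings")))                          # True
print(interpret(("or", ("S", "Domingo", "sings"),
                       ("S", "Pavarotti", "sings"))))                  # True
```

However small, the fragment has the crucial property discussed above: adding a new name or verb to the lexicon immediately yields interpretations for indefinitely many new sentences, with no new interpretive machinery.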

Figuring out what operations we use for putting together word meanings is one of the main tasks of SEMANTICS. To address it, one must say what the output of such operations is. For example, what is it that we get when we compose the meaning of the NP "Pavarotti" with the meaning of the VP "sings 'La Boheme' well"? More generally, what is the meaning of complex phrases and, in particular, what is the meaning of clauses? Although there is disagreement here (as on other important topics) on the ultimate correct answer, there is agreement on what it is that such an answer must afford us. In particular, to have the information that Pavarotti sings "La Boheme" well is to have also the following kind of information:

(24)
a. Someone sings "La Boheme" well.
b. Not everyone sings "La Boheme" poorly.
c. It is not the case that nobody sings "La Boheme" well.
Barring performance errors or specific pathologies, we do not expect to find a competent speaker of English who sincerely affirms that Pavarotti sings "La Boheme" well and simultaneously denies that someone does (or denies any of the sentences in [24]). So sentence meaning must be something in virtue of which we can compute how the information associated with the sentence in question is related to the information of other sentences. Our knowledge of sentence meaning enables us to place sentences within a complex network of semantic relationships with other sentences.

The relation between a sentence like "Pavarotti sings well" and "someone sings well" (or any of the sentences in [24]) is called "entailment." Its standard definition involves the concept of truth: a sentence A entails a sentence B if and only if whenever A is true, B must also be true. This means that if we understand under what conditions a sentence is true, we also understand what its entailments are. Considerations such as these have led to a program of semantic analysis based on truth conditions. The task of the semantic component of grammar is viewed as that of recursively spelling out the truth conditions of sentences (via their logical form). The truth conditions of simple sentences like "Pavarotti sings" are given in terms of the reference of the words involved (cf. REFERENCE, THEORIES OF). Thus "Pavarotti sings" is true (at a certain moment t) if Pavarotti is in fact the agent of an action of singing (at t). Truth conditions of complex sentences (like "Pavarotti sings or Domingo sings") involve figuring out the contributions to truth conditions of words like "or." According to this program, giving the semantics of the logical form of natural language sentences is closely related to the way we figure out the semantics of any logical system.
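
The definition just given lends itself to a small illustrative check (my own toy construction, with made-up names): model a "situation" as the set of individuals who sing well, and test whether one sentence is true in every situation in which another is true:

```python
from itertools import combinations

# Situations: every possible set of good singers drawn from a toy domain.
domain = ["pavarotti", "domingo", "carreras"]
situations = [set(c) for r in range(len(domain) + 1)
              for c in combinations(domain, r)]

def pavarotti_sings_well(good_singers):        # "Pavarotti sings well"
    return "pavarotti" in good_singers

def someone_sings_well(good_singers):          # "Someone sings well"
    return len(good_singers) > 0

def entails(a, b):
    """A entails B iff B is true in every situation in which A is true."""
    return all(b(s) for s in situations if a(s))

print(entails(pavarotti_sings_well, someone_sings_well))   # True
print(entails(someone_sings_well, pavarotti_sings_well))   # False
```

The sketch only enumerates toy situations, but it captures the intended logic: knowing a sentence's truth conditions is what lets us compute its entailments.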

Entailment, though not the only kind of important semantic relation, is certainly at the heart of a net of key phenomena. Consider for example the following pair:

(25)
a. At least two students who read a book on linguistics by Chomsky were in the audience.
b. At least two students who read a book by Chomsky were in the audience.
Clearly, (25a) entails (25b). It cannot be the case that (25a) is true while simultaneously (25b) is false. We simply know this a priori. And it is perfectly general: if "at least two As B" is the case and if the Cs form a superset of the As (as the books by Chomsky are a superset of the books on linguistics by Chomsky), then "at least two Cs B" must also be the case. This must be part of what "at least two" means. For "at most two" the opposite is the case:
(26)
a. At most two students who read a book on linguistics by Chomsky were in the audience.
b. At most two students who read a book by Chomsky were in the audience.
Here, (26a) does not entail (26b). It can well be the case that no more than two students read a book on linguistics by Chomsky, but more than two read books (on, say, politics) by Chomsky. What happens is that (26b) entails (26a). That is, if (26b) is the case, then (26a) cannot be false. Now there must be something in our head that enables us to converge on these judgments. That something must be constitutive of our knowledge of the meaning of the sentences in (25) and (26). Notice that our entailment judgment need not be immediate. To see that in fact (26b) entails (26a) requires some reflection. Yet any normal speaker of English will eventually converge in judging that in any situation in which (26b) is true, (26a) has also got to be.

The relevance of entailment for natural language is one of the main discoveries of modern semantics. I will illustrate it in what follows with one famous example, having to do with the distributional properties of words like "any" (cf. Ladusaw 1979, 1992 and references therein). A word like "any" has two main uses. The first is exemplified in (27a):

(27)
a. You may pick any apple.
b. A: Can I talk to John or is he busy with students now?
c. B: No, wait. *He is talking to any student.
c'. B: No, wait. He is talking to every student.
c''. B: Go ahead. He isn't talking to any student right now.
The use exemplified by (27a) is called free choice "any." It has a universal interpretation: sentence (27a) says that for every apple x, you are allowed to pick x. This kind of "any" seems to require a special modality of some kind (see e.g., Dayal 1998 and references therein). Such a requirement is brought out by the strangeness of sentences like (27c) (the asterisk indicates deviance), which, in the context of (27b), clearly describes an ongoing happening with no special modality attached. Free choice "any" seems incompatible with a plain descriptive mode (and contrasts in this with "every"; cf. [27c']). The other use of "any" is illustrated by (27c''). Even though this sentence, understood as a reply to (27b), reports on an ongoing happening, it is perfectly grammatical. What seems to play a crucial role is the presence of negation. Nonfree choice "any" seems to require a negative context of some kind and is therefore called a negative polarity item. It is part of a family of expressions that includes, for example, things like "ever" or "give a damn":
(28)
a. *John gives a damn about linguistics.
b. John doesn't give a damn about linguistics.
c. *For a long time John ever ate chicken.
d. For a long time, John didn't ever eat chicken.
In English the free choice and the negative polarity senses of "any" are expressed by the same morphemes. But in many languages (e.g., most Romance languages) they are expressed by different words (for example, in Italian free choice "any" translates as "qualunque," and negative polarity "any" translates as "alcuno"). Thus, while the two senses might well be related, it is useful to keep them apart in investigating the behavior of "any." In what follows, we will concentrate on negative polarity "any" (and thus the reader is asked to abstract away from imagining the following examples in contexts that would make the free choice interpretation possible).

The main puzzle in the behavior of words like "any" is understanding what exactly constitutes a "negative" context. Consider for example the following set of sentences:

(29)
a. *Yesterday John read any book.
b. Yesterday John didn't read any book.
c. *A student who read any book by Chomsky will want to miss his talk.
d. No student who read any book by Chomsky will want to miss his talk.
In cases such as these, we can rely on morphology: we actually see there the negative morpheme "no" or some of its morphological derivatives. But what about the following cases?
(30)
a. *At least two students who read any book by Chomsky were in the audience.
b. At most two students who read any book by Chomsky were in the audience.
In (30b), where "any" is acceptable, there is no negative morpheme or morphological derivative thereof. This might prompt us to look for a different way of defining the notion of negative context, maybe a semantic one. Here is a possibility: a logical property of negation is that of licensing entailments from sets to their subsets. Consider, for example, the set of days on which John read a book by Chomsky. It must be a subset of the set of days on which he read a book. This is reflected in the fact that (31a) entails (31b):
(31)
a. It is not the case that yesterday John read a book.
b. It is not the case that yesterday John read a book by Chomsky.
In (31) the entailment goes from a set (the set of days on which John read a book) to one of its subsets (the set of days on which John read a book by Chomsky). Now this seems to be precisely what sentential negation, negative determiners like "no," and determiners like "at most n" have in common: they all license inferences from sets to subsets thereof. We have already seen that "at most" has precisely this property. To test whether our hypothesis is indeed correct and fully general, we should find something seemingly utterly "non-negative," which, however, has the property of licensing entailments from sets to subsets. The determiner "every" gives us what we need. Such a determiner does not appear to be in any reasonable sense "negative," yet, within a noun phrase headed by "every," the entailment clearly goes from sets to subsets:
(32)
a. Every employee who smokes will be terminated.
b. Every employee who smokes cigars will be terminated.
If (32a) is true, then (32b) must also be. And the set of cigar smokers is clearly a subset of the set of smokers. If "any" wants to be in an environment with these entailment properties, then it should be grammatical within an NP headed by "every." This is indeed so:
(33)
Every student who read any book by Chomsky will want to come to his talk.
So the principle governing the distribution of "any" seems to be:
(34)
"any" must occur in a context that licenses entailments from sets to their subsets.
Notice that within the VP in sentences like (32), the entailment to subsets does not hold.
(35)
a. Every employee smokes.
b. Every employee smokes cigars.
Sentence (35a) does not entail sentence (35b); in fact the opposite is the case. And sure enough, within the VP "any" is not licensed (I also give a sentence with "at most n" for contrast):
(36)
a. *Every student came to any talk by Chomsky.
b. At most two students came to any talk by Chomsky.

Surely no one explicitly taught us these facts. No one taught us that "any" is acceptable within an NP headed by "every," but not within a VP of which an "every"-headed NP is the subject. Yet we come to have convergent intuitions on these matters. Again, something in our mental endowment must be responsible for such judgments. What is peculiar to the case at hand is that the overt distribution of a class of morphemes like "any" appears to be sensitive to the entailment properties of their context. In particular, it appears to be sensitive to a specific logical property, that of licensing inferences from sets to subsets, which "no," "at most n," and "every" share with sentential negation. It is worth noting that most languages have negative polarity items, and their properties tend to be the same as those of "any," with minimal variations (corresponding to degrees of "strength" of negativity). This illustrates how there are specific architectural features of grammar that cannot be accounted for without a semantic theory of entailment for natural language. And it is difficult to see how to build such a theory without resorting to a compositional assignment of truth conditions to syntactic structures (or something that enables one to derive the same effects -- cf. DYNAMIC SEMANTICS). The case of negative polarity is by no means isolated. Many other phenomena could be used to illustrate this point (e.g., FOCUS; TENSE AND ASPECT). But the illustration just given will have to suffice for our present purposes. It is an old idea that we understand each other because our language, in spite of its VAGUENESS, has a logic. Now this idea is no longer just an intriguing hypothesis. The question on the table is no longer whether this is true. The question is what the exact syntactic and semantic properties of this logic are.
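
Principle (34) can be given a small computational illustration. The sketch below is my own construction (the determiner denotations are the standard set-theoretic ones, but the coding choices and names are illustrative assumptions): it tests, by brute force over a toy universe, whether a determiner's restrictor position licenses inferences from sets to subsets, and thereby predicts where negative polarity "any" should be acceptable:

```python
from itertools import combinations

domain = frozenset(range(4))                     # a small toy universe

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(sorted(s), r)]

# Determiner meanings as relations between a restrictor set A and a scope set B.
determiners = {
    "no":           lambda A, B: len(A & B) == 0,
    "every":        lambda A, B: A <= B,
    "at most two":  lambda A, B: len(A & B) <= 2,
    "at least two": lambda A, B: len(A & B) >= 2,
}

def restrictor_licenses_subsets(det):
    """True iff det(A, B) entails det(A', B) for every subset A' of A."""
    return all(
        (not det(A, B)) or det(A2, B)
        for A in subsets(domain)
        for B in subsets(domain)
        for A2 in subsets(A)
    )

for name, det in determiners.items():
    ok = restrictor_licenses_subsets(det)
    print(f"restrictor of '{name}' licenses set-to-subset inferences: {ok}")
# no / every / at most two -> True  ("any" predicted acceptable; cf. (29d), (33), (30b))
# at least two             -> False ("any" predicted out; cf. (30a))
```

The same style of test, applied to the scope (VP) position instead of the restrictor, would distinguish (36a) from (36b) in the way described above.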

3  Language Use

Ultimately, the goal of a theory of language is to explain how language is used in concrete communicative situations. So far we have formulated the hypothesis that at the basis of linguistic behavior there is a competence constituted by blocks of rules or systems of principles, responsible for sound structure, morphological structure, and so on. Each block constitutes a major module of our linguistic competence, which can in turn be articulated into further submodules. These rule systems are then put to use by the speakers in speech acts. In doing so, the linguistic systems interact in complex ways with other aspects of our cognitive apparatus as well as with features of the environment. We now turn to a consideration of these dimensions.

3.1 Language in Context

The study of the interaction of grammar with the CONTEXT of use is called PRAGMATICS. Pragmatics looks at a sentence within both the extralinguistic situation and the DISCOURSE of which it is a part. For example, one aspect of pragmatics is the study of INDEXICALS AND DEMONSTRATIVES (like "I," "here," "now," etc.), whose meaning is fixed by the grammar but whose reference varies with the context. Another important area is the study of PRESUPPOSITION, that is, what is taken for granted in uttering a sentence. Consider the difference between (37a) and (37b):

(37)
a. John ate the cake.
b. It is John that ate the cake.
How do they differ? Sentence (37a) entails that someone ate the cake. Sentence (37b), instead, takes it for granted that someone did and asserts that that someone is John. Thus, there are grammatical constructs, such as clefting, exemplified in (37b), that appear to be specially linked to presupposition. Just as we have systematic intuitions about entailments, we have them about presuppositions and about how they are passed from simple sentences to more complex ones.

Yet another aspect of pragmatics is the study of how we virtually always go beyond what is literally said. In ordinary conversational exchanges, one and the same sentence, for example, "the dog is outside," can acquire the illocutionary force of a command ("go get it"), of a request ("can you bring it in?"), of an insult ("you are a servant; do your duty"), or can assume all sorts of metaphorical or ironical colorings, and so on, depending on what the situation is, what is known to the illocutionary agents, and so on. A breakthrough in the study of these phenomena is due to the work of P. GRICE. Grice put on a solid footing the commonsense distinction between literal meaning, that is, the interpretation we assign to sentences in virtue of rules of grammar and linguistic conventions, and what is conveyed, or implicated, as Grice puts it, beyond the literal meaning. Grice developed a theory of IMPLICATURE based on the idea that in our use of grammar we are guided by certain general conversational norms to which we spontaneously tend to conform. Such norms instruct us to be cooperative, truthful, orderly, and relevant (cf. RELEVANCE AND RELEVANCE THEORY). These are norms that can be ignored or even flouted. By exploiting both the norms and their violations systematically, thanks to the interaction of literal meaning and mutually shared information present in the context, the speaker can put the hearer in the position of inferring his communicative intentions (i.e., what is implicated). Some aspects of pragmatics (e.g., the study of deixis or presupposition) appear to involve grammar-specific rule systems; others, such as implicature, involve more general cognitive abilities. All of them appear to be rule governed.

3.2 Language in Flux

Use of language is an important factor in language variation. Certain forms of variation tend to be a constant and relatively stable part of our behavior. We all master a number of registers and styles; often a plurality of grammatical norms is present in the same speaker, as in the case of bilinguals. Such coexisting norms affect one another in interesting ways (see CODESWITCHING). These phenomena, as well as pragmatically induced deviations from a given grammatical norm, can also result in actual changes in the prevailing grammar. Speakers' creative uses can bring about innovations that become part of the grammar. On a larger scale, languages enter into contact through a variety of historical events and social dynamics, again resulting in changes. Some such changes come about in a relatively abrupt manner and involve many aspects of grammar simultaneously. A case often quoted in this connection is the Great Vowel Shift, which radically changed the vowel space of English toward the end of the Middle English period. The important point is that the dynamics of linguistic change seem to take place within the boundaries of Universal Grammar as charted through synchronic theory (cf. LINGUISTIC UNIVERSALS AND UNIVERSAL GRAMMAR). In fact, it was precisely the discovery of the regularity of change (e.g., Grimm's laws) that led to the discovery of linguistic structure.

A particularly interesting vantage point on linguistic change is provided by the study of CREOLES (Bickerton 1975, 1981). Unlike most languages, which evolve from a common ancestor (sometimes a hypothesized protolanguage, as in the case of the Indo-European family), Creoles arise in communities of speakers that do not share a native language. A typical situation is that of slaves or workers brought together by a dominating group, who develop an impoverished quasi-language (a pidgin) in order to communicate with one another. Such quasi-languages typically have a small vocabulary drawn from several sources (the language of the dominating group or the native languages of the speakers), no fixed word order, and no inflection. The process of creolization takes place when such a language starts having its own native speakers, that is, speakers born into the relevant groups who start using the quasi-language of their parents as a native language. What typically happens is that, all of a sudden, the characteristics of a full-blown natural language come into being (morphological markers for agreement, case endings, modals, tense, grammaticized strategies for focusing, etc.). This process, which in a few lucky cases has been documented, takes place very rapidly, perhaps even within a single generation. This has led Bickerton to formulate an extremely interesting hypothesis, that of a "bioprogram," that is, a species-specific acquisition device, part of our genetic endowment, that supplies the necessary grammatical apparatus even when such an apparatus is not present in the input. This raises the question of how such a bioprogram evolved in our species, a topic that has been at the center of much speculation (see EVOLUTION OF LANGUAGE). A much debated issue is the extent to which language has evolved through natural selection, in the way complex organs like the eye have. Although not much is yet known or agreed upon on this score, progress in the understanding of our cognitive abilities and of the neurological basis of language is constant and is likely to lead to a better understanding of language evolution (also through comparisons with the communication systems of other species; see ANIMAL COMMUNICATION; PRIMATE LANGUAGE).


3.3 Language in the Mind

The cognitive turn in linguistics has brought together, in a particularly fruitful manner, the study of grammar with the study of the psychological processes underlying it, on the one hand, and with the study of other forms of cognition, on the other. PSYCHOLINGUISTICS deals with how language is acquired (cf. LANGUAGE ACQUISITION) and processed in its everyday uses (cf. NATURAL LANGUAGE PROCESSING; SENTENCE PROCESSING). It also deals with language pathology, such as APHASIA and various kinds of developmental impairments (see LANGUAGE IMPAIRMENT, DEVELOPMENTAL).

With regard to acquisition, the available evidence points consistently in one direction. The kind of implicit knowledge at the basis of our linguistic behavior appears to be fairly specialized. Among all the possible ways to communicate and all the possible structures that a system of signs can have, those that are actualized in the languages of the world appear to be fairly specific. Languages exploit only some of the logically conceivable (and humanly possible) sound patterns, morphological markings, and syntactic and semantic devices. Here we have been able to give only a taste of how remarkable the properties of natural languages are. And it is not obvious how such properties, so peculiar among possible semiotic systems, can be accounted for in terms of, say, pragmatic effectiveness, social conventions, or cultural inventiveness (cf. SEMIOTICS AND COGNITION). In spite of this, the child masters the structures of her language without apparent effort or explicit training, and on the basis of an often very limited and impoverished input. This is strikingly so in the case of creolization, but it applies to a significant degree also to "normal" learning. An extensive literature documents this claim in all the relevant domains (see WORD MEANING, ACQUISITION OF; PHONOLOGY, ACQUISITION OF; SYNTAX, ACQUISITION OF; SEMANTICS, ACQUISITION OF). It appears that language "grows into the child," to put it in Chomsky's terms; or that the child "invents" it, to put it in Pinker's words. These considerations could not but set the debate on NATIVISM on a new and exciting footing. At the center of intense investigation is the hypothesis that a specialized form of knowledge, Universal Grammar, is part of the genetic endowment of our species, and thus constitutes the initial state for the language learner. The key to learning, then, consists in fixing what Universal Grammar leaves open (see PARAMETER SETTING APPROACHES TO ACQUISITION, CREOLIZATION AND DIACHRONY). On the one hand, this involves setting the parameters of variation, the "switches" made available by Universal Grammar. On the other hand, it also involves exploiting, for various purposes such as segmenting the stream of sound into words, generalized statistical abilities that we also seem to have (see Saffran, Aslin, and Newport 1996). The interesting problem is determining which device we use in which domain of LEARNING. The empirical investigation of child language proceeds in interaction with the study of the formal conditions under which acquisition is possible, which has also proven to be a useful tool in investigating these issues (cf. ACQUISITION, FORMAL THEORIES OF).
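The statistical side of this picture can be made concrete with a small sketch, a toy demonstration in the spirit of, but not reproducing, Saffran, Aslin, and Newport's experiments; the three made-up "words" and the cutoff value are invented for illustration. Adjacent syllables within a word follow one another with high probability, whereas transitions across word boundaries are much less predictable; tracking these transitional probabilities is enough to segment an unbroken syllable stream.

```python
import random
from collections import Counter

random.seed(0)
words = ["bidaku", "padoti", "golabu"]           # three made-up trisyllabic "words"
sequence = random.choices(words, k=200)          # an unbroken, randomly ordered stream
stream = [w[i:i + 2] for w in sequence for i in range(0, 6, 2)]  # split into syllables

# Transitional probability: P(next syllable | current syllable).
pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])
tp = {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Posit a word boundary wherever the transitional probability dips:
# within-word transitions are 1.0 here, across-word transitions roughly 1/3.
THRESHOLD = 0.5   # arbitrary illustrative cutoff
segmented, current = [], stream[0]
for a, b in zip(stream, stream[1:]):
    if tp[(a, b)] < THRESHOLD:
        segmented.append(current)
        current = b
    else:
        current += b
segmented.append(current)

print(sorted(set(segmented)))   # the three made-up "words" emerge as units
```

Infants, of course, do not count explicitly; the point is simply that the information needed for this kind of segmentation is present in the signal itself, alongside whatever Universal Grammar contributes.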

Turning now to processing: planning a sentence, building it up, and uttering it require a remarkable amount of cognitive work (see LANGUAGE PRODUCTION). The same applies to going from the continuous stream of speech sounds (or, in the case of sign languages, gestures) to syntactic structure and from there to meaning (cf. PROSODY AND INTONATION, PROCESSING ISSUES; SPEECH PERCEPTION; SPOKEN WORD RECOGNITION; VISUAL WORD RECOGNITION). A measure of the difficulty of this task is how partial our progress has been in programming machines to accomplish related tasks, such as going from sounds to written words or analyzing an actual text, even on a limited scale (cf. COMPUTATIONAL LINGUISTICS; COMPUTATIONAL LEXICONS). The actual use of sentences in an integrated discourse involves an extremely complex set of phenomena. Although we are far from understanding it completely, significant discoveries have been made in recent decades, thanks also to advances in linguistic theory. I will illustrate this with one well-known issue in sentence processing.

As is well known, the recursive character of natural language syntax enables us to construct sentences of indefinite length and complexity:

(38)
a. The boy saw the dog.
b. The boy saw the dog that bit the cat.
c. The boy saw the dog that bit the cat that ate the mouse.
d. The boy saw the dog that bit the cat that ate the mouse that stole the cheese.
In sentence (38b), the object is modified by a relative clause. In (38c) the object of the first relative clause is modified by another relative clause. And we can keep doing that. The results are not particularly hard to process. Now, subjects can also be modified by relative clauses:
(39)
The boy that the teacher called on saw the dog.
But try now modifying the subject of the relative clause. Here is what we get:
(40)
The boy that the teacher that the principal hates called on saw the dog.
Sentence (40) is hard to grasp. It is formed through the same grammatical devices we used in building (39). Yet the decrease in intelligibility from (39) to (40) is quite dramatic. Only after taking the time to look at it carefully can we see that (40) makes sense. Adding a further layer of modification to the most embedded relative clause in (40) would make it virtually impossible to process. So there is an asymmetry between adding modifiers to the right (in English, the recursive side) and adding them to the center of a clause (center embedding). The phenomenon is very general. What makes it particularly interesting is that the oddity, if it can be called such, of sentences like (40) does not seem to be due to the violation of any known grammatical constraint. It must instead be linked to how we parse sentences, that is, to how we assign them a syntactic analysis as a prerequisite to semantic interpretation. Many theories of sentence processing address this issue in interesting ways. The phenomenon of center embedding illustrates well how related but autonomous devices (in this case, the design of the grammar vis-à-vis the architecture of the parser) interact in determining our behavior.
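The contrast between (38) and (40) can be made concrete with a rough measure of parsing load. This is a minimal sketch under invented assumptions (the word lists and the counting scheme are illustrative, not a model from the sentence-processing literature): scan the sentence word by word and track how many nouns have appeared whose verb has not yet arrived. Right embedding discharges each dependency before the next one is opened; center embedding forces all of them to be held open at once.

```python
# Toy measure of parsing load: the running maximum of "nouns seen minus verbs
# seen" while scanning left to right -- a crude proxy for how many incomplete
# noun-verb dependencies must be held in memory at the same time.

NOUNS = {"boy", "teacher", "principal", "dean", "dog", "cat", "mouse", "cheese"}
VERBS = {"saw", "bit", "ate", "stole", "hates", "called", "fired"}

def max_open_dependencies(sentence: str) -> int:
    open_deps, peak = 0, 0
    for word in sentence.split():
        if word in NOUNS:
            open_deps += 1
        elif word in VERBS:
            open_deps -= 1
        peak = max(peak, open_deps)
    return peak

def right_embedded(depth: int) -> str:
    """(38)-style: each relative clause modifies the object of the previous one."""
    objects = ["the dog", "the cat", "the mouse", "the cheese"]
    verbs = ["bit", "ate", "stole"]
    s = "the boy saw " + objects[0]
    for i in range(depth):
        s += f" that {verbs[i]} {objects[i + 1]}"
    return s

def center_embedded(depth: int) -> str:
    """(39)/(40)-style: each relative clause modifies the preceding subject."""
    subjects = ["the boy", "the teacher", "the principal", "the dean"]
    verbs = ["saw the dog", "called on", "hates", "fired"]  # verbs[i] goes with subjects[i]
    opening = " that ".join(subjects[:depth + 1])
    closing = " ".join(reversed(verbs[:depth + 1]))
    return opening + " " + closing

for d in range(3):
    print(d, max_open_dependencies(right_embedded(d)),
             max_open_dependencies(center_embedded(d)))
# Right embedding keeps the load flat (1 at every depth shown), while center
# embedding's load grows with depth (1, 2, 3), matching the felt difficulty of (40).
```

On this crude measure, each further layer of center embedding pushes the load one step higher, in line with the observation that one more relative clause inside (40) makes it virtually unparsable, even though the grammar itself licenses the result.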


4  Concluding Remarks

Language is important for many fairly obvious and widely known reasons. It can be put to an enormous range of uses; it is the main tool through which our thought gets expressed and our modes of reasoning become manifest. Its pathologies reveal important aspects of the functioning of the brain (cf. LANGUAGE, NEURAL BASIS OF); its use in HUMAN-COMPUTER INTERACTION is ever more of a necessity (cf. SPEECH RECOGNITION IN MACHINES; SPEECH SYNTHESIS). These are all well-established motivations for studying it. Yet one of the most interesting things about language is in a way independent of them. What makes the study of language particularly exciting is the identification of regularities and the discovery of the laws that determine them. Often unexpectedly, we detect in our behavior, in our linguistic judgments, or through experimentation, a pattern, a regularity. Typically, such regularities present themselves as intricate; they concern exotic data hidden in remote corners of our linguistic practice. Why do we have such solid intuitions about such exotic aspects of, say, the functioning of pronouns or the distribution of negative polarity items? How can we have acquired such intuitions? With luck, we discover that at the basis of these intricacies there are some relatively simple (if fairly abstract) principles. Because speaking is a cognitive ability, whatever principles are responsible for the relevant pattern of behavior must be somehow implemented or realized in our heads. Hence they must grow in us, be subject to pathologies, and so on. The cognitive turn in linguistics, through the advent of the generative paradigm, has not thrown away traditional linguistic inquiry. Linguists still collect and classify facts about the languages of the world, but in a new spirit (with arguably fairly old roots) -- that of seeking out the mental mechanisms responsible for linguistic facts. Hypotheses about the nature of such mechanisms in turn lead to new empirical discoveries, make us see things we had previously missed, and so on through a new cycle. In full awareness of the limits of our current knowledge and of the disputes that cross the field, it seems impossible to deny that progress over the last 40 years has been quite remarkable. For one thing, we simply know more facts (facts not documented in traditional grammars) about more languages. For another, the degree of theoretical sophistication is high, and I believe higher than it ever was. This is so not only because of the degree of formalization (which, in a field traditionally so prone to bad philosophizing, has its importance), but mainly because of the interesting ways in which arrays of complex properties get reduced to ultimately simple axioms. Finally, the cross-disciplinary interaction on language is also a measure of the level the field has reached. Abstract modeling of linguistic structure leads quite directly to psychological experimentation and to neurophysiological study, and vice versa (see, e.g., GRAMMAR, NEURAL BASIS OF; LEXICON, NEURAL BASIS OF; BILINGUALISM AND THE BRAIN). As Chomsky puts it, language appears to be the first form of higher cognitive capacity that is beginning to yield. We have barely begun to reap the fruits of this fact for the study of cognition in general.


References

Akmajian, A., R. Demers, A. Farmer, and R. Harnish. (1990). Linguistics. An Introduction to Language and Communication. 4th ed. Cambridge, MA: MIT Press.

Baker, M. (1996). The Polysynthesis Parameter. Oxford: Oxford University Press.

Bickerton, D. (1975). The Dynamics of a Creole System. Cambridge: Cambridge University Press.

Bickerton, D. (1981). Roots of Language. Ann Arbor, MI: Karoma.

Chien, Y.-C., and K. Wexler. (1990). Children's knowledge of locality conditions in binding as evidence for the modularity of syntax and pragmatics. Language Acquisition 1:225-295.

Crain, S., and C. McKee. (1985). The acquisition of structural restrictions on anaphora. In S. Berman, J. Choe, and J. McDonough, Eds., Proceedings of the Eastern States Conference on Linguistics. Ithaca, NY: Cornell University Linguistic Publications.

Clark, H., and E. Clark. (1977). The Psychology of Language. New York: Harcourt Brace Jovanovich.

Cheng, L. (1991). On the Typology of Wh-Questions. Ph.D. diss., MIT. Distributed by MIT Working Papers in Linguistics.

Chung, S., and J. McCloskey. (1987). Government, barriers and small clauses in Modern Irish. Linguistic Inquiry 18:173-238.

Dayal, V. (1998). Any as inherent modal. Linguistics and Philosophy.

Fodor, J. A., T. Bever, and M. Garrett. (1974). The Psychology of Language. New York: McGraw-Hill.

Grodzinsky, Y., and T. Reinhart. (1993). The innateness of binding and coreference. Linguistic Inquiry 24:69-101.

Huang, J. (1982). Grammatical Relations in Chinese. Ph.D. diss., MIT. Distributed by MIT Working Papers in Linguistics.

Ladusaw, W. (1979). Polarity Sensitivity as Inherent Scope Relation. Ph.D. diss., University of Texas, Austin. Distributed by IULC, Bloomington, Indiana (1980).

Ladusaw, W. (1992). Expressing negation. SALT II. Ithaca, NY: Cornell Linguistic Circle.

Pinker, S. (1994). The Language Instinct. New York: William Morrow.

Saffran, J., R. Aslin, and E. Newport. (1996). Statistical learning by 8-month-old infants. Science 274:1926-1928.

Further Readings

Aronoff, M. (1976). Word Formation in Generative Grammar. Cambridge, MA: MIT Press.

Atkinson, M. (1992). Children's Syntax. Oxford: Blackwell.

Brent, M. R. (1997). Computational Approaches to Language Acquisition. Cambridge, MA: MIT Press.

Chierchia, G., and S. McConnell-Ginet. (1990). Meaning and Grammar. An Introduction to Semantics. Cambridge, MA: MIT Press.

Chomsky, N. (1981). Lectures on Government and Binding. Dordrecht: Foris.

Chomsky, N. (1987). Language and Problems of Knowledge: The Managua Lectures. Cambridge, MA: MIT Press.

Chomsky, N. (1995). The Minimalist Program. Cambridge, MA: MIT Press.

Chomsky, N., and M. Halle. (1968). The Sound Pattern of English. New York: Harper and Row.

Elman, J. L., E. A. Bates, M. H. Johnson, A. Karmiloff-Smith, D. Parisi, and K. Plunkett. (1996). Rethinking Innateness: A Connectionist Perspective on Development. Cambridge, MA: MIT Press.

Gleitman, L., and B. Landau, Eds. (1994). The Acquisition of the Lexicon. Cambridge, MA: MIT Press.

Haegeman, L. (1990). An Introduction to Government and Binding Theory. 2nd ed. Oxford: Blackwell.

Hauser, M. D. (1996). The Evolution of Communication. Cambridge, MA: MIT Press.

Jusczyk, P. W. (1997). The Discovery of Spoken Language. Cambridge, MA: MIT Press.

Kenstowicz, M., and C. Kisseberth. (1979). Generative Phonology: Description and Theory. New York: Academic Press.

Ladefoged, P. (1982). A Course in Phonetics. 2nd ed. New York: Harcourt Brace Jovanovich.

Levinson, S. (1983). Pragmatics. Cambridge: Cambridge University Press.

Lightfoot, D. (1991). How to Set Parameters: Arguments from Language Change. Cambridge, MA: MIT Press.

Ludlow, P., Ed. (1997). Readings in the Philosophy of Language. Cambridge, MA: MIT Press.

Osherson, D., and H. Lasnik. (1981). Language: An Invitation to Cognitive Science. Cambridge, MA: MIT Press.

Stevens, K. N. (1998). Acoustic Phonetics. Cambridge, MA: MIT Press.