Language and Mind:

Current Thoughts on Ancient Problems

(Part II)*


Noam Chomsky


Yesterday, I discussed two basic questions about language, one internalist and the other externalist. The internalist question asks what kind of a system language is. The externalist question asks how language relates to other parts of the mind and to the external world, including problems of unification and of language use. The discussion kept to a very general level, trying to sort out the kinds of problems that arise and the ways it seems to make sense to deal with them. I would now like to look a little more closely at some current thinking about the internalist question.

To review the context, the study of language took a somewhat different path about 40 years ago as part of the so-called “cognitive revolution” of the 1950s, which revived and reshaped traditional questions and concerns about many topics, including language and its use and the significance of these matters for the study of the human mind. Earlier attempts to explore these questions had run up against conceptual barriers and limits of understanding. By mid-century, these had to some extent been overcome, making it possible to proceed in a more fruitful way. The basic problem was to find some way to resolve the tension between the conflicting demands of descriptive and explanatory adequacy. The research program that developed led finally to a picture of language that was a considerable departure from the long and rich tradition: the Principles-and-Parameters approach, which is based on the idea that the initial state of the language faculty consists of invariant principles and a finite array of choices as to how the whole system can function. A particular language is determined by making these choices in a specific way. We have at least the outlines of a genuine theory of language, which might be able to satisfy the conditions of descriptive and explanatory adequacy, and approach the logical problem of language acquisition in a constructive way.

Since this picture took form about 15 years ago, the major research effort has been directed to trying to discover and make explicit the principles and the parameters. Inquiry has extended very rapidly both in depth, in individual languages, and in scope, as similar ideas were applied to languages of a very broad typological range. The problems that remain are considerable, to put it mildly. The human mind/brain is perhaps the most complex object in the universe, and we barely begin to comprehend the ways it is constituted and functions. Within it, language seems to occupy a central place, and at least on the surface, the variety and complexity are daunting. Nevertheless, there has been a good deal of progress, enough so that it seems reasonable to consider some more far-reaching questions about the design of language, in particular, questions about optimality of design. I dropped the matter at this point yesterday, turning to other topics. Let us now return to it, and see where inquiry into these questions might lead.

We are now asking how well language is designed. How closely does language resemble what a superbly competent engineer might have constructed, given certain design specifications? To study the question, we have to say more about these specifications. Some are internal and general, having to do with conceptual naturalness and simplicity, notions that are hardly crystal clear but can be sharpened in many ways. Others are external and specific, having to do with the conditions imposed by the systems of the mind/brain with which the faculty of language interacts. I suggested that the answer to the question might turn out to be that language is very well designed, perhaps close to “perfect” in satisfying external conditions.

If there is any truth to this conclusion, it is rather surprising, for several reasons. First, languages have often been assumed to be such complex and defective objects as to be hardly worth studying from a stern theoretical perspective. They require reform or regimentation, or replacement by something quite different, if they are to serve some purpose other than the confused and intricate affairs of daily life. That is the leading idea that inspired traditional attempts to devise a universal perfect language, or on theological assumptions, to recover the original Adamic language; and something similar has been taken for granted in much modern work from Frege to the present. Second, one might not expect to find such design properties in biological systems, which evolve over long periods through incremental changes under complicated and accidental circumstances, making the best of difficult and murky contingencies.

Suppose nonetheless that we turn aside initial skepticism and try to formulate some reasonably clear questions about optimality of language design. The “minimalist program”, as it has come to be called, is an effort to examine such questions. It is too soon to offer a judgment about the project with any confidence. My own judgment is that early results are promising, but only time will tell.

Note that the minimalist program is a PROGRAM, not a theory, even less so than the Principles-and-Parameters approach. There are minimalist questions, but no specific minimalist answers. The answers are whatever is found by carrying out the program: perhaps that some of the questions have no interesting answers, while others are premature. There might be no interesting answers because human language is a case of what Nobel laureate Francois Jacob once called “bricolage”; evolution is an opportunist, an inventor that takes whatever materials are at hand and tinkers with them, introducing slight changes so that they might work a bit better than before.

This is, of course, intended only as a picturesque image. There are other factors to consider. Uncontroversially, evolution proceeds within a framework established by the laws of physics and chemistry and the properties of complex systems, about which very little is known. Within this physical channel, natural selection plays a role that may range from zero to quite substantial.

From the Big Bang to large molecules, design results from the operation of physical law; the properties of Helium or snowflakes, for example. The effects of selection begin to appear with more complex organic forms, though understanding declines as complexity increases, and one must be wary of what evolutionary biologists Richard Lewontin, Stuart Kauffman, and others, have called “Just So Stories” - stories about how things might have happened, or maybe not. Kauffman, for example, has argued that many of the properties of “the genomic regulatory system that constrains into useful behavior the patterns of gene activity” during the growth of organisms “are SPONTANEOUS, SELF-ORGANIZED features of complex control systems which required almost no selection at all”, suggesting that “we must rethink evolutionary biology” and look for “sources of order outside selection.” It is a rare evolutionary biologist who dismisses such ideas as unworthy of attention. Looking beyond, it is generally assumed that such phenomena as the polyhedral shells of viruses, or the appearance in organic forms of properties of a well-known arithmetical series called the Fibonacci series (“phyllotaxis”), probably fall together with snowflakes rather than the distribution of dark and light moths or the neck of a giraffe. Uncontroversially, for any case one studies it has to be determined how the physical channel constrains outcomes and what options it allows.

Furthermore, there are independent issues that have to be disentangled. What looks like wonderful design may well be a paradigm example of gradualism that is independent of the function in question. The ordinary use of language, for example, relies on bones of the middle ear that migrated from the jaws of reptiles. The process is currently believed to be the consequence of growth of the neocortex in mammals, and “sets true mammals apart from every other vertebrate” (SCIENCE, Dec. 1, 1995). An engineer would find that this “delicate sound-amplifying system” is superbly designed for language function, but Mother Nature did not have that in mind when the process began 160 million years ago, nor is there any known selectional effect of the takeover of the system for language use.

Human language lies well beyond the limits of serious understanding of evolutionary processes, though there are suggestive speculations. Let us add another. Suppose we make up a “Just So Story” with imagery derived from snowflakes rather than colors of moths and necks of giraffes, with design determined by natural law rather than bricolage through selection. Suppose that there was an ancient primate with the whole human mental architecture in place, but no language faculty. The creature shared our modes of perceptual organization, our beliefs and desires, our hopes and fears, insofar as these are not formed and mediated by language. Perhaps it had a “language of thought” in the sense of Jerry Fodor and others, but no way to form linguistic expressions associated with the thoughts that this LINGUA MENTIS makes available.

Suppose a mutation took place in the genetic instructions for the brain, which was then reorganized in accord with the laws of physics and chemistry to install a faculty of language. Suppose the new system was, furthermore, beautifully designed, a near-perfect solution to the conditions imposed by the general architecture of the mind-brain in which it is inserted, another illustration of how natural laws work out in wondrous ways; or if one prefers, an illustration of how the evolutionary tinkerer could satisfy complex design conditions with very simple tools.

To be clear, these are fables. Their only redeeming value is that they may not be more implausible than others, and might even turn out to have some elements of validity. The imagery serves its function if it helps us pose a problem that could turn out to be meaningful and even significant: basically, the problem that motivates the minimalist program, which explores the intuition that the outcome of the fable might be accurate in interesting ways.

Notice a certain resemblance to the logical problem of language acquisition, a reformulation of the condition of explanatory adequacy as a device that converts experience to a language, taken to be a state of a component of the brain. The operation is instantaneous, though the process plainly is not. The serious empirical question is how much distortion is introduced by the abstraction. Rather surprisingly, perhaps, it seems that little if any distortion is introduced: it is AS IF the language appears instantaneously, by selection of the options available in the initial state. Despite great variation in experience, outcomes seem to be remarkably similar, with shared interpretations, often of extreme delicacy, for linguistic expressions of kinds that have little resemblance to anything experienced. That is not what we would expect if the abstraction to instantaneous acquisition introduced severe distortions. Perhaps the conclusion reflects our ignorance, but the empirical evidence seems to support it. Independently of that, insofar as it has been possible to account for properties of individual languages in terms of the abstraction, we have further evidence that the abstraction does capture real properties of a complex reality.

The issues posed by the minimalist program are somewhat similar. Plainly, the faculty of language was not instantaneously inserted into a mind/brain with the rest of its architecture fully intact. But we are now asking how well it is designed on that counterfactual assumption. How much does the abstraction distort a vastly more complex reality? We can try to answer the question much as we do the analogous one about the logical problem of language acquisition.

To pursue the program we have to sharpen ideas considerably, and there are ways to proceed. The faculty of language is embedded within the broader architecture of the mind/brain. It interacts with other systems, which impose conditions that language must satisfy if it is to be usable at all. We might think of these as “legibility conditions”, called “bare output conditions” in the technical literature. The systems within which the language faculty is embedded must be able to “read” the expressions of the language and use them as “instructions” for thought and action. The sensorimotor systems, for example, have to be able to read the instructions having to do with sound. The articulatory and perceptual apparatus have a specific design that enables them to interpret certain properties, not others. These systems thus impose legibility conditions on the generative processes of the faculty of language, which must provide expressions with the proper “phonetic representation.”

The same is true of conceptual and other systems that make use of the resources of the faculty of language. They have their intrinsic properties, which require that the expressions generated by the language have certain kinds of “semantic representations”, not others.

We can therefore rephrase the initial question in a somewhat more explicit form. We now ask to what extent language is a “good solution” to the legibility conditions imposed by the external systems with which it interacts. If the external systems were perfectly understood, so that we knew exactly what the legibility conditions were, the problem we are raising would still require clarification; we would have to explain more clearly what we mean by “optimal design”, not a trivial matter, though not hopeless either. But life is never that easy. The external systems are not very well understood, and in fact, progress in understanding them goes hand-in-hand with progress in understanding the language system that interacts with them. So we face the daunting task of simultaneously setting the conditions of the problem and trying to satisfy them, with the conditions changing as we learn more about how to satisfy them. But that is what one expects in trying to understand the nature of a complex system. We therefore tentatively establish whatever ground seems reasonably firm, and try to proceed from there, knowing well that the ground is likely to shift.

The minimalist program requires that we subject conventional assumptions to careful scrutiny. The most venerable of these is that language has sound and meaning. In current terms, that translates to the thesis that the faculty of language engages other systems of the mind/brain at two “interface levels”, one related to sound, the other to meaning. A particular expression generated by the language contains a phonetic representation that is legible to the sensorimotor systems, and a semantic representation that is legible to conceptual and other systems of thought and action, and may consist just of these paired objects.

If this much is correct, we next have to ask just where the interface is located. On the sound side, it has to be determined to what extent, if any, sensorimotor systems are language-specific, hence within the faculty of language; there is considerable disagreement about the matter. On the meaning side, the questions have to do with the relations between the faculty of language and other cognitive systems - the relations between language and thought. On the sound side, the questions have been studied intensively with sophisticated technology for half a century, but the problems are hard, and understanding remains limited. On the meaning side, the questions are much more obscure. Far less is known about the language-external systems; much of the evidence about them is so closely linked to language that it is notoriously difficult to determine when it bears on language, when on other systems (insofar as they are distinct). And direct investigation of the kind possible for sensorimotor systems is in its infancy. Nonetheless, there is a huge amount of data about how expressions are used and understood in particular circumstances, enough so that natural language semantics is one of the liveliest areas of study of language, and we can make at least some plausible guesses about the nature of the interface level and the legibility conditions it must meet.

With some tentative assumptions about the interface, we can proceed to further questions. We ask how much of what we are attributing to the faculty of language is really motivated by empirical evidence, and how much is a kind of technology, adopted in order to present data in a convenient form while covering up gaps of understanding. Not infrequently, accounts that are offered in technical work turn out on investigation to be of roughly the order of complexity of what is to be explained, and involve assumptions that are not independently very well-grounded. That is not problematic as long as we do not mislead ourselves into thinking that useful and informative descriptions, which may provide stepping stones for further inquiry, are something more than that.

Such questions are always appropriate in principle, but often not worth posing in practice; they may be premature, because understanding is just too limited. Even in the hard sciences, in fact even mathematics, questions of this kind have commonly been put to the side. But the questions are nevertheless real, and with a more plausible concept of the general character of language at hand, perhaps worth exploring.

Let us turn to the question of optimality of language design: How good a solution is language to the general conditions imposed by the architecture of the mind/brain? This question too might be premature, but unlike the problem of distinguishing between principled assumptions and descriptive technology, it might have no answer at all: as I mentioned, there is no good reason to expect that biological systems will be well-designed in anything like this sense.

Let us tentatively assume that both of these questions are appropriate ones, in practice as well as principle. We now proceed to subject postulated principles of language to close scrutiny to see if they are empirically justified in terms of legibility conditions. I will mention a few examples, apologizing in advance for the use of some technical terminology, which I’ll try to keep to a minimum, but have no time here to explain in any satisfactory way.

One question is whether there are levels other than the interface levels: Are there levels “internal” to the language, in particular, the levels of deep and surface structure that have played a substantial role in modern work? The minimalist program seeks to show that everything that has been accounted for in terms of these levels has been misdescribed, and is as well or better understood in terms of legibility conditions at the interface: for those of you who know the technical literature, that means the projection principle, binding theory, Case theory, the chain condition, and so on.

We also try to show that the only computational operations are those that are unavoidable on the weakest assumptions about interface properties. One such assumption is that there are word-like units: the external systems have to be able to interpret such items as “man” and “tall.” Another is that these items are organized into larger expressions, such as “tall man.” A third is that the items have properties of sound and meaning: the word “man” in English begins with closure of the lips and is used to refer to persons, a subtle notion. The language therefore involves three kinds of elements: the properties of sound and meaning, called “features”; the items that are assembled from these properties, called “lexical items”; and the complex expressions constructed from these “atomic” units. It follows that the computational system that generates expressions has two basic operations: one assembles features into lexical items, the second forms larger syntactic objects out of those already constructed, beginning with lexical items.
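
Purely for concreteness, the three kinds of elements and the two basic operations might be sketched as follows. This is an illustrative rendering in Python, nothing more; the class and function names are invented here and are not part of any proposal in the text.

    # Toy rendering: features, lexical items assembled from features,
    # and complex expressions built from items already constructed.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Feature:
        name: str   # e.g. "begins-with-labial-closure", "refers-to-person"
        kind: str   # "phonetic" or "semantic" (formal features come later)

    @dataclass(frozen=True)
    class LexicalItem:
        label: str
        features: frozenset   # the features assembled into the item

    def assemble(label, features):
        """First basic operation: assemble features into a lexical item."""
        return LexicalItem(label, frozenset(features))

    def form_expression(x, y):
        """Second basic operation: form a larger syntactic object from two
        objects already constructed (sharpened as Merge later on)."""
        return (x, y)

    man = assemble("man", {Feature("begins-with-labial-closure", "phonetic"),
                           Feature("refers-to-person", "semantic")})
    tall = assemble("tall", {Feature("gradable-property", "semantic")})
    tall_man = form_expression(tall, man)   # the expression "tall man"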

We can think of the first operation as essentially a list of lexical items. In traditional terms, this list, called the lexicon, is the list of “exceptions”, arbitrary associations of sound and meaning and particular choices among the morphological properties made available by the faculty of language. I will keep here to what are traditionally called “inflectional features”, which indicate that nouns and verbs are plural or singular, that nouns have nominative or accusative case while verbs have tense and aspect, and so on. These inflectional features turn out to play a central role in computation.

Optimal design would introduce no new features in the course of computation. There should be no phrasal units or bar levels, hence no phrase structure rules or X-bar theory; and no indices, hence no binding theory using indices. We also try to show that no structural relations are invoked other than those forced by legibility conditions or induced in some natural way by the computation itself. In the first category we have such properties as adjacency at the phonetic level, and at the semantic level, argument structure and quantifier-variable relations. In the second category, we have elementary relations between two syntactic objects joined together in the course of computation; the relation holding between one of these and the parts of the other is a fair candidate; it is, in essence, the relation of c-command, as Samuel Epstein has pointed out, a notion that plays a central role throughout language design and has been regarded as highly unnatural, though it falls into place in a natural way from this perspective. Similarly, we can use very local relations between features; the most local, hence the best, are those that are internal to word-like units constructed from lexical items. But we exclude government and proper government, binding relations internal to the derivation of expressions, and a variety of other relations and interactions.

As anyone familiar with recent work will be aware, there is ample empirical evidence to support the opposite conclusion throughout. Worse yet, a core assumption of the work within the Principles-and-Parameters framework, and its fairly impressive achievements, is that everything I have just proposed is false - that language is highly “imperfect” in these respects, as might well be expected. So it is no small task to show that such apparatus is eliminable as unwanted descriptive technology; or even better, that descriptive and explanatory force are extended if such “excess baggage” is shed. Nevertheless, I think that work of the past few years suggests that these conclusions, which seemed out of the question a few years ago, are at least plausible, quite possibly correct.

Languages plainly differ, and we want to know how. One respect is in choice of sounds, which vary within a certain range. Another is in the association of sound and meaning, essentially arbitrary. These are straightforward and need not detain us. More interesting is the fact that languages differ in inflectional systems: case systems, for example. We find that these are fairly rich in Latin, even more so in Sanskrit or Finnish, but minimal in English and invisible in Chinese. Or so it appears; considerations of explanatory adequacy suggest that here too appearance may be misleading; and in fact, recent work indicates that these systems vary much less than the surface forms suggest. Chinese and English, for example, may have the same case system as Latin, but a different phonetic realization, though the effects show up in other ways. Furthermore, it seems that much of the variety of language can be reduced to properties of inflectional systems. If this is correct, then language variation is located in a narrow part of the lexicon.

Inflectional features differ from those that constitute lexical items. Consider any word, say the verb “see.” Its phonetic and semantic properties are intrinsic to it, as is its lexical category as a verb.

But it may appear with either singular or plural inflection. Typically a verb has one value along this inflectional dimension, but it is not part of its intrinsic nature. The same is true fairly generally of the substantive categories noun, verb, adjective, sometimes called “open classes” because new elements can be added to them rather freely, in contrast to inflectional systems, which are fixed early in language acquisition. There are second-order complexities and refinements, but the basic distinction between the substantive categories and the inflectional devices is reasonably clear not only in language structure, but also in acquisition and pathology, and recently there is even some suggestive work on brain imaging. We can put the complications to the side, and adopt an idealization that distinguishes sharply between substantive lexical items like “see” and “house”, and the inflectional features that are associated with them but are not part of their intrinsic nature.

Legibility conditions impose a three-way division among the features assembled into lexical items:

(1) semantic features, interpreted at the semantic interface

(2) phonetic features, interpreted at the phonetic interface

(3) features that are not interpreted at either interface


We assume that phonetic and semantic features are interpretable uniformly in all languages: the external systems at the interface are invariant; again, a standard assumption, though by no means an obvious one.
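
As a sketch only, the three-way division might be recorded as a simple classification. The Python below is illustrative; the labels are mine, assuming only the division just given and the invariance of the external systems.

    from enum import Enum

    class Interface(Enum):
        SEMANTIC = "interpreted at the semantic interface"   # (1)
        PHONETIC = "interpreted at the phonetic interface"   # (2)
        NEITHER = "interpreted at neither interface"         # (3)

    # Assumed, following the text: the external systems are invariant,
    # so a feature's classification holds across all languages.
    def interpretation_of(feature_kind):
        return {"semantic": Interface.SEMANTIC,
                "phonetic": Interface.PHONETIC}.get(feature_kind,
                                                    Interface.NEITHER)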

Independently, features are subdivided into the “formal features” that are used by the computational operations that construct the derivation of an expression, and others that are not accessed directly, but just “carried along.” A natural principle that would sharply restrict language variation is that only inflectional properties are formal features: only these are accessed by the computational processes. That may well be correct, an important matter that I will only be able to touch on briefly and inadequately. A still stronger condition would be that all inflectional features are formal, accessible in principle by the computational processes, and still stronger conditions can be imposed, topics that are now under active investigation, often pursuing sharply different intuitions.

One standard and shared assumption, which seems correct and principled, is that phonetic features are neither semantic nor formal: they receive no interpretation at the semantic interface and are not accessed by computational operations. Again, there are second-order complexities, but we may put them aside. We can think of phonetic features as being “stripped away” from the derivation by an operation that applies to the syntactic object already formed. This operation activates the phonological component of the grammar, which converts the syntactic object to a phonetic form. With the phonetic features stripped away, the derivation continues, but using the stripped-down residue lacking phonetic features, which is converted to the semantic representation. One natural principle of optimal design is that operations can apply anywhere, including this one. Assuming so, we can make a distinction between the OVERT operations that apply before the phonetic features are stripped away, and COVERT operations that carry the residue on to semantic representation. Covert operations have no effect on the sound of an expression, only on what it means.
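
A minimal sketch of the stripping operation and the overt/covert distinction, on the assumptions just stated; the function and field names are invented for illustration.

    def strip_phonetic(features):
        """Strip away the phonetic features from the syntactic object
        already formed; they activate the phonological component, while
        the residue proceeds on toward the semantic representation."""
        phonetic = [f for f in features if f["kind"] == "phonetic"]
        residue = [f for f in features if f["kind"] != "phonetic"]
        return phonetic, residue

    # Operations applying BEFORE the split are overt: they can affect the
    # sound of the expression. Operations applying to the residue AFTER
    # the split are covert: they affect only what the expression means.
    features = [{"name": "labial-closure", "kind": "phonetic"},
                {"name": "refers-to-person", "kind": "semantic"}]
    to_phonology, to_semantics = strip_phonetic(features)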

Another property of optimal design is that the computation from lexical items to semantic representation is uniform: the same operations should apply throughout, whether covert or overt. There seems to be an important sense in which that is true. Although covert and overt operations have different properties, with interesting empirical consequences, these distinctions may be reducible to legibility conditions at the sensorimotor interface. If so, they are “extrinsic” to core language design in a fundamental way. I’ll try to explain what I mean by that later on.

We assume, then, that in a particular language, features are assembled into lexical items, and then the fixed and invariant computational operations construct semantic representations from these in a uniform manner. At some point in the derivation, the phonological component accesses the derivation, stripping away the phonetic features and converting the syntactic object to phonetic form, while the residue proceeds to semantic representation by covert operations. We also assume that the formal features are inflectional, not substantive, so not only the phonetic features but also the substantive semantic features are inaccessible to the computation. The computational operations are therefore very restricted and elementary in character, and the apparent complexity and variety of languages should reduce essentially to inflectional properties.

Though the substantive semantic features are not formal, formal features may be semantic, with an intrinsic meaning. Take the inflectional property of number. A noun or a verb may be singular or plural, an inflectional property, not part of its intrinsic nature. For nouns, the number assigned has a semantic interpretation: the sentences “He sees the book” and “He sees the books” have different meanings. For the verb, however, the number has no semantic interpretation; it adds nothing that is not already determined by the expression in which it appears, in this case, its grammatical subject “He.” On the surface, what I just said seems untrue, for example, in sentences that seem to lack a subject, a common phenomenon in the Romance languages and many others. But a closer look gives strong reason to believe that the subject is actually there, heard by the mind though not by the ear.

The importance of the distinction between interpretable and uninterpretable formal features was not recognized until very recently, in the course of pursuit of the minimalist program. It seems to be central to language design.

In a perfectly designed language, each feature would be semantic or phonetic, not merely a device to create a position or to facilitate computation. If so, there would be no uninterpretable features. But as we have just seen, that is too strong a requirement. Nominative and accusative case features violate the condition, for example. These have no interpretation at the semantic interface, and need not be expressed at the phonetic level. The same is true of inflectional properties of verbs and adjectives, and there are others as well, which are not so obvious on the surface. We can therefore consider a weaker though still quite strong requirement approaching optimal design: each feature is either semantic or is accessible to the phonological component, which may (and sometimes does) use the feature in question to determine the phonetic representation. In particular, formal features are either interpretable or accessible to the phonological component. Case features are uninterpretable but may have phonetic effects, though they need not, as in Chinese and generally English, or even sometimes in languages with more visible inflection, like Latin. The same is true of other uninterpretable formal features. Let us assume (controversially) that this weaker condition holds. We are left with one imperfection of language design: the existence of uninterpretable formal features, which we now assume to be inflectional features only.
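
Stated as a predicate, for concreteness (a sketch; the field names are mine, and the status of the condition is controversial, as just noted):

    def near_optimal(feature):
        """Weaker requirement approaching optimal design: every feature is
        either semantic (interpretable) or accessible to the phonological
        component, which may - but need not - use it to determine the
        phonetic representation."""
        return feature["interpretable"] or feature["accessible_to_phonology"]

    # Nominative or accusative case on a noun: uninterpretable, but may
    # have phonetic effects (Latin) or none (Chinese, largely English).
    case_feature = {"interpretable": False, "accessible_to_phonology": True}
    assert near_optimal(case_feature)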

There seems to be a second and more dramatic imperfection in language design: the “displacement property” that is a pervasive aspect of language: phrases are interpreted as if they were in a different position in the expression, where similar items sometimes do appear and are interpreted in terms of natural local relations. Take the sentence “Clinton seems to have been elected.” We understand the relation of “elect” and “Clinton” as we do when they are locally related in the sentence “It seems that they elected Clinton”: “Clinton” is the direct object of “elect”, in traditional terms, though “displaced” to the position of subject of “seems.” The subject “Clinton” and the verb “seems” agree in inflectional features in this case, but have no semantic relation; the semantic relation of the subject is to the remote verb “elect.”

We now have two “imperfections”: uninterpretable formal features, and the displacement property. On the assumption of optimal design, we would expect them to reduce to the same cause, and that seems to be the case: uninterpretable formal features provide the mechanism that implements the displacement property.

The displacement property is never built into the symbolic systems that are designed for special purposes, called “languages” or “formal languages” in a metaphoric usage that has been highly misleading, I think: “the language of arithmetic”, or “computer languages”, or “the languages of science.” These systems also have no inflectional systems, hence no uninterpreted formal features. Displacement and inflection are special properties of human language, among the many that are ignored when symbolic systems are designed for other purposes, free to disregard the legibility conditions imposed on human language by the architecture of the mind/brain.

Why language should have the displacement property is an interesting question, which has been discussed for many years without resolution. One early proposal is that the property reflects processing conditions. If so, it may in part be reducible to properties of the articulatory and perceptual apparatus, hence forced by legibility conditions at the phonetic interface. I suspect that another part of the reason may have to do with phenomena that have been described in terms of surface structure interpretation: topic-comment, specificity, new and old information, the agentive force that we find even in displaced position, and so on. These seem to require particular positions in temporal linear order, typically at the edge of some construction. If so, then the displacement property also reflects legibility conditions at the semantic interface; it is motivated by interpretive requirements that are externally imposed by our systems of thought, which have these special properties, so it appears. These questions are currently being investigated in interesting ways, which I cannot go into here.

From the origins of generative grammar, the computational operations were assumed to be of two kinds: phrase structure rules that form larger syntactic objects from lexical items, and transformational rules that express the displacement property. Both have traditional roots; their first moderately clear formulation was in the influential Port-Royal grammar of 1660. But it was quickly found that the operations differ substantially from what had been supposed, with unsuspected variety and complexity - conclusions that had to be false for the reasons I discussed yesterday. The research program sought to show that the complexity and variety are only apparent, and that the two kinds of rules can be reduced to simpler form. A “perfect” solution to the problem of phrase structure rules would be to eliminate them entirely in favor of the irreducible operation that takes two objects already formed and attaches one to the other, forming a larger object with just the properties of the target of attachment: the operation we can call Merge. That goal may be attainable, recent work indicates, in a system called “bare phrase structure.”

Assuming so, the optimal computational procedure consists of the operation Merge and operations to express the displacement property: transformational operations or some counterpart. The second of the two parallel endeavors sought to reduce these to the simplest form, though unlike phrase structure rules, they seem to be ineliminable. The end result was the thesis that for a core set of phenomena, there is just a single operation Move - basically, move anything anywhere, with no properties specific to languages or particular constructions. How the operation Move applies is determined by general principles of language interacting with the specific parameter choices that determine a particular language.

The operation Merge takes two distinct objects X and Y and attaches Y to X. The operation Move takes a single object X and an object Y that is part of X, and merges Y to X. In both cases, the new unit has the properties of the target, X. The object formed by the operation Move includes two occurrences of the moved element Y: in technical terms, the CHAIN consisting of these two occurrences of Y. The occurrence in the original position is called THE TRACE. There is strong evidence that both positions enter into semantic interpretation in many ways. Both, for example, enter into scopal relations and binding relations with anaphoric elements, reflexives and pronouns. When longer chains are constructed by successive steps of movement, the intermediate positions also enter into such relations. To determine just how this works is a very live research topic, which, on minimalist assumptions, should be restricted to interpretive operations at the semantic interface; again, a highly controversial thesis.
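
For illustration, the two operations might be sketched as follows, with “label” standing in crudely for “the properties of the target”; the encoding is invented, not a proposal.

    # Toy syntactic objects: a string (a lexical item) or a dict with a
    # label and parts.
    def label_of(obj):
        return obj if isinstance(obj, str) else obj["label"]

    def merge(x, y):
        """Merge: attach the distinct object y to x; the new unit has the
        properties (here, the label) of the target x."""
        return {"label": label_of(x), "parts": [x, y]}

    def contains(x, y):
        return x == y or (isinstance(x, dict)
                          and any(contains(p, y) for p in x["parts"]))

    def move(x, y):
        """Move: y is already part of x; merge y with x itself. The result
        holds two occurrences of y - the chain - the original occurrence
        being the trace. Both positions remain visible to interpretation
        (scopal and binding relations, for example)."""
        assert contains(x, y)
        return {"label": label_of(x), "parts": [y, x],
                "chain": [y, ("trace", y)]}

    vp = merge("elected", "Clinton")   # "elected Clinton"
    clause = merge("seems", vp)        # simplified clause structure
    raised = move(clause, "Clinton")   # "Clinton seems ... (trace)"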

The next problem is to show that uninterpretable formal features are indeed the mechanism that implements the displacement property, so that the two basic imperfections of the computational system reduce to one. If it turns out further that the displacement property is motivated by legibility conditions imposed by external systems, as I just suggested, then the two imperfections are eliminated completely and language design turns out to be optimal after all: uninterpreted formal features are required as a mechanism to satisfy legibility conditions imposed by the general architecture of the mind/brain, properties of the processing apparatus and the systems of thought.

The unification of uninterpretable formal features and the displacement property is based on quite simple ideas, but to explain them coherently would go beyond the scope of these remarks. The basic intuition rests on an empirical fact coupled with a design principle.

The fact is that uninterpretable formal features have to be erased for the expression to be legible at the semantic interface; the design principle is that erasure requires a local relation between the offending feature and a matching feature. Typically these two features are remote from one another, for reasons having to do with semantic interpretation. For example, in the sentence “Clinton seems to have been elected”, semantic interpretation requires that “elect” and “Clinton” be locally related in the phrase “elect Clinton” for the construction to be properly interpreted, as if the sentence were actually “seems to have been elected Clinton”. The main verb of the sentence, “seems”, has inflectional features that are uninterpretable, as we have seen: its number and person, for example. These offending features of “seems” therefore have to be erased in a local relation with the matching features of the phrase “Clinton.” The matching features are attracted by the offending features of the main verb “seems”, which are then erased under local matching. The traditional descriptive term for the phenomenon we are looking at is “agreement”, but we have to give it explicit content, and as usual, unexpected properties come to the fore when we do so.
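
The erasure-under-matching step might be sketched like this, with toy feature values invented for the example:

    # "seems" carries uninterpretable person/number (agreement) features;
    # "Clinton" carries matching interpretable ones. Values are invented.
    seems = {"person": ("3", "uninterpretable"),
             "number": ("sing", "uninterpretable")}
    clinton = {"person": ("3", "interpretable"),
               "number": ("sing", "interpretable")}

    def erase_under_matching(probe, goal):
        """The offending (uninterpretable) features of the probe attract
        the matching features of the goal and are erased in that local
        relation, leaving the expression legible at the semantic
        interface."""
        for name in list(probe):
            value, status = probe[name]
            if (status == "uninterpretable" and name in goal
                    and goal[name][0] == value):
                del probe[name]   # erased under local matching

    erase_under_matching(seems, clinton)
    assert seems == {}   # agreement: all offending features erased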

If this can be worked out properly, we conclude that a particular language consists of a lexicon, a phonological system, and two computational operations: Merge and Attract. Attract is driven by the principle that uninterpretable formal features must be erased in a local relation, and something similar extends to Merge.

Note that only the FEATURES of “Clinton” are attracted; we still have not dealt with the overtly visible displacement property, the fact that the full phrase in which the features appear, the word “Clinton” in this case, is carried along with the formal inflectional features that erase the target features. Why does the full phrase move, not just the features? The natural idea is that the reasons have to do with the poverty of the sensorimotor system, which is unable to “pronounce” or “hear” isolated features separated from the words of which they are a part. Hence in such sentences as “Clinton seems to have been elected”, the full phrase “Clinton” moves along as a reflex of the attraction of the formal features of “Clinton.” In the sentence “an unpopular candidate seems to have been elected”, the full phrase “an unpopular candidate” is carried along as a reflex of the attraction of the formal features of “candidate.” There are much more complex examples.

Suppose that the phonological component is inactivated. Then the features alone raise, and alongside of the sentence “an unpopular candidate seems to have been elected”, with overt displacement, we have the corresponding expression “seems to have been elected an unpopular candidate”; here the remote phrase “an unpopular candidate” agrees with the verb “seems”, which means that its features have been attracted to a local relation with “seems” while leaving the rest of the phrase behind.

Such inactivation of the phonological component in fact takes place. For other reasons, we do not see exactly this pattern with definite noun phrases like “Clinton”, but it is standard with indefinite ones such as “an unpopular candidate.” Thus we have, side by side, the two sentences “an unpopular candidate seems to have been elected” and “seems to have been elected an unpopular candidate.” The latter expression is normal in many languages, including most of the Romance languages. English, French, and other languages have them too, though it is necessary for other reasons to introduce a semantically empty element as apparent subject: in English, the word “there”, so that we have the sentence “there seems to have been elected an unpopular candidate.” It is also necessary in English, though not in closely related languages, to carry out an inversion of order, for quite interesting reasons that hold much more generally for the language; hence what we actually say in English is the sentence “there seems to have been an unpopular candidate elected.”

Taking a slightly closer look, suppose that X is a feature that is uninterpretable and therefore must erase. It therefore attracts the closest feature Y that matches it. Y attaches to X, and the attractor X erases. Y will also erase if uninterpretable, and will remain if it is interpretable. This is the source of successive-cyclic movement, among other properties. Note that we have to explain what we mean by “closest”, another question with interesting ramifications.
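
In the same toy terms, the step just described might look like this, with “closest” crudely modeled as order in a list, deferring the real question of what closeness is:

    def attract_closest(x, candidates):
        """x is uninterpretable and must erase: it attracts the closest
        matching feature y. x erases; y also erases if uninterpretable,
        but remains if interpretable - the source of successive-cyclic
        movement, among other properties."""
        for y in candidates:               # scanned closest-first
            if y["name"] == x["name"]:     # y matches x
                x["erased"] = True         # the attractor erases
                if not y["interpretable"]:
                    y["erased"] = True     # uninterpretable y erases too
                return y                   # interpretable y remains
        return None                        # no matching feature in range

    verb_case = {"name": "case", "interpretable": False}
    noun_case = {"name": "case", "interpretable": False}
    attract_closest(verb_case, [noun_case])   # both uninterpretable: both erase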

For covert movement, that is all there is to say: features attract, and erase when they must. Covert operations should be pure feature attraction, with no visible movement of phrases, though with effects on such matters as agreement, control and binding, again a topic that has been studied in the past few years, with some interesting results. If the sound system has not been inactivated, we have the reflex that raises a full phrase, placing it as close as possible to the attracted feature Y; in technical terms, this translates to movement of the phrase to the specifier of the head to which Y has attached. The operation is a generalized version of what has been called “pied-piping” in the technical literature. The proposal opens very substantial and quite difficult empirical problems, which have only been very partially examined. The basic problem is to show that the choice of the phrase that is moved is determined by other properties of the language, within minimalist assumptions. Insofar as these problems are solved, we have a mechanism that implements core aspects of the displacement property in a natural way.
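
A sketch of the reflex, with hypothetical names; raising of the phrase “to the specifier of the head” is collapsed here into simply carrying the phrase along.

    def apply_attract(features, phrase, phonology_active):
        """Covert operation: the features alone raise. Overt operation
        (phonological component active): the sensorimotor systems cannot
        pronounce isolated features, so the full phrase containing them is
        carried along - generalized pied-piping - to a position as close
        as possible to the attracted features."""
        result = {"raised_features": features}
        if phonology_active:
            result["pied_piped"] = phrase   # the whole phrase raises too
        return result

    feats = {"person": "3", "number": "sing"}
    overt = apply_attract(feats, "an unpopular candidate", True)
    # -> "an unpopular candidate seems to have been elected"
    covert = apply_attract(feats, "an unpopular candidate", False)
    # -> "(there) seems to have been elected an unpopular candidate"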

In a large range of cases, the apparent variety and complexity is superficial, reducing to minor parametric differences and a straightforward legibility condition: uninterpretable formal features must be erased, and on assumptions of optimal design, erased in a local relation with a matching feature. The displacement property that is required for semantic interpretation at the interface follows as a reflex, induced by the primitive character of modes of sensory interpretation.

Combining these various ideas, some still highly speculative, we can envisage both a motivation and a trigger for the displacement property. Note that these have to be distinguished. An embryologist studying the development of the eye may take note of the fact that for an organism to survive, it would be helpful for the lens to contain something that protects it from damage and something that refracts light; and looking further, would discover that crystallin proteins have both these properties and also seem to be ubiquitous components of the lens of the eye, showing up on independent evolutionary paths. The first property has to do with “motivation” or “functional design”, the second with the trigger that yields the right functional design. There is an indirect and important relation between them, but it would be an error to confound them. Thus a biologist accepting all of this would not offer the functional design property as the mechanism of embryological development of the eye.

Similarly, we do not want to confound functional motivations for properties of language with the specific mechanisms that implement them. We do not want to confound the fact that the displacement property is required by external systems with the mechanisms of the operations Attract and its reflex.

The phonological component is responsible for other respects in which the design of language is “imperfect.” It includes operations beyond those that are required for any language-like system, and these introduce new features and elements that are not in lexical items: intonational features, narrow phonetics, perhaps even temporal order, in a version of ideas developed by Richard Kayne. “Imperfections” in this component of language would not be very surprising: for one reason, because direct evidence is available to the language learner; for another, because of special properties of sensorimotor systems. If the overt manifestation of the displacement property also reduces to special features of the sensorimotor system, as I just suggested, then a large range of imperfections may have to do with the need to “externalize” language. If we could communicate by telepathy, they would not arise. The phonological component is in a certain sense “extrinsic” to language, and the locus of a good part of its imperfection, so one might speculate.

At this point, we are moving to questions that go far beyond anything I can try to discuss here. To the extent that the many problems fall into place, it will follow that language is a good - maybe very good - solution to the conditions imposed by the general architecture of the mind/brain, an unexpected conclusion if true, hence an intriguing one. And like the Principles-and-Parameters approach more generally, whether it turns out to be on the right track or not, it is currently serving to stimulate a good deal of empirical research with sometimes surprising results, and a host of new and challenging problems, which is all that one can ask.

Editorial note.
Words are CAPITALIZED when I mean them to be underlined, to indicate italics.