Merge (linguistics)
Merge (usually capitalized) is one of the basic operations in the Minimalist Program, a leading approach to generative syntax, by which two syntactic objects are combined to form a new syntactic unit (a set). Merge also has the property of recursion in that it may apply to its own output: the objects combined by Merge are either lexical items or sets that were themselves formed by Merge. This recursive property of Merge has been claimed to be a fundamental characteristic that distinguishes language from other cognitive faculties. As Noam Chomsky (1999) puts it, Merge is "an indispensable operation of a recursive system ... which takes two syntactic objects A and B and forms the new object G={A,B}" (p. 2).[1]
Mechanisms of Merge
Within the Minimalist Program, syntax is derivational, and Merge is the structure-building operation. Merge is assumed to have certain formal properties constraining syntactic structure, and is implemented with specific mechanisms.
Binary branching
Merge takes two objects α and β and combines them, creating a binary structure.
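The effect of binary Merge can be shown with a short, informal sketch. The Python fragment below is only an illustration; the use of strings for lexical items and frozensets for syntactic objects is an expository assumption, not part of the theory.

```python
# A minimal sketch, assuming strings for lexical items and frozensets for
# syntactic objects; an illustration only, not part of the theory itself.

def merge(alpha, beta):
    """Combine two syntactic objects into a new two-membered set."""
    return frozenset({alpha, beta})

# Building "eat the cheesecake" bottom-up:
dp = merge("the", "cheesecake")   # {the, cheesecake}
vp = merge("eat", dp)             # {eat, {the, cheesecake}}: Merge applied to its own output
print(vp)
```

Because the output of merge is itself a syntactic object, the operation can re-apply to it, which is the recursive property described above.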
Feature checking
In some variants of the Minimalist Program, Merge is triggered by feature checking: for example, the verb eat selects the noun cheesecake because the verb carries an uninterpretable N-feature [uN] ("u" stands for "uninterpretable"), which must be checked (and thereby deleted) to satisfy full interpretation.[2] By saying that the verb has an uninterpretable nominal feature, we rule out ungrammatical combinations such as *eat beautiful, in which the verb would select an adjective. Schematically, the checking relation can be represented as: eat [V, uN] + cheesecake [N] → {eat, cheesecake}, with [uN] checked and deleted.
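This checking mechanism can also be sketched informally in code. The LexicalItem class and its attributes below are illustrative assumptions, not the formalism of Adger (2003).

```python
# An illustrative sketch; the LexicalItem class and its attributes are
# expository assumptions, not the formalism of Adger (2003).

class LexicalItem:
    def __init__(self, form, category, uninterpretable=None):
        self.form = form                        # e.g. "eat"
        self.category = category                # syntactic category, e.g. "V", "N", "Adj"
        self.uninterpretable = uninterpretable  # selectional feature, e.g. "N" for [uN]

def merge_checked(head, sister):
    """Merge head and sister only if the head's uninterpretable feature is checked."""
    if head.uninterpretable is not None and head.uninterpretable != sister.category:
        raise ValueError(f"*{head.form} {sister.form}: [u{head.uninterpretable}] not checked")
    return frozenset({head, sister})

eat = LexicalItem("eat", "V", uninterpretable="N")
cheesecake = LexicalItem("cheesecake", "N")
beautiful = LexicalItem("beautiful", "Adj")

merge_checked(eat, cheesecake)     # succeeds: [uN] is checked by the noun
try:
    merge_checked(eat, beautiful)  # fails: *eat beautiful
except ValueError as error:
    print(error)
```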
External and internal Merge
Chomsky (2001) distinguishes between external and internal Merge: if A and B are separate objects, the operation is external Merge; if either of them is part of the other, it is internal Merge.[3]
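The distinction can be pictured as a containment test, sketched below. Representing syntactic objects as nested frozensets and checking containment in this way are expository assumptions, not Chomsky's (2001) formulation.

```python
# A rough sketch; representing syntactic objects as nested frozensets and
# testing containment are expository assumptions, not Chomsky's formulation.

def contains(obj, part):
    """Return True if `part` occurs as a proper subpart of the set-structure `obj`."""
    if not isinstance(obj, frozenset):
        return False
    return any(member == part or contains(member, part) for member in obj)

def merge(alpha, beta):
    """Merge two objects and report whether the step is external or internal."""
    kind = "internal" if contains(alpha, beta) or contains(beta, alpha) else "external"
    return kind, frozenset({alpha, beta})

dp = frozenset({"the", "cheesecake"})
vp = frozenset({"eat", dp})
print(merge("will", vp))  # external Merge: "will" is not part of vp
print(merge(dp, vp))      # internal Merge: dp is already contained inside vp
```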
Three controversial aspects of Merge
Standard Merge (i.e. as it is commonly understood) encourages one to adopt three key assumptions about the nature of syntactic structure and the faculty of language: 1) sentence structure is generated bottom up in the mind of speakers (as opposed to top down or left to right), 2) all syntactic structure is binary branching (as opposed to n-ary branching) and 3) syntactic structure is constituency-based (as opposed to dependency-based). While these three assumptions are taken for granted for the most part by those working within the broad scope of the Minimalist Program, other theories of syntax reject one or more of them.
Merge is commonly seen as combining smaller constituents into greater constituents until the greatest constituent, the sentence, is reached. This bottom-up view of structure generation is rejected by representational (non-derivational) theories (e.g. Generalized Phrase Structure Grammar, Head-Driven Phrase Structure Grammar, Lexical Functional Grammar, most dependency grammars, etc.), and it is contrary to early work in Transformational Grammar. The phrase structure rules of context-free grammar, for instance, generate sentence structure top down.
Merge is usually assumed to combine just two constituents at a time, a limitation that results in tree structures in which all branching is binary. While strictly binary branching structures have been argued for in detail,[4] a number of empirical considerations, e.g. the results of standard constituency tests, cast doubt on them.[5] For this reason, most grammar theories outside of Government and Binding Theory and the Minimalist Program allow for n-ary branching.
Merge combines two constituents in such a manner that they become sister constituents and daughters of the newly created mother constituent. This understanding of how structure is generated is constituency-based (as opposed to dependency-based). Dependency grammars (e.g. Meaning-Text Theory, Functional Generative Description, Word Grammar) reject this aspect of Merge, since they take syntactic structure to be dependency-based.[6]
Comparison to other approaches
In other approaches to generative syntax, such as Head-Driven Phrase Structure Grammar, Lexical Functional Grammar, and other types of unification grammar, the analogue to Merge is the unification operation: in these theories, operations over attribute-value matrices (feature structures) are used to account for many of the same facts. Though Merge is usually assumed to be unique to language, the linguists Jonah Katz and David Pesetsky have argued that the harmonic structure of tonal music is also a result of the operation Merge.[7]
The notion of Merge may also be related to Fauconnier's notion of 'blending' in cognitive linguistics.
Phrase structure grammar
Phrase structure grammar (PSG) represents both immediate constituency relations (i.e. how words group together) and linear precedence relations (i.e. how words are ordered). In a PSG, a constituent contains at least one member, but there is no upper bound on how many members it may contain. In Merge theory, by contrast, a constituent contains at most two members, and every syntactic object is a constituent.
X-bar theory
X-bar theory posits that all lexical items project three levels of structure: X, X', and XP. Consequently, there is a three-way distinction between Head, Complement, and Specifier:
- the Head projects its category to each node in the projection;
- the Complement is introduced as sister to the Head, and forms an intermediate projection, labeled X';
- the Specifier is introduced as sister to X', and forms the maximal projection, labeled XP.
While the first application of Merge corresponds to the Head-Complement relation, the second application corresponds to the Specifier-Head relation. However, the two theories differ in the claims they make about the nature of the Specifier-Head-Complement (S-H-C) structure. In X-bar theory, S-H-C is a primitive; an example of this is Kayne's antisymmetry theory. In a Merge-based theory, S-H-C is derivative.
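The derivative character of S-H-C under Merge can be sketched as two successive applications of the operation. The dictionary representation and the explicit category labels below are assumptions made for illustration, not part of either theory.

```python
# A small sketch; the dictionary representation and the explicit category
# labels are assumptions made for illustration, not part of either theory.

def merge(label, alpha, beta):
    """Merge alpha and beta; the new constituent is labeled by the projecting head."""
    return {"label": label, "members": (alpha, beta)}

# First application: Head + Complement (the X' level in X-bar terms)
v_bar = merge("V", "eat", "cheesecake")
# Second application: Specifier + intermediate projection (the XP level)
vp = merge("V", "Mary", v_bar)
print(vp)
```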
Notes
1. Chomsky (1999).
2. See Adger (2003).
3. See Chomsky (2001).
4. See Kayne (1981, 1994).
5. Concerning what constituency tests tell us about the nature of branching and syntactic structure, see Osborne (2008: 1126-32).
6. Concerning dependency grammars, see Ágel et al. (2003/6).
7. See Katz and Pesetsky (2009).
References
- Adger, D. 2003. Core syntax: A Minimalist approach. Oxford: Oxford University Press. ISBN 0-19-924370-0.
- Ágel, V., Ludwig Eichinger, Hans-Werner Eroms, Peter Hellwig, Hans Heringer, and Hennig Lobin (eds.) 2003/6. Dependency and valency: An international handbook of contemporary research. Berlin: Walter de Gruyter.
- Chomsky, N. 1999. Derivation by phase. Cambridge, MA: MIT.
- Chomsky, N. 2001. Beyond explanatory adequacy. Cambridge, MA: MIT.
- Katz, J. and D. Pesetsky. 2009. The identity thesis for language and music. http://ling.auf.net/lingBuzz/000959
- Kayne, R. 1981. Unambiguous paths. In R. May and J. Koster (eds.), Levels of syntactic representation, 143-183. Dordrecht: Kluwer.
- Kayne, R. 1994. The antisymmetry of syntax. Linguistic Inquiry Monograph Twenty-Five. MIT Press.
- Osborne, T. 2008. Major constituents and two dependency grammar constraints on sharing in coordination. Linguistics 46, 6, 1109-1165.