Poverty of the stimulus
In linguistics, the poverty of the stimulus (POS) is the assertion that natural language grammar is unlearnable given the relatively limited data available to children learning a language, and therefore that this knowledge is supplemented with some sort of innate linguistic capacity.[1]
Nativists claim that humans are born with a specific representational adaptation for language that both funds and limits their competence to acquire specific types of natural languages over the course of their cognitive development and linguistic maturation. The argument is now generally used to support theories and hypotheses of generative grammar. The term "poverty of the stimulus" was coined by Noam Chomsky in his work Rules and Representations.[2] The thesis emerged from several of Chomsky's writings on the issue of language acquisition. The argument has long been controversial within the field of linguistics, forming the backbone for the theory of universal grammar. Poverty-of-stimulus arguments do not deny that learning from experience plays a role; rather, they hold that such learning must be supplemented by the innate principles of universal grammar.[3]
Summary
History
Although Chomsky coined the term "poverty of the stimulus" in 1980, the concept is directly linked to another Chomskyan idea, Plato's Problem. He outlined this philosophical approach in the first chapter of Aspects of the Theory of Syntax (1965). Chomsky asserts that there is a physiological component in the brain that develops in children and allows them to acquire language universally.[4] Plato's Problem traces back to Meno, a Socratic dialogue. In Meno, Socrates draws out mathematical knowledge from a servant who was never explicitly taught the relevant geometric concepts.[5] Plato's Problem directly parallels the idea of the innateness of language, universal grammar, and more specifically the poverty of the stimulus argument, because Socrates elicits from the servant an understanding of concepts to which he was apparently never exposed. Chomsky argues analogously that children are not exposed to all the structures of their language, yet they attain the necessary linguistic knowledge at an early age.
Premises
Though Chomsky and his supporters have formulated the argument in a variety of ways (indeed, Pullum and Scholz (2002) identify no fewer than 13 different "sub-arguments" that can optionally form part of a poverty-of-stimulus argument),[6] one frequent form of the argument can be summed up as follows:
- Premises:
- There are patterns in all natural languages that cannot be learned by children using positive evidence alone. Positive evidence is the set of grammatical sentences that the language learner has access to, as a result of observing the speech of others. Negative evidence, on the other hand, is the evidence available to the language learner about what is not grammatical. For instance, when a parent corrects a child's speech, the child acquires negative evidence.
- Children are only ever presented with positive evidence for these particular patterns. For example, they only hear others speaking using sentences that are "right", not those that are "wrong".
- Children do learn the correct grammars for their native languages.
- Conclusion: Therefore, human beings must have some form of innate linguistic capacity that provides additional knowledge to language learners. On this view, the stimulus alone is not adequate to explain the process of learning.[7] The poverty of stimulus argument thus attempts to explain how native speakers come to identify possible and impossible interpretations through ordinary experience. Thus, "language acquisition is not merely a matter of acquiring a capacity to associate word strings with interpretations. Much less is it a mere process of acquiring a (weak generative) capacity to produce just the valid word strings of the language."[8]
Proposed evidence
For the argument
Several patterns in language have been claimed to be unlearnable from positive evidence alone. One example is the hierarchical nature of languages. The grammars of human languages produce hierarchical tree structures, and some linguists argue that human languages are also capable of infinite recursion (see context-free grammar). For any given set of sentences generated by a hierarchical grammar capable of infinite recursion, there are indefinitely many grammars that could have produced the same data; this would make learning any such language impossible. Indeed, a proof by E. Mark Gold showed that any formal language that has hierarchical structure capable of infinite recursion is unlearnable from positive evidence alone,[9] in the sense that it is impossible to formulate a procedure that will discover with certainty the correct grammar given any arbitrary sequence of positive data in which each utterance occurs at least once.[10] However, this does not preclude learners from arriving at the correct grammar on the basis of typical input sequences (as opposed to adversarially constructed ones), or from arriving at a very close approximation to the correct grammar. Indeed, it has been proposed that under very mild assumptions (ergodicity and stationarity), the probability of producing a sequence that renders language learning impossible is in fact zero.[11]
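For reference, Gold's learnability criterion can be stated as follows; this is a standard formulation of identification in the limit, with notation chosen here for exposition rather than quoted from the sources cited above:

```latex
% Identification in the limit (Gold 1967), standard formulation.
% A "text" for a language L is an infinite sequence t_1, t_2, \ldots whose members
% are exactly the strings of L; a learner \varphi maps each finite prefix to a grammar.
\varphi\ \text{identifies}\ L\ \text{in the limit} \iff
\forall\ \text{texts}\ t\ \text{for}\ L,\ \exists N\ \forall n \ge N:\quad
\varphi(t_1,\ldots,t_n) = G \ \text{ with } \ \mathcal{L}(G) = L .
```

Gold's theorem then shows that no single learner can identify in the limit every language in a class that contains all finite languages together with at least one infinite language, a condition met by the regular and the context-free languages.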
Another example of a language pattern claimed to be unlearnable from positive evidence alone is subject-auxiliary inversion in questions, i.e.:
- You are happy.
- Are you happy?
There are two hypotheses the language learner might postulate about how to form questions: (1) the first auxiliary verb in the sentence (here: 'are') moves to the beginning of the sentence, or (2) the 'main' auxiliary verb (the auxiliary of the main clause) moves to the front. In the sentence above, both rules yield the same result, since there is only one auxiliary verb. But the difference becomes apparent in a case like this:
- Anyone who is interested can see me later.
- * Is anyone who interested can see me later?
- Can anyone who is interested see me later?
The result of rule (1) is ungrammatical, while the result of rule (2) is grammatical; so rule (2) is (approximately) the rule English actually has, not rule (1). The claim, then, is first that children do not encounter sentences as complicated as this one often enough to witness a case where the two hypotheses yield different results, and second that, based solely on the positive evidence of simple sentences, children could not possibly decide between (1) and (2). Moreover, even simple sentences such as these are compatible with a number of incorrect rules (such as "front any auxiliary").[12] Thus, if rule (2) were not innately known to infants, we would expect half of the adult population to use (1) and half to use (2). Since that does not occur, rule (2) must be innately known. (See Pullum 1996 for the complete account and critique.)[13]
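As an illustration, the contrast between the two candidate rules can be sketched in a few lines of code. The bracketing of the example sentence, and hence the position of the main-clause auxiliary, is supplied by hand here purely for exposition; this is not a model of how children identify it:

```python
# Toy contrast between a structure-independent rule (front the linearly first
# auxiliary) and a structure-dependent rule (front the main-clause auxiliary).
AUXILIARIES = {"is", "are", "can", "will"}

def front_first_auxiliary(words):
    """Rule (1): move the linearly first auxiliary verb to the front."""
    for i, w in enumerate(words):
        if w in AUXILIARIES:
            return [w] + words[:i] + words[i + 1:]
    return list(words)

def front_main_clause_auxiliary(words, main_aux_index):
    """Rule (2): move the auxiliary of the main clause to the front.
    Finding that auxiliary requires a structural analysis, so its
    position is simply given by hand in this sketch."""
    w = words[main_aux_index]
    return [w] + words[:main_aux_index] + words[main_aux_index + 1:]

declarative = "anyone who is interested can see me later".split()

print(" ".join(front_first_auxiliary(declarative)))
# -> "is anyone who interested can see me later"   (ungrammatical)

print(" ".join(front_main_clause_auxiliary(declarative, main_aux_index=4)))
# -> "can anyone who is interested see me later"   (grammatical)
```

On a simple sentence such as "you are happy", both functions return the same question, which is why such input cannot decide between the two rules.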
The last premise, that children successfully learn language, is considered to be evident in human speech. Though people occasionally make mistakes, human beings rarely speak ungrammatical sentences, and generally do not label them as such when they say them (ungrammatical here in the descriptive sense, rather than the prescriptive).
To apply the idea of Universal Grammar and POS to a real-life situation, one can look to second-language acquisition. If an L2 learner acquires knowledge about a second language that they did not gain either from the language input they have experienced or from their first language, they must have gained it from UG.[14] This would mean that L2 learners have an innate principle for learning, which supports the poverty of the stimulus theory. The main example of "such innate knowledge is the principle of structure-dependency".[14] Not all languages are structure-dependent; therefore it is extremely helpful to use this concept as a test for the issue of innate knowledge.[14] If a person shows understanding of structure-dependency in the L2, and they were not directly taught it, nor is their first language structure-dependent, this supports the concept of innate knowledge, and therefore the poverty of the stimulus. A study by Cook presented three types of sentences to L2 learners:[15]
- A: Joe is [the dog that is black].
- B: Is Joe [the dog that is black]?
- C: Is Joe is [the dog that black]?
The subjects of the study were 35 native speakers of English and 140 L2 speakers of English whose first languages were Polish, Finnish, Dutch, Japanese, Chinese, and Arabic. The subjects each read a list of 96 sentences and rated each as either OK, not OK, or not sure.[15] The subjects were not taught structure-dependency in relation to English prior to the study. The results showed that for sentences structured like example C above, speakers of Polish, Finnish, Dutch, and Japanese all answered 95% or more correctly; Arabic speakers answered 87.1% correctly; and Chinese speakers answered 86.8% correctly. This is taken to show that L2 speakers conform to the poverty of the stimulus theory where structure-dependency is concerned.[15]
Against the argument
Notable figures in the philosophical and empirical study of the mind have challenged various aspects of the poverty of stimulus argument. Much of the criticism comes from researchers who study language acquisition and computational linguistics. Additionally, some connectionist researchers have rejected aspects of Chomsky's model, because its premises are at odds with connectionist views of the structure of cognition. Constructionists are theorists who reject the Chomskyan argument and hold that language is learned through some kind of functional distributional analysis (Tomasello 1992). One problem in language acquisition is the so-called no-negative-evidence problem: the observation that children can use only positive evidence to learn language. Constructionists appeal to statistical and social learning mechanisms, which they claim can overcome the lack of negative evidence, whereas nativists appeal to theories of linguistic constraints (Baker 1979, Jackendoff 1975).
One common critique is that positive evidence is actually enough to learn the various patterns that Chomskyan linguists claim are unlearnable from positive evidence alone. A common argument is that the brain's mechanisms of statistical pattern recognition could solve many of the difficulties stated by the argument. For example, researchers using neural networks and other statistical methods have programmed computers to learn rules such as rule (2) above, and have claimed to have successfully extracted hierarchical structures, all using positive evidence alone.[16][17] Indeed, Klein & Manning (2002)[18] report constructing a computer program that is able to retrieve 80% of all correct syntactic analyses of text in the Wall Street Journal Corpus using a statistical learning mechanism (unsupervised grammar induction), demonstrating a clear move away from "toy" grammars. In another study, a probabilistic model with no programmed preconceptions about grammar was trained on a large collection of newspaper articles. Even though the researchers had removed all articles containing the sentence "colorless green ideas sleep furiously", the model, after "reading" thousands of articles, judged that sentence to be about 10,000 times more probable than a scrambled, ungrammatical version of it. This has been suggested as evidence that statistical analysis without preconceptions can reveal general grammatical regularities with human-like accuracy.[19] Also supporting the idea of learning through statistical reasoning is the Bayesian model of language acquisition. This model provides a rational approach to language learning, suggesting that a person does not need to experience all the structures and concepts of a language in order to learn them.[20][21]
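A minimal sketch of the kind of probability comparison described above is given below. It uses a tiny bigram model with add-one smoothing over a three-sentence stand-in corpus; the corpus, the smoothing scheme, and all other specifics are illustrative assumptions, not the setup of the study cited:

```python
# Toy bigram language model with add-one smoothing, used to compare the probability
# of a grammatical word order against a scrambled version of the same words.
from collections import Counter

def train_bigrams(sentences):
    unigrams, bigrams = Counter(), Counter()
    for s in sentences:
        words = ["<s>"] + s.split() + ["</s>"]
        unigrams.update(words)
        bigrams.update(zip(words, words[1:]))
    return unigrams, bigrams

def sentence_probability(sentence, unigrams, bigrams):
    words = ["<s>"] + sentence.split() + ["</s>"]
    vocab_size = len(unigrams)
    p = 1.0
    for prev, cur in zip(words, words[1:]):
        # Add-one smoothing keeps unseen bigrams possible, just improbable.
        p *= (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab_size)
    return p

# A deliberately tiny illustrative corpus (a stand-in for newspaper text).
corpus = [
    "green ideas are discussed widely",
    "ideas sleep in textbooks",
    "colorless liquids pour furiously",
]
uni, bi = train_bigrams(corpus)

grammatical = "colorless green ideas sleep furiously"
scrambled = "furiously sleep ideas green colorless"
ratio = sentence_probability(grammatical, uni, bi) / sentence_probability(scrambled, uni, bi)
print(f"grammatical / scrambled probability ratio: {ratio:.1f}")
# Although the full test sentence never occurs in the corpus, its familiar
# word-order fragments make it come out more probable than the scrambled order.
```

The point of the sketch is only that word-order statistics alone can separate the two strings; the study cited used a far richer model and corpus.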
Another suggested flaw in the poverty of the stimulus argument is that preliminary research on some of the world's languages, for instance Daniel Everett's observations of the Pirahã language in the Amazon, suggests that they violate some of the specific precepts of Chomsky's models of universal grammar.[22] Creoles and pidgins were thought to support the Universal Grammar Hypothesis, but research demonstrates that pidgin learners systematize the language based on the probability and frequency of forms, not on universal grammar.[23] This criticizes Universal Grammar on the basis that languages are dynamic and not fixed. However, Chomsky has presented a proposed solution to such possible outliers, claiming that the fact that a language does not display certain features of Universal Grammar and the poverty of the stimulus does not mean that the fundamentals of these ideas do not exist in its speakers’ brains; the supposedly missing features may simply fail to surface because of extrinsic constraints.[24] Chomsky claims that all humans possess the capability of learning and using language on the basis of Universal Grammar and the poverty of the stimulus; if a group of speakers fails to display this, they are not lacking the ability, but simply its manifestation. This argument has become a topic of dispute and skepticism owing to several scholars' criticisms of Everett's work and the validity of his data.[25]
There is also criticism about whether negative evidence is really so rarely encountered by children. Pullum argues that learners probably do get certain kinds of negative evidence. In addition, if one allows for statistical learning, negative evidence is abundant. It has been proposed that if a language pattern is never encountered, but its probability of being encountered would be very high were it acceptable, then the language learner might be right in considering absence of the pattern as negative evidence.[13] Chomsky accepts that this kind of negative evidence plays a role in language acquisition, terming it "indirect negative evidence", though he does not think that indirect negative evidence is sufficient for language acquisition to proceed without Universal Grammar.[26] However, contra this claim, Ramscar and Yarlett (2007) designed a learning model that successfully simulates the learning of irregular plurals based on negative evidence, and backed the predictions of this simulation in empirical tests of young children. Ramscar and Yarlett suggest that failures of expectation function as forms of implicit negative feedback that allow children to correct their errors.[27]
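The notion of indirect negative evidence described above can be given a simple probabilistic reading, sketched below; the Poisson model and the numbers are assumptions made only for illustration, not a model drawn from the works cited:

```python
# If a hypothesized rule predicts that a construction should already have been heard
# many times, then never hearing it is itself informative. Under a Poisson model,
# the probability of total absence shrinks exponentially with the expected count.
import math

def probability_of_total_absence(expected_count):
    """P(zero occurrences) under a Poisson model with the given expected count."""
    return math.exp(-expected_count)

# Hypothetical case: the rule predicts about 20 occurrences in the input heard so far.
expected = 20.0
print(f"P(never hearing it if the rule were right) = {probability_of_total_absence(expected):.2e}")
# A vanishingly small probability: the absence itself can be treated as
# (indirect) negative evidence against the rule.
```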
As for the argument based on Gold's proof, it is not clear that human languages are truly capable of infinite recursion. No speaker can in fact produce a sentence with an infinitely recursive structure, and in certain cases (for example, center embedding), people are unable to comprehend sentences with even a few levels of recursion. Chomsky and his supporters have long argued that such cases are best explained by restrictions on working memory, since this provides a principled explanation for limited recursion in language use. Some critics argue that this removes the falsifiability of the premise. It is also questionable whether Gold's research has any bearing on the question of natural language acquisition at all, since what Gold showed is that there are certain classes of formal languages for which no learner can identify every language in the class from positive evidence alone. It is not clear that natural languages form such a class, and even if they do, the particular natural languages might still be among the learnable members of it.[28]
Finally, it has been argued that people may not learn exactly the same grammars as each other. If this is the case, then only a weak version of the third premise is true, as there would be no fully "correct" grammar to be learned. However, in many cases, poverty of the stimulus arguments do not in fact depend on the assumption that there is only one correct grammar, but rather that there is only one correct class of grammars. For example, the poverty of the stimulus argument from question formation depends only on the assumption that everyone learns a structure-dependent grammar.
See also
References
- ↑ Johnson, K., "Introduction to Transformational Grammar". University of Massachusetts Amherst. p. 2.
- ↑ Chomsky, N. (1980). Rules and representations. Oxford: Basil Blackwell.
- ↑ "UCL Psychology and Language Sciences". Ucl.ac.uk. 2016-01-15. Retrieved 2016-01-21.
- ↑ James McGilvray. The Cambridge Companion to Chomsky. Books.google.com. p. 42. Retrieved 2016-01-21.
- ↑ J. Holbo; B. Waring (2002). "Plato's Meno" (PDF). Idiom.ucsd.edu. Retrieved 2016-01-21.
- ↑ Pullum, Geoffrey K.; Scholz, Barbara C. (2002). "Empirical assessment of stimulus poverty arguments" (PDF). The Linguistic Review. 19: 9–50. doi:10.1515/tlir.19.1-2.9.
- ↑ "Notes from Two Scientific Psychologists: Poverty of Stimulus and Ecological Laws". Psychsciencenotes.blogspot.com. 2010-03-25. Retrieved 2016-01-21.
- ↑ "Faculty of Language: Poverty of Stimulus Redux". Facultyoflanguage.blogspot.com. 2012-11-15. Retrieved 2016-01-21.
- ↑ Gold, E. M. (1967). "Language Identification in the Limit". Information and Control. 10 (5): 447–474.
- ↑ Gold, E. Mark (1967). "Language identification in the limit" (PDF). Information and Control. 10 (5): 447–474. doi:10.1016/S0019-9958(67)91165-5.
- ↑ Clark, A. (2001). "Unsupervised Language Acquisition: Theory and Practice", DPhil thesis. PDF
- ↑ Lasnik, Howard; Uriagereka, Juan (2002). "On the Poverty of the Challenge" (PDF). The Linguistic Review. 19: 147–150.
- ↑ Pullum, Geoffrey K. (1996). Learnability, hyperlearning, and the poverty of the stimulus. In Proceedings of the 22nd Annual Meeting of the Berkeley Linguistics Society: General Session and Parasession on the Role of Learnability in Grammatical Theory, ed. J. Johnson, M.L. Juge, and J.L. Moxley, 498–513. Berkeley, California. HTML
- ↑ "The poverty of the stimulus argument and structure-dependency in L2 users of English". Homepage.ntlworld.com. Retrieved 2016-01-21.
- ↑ Vivian Cook. "Poverty of the stimulus effects in second language acquisition" (PDF). Idiom.ucsd.edu. Retrieved 2016-01-21.
- ↑ Bates, E.; Elman, J. (1996). "Learning Revisited". Science. 274 (5294): 1849–1850. doi:10.1126/science.274.5294.1849. PMID 8984644.
- ↑ Solan, Z.; Horn, D.; Ruppin, E.; Edelman, S. (2005). "Unsupervised learning of natural languages". Proceedings of the National Academy of Sciences. 102 (33): 11629–11634. doi:10.1073/pnas.0409746102.
- ↑ Klein, D. & Manning, C. (2002). A generative constituent-context model for improved grammar induction. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics. Philadelphia. 128–135.
- ↑ "On Chomsky and the Two Cultures of Statistical Learning". Norvig.com. Retrieved 2016-01-21.
- ↑ Amy Perfors; Joshua B. Tenenbaum; Terry Regier. "Poverty of the Stimulus? A Rational Approach" (PDF). Web.mit.edu. Retrieved 2016-01-21.
- ↑ "Indirect Evidence and the Poverty of the Stimulus: The Case of Anaphoric One - Foraker - 2009 - Cognitive Science". Onlinelibrary.wiley.com. Retrieved 2016-01-21.
- ↑ Daniel L. Everett. "Pirahã culture and grammar: A response to some criticisms" (PDF). Ling.auf.net. Retrieved 2016-01-21.
- ↑ "Getting it right by getting it wrong: When learners change languages". Ncbi.nlm.nih.gov. Retrieved 2016-01-21.
- ↑ "Noam Chomsky: You Ask The Questions", interview in The Independent, 28 August 2006
- ↑ Nevins, Andrew; Pesetsky, David; Rodrigues, Cilene (June 2009). "Pirahã Exceptionality: A Reassessment". Language. 85 (2): 355–404. doi:10.1353/lan.0.0107.
- ↑ Harnad, Stevan (2008). "Why and How the Problem of the Evolution of Universal Grammar (UG) is Hard". Behavioral and Brain Sciences. 31: 524–525. doi:10.1017/s0140525x08005153.
- ↑ Ramscar, M.; Yarlett, D. (2007). "Linguistic self-correction in the absence of feedback: A new approach to the logical problem of language acquisition" (PDF). Cognitive Science. 31: 927–960. doi:10.1080/03640210701703576. PMID 21635323.
- ↑ Johnson, Kent (2004). "Gold's theorem and cognitive science". Philosophy of Science. 71 (4): 571–592. doi:10.1086/423752.
Further reading
- Chomsky, N. (1988). Language and problems of knowledge. Cambridge, Massachusetts: MIT Press. ISBN 0-262-03133-7.
- Cowie, F. (2008). "Innateness and Language". Stanford Encyclopedia of Philosophy.
- Clark, A.; Lappin, S. (2010). Linguistic Nativism and the Poverty of the Stimulus. Wiley-Blackwell. ISBN 978-1-4051-8784-8.
- Kaplan, F.; Oudeyer, P-Y; Bergen, B. (2008). "Computational models in the debate over language learnability" (PDF). Infant and Child Development. 17 (1): 55–80. doi:10.1002/icd.544.
- Laurence, Stephen; Margolis, Eric (2001). "The Poverty of the Stimulus Argument". The British Journal for the Philosophy of Science. 52 (2): 217–276. doi:10.1093/bjps/52.2.217.
- Marcus, Gary F. (1993). "Negative evidence in language acquisition". Cognition. 46 (1): 53–85. doi:10.1016/0010-0277(93)90022-N. PMID 8432090.
- Legate, Julie; Yang, Charles (2002). "Empirical re-assessment of stimulus poverty arguments" (PDF). The Linguistic Review (19): 151–162.
- Reich, P. (1969). "The finiteness of natural language". Language. 45: 831–843. doi:10.2307/412337.
External links
- Essay critiquing the poverty of stimulus argument
- Essay on Gold's proof, learnability and feedback
- Encyclopedia Entry on Poverty of the Stimulus Arguments