The Linguist’s Tale
The Demise of Definitions, Part I

keep these questions distinct if you care about the structure of concepts. It’s
especially important if what you care about is whether “kill”, “eat”, and
the like have definitions; i.e. whether KILL, EAT, and the like are complex
concepts or conceptual primitives. To say, in the present context, that there
are semantic features is just to say that semantic facts can have syntactic
reflexes: what an expression means (partially) determines the contexts in
which it is syntactically well-formed. To say that there is a semantic level
is to make a very much stronger claim: viz. that there is a level of
representation at which only the semantic properties of expressions are
specified, hence at which synonymous expressions get the same
representations, hence at which the surface integrity of lexical items is not
preserved. I am, as no doubt the reader will have gathered, much inclined
to deny both these claims; but never mind that for now. My present
concern is just to emphasize the importance of the difference between
them.
For many of the familiar tenets of lexical semantics flow from the
stronger claim but not from the weaker one. For example, since everybody
thinks that the concepts expressed by phrases are typically complex, and
since, by definition, representations at the semantic level abstract from the
lexical and syntactic properties that distinguish phrases from their lexical
synonyms, it follows that if there is a semantic level, then the concepts
expressed by single words are often complex too. However, this conclusion
does not follow from the weaker assumption: viz. that lexical entries
contain semantic features. Linguistic features can perfectly well attach to
a lexical item that is none the less primitive at every level of linguistic
description.14 And it’s only the weaker assumption that the facts about
dative movement and the like support, since the most these data show is
that the syntactic behaviour of lexical items is determined by their
semantics inter alia; e.g. by their semantic features together with their
morphology. So Pinker’s argument for definitions doesn’t work even on the
assumption that ‘denotes a prospective possession’ and the like are bona fide
semantic representations.
THE MORAL: AN ARGUMENT FOR LEXICAL SEMANTIC
FEATURES IS NOT IPSO FACTO AN ARGUMENT THAT THERE
IS LEXICAL SEMANTIC DECOMPOSITION!!! Pardon me if I seem to
shout; but people do keep getting this wrong, and it does make a litter of
the landscape.
14 Compare: no doubt, the lexical entry for ‘boy’ includes the syntactic feature +Noun.
This is entirely compatible with ‘boy’ being a lexical primitive at every level of linguistic
description.
Saying that lexical items have features is one thing; saying that lexical items are feature
bundles is quite another. Do not conflate these claims.
Well, but has Pinker made good even the weaker claim? Suppose we
believe the semantic bootstrapping story about language learning; and
suppose we pretend to understand notions like prospective possession,
attribute, and the like; and suppose we assume that these are, as it were,
really semantic properties and not mere shadows of distributional facts
about the words that express them; and suppose we take for granted the
child’s capacity for finding such semantic properties in his input; and
suppose that the question we care about is not whether there’s a semantic
level, but just whether the mental lexicon (ever) represents semantic
features of lexical items. Supposing all of this, is there at least a
bootstrapping argument that, for example, part of the lexical entry for
‘eat’ includes the semantic feature ACTION?
Well, no. Semantic bootstrapping, even if it really is semantic, doesn’t
require that lexical entries ever specify semantic properties. For even if the
child uses the knowledge that ‘eat’ denotes an action to bootstrap the
syntax of ‘snails eat leaves’, it doesn’t follow that “denoting an action” is
a property that “eat” has in virtue of what it means. All that follows—
hence all the child needs to know in order to bootstrap—is that ‘eat’
denotes eating and that eating is a kind of acting. (I’m indebted to Eric
Margolis for this point.) Indeed, mere reliability of the connection between
eating and acting would do perfectly well for the child’s purposes;
“semantic bootstrapping” does not require the child to take the connection
to be semantic or even necessary. The three-year-old who thinks (perhaps
out of Quinean scruples) that ‘eating is acting’ is true but contingent will
do just fine, so long as he’s prepared to allow that contingent truths can
have syntactic reflexes.
So much for the bootstrapping argument. I really must stop this
grumbling about lexical semantics. And I will, except for a brief,
concluding discussion of Pinker’s handling of (what he calls) ‘Baker’s
Paradox’ (after Baker 1979). This too amounts to a claim that ontogenetic
theory needs lexical semantic representations; but it makes quite a different
sort of case from the one we’ve just been looking at.
The ‘Baker’s Paradox’ Argument
Pinker thinks that, unless children are assumed to represent ‘eat’ as an
action verb (mutatis mutandis, ‘give’ as a verb of prospective possession,
etc.), Baker’s Paradox will arise and make the acquisition of lexical syntax
unintelligible. I’ll tell you what Baker’s Paradox is in a moment, but I want
to tell you what I think the bottom line is first. I think that Baker’s Paradox
is a red herring in the present context. In fact, I think that it’s two red
herrings: on Pinker’s own empirical assumptions, there probably isn’t a
Baker’s Paradox about learning the lexicon; and, anyhow, assuming that
there is one provides no argument that lexical items have semantic
structure. Both of these points are about to emerge.
Baker’s Paradox, as Pinker understands it, is a knot of problems that
turn on the (apparent) fact that children (do or can) learn the lexical syntax
of their language without much in the way of overt parental correction.
Pinker discerns “three aspects of the problem [that] give it its sense of
paradox”, these being the child’s lack of negative evidence, the
productivity of the structures the child learns (“if children simply stuck
with the argument structures that were exemplified in parental speech . . .
they would never make errors . . . and hence would have no need to figure
out how to avoid or expunge them”), and the “arbitrariness” of the
linguistic phenomena that the child is faced with (specifically “near
synonyms [may] have different argument structures” (1989: 8–9)). If, for
example, the rule of dative movement is productive, and if it is merely
arbitrary that you can say ‘John gave the library the book’ but not *‘John
donated the library the book’, how, except by being corrected, could the
child learn that the one is OK and the other is not?
That’s a good question, to be sure; but it bears full stress that the three
components do not, as stated and by themselves, make Baker’s Paradox
paradoxical. The problem is an unclarity in Pinker’s claim that the rules
the child is acquiring are ‘productive’. If this means (as it usually does in
linguistics) just that the rules are general (they aren’t mere lists; they go
‘beyond the child’s data’) then we get no paradox but just a standard sort
of induction problem: the child learns more than the input shows him,
and something has to fill the gap. To get a paradox, you have to throw in
the assumption that, by and large, children don’t overgeneralize; i.e. that,
by and large, they don’t apply the productive rules they’re learning to
license usages that count as mistaken by adult standards. For suppose that
assumption is untrue and the child does overgeneralize. Then, on
anybody’s account, there would have to be some form of correction
mechanism in play, endogenous or otherwise, that serves to expunge the
child’s errors. Determining what mechanism(s) it is that serve(s) this
function would, of course, be of considerable interest; especially on the
assumption that it isn’t parental correction. But so long as the child does
something that shows the world that he’s got the wrong rule, there is
nothing paradoxical in the fact that information the world provides
ensures that he eventually converges on the right one.
To repeat, Baker’s Paradox is a paradox only if you add ‘no overgeneralizations’
to Pinker’s list. The debilitated form of Baker’s Paradox
that you get without this further premiss fails to do what Pinker very much
wants Baker’s Paradox to do; viz. “[take] the burden of explaining learning
out of the environmental input and [put] it back into the child” (1989:
14–15). Only if the child does not overgeneralize lexical categories is there
evidence for his “differentiating [them] a priori” (ibid.: 44, my emphasis);
viz. prior to environmentally provided information.
Pinker’s argument is therefore straightforwardly missing a premiss. The
logical slip seems egregious, but Pinker really does make it, as far as I can
tell. Consider:
[Since there is empirical evidence against the child’s having negative information,
and there is empirical evidence for the child’s rules being productive,] the only way
out of Baker’s Paradox that’s left is . . . rejecting arbitrariness. Perhaps the verbs
that do or don’t participate in these alternations do not belong to arbitrary lists
after all . . . [Perhaps, in particular, these classes are specifiable by reference to
semantic criteria.] . . . If learners could acquire and enforce criteria delineating
the[se] . . . classes of verbs, they could productively generalize an alternation to
verbs that meet the criteria without overgeneralizing it to those that do not.
(ibid.: 30)
Precisely so. If, as Pinker’s theory claims, the lexical facts are non-arbitrary
and children are sensitive to their non-arbitrariness, then the right
prediction is that children don’t overgeneralize the lexical rules.
Which, however, by practically everybody’s testimony, including
Pinker’s, children reliably do. On Pinker’s own account, children aren’t
“conservative” in respect of the lexicon (see 1989: 19–26, sec. 1.4.4.1 for
lots and lots of cases).15 This being so, there’s got to be something wrong
with the theory that the child’s hypotheses “differentiate” lexical classes a
priori. A priori constraints would mean that false hypotheses don’t even get
tried. Overgeneralization, by contrast, means that false hypotheses do get
tried but are somehow expunged (presumably by some sort of information
that the environment supplies).
At one point, Pinker almost ’fesses up to this. The heart of his strategy
for lexical learning is that “if the verbs that occur in both forms have some
[e.g. semantic] property . . . that is missing in the verbs that occur [in the
input data] in only one form, bifurcate the verbs . . . so as to expunge
nonwitnessed verb forms generated by the earlier unconstrained version of
the rule if they violate the newly learned constraint” (1989: 52). Pinker
admits that this may “appear to be using a kind of indirect negative
evidence: it is sensitive to the nonoccurrence of certain kinds of verbs”. To
be sure; it sounds an awful lot like saying that there is no Baker’s Paradox
for the learning of verb structure, hence no argument for a priori semantic
15 Though the facts are a little labile, to be sure. For some recent data, see Marcus et al.
1992.
