constraints on the child’s hypotheses about lexical syntax. What happens, on this view, is that the child overgeneralizes, just as you would expect, but the overgeneralizations are inhibited by lack of positive supporting evidence from the linguistic environment and, for this reason, they eventually fade away. This would seem to be a perfectly straightforward case of environmentally determined learning, albeit one that emphasizes (as one might have said in the old days) ‘lack of reward’ rather than ‘punishment’ as the signal that the environment uses to transmit negative data to the learner. I’m not, of course, suggesting that this sort of story is right. (Indeed, Pinker provides a good discussion of why it probably isn’t; see section 1.4.3.2.) My point is that Pinker’s own account seems to be no more than a case of it. What is crucial to Pinker’s solution of Baker’s Paradox isn’t that he abandons arbitrariness; it’s that he abandons ‘no negative data’.
Understandably, Pinker resists this diagnosis. The passage cited above continues as follows:
This procedure might appear to be using a kind of indirect negative evidence; it is sensitive to the nonoccurrence of certain kinds of forms. It does so, though, only in the uninteresting sense of acting differently depending on whether it hears X or doesn’t hear X, which is true of virtually any learning algorithm . . . It is not sensitive to the nonoccurrence of particular sentences or even verb-argument structure combinations in parental speech; rather it is several layers removed from the input, looking at broad statistical patterns across the lexicon. (1989: 52)
I don’t, however, think this comes to anything much. In the first place, it’s not true (in any unquestion-begging sense) that “virtually any learning algorithm [acts] differently depending on whether it hears X or doesn’t hear X”. To the contrary, it’s a way of putting the productivity problem that the learning algorithm must somehow converge on treating infinitely many unheard types in the same way that it treats finitely many of the heard types (viz. as grammatical) and finitely many heard types in the same way that it treats a different infinity of the unheard ones (viz. as ungrammatical). To that extent, the algorithm must not assume that either being heard or not being heard is a projectible property of the types.
On the other hand, every treatment of learning that depends on the feedback of evidence at all (whether it supposes the evidence to be direct or indirect, negative or positive, or all four) must “be several layers removed from the input, looking at broad statistical patterns across the lexicon”; otherwise the presumed feedback won’t generalize. It follows that, on anybody’s account, the negative information that the environment provides can’t be “the nonoccurrence of particular sentences” (my emphasis); it’s got to be the non-occurrence of certain kinds of sentences.
This much is common ground to any learning theory that accounts for the productivity of what is learned.
Where we’ve gotten to now: probably there isn’t a Baker’s Paradox about lexical syntax; you’d need ‘no overgeneralization’ to get one, and ‘no overgeneralization’ is apparently false of the lexicon. Even if, however, there were a Baker’s Paradox about the lexicon, that would show that the hypotheses that the child considers when he makes his lexical inductions must be tightly endogenously constrained. But it wouldn’t show, or even suggest, that they are hypotheses about semantic properties of lexical items. No more than the existence of a bona fide Baker’s Paradox for sentential syntax—which it does seem that children hardly ever overgeneralize—shows, or even suggests, that it’s in terms of the semantic properties of sentences that the child’s hypotheses about their syntax are defined.
So much for Pinker’s two attempts at ontogenetic vindications of lexical semantics. Though neither seems to work at all, I should emphasize a difference between them: whereas the ‘Baker’s Paradox’ argument dissolves upon examination, there’s nothing wrong with the form of the bootstrapping argument. For all that I’ve said, it could still be true that lexical syntax is bootstrapped from lexical semantics. Making a convincing case that it is would require, at a minimum, identifying the straps that the child tugs and showing that they are bona fide semantic; specifically, it would require showing that the lexical properties over which the child generalizes are typically among the ones that semantic-level lexical representations specify. In principle, we could get a respectable argument of that form tomorrow; it’s just that, so far, there aren’t any. So too, in my view, with the other ‘empirical’ or ‘linguistic’ arguments for lexical decomposition; all that’s wrong with them is that they aren’t sound.

Oh, well, so be it. Let’s go see what the philosophers have.
It’s a sad truth about definitions that even their warm admirers rarely loved them for themselves alone. Cognitive scientists (other than linguists; see Chapter 3) cared about definitions because they offered a handy construal of the thesis that many concepts are complex; viz. the concepts in a definition are the constituents of the concept it defines. And cognitive scientists liked many concepts being complex because then many concepts could be learned by assembling them from their parts. And cognitive scientists liked many concepts being learned by assembling them from their parts because then only the primitive parts have to be unlearned. We’ll see, in later chapters, how qualmlessly most of cognitive science dropped definitions when it realized that it could have complex concepts without them.
Something like that went on in philosophy too. Philosophers cared about definitions because they offered a handy construal of the thesis that inferential connections are sometimes intrinsic to the concepts that enter into them: viz. complex concepts are constituted by their inferential relations to the concepts in their definitions. Correspondingly, philosophical affection for definitions waned when intrinsic conceptual connectedness fell into disrepute (as it did in the US in consequence of Quine’s strictures on analyticity) and when epistemological construals of intrinsic conceptual connectedness bade fair to displace semantic ones (as they did in the UK in the criteriological philosophy of Wittgenstein and his followers).
Philosophers do like the idea of there being lots of intrinsic connections among concepts; even philosophers who think there aren’t any often sort of wish that there were. The idea is that an inference that constitutes the concepts which enter into it can be known a priori to be sound. And
