

The Linguist’s Tale

of a problem in learnability theory known as “Baker’s Paradox”. Both
arguments exploit rather deep assumptions about the architecture of
theories of language development, and both have been influential;
sufficiently so to justify taking a detailed look at them. Most of the rest of
this chapter will be devoted to doing that.
The Bootstrapping Argument
A basic idea of Pinker’s is that some of the child’s knowledge of syntactic
structure is “bootstrapped” from knowledge about the semantic properties
of lexical items; in particular, from knowledge about the semantic
structure of verbs. The details are complicated but the outline is clear
enough. In the simplest sorts of sentences (like ‘John runs’, for example),
if you can figure out what syntactic classes the words belong to (that ‘John’
is a noun and ‘runs’ is an intransitive verb) you get the rest of the syntax
of the sentence more or less for free: intransitive verbs have to have NPs
as subjects, and ‘John’ is the only candidate around.
This sort of consideration suggests that a significant part of the child’s
problem of breaking into sentential syntax is identifying the syntax of
lexical items. So far so good. Except that it’s not obvious how properties
like being a noun or being an intransitive verb might signal their presence
in the learner’s input since they aren’t, in general, marked by features of the
data that the child can unquestion-beggingly be supposed to pick up.
There aren’t, for example, any acoustic or phonetic properties that are
characteristic of nouns as such or of verbs as such.
The problem with almost every nonsemantic property that I have heard proposed
as inductive bases [sic] is that the property is itself defined over configurations . . .
that are not part of the child’s input, that themselves have to be learned . . . [By
contrast] how the child comes to know such things, which are not marked
explicitly in the input stream, is precisely what the semantic bootstrapping
hypothesis is designed to explain. (Pinker 1984: 51)
Here’s how the explanation goes. Though (by assumption) the child
can’t detect being a noun, being a verb, being an adjective, etc. in the
“input stream”, he can (still by assumption) detect such putative reliable
semantic correlates of these syntactic properties as being a person or thing,
being an action or change of state, and being an attribute. (For more of
Pinker’s suggested pairings of syntactic properties with their semantic
correlates, see 1984: 41, table 2.1.) Thus, “when the child hears ‘snails eat
leaves,’ he or she uses the actionhood of ‘eat’ to infer that it is a verb, the
agenthood of ‘snails’ to infer that it plays the role of subject, and so on”
(ibid.: 53). In effect, the semantic analysis of the input sentence is
supposed somehow to be perceptually given; and the correspondence
between such semantic features as expressing a property and such syntactic
features as being an adjective is assumed to be universal. Using the two
together provides the child with his entering wedge.
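
To fix ideas, here is a minimal toy sketch of the proposal, written in
Python. It is not Pinker's model: the correlate-to-category pairings follow
the spirit of his table 2.1 as cited above, but the data structures, the
stipulated semantic analysis of 'snails eat leaves', and the mapping of
agents onto subjects are invented purely for exposition.

    # Toy illustration of the semantic bootstrapping idea (expository only,
    # not Pinker's model). The pairings of semantic correlates with syntactic
    # categories follow the spirit of Pinker 1984: 41, table 2.1; everything
    # else is stipulated for the example.
    SEMANTIC_TO_SYNTACTIC = {
        "person_or_thing": "N",            # persons/things -> nouns
        "action_or_change_of_state": "V",  # actions/changes of state -> verbs
        "attribute": "ADJ",                # attributes -> adjectives
    }

    # The hypothesis assumes the child can recover something like this
    # semantic analysis of the input "perceptually", from the situation.
    SEMANTIC_ANALYSIS = {
        "snails": {"correlate": "person_or_thing", "role": "agent"},
        "eat": {"correlate": "action_or_change_of_state", "role": None},
        "leaves": {"correlate": "person_or_thing", "role": "patient"},
    }

    def bootstrap(words):
        """Infer syntactic categories and a crude clause skeleton from semantics."""
        parse = {}
        for word in words:
            info = SEMANTIC_ANALYSIS[word]
            category = SEMANTIC_TO_SYNTACTIC[info["correlate"]]
            # Agents are assumed to surface as subjects, patients as objects.
            function = {"agent": "subject", "patient": "object"}.get(info["role"], "predicate")
            parse[word] = (category, function)
        return parse

    print(bootstrap(["snails", "eat", "leaves"]))
    # -> {'snails': ('N', 'subject'), 'eat': ('V', 'predicate'), 'leaves': ('N', 'object')}

Note that SEMANTIC_ANALYSIS is simply stipulated here; the complaint
developed in what follows is precisely that the theory owes an account of
how the child detects the properties that labels like 'agent' and
'attribute' denote.
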
Now, prima facie at very least, this seems to be a compact example of
two bad habits that lexical semanticists are prone to: kicking the problem
upstairs (‘How does the child detect whatever property it is that ‘attribute’
denotes?’ replaces ‘How does the child detect whatever property it is that
‘adjective’ denotes?’); and a partiality for analyses that need more analysis
than their analysands. One sort of knows what an adjective is, I guess. But
God only knows what’s an attribute, so God only knows what it is for a
term to express one.
The point isn’t that ‘attribute’ isn’t well defined; I suppose theoretical
terms typically aren’t. Rather, the worry is that Pinker has maybe got the
cart before the horse; perhaps the intuition that ‘red’ and ‘12’ both express
“attributes” (the first of, as it might be, hens (cf. ‘red hens’), and the second
of, as it might be, sets (cf. ‘twelve hens’)) isn’t really semantical at all;
perhaps it’s just a hypostatic misconstrual of the syntactic fact that both
words occur as modifiers of nouns.12 It’s undeniable that ‘red’ and ‘twelve’
are more alike than, as it might be, ‘red’ and ‘of’. But it’s a fair question
whether their similarity is semantic or whether it consists just in the
similarity of their syntactic distributions. Answering these questions in
the way that Pinker wants us to (viz. ‘Yes’ to the first, ‘No’ to the second)
depends on actually cashing notions like object, attribute, agent, and the
rest; on saying what exactly it is that the semantics of two words have in
common in so far as both words ‘denote attributes’. So far, however, there
is nothing on offer. Rather, at this point in the discussion, Pinker issues a
kind of disclaimer that one finds very often in the lexical semantics
literature: “I beg the thorny question as to the proper definition of the
various semantic terms I appeal to such as ‘agent,’ ‘physical object’, and
the like” (ibid.: 371 n. 12). Note the tactical similarity to Jackendoff, who,
as we’ve seen, says that ‘keep’ means CAUSE A STATE TO ENDURE,
but is unprepared to say much about what ‘CAUSE A STATE TO
ENDURE’ means (except that it’s ineffable).
Digression on method. You might suppose that in “begging the thorny
question”, Pinker is merely exercising a theorist’s indisputable right not to
provide a formal account of the semantics of the (meta)language in which
he does his theorizing. But that would misconstrue the logic of intentional
explanations. When Pinker says that the child represents the snail as an
agent, ‘agent’ isn’t just a term of art that’s being used to express a concept
of the theorist’s; it’s also, simultaneously, being used to express a concept
that the theorist is attributing to the child. It serves as part of a de dicto
characterization of the intentional content of the child’s state of mind,
and the burden of the theory is that it’s the child’s being in a state of mind
with that content that explains the behavioural data. In this context, to
refuse to say what state of mind it is that’s being attributed to the child
simply vitiates the explanation. Lacking some serious account of what
‘agent’ means, Pinker’s story and the following are closely analogous:
—Why did Martha pour water over George?
—Because she thinks that George is flurg.
—What do you mean, George is flurg?
—I beg that thorny question.
If a physicist explains some phenomenon by saying ‘blah, blah, blah,
because it was a proton . . .’, being a word that means proton is not a
property his explanation appeals to (though, of course, being a proton is).
That, basically, is why it is not part of the physicist’s responsibility to
provide a linguistic theory (e.g. a semantics) for ‘proton’. But the
intentional sciences are different. When a psychologist says ‘blah, blah,
blah, because the child represents the snail as an agent . . .’, the property
of being an agent-representation (viz. being a symbol that means agent) is
appealed to in the explanation, and the psychologist owes an account of
what property that is. The physicist is responsible for being a proton but not
for being a proton-concept; the psychologist is responsible for being an
agent-concept but not for being an agent-concept-ascription. Both the
physicist and the psychologist are required to theorize about the properties
they ascribe, and neither is required to theorize about the properties of the
language he uses to ascribe them. The difference is that the psychologist
is working one level up. I think confusion on this point is simply rampant
in linguistic semantics. It explains why the practice of ‘kicking semantic
problems upstairs’ is so characteristic of the genre.
We’ve encountered this methodological issue before, and will encounter
it again. I do hate to go on about it, but dodging the questions about the
individuation of semantic features (in particular, about what semantic
features denote) lets lexical semanticists play with a stacked deck. If the
examples work, they count them for their theory. If they don’t work, they
count them as metaphorical extensions. I propose that we spend a couple
of pages seeing how an analysis of this sort plays out.

12 For an account of language acquisition in which the horse and cart are
assigned the opposite configuration—syntax bootstraps semantics—see Gleitman
1990. To the extent that we have some grasp on what concepts terms like 'S',
'NP', 'ADJ' express, the theory that children learn by syntactic bootstrapping
is at least better defined than Pinker's. (And to the extent that we don't,
it's not.)
