Semantics
In specific contexts, natural language utterances carry remarkably
precise content. Consider the last sentence in this discourse:
I want to hold a barbecue. Some vegetarians may be coming. What can
I do for them?
A general gloss for it is unsatisfying:
What actions for me to take towards vegetarians are logically possible?
It means something much more specific:
Assuming vegetarians come to my barbecue, what actions will be
available to me (in virtue of the properties of barbecues and
vegetarians in the actual world) that will contribute to making the
event a success?
Computational semantics is devoted to the specification of an
implemented theory that would describe correct and precise meanings,
such as these, for utterances in context.
Computational semantics shares with formal semantics research in
linguistics and philosophy an absolute commitment to formalizing the
meanings of sentences and discourses exactly. The differences among
these fields reflect their overall enterprises. Linguistic
semantics, for example, seeks an account of human knowledge
of meaning that explains crosslinguistic variation and human
language learnability. Philosophical semantics aims to situate
knowledge of meaning within a general understanding of the
intentionality of human mental states. The distinctive concerns of
computational semantics include the following questions:
- How can ambiguities and contextual dependencies in the meanings of sentences be represented compactly?
- How can they be resolved automatically and efficiently?
- How can semantic representations be related to other computational representations of the world?
- How can we compute the inferential consequences of semantic representations?
These concerns lead to a focus on ontology and reference
within computational semantics.
- Our focus on ontology responds to the fundamental challenge of interpreting language in terms of the language-independent concepts of some underlying domain of discourse.
- Our focus on reference reflects the fact that the bridge between language and the world is not accomplished directly, by predefined semantic conventions, but rather is mediated by speakers' particular intentions in particular utterances to evoke particular elements of an underlying ontology.
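To make these concerns concrete, here is a minimal sketch in Python of one familiar strategy; the representation, the toy domain, and every name in it are invented for illustration rather than drawn from any particular published system. The meaning of "What can I do for them?" keeps its contextual dependency as an unresolved variable, and resolution matches that variable's presupposed description against the entities of an underlying domain of discourse.

from dataclasses import dataclass

@dataclass
class Referent:
    name: str
    properties: frozenset

# Toy domain of discourse evoked by the barbecue example.
DOMAIN = [
    Referent("barbecue-1", frozenset({"event", "barbecue"})),
    Referent("guests-1", frozenset({"group", "vegetarian"})),
]

# Underspecified meaning of "What can I do for them?": the pronoun
# contributes a variable plus a presupposed description of its referent.
underspecified = {
    "query": "available-actions(speaker, X, barbecue-1)",
    "anaphor": ("X", frozenset({"group"})),
}

def resolve(anaphor, domain):
    """Resolve an anaphoric variable to the salient referents whose
    properties satisfy its presupposed description."""
    variable, description = anaphor
    return [r for r in domain if description <= r.properties]

print(resolve(underspecified["anaphor"], DOMAIN))
# -> the referent guests-1, so inference can proceed about vegetarian guests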
This focus identifies a level, suggested by Bonnie Webber and developed
in joint work with Ali Knott, Aravind Joshi and myself,
in which discourse is organized into multi-clause descriptions
of entities, including individuals, sets, eventualities, situations,
etc. This level of discourse exploits the same syntactic and semantic
mechanisms as the clause; its meaning derives from familiar semantic
principles: anaphoric presupposition, compositional semantics, and
defeasible inference. Key questions for this perspective are then:
What kinds of entities do discourses describe? And, what kinds of
linguistic, discourse and inferential resources mediate the
description? Obviously, there is a close connection between this
perspective and the perspective of my generation
research. This is no accident.
Publications
Stone 92
Matthew Stone.
Or and Anaphora.
Proceedings of SALT 2, 1992, pages 367--385.
This paper argues that an analysis of pronouns as descriptions is
required to account for sentences with disjunctive split antecedents.
Here is one of the catchy examples:
It's interesting what happens if a man calls a woman or a woman calls
a man. Sure, they're nervous about making the call, and they're
surprised to get it. But even today, she waits for him to ask her out.
Today, I might take a more proof-theoretic view.
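As a rough illustration of the descriptive analysis (the encoding below is a toy of my own, not the paper's formalism), "she" cannot simply be bound to either indefinite, since either disjunct may supply the woman; treating it as the description "the woman involved in the calling event" resolves it in both cases.

def interpret_she(caller, callee):
    """Resolve "she" as a description: the unique woman involved in the
    salient calling event, whichever disjunct actually holds."""
    women = [x for x in (caller, callee) if x["gender"] == "f"]
    return women[0]["role"] if len(women) == 1 else None

# "a man calls a woman"  -> she is the woman who was called
print(interpret_she({"role": "caller", "gender": "m"},
                    {"role": "callee", "gender": "f"}))
# "a woman calls a man"  -> she is the woman who made the call
print(interpret_she({"role": "caller", "gender": "f"},
                    {"role": "callee", "gender": "m"}))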
Stone 94
Matthew Stone.
The Reference Argument of Epistemic Must.
Proceedings of IWCS 1, 1994, pages 181--190.
This paper argues that an utterance of must p refers to a
salient, justified argument in the context which supports p.
It is the strength of the argument and the speaker's intention in
referring to it that accounts for the strength of must
in some contexts:
Now [Af] and [Af] must both be tangent points on the T
component in the f-plane; otherwise by Lemma 1 the component
would extend beyond these points.
and the weakness of must in others:
The handsome bird was solitary; its mate must be at home, silently
guarding the nest.
Today, new research on presupposition in semantics and argumentation
in artificial intelligence could be used to make the case stronger and
more precise.
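A schematic rendering of the proposal (the data structures below are hypothetical and far simpler than the paper's analysis): the context supplies arguments, "must p" picks out a salient one supporting p, and the kind of argument it finds is what makes the modal feel strong or weak.

from dataclasses import dataclass

@dataclass
class Argument:
    conclusion: str
    kind: str  # e.g. "deductive" (strong must) or "defeasible" (weak must)

# Arguments salient in the context, as a discourse model might record them.
context = [
    Argument("the points are tangent points", "deductive"),
    Argument("the mate is at home guarding the nest", "defeasible"),
]

def interpret_must(p, context):
    """Resolve "must p" to a salient justified argument for p."""
    return next((a for a in context if a.conclusion == p), None)

print(interpret_must("the mate is at home guarding the nest", context))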
Stone and Hardt 97
Matthew Stone and Daniel Hardt.
Dynamic Discourse Referents for Tense and Modals.
Proceedings of IWCS 2, 1997, pages 287--299.
Tense and modality are often thought to be anaphoric. In this paper,
we argue that tense and modals, just like all other discourse
anaphors, participate in strict-sloppy ambiguities under deletion:
John would give slides if he had to give the presentation. Bill would
just use the chalkboard.
We apply Hardt's theory of dynamic discourse referents to give such
ambiguities an account that exactly parallels Hardt's treatment of
pronouns and verb phrase ellipsis.
This paper has been revised for the conference proceedings book.
Here is the original version that
appeared at the conference.
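A rough sketch of the ambiguity described above (plain strings stand in for the dynamic discourse referents of the paper; this is an illustration, not the formalism): the elided conditional antecedent can be copied strictly, keeping John's scenario, or sloppily, re-centered on Bill.

def ellipsis_readings(antecedent_scenario, old_subject, new_subject):
    """Copy the antecedent's conditional scenario strictly (unchanged) or
    sloppily (rebound to the new subject), as with pronouns and VP ellipsis."""
    return {
        "strict": antecedent_scenario,
        "sloppy": antecedent_scenario.replace(old_subject, new_subject),
    }

readings = ellipsis_readings("John has to give the presentation", "John", "Bill")
print(readings["strict"])  # Bill would use the chalkboard if John had to present
print(readings["sloppy"])  # Bill would use the chalkboard if Bill had to present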
Stone 99
Matthew Stone.
Reference to Possible Worlds. RuCCS Report 49, Rutgers University,
April 1999.
In modal subordination, a modal sentence is interpreted relative to a
hypothetical scenario introduced in an earlier sentence:
There may be other 1961 state committee retirements come April 18,
but they will be leaving by choice of the Republican voters.
In this
paper, I argue that this phenomenon reflects the fact that the
interpretation of modals is an anaphoric process, precisely analogous
to the anaphoric interpretation of tense. Modal morphemes introduce
alternative scenarios as entities into the discourse model; their
interpretation depends on evoking scenarios for described, reference
and speech points, and relating them to one another.
The current version is a revision of an earlier paper, The Anaphoric Parallel between Modality
and Tense.
IRCS Report 97-06, University of Pennsylvania, May 1997.
That paper is also available directly from Penn.
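In rough computational terms (a toy model invented for this page, not the report's formal system), a modal like may pushes a hypothetical scenario into the discourse model as a referent, and a later modal like will can resolve anaphorically to that scenario rather than to the actual world.

discourse_model = []

def may(proposition):
    """Introduce a hypothetical scenario into the discourse model as a
    referent, alongside the actual world."""
    scenario = {"hypothetical": True, "facts": [proposition]}
    discourse_model.append(scenario)
    return scenario

def will_subordinated(proposition):
    """Modal subordination: interpret the claim relative to the most
    salient evoked scenario rather than the actual world."""
    scenario = discourse_model[-1]  # anaphoric resolution of the modal
    scenario["facts"].append(proposition)
    return scenario

may("there are other 1961 state committee retirements come April 18")
print(will_subordinated("the retirees leave by choice of the Republican voters"))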
Webber et al. to appear
Bonnie Webber, Matthew Stone, Aravind Joshi, and Alistair Knott.
Anaphora and Discourse Semantics.
To appear in Computational Linguistics.
We argue that many discourse relations can be explained
non-structurally in terms of the grounding of anaphoric
presuppositions. This simplifies discourse structure, giving it
a straightforward compositional semantics that still realizes a
full range of discourse relations. This
example, with its many cue words:
On the one hand, John loves Barolo. So he ordered three cases
of the '97. On the other hand, because he's broke, he then had to
cancel the order.
suggests the advantage of using
multiple devices like structure and presupposition to account for
discourse relations. An earlier version was Discourse Relations: A Structural and
Presuppositional Account Using Lexicalized TAG. Proceedings of
ACL, 1999, pages 41--48.
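To give a feel for the non-structural account (the cue-word lexicon and discourse model below are invented for illustration), a word like then carries an anaphoric presupposition; grounding that presupposition against an eventuality already in the discourse model yields the discourse relation without additional tree structure.

discourse_model = ["John ordered three cases of the '97"]  # eventualities so far

# Each cue word carries an anaphoric presupposition to be grounded.
cue_words = {
    "then": "a salient prior eventuality that the new eventuality follows",
}

def ground(cue, clause):
    """Ground the cue word's presupposition in the discourse model and
    read off the relation it establishes."""
    assert cue in cue_words
    antecedent = discourse_model[-1]  # anaphoric resolution
    discourse_model.append(clause)
    return ("sequence", antecedent, clause)

print(ground("then", "John had to cancel the order"))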
Stone 00
Matthew Stone. Towards a Computational Account of Knowledge,
Action and Inference in Instructions. To appear in Journal of
Language and Computation, 2000.
I consider abstract instructions, which provide
indirect descriptions of actions in cases where the speaker has key
information that a hearer can use to identify the right action to
perform, but the speaker alone cannot identify that action. The
communicative effects of such instructions, that the hearer should
know what to do, are in effect implicatures. Here's a typical
abstract instruction:
Enter your name on box one of the form.
This paper sketches a computational framework for constructing
and recognizing communicative intent for abstract instructions, by
reasoning from a computational semantic theory.
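A schematic sketch of the reasoning involved (the knowledge bases and the name below are invented): the speaker's instruction describes the needed value only indirectly, the hearer supplies it from private knowledge, and the intended effect is that the hearer, not the speaker, ends up knowing the concrete action to perform.

speaker_knows = {"box one": "the hearer's name"}      # indirect description
hearer_knows = {"the hearer's name": "Chris Smith"}   # hypothetical private knowledge

def identify_action(slot):
    """The hearer resolves the speaker's description using private
    knowledge, yielding an action neither party could specify alone."""
    description = speaker_knows[slot]
    value = hearer_knows[description]
    return f"write '{value}' in {slot} of the form"

print(identify_action("box one"))  # -> write 'Chris Smith' in box one of the form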
Links