Computational semantics
Computational applications throw the problem of the
context-dependence of meaning into particularly sharp relief. The
interpretation of utterances rests on general knowledge of meaning,
but reveals specific contributions that reflect the purpose and
direction of the ongoing conversation. Semantic theory describes
these connections by expressing context-dependent meanings in terms
of two constructs: anaphors, semantic variables that take on
salient shared values; and presuppositions, conditions on the
values of these variables that must be supported by salient facts
from the context.
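The division of labor between these two constructs can be illustrated with a minimal sketch (the function and data here are hypothetical illustrations, not drawn from any of the implemented systems described below): an anaphor is resolved by searching the salient entities in the context, and its presupposition acts as a filter that candidate values must satisfy.

```python
# Illustrative sketch (hypothetical, not the author's system):
# resolving an anaphor by finding a salient contextual value
# that satisfies the anaphor's presupposition.

def resolve(presupposition, context):
    """Return the most salient value satisfying the presupposition.

    context: salient entities, ordered most salient first.
    presupposition: a predicate the antecedent must satisfy.
    """
    for value in context:
        if presupposition(value):
            return value
    raise ValueError("presupposition failure: no suitable antecedent")

# Context after "Chris bought a car.": the car and Chris are salient.
context = [
    {"kind": "car", "owner": "Chris"},
    {"kind": "person", "name": "Chris"},
]

# "It was expensive." -- the pronoun presupposes a non-person antecedent.
it = resolve(lambda v: v["kind"] != "person", context)
assert it["kind"] == "car"
```

The point of the sketch is only the architecture: interpretation succeeds when the context supplies a value meeting the stated conditions, and fails (a presupposition failure) when it does not.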
I have built on this theory to develop formal and
computational accounts of the context-dependent interpretation of a
range of forms in English, including disjunction, as realized by
the English word or (Semantics
and Linguistic Theory, 1992); signals of the evidential
status of a conclusion, as realized in English by must (Computational Semantics Workshop,
1994); signals of temporal and hypothetical relationships
within discourse, as realized by morphemes for tense and modality
including would in English (Computing Meaning, 1999, with Daniel
Hardt); and signals of other argumentative and domain
connections between sentences in discourse, as realized by English
markers such as for example, then and
otherwise (as Bonnie Webber, Aravind Joshi and Alistair
Knott and I describe in ACL 1999
and a paper to appear in Computational
Linguistics).
My interest in such dependencies lies in how English speakers
stitch together explanations of actions and plans. My 2000 Journal of Language and
Computation paper provides an implemented case study of how
such context-dependency can prove essential in providing agents
with the information they need to act.
Current and Future Directions
Advances in computational models of language use set clear
challenges and opportunities for the future of natural language
research. By drawing on existing computational infrastructure, we
can translate NL research results into engaging prototypes that
motivate new, practical dialogue applications. My ongoing research
continues my longstanding commitment to developing such prototypes
(starting with my role in Cassell's Animated Conversation work
presented at SIGGRAPH and at
the Cognitive Science Society
in 1994, and continuing to a paper in the
Computer Animation conference 2002 with colleagues Doug
DeCarlo, Corey Revilla and Jennifer Venditti). At the same
time, these testbeds make it possible to evaluate new NL modules
and algorithms more thoroughly, in the context of working systems,
and to further empirical investigations of language use by
experimental methods and computational analysis. This
methodological cross-fertilization is in its infancy (Jennifer Venditti and I have initial results
for the 2002 Conference on Speech Prosody). But it
suggests a future for the science of language in which a single
program of investigation can not only improve the capabilities of
conversational agents but also make a lasting contribution to the
computational theory of human language use. This future depends on
creating a broad literacy among junior researchers across
disciplines. For my part, I am working to show the relevance of
modern computational ideas in the study of language in writing for
broad audiences: for philosophers (e.g., my
chapter in What is Cognitive Science? Volume 2, 2003),
for psychologists (e.g., my chapter in
World-Situated Language Use 2002) and for linguists
(e.g., my chapter in A Handbook for
Language Engineers 2002).