Symbol grounding
Grounded Interpretation
Context
Collaboration

Communicative intentions and conversational processes. Matthew Stone. In Trueswell and Tanenhaus (eds.), World-Situated Language Use, pages 39-70, 2004.

Microplanning with communicative intentions. Matthew Stone, Christine Doran, Bonnie Webber, Tonia Bleam and Martha Palmer. Computational Intelligence 19(4):311-381, 2003.

An information-state approach to collaborative reference. David DeVault, Natalia Kariaeva, Anubha Kothari, Iris Oved, and Matthew Stone. ACL 2005 Proceedings Companion Volume, pages 1-4.
Tools for Building Agents
Speaking with hands. Matthew Stone, Doug DeCarlo, Insuk Oh, Christian Rodriguez, Adrian Stere, Alyssa Lees, and Chris Bregler. ACM Transactions on Graphics 23(3):506-513, 2004.

Facial signals for discourse. Doug DeCarlo, Matthew Stone, Corey Revilla and Jennifer J. Venditti. Computer Animation and Virtual Worlds 15(1):27-38, 2004.
I work toward the ultimate goal of constructing conversational
agents, programs that can understand and contribute to spoken
dialogue with a human user in ordinary natural language. My
current research explores meaningfulness as an explicit
design goal for these systems.
Meaning is not a module or a level of representation; a system
requires many coordinated modules and representations to act
meaningfully. That's why meaningfulness is a problem for design.
Since the work of Kripke and Putnam in the early 1970s, we've known
that attributions of meaning reflect a complex combined assessment
of an agent's perceptual powers, social relationships, linguistic
knowledge and rational choice. These interrelated abilities work
together to achieve the flexibility and robustness of spontaneous
dialogue. They allow us to resolve misunderstandings and improve
our models of the world, to use vague language to communicate
economically, to tolerate vague language from others, and to draw
on the fine-grained word-by-word semantics we've learned for our
native language to participate in a wide variety of
tasks, including decision-support, negotiation and explanation.
The conversational agents we build will have to realize these
interrelated capacities. In fact, our intuitions about meaning
offer a rich array of precise principles for organizing system
implementations, and for characterizing systems' behavior in
commonsense terms. In particular, meaningful linguistic behavior
in dialogue requires that interlocutors:
-
Be environmentally situated in such a way that their internal
representations are meaningful. This is necessary for the broad
Gricean project of reducing linguistic meaning to psychological
meaning, which is seen as primary. On this view, an agent
cannot produce meaningful linguistic behavior unless it has
psychological or mental meaning. For a computational agent, this
requires that any allegedly meaningful internal symbol be suitably
"grounded" (connected to its content).
-
Make each utterance with a recognizable intention to achieve
certain updates to the shared context. This requirement reflects
work in cognitive science that takes context and changes to context
as central phenomena in dialogue, and that takes these changes to
be mediated by general mechanisms of collaborative agency.
My research in fleshing out meaningful designs thus explores
computational methods for grounding representational content, for
connecting linguistic messages to grounded representations, for
characterizing context and context-change in dialogue, and for
implementing agents that can follow and contribute to collaboration
through face-to-face interactions with human users.
Meaningful agents will include everything from humanoid
caregiver robots and characters in interactive entertainment to
interfaces that relate to their users and virtual peers that kids
are motivated to teach, argue with and listen to. I also work to
create a diverse pool of stakeholders in the future of this
technology through supporting students in interdisciplinary
experiences at Rutgers and creating resources for interdisciplinary
investigation elsewhere.
In designing meaningful agents, interdisciplinary interaction
begins by connecting philosophy and AI. Our untutored intuitions
about conversation are too inchoate to serve as the design of a
computer program. Philosophical argument is the way to sharpen
these intuitions, and considering the behavior of artificial agents
has always been a source of provocative new arguments. Empirical
analysis of human-human conversation, as pursued in linguistics,
can also help to flesh out a detailed picture of meaning because,
one suspects, our intuitions about meaning correctly describe our
linguistic knowledge and communicative behavior. And so can a
critical stance towards intelligent systems because, one suspects,
our intuitions about meaning do real work in allowing us to
introspect about what we say in conversation and make choices that
help us understand one another. We need to put these ingredients
together, to help explain our meaningful abilities in conversation
and to allow interactive agents to reproduce them. Ultimately, as
an AI researcher, I will be particularly excited to be able to
offer sophisticated defenses of the claim that our systems mean
what they say.