
I work toward the ultimate goal of constructing conversational agents, programs that can understand and contribute to spoken dialogue with a human user in ordinary natural language. My current research explores meaningfulness as an explicit design goal for these systems.

Meaning is not a module or a level of representation; a system requires many coordinated modules and representations to act meaningfully. That's why meaningfulness is a problem for design. Since the work of Kripke and Putnam in the early 1970s, we've known that attributions of meaning reflect a complex combined assessment of an agent's perceptual powers, social relationships, linguistic knowledge and rational choice. These interrelated abilities work together to achieve the flexibility and robustness of spontaneous dialogue. They allow us to resolve misunderstandings and improve our models of the world, to use vague language to communicate economically, to tolerate vague language from others, and to draw on the fine-grained word-by-word semantics we've learned for our native language to participate in a wide variety of tasks, including decision-support, negotiation and explanation.
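To make one of these abilities concrete, here is a minimal sketch, not a claim about any deployed system, of how an agent might tolerate vague language by resolving a gradable adjective like "tall" against a contextual comparison class rather than a fixed threshold. The function name and the one-standard-deviation margin are illustrative assumptions:

```python
from statistics import mean, stdev

def interpret_tall(heights_in_context: list[float], target: float) -> bool:
    """Resolve the vague predicate 'tall' against a contextual
    comparison class: count the target as tall if it stands out
    from the heights salient in the current context.

    The one-standard-deviation margin is an illustrative choice,
    not a claim about how any particular system should set it.
    """
    threshold = mean(heights_in_context) + stdev(heights_in_context)
    return target >= threshold

# The same utterance gets different truth conditions in different contexts.
print(interpret_tall([150, 155, 160, 165], 175))  # True among schoolchildren
print(interpret_tall([185, 190, 195, 200], 175))  # False among basketball players
```

The point of the sketch is that vagueness is economical precisely because the threshold is left to context: speaker and hearer can coordinate on it without ever spelling it out.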

The conversational agents we build will have to realize these interrelated capacities. In fact, our intuitions about meaning offer a rich array of precise principles for organizing system implementations, and for characterizing systems' behavior in commonsense terms. In particular, meaningful linguistic behavior in dialogue requires that interlocutors:

- ground their symbolic representations in perception and action in the world (symbol grounding);
- connect linguistic messages to those grounded representations (grounded interpretation);
- represent the context of the dialogue and track how each contribution changes it (context);
- follow and contribute to joint activity with their partners (collaboration).

My research in fleshing out meaningful designs thus explores computational methods for grounding representational content, for connecting linguistic messages to grounded representations, for characterizing context and context-change in dialogue, and for implementing agents that can follow and contribute to collaboration through face-to-face interactions with human users.
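As a rough illustration of how these four threads fit together in a single architecture, the following sketch frames dialogue as an information-state update loop: utterances are interpreted against grounded representations, and each interpretation both changes the context and advances a collaborative task. All class and function names here are hypothetical, invented for exposition rather than drawn from any particular system:

```python
from dataclasses import dataclass, field

@dataclass
class GroundedSymbol:
    """A symbol tied to perceptual evidence, not just to other symbols."""
    name: str
    percept_id: int          # link into the agent's perception system

@dataclass
class DialogueState:
    """The evolving context: what has been established so far."""
    salient: dict[str, GroundedSymbol] = field(default_factory=dict)
    commitments: list[str] = field(default_factory=list)   # joint task goals

def interpret(utterance: str, state: DialogueState) -> GroundedSymbol | None:
    """Grounded interpretation: resolve a linguistic message against
    representations the agent can trace back to perception."""
    for word in utterance.lower().split():
        if word in state.salient:
            return state.salient[word]
    return None

def update(state: DialogueState, utterance: str) -> DialogueState:
    """Context change: each contribution updates the information state
    and, when interpretation succeeds, advances the collaboration."""
    referent = interpret(utterance, state)
    if referent is not None:
        state.commitments.append(f"attend to {referent.name}")
    else:
        state.commitments.append(f"clarify: {utterance!r}")  # repair a misunderstanding
    return state

# A toy run: the agent has perceptually grounded one object in the scene.
state = DialogueState(salient={"mug": GroundedSymbol("mug", percept_id=42)})
update(state, "Hand me the mug")
update(state, "Now the spoon")   # no grounding yet -> a clarification move
print(state.commitments)
```

Even at this toy scale, the loop shows why the capacities are interrelated: a failure of grounding surfaces as a collaborative move (a clarification request), which in turn becomes part of the context for interpreting what comes next.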

Meaningful agents will include everything from humanoid caregiver robots and characters in interactive entertainment to interfaces that relate to their users and virtual peers that kids are motivated to teach, argue with and listen to. I also work to create a diverse pool of stakeholders in the future of this technology by supporting students in interdisciplinary experiences at Rutgers and by creating resources for interdisciplinary investigation elsewhere.

In designing meaningful agents, interdisciplinary interaction begins by connecting philosophy and AI. Our untutored intuitions about conversation are too inchoate to serve as a design for a computer program. Philosophical argument is the way to sharpen these intuitions, and considering the behavior of artificial agents has always been a source of provocative new arguments. Empirical analysis of human-human conversation, as pursued in linguistics, can also help to flesh out a detailed picture of meaning because, one suspects, our intuitions about meaning correctly describe our linguistic knowledge and communicative behavior. So can a critical stance toward intelligent systems because, one suspects, our intuitions about meaning do real work in allowing us to introspect about what we say in conversation and to make choices that help us understand one another. We need to put these ingredients together, both to explain our meaningful abilities in conversation and to allow interactive agents to reproduce them. Ultimately, as an AI researcher, I will be particularly excited to be able to offer sophisticated defenses of the claim that our systems mean what they say.