This file contains bibliographic citations (with abstracts) for selected
papers produced at the University of Rochester. Most citations end
with a link to a PDF or (formerly) compressed postscript file.
These files are also available via anonymous ftp from
ftp.cs.rochester.edu (user anonymous, password your_name), in the
directory pub/.
Copyright on many of these papers may be owned by organizations other
than the University of Rochester, as indicated in the citations below.
For more information or for help obtaining technical
reports not available online, please contact
tr@cs.rochester.edu.
Keywords: communicative agents; continual planning; deliberate and opportunistic behavior; incomplete knowledge; introspection; self-aware agents; self-motivated cognitive agents.
We present a flexible initial framework for defining self-motivated, self-aware agents in simulated worlds, planning continuously so as to maximize long-term rewards. While such agents employ reasoned exploration of feasible sequences of actions and corresponding states, they also behave opportunistically and recover from failure, thanks to their continual plan updates and quest for rewards. Our framework allows for both specific and general (quantified) knowledge, and for epistemic predicates such as knowing-that and knowing-whether. Because realistic agents have only partial knowledge of their world, the reasoning of the proposed agents uses a weakened closed world assumption; this has consequences for epistemic reasoning, in particular introspection. The planning operators allow for quantitative, gradual change and side effects such as the passage of time, changes in distances and rewards, and language production, using a uniform procedural attachment method. Question answering (involving introspection) and experimental runs are shown for our particular agent ME in a simple world, demonstrating the value of continual, deliberate, reward-driven planning. Though the primary merit of agents definable in our framework is that they combine all of the above features, they can also be configured as single or multiple goal-seeking agents, and as such perform comparably with some recent experimental agents.
Keywords: crowdware; natural language processing; virtual agents; human computation; crowd computing.
Autonomous systems cannot yet reliably engage in an open-ended dialogue with users due to the complexity of natural language processing, but online crowds present new opportunities to do so. We introduce Chorus, a system enabling real-time two-way natural language conversation between an end user and a single virtual agent powered by a distributed crowd of online humans. Chorus maintains consistent, on-topic conversations with end users across multiple sessions even as individual members of the crowd come and go by storing a shared, curated dialogue history. While users see only a steady stream of dialogue with a single conversational partner, multiple crowd workers collaborate to select responses via an interface that allows them to rapidly scan conversational history, identify relevant parts of a conversation, and select between responses. Experiments show that dialogue with Chorus demonstrates conversational memory and interaction consistency, answering over 84% of all user queries correctly. More generally, Chorus demonstrates the ability of crowd-powered communication interfaces to serve as a robust alternative to virtual agents when interacting with software systems.
Keywords: planning as satisfiability; satplan; planning; conditional planning; planning under uncertainty.
We present an effective SAT encoding of planning with partial knowledge, tests, branches, and non-deterministic actions. As in recent work on compiling conformant and contingent planning into STRIPS, our encoding is based on representing knowledge states. Unlike previous approaches, however, fluents are conditioned on threads of execution, rather than on alternative choices of the initial state. Tests and other non-deterministic actions activate threads of execution, which are later deactivated by join operations. The experiments we present in this paper compare our SAT approach with state of the art heuristic search planners on contingent and conformant planning problems. We then present results on solving conditional planning problems that cannot even be represented by competing approaches.
Keywords: crowdsourcing; collective intelligence; human computation; mechanism design; game theory; hci; voting theory; computational social choice.
Crowdsourcing has been effectively applied to many difficult problems beyond the capabilities of current automated systems. Many such problems are not only difficult to solve, but their solutions are also difficult to verify. Absent an evaluation metric for automatic verification, a common approach is to have crowd workers not only solve problems but also verify solutions using a collective intelligence model. Optimizing workers' responses in this situation presents difficulties, as does optimizing with respect to both speed and accuracy simultaneously, as one is a constraint on the other. In this paper, we introduce HiveMind, a game-theoretic model of collective-intelligence crowdsourcing that addresses response optimization at the individual worker level with respect to both speed and accuracy. In addition to inferring worker commitment level, HiveMind allows task creators to trade off between low crowd participation and excess noise by tuning the level of convergence. We discuss how this model can be used to motivate workers for general continuous real-time tasks of unbounded length using a reputation system and how to identify consistent leaders in these domains. We also explore expert-answer elicitation, which requires a significant degree of divergence to attain a set of more varied, richer responses, and present a solution using a specialized aggregate function within HiveMind as a framework. This enables systems that seek to elicit different sets of results from crowds (each with their own idiosyncrasies) to all use a single framework.
Keywords: machine translation; variational Bayes; machine learning.
Bayesian approaches have been shown to reduce the amount of overfitting that occurs when running the EM algorithm, by placing prior probabilities on the model parameters. We apply one such Bayesian technique, variational Bayes, to GIZA++, a widely-used piece of software that computes word alignments for statistical machine translation. We show that using variational Bayes improves the performance of GIZA++, as well as improving the overall performance of the Moses machine translation system in terms of BLEU score.
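For readers unfamiliar with the technique, the sketch below shows how a variational-Bayes M-step differs from the standard one in IBM Model 1 style EM: expected counts are passed through the digamma function with a symmetric Dirichlet prior before normalization. This is an illustrative reconstruction, not the GIZA++ code; the data layout and prior value are assumptions.

```python
# Sketch of IBM Model 1 EM with a variational-Bayes M-step.
# Hypothetical data layout: bitext is a list of (source_tokens, target_tokens) pairs.
from collections import defaultdict
from math import exp
from scipy.special import digamma

def train_model1_vb(bitext, iterations=5, alpha=0.01):
    f_vocab = {f for _, fs in bitext for f in fs}
    e_vocab = {e for es, _ in bitext for e in es} | {None}   # None = NULL word
    # Initialize t(f|e) uniformly.
    t = {(e, f): 1.0 / len(f_vocab) for e in e_vocab for f in f_vocab}

    for _ in range(iterations):
        count = defaultdict(float)   # expected counts c(e, f)
        total = defaultdict(float)   # expected counts c(e)
        # E-step: fractional alignment counts.
        for es, fs in bitext:
            es = [None] + list(es)
            for f in fs:
                z = sum(t[(e, f)] for e in es)
                for e in es:
                    p = t[(e, f)] / z
                    count[(e, f)] += p
                    total[e] += p
        # VB M-step: exponentiated digamma of prior-smoothed counts
        # instead of plain relative frequencies.
        for (e, f) in count:
            t[(e, f)] = exp(digamma(count[(e, f)] + alpha)) / \
                        exp(digamma(total[e] + alpha * len(f_vocab)))
    return t
```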
Keywords: word alignment; convex optimization; non-negative matrix factorization.
We consider word alignment within the "bag-of-words" framework of IBM Model 1, explore alternative optimization criteria and solutions, and show that neither the EM algorithm nor the probabilistic constraint is necessary for learning good parameters.
Keywords: machine translation; Bayesian learning; unsupervised tokenization.
Training a statistical machine translation system starts with tokenizing a parallel corpus. Some languages such as Chinese do not incorporate spacing in their writing system, which creates a challenge for tokenization. Morphologically rich languages such as Korean and Hungarian present an even bigger challenge, since optimal token boundaries for machine translation in these languages are often unclear. Both rule-based solutions and statistical solutions are currently used. In this paper, we present unsupervised methods to solve the tokenization problem. Our methods incorporate information available from the parallel corpus to determine a good tokenization for machine translation.
Keywords: extraction; knowledge; LDA.
Many of the previous efforts in generalizing over knowledge extracted from text have relied on the use of manually created word sense hierarchies, such as WordNet. We present initial results on generalizing over textually derived knowledge, through the use of the LDA topic model framework, as the first step towards automatically building corpus specific ontologies.
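As a rough illustration of the kind of generalization described, the sketch below clusters the nominal fillers of a predicate argument slot with an off-the-shelf LDA implementation (gensim); the library choice and the toy data are assumptions, not the authors' setup.

```python
# Illustrative only: clustering the nominal fillers of a predicate argument
# position with an LDA topic model, using gensim as an assumed tool.
from gensim import corpora, models

# Hypothetical input: each "document" is the bag of head nouns observed in
# one argument slot of an extracted predicate, e.g. the subjects of "bark".
argument_fillers = [
    ["dog", "puppy", "hound", "terrier"],
    ["seal", "sea_lion", "dog"],
    ["drill_sergeant", "coach", "officer"],
]

dictionary = corpora.Dictionary(argument_fillers)
bow_corpus = [dictionary.doc2bow(doc) for doc in argument_fillers]

lda = models.LdaModel(bow_corpus, num_topics=2, id2word=dictionary, passes=10)
for topic_id in range(lda.num_topics):
    # Each topic is read as one predominant semantic type for the slot.
    print(topic_id, lda.show_topic(topic_id, topn=3))
```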
Keywords: parsing; complexity.
Reduction is the operation of transforming a production in a Linear Context-Free Rewriting System (LCFRS) into two simpler productions by factoring out a subset of the nonterminals on the production's righthand side. Reduction lowers the rank of a production but may increase its fan-out. We show how to apply reduction in order to minimize the parsing complexity of the resulting grammar, and study the relationship between rank, fan-out, and parsing complexity. We show that it is always possible to obtain optimum parsing complexity with rank two. However, among transformed grammars of rank two, minimum parsing complexity is not always possible with minimum fan-out.
Keywords: probabilistic counting; randomized storage; language modeling; bloom filter; morris counter.
Previous work by Talbot and Osborne (2007a) explored the use of randomized storage mechanisms in language modeling. These structures trade a small amount of error for significant space savings, enabling the use of larger language models on relatively modest hardware. Going beyond space efficient count storage, here we present the Transition Counter, an extended model for performing space efficient counting over streams of finite length. Theoretical and initial experimental results show the promise of approximate counting in the context of limited space.
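The probabilistic counting this abstract builds on can be illustrated with a minimal Morris-style counter, where the stored value grows only logarithmically with the true count; this is a textbook sketch, not the Transition Counter itself.

```python
# Morris-style approximate counter: trades a small amount of error for space.
import random

class MorrisCounter:
    def __init__(self):
        self.c = 0          # stored exponent, needs only O(log log n) bits

    def increment(self):
        # Increment the exponent with probability 2^-c.
        if random.random() < 2.0 ** (-self.c):
            self.c += 1

    def estimate(self):
        # Unbiased estimate of the number of increments seen so far.
        return 2 ** self.c - 1

counter = MorrisCounter()
for _ in range(100000):
    counter.increment()
print(counter.estimate())   # roughly 100000, with high variance for a single counter
```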
Keywords: extraction; semantics; knowledge.
Existing work in the extraction of commonsense knowledge from text has been restricted to factoids that serve as statements about what may possibly obtain in the world. We present an approach to deriving stronger general claims from large sets of factoids. The idea is to coalesce the observed nominals for a given predicate argument into a few predominant types, obtained as WordNet synsets. The results can be construed as generically quantified sentences restricting the semantic type of an argument position of a predicate.
Keywords: non-hierarchical planners; hierarchical planners; intention; belief-desire-intention (BDI); self-awareness; (internally) motivated systems.
The ability to plan is essential for any agent, artificial or not, wishing to claim intelligence in both thought and behavior. Not only should a planning agent persist in pursuing a goal as long as the situation justifies the agent's perseverance, but an intelligent planning agent must additionally be proficient at responding to failures, opportunities, and threats in the environment. This distinction leads naturally to a discussion of externally motivated and internally motivated planning systems. We first survey externally motivated planners, which exist and work only to accomplish user-given goals. These planners can be further classified as either non-hierarchical or hierarchical, depending on whether a high level plan is first developed and then successively elaborated. We then review internally motivated planners, which are endowed with self-awareness and such mental attitudes as beliefs, desires, and intentions. Finally, we present a preliminary proposal of a self-aware, opportunistic planning agent that maximizes its own cumulative utility while achieving user-specified goals.
Keywords: information extraction; knowledge acquisition; natural language.
Research in Information Extraction has been overly focused on the extraction of facts concerning individuals as compared to general knowledge pertaining to classes of entities and events. In addition, preference has been given to simple techniques in order to enable high volume throughput. In what follows we give examples of existing work in the field of knowledge acquisition, then follow with ideas on areas for exploration beyond the current state of the art, specifically with respect to the extraction of conditional knowledge, making use of deeper linguistic analysis than is currently the norm.
Keywords: semantic role labeling; FrameNet; cross-frame; cluster; a posteriori classifier; maximum entropy.
Recent systems for semantic role labeling are very dependent on the specific predicates and corpora on which they are trained, but labeling new data is expensive. We study which features and classifiers are best able to generalize to unseen predicates from new semantic frames. We find that automatically derived cluster information is especially helpful in this setting, and that a relatively simple a posteriori classifier outperforms Maximum Entropy.
Keywords: semantic role labeling; self-training; co-training; weakly supervised algorithms; maximum entropy; decision lists.
We address the problem of domain-dependence in semantic role labeling systems by attempting to bootstrap from unlabeled data in new domains. We explore a variety of methods for bootstrapping, and compare two machine learning techniques, decision lists and maximum entropy.
Keywords: machine translation; permutation; asymptotic enumeration.
Factoring a Synchronous Context-Free Grammar into an equivalent grammar with a smaller number of nonterminals in each rule enables more efficient strategies for synchronous parsing. We present an algorithm for factoring an n-ary SCFG into a k-ary grammar in time O(kn). We also show how to efficiently compute the exact number of k-ary parsable permutations of length n, and discuss asymptotic behavior as n grows. The number of length n permutations that are k-ary parsable approaches a fixed ratio between successive terms as n grows for fixed k. As k grows, the difference between successive ratios approaches 1/e.
Keywords: functional reference; Dynamic Predicate Logic; semantic representation; donkey anaphora; frames; scripts; generic sentences.
This report proposes a generalization of Dynamic Predicate Logic that allows a straightforward treatment of functional anaphora in texts such as "Most men had a gun, but only a few used it," or "If all of the graduates received a job offer, then all of them accepted their offer." The approach dynamically assigns (partial) functions as values of variables that are existentially quantified within the scopes of quantifiers like "all" and "most." The proposed method is also applicable to bridging anaphora and functionally dependent entities in frames, scripts, and generic sentences.
Keywords: self-aware agents; consciousness; episodic memory; knowledge representation; dialog agents.
Self-awareness is an aspect of consciousness that is highly developed in humans in comparison with other animals. A human being unaware of his or her personal characteristics, of what he or she knows and doesn't know, can do and cannot do, wants and doesn't want, has experienced and is experiencing, etc., would surely be difficult to communicate with naturally. Therefore we believe that consciousness plays a crucial role in building artificial dialog agents with human-level abilities. We will provide an overview of consciousness as viewed in philosophy, biology and artificial intelligence, and we will present relevant work on agents that show abilities related to consciousness. Moreover, we will present our initial attempt to extend the architecture of a simple EPILOG-based agent originally built by A.N. Kaplan in the direction of our goal of a human-like conscious dialog agent.
Keywords: dialogue modeling; goal recognition; agent communication.
This thesis describes research which attempts to remove some of the barriers to creating true conversational agents---autonomous agents which can communicate with humans in natural language. First, in order to help bridge the gap between research in the natural language and agents communities, we define a model of agent-agent collaborative problem solving which formalizes agent communication at the granularity of human communication. We then augment the model to define an agent-based model of dialogue, which is able to describe a much wider range of dialogue phenomena than plan-based models. The model also defines a declarative representation of communicative intentions for individual utterances. Recognition of these intentions from utterances will require an augmentation of already intractable plan and intention recognition algorithms. The second half of the thesis describes research in applying statistical corpus-based methods to goal recognition, a special case of plan recognition.
Because of the paucity of data in the plan recognition community, we have generated two corpora in distinct domains. We also define an algorithm which can stochastically generate artificial corpora to be used in learning. We then describe and evaluate fast statistical algorithms for both flat and hierarchical recognition of goal schemas and their parameter values. The recognition algorithms are more scalable than previous work and are able to recognize goal parameter values as well as schemas.
Keywords: domain-independent classical planning; domain analysis for planning and scheduling; invariant discovery.
Planning invariants are formulae that are true in every reachable state of a planning world. We describe a novel approach to the problem of discovering such invariants in propositional form---by analyzing only a set of reachable states of the planning domain, and not its operators. Our system works by exploiting perceived patterns of propositional covariance across the set of states: It hypothesizes that strongly-defined patterns represent features of the planning world. We demonstrate that, in practice, our system overwhelmingly produces correct invariants. Moreover, we compare it with a well-known system from the literature that uses complete operator descriptions, and show that it discovers a comparable number of invariants, and moreover, does so hundreds or thousands of times faster.
We also show how an existing operator-based invariant finder can be used to verify the correctness of the invariants we find, should operator information be available. We show that such hybrid systems can efficiently produce verifiably true invariants.
Keywords: language understanding; maximum entropy; semantic roles.
In this paper, we present the results for semantic labeling, extending the work of [Gildea and Jurafsky, 2002], [Fleischman et al., 2003], [Pradhan et al., 2004], and others. The main labeling approach is based on Maximum Entropy. We show the performance of the baseline system as well as the results of applying coreference resolution, stemming, and feature combinations to the feature files.
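For a flat feature set, a maximum-entropy labeler of this kind is equivalent to multinomial logistic regression; the toy sketch below uses scikit-learn with invented features and labels, purely to illustrate the setup rather than to reproduce the authors' system.

```python
# Toy sketch of a maximum-entropy (multinomial logistic regression) role
# labeler over hand-made features; scikit-learn stands in for whatever
# toolkit the authors actually used.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training instances: one feature dict per candidate constituent.
X_dicts = [
    {"head": "board", "phrase_type": "NP", "position": "before", "voice": "active"},
    {"head": "chairman", "phrase_type": "NP", "position": "after", "voice": "active"},
    {"head": "yesterday", "phrase_type": "NP", "position": "after", "voice": "active"},
]
y = ["ARG0", "ARG1", "ARGM-TMP"]

vec = DictVectorizer()
X = vec.fit_transform(X_dicts)
clf = LogisticRegression(max_iter=1000)   # maximum-entropy model
clf.fit(X, y)

test = vec.transform([{"head": "committee", "phrase_type": "NP",
                       "position": "before", "voice": "active"}])
print(clf.predict(test))
```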
Keywords: commonsense reasoning; commonsense corpus; affective computing; emotions; affect lexicon.
This paper presents an integrated approach to building an affect lexicon for emotion tagging of free text. The primary linguistic resources for this lexicon include electronic dictionaries, on-line word association norms, and a large scale commonsense corpus. Our main goal is to automatically collect frequently used affect words and phrases and also assess their emotion intensity. Multiple natural language processing techniques, such as POS tagging, parsing, phrase chunking, and constituent identification, are employed. We show examples of affect assessment using this lexicon as well as syntactic processing. Our system produces plausible emotion analyses for test text. Potential applications include building emotional virtual agents, estimating users' attitudes, and assisting information retrieval.
Keywords: invariant discovery; planning; statistical rule discovery; relational data mining; inductive logic programming.
Human beings have the innate ability to educe meaning from a mass of data by discovering and exploiting regularities in it. Patterns in the world seem to "jump out" at us; they seem obvious. In this paper, we present a system to discover laws in richly structured worlds that is inspired by this form of human reasoning. Much previous work in rule discovery has worked with impoverished domains describable as a list of (object, value) pairs. Such representations admit of relatively efficient algorithms, but are too poor to describe interesting features of the real world and of many logical systems. We survey more recent work in the field of relational data mining that seeks to extend these algorithms to richer domains. Previous approaches to this problem have worked by searching the space of syntactically correct rule-statements for those that satisfy certain criteria. Their search is guided by linguistic and declarative bias; they hypothesize the possible rules in some order and then test each one.
We argue that the space of possible rules is too large to be searched effectively in this manner. We propose an alternative, data-driven search paradigm, in which the search is guided not by relationships between the forms of the hypothesized rules, but by correlations in the data they represent. We argue that such pattern-driven search enables the detection of richer and more powerful hypotheses, including those involving equality and nested quantification.
We present a prototype system that incorporates our ideas, and the results obtained when it is applied to the problem of detecting invariants in arbitrary planning worlds. Finally, we discuss ways of extending the approach to more realistic domains, and of extending the discovery process by enabling it to create new concepts as necessary to better describe the data.
Keywords: team learning; collaborative learning; CS1 & CS2 pedagogy; peer-led learning; assessment.
This study of the Fall 2002 Computer Programming (CSC 171) course provides a detailed analysis of the relationship between variables such as workshop attendance, gender, ethnicity and prior student ability and student performance. The results, detailed in the subsequent sections below, suggest the following:
* Workshop attendance has a significantly positive impact on student performance even after controlling for variations in gender and prior student ability.
* Due to the small sample size of the female and minority groups, the magnitude of the role gender and ethnicity plays in affecting student performance cannot be conclusively determined based on statistical analyses.
* Withdrawing female students performed significantly below their male counterparts even though they attended more workshops on average, while female students who completed the course did not perform significantly differently from their male counterparts.
* Prior student ability (as measured by SAT scores) is significant in affecting student performance.
* Controlling for prior student ability alters the effect of workshop attendance on performance only slightly. OLS regression results suggest an overestimation, while logistic regression results suggest an underestimation, of the effect prior to adding SAT scores.
Keywords: computational linguistics; natural language processing; pronoun resolution; discourse understanding; spoken dialog systems; abstract entity anaphora; discourse deixis; givenness hierarchy; noun phrase coreference; demonstrative pronouns.
This thesis describes an implemented technique for resolving anaphoric pronouns referring to both individual and abstract entities. The model defines rules for evoking high-order entities from discourse and also a pronoun resolution method that is appropriate for both demonstrative and personal pronouns. It correctly interprets 72% of the pronouns, while a previous leading technique is correct on only 37%, when tested on a corpus of task-oriented spoken dialog.
Keywords: domain independent planning; preplanning; domain analysis; state invariants; state constraints; efficient planning.
DISCOPLAN is an implemented set of efficient preplanning algorithms intended to enable faster domain-independent planning. It includes algorithms that use a hypothesize-and-test paradigm to discover and inductively verify state constraints (invariants) implicit in the structure of a given set of planning operators and initial state. Such state constraints have been shown to be very useful, for example, for speeding up SAT-based planning, regression planning, and heuristic decomposition of planning problems. DISCOPLAN handles operators with conditional effects, and efficiently discovers constraints of the following types: (1) type constraints; (2) predicate domain constraints; (3) simple implicative constraints involving up to two fluent literals and any number of static literals, where one of the fluent literals contains all of the variables occurring in the other literals; (4) single-valuedness (sv-) and n-valuedness constraints; (5) implicative and sv-constraints, relaxing the restrictions on variable subsumption and requiring simultaneous induction; (6) antisymmetry constraints; (7) XOR-constraints; and (8) some additional constraints obtainable by an iterative version of the hypothesize-and-test paradigm. The methods for (6) and (8) involve "expanding" operators so as to include preconditions and effects implied by constraints discovered earlier. We also provide provably correct (and provisionally implemented) methods for discovering additional types of constraints, including constraints involving arbitrarily many fluent literals.
Keywords: computational semantics; presuppositions; situation theory; quantificational adverbs.
This paper describes a method for computing the domain of quantification of an adverbially quantified sentence. This method relies on the accommodation of presuppositions in the scope of a quantificational adverb and on the resolution of the domain in context. Situations form the link between adverbial quantifiers and presuppositions, as adverbial quantifiers are taken to quantify over situations and presuppositions are taken to be constraints on resource situations. This paper also briefly describes a computational system for processing such sentences based on this method.
Keywords: cognitive modeling; multimodal learning; language acquisition; integration of speech and vision.
Language is about symbols and those symbols must be grounded in the physical environment during human development. Most recently, there has been an increased awareness of the essential role of inferences of speakers' referential intentions in grounding those symbols. Experiments have shown that these inferences as revealed in eye, head and hand movements serve as an important driving force in language learning at a relatively early age. The challenge ahead is to develop formal models of language acquisition that can shed light on the leverage provided by embodiment. We present an implemented computational model of embodied language acquisition that learns words from natural interactions with users. The system can be trained in unsupervised mode in which users perform everyday tasks while providing natural language descriptions of their behaviors. We collect acoustic signals in concert with user-centric multisensory information from nonspeech modalities, such as the user's perspective video, gaze positions, head directions and hand movements. A multimodal learning algorithm is developed that first spots words from continuous speech and then associates action verbs and object names with their grounded meanings. The central idea is to make use of non-speech contextual information to facilitate word spotting, and to utilize the user's attention as a deictic reference to discover temporal correlations of data from different modalities to build lexical items. We report the results of a series of experiments that demonstrate the effectiveness of our approach.
Keywords: intelligent assistants; conversational systems; intention recognition; natural language understanding; scheduling; knowledge representation; prescription compliance.
The Medication Advisor is the latest project of the Conversational Interaction and Spoken Dialogue research group at the University of Rochester. The goal of the project is an intelligent assistant that interacts with its users via conversational natural language, and provides them with information and advice regarding their prescription medications. Managing prescription drug regimens is a major problem, particularly for older people living at home who tend to have both complex medication schedules and, often, somewhat reduced faculties for keeping track of them. Patient compliance with prescribed regimens is notoriously low, leading to incorrect and sometimes harmful usage of both prescribed and over-the-counter medications. The Medication Advisor builds on our prior experience constructing conversational assistants in other domains. In addition to providing new challenges, the project allows us to validate previous efforts in areas such as portability. This brief report details our initial efforts and outlines our future direction.
Keywords: communicative intentions; collaborative problem solving; dialogue systems.
In order to understand natural language, it is necessary to understand the intentions behind it (i.e., why an utterance was spoken). We model dialogue as collaboration between agents. Communicative intentions can then be seen as how an agent is trying to affect the collaboration. Most previous work on intention-recognition approaches to dialogue has focused on only a small subset of agent collaboration paradigms (i.e., master-slave), and thus is unable to account for dialogues in other paradigms, such as mixed-initiative collaboration. Previous work has also either modeled dialogues where the agents are only planning or dialogues where agents are only acting. This restricts dialogue-model coverage to only those cases and does not model dialogues where collaboration about acting and planning occurs. In this paper, we present a collaborative problem-solving model of dialogue. This model is able to account for a much wider array of dialogues than previous models have covered. It covers the spectrum of collaboration paradigms (from master-slave to mixed-initiative) as well as dialogues where interleaved acting and planning are taking place.
We propose, for future research, to complete this model and to build a domain-independent intention-recognition system based on it for use within the TRIPS dialogue system.
Keywords: syntactic disambiguation; statistical natural language processing.
We describe a parser that draws from both extant corpora and linguistic knowledge sources, and thus is suitable as a front end for applications requiring both broad coverage and rich syntactic analysis. We detail many of the difficulties and assumptions involved in combining these data and knowledge sources. We also describe the novel language model that we use for disambiguation and show that it outperforms a comparable model without the same knowledge sources.
Keywords: maximum likelihood; clustering; non-distance based; high dimensional space.
The clustering problem has been widely studied since it arises in many application domains in engineering, business and social science. It aims at identifying the distribution of patterns and intrinsic correlations in large data sets by partitioning the data points into similarity clusters. Traditional clustering algorithms use distance functions to measure similarity and are not suitable for high dimensional spaces. In this paper, we propose a non-distance based clustering algorithm for high dimensional spaces. Based on the maximum likelihood principle, the algorithm optimizes parameters to maximize the likelihood of the data points under the model generated by those parameters. Experimental results on both synthetic data sets and a real data set show the efficiency and effectiveness of the algorithm.
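The paper's algorithm is not reproduced here, but the general idea of distance-free, maximum-likelihood clustering can be illustrated with EM on a Bernoulli mixture over high-dimensional binary data, where cluster membership comes from model likelihood rather than a distance function; the mixture form and parameters below are assumptions.

```python
# Generic illustration (not the paper's algorithm): maximum-likelihood
# clustering of high-dimensional binary data with a Bernoulli mixture fit
# by EM -- membership is decided by likelihood, not by a distance function.
import numpy as np

def bernoulli_mixture_em(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(k, 1.0 / k)                    # mixture weights
    mu = rng.uniform(0.25, 0.75, size=(k, d))   # per-cluster Bernoulli parameters
    for _ in range(iters):
        # E-step: responsibilities from the log-likelihood of each point under each cluster.
        log_p = (X @ np.log(mu).T) + ((1 - X) @ np.log(1 - mu).T) + np.log(pi)
        log_p -= log_p.max(axis=1, keepdims=True)
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the soft assignments.
        nk = resp.sum(axis=0)
        pi = nk / n
        mu = np.clip((resp.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    return resp.argmax(axis=1), pi, mu

X = (np.random.default_rng(1).random((200, 100)) < 0.3).astype(float)
labels, _, _ = bernoulli_mixture_em(X, k=3)
```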
Keywords: exploration; uncommunicative; learning; game.
Reinforcement learning is a promising technique for enabling agents to adapt their own strategies in multi-agent systems. Most existing reinforcement learning algorithms are designed from a single agent's perspective and, for simplicity, assume the environment is stationary, i.e., the distribution of the utility of each state-action pair does not change. However, in a more realistic model of multi-agent systems, the agents are continually adapting their own strategies owing to different utilities at different times. Because of this non-stationarity, multi-agent systems are more sensitive to the trade-off between exploitation, which uses the best strategy so far, and exploration, which tries to find better strategies. Exploration is especially important in these changing circumstances. In this paper, we assume that the utility of each state-action pair is a stochastic process. This allows us to describe the trade-off dilemma as a Brownian bandit problem and to formalize Sutton's recency-based exploration bonus in non-stationary environments. To demonstrate the performance of the exploration bonus, we build agents using a Q-learning algorithm with smoothed best-response dynamics. The simulations show that the agents can efficiently adapt to changes in their peers' behaviors whereas the same algorithm, using Boltzmann exploration, cannot adapt.
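A minimal sketch of a recency-based exploration bonus in the spirit of Sutton's formulation: actions not tried recently receive a bonus proportional to the square root of the elapsed time. The environment interface and the greedy (rather than smoothed best-response) action choice are simplifying assumptions, not the paper's agents.

```python
# Q-learning with a Dyna-Q+-style recency bonus. The env object
# (env.reset / env.actions / env.step) is a hypothetical stand-in.
import math
from collections import defaultdict

def q_learning_with_bonus(env, episodes=500, alpha=0.1, gamma=0.95, kappa=0.05):
    Q = defaultdict(float)          # Q[(state, action)]
    last_tried = defaultdict(int)   # time step at which (state, action) was last taken
    t = 0
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            t += 1
            # Greedy choice over value plus exploration bonus.
            a = max(env.actions(s),
                    key=lambda a: Q[(s, a)] + kappa * math.sqrt(t - last_tried[(s, a)]))
            last_tried[(s, a)] = t
            s_next, reward, done = env.step(s, a)
            best_next = max(Q[(s_next, a2)] for a2 in env.actions(s_next))
            Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
            s = s_next
    return Q
```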
Keywords: plan recognition; intention recognition; natural language understanding.
State of the art plan recognition for use in natural language dialogue systems has progressed in coverage of discourse phenomena and plan navigation strategies. Most systems, however, suffer from several deficiencies: namely, they do not have a specific strategy for the (inevitable) case where they make an incorrect hypothesis inference, and they cannot handle interleaved plan navigation, where a user jumps back and forth between several plans. In addition, most plan recognition systems cannot handle the rich variety of possible natural language utterances a dialogue system may receive as input, especially the case where a language utterance corresponds to several actions that the system considers to be atomic. We discuss previous work in plan recognition, especially in the area of dialogue systems. We then describe a plan recognition system that can recover from incorrect inferences, handles interleaved plan navigation, and handles several linguistic phenomena, including support for natural language multi-action paraphrase.
Keywords: WordNet; world knowledge; commonsense knowledge; subsumption; taxonomy.
WordNet is a lexical database that, among other things, arranges English nouns into a hierarchy ranked by specificity, providing links between a more general word and words that are specializations of it. For example, the word "mammal" is linked (transitively via some intervening words) to "dog" and to "cat." This hierarchy bears some resemblance to the hierarchies of types (or properties, or predicates) often used in artificial intelligence systems. However, WordNet was not designed for such uses, and is organized in a way that makes it far from ideal for them. This report describes our attempts to arrive at a quantitative measure of the quality of the information that can be extracted from WordNet by interpreting it as a formal taxonomy, and to design automatic techniques for improving the quality by filtering out dubious assertions.
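The taxonomic interpretation discussed here can be seen by walking WordNet's hypernym links; the snippet below does so with NLTK, which is an assumed tool rather than the one used in the report.

```python
# Reading WordNet's noun hierarchy as a type taxonomy: walk the hypernym
# links from "dog" up toward more general synsets.
from nltk.corpus import wordnet as wn

dog = wn.synsets("dog", pos=wn.NOUN)[0]   # Synset('dog.n.01')
chain = dog.hypernym_paths()[0]           # one root-to-synset path
print(" -> ".join(s.name() for s in chain))
# e.g. entity.n.01 -> ... -> mammal.n.01 -> ... -> dog.n.01
```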
Keywords: ontological analysis; rigidity; identity; unity.
In their framework for ontological analysis, Guarino and Welty provide a number of insights that are useful for guiding the design of taxonomic hierarchies. However, the formal statements of these insights as logical schemata are flawed in a number of ways, including inconsistent notation that makes the intended semantics of the logic unclear, false claims of logical consequence, and definitions that provably result in the triviality of some of their property features. This paper makes a negative contribution, by demonstrating these flaws in a rigorous way, but also makes a positive contribution wherever possible, by identifying the underlying intuitions that the faulty definitions were intended to capture, and attempting to formalize those intuitions in a more accurate way.
Keywords: annotation manual; spoken dialog; argumentation acts; rhetorical relations.
This manual describes a method for annotating rhetorical relations, adjacency pairs, and other argumentation acts found in task-oriented spoken dialog [Traum, 1993; Traum and Hinkelman, 1992]. It is largely aimed at the novice annotator rather than the computational linguist, and therefore in choosing terminology we have valued intuitiveness over precision. This work came out of an exploration of how to mark structure above the speech act in the Monroe corpus [Stent, 2000 (TN 99-2)]. For more information about the development of this manual, see [Stent, 2000 (INLG)]. The manual is designed for use with ArgumentationTool, a tool for marking argumentation acts in dialog that is available from http://www.cs.rochester.edu/research/cisd/resources/aad/.
Keywords: syntactic disambiguation; word sense disambiguation; statistical natural language processing.
Statistical techniques have revolutionized all areas of natural language processing, and syntactic parsing is no exception. The availability of large syntactically annotated corpora (principally through the Penn Treebank project) has precipitated parsing's shift from the task of constructing interpretations to the task of constructing a labeled bracketing. These corpus-based techniques are robust and scalable, two desiderata lacking in early, knowledge-based approaches to parsing. The early approaches are typified by parsers that could operate only in a narrow domain, but that produced semantically interpretable parses. In contrast, the corpus-based approaches produce underspecified labeled bracketings that are not sufficiently detailed for applications in natural language understanding.
In this dissertation we describe a parser that uses hand-written linguistically informed knowledge sources (grammar, lexicon, ontology) to enrich the labeled bracketing in the Penn Treebank. The enriched corpus is then used as the data source for statistical parsing in our well-founded framework. Furthermore, parsing in this framework supports a fully-lexicalized parsing model, and allows for the natural integration of word sense disambiguation with syntactic disambiguation. We show that jointly modeling word sense ambiguity and syntactic ambiguity results in improved syntactic disambiguation. We also describe our treatment of coordinated structures (a topic generally ignored in statistical parsing), and our novel method for using an ontology to settle on backed-off estimators via hypothesis testing.
Keywords: lexical semantics; selectional restrictions; semantic features; parsing; maximum entropy models; expectation maximization.
Reasoning about semantic classes and determining compatibility of the words in a given context is an important procedure used in many modules of natural language understanding systems. However, most existing systems do not devote much attention to their ontological knowledge representations, resulting in implementations that are not portable to other domains. At the same time, statistical methods are more robust and less labor-intensive to develop, but typically result in models that are not easily interpretable by humans. We propose a semantic feature representation for use in practical dialogue systems and argue that it can offer advantages in terms of lexicon development and portability---in particular for defining selectional restrictions---and can also be useful for other system modules that do logical inference. We then propose to develop statistical methods allowing us to learn parts of our representation from corpus data.
Keywords: multi-agent systems; mixed-initiative interaction; planning; scheduling.
One of the less appreciated obstacles to scaling multi-agent systems is understanding the impact of the role(s) that people will play in those systems. As we try to adapt existing software tools and agent-based applications to play supportive roles in larger multi-agent systems, we must develop strategies for coordinating not only the problem-solving behavior of these agent communities, but also their information sharing and interactive behavior. Our research interest is in mixed-initiative control of intelligent systems [Burstein and McDermott, 1996; Burstein et al., 1998; Ferguson et al., 1996a] and, in particular, of interactive planning systems comprised of a heterogeneous collection of software agents. In this paper, we describe our experience constructing a prototype tool combining elements of TRIPS [Ferguson and Allen, 1998], an interactive, mixed-initiative agent-based planning architecture using spoken natural language dialogue, with the CAMPS Mission Planner, an interactive airlift scheduling tool developed for the Air Force [Emerson and Burstein, 1999], together with some related resource management agents representing other parts of the airlift planning organization. The latter scheduling tools were not originally designed to participate as part of a mixed-initiative, interactive agent community, but rather were designed for direct user interaction through their own GUIs. We describe some requirements revealed by this effort for effective mixed-initiative interaction in such an environment, including the role of explanation, the need for contextual information sharing among the agents, and our approach to intelligent invocation and integration of available agent capabilities.
Keywords: error correction; noisy channel; spoken language understanding; dialogue; speech recognition; spontaneous speech; TRAINS-95 system; TRAINS-96 system.
The focus of this thesis is to improve the ability of a computational system to understand spoken utterances in a dialogue with a human. Available computational methods for word recognition do not perform as well on spontaneous speech in task-oriented dialogue as we would hope. Even a state of the art recognizer achieves slightly worse than 70% word accuracy on spontaneous speech in a conversation focused on solving a specific problem. To address this problem, I explore novel methods for post-processing the output of a speech recognizer in order to correct errors. I adopt statistical techniques for modeling the noisy channel from the speaker to the listener in order to correct some of the errors introduced there. The statistical model accounts for frequent errors such as simple word/word confusions and short phrasal problems (one-to-many word substitutions and many-to-one word concatenations). To use the model, a search algorithm is employed to find the most likely correction of a given word sequence from the speech recognizer. The post-processor output contains fewer errors, thus making interpretation by downstream components, such as parsing, more reliable.
The post-processor was employed in the TRAINS-95 and TRAINS-96 conversational planning assistants to great effect. Using these techniques, we were able to reduce the number of word recognition errors in some scenarios by approximately 17% (absolute) in the TRAINS-95 and TRAINS-96 systems (from just under 40% to nearly 20%). Consequently, both systems were significantly more robust to recognition errors when using the post-processor than when not. In the scenario where the speech recognizer is tunable with the availability of new data, the impact of these techniques is not as large, but they do make an improvement nonetheless.
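The noisy-channel idea behind the post-processor can be summarized as choosing the correction W that maximizes P(W)·P(O|W) for the observed recognizer output O; the toy rescoring sketch below uses stub language and channel models, not the thesis's actual ones.

```python
# Toy noisy-channel rescoring: pick the correction W of the recognizer
# output O that maximizes P(W) * P(O | W). Candidate set, language model,
# and channel model are hypothetical stand-ins.
import math

def correct(observed, candidates, lm_logprob, channel_logprob):
    """observed: recognizer word sequence; candidates: possible intended sequences."""
    return max(candidates, key=lambda w: lm_logprob(w) + channel_logprob(observed, w))

# Usage with stub models: prefer "go to avon" over the literal recognizer output.
observed = ("go", "two", "avon")
candidates = [("go", "two", "avon"), ("go", "to", "avon")]
lm = {("go", "two", "avon"): math.log(1e-7), ("go", "to", "avon"): math.log(1e-4)}
chan = {("two", "two"): math.log(0.9), ("two", "to"): math.log(0.1)}
best = correct(observed, candidates,
               lm_logprob=lambda w: lm[w],
               channel_logprob=lambda o, w: sum(
                   chan.get((oi, wi), math.log(0.8) if oi == wi else math.log(0.05))
                   for oi, wi in zip(o, w)))
print(" ".join(best))   # "go to avon"
```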
Keywords: spoken dialog; corpora; DAMSL; annotation.
In this report we discuss the creation and initial annotation of the Monroe corpus, a collection of video and audio data of 20 human-human, mixed-initiative, task-oriented dialogs about disaster-handling tasks. We describe how the dialogs were collected, what tasks were used, and how the data was transcribed and aligned.
Keywords: lexicography; word sense disambiguation; WordNet.
We propose a compaction of WordNet senses for natural language understanding (NLU) applications, where only those distinctions that are not predictable from other knowledge sources are retained. Further, we propose that word sense disambiguation programs that use WordNet as their dictionary may be evaluated with respect to this compaction, for a better indicator of performance. WordNet is attractive for studies of word sense disambiguation because of its quite comprehensive lexical coverage. However, for NLU applications, its very fine-grained distinctions among word senses may be superfluous, in that these distinctions often reflect a regular polysemy which is productive across many words. The knowledge that this regular polysemy exploits is knowledge that a natural language understanding system must already have in order to handle other phases of the understanding process, such as reference resolution. Thus a program which is able to disambiguate WordNet senses modulo, e.g. metonymy, will have essentially "done its job" as a word sense disambiguator. We conclude with an evaluation of different disambiguators with respect to the compaction.
Keywords: performance improvement; low-level feature detection; intermediate-level feature detection; object recognition; cluttered backgrounds; verification.
Appearance-based object recognition systems are currently the most successful approach for dealing with 3D recognition of arbitrary objects in the presence of clutter and occlusion. However, no current system seems directly scalable to human performance levels in this domain. In this report we describe a series of experiments on a previously described object recognition system that try to see which, if any, design axes of such systems hold the greatest potential for improving performance. We look at the potential effect of different design modifications and we conclude that the greatest leverage lies at the level of intermediate feature construction.
Keywords: object recognition; learning; view clusters; minimal supervision.
Image-based object recognition systems developed recently don't require the construction of a 3D geometric model, allowing recognition of objects for which current geometric recognition technologies do not apply. Such systems are typically trained with labeled, clean views that cover the whole viewing sphere and can sometimes handle generic, visually similar classes with moderate variation. It has been little explored whether such systems can be trained from imagery that is unlabeled, and whether they can be trained from imagery that is not trivially segmentable. In this report we investigate how an object recognition system developed previously can be trained from clean images of objects with minimal supervision. After training this system on a single or a small number of views of each object, a simple learning algorithm is able to attract additional views to the object representation, building clusters of views belonging to the same object. We explore how the learning performance improves by extending the set of views, introducing a small amount of supervision, or using more complicated learning algorithms.
Keywords: dialogue systems; discourse structure; disfluencies; utterance boundaries; parsing; speech acts.
There are four major dialog-specific challenges in processing natural language: 1) determining an utterance's speech act, 2) finding utterance boundaries, 3) allowing for the possibility that speakers may continue each other's utterances and interrupt each other, and 4) handling speech repairs and editing terms (uh, I mean). We worked with the Multiparty Discourse Group to develop the Backward- and Forward-Looking annotation scheme, which, unlike many current speech act taxonomies, allows utterance multi-functionality to be captured. To help with challenge 2, we use a statistical utterance boundary detector. To handle challenges 3 and 4, we developed a unique parsing framework in which metarules specify allowable forms of phrase breakage and interleaving. A stream of words tagged with their speakers is given to the parser. Second speaker continuations are naturally allowed and metarules allow phrase structure to be formed around second speaker interruptions. Similarly, metarules allow phrase structure to be formed around speech repairs and editing terms. The parser can thus include repairs and editing terms in its output, allowing higher-level reasoning processes to make inferences about hesitations and false starts in the input. We have also shown that the parser can use its knowledge of grammar and the syntactic structure of the input to improve pre-parser speech repair identification.
Keywords: reference resolution; pronoun resolution; anaphora; discourse understanding; abstract entities; spoken language systems.
Existing algorithms for pronoun resolution typically cast the problem into a coreference task, which means they simply identify an antecedent noun phrase for each pronoun. Selection of the antecedent is usually based on a calculation of salience or focus. This simplified approach is unable to account for pronouns without noun-phrase antecedents. Examples are abstract referents such as events, propositions, and speech acts that might appear in the linguistic surface form as sentential complements, verbal constructions or entire sentences, as well as consequences or outcomes that don't appear in the surface form at all. This paper contains a survey of current methods of pronoun resolution for natural language understanding. It then proposes a strategy for resolving pronominal reference to abstract entities that incorporates semantic information in addition to salience calculations. Preliminary experiments are described that show the strategy to perform well above baseline on a collection of spoken task-oriented dialogs.
Keywords: edge detection; boundary extraction; 3D object recognition.
In this report we describe a method for extracting curves from an image using directional pixel variances instead of gradient measures as low-level boundary evidence. The advantage of the variance over the image gradient is that we can accurately compute the direction of a local edge even if a sudden contrast change occurs in the background. This allows curves belonging to object contours to be followed more easily. We compared our method to a similar method based on the image gradient and we found that it obtains better results when run on synthetic and natural images. Our method also improved the performance of a contour-based 3D object recognition system in cluttered images.
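To make the low-level idea concrete, the sketch below estimates a local edge direction from directional pixel variances: intensities are sampled along several orientations around a pixel, and the orientation of minimum variance is taken as the edge direction. Window size, orientation count, and the edge-strength score are assumptions, not the report's exact formulation (border handling is also omitted).

```python
# Directional variance as boundary evidence: a strong edge shows low
# variance along the edge direction and high variance across it.
import numpy as np

def directional_variances(image, y, x, radius=3, n_orientations=8):
    scores = []
    for k in range(n_orientations):
        theta = np.pi * k / n_orientations
        dy, dx = np.sin(theta), np.cos(theta)
        samples = [image[int(round(y + t * dy)), int(round(x + t * dx))]
                   for t in range(-radius, radius + 1)]
        scores.append(np.var(samples))
    return np.array(scores)

def local_edge_direction(image, y, x, radius=3):
    v = directional_variances(image, y, x, radius)
    k = int(np.argmin(v))              # minimum-variance orientation = along the edge
    strength = v.max() - v.min()       # contrast between across- and along-edge variance
    return np.pi * k / v.size, strength
```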
Keywords: efficient planning; planning graph algorithms; local search techniques.
Domain-independent planning is a notoriously hard search problem. Several systematic search techniques have been proposed in the context of various formalisms. However, despite their theoretical completeness, in practice these algorithms are incomplete because for many problems the search space is too large to be (even partially) explored, and a plan cannot be found in reasonable time (if one exists). In this paper we propose a new search method in the context of Blum and Furst's planning graph approach, which is based on local search. Local search techniques are incomplete, but in practice they can efficiently solve problems that are unsolvable for current systematic search methods. We introduce three particular heuristics to guide the local search (Walkplan, Tabuplan and T-Walkplan), and we propose two methods for combining local and systematic search. Our techniques are implemented in a system called GPG, which can be used for both plan-generation and plan-adaptation tasks. Experimental results show that GPG can efficiently solve problems that are very hard for the systematic search of IPP and Graphplan, including Kautz and Selman's Logistics-d.
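The Walkplan heuristic adapts Walksat-style stochastic local search to inconsistencies in the planning graph; the generic WalkSAT sketch below is included only to convey that flavor of search, with the planning-graph specifics (action subgraphs, mutex constraints) abstracted away.

```python
# Generic WalkSAT-style local search: repeatedly pick a violated clause and
# flip one of its variables, randomly with probability p and greedily otherwise.
import random

def walksat(clauses, n_vars, max_flips=100000, p=0.5, seed=0):
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]   # index 0 unused

    def satisfied(clause):
        # A clause is a list of nonzero ints; literal l is true iff
        # variable |l| is assigned the sign of l.
        return any(assign[abs(l)] == (l > 0) for l in clause)

    def num_unsat():
        return sum(1 for c in clauses if not satisfied(c))

    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return assign                      # all constraints satisfied
        clause = rng.choice(unsat)
        if rng.random() < p:
            var = abs(rng.choice(clause))      # random-walk move
        else:
            # Greedy move: flip the variable that leaves the fewest unsatisfied clauses.
            def cost(v):
                assign[v] = not assign[v]
                c = num_unsat()
                assign[v] = not assign[v]
                return c
            var = min((abs(l) for l in clause), key=cost)
        assign[var] = not assign[var]
    return None                                # no solution within the flip budget
```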
Keywords: demonstrative pronouns; personal pronouns; pronoun resolution.
We investigate the referring behavior of personal and demonstrative pronouns in two corpora: a collection of problem-solving dialogs from the TRAINS93 corpus and prepared news stories from the Boston University Radio Corpus. Unlike most studies of pronominal reference, which limit themselves to pronouns that co-specify the meaning of another noun phrase (called coreference annotation), this study has a wider scope and includes all pronouns in the discourse. As a result, a broader characterization is possible for the pronouns in question. This study shows that current models of pronoun resolution that assume each pronoun to have a nominal antecedent are of limited utility when applied to spontaneous language.
Keywords: natural language generation; natural language processing; overview.
Natural language generation is a knowledge-intensive, goal-directed process involving many interacting choices. Some questions that a generation system must answer include: (1) What information needs to be included in the output to satisfy the speaker's or writer's communicative goals? (2) How should a discourse contribution be structured to ensure its coherence? (3) Which modalities should be used to maximize the information exchange? (4) How can output be tailored to specific users? In this paper, we examine some aspects of natural language generation that constrain the planning process, including theories of discourse structure, models of discourse context and of users, and multimodal generation.
Keywords: syntactic disambiguation; statistical parsing.
For years, researchers have used knowledge-intensive techniques for disambiguating during parsing. These techniques required a lot of hand-coded information, and thus they would not scale to large domains. In addition, they often required the invention of pseudo-probabilities, which also do not scale, and provide ill-founded quantitative measures. The data-driven techniques, which have become popular over the past few years, seem appealing in light of this: once you have an annotated corpus, there is no need to code up knowledge bases or invent "magic numbers." However, these methods also have extensive failings, which we will detail. We present a framework for corpus-based syntactic disambiguation which pulls together the well-foundedness of the traditional approaches and the scalability of the corpus-based approaches. We also present a model of language production that places greater emphasis on lexical statistics.
Keywords: centering; dialog; discourse modelling.
The centering framework explains local discourse coherence by relating a speaker's focus of attention and the forms of referring expressions. Although this framework has proven useful in single-speaker discourse, its utility for multi-party discourse has not been shown. It is unclear how to adapt it to handle discourse phenomena such as turn-taking, acknowledgments, first and second person pronouns, and disfluencies. This paper reports our experiments applying three naive models of centering theory for dialog. These results will be used as a baseline for future, more sophisticated models.
Keywords: computer science education; cooperative learning; large classes; personalized system of education.
A Personalized System of Instruction (PSI) is a student-paced method of teaching in which students progress by displaying mastery of written material. Cooperative Learning is a method of instruction in which students work in groups to help each other study. In the Fall of 1996, a computer literacy course was offered in which half of the students followed a PSI curriculum and the other half followed a Cooperative Learning curriculum. Data from this experiment showed several statistically significant differences between the two curricula in student satisfaction as measured by end-of-the-semester course evaluation forms. These questionnaires indicated that students felt that the PSI classes increased their knowledge at the 99% confidence level. They also indicated that students felt that the PSI course procedures better supported course objectives, that the PSI course required more work, and that it was easier to get answers from the TAs in the PSI classes at the 95% confidence level. The data also showed statistically significant evidence that students learned more from the PSI curriculum as measured by exams. Analysis of rosters from the programming class offered the following semester showed no statistically significant difference between the proportion of the PSI students who took the programming class and the proportion of the cooperative learning students who took the programming class.
Keywords: clue words; cue phrases; discourse relations; adjacency pairs; grounding; discourse markers; task-oriented dialog.
Discourse markers, also known as clue words, are used extensively in human-human task-oriented dialogs to signal the structure of the discourse. Previous work showed their importance in monologs and social conversations for marking discourse structure, but little attention has been paid to their importance in spoken dialog systems. This paper investigates what discourse markers signal about the upcoming speech, and when they tend to be used in task-oriented dialog. We demonstrate that there is a high correlation between specific discourse markers and specific conversational moves, between discourse marker use and adjacency pairs, and between discourse markers and the speaker's orientation to information presented in the prior turn. We limit our analysis to turn-initial discourse markers and discover new patterns in their interaction with phenomena specific to dialog.
Keywords: language modeling; disfluencies; POS tagging; speech recognition; decision trees; intonational boundaries; discourse markers.
Interactive spoken dialog provides many new challenges for natural language understanding systems. One of the most critical challenges is simply determining the speaker's intended utterances: both segmenting a speaker's turn into utterances and determining the intended words in each utterance. Even assuming perfect word recognition, the latter problem is complicated by the occurrence of speech repairs, which occur when the speaker goes back and changes (or repeats) something she just said. The words that are replaced or repeated are no longer part of the intended utterance, and so need to be identified. The two problems of segmenting the turn into utterances and resolving speech repairs are strongly intertwined with a third problem: identifying discourse markers. Lexical items that can function as discourse markers, such as "well" and "okay," are ambiguous as to whether they are introducing an utterance unit, signaling a speech repair, or are simply part of the content of an utterance, as in "that's okay." Spoken dialog systems need to address these three issues together and early in the processing stream. In fact, just as these three issues are closely intertwined with each other, they are also intertwined with identifying the syntactic role or part-of-speech (POS) of each word and with the speech recognition problem of predicting the next word given the previous words.
In this thesis, we present a statistical language model for resolving these issues. Rather than finding the best word interpretation for an acoustic signal, we redefine the speech recognition problem so that it also identifies the POS tags, discourse markers, speech repairs, and intonational phrase endings (a major cue in determining utterance units). Adding these extra elements to the speech recognition problem actually allows it to better predict the words involved, since we are able to make use of the predictions of boundary tones, discourse markers, and speech repairs to better account for which word will occur next. Furthermore, we can take advantage of acoustic information, such as silence, which tends to co-occur with speech repairs and intonational phrase endings, but which current language models can only regard as noise in the acoustic signal. The output of this language model is a much fuller account of the speaker's turn, with a part of speech assigned to each word, intonational phrase endings and discourse markers identified, and speech repairs detected and corrected. In fact, the identification of the intonational phrase endings and discourse markers and the resolution of the speech repairs allow the speech recognizer to model the speaker's utterances, rather than simply the words involved, and thus to return a more meaningful analysis of the speaker's turn for later processing.
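A minimal illustrative sketch (not the thesis's actual model) of the idea of jointly scoring words and auxiliary tags such as POS, discourse-marker, and repair labels. The probabilities and tag names below are invented for demonstration; the thesis learns such statistics from annotated corpora with decision trees.

    # Toy joint model over (word, tag) pairs; numbers are hypothetical.
    JOINT = {
        ("<s>", "<s>"): {("okay", "DISCOURSE_MARKER"): 0.3, ("take", "VERB"): 0.2},
        ("okay", "DISCOURSE_MARKER"): {("take", "VERB"): 0.5},
        ("take", "VERB"): {("the", "DET"): 0.6},
        ("the", "DET"): {("train", "NOUN"): 0.4},
    }

    def sequence_probability(tagged_words, smoothing=1e-6):
        """Score a (word, tag) sequence with a first-order joint model."""
        prob = 1.0
        prev = ("<s>", "<s>")
        for pair in tagged_words:
            prob *= JOINT.get(prev, {}).get(pair, smoothing)
            prev = pair
        return prob

    hyp1 = [("okay", "DISCOURSE_MARKER"), ("take", "VERB"), ("the", "DET"), ("train", "NOUN")]
    hyp2 = [("okay", "ADJ"), ("take", "VERB"), ("the", "DET"), ("train", "NOUN")]
    print(sequence_probability(hyp1) > sequence_probability(hyp2))  # True: the marker reading wins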
Keywords: belief; simulative reasoning; simulative inference; computational model of belief.
We propose a semantics for belief in which the derivation of new beliefs from old ones is modeled as a computational process. Using this model, we characterize conditions under which it is appropriate to reason about other agents by simulating their inference processes with one's own.
Keywords: natural images; visual cortex; spatiotemporal coding; receptive fields; direction selectivity; sparse distributed representations.
The receptive fields of neurons in the mammalian primary visual cortex are oriented not only in the domain of space, but in most cases, also in the domain of space-time. While the orientation of a receptive field in space determines the selectivity of the neuron to image structures at a particular orientation, a receptive field's orientation in space-time characterizes important additional properties such as velocity and direction selectivity. Previous studies have focused on explaining the spatial receptive field properties of visual neurons by relating them to the statistical structure of static natural images. In this report, we examine the possibility that the distinctive spatiotemporal properties of visual cortical neurons can be understood in terms of a statistically efficient strategy for encoding natural time-varying images. We describe an artificial neural network that attempts to accurately reconstruct its spatiotemporal input data while simultaneously reducing the statistical dependencies between its outputs. The network utilizes spatiotemporally summating neurons and learns efficient sparse distributed representations of its spatiotemporal input stream by using recurrent lateral inhibition and a simple threshold nonlinearity for rectification of neural responses. When exposed to natural time-varying images, neurons in a simulated network developed localized receptive fields oriented in both space and space-time, similar to the receptive fields of neurons in the primary visual cortex.
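A schematic sketch of the kind of computation described: reconstruct an input from a set of basis functions while a threshold nonlinearity keeps the responses sparse and rectified. This is only an illustration under simplified assumptions (random basis, static input, gradient settling), not the report's spatiotemporal network.

    import numpy as np

    rng = np.random.default_rng(0)
    n_inputs, n_neurons = 64, 32
    W = rng.standard_normal((n_inputs, n_neurons)) * 0.1   # basis functions (columns)
    x = rng.standard_normal(n_inputs)                       # stand-in for an image patch

    def sparse_code(x, W, threshold=0.1, step=0.05, iters=200):
        """Iteratively minimize reconstruction error with a rectifying threshold."""
        a = np.zeros(W.shape[1])
        for _ in range(iters):
            residual = x - W @ a                 # reconstruction error
            a += step * (W.T @ residual)         # move responses toward a better reconstruction
            a = np.maximum(a - threshold, 0.0)   # threshold nonlinearity -> sparse, nonnegative responses
        return a

    a = sparse_code(x, W)
    print("active units:", int((a > 0).sum()), "of", n_neurons)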
Keywords: working memory; eye movements; natural tasks; saccades; saccade-contingent displays; visual memory.
This paper examines the nature of visual representations that direct ongoing performance in sensorimotor tasks. Performance of such natural tasks requires relating visual information from different gaze positions. To explore this we used the technique of making task-relevant display changes during saccadic eye movements. Subjects copied a pattern of colored blocks on a computer monitor, using the mouse to drag the blocks across the screen. Eye position was monitored using a dual-Purkinje eye tracker, and the color of blocks in the pattern was changed at different points in task performance. When the target of the saccade changed color during the saccade, the duration of fixations on the model pattern increased, depending on the point in the task that the change was made. Thus different fixations on the same visual stimulus served a different purpose. The results also indicated that the visual information that is retained across successive fixations depends on moment-by-moment task demands. This is consistent with previous suggestions that visual representations are limited and task dependent. Changes in blocks in addition to the saccade target led to greater increases in fixation duration. This indicated that some global aspect of the pattern was retained across different fixations. Fixation durations revealed effects of the display changes that were not revealed in perceptual report. This can be understood by distinguishing between processes that operate at different levels of description and different time scales. Our conscious experience of the world may reflect events over a longer time scale than those underlying the substructure of the perceptuo-motor machinery.
Keywords: object recognition; pose estimation; visual cortex; localized receptive fields; Kalman filters; neural networks.
Neurons in the visual cortex are known to possess localized, oriented receptive fields. It has previously been suggested that these distinctive properties may reflect an efficient image encoding strategy based on maximizing the sparseness of the distribution of output neuronal activities or alternately, extracting the independent components of natural image ensembles. Here, we show that a relatively simple neural solution to the problem of transformation-invariant visual recognition also causes localized, oriented receptive fields to be learned from natural images. These receptive fields, which code for various transformations in the image plane, allow a pair of cooperating neural networks, one estimating object identity (``what'') and the other estimating object transformations (``where''), to simultaneously recognize an object and estimate its pose by jointly maximizing the a posteriori probability of generating the observed visual data. We provide experimental results demonstrating the ability of these networks to factor retinal stimuli into object-centered features and object-invariant transformations. The resulting neuronal architecture suggests concrete computational roles for the neuroanatomical connections known to exist between the dorsal and ventral visual pathways.
Keywords: genetic programming; genetic algorithms; variable size representations; parsimony penalty; rooted tree-schema.
One serious problem of standard Genetic Programming (GP) is that evolved expressions appear to drift towards large and slow forms on average. This report presents a novel analysis of the role played by variable complexity in the selection and survival of GP expressions. It defines a particular property of GP representations, called the rooted tree-schema, that sheds light on the role of variable complexity of evolved representations. A tree-schema is a relation on the space of tree-shaped structures which provides a quantifiable partitioning of the search space. The present analysis answers questions such as: What role does variable complexity play in the selection and survival of evolved expressions? What is the influence of a parsimony penalty? How heavily should the parsimony penalty be weighted, or how should it be adapted, in order to preserve the underlying optimization process? Are there alternative approaches to simulating a parsimony penalty that do not result in a change of the fitness landscape? The present report provides theoretical answers to these questions, interpretation of these results, and an experimental perspective.
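A small sketch of what a parsimony penalty looks like in practice: raw task error plus a weighted tree-size term. The expressions, error values, and weight below are hypothetical; the report's contribution is analyzing how such a weight reshapes selection, not this particular formula.

    def tree_size(expr):
        """Count nodes in a nested-tuple expression tree, e.g. ('+', 'x', ('*', 'x', 'x'))."""
        if not isinstance(expr, tuple):
            return 1
        return 1 + sum(tree_size(child) for child in expr[1:])

    def penalized_fitness(raw_error, expr, alpha=0.01):
        """Lower is better: raw task error plus a size (parsimony) penalty weighted by alpha."""
        return raw_error + alpha * tree_size(expr)

    small = ('+', 'x', 'x')
    big = ('+', ('*', 'x', ('+', 'x', 'x')), ('*', 'x', 'x'))
    # The slightly more accurate but much larger tree loses once the penalty is applied.
    print(round(penalized_fitness(0.10, small), 3), round(penalized_fitness(0.09, big), 3))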
Keywords: TRAINS; spoken dialogue systems; task-based evaluation; natural language processing.
In this report we describe an experiment designed to: evaluate the performance of the TRAINS-96 system as a whole; examine the utility of a new robust post-parser module, recently added to the TRAINS system; and explore the benefit to the user of receiving system feedback on speech input. The evaluation uses the same task-based methodology as was used for the TRAINS-95 evaluation [Sikorski and Allen 96], in which the user and computer cooperatively solve a given problem. Success is measured in terms of task performance measures such as time to completion of a task, and the quality of the final plan produced.
Keywords: eye movements; visual search; computational models; reference frames; spatial memory.
Visual cognition depends critically on the moment-to-moment orientation of gaze. Gaze is changed by saccades, rapid eye movements that orient the fovea over targets of interest in a visual scene. Saccades are ballistic; a prespecified target location is computed prior to the movement and visual feedback is precluded. Once a target is fixated, gaze is typically held for about 300 milliseconds, although it can be held for both longer and shorter intervals. Despite these distinctive properties, there has been no specific computational model of the gaze targeting strategy employed by the human visual system during visual cognitive tasks. This paper proposes such a model that uses iconic scene representations derived from oriented spatiochromatic filters at multiple scales. Visual search for a target object proceeds in a coarse-to-fine fashion with the target's largest scale filter responses being compared first. Task-relevant target locations are represented as saliency maps which are used to program eye movements. Once fixated, targets are remembered by using spatial memory in the form of object-centered maps. The model was empirically tested by comparing its performance with actual eye movement data from human subjects in natural visual search tasks. Experimental results indicate excellent agreement between eye movements predicted by the model and those recorded from human subjects.
Keywords: reinforcement learning; task decomposition; modules.
In many domains, the task can be decomposed into a set of independent sub-goals. Often, such tasks are too complex to be learned using standard techniques such as Reinforcement Learning. The complexity is caused by the learning system having to keep track of the status of all sub-goals concurrently. Thus, if the solution to one sub-goal is known when another sub-goal is in some given state, the known solution must be relearned when the status of the other sub-goal changes.
This dissertation presents a modular approach to reinforcement learning that takes advantage of task decomposition to avoid unnecessary relearning. In the modular approach, modules are created to learn each sub-goal. Each module receives only those inputs relevant to its associated sub-goal, and can therefore learn without being affected by the state of other sub-goals. Furthermore, each module searches a much smaller space than that defined by all inputs considered together, thereby greatly reducing learning time. Since each module learns how to achieve a separate sub-goal, at any given time it may recommend an action different from that recommended by other modules. To select an action that best satisfies as many of the modules as possible, a simple arbitration strategy is used. One such strategy, explored in this dissertation, is called {\em greatest mass\/} which simply combines action utilities from all modules and selects the one with the largest combined utility.
Since the modular approach limits and separates information given to the modules, the solution learned must necessarily differ from that learned by a standard, non-modular approach. However, experiments in a simple driving world indicate that while sub-optimal, the solution learned by the modular system makes only minor errors when compared with that learned by the standard approach. Using the modular approach, a complex task can thus be learned very quickly, with only small amounts of computational resources and only small sacrifices in solution quality.
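A minimal sketch of the greatest-mass arbitration described above: each module reports utilities (e.g. Q-values) over the shared action set, and the action with the largest summed utility is selected. The module names and numbers are hypothetical.

    def greatest_mass(module_utilities):
        """module_utilities: list of dicts mapping action -> utility, one dict per module."""
        combined = {}
        for utilities in module_utilities:
            for action, value in utilities.items():
                combined[action] = combined.get(action, 0.0) + value
        return max(combined, key=combined.get)

    modules = [
        {"left": 0.2, "straight": 0.7, "right": 0.1},   # e.g. a lane-keeping module
        {"left": 0.6, "straight": 0.3, "right": 0.1},   # e.g. an overtaking module
    ]
    print(greatest_mass(modules))   # "straight": 0.7 + 0.3 beats "left": 0.2 + 0.6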
Keywords: appearance-based recognition; prediction; visual learning; Kalman filters; robust statistics.
Using results from the field of robust statistics, we derive a class of Kalman filters that are robust to structured and unstructured noise in the input data stream. Each filter from this class maintains robust optimal estimates of the input process's hidden state by allowing the measurement covariance matrix to be a non-linear function of the prediction errors. This endows the filter with the ability to reject outliers in the input stream. Simultaneously, the filter also learns an internal model of input dynamics by adapting its measurement and state transition matrices using two additional Kalman filter-based adaptation rules. We present experimental results demonstrating the efficacy of such filters in mediating appearance-based segmentation and recognition of objects and image sequences in the presence of varying degrees of occlusion, clutter, and noise.
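An illustrative one-dimensional sketch of the robustness idea: inflate the effective measurement noise when the prediction error (innovation) is large, so that outliers are down-weighted. This toy scalar version only hints at the paper's matrix formulation and learned dynamics; the constants are arbitrary.

    def robust_kalman(measurements, q=0.01, r=0.1, c=2.0):
        """1-D Kalman filter with innovation-dependent measurement variance."""
        x, p = 0.0, 1.0                       # state estimate and its variance
        estimates = []
        for z in measurements:
            p = p + q                         # predict (random-walk state model)
            innovation = z - x
            # Inflate the effective measurement variance for large innovations (robust step).
            r_eff = r * max(1.0, (innovation / c) ** 2)
            k = p / (p + r_eff)               # Kalman gain
            x = x + k * innovation            # update state
            p = (1 - k) * p                   # update variance
            estimates.append(x)
        return estimates

    data = [0.1, 0.0, 0.2, 5.0, 0.1, -0.1]    # the 5.0 plays the role of an outlier
    print([round(e, 2) for e in robust_kalman(data)])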
Keywords: pattern matching; TRAINS; discourse interpretation.
This document describes a new pattern matching engine used as part of the discourse reasoning components in the TRAINS-96 system. Its dominant characteristics are simplicity, efficiency, and an economical model for driving the search engine.
Keywords: default logic; autoepistemic logic; modal framework.
When we work with information from multiple sources, the formats of the knowledge bases may not be uniform. It would be desirable to be able to combine a knowledge base of default rules with one containing autoepistemic formulas. Previous work on relating default logic and autoepistemic logic has mostly imposed constraints on autoepistemic logic, and thus is not suitable for combining the two logics. We first present a fixed point formulation of autoepistemic logic analogous to that of default logic. Then we introduce a possible world framework with a partition structure, which corresponds to our intuitive notion of accessibility as linking alternate ``possible'' worlds. We show that both default logic and autoepistemic logic can be characterized using this framework, and that the constraints imposed on the possible world structures correspond to the requirements in the fixed point formulations. Casting both default logic and autoepistemic logic in a common framework is important for developing a semantics applicable to the two logics, both separately and combined.
Keywords: mixed-initiative planning; dialogue systems; interactive systems.
This document describes the design and implementation of TRAINS-96, a prototype mixed-initiative planning assistant system. The TRAINS-96 system helps a human manager solve routing problems in a simple transportation domain. It interacts with the human using spoken, typed, and graphical input and generates spoken output and graphical map displays. The key to TRAINS-96 is that it treats the interaction with the user as a dialogue in which each participant can do what they do best. The TRAINS-96 system is intended as both a demonstration of the feasibility of realistic mixed-initiative planning and as a platform for future research. This document describes both the design of the system and such features of its use as might be useful for further experimentation. Further references and a comprehensive set of manual pages are also provided.
Keywords: visual cortex; feedback; prediction; learning; Kalman filters; endstopping.
A characteristic feature of the mammalian visual cortex is the reciprocity of connections between cortical areas. While corticocortical feedforward connections have been well studied, the computational function of the corresponding feedback projections has remained relatively unclear. We have modelled the visual cortex as a hierarchical predictor wherein feedback projections carry predictions for lower areas and feedforward projections carry the difference between the predictions and the actual internal state. The activities of model neurons and their synaptic strength are continually adapted using a hierarchical Kalman filter that minimizes errors in prediction. The model generalizes several previously proposed encoding schemes and allows functional interpretations of a number of well-known psychophysical and neurophysiological phenomena. Here, we present simulation results suggesting that the classical phenomenon of endstopping in cortical neurons may be viewed as an emergent property of the cortex implementing a hierarchical Kalman filter-like prediction mechanism for efficient encoding and recognition.
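A schematic two-level sketch of the message passing described: the higher level sends a prediction down through generative (feedback) weights, and only the residual travels up the feedforward path. The dimensions and random weights are arbitrary, and this gradient-settling toy stands in for the paper's hierarchical Kalman filter.

    import numpy as np

    rng = np.random.default_rng(1)
    G = rng.standard_normal((16, 4)) * 0.5       # generative (feedback) weights: top state -> predicted input

    def settle(x, G, steps=100, lr=0.1):
        """Infer the top-level state r that best predicts input x by minimizing the residual."""
        r = np.zeros(G.shape[1])
        for _ in range(steps):
            residual = x - G @ r                 # feedforward signal: prediction error
            r += lr * (G.T @ residual)           # feedback pathway's estimate is refined
        residual = x - G @ r
        return r, residual

    x = G @ np.array([1.0, -0.5, 0.0, 2.0]) + 0.01 * rng.standard_normal(16)
    r, residual = settle(x, G)
    print("residual norm:", round(float(np.linalg.norm(residual)), 4))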
Keywords: dynamic recognition; visual cortex; Kalman filters; EM algorithm; MDL principle; learning; prediction; estimation.
The responses of visual cortical neurons during fixation tasks can be significantly modulated by stimuli from beyond the classical receptive field. Modulatory effects in neural responses have also been recently reported in a task where a monkey freely views a natural scene. In this paper, we describe a hierarchical network model of visual recognition that explains these experimental observations by using a form of the extended Kalman filter as given by the Minimum Description Length (MDL) principle. The model dynamically combines input-driven bottom-up signals with expectation-driven top-down signals to predict current recognition state. Synaptic weights in the model are adapted in a Hebbian manner according to a learning rule also derived from the MDL principle. The resulting prediction/learning scheme can be viewed as implementing a form of the Expectation-Maximization (EM) algorithm. The architecture of the model posits an active computational role for the reciprocal connections between adjoining visual cortical areas in determining neural response properties. In particular, the model demonstrates the possible role of feedback from higher cortical areas in mediating neurophysiological effects due to stimuli from beyond the classical receptive field. Simulations of the model are provided that help explain the experimental observations regarding neural responses in both free viewing and fixating conditions.
Keywords: knowledge representation; natural language understanding; dialogue systems.
We describe the goals, architecture, and functioning of the TRAINS-93 system, with emphasis on the representational issues involved in putting together a complex language processing and reasoning agent. The system is intended as an experimental prototype of an intelligent, conversationally proficient planning advisor in a dynamic domain of cargo trains and factories. For this team effort, our strategy at the outset was to let the designers of the various language processing, discourse processing, plan reasoning, execution and monitoring modules choose whatever representations seemed best suited for their tasks, but with the constraint that all should strive for principled, general approaches.
Disparities between modules were bridged by careful design of the interfaces, based on regular in-depth discussion of issues encountered by the participants. Because of the goal of generality and principled representation, the multiple representations ended up with a good deal in common (for instance, the use of explicit event variables and the ability to refer to complex abstract objects such as plans); and future unifications seem quite possible. We explain some of the goals and particulars of the KRs used, evaluate the extent to which they served their purposes, and point out some of the tensions between representations that needed to be resolved. On the whole, we found that using very expressive representations minimized the tensions, since it is easier to extract what one needs from an elaborate representation retaining all semantic nuances than to make up for lost information.
Keywords: spoken dialogue systems; natural language processing; evaluation.
In this paper we describe a recent experiment designed to evaluate the performance of the TRAINS-95 system. The evaluation uses a task-based evaluation methodology appropriate for dialogue systems such as TRAINS-95, where a human and a computer interact and collaborate to solve a given problem. In task-based evaluations, techniques are measured in terms of their effect on task performance measures such as how long it takes to develop a solution using the system, and the quality of the final plan produced. The evaluation explores the robustness of the TRAINS-95 system in the presence of word recognition errors, the amount of training required to effectively use the system, and user preferences.
Keywords: invariant recognition; visual cortex; dorsal and ventral pathways; Kalman filters; motion; stereo; prediction; learning.
We describe a general framework for modeling transformations in the image plane using a stochastic generative model. Algorithms that resemble the well-known Kalman filter are derived from the MDL principle for estimating both the generative weights and the current transformation state. The generative model is assumed to be implemented in cortical feedback pathways while the feedforward pathways implement an approximate inverse model to facilitate the estimation of current state. Using the above framework, we derive models for invariant recognition, motion estimation, and stereopsis, and present preliminary simulation results demonstrating recognition of objects in the presence of translations, rotations and scale changes.
Keywords: line detection; feature extraction; irregular parallel processing; lock preemption; deadlock prevention.
Finding lineal features in an image is an important step in many object recognition and scene analysis procedures. Previous feature extraction algorithms exhibit poor parallel performance because features often extend across large areas of the data set. This paper describes a parallel method for extracting lineal features based on an earlier sequential algorithm, stick growing. The new method produces results qualitatively similar to the sequential method. Experimental results show a significant parallel processing speed-up attributable to three key features of the method: a large number of lock-preemptible search jobs, a random priority assignment to source search regions, and an aggressive deadlock detection and resolution algorithm. This paper also describes a portable generalized thread model. The model supports a light-weight job abstraction that greatly simplifies parallel vision programming.
Keywords: lexical acquisition; lexical semantics; morphology.
Most natural language processing tasks require lexical semantic information such as verbal argument structure and selectional restrictions, corresponding nominal semantic class, verbal aspectual class, synonym and antonym relationships between words, and various verbal semantic features such as causation and manner. This dissertation addresses two primary questions related to such information: how one should represent it and how one can acquire it.
It is argued that, in order to support inferencing, a representation with well-understood semantics should be used. Standard first order logic has well-understood semantics and a multitude of inferencing systems have been implemented for it. However, standard first order logic, although a good starting point, needs to be extended before it can efficiently and concisely support all the lexically-based inferences needed. Using data primarily from the TRAINS dialogues, the following extensions are argued for: modal operators, predicate modification, restricted quantification, and non-standard quantifiers. These representational tools are present in many systems for sentence-level semantics but have not been discussed in the context of lexical semantics.
A number of approaches to automatic acquisition are considered and it is argued that a ``surface cueing'' approach is currently the most promising. Morphological cueing, a type of surface cueing, is introduced. It makes use of fixed correspondences between derivational affixes and lexical semantic information. The semantics of a number of affixes are discussed and data resulting from the application of the method to the Brown corpus is presented.
Finally, even if lexical semantics could be acquired on a large scale, natural language processing systems would continue to encounter unknown words. Derivational morphology can also be used at run-time to help natural language understanding systems deal with unknown words. A system is presented that provides lexical semantic information for such derived unknown words.
Keywords: dialogue systems; TRAINS; discourse structure; parsing.
Currently, the TRAINS dialog system uses a more or less standard chart parser as the interface between the text of the dialog and the rest of the dialog processing system. However, traditional chart parsers are not well equipped to handle dialogs because dialog constituents can be discontinuous, with interspersed acknowledgments, editing terms, repairs, etc. This paper proposes some modifications of the current TRAINS parser enabling it to handle discontinuous dialog structure. The representation of a dialog is still superficially hierarchical (rather than consisting of interleaved structures). This is made possible by two devices: one is to accommodate repairs (e.g., to uh .. to Corning) through explicit grammar rules; the other is to accommodate mid-sentence acknowledgments (e.g., okay), editing terms (e.g., uh), etc. as ``trailers'' attached to lexical items. We show how this works on a simple sample dialog. Because allowing for repairs and interruptions introduces much ambiguity, we also discuss some initial disambiguation techniques.
Keywords: parameter domains in planning; preprocessing in planning; improving planning efficiency; domain-independent planning; partial order planning; UCPOP.
We show that by inferring parameter domains of planning operators, given the definitions of the operators and the initial and goal conditions, we can often speed up the planning process by an order of magnitude or more. We infer parameter domains by a polynomial-time algorithm that uses forward propagation of sets of constants occurring in the initial conditions and in operator postconditions. During planning, parameter domains can be used to prune operator instances whose parameter domains are inconsistent with binding constraints, and to eliminate spurious "clobbering threats" that cannot, in fact, be realized without violating domain constraints. We illustrate these applications with examples from the UCPOP test suite and from the Rochester TRAINS transportation planning domain.
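A sketch of the forward-propagation idea on a hypothetical mini-domain (not the paper's algorithm verbatim): starting from constants in the initial conditions, repeatedly add to each parameter's domain the constants that could reach it through operator preconditions and effects, until a fixed point. Here the only operator is move(?v, ?from, ?to) with precondition at(?v, ?from) and effect at(?v, ?to).

    CITIES = {"depot", "rochester", "corning"}
    facts = {("truck1", "depot"), ("truck2", "rochester")}      # initial at(...) facts

    domains = {"?v": set(), "?from": set(), "?to": set()}
    changed = True
    while changed:
        changed = False
        new_facts = set()
        for v, origin in facts:
            for dest in CITIES - {origin}:
                # move(v, origin, dest) is applicable in principle, so propagate its constants.
                for param, value in (("?v", v), ("?from", origin), ("?to", dest)):
                    if value not in domains[param]:
                        domains[param].add(value)
                        changed = True
                new_facts.add((v, dest))                         # effect: at(v, dest)
        if not new_facts <= facts:
            facts |= new_facts
            changed = True

    print(domains)   # cities never enter ?v's domain, so bindings of ?v to a city can be pruned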
Keywords: structural disambiguation; corpus-based disambiguation.
This paper describes a prototype disambiguation module, KANKEI, which uses two corpora of the TRAINS project. In ambiguous verb phrases of form V...NP PP or V...NP adverb(s), the two corpora have very different PP and adverb attachment patterns; in the first, the correct attachment is to the VP 88.7\% of the time, while in the second, the correct attachment is to the NP 73.5\% of the time. KANKEI uses various n-gram patterns of the phrase heads around these ambiguities, and assigns parse trees (with these ambiguities) a score based on a linear combination of the frequencies with which these patterns appear with NP and VP attachments in the TRAINS corpora. Unlike previous statistical disambiguation systems, this technique thus combines evidence from bigrams, trigrams, and the 4-gram around an ambiguous attachment. In the current experiments, equal weights are used for simplicity but results are still good on the TRAINS corpora (92.2\% and 92.4\% accuracy). Despite the large statistical differences in attachment preferences in the two corpora, training on the first corpus and testing on the second gives an accuracy of 90.9\%. These results suggest that our technique captures attachment patterns that are useful across corpora.
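An illustrative sketch of the scoring idea only: combine, with equal weights, the corpus frequencies of the bigram, trigram, and 4-gram head patterns around the ambiguity under each attachment, and prefer the attachment with the higher combined score. The counts and head patterns below are invented, not taken from the TRAINS corpora.

    # Hypothetical pattern counts for VP vs. NP attachment of "to Avon" in
    # "send the engine to Avon".
    COUNTS = {
        "vp": {("send", "to"): 40, ("send", "engine", "to"): 12, ("send", "engine", "to", "avon"): 3},
        "np": {("engine", "to"): 9, ("send", "engine", "to"): 2, ("send", "engine", "to", "avon"): 1},
    }

    def attachment_score(attachment, patterns, weights=(1.0, 1.0, 1.0)):
        table = COUNTS[attachment]
        return sum(w * table.get(p, 0) for w, p in zip(weights, patterns))

    vp_patterns = [("send", "to"), ("send", "engine", "to"), ("send", "engine", "to", "avon")]
    np_patterns = [("engine", "to"), ("send", "engine", "to"), ("send", "engine", "to", "avon")]
    print("VP" if attachment_score("vp", vp_patterns) >= attachment_score("np", np_patterns) else "NP")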
Keywords: partial order planning; efficient planning; planning strategies; UCPOP.
We describe some simple domain-independent improvements to plan-refinement strategies for well-founded partial order planning that promise to bring this style of planning closer to practicality. One suggestion concerns the strategy for selecting plans for refinement among the current (incomplete) candidate plans. We propose an A* heuristic that counts only steps and open conditions, while ignoring ``unsafe conditions'' (threats). A second suggestion concerns the strategy for selecting open conditions (goals) to be established next in a selected incomplete plan. Here we propose a variant of a strategy suggested by Peot \& Smith and studied by Joslin \& Pollack; the variant gives top priority to unmatchable open conditions (enabling the elimination of the plan), second-highest priority to goals that can only be achieved uniquely, and otherwise uses LIFO prioritization. The preference for uniquely achievable goals is a ``zero-commitment'' strategy in the sense that the corresponding plan refinements are a matter of deductive certainty, involving no guesswork. In experiments based on modifications of UCPOP, we have obtained improvements by factors ranging from 5 to more than 1000 for a variety of problems that are nontrivial for the unmodified version. Crucially, the hardest problems give the greatest improvements.
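A minimal sketch of the two strategies as described, using hypothetical plan records rather than UCPOP data structures: rank candidate plans by steps plus open conditions (ignoring threats), and select the next open goal by giving top priority to unmatchable goals, then uniquely achievable ones, then LIFO.

    def plan_rank(plan):
        """A*-style heuristic value: smaller is better; threats are deliberately ignored."""
        return len(plan["steps"]) + len(plan["open_conditions"])

    def select_goal(open_conditions, num_achievers):
        """num_achievers maps a goal to how many ways it can be established."""
        for goal in open_conditions:
            if num_achievers[goal] == 0:      # unmatchable: this plan can be eliminated
                return goal
        for goal in open_conditions:
            if num_achievers[goal] == 1:      # zero-commitment: only one way to achieve it
                return goal
        return open_conditions[-1]            # otherwise LIFO

    plans = [
        {"steps": ["s1", "s2"], "open_conditions": ["g1"], "threats": ["t1", "t2"]},
        {"steps": ["s1"], "open_conditions": ["g1", "g2", "g3"], "threats": []},
    ]
    best = min(plans, key=plan_rank)
    print(plan_rank(best), select_goal(["g1", "g2", "g3"], {"g1": 3, "g2": 1, "g3": 2}))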
Keywords: spontaneous speech; dialogue; speech recognition; set representations; utterance triggers; dialogue triggers; adaptive probability boosting; word set error.
Based on the observation that the unpredictable nature of conversational speech makes it almost impossible to reliably model sequential word constraints, the notion of {\em word set error criteria} is proposed for improved recognition of spontaneous dialogues. The single pass Adaptive Boosting (AB) algorithm enables the language model weights to be tuned using the {\em word set error} criteria. In the two pass version of the algorithm, the basic idea is to predict a {\em set} of words based on some {\em a priori} information, and perform a re-scoring pass wherein the probabilities of the words in the predicted word set are amplified or {\em boosted} in some manner. An adaptive gradient descent procedure for tuning the {\em word boosting} factor has been formulated which enables the boost factors to be incrementally adjusted to maximize accuracy of the speech recognition system outputs on held-out training data using the word set error criteria. Two novel models which predict the required word sets are presented: {\em utterance triggers}, which capture within-utterance long-distance word inter-dependencies, and {\em dialogue triggers}, which capture local temporal dialogue-oriented word relations. The proposed Trigger and Adaptive Boosting (TAB) algorithm and the single pass Adaptive Boosting (AB) algorithm have been experimentally tested on a subset of the TRAINS-93 spontaneous dialogues and the TRAINS-95 semi-spontaneous corpus, and have resulted in improved performance.
Keywords: Kanerva memories; common filters; visual cortex; basis functions.
Recent neurophysiological experiments appear to indicate that the responses of visual cortical neurons in a monkey freely viewing a natural scene can sometimes differ substantially from those obtained when the same image subregions are flashed during a conventional fixation task. These new findings attain significance from the fact that neurophysiological research in the past has been based predominantly on cell recordings obtained during fixation tasks, under the assumption that these data would be useful in predicting responses in more general situations. We describe a hierarchical model of visual memory that reconciles the two differing experimental results mentioned above by predicting neural responses in both fixating and free-viewing conditions. The model dynamically combines input-driven bottom-up signals with expectation-driven top-down signals to achieve optimal estimation of current state using a Kalman filter based framework. The architecture of the model posits a role for the reciprocal connections between adjoining visual cortical areas in determining neural response properties.
Keywords: simulation; TRAINS; discrete event simulator; graphical simulator; probabilistic causal model.
The TRAINS world is a transportation domain in which commodities are moved from site to site by various forms of transportation. It includes factories, warehouses, trains, planes, ships and trucks, and the agents that operate these facilities. The TRAINS world simulator is a general purpose graphical simulator that includes modules for simulating the TRAINS world. The simulator allows extensions to the TRAINS world by providing a language for describing causal models.
Keywords: parsing; chart parsing; natural language.
This report is a user's manual for the TRAINS-95 parsing system. An accompanying report describes the grammar used in TRAINS-95, and the robust speech act interpretation system, which takes the chart and produces a series of speech acts that best characterize it. The parser is based on the bottom-up parser described in Natural Language Understanding, Second Ed. (Allen, 1994, Chapters 3, 4, and 5). It uses the same formats for the grammar and the lexical entries, and the same basic bottom-up algorithm. There are a number of extensions beyond the basic system described in the book, each of which will be discussed in this report, including: (1) support for parsing word lattices; (2) best-first parsing using context-free probabilistic rules; (3) incremental (word by word) parser with backup for corrections; (4) hierarchical feature values and extended unification options; (5) a hierarchical lexicon entry format that simplifies defining large lexicons; and (6) procedural attachment to chart actions.
Keywords: dialogue; prosody; spoken language understanding; spontaneous speech; the TRAINS-95 system.
The focus of this thesis proposal is to improve the ability of a computational system to understand spoken utterances in a dialogue with a human. Available computational methods for word recognition do not perform as well on spontaneous speech as we would hope. Even a state-of-the-art recognizer achieves slightly worse than 70\% word accuracy on (nearly) spontaneous speech in a conversation about a specific problem.
To address this problem, I will explore novel methods for post-processing the output of a speech recognizer in order to correct errors. I adopt statistical techniques for modeling the noisy channel from the speaker to the listener in order to correct some of the errors introduced there. The statistical model accounts for frequent errors such as simple word/word confusions and short phrasal problems (one-to-many word substitutions and many-to-one word concatenations). To use the model, a search algorithm is required to find the most likely correction of a given word sequence from the speech recognizer. The post-processor output should contain fewer errors, thus making interpretation by higher levels, such as parsing, more reliable.
Spontaneous speech is also challenging to process because it is more incremental than written language. Utterances frequently form brief phrases and fragments rather than full sentences; they tend to come in installments and refinements. Known methods for parsing do not perform as well as we would like in the face of these linguistic ambiguities and idiosyncrasies. Even state-of-the-art algorithms for parsing spontaneous language sustain high error rates.
To address the incrementality of spontaneously spoken utterances, I will develop methods for segmenting a given utterance into ``chunks'' representing individual thoughts. Given an utterance of spontaneous speech, a tool for automatic prosodic feature extraction will analyze the output of the error-correcting post-processor and the acoustic waveform to generate prosodic cues. These cues will aid a robust parser using a prosody-wise grammar to identify the incremental phrases in the utterance and to provide a syntactic analysis.
These components will augment the {\sc Trains-95} conversational planning assistant.
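A rough noisy-channel sketch of the post-correction idea described earlier in this abstract: choose the intended word that maximizes P(intended) * P(observed | intended). The word lists and probabilities are invented stand-ins for statistics the proposal intends to learn from data, and real correction operates over word sequences with a search algorithm rather than single words.

    LANGUAGE_MODEL = {"train": 0.02, "drain": 0.001, "rain": 0.005}
    CHANNEL = {                       # P(recognizer output | intended word), hypothetical
        ("drain", "train"): 0.15,     # "train" is sometimes misrecognized as "drain"
        ("drain", "drain"): 0.80,
        ("drain", "rain"): 0.05,
    }

    def correct(observed, candidates):
        def score(intended):
            return LANGUAGE_MODEL.get(intended, 1e-6) * CHANNEL.get((observed, intended), 1e-6)
        return max(candidates, key=score)

    print(correct("drain", ["train", "drain", "rain"]))   # the channel model recovers "train"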
Keywords: genetic programming; genetic algorithms.
Proceedings of a workshop held in conjunction with the 12th International Conference on Machine Learning in July 1995 in Tahoe City, CA.
Keywords: spoken dialogue; task-oriented dialogues; planning assistant; corpus.
This report describes a corpus of task-oriented dialogues set in the TRAINS domain. A user collaborates with a planning assistant to accomplish some task involving manufacturing and shipping goods in a railroad freight system. We include a description of the task, the collection situation, and the transcription conventions. The audio files, along with time-aligned word and phoneme transcriptions, are available on CD-ROM from the Linguistic Data Consortium. Altogether, there are 98 dialogs included, collected using 20 different tasks and 34 different speakers. This amounts to six and a half hours of speech, about 5900 speaker turns, and 55000 transcribed words.
Keywords: neural nets; learning machines; Vapnik-Chervonenkis dimension.
Training a learning machine from examples is accomplished by minimizing a quantitative error measure, the training error defined over a training set. A low error on the training set does not, however, guarantee a low expected error on any future example presented to the learning machine---that is, a low generalization error.
The main goal of the dissertation is to merge theory and practice: to develop theoretically based but experimentally adapted tools that allow an accurate prediction of the generalization error of an arbitrarily complex classifier. This goal is reached through experimental and theoretical studies of the relationship between the training and generalization error for a variety of learning machines. The result is the introduction of a practical and principled method for predicting the generalization error. The power and accuracy of the predictive procedure is illustrated by application to real-life problems. Theoretical inspiration for the model arises from calculations of the expected difference between the training and generalization error for some simple learning machines. Novel computations of this character are included in the dissertation. Experimental studies yield experience with the performance of real-life classifiers, and result in new capacity measures for a set of classifiers.
The dissertation also presents a new classification algorithm, the Soft Margin Classifier algorithm, for learning with errors on the training set. The algorithm is an extension of the Optimal Margin Classifier algorithm, and is consistently found to outperform its predecessor because it absorbs outlying and erroneous patterns in flexible margins.
Keywords: partial order planning; least commitment planning; planning strategies; improving planning efficiency.
We describe some simple domain-independent improvements to plan-refinement strategies for well-founded partial order planning that promise to bring this style of planning closer to practicality. One suggestion concerns the strategy for selecting plans for refinement among the current (incomplete) candidate plans. We propose an A* heuristic that counts only steps and open conditions, while ignoring "unsafe conditions" (threats). A second suggestion concerns the strategy for selecting open conditions (goals) to be established next in a selected incomplete plan. Here we propose a variant of a strategy suggested by Peot and Smith and studied by Joslin and Pollack; the variant gives top priority to unmatchable open conditions (enabling the elimination of the plan), second-highest priority to goals that can only be achieved uniquely, and otherwise uses LIFO prioritization. The preference for uniquely achievable goals is a "zero-commitment" strategy in the sense that the corresponding plan refinements are a matter of deductive certainty, involving no guesswork. In experiments based on modifications of UCPOP, we have obtained improvements by factors ranging from 5 to several hundred for a variety of problems that are nontrivial for the unmodified version. Crucially, the hardest problems give the greatest improvements.
Keywords: hierarchical genetic programming; diversity; causality; exploration vs. exploitation tradeoff; adaptation.
Hierarchical genetic programming (HGP) approaches rely on the discovery, modification, and use of new functions to accelerate evolution. This paper provides a qualitative explanation of the improved behavior of HGP, based on an analysis of the evolution process from the dual perspective of diversity and causality. From a static point of view, the use of an HGP approach enables the manipulation of a population of higher diversity programs. Higher diversity increases the exploratory ability of the genetic search process, as demonstrated by theoretical and experimental fitness distributions and expanded structural complexity of individuals. From a dynamic point of view, this report analyzes the causality of the crossover operator. Causality relates changes in the structure of an object with the effect of such changes, i.e., changes in the properties or behavior of the object. The analysis of crossover causality suggests that HGP discovers and exploits useful structures in a bottom-up, hierarchical manner. Diversity and causality are complementary, affecting exploration and exploitation in genetic search. Unlike other machine learning techniques that need extra machinery to control the tradeoff between them, HGP automatically trades off exploration and exploitation.
Keywords: 3-d object recognition; visual object recognition; memory-based learning; visual learning; principal views.
We describe a method of 3-D object recognition based on two-stage use of a general purpose associative memory and a principal views representation. The basic idea is to make use of semi-invariant objects called keys. A key is any robustly extractable feature that has sufficient information content to specify a 2-D configuration of an associated object (location, scale, orientation) plus sufficient additional parameters to provide efficient indexing and meaningful verification. The recognition system utilizes an associative memory organized so that access via a key feature evokes associated hypotheses for the identity and configuration of all objects that could have produced it. These hypotheses are fed into a second-stage associative memory, which maintains a probabilistic estimate of the likelihood of each hypothesis based on statistics about the occurrence of the keys in the primary database. Because it is based on a merged percept of local features rather than global properties, the method is robust to occlusion and background clutter, and does not require prior segmentation. Entry of objects into the memory is an active, automatic procedure. We have implemented a version of the system that allows arbitrary definitions for key features. Experiments using keys based on perceptual groups of line segments are reported. Good results were obtained on a database derived from approximately 150 images representing different views of 7 polyhedral objects.
Keywords: ability; inference; models; semantics; entailment; uncertain inference; probabilistic inference; nonmonotonic logic; defeasible reasoning; statistical uncertainty.
In ordinary first-order logic, a valid inference in a language {\bf L} is one in which the conclusion is true in every model of the language in which the premises are true. To accommodate inductive/uncertain/probabilistic/non-monotonic inference, we weaken that demand to the demand that the conclusion be true in a large proportion of the models in which the relevant premises are true. More generally, we say that an inference is [p,q] valid if its conclusion is true in a proportion lying between p and q of those models in which the relevant premises are true. If we include a statistical variable binding operator ``%'' in our language, there are many quite general (and useful) things we can say about uncertain validity. A surprising result is that some of these things may conflict with Bayesian conditionalization.
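A toy sketch of the counting idea behind [p,q] validity, restricted to a finite propositional language: among the models where the premises hold, what proportion also satisfy the conclusion? The atoms and formulas are hypothetical; the paper works with first-order languages and a statistical ``%'' operator, which this finite version only gestures at.

    from itertools import product

    ATOMS = ["bird", "flies", "penguin"]

    def models(formula):
        """Enumerate truth assignments (as dicts) satisfying a Python-callable formula."""
        for values in product([True, False], repeat=len(ATOMS)):
            m = dict(zip(ATOMS, values))
            if formula(m):
                yield m

    premises = lambda m: m["bird"] and (not m["penguin"] or not m["flies"])
    conclusion = lambda m: m["flies"]

    premise_models = list(models(premises))
    supporting = [m for m in premise_models if conclusion(m)]
    # Proportion of premise-models in which the conclusion holds: here 1 of 3.
    print(len(supporting), "/", len(premise_models))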
Keywords: knowledge representation; planning; defeasible reasoning.
This dissertation describes the formal foundations and implementation of a commonsense, mixed-initiative plan reasoning system. By ``plan reasoning'' I mean the complete range of cognitive tasks that people perform with plans including, for example, plan construction (planning), plan recognition, plan evaluation and comparison, and plan repair (replanning), among other things. ``Mixed-initiative'' means that several participants can each make contributions to the plan under development through some form of communication. ``Commonsense'' means that the system represents plans and their constituents at a level that is ``natural'' to us in the sense that they can be described and discussed in language. In addition, the reasoning that the system performs includes those conclusions that we would take to be sanctioned by common sense, including especially those conclusions that are defeasible given additional knowledge or time spent reasoning.
The main theses of this dissertation are the following: (1) Any representation of plans sufficient for commonsense plan reasoning must be based on an expressive and natural representation of such underlying phenomena as time, properties, events, and actions. (2) For mixed-initiative planning, plans should be viewed as arguments that a certain course of action under certain conditions will achieve certain goals. These theses are defended by presenting, first, a representation of events and actions based on interval temporal logic and, second, a representation of plans as arguments in a formal system of defeasible reasoning that explicitly constructs arguments. These two aspects of commonsense plan reasoning are combined and implemented in the TRAINS domain plan reasoner, which is also described in detail.
The emphasis in this dissertation is on breadth, taking as its data human communicative and plan reasoning abilities and developing formalisms that characterize these abilities and systems that approximate them. I therefore draw on literature from a broad range of disciplines in the development of these ideas, including: philosophy of language, linguistics and AI work on knowledge representation for the representation of events and actions, philosophical logic and AI work on nonmonotonic reasoning for representing defeasible knowledge and reasoning about it, and, of course, AI work on planning and plan recognition itself.
Keywords: deictic computations; embodiment; working memory; natural tasks; eye movements; brain computation; binding; sensory-motor tasks; pointers.
To describe phenomena that occur at different time scales, computational models of the brain necessarily must incorporate different levels of abstraction. We argue that at time scales of approximately one-third of a second, orienting movements of the body play a crucial role in cognition and form a useful computational level, termed the embodiment level. At this level, the constraints of the body determine the nature of cognitive operations, since the natural sequentiality of body movements can be matched to the natural computational economies of sequential decision systems. The way this is done is through a system of implicit reference termed deictic, whereby pointing movements are used to bind objects in the world to cognitive programs. We show how deictic bindings enable the solution of natural tasks and argue that one of the central features of cognition, working memory, can be related to moment-by-moment dispositions of body features such as eye movements and hand movements.
Keywords: speech recognition; speech representation; speech coding; lexical access.
A central difficulty with automatic speech recognition is the temporally inaccurate nature of the speech signal. Despite this, speech has been traditionally modeled as a purely sequential (albeit probabilistic) process. The usefulness of accurate sequence information is re-evaluated in this paper, both at the acoustic and lexical levels, for the task of speech recognition. At the acoustic level, speech segments are quantized into discrete vectors and converted into set representations as opposed to accurate sequences. Recognition of the quantized vector sets dramatically improved performance as contrasted with the corresponding vector sequence representations. At the lexical level, our study suggests that accurate sequence information is, again, not crucial. In fact, locally discarding phoneme sequence information may be useful for coping with errors such as insertions and substitutions. Based on the idea of phone set indexing, a lexical access algorithm is developed. Thus, this work questions the traditional approach of modeling speech as a purely sequential process, and suggests that discarding local sequential information may be a good idea. As an alternative to a purely sequential representation, a set representation seems to be a viable option.
Keywords: natural language processing; grounding; dialogue systems; speech act theory; discourse; situation-theory; plan-execution.
The process of adding to the common ground between conversational participants (called grounding) has previously been either oversimplified or studied only in an off-line manner. This dissertation presents a computational theory of grounding, including a protocol that can be used to determine, for any given state of the conversation, whether material has been grounded or what it would take to ground it. This protocol is related to the mental states of the participating agents, showing the motivations for performing particular grounding acts and what their effects will be.
We extend speech act theory to account for levels of action both above and below the sentence level, including the level of grounding acts described above. Traditional illocutionary acts are now seen to be multi-agent acts which must be grounded to have their usual effects.
A conversational agent model is provided, showing how grounding fits in naturally with the other functions that an agent must perform in engaging in conversation. These ideas are implemented within the TRAINS conversation system.
Also presented is a situation-theoretic model of plan execution relations, giving definitions of what it means for an action to begin, continue, complete, or repair the execution of a plan. This framework is then used to provide precise definitions of the grounding acts in terms of agents executing a general communication plan in which one agent must present the content and another acknowledge it.
Keywords: speech recognition; indexing; lexical access.
The usefulness of accurate sequence information is re-evaluated in this paper. A novel idea, called phonetic set hashing, for transforming phone sequences into words is then proposed. Phone sequences are mapped onto the corresponding phone sets, and the latter are used as keys for indexing appropriate words. Data-driven training strategies alleviate the problem of word segmentation. The robustness of phone set hashing to insertion, deletion, and substitution errors has also been studied. Experiments with subsets of the TIMIT database indicate that phone set hashing is a simple, fast scheme for word pre-selection.
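A minimal sketch of phone-set indexing: each lexicon word is filed under the set of its phones, so a recognized phone sequence with local ordering errors or a repeated phone still retrieves candidates. The tiny lexicon and phone strings below are hypothetical and ignore the data-driven training and scoring the paper describes.

    from collections import defaultdict

    LEXICON = {"train": ["t", "r", "ey", "n"], "rent": ["r", "eh", "n", "t"], "ten": ["t", "eh", "n"]}

    index = defaultdict(list)
    for word, phones in LEXICON.items():
        index[frozenset(phones)].append(word)          # order is deliberately discarded

    def candidates(observed_phones):
        return index.get(frozenset(observed_phones), [])

    print(candidates(["r", "t", "ey", "n"]))           # scrambled order still retrieves "train"
    print(candidates(["t", "eh", "n", "t"]))           # a repeated phone collapses to the same set, retrieving "ten"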
Keywords: uncertain inference; Dempster-Shafer belief function theory; combinatorial semantics; medical diagnosis.
This report compares two formalisms for uncertain inference, combinatorial semantics and Dempster-Shafer belief function theory, on the basis of an example from the domain of medical diagnosis. We review Shafer's example about the imaginary disease ploxoma and show how it would be represented in combinatorial semantics. We conclude that belief function theory has a qualitative advantage because it offers greater flexibility of expression and provides results about more specific classes of patients. Nevertheless, a quantitative comparison reveals that the inferences sanctioned by combinatorial semantics are more reliable than those of belief function theory.
Keywords: cross-modal; classification; connectionist; learning vector quantization (LVQ); neural networks; self-supervised; unsupervised learning.
This dissertation addresses the problem of unsupervised learning for pattern classification or category learning. A model that is based on gross cortical anatomy and implements biologically plausible computations is developed and shown to have classification power approaching that of a supervised discriminant algorithm.
The advantage of supervised learning is that the final error metric is available during training. Unfortunately, when modeling human category learning, or in constructing classifiers for autonomous robots, one must deal with not having an omniscient entity labeling all incoming sensory patterns. We show that we can substitute for the labels by making use of structure between the pattern distributions to different sensory modalities. For example, the co-occurrence of a visual image of a cow with a ``moo'' sound can be used to simultaneously develop appropriate visual features for distinguishing the cow image and appropriate auditory features for recognizing the moo.
We model human category learning as a process of minimizing the disagreement between outputs of sensory modalities processing temporally coincident patterns. We relate this mathematically to the optimal goal of minimizing the number of misclassifications in each modality and apply the idea to derive an algorithm for piecewise linear classifiers in which each network uses the output of the other networks as a supervisory signal.
Using the Peterson-Barney vowel dataset, we show that the algorithm finds appropriate placement for the classification boundaries. The algorithm is then demonstrated on the task of learning to recognize acoustic and visual speech from images of lips and their emanating sounds. Performance on these tasks is within 1-7\% of the related supervised algorithm (LVQ2.1).
Finally, we compare the algorithm to Becker's IMAX algorithm and give suggestions as to how the algorithm may be implemented in the brain using physiological results concerning the relationship between two types of neural plasticity, LTP and LTD, observed in visual cortical cells. We also show how the algorithm can be used as an efficient method for dealing with learning from data with missing values.
Keywords: mixed-initiative planning; dialogue systems; knowledge representation.
The TRAINS project is an effort to build a conversationally proficient planning assistant. A key part of the project is the construction of the TRAINS system, which provides the research platform for a wide range of issues in natural language understanding, mixed-initiative planning systems, and representing and reasoning about time, actions and events. Four years have now passed since the beginning of the project. Each year we have produced a demonstration system that focused on a dialog that illustrates particular aspects of our research. The commitment to building complete integrated systems is a significant overhead on the research, but we feel it is essential to guarantee that the results constitute real progress in the field. This paper describes the goals of the project, and our experience with the effort so far. This paper is to appear in the Journal of Experimental and Theoretical AI, 1995.
Keywords: planning; temporal prediction; simulation.
Temporal projection is a crucial task in planning. In order to achieve its goals, the planner must be able to reason about the consequences of its actions. In the real world, the planner does not have complete information about the environment or even the consequences of its actions. The planner thus must be able to reason about the probabilistic nature of the world and the probabilistic effects of its actions. Interactions between actions and events in the world are not only probabilistic; they also are temporally complex. Simultaneous events can interact in many different ways depending on their temporal properties. An action's temporal relation to its preconditions and effects can also become quite complex.
Existing projection systems are weak in their representations of temporally complex actions and events in a probabilistic world. Those that can handle probabilistic situations have a limited representation of temporal relations, while those that can handle complex temporal relations generally assume the world to be completely deterministic. Moreover, existing probabilistic projection systems are slow, which makes them impractical to scale up to larger problems. In this thesis, we propose a practical projection system that can handle both the probabilistic nature of the world and the temporally complex nature of actions. The projection is based on simulation methods. Projection is done by simulating possible courses of events, one at a time. The simulation traces are then collected and processed by a projection module front-end, which provides the planner and the execution monitor with probabilistic estimates of propositions' values. The representation allowed by the system is rich in both the temporal and the probabilistic aspects.
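The simulation-based projection scheme just described can be pictured with a small Monte Carlo sketch: repeatedly simulate a plan against a probabilistic world model, collect the traces, and answer queries of the form "how likely is proposition p to hold at time t?" The world model, plan format, and function names below are assumptions for illustration, not the thesis's actual representation.

    # Sketch of simulation-based probabilistic projection: run many stochastic
    # simulations of a plan, then estimate P(proposition holds at t) from the
    # collected traces. The world model here is purely illustrative.
    import random

    def simulate(plan, world_model, horizon):
        """Return one trace: a dict mapping time step -> set of true propositions."""
        state = set(world_model["initial"])
        trace = {0: frozenset(state)}
        for t in range(1, horizon + 1):
            for action in plan.get(t, []):
                for prop, prob in world_model["effects"][action]:
                    if random.random() < prob:    # probabilistic effect
                        state.add(prop)
            trace[t] = frozenset(state)
        return trace

    def project(plan, world_model, horizon, prop, t, n_runs=1000):
        """Estimate the probability that `prop` holds at time t."""
        hits = sum(prop in simulate(plan, world_model, horizon)[t] for _ in range(n_runs))
        return hits / n_runs

    # Hypothetical model: loading a boxcar succeeds with probability 0.9.
    model = {"initial": [], "effects": {"load": [("loaded", 0.9)]}}
    print(project({1: ["load"]}, model, horizon=2, prop="loaded", t=2))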
Keywords: dialogue transcriptions; utterances.
This document describes a toolkit and guidelines for the transcription of dialogues. The premise of these tools is that a dialogue between two people can be broken down into a series of utterance files, each spoken by one participant. This allows the transcription tools and standards already designed for single speaker speech to be used.
Keywords: spoken dialogue; speech repairs; utterances; discourse markers.
Spoken dialogue poses many new problems to researchers in the field of computational linguistics. In particular, conversants must detect and correct speech repairs, segment a turn into individual utterances, and identify discourse markers. These problems are interrelated. For instance, there are some lexical items whose role in an utterance can be ambiguous: they can act as discourse markers, signal a speech repair, or even be part of the content of an utterance unit. So, these issues must be addressed together. The resolution of these problems will allow a basic understanding of how a speaker's turn can be broken down into individual contributions to the dialogue. We propose that this resolution must be, and can be, done using local context; these decisions do not require a full understanding of the dialogue so far, nor, in most cases, a deep understanding of the current turn. Resolving these issues locally also means they can be resolved for the most part before later processing, and so will make a natural language understanding system more robust and able to deal with the unconstrained nature of spoken dialogue.
Keywords: knowledge representation; temporal reasoning; causal reasoning.
We present a representation of events and actions based on interval temporal logic that is significantly more expressive and more natural than most previous AI approaches. The representation is motivated by work in natural language semantics and discourse, temporal logic, and AI planning and plan recognition. The formal basis of the representation is presented in detail, from the axiomatization of time periods to the relationship between actions and events and their effects. The power of the representation is illustrated by applying it to the axiomatization and solution of several standard problems from the AI literature on action and change. An approach to the frame problem based on explanation closure is shown to be both powerful and natural when combined with our representational framework. We also discuss features of the logic that are beyond the scope of many traditional representations, and describe our approach to difficult problems such as external events and simultaneous actions.
Keywords: natural language processing; ambiguity; scope; context-dependency; discourse processes; semantic interpretation; task-oriented conversations; quantification; modals; definite descriptions.
The problem of ambiguity is central to any theory of language interpretation, whether our interest is in language processing in humans or in developing a usable natural language processing system. Psycholinguistic evidence suggests that human subjects are able to choose an interpretation when necessary, and that competing factors are involved in this choice; however, no theory of language interpretation deals satisfactorily with the combinatorial explosion paradox---the fact that no matter how ambiguous natural language sentences are, they are usually interpreted without significant effort. The main idea presented in this dissertation is that the scope preferences observed in the literature are not obtained by an independent `scope disambiguation' module, but are the result of independent interpretation processes such as definite description interpretation or the interpretation of modals. None of these interpretive procedures is especially concerned with `scope disambiguation,' but the result of these inferences is that relations of contextual dependency such as anaphoric reference or presuppositionality become part of the common ground; the scope preferences observed in the literature reflect these relations of dependency. The dissertation includes a formal proposal concerning the representation of contextual dependency and its impact on the semantics of sentence constituents.
The theory of ambiguity presented here is based on a distinction between semantic ambiguity, which can be captured implicitly by means of underspecified representations, and perceived ambiguity, which results from the process of discourse interpretation. A new model of the common ground is introduced that can be used to characterize both situations involving semantic ambiguity and situations involving perceived ambiguity.
The reasoning that leads to the establishment of scoping preferences makes use, I argue, of information that is pragmatic in nature; this calls for a model of discourse interpretation in which the common ground contains such information. In the case of spoken language conversations, the common ground must be a model of the discourse situation of the conversational participants.
Keywords: supervised learning; neural networks; self-organization; pattern recognition.
Supervised neural network learning algorithms have proven very successful at solving a variety of learning problems. However, they suffer from a common problem: they require explicit output labels. This requirement makes such algorithms implausible as biological models. In this paper, it is shown that pattern classification can be achieved in a multi-layered, feed-forward neural network, without requiring explicit output labels, by a process of supervised self-organization. The class projection is achieved by optimizing appropriate within-class uniformity and between-class discernibility criteria. The mapping function and the class labels are developed together iteratively using the derived self-organizing back-propagation algorithm. The ability of the self-organizing network to generalize to unseen data is also experimentally evaluated on real data sets, and compares favorably with traditional labeled supervision of neural networks. However, interesting features emerge from the proposed self-organizing supervision that are absent in conventional approaches. The further implications of self-organizing supervision with neural networks are also discussed.
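The within-class uniformity and between-class discernibility criteria can be pictured with a toy objective over network outputs. The exact criteria and optimization used in the paper may differ; the function below, including the margin term, is only an assumed illustration.

    # Illustrative (not the paper's exact) objective combining within-class
    # uniformity and between-class discernibility of network outputs.
    import numpy as np

    def self_organizing_loss(outputs, assignments, margin=1.0):
        """outputs: (N, K) network outputs; assignments: (N,) labels the network
        itself currently assigns (e.g., argmax of its outputs)."""
        loss = 0.0
        classes = np.unique(assignments)
        centroids = {c: outputs[assignments == c].mean(axis=0) for c in classes}
        # Within-class uniformity: outputs assigned to a class should cluster tightly.
        for c in classes:
            loss += float(np.sum((outputs[assignments == c] - centroids[c]) ** 2))
        # Between-class discernibility: centroids should stay at least `margin` apart.
        for i, c1 in enumerate(classes):
            for c2 in classes[i + 1:]:
                gap = np.linalg.norm(centroids[c1] - centroids[c2])
                loss += max(0.0, margin - gap) ** 2
        return loss

    # Hypothetical usage on three 2-way outputs.
    outputs = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8]])
    print(self_organizing_loss(outputs, outputs.argmax(axis=1)))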
Keywords: speech recognition; single level feedback; neural networks.
Speech of multiple speakers is transformed to speech produced by a single speaker (speech normalization) using cross-coding networks. Internal representations for classification are acquired by feeding back the internal speech (i-speech) produced. Training proceeds by unfolding the network through time, and combining the classification error with the intermediate speaker-normalization errors. Experimental results on multi-speaker syllable recognition tasks with trained and new speakers are discussed.
Keywords: temporal reasoning; complexity of reasoning; planning; reasoning with disjunctions.
We address the problems of determining consistency and of finding a solution for sets of 3-point relations expressing exclusion of a point from an interval, and for sets of 4-point relations expressing interval disjointness. Availability of these relations is an important requirement for dealing with the sorts of temporal constraints encountered in many AI applications such as plan reasoning. We prove that consistency testing is NP-complete and finding a solution is NP-hard.
Keywords: temporal reasoning; point algebra; interval algebra; qualitative temporal constraint; efficient reasoning; scalable systems; reasoning with disjunctions.
Reasoning about temporal information is an important task in many areas of Artificial Intelligence. In this paper we address the problem of scalability in temporal reasoning by providing a collection of new algorithms for efficiently managing large sets of qualitative temporal relations. We focus on the class of relations forming the Point Algebra (PA-relations) and on a major extension to include binary disjunctions of PA-relations (PA-disjunctions). Such disjunctions add a great deal of expressive power, including the ability to stipulate disjointness of temporal intervals, which is important in planning applications. Our representation of time is based on timegraphs, graphs partitioned into a set of chains on which the search is supported by a metagraph data structure. The approach is an extension of the time representation proposed by Schubert, Taugher and Miller in the context of story comprehension. The algorithms herein enable construction of a timegraph from a given set of PA-relations, querying a timegraph, and efficiently checking the consistency of a timegraph augmented by a set of PA-disjunctions. Experimental results illustrate the efficiency of the proposed approach.
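A much-simplified sketch of a timegraph-style query follows: points on the same chain carry integer pseudo-times, so precedence within a chain is a single comparison, while cross-chain queries consult cross-chain links (a real implementation searches the metagraph, which is omitted here). The class and function names are illustrative assumptions, not the systems' actual data structures.

    # Simplified sketch of a timegraph-style "before" query.
    class Point:
        def __init__(self, chain_id, pseudo_time):
            self.chain_id = chain_id          # which chain the point lies on
            self.pseudo_time = pseudo_time    # integer rank within that chain

    def before(p, q, cross_links):
        """Return True if p is known to precede q."""
        if p.chain_id == q.chain_id:
            return p.pseudo_time < q.pseudo_time      # same chain: one comparison
        # Different chains: look for a cross-link (a, b) with p <= a on p's chain
        # and b <= q on q's chain. A real system searches the metagraph instead.
        for a, b in cross_links:
            if (a.chain_id == p.chain_id and p.pseudo_time <= a.pseudo_time and
                    b.chain_id == q.chain_id and b.pseudo_time <= q.pseudo_time):
                return True
        return False

    # Hypothetical usage: a cross-link asserts chain-0 point 20 precedes chain-1 point 5.
    p, q = Point(0, 10), Point(1, 8)
    print(before(p, q, cross_links=[(Point(0, 20), Point(1, 5))]))  # True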
Keywords: temporal reasoning; point algebra; constraint networks; reasoning with inequations.
We analyze the problem of computing the minimal labels for a network of temporal relations in the Point Algebra. van Beek proposes an algorithm for accomplishing this task which takes $O(max(n^3,n^2\cdot m))$ time (for $n$ points and $m$ $\neq$-relations). We show that the proof of the correctness of this algorithm given by van Beek and Cohen is faulty, and we provide a new proof showing that the algorithm is indeed correct.
Keywords: temporal reasoning systems; point algebra; interval algebra; scalable systems.
We describe two domain-independent temporal reasoning systems called TimeGraph I and II, which can be used in AI-applications as tools for efficiently managing large sets of relations in the Point Algebra, in the Interval Algebra, and metric information such as absolute times and durations. Our representation of time is based on timegraphs, graphs partitioned into a set of chains on which the search is supported by a metagraph data structure. TimeGraph I was originally developed by Taugher, Schubert, and Miller in the context of story comprehension. TimeGraph II provides useful extensions, including efficient algorithms for handling inequations, and relations expressing point-interval exclusion and interval disjointness. These extensions make the system much more expressive in the representation of qualitative information and suitable for a large class of applications.
Keywords: biological learning; computer learning; biological development; acquisition of visual capabilities; visual architecture.
It seems to be a common feeling that animals learn to see, and this feeling, together with the re-emergence of computer learning paradigms that mimic many forms of human learning, has raised hopes that learning is the key to the computer vision problem. Indeed, it seems clear that Nature does not "program" all our visual capabilities into the genome, and we certainly know that programming a computer with a closed-form solution to the vision problem is a daunting task. The aim of this informal and elementary report (basically a term paper) is to cast doubt on the idea that biological systems learn to see. The complex process of development, beginning at fertilization and ending with a mature individual, could be considered to have genetic ("nature") and learning ("nurture") processes as logical endpoints or opposite poles. This report mostly considers what goes on between those endpoints, and is meant to raise the possibility that some of the least understood processes in biology are responsible for visual capabilities.
Keywords: planning; probabilities; statistics; knowledge representation.
This paper describes a formalism, Statistical Event Logic (SEL), that adds statistical reasoning to Allen's planning language [Allen et al., 1991 (Reasoning about Plans)]. Interval temporal logic supports reasoning about time and events; probability inferred from the planner's experience supports reasoning about incomplete information. Statistical Event Logic can represent knowledge that allows a planner to reason both about choices based on incomplete knowledge and about the future likely to result from these choices.
Keywords: classification; inheritance; defaults; approximation algorithm.
When one works with a system that utilizes inheritance hierarchies, the following problem often arises. A new object is introduced and it must be integrated into a hierarchy; under which classes in the hierarchy should the new object be positioned? In this paper, I formalize this problem for feature-based default inheritance hierarchies. Since it turns out to be NP-complete, I present an approximation algorithm for it. I show that this algorithm is efficient and look at some of the possible problematic situations for the algorithm. Although more analysis and experimentation are needed, these preliminary results show that the algorithm warrants such efforts.
Keywords: TRAINS; dialogue; prosody; discourse structure.
This paper describes how well prosodic information correlates with the topic structure of a cooperative dialogue. To investigate this correlation systematically, first we introduce the notion of utterance unit (UU) as a basic unit in conversations. We define the utterance unit by employing four principles. The grammatical principle is a syntactic criterion in which the UU boundary is set wherever the period can be placed. The pragmatic principle says that each UU corresponds to a basic speech act. In other words, if two neighboring phrases correspond to different speech acts (for instance, acknowledgment and request), they should be taken as two different UUs. The conversational principle addresses the turn-taking aspect of conversations. A UU boundary should be placed wherever the speaker changes. Finally, the prosodic principle says that whenever a medium length or longer pause (750 msec) is inserted between two phrases, they are to be taken as two different UUs. We apply these principles to a speech database containing about one and a half hours of collected dialogue to split the dialogues into a sequence of UUs. We then classify the inter-UU boundaries based on the relationship between two neighboring UUs into four semantic categories: topic shift, topic continuation, elaboration (or clarification), and speech-act continuation. The prosodic parameters measured at each boundary are the onset fundamental frequency (F0), the final F0, and the F0 maximal peak declination ratio (the ratio of the current UU's maximal peak to that of the preceding UU). Our study shows how these prosodic parameters vary depending on the topic structure. Our results can be summarized as follows. (1) The onset F0 value tends to be higher when the topic is changed at the UU boundary. (2) The final F0 value indicates finality and is much higher (on average) at speech-act continuation boundaries than at other boundaries. (3) The maximal peak declination ratio reflects the degree of subordination to the preceding UU. That is, this ratio is lowest at elaboration boundaries and highest at topic shift boundaries. Finally, we discuss discourse structure identification via the prosodic parameters.
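For concreteness, the three boundary parameters named above can be computed from per-UU F0 contours roughly as in the sketch below. The code and the example values are illustrative assumptions, not the paper's measurement procedure.

    # Illustrative computation of the three boundary parameters, given F0
    # contours (in Hz) for two adjacent utterance units (UUs).
    def boundary_features(prev_f0, curr_f0):
        """prev_f0, curr_f0: lists of F0 values for the preceding and current UU."""
        onset_f0 = curr_f0[0]                            # F0 at the start of the current UU
        final_f0 = prev_f0[-1]                           # F0 at the end of the preceding UU
        declination_ratio = max(curr_f0) / max(prev_f0)  # current peak / preceding peak
        return onset_f0, final_f0, declination_ratio

    # Hypothetical contours: a high onset F0 and high declination ratio would
    # point toward a topic shift; a low ratio toward an elaboration.
    print(boundary_features([210, 180, 150], [230, 200, 170]))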
Integrated, flexible, surviving, autonomous AI systems are on the horizon, raising new issues in systems support. These systems typically embody hard real-time constraints (for servoing) and soft real-time constraints (solving problems to some level of effectiveness within some time constraints). We assume an adequate hard real-time control substrate, and are concerned here with resource allocation for high-level decision-making in Soft PArallel Real-Time ApplicationS (SPARTAS). Such applications need only respond to their environment more quickly than the environment can change dramatically on them. SPARTAS often generate behavior by running high-level algorithms based on a model of the world and on information from the environment. Designing an executive for SPARTAS is challenging, since in its full generality it calls for dynamic decision-making about resource allocation, scheduling, choice of methods, and handling reflexive or reactive behavior smoothly within a context of planned or intended actions, and a host of other issues not typically encountered either in off-line or hard real-time applications. An important aspect of the environment for a SPARTA is its own state: what resources are being used for what purposes, and which are available. Modern SPARTAS are written on parallel computers, further complicating matters. We are designing Ephor, a run-time environment for parallel machines to alleviate some of the difficulties faced by a SPARTA programmer. In this paper we briefly describe Ephor, show how it allows simpler application code, and demonstrate that Ephor improves problem-solving performance in the presence of varying internal system state by dynamically choosing between different planners.
Keywords: temporal reasoning.
This paper describes the performance evaluation of six temporal reasoning systems. We show that if you are working with large temporal datasets where information is added incrementally throughout the execution of the program, systems using incompletely connected graphs (i.e., TMM, TimeGraph and TimeGraph-II) seem the best option. While they do not offer the constant query time of systems using fully connected graphs (i.e., the systems based on constraint satisfaction), the savings at assertion time are so substantial that the relatively small performance penalty for queries is a reasonable tradeoff. Of course, these systems do not offer the expressivity of the interval-based systems as they only handle point-based relations. Of the three, TimeGraph-II offers a wider range of qualitative relations as it handles point inequality. It does not currently handle metric information, however, as do TMM and TimeGraph. Thus decisions between these three may be more determined by the reasoning capabilities required rather than raw performance.
Keywords: evidence combination; evidential probability.
In uncertain reasoning one often needs to combine conflicting pieces of evidence. We show how the need for evidence combination arises in Kyburg's Evidential Probability system and investigate various methods of dealing with it.
Keywords: TRAINS; spoken language corpus; task-oriented dialogue; conversation.
This report contains a small corpus of transcriptions of task oriented spoken conversations in the TRAINS domain. Included are 16 conversations, amounting to over 80 minutes of speech. Also included are a description of the task and collection situation and the conventions used in transcription and utterance segmentation.
Keywords: reasoning about change; frame problem; monotonic reasoning; explanation closure; Sandewall test suite.
Explanation closure (EC) axioms were previously introduced as a means of solving the frame problem. This paper provides a thorough demonstration of the power of EC combined with action closure (AC) for reasoning about dynamic worlds, by way of Sandewall's test suite of 12-or-so problems [Sandewall 1991; 1992]. Sandewall's problems range from the "Yale turkey shoot" (and variants) to the "stuffy room" problem, and were intended as a test and challenge for nonmonotonic logics of action. The EC/AC-based solutions for the most part do not resort to nonmonotonic reasoning at all, yet yield the intuitively warranted inferences in a direct, transparent fashion. While there are good reasons for ultimately employing nonmonotonic or probabilistic logics---e.g., pervasive uncertainty and the qualification problem---this does show that the scope of monotonic methods has been underestimated. Subsidiary purposes of the paper are to clarify the intuitive status of EC axioms in relation to action effect axioms; and to show how EC, previously formulated within the situation calculus, can be applied within the framework of a temporal logic similar to Sandewall's "discrete fluent logic," with some gains in clarity.
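As a schematic illustration of the general form of an EC axiom (not quoted from the paper, and with hypothetical predicate and action names): if an agent ceases to hold an object over an interval, then some action capable of releasing it must have occurred within that interval.

    \[
    \forall x, y, i\; \big[\mathit{Holds}(\mathit{has}(x,y), \mathit{start}(i)) \wedge
        \neg\mathit{Holds}(\mathit{has}(x,y), \mathit{end}(i))\big] \rightarrow
      \exists e\; \big[(\mathit{putdown}(x,y,e) \vee \mathit{drop}(x,y,e)) \wedge
        \mathit{During}(e, i)\big]
    \]

Axioms of this shape "close off" the possible explanations of a change, which is what licenses the monotonic frame inferences discussed above.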
Keywords: memory for reinforcement learning; perceptual aliasing.
This report presents a method by which a reinforcement learning agent can solve the incomplete perception problem using memory. The agent uses a Hidden Markov Model (HMM) to represent its internal state space and creates memory capacity by splitting states of the HMM. The key idea is a test to determine when and how a state should be split: the agent only splits a state when the split will help the agent predict utility. Thus the agent can build an internal state space proportionate to the task at hand, not as large as would be required to represent all of its perceivable world. I call the technique UDM, for Utile Distinction Memory.
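The split test at the heart of UDM can be caricatured as follows: an aliased internal state is split only when the candidate substates predict returns that differ enough to matter. The HMM machinery and the actual statistical test are omitted; the grouping by predecessor state and the threshold below are assumptions for illustration.

    # Toy sketch of a utility-based split test: split a state only when the
    # candidate substates (here, grouped by predecessor state) predict returns
    # that differ by more than a threshold.
    from statistics import mean

    def should_split(returns_by_predecessor, threshold=1.0):
        """returns_by_predecessor: dict mapping predecessor state -> list of
        observed returns when the aliased state was entered from that predecessor."""
        means = [mean(v) for v in returns_by_predecessor.values() if v]
        return len(means) > 1 and (max(means) - min(means)) > threshold

    # Hypothetical aliased state: entering from s1 yields high return, from s2
    # low return, so the split would help predict utility.
    print(should_split({"s1": [9.0, 10.0, 8.5], "s2": [1.0, 0.5, 1.5]}))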
Keywords: science; rationality; credibility; probability; evidence; acceptance.
There is a fundamental division between two approaches to cognition and inference in the real world. These approaches may be found in relatively pure form among "probabilists" and "logicists" in artificial intelligence. Given evidence and background knowledge, the justifiable inference, on the first view, is that the probability of the conclusion is p (or that its degree of certainty is c, etc.). Given evidence and background knowledge, the inference to the conclusion, on the second view, is justified just in case it conforms to an acceptable (often non-monotonic) principle of inference. This is such a fundamental difference that it may well be that abstract arguments are not really going to prove much. It is surely the case that both approaches should be explored and tested. This article will explore the dimensions of a research program based on a particular version of the second approach: that in which the conclusion of an inference from data and background knowledge is justified if that data and background knowledge renders the conclusion probable enough. The corresponding conclusion on the first view would be: the conclusion is highly probable. But our conclusion is categorical; it is not qualified or hedged; it is accepted. Nevertheless, it is accepted defeasibly: more data could lead us to withdraw it.
Questions of various sorts arise: What are the relations between this sort of defeasible reasoning and ordinary deductive reasoning? What is the source of the "data and background knowledge" on the basis of which we derive conclusions? Where do the probabilities come from? How do we use this structure for making decisions? How do we choose a level of practical certainty? How does this structure relate to other non-monotonic formalisms? How is it related to probabilistic structures? Both traditional probabilists and non-monotonic reasoners take evidence as "given," and modify beliefs in its light. But we must also consider the reliability of evidence: we evaluate our evidence in the light of what we believe. Is there a vicious circularity here?
What I seek to explore is the question of empirical argument and inference as it concerns us in the world. I don't expect to provide definitive answers to these questions (and I would not presume to think that they are the only questions that can be raised!), but I hope to be able to provide provocative indications of the form that answers might take.
Keywords: parallel genetic algorithms; integer linear programming; transputers; distributed-memory systems.
The implementation of genetic algorithms raises many important issues. These issues can be divided into two main classes: genetic search quality and execution performance. In the context of parallel genetic algorithms on distributed-memory computers, performance considerations have always driven the design of implementations. Thus, centralized implementations have not previously been seriously considered for distributed-memory architectures. The work we present here defines a set of genetic algorithm implementation alternatives for distributed-memory computers, in which strategies with some centralization are included. Each of our implementation alternatives uses a different level of distribution of the population, from the single logically centralized population to a totally distributed set of subpopulations.
The design alternatives we define can be applied to the implementation of any parallel genetic algorithm. As an example of such an implementation, we study the quality of the search and the execution performance of our strategies on the 0-1 Integer Linear Programming problem, on a Transputer network. Our results show that implementations incurring higher overheads can produce as good or better solutions faster than very "efficient" implementations, depending on the characteristics of the problem at hand. More specifically, in some cases, utilizing more centralized parallel genetic search strategies results in the fastest convergence towards the optimal solution, therefore reducing the number of generations needed by the algorithm.
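One end of the distribution spectrum described above, a fully distributed island arrangement in which each worker evolves its own subpopulation and occasionally migrates individuals, can be sketched as follows; the logically centralized alternative would keep a single population and only parallelize fitness evaluation. All names, operators, and parameters are illustrative assumptions, not the paper's implementation.

    # Illustrative island-model genetic search with ring migration.
    import random

    def mutate(x):
        """Small Gaussian perturbation, clipped to [0, 1]."""
        return min(1.0, max(0.0, x + random.gauss(0, 0.05)))

    def evolve_islands(n_islands, pop_size, generations, fitness, migrate_every=5):
        """Each island evolves its own subpopulation; the best individual of each
        island migrates around a ring every few generations."""
        islands = [[random.random() for _ in range(pop_size)] for _ in range(n_islands)]
        for g in range(generations):
            for pop in islands:
                pop.sort(key=fitness, reverse=True)
                # Replace the worse half with mutated copies of the better half.
                pop[pop_size // 2:] = [mutate(x) for x in pop[:pop_size // 2]]
            if g % migrate_every == 0:
                best = [pop[0] for pop in islands]
                for i, pop in enumerate(islands):
                    pop[-1] = best[(i - 1) % n_islands]   # receive neighbour's best
        return max((ind for pop in islands for ind in pop), key=fitness)

    # Hypothetical usage: maximize a simple one-dimensional fitness function.
    print(evolve_islands(4, 10, 20, fitness=lambda x: -(x - 0.7) ** 2))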
Keywords: referring expressions; collaboration; planning; discourse.
This paper presents a computational model of how conversational participants collaborate in order to make a referring action successful. The model is based on the view of language as goal-directed behavior. We propose that the content of a referring expression can be accounted for by the planning paradigm. Not only does this approach allow the processes of building referring expressions and identifying their referents to be captured by plan construction and plan inference, it also allows us to account for how participants clarify a referring expression by using meta-actions that reason about and manipulate the plan derivation that corresponds to the referring expression. To account for how clarification goals arise and how inferred clarification plans affect the agent, we propose that the agents are in a certain state of mind, and that this state includes an intention to achieve the goal of referring and a plan that the agents are currently considering. It is this mental state that sanctions the adoption of goals and the acceptance of inferred plans, and so acts as a link between understanding and generation.
Keywords: moving light display; visual motion; cognitive modeling; attention; connectionist; recognition; visual sequence; temporal sequence.
Recognition of motion sequences is a crucial ability for biological and robot vision systems. We present an architecture for the higher-level processes involved in recognition of complex structured motion. The work is focused on modeling human recognition of Moving Light Displays. MLDs are image sequences that contain only motion information at a small number of locations. Despite the extreme paucity of information in these displays, humans can recognize MLDs generated from a variety of common human movements. This dissertation explores the high-level representations and computational processes required for the recognition task. The structures and algorithms are articulated in the language of structured connectionist models. The implemented network can discriminate three human gaits from data generated by several actors. Recognition of any motion involves indexing into stored models of movement. We present a representation for such models, called scenarios, based on coordinated sequences of discrete motion events. A method for indexing into this representation is described. We develop a parallel model of spatial and conceptual attention that is essential for disambiguating the spatially and temporally diffuse MLD data. The major computational problems addressed are: (1) representation of time-varying visual models; (2) integration of visual stimuli over time; (3) gestalt formation in and between spatially-localized feature maps and central movement representations; (4) contextual feedback to lower levels; and (5) the use of attention to focus processing on particular spatial locations and particular high-level representations. Several novel connectionist mechanisms are developed and used in the implementation.
In particular, we present advances in connectionist representation of temporal sequences and in using high-level knowledge to control an attentional mechanism. We show that recognition of gait can be achieved directly from motion features, without complex shape information, and that the motion information need not be finely quantized. We show how the "what" and "where" processes in vision can be tightly coupled in a synergistic fashion. These results indicate the value of the structured connectionist paradigm in modeling perceptual processes: no previous computational model has accounted for MLD recognition and we do not know how it would be approached in any other paradigm.
Keywords: knowledge representation; plans; temporal reasoning; defeasible reasoning.
We propose a wide-ranging knowledge representation formalism designed expressly to support many different forms of reasoning about plans. We begin with an event-based language based on the interval temporal logic. The language supports reasoning about action attempts and composite actions, both of which are given axiomatic definitions. We then define a representation for plans viewed as arguments that a certain course of action under certain explicit conditions will achieve certain goals. We can represent both correct and incorrect plans, and reason about why they might or might not fail. An important aspect of this work is the formal characterization of plan reasoning as assumption-based reasoning, to make the non-deductive aspects of plan reasoning explicit. A preliminary implementation of these ideas has already been built as the plan reasoning component of the TRAINS system.
Keywords: lexicon; word semantics; word formation; derivational affixation.
Lexicon coverage is often the limiting factor in natural language processing systems. Recent work has attempted to remedy this situation by extracting information from machine readable dictionaries. Unfortunately, no NLP lexicon system or dictionary could possibly list all the potential words of English. However, humans are often able to interpret novel word forms (that is, words they have not seen before) without difficulty. One way we do this, if the word is complex (e.g., "undecidability"), is by using cues from the internal structure of the word. Relations in phonological form often correspond to relations in meaning. For example, if someone knows what the verb "open" means, a number of educated guesses can be made about the meaning of "reopen". Exceptions abound in lexical data and any system that attempts to use lexical generalizations must be able to handle exceptions in a principled fashion. In this report, I will describe the preliminary design of a system that uses relations in form to derive relations in meaning. For a new word, the system will produce meaning postulates that represent an educated guess about the meaning of the new word. These meaning postulates will be written in Episodic Logic, and the entire system will be a module of the TRAINS system.
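The form-to-meaning step can be caricatured with a tiny affix table: strip a recognized affix from a novel word and, if the stem is known, emit a schematic gloss standing in for a meaning postulate. The real system targets Episodic Logic within TRAINS; the rules, lexicon, and output format below are purely illustrative.

    # Toy sketch of deriving a meaning postulate from word form. The rule table
    # and gloss format are hypothetical stand-ins for Episodic Logic output.
    AFFIX_RULES = {
        "re-":   lambda stem: f"(x {stem}s y again)",
        "un-":   lambda stem: f"(x reverses the state of being {stem}ed)",
        "-able": lambda stem: f"(y is such that one can {stem} y)",
    }

    def guess_meaning(word, lexicon):
        """Return a schematic meaning postulate for a novel complex word, if any."""
        for affix, rule in AFFIX_RULES.items():
            if affix.endswith("-") and word.startswith(affix[:-1]):
                stem = word[len(affix) - 1:]
            elif affix.startswith("-") and word.endswith(affix[1:]):
                stem = word[:-(len(affix) - 1)]
            else:
                continue
            if stem in lexicon:                  # only guess when the stem is known
                return f"{word}: {rule(stem)}"
        return None

    print(guess_meaning("reopen", {"open"}))     # reopen: (x opens y again)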
Keywords: speech acts; conversation; literal meaning; discourse; grounding; turn taking.
A linguistic form's compositional, timeless meaning can be surrounded or even contradicted by various social, aesthetic, or analogistic companion meanings. This paper addresses a series of problems in the structure of spoken language discourse, including turn-taking and grounding. It views these processes as composed of fine-grained actions, which resemble speech acts both in resulting from a computational mechanism of planning and in having a rich relationship to the specific linguistic features which serve to indicate their presence. The resulting notion of Conversation Acts is more general than speech act theory, encompassing not only the traditional speech acts but turn-taking, grounding, and higher-level argumentation acts as well. Furthermore, the traditional speech acts in this scheme become fully joint actions, whose successful performance requires full listener participation.
This paper presents a detailed analysis of spoken language dialogue. It shows the role of each class of conversation acts in discourse structure, and discusses how members of each class can be recognized in conversation. Conversation acts, it will be seen, better account for the success of conversation than speech act theory alone.
Keywords: uncertainty; decision-making; maximum entropy; Bayesian methods.
Problems for strict and convex Bayesianism are discussed. A set-based Bayesianism generalizing convex Bayesianism and intervalism is proposed. This approach abandons not only the strict Bayesian requirement of a unique real-valued probability function in any decision-making context but also the requirement of convexity for a set-based representation of uncertainty. Levi's E-admissibility decision criterion is retained and is shown to be applicable in the non-convex case.
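Levi's E-admissibility criterion, retained above in the non-convex setting, can be checked directly over a finite set of probability functions: an act is E-admissible if and only if it maximizes expected utility under at least one member of the set. The example acts, utilities, and probability functions below are hypothetical.

    # Toy check of E-admissibility over a finite (not necessarily convex)
    # set of probability functions.
    def expected_utility(probs, utilities):
        return sum(p * u for p, u in zip(probs, utilities))

    def e_admissible(acts, prob_set):
        """acts: dict act -> list of per-state utilities; prob_set: list of
        probability vectors over the same states. Returns the E-admissible acts."""
        admissible = set()
        for probs in prob_set:
            best = max(acts, key=lambda a: expected_utility(probs, acts[a]))
            admissible.add(best)
        return admissible

    # Two states (rain, dry) and two probability functions in the set:
    # each act is best under one of them, so both are E-admissible.
    acts = {"bet_rain": [10, -5], "bet_dry": [-5, 10]}
    print(e_admissible(acts, [[0.8, 0.2], [0.3, 0.7]]))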
The RHET system is a knowledge representation tool that is intended to support the development of advanced prototype natural language understanding and planning systems. It is what is currently called a "hybrid" representation, which consists of a set of separately defined specialized reasoning systems that are presented to the user within a single uniform framework. It can be used as a horn-clause based theorem proving system, or it can be used as a rich frame-based representation, or used in any way falling between these styles of use. The primary specialized reasoning components include a type hierarchy system, an equality reasoning system, a temporal reasoning system, and a hierarchical context mechanism that supports reasoning about different agents' beliefs as well as hypothetical reasoning. This report provides a sequence of tutorials, each demonstrating a major feature of the system.
Rhetorical (Rhet) is a programming / knowledge representation system that offers a set of tools for building an automated reasoning system. Its emphasis is on flexibility of representation. This document extends TR 326 with more information about the internals of the Rhet system. In addition, it provides the information needed for users to write their own builtin functions or, better, lispfns (which use internally provided library functions).
Rhetorical (Rhet) is a programming / knowledge representation system that offers a set of tools for building automated reasoning systems. Its emphasis is on flexibility of representation, allowing the user to decide if the system will basically operate as a theorem prover, a frame-like system, or an associative network. Rhet may be used as the back-end to a user's programming system and handle the knowledge representation chores, or it may be used as a full-blown programming language. Rhet offers two major modes of inference: a horn clause theorem prover (backwards chaining mechanism) and a forward chaining mechanism. Both modes use a common representation of facts, namely horn clauses with universally quantified, potentially type-restricted, variables, and use the unification algorithm. Additionally, they both share the following additional specialized reasoning capabilities: (1) variables may be typed with a fairly general type theory that allows a limited calculus of types including intersection and subtraction; (2) full reasoning about equality between ground terms; (3) reasoning within a context space, with access to axioms and terms in parent contexts; (4) escapes into Lisp for use as necessary.
RPRS is a hierarchical plan recognition system built within the RHET knowledge representation system. It provides a powerful system for plan recognition based on the algorithms of Kautz, with the general reasoning capabilities of RHET. RPRS takes special advantage of Rhet's type relations, constraints, equality, and contextual reasoning abilities. RPRS is also intended as a demonstration of the Rhet programming and knowledge representation system's hybrid reasoning capabilities. Utilizing the lisp interface to Rhet, RPRS allows the user to use the Rhet structured type system to build plan types and, given some observation or set of observations, have Rhet derive the set of plans that are consistent with these observations. Since RPRS includes the TEMPOS specialized reasoner for Rhet, steps and observations can have reference to time-intervals, and/or be temporally constrained with respect to one another.
Last Change: 14 Dec 2012 / marty@cs.rochester.edu