Workshop on Logic-Based Artificial Intelligence

This report originally appeared in AI Magazine, v. 20, no. 4, Winter 1999, 97--102.
It appears here with the permission of the American Association for Artificial Intelligence.

Jack Minker

Introduction

The Workshop on Logic-Based Artificial Intelligence (LBAI), sponsored by the National Science Foundation and the American Association for Artificial Intelligence, was held in Washington, D.C., June 13-15, 1999. The workshop was organized by Jack Minker and John McCarthy. The Program Committee members were: Krzysztof Apt, John Horty, Sarit Kraus, Vladimir Lifschitz, John McCarthy, Jack Minker, Don Perlis and Ray Reiter. The purpose of the workshop was to bring together researchers who use logic as a fundamental tool in AI, enabling them to review accomplishments, assess future directions, and share their research in LBAI. This note constitutes a short summary of the workshop.

The areas selected for discussion at the Workshop were: Abductive and Inductive Reasoning, Applications of Theorem Proving, Commonsense Reasoning, Computational Logic, Constraints, Logic and High Level Robotics, Logic and Language, Logic and Planning, Logic for Agents and Actions, Logic of Causation and Action, Logic, Probability and Decision Theory, Nonmonotonic Reasoning, Theories of Belief, and Knowledge Representation.

Sessions consisted of one to four lectures each. Following the lectures in each session, a discussion was held to assess accomplishments and research directions related to the topic of the session. In addition, 12 different logic-based systems were briefly discussed and demonstrated. A brief synopsis of the lectures and the demonstrated systems follows.

Description of Workshop

In Abductive and Inductive Reasoning, F. Buccafurri, T. Eiter, G. Gottlob, and N. Leone provided a first step toward integrating model-checking methods with AI. They developed an abstract concept of a system repair problem, which is solved by applying logic-based abduction. S.H. Muggleton and F.A. Marginean noted that refinement operators in inductive logic programming have been viewed as functions from clauses to sets of clauses. Searches carried out by genetic algorithms allow larger changes by combining pairs of search nodes. They introduced and discussed binary refinement operators within the subsumption lattice, which are functions from pairs of clauses to sets of clauses.

In Applications of Theorem Proving, J. Moore discussed the capabilities of the ACL2 system (the successor to the Boyer-Moore NQTHM prover). ACL2 represents the state of the art in microprocessor verification. It has been used to formally model the Motorola CAP digital signal processor and the Rockwell-Collins JEM1 microprocessor (the world's first silicon Java Virtual Machine). The use of ACL2 uncovered at least four hardware design bugs -- bugs that had survived hundreds of millions of test cases -- in time to fix them before the processor was shipped. A. Levy addressed data integration systems, which require a flexible mechanism for describing the contents of sources. He surveyed the main logic-based languages that have been considered for describing data sources in data integration systems, and the specialized inference techniques developed in this context. P. Nayak and B. Williams dealt with a reactive self-configuring system installed in the NASA Deep Space 1 spacecraft now in flight. This autonomous system performs failure analysis and correction for a variety of tasks during the years of active service of the spacecraft. D. Gunning spoke about knowledge-base projects at DARPA, where general reasoning systems are needed both to handle the general question-answering problem and to reason from first principles in unanticipated situations.

In Commonsense Reasoning, J. McCarthy noted that logical AI involves representing knowledge of an agent's world, its goals, and the current situation by sentences in logic. He characterized a large number of concepts that have arisen in research in logical AI. E. Sandewall addressed the topic, ``what is the appropriate research methodology when logic is used in our field?'' He argued that any research methodology must specify what is to be counted as a relevant result of the research, and what evidence must be seen before proposed results are accepted.

In Computational Logic, G. Gottlob discussed an application to hardware verification of a restricted form of logic programming that admits negation but not function symbols. The idea is to use a fragment of this restricted form that can be processed in linear time (data complexity). The fragment is relevant to databases, since it allows constructs such as transitive closure and thus recursive queries, and to AI concerns such as the frame problem, inheritance reasoning, and various forms of default reasoning. K.R. Apt discussed an application of full first-order logic as a programming language. Although first-order logic is undecidable, by a slightly nonstandard interpretation of logic one can build a correct (but incomplete) algorithm for ``example finding'' (for open theories, the analogous technique is based on the so-called Green's trick for finding answer substitutions). A programming language called ALMA-0 was described that combines the advantages of logic (declarativeness) with those of conventional imperative programming. ALMA-0 is based on a three-valued semantics and, consequently, is correct but not complete.
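The kind of recursive query referred to above can be illustrated with the classic transitive-closure program. The Python sketch below (an illustration, not from the talk) computes the least fixpoint of that program by naive bottom-up iteration; practical systems use semi-naive evaluation instead.

```python
# Naive bottom-up evaluation of the Datalog transitive-closure program:
#   path(X,Y) :- edge(X,Y).
#   path(X,Y) :- edge(X,Z), path(Z,Y).
# Illustrative sketch only; real deductive databases evaluate this
# semi-naively, but the least fixpoint computed is the same.

def transitive_closure(edges):
    path = set(edges)              # first rule: every edge is a path
    changed = True
    while changed:                 # iterate until the least fixpoint
        changed = False
        for (x, z) in edges:
            for (z2, y) in list(path):   # snapshot: set mutates below
                if z == z2 and (x, y) not in path:
                    path.add((x, y))     # second rule fires
                    changed = True
    return path

if __name__ == "__main__":
    print(sorted(transitive_closure({("a", "b"), ("b", "c"), ("c", "d")})))
```

The fixpoint iteration terminates because the domain is finite and no facts are ever retracted, which is exactly why such queries have polynomial data complexity.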

In Constraints, R. Dechter described bucket elimination, an algorithmic framework that generalizes dynamic programming to many complex problem-solving and reasoning activities. The bucket elimination framework was demonstrated on the processing of knowledge bases such as constraint networks, and on the Davis-Putnam method for satisfiability.
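The Davis-Putnam method mentioned above can be sketched in a few lines. The following Python is an illustrative toy solver (not any system presented at the workshop) implementing the core loop of unit propagation plus branching, in the DPLL style.

```python
# Toy DPLL-style satisfiability check. Clauses are sets of nonzero
# integers; a negative integer denotes a negated variable.

def dpll(clauses, assignment=frozenset()):
    clauses = [set(c) for c in clauses]     # copy: recursion mutates
    # Unit propagation: repeatedly satisfy clauses of size one.
    while True:
        unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        assignment = assignment | {unit}
        clauses = [c - {-unit} for c in clauses if unit not in c]
        if any(not c for c in clauses):
            return None                      # derived the empty clause
    if not clauses:
        return assignment                    # all clauses satisfied
    lit = next(iter(clauses[0]))             # branch on some literal
    for choice in (lit, -lit):
        result = dpll(clauses + [{choice}], assignment)
        if result is not None:
            return result
    return None                              # both branches failed

if __name__ == "__main__":
    # (x1 or x2) and (not x1 or x2) and (not x2 or x3)
    print(dpll([{1, 2}, {-1, 2}, {-2, 3}]))
```

The returned set of literals is a satisfying partial assignment, or None if the clause set is unsatisfiable.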

In Knowledge Representation, H. Levesque explored two ways of using expressive logical representation languages without necessarily sacrificing tractability in handling open-world reasoning. N. Leone discussed how to represent and solve problems in disjunctive logic programming using the dlv system, which is based on a nondeterministic ``guess and check'' paradigm.

In Logic for Agents and Actions, J.-J. Meyer applied dynamic logic to model an agent's actions. Meyer also used deontic logic to express when an agent is permitted to carry out an action. V.S. Subrahmanian applied logical knowledge-base techniques and deontic logic. M. Shanahan applied the event calculus to handle an agent's actions. In carrying out its actions, an agent must often be able to explain the current situation and assimilate new (sometimes conflicting) information. A variety of techniques are used for this, including abduction (Shanahan), belief revision (Meyer), and logical knowledge-base techniques (Subrahmanian).

In Logic of Causation and Action, V. Lifschitz discussed the interaction between the design of languages for describing actions, the analysis of causal reasoning, and the use of propositional problem solvers for planning. He described how his work has developed along these lines of research. H. Geffner focused on the application of theories of action to planning. He emphasized the need for action languages that facilitate modeling and computation for planning, illustrated his ideas with a presentation of Functional Strips, and showed how substantially modeling in this language improves efficiency on planning tasks.

In Logic and High Level Robotics, C. Baral and M. Gelfond discussed the design of ``software components of reasoning, planning and acting in a changing environment.'' The dynamic system is viewed as a transition system and described by a logic program. An application to space shuttle control was described. R. Reiter provided an overview of the cognitive robotics project at the University of Toronto led by himself and H. Levesque. The project aims to provide a logical language called GOLOG for specifying and programming the behavior of an autonomous agent.

In Logic and Language, L.K. Schubert spoke about the situations we reason about, arguing that in the process of understanding natural language (NL) utterances, situations (including events, episodes, eventualities, etc.) should be associated with arbitrarily complex sentences rather than just with atomic predicates. He argued that two such notions are needed: the notion of a sentence characterizing a situation, and the notion of a sentence partially describing a situation. He formalized these notions in FOL**, an extension of first-order logic and a part of his Episodic Logic, EL. R.H. Thomason presented a logic for intra-agent modality, showed how it can be used to formalize reasoning about the attitudes of other agents, and showed how it can be applied to the problem of mutual attitudes.

In Logic and Planning, H. Kautz and B. Selman discussed Blackbox, a planning system that unifies the planning-as-satisfiability framework with the planning-graph approach to STRIPS planning. B. Nebel discussed the expressive power of preconditions in planning. He showed that preconditions in conjunctive normal form add to the expressive power of propositional STRIPS, but that they add no further expressive power once conditional effects are available.
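The planning-as-satisfiability idea can be illustrated on a small invented domain: fluents and actions become time-indexed propositional variables, and initial-state, goal, precondition, effect, and frame axioms together form a formula whose satisfying models encode plans. The Python sketch below (a toy, not Blackbox) checks the formula by brute force over assignments; real systems hand the encoding to an efficient SAT engine.

```python
# Toy "planning as satisfiability" encoding. Invented domain: one
# fluent door_open, one action open (precondition: door closed;
# effect: door open), horizon T = 1.
# Variables: p0 = door_open@0, p1 = door_open@1, a0 = open@0.

import itertools

VARS = ["p0", "p1", "a0"]

def formula(m):
    init   = not m["p0"]                          # initially closed
    goal   = m["p1"]                              # goal: open at time 1
    pre    = (not m["a0"]) or (not m["p0"])       # precondition axiom
    effect = (not m["a0"]) or m["p1"]             # effect axiom
    frame  = m["a0"] or (m["p0"] == m["p1"])      # frame axiom
    return init and goal and pre and effect and frame

# Brute-force "SAT solving": every satisfying model encodes a plan.
plans = [m for vals in itertools.product([False, True], repeat=len(VARS))
         for m in [dict(zip(VARS, vals))] if formula(m)]

if __name__ == "__main__":
    print(plans)
```

In the only satisfying model, a0 is true: the one-step plan is to execute open at time 0. The frame axiom is what forces this, since without an action the fluent could not change between time steps.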

In Logic, Probability and Decision Theory, intelligent agents must be able to deal with uncertainty. H. Prade and D. Dubois developed a logic for making decisions based on qualitative information about preferences and likelihoods (where the likelihoods were modeled using possibility theory). Although the logic was a propositional one, they showed how some of the intuitions underlying decision theory can be embedded in a logical language. C. Boutilier showed how logical ideas and representations can play a role in speeding up probabilistic computations, in particular the computation of policies in Markov decision problems.

In Nonmonotonic Reasoning, M. Denecker, W. Marek and M. Truszczynski described an algebraic approach to the semantics of nonmonotonic logics. This work is based on Fitting's theory of logic programs with negation as failure and generalizes that theory to other fixpoint formalisms. J. Delgrande and T. Schaub argued that default logic is more general and more widely applicable than previously supposed. They showed how to express preferences among defaults by means of naming default rules and encoding conditions for their invocation.

In Theories of Belief, D. Perlis gave an overview of what sorts of things can be represented in a formal logic of belief, and discussed why it is natural to do so. J. Halpern showed that a simple, natural mathematical mechanism serves to unify major aspects of the belief treatments developed by several theorists starting from quite different perspectives.

Several System Implementations were demonstrated. These systems were for robotics, deductive databases, planning, inductive inference and nonmonotonic reasoning. They are as follows:

  1. smodels - developed by I. Niemela, a system that implements the stable model semantics for Datalog programs with negation.

  2. XSB - developed by D.S. Warren, a system that implements full first-order logic programming with tabling under the well-founded semantics.

  3. dlv - developed by T. Eiter and N. Leone, a system that implements disjunctive logic programming under the answer set semantics.

  4. DeReS - developed by W. Marek and M. Truszczynski, a system that implements Reiter's full default logic.

  5. LDL++ - developed by C. Zaniolo, a system that implements an extension of stratified databases with additional features, such as aggregation, for deductive databases.

  6. GOLOG - developed by R. Reiter and H. Levesque, a system that focuses on high level support for cognitive functions in robotics.

  7. IMPACT - developed by V.S. Subrahmanian, a system that provides a platform and environment for agent and software interoperability.

  8. Causal Calculator - developed by N. McCain, a system for query answering in planning domains using a high-level action-description language. The problem is reduced to satisfiability checking, which is performed by special-purpose systems.

  9. General Planning Tool (GPT) - developed by B. Bonet and H. Geffner, a software tool that supports a general approach to planning, including classical, contingent and probabilistic planning.

  10. TLplan - developed by F. Bacchus, a planning system based on model checking logical formulas against finite models.

  11. Blackbox - developed by B. Selman and H. Kautz, a planning system that converts problems specified in STRIPS notation into Boolean satisfiability problems and solves them with state-of-the-art satisfiability engines.

  12. Progol - developed by S. Muggleton, a system for inductive inference. It employs inverse entailment with mode declarations to derive the most specific clause within the mode language that entails a given example.

Summary

The general consensus of the attendees was that the field of LBAI has made significant contributions to both the theory and practice of artificial intelligence. In the theory of LBAI, contributions have been made to the foundations of commonsense reasoning, the formalization of nonmonotonic reasoning, the development of semantics for theories, and results on computational complexity. In the practice of LBAI, automated reasoning has been successfully applied to the verification of hardware systems, and improvements in solving satisfiability problems have been incorporated into planning systems. The emergence of fast software systems implementing nonmonotonic reasoning, highlighted by smodels, XSB, dlv, and DeReS, is the most significant development in that area in recent years. Demonstrations of practical running systems that can be used for realistic knowledge-base and nonmonotonic reasoning applications, together with the development of planning systems and tools for robotics, were important contributions not available five years ago. Many long-term and short-term research efforts remain to be pursued before AI will be able to develop truly intelligent machines, and the efforts in LBAI are expected to contribute toward this goal. It is expected that a comprehensive report will be written concerning the workshop and that a book consisting of papers from the workshop will be published.

Acknowledgements

I would like to thank the following individuals for serving as session chairs at the workshop and for providing material for this report: Abductive and Inductive Reasoning, Francesca Toni, Imperial College, London; Commonsense Reasoning, Leora Morgenstern, IBM Yorktown Heights; Computational Logic, Witek Marek, University of Kentucky; Constraints, Jacques Cohen, Brandeis University; Applications of Theorem Proving, Don Loveland, Duke University; Knowledge Representation, David Etherington, University of Oregon; Nonmonotonic Reasoning, Vladimir Lifschitz, University of Texas at Austin; Logic, Probability and Decision Theory, Fahiem Bacchus, University of Waterloo; Logic for Agents and Actions, Michael Fisher, Manchester Metropolitan University; Logic and High Level Robotics, Fangzhen Lin, Hong Kong University of Science and Technology; Logic and Language, Stuart Shapiro, SUNY at Buffalo; Theories of Belief, Melvin Fitting, Lehman College; Logic and Planning, Henry Kautz, AT&T Labs; Logic of Causation and Action, Javier Pinto, Pontificia Universidad Católica de Chile; Systems Demonstration, Juergen Dix, University of Koblenz-Landau.

I would also like to express my appreciation to Parke Godfrey, University of Maryland (and now of York University), who was responsible for the equipment used for the demonstrations of the systems.

Jack Minker
Department of Computer Science
and
Institute of Advanced Computer Studies (UMIACS)
University of Maryland
College Park, Maryland 20742