
Applied Logic

Before the start of the 20th century, logic as a research field had its home in departments of philosophy, and had as its primary focus the elucidation of the notion of proof. The impetus provided by growing interest in the foundations of mathematics, and especially by Tarski's mathematical explication of truth, led to the conviction that the proper home of logic was in departments of mathematics, and that the proper focus of interest was model theory. Towards the middle of the 20th century, three things happened in fairly quick succession: characterisations of computability were given by Turing, Church, and Gödel; these led to the design and construction of real computers; and theorem-proving algorithms were then implemented on such computers by some of the founders of artificial intelligence. These developments implied that logic could usefully be situated in departments of computing.

A consequence of the association between logic and computing, and particularly the association between logic and artificial intelligence, has been an evolution in the way logic is perceived.

A century ago, logic was held to be about proofs. Half a century ago, logic was held to be about truth. Now the notions of proof and truth have been assimilated into a larger picture, for logic is known to be about agents: an agent's information about a system may be represented either model-theoretically or sententially, and theorem-proving algorithms simulate consequence relations representing the conclusions agents may rationally draw from the information at their disposal.

A century ago, logic was assumed to involve a single formal language in which all statements of interest could be expressed, and to have a single intended interpretation, namely the universe. Now logic is known to involve purpose-built formal languages designed to permit the representation of knowledge about specific systems, and our frequent inability to reduce the class of models to a single member (incompleteness) is taken for granted as merely a reflection of the differences in expressiveness between object language and metalanguage, whereas earlier such results were regarded as surprising and philosophically significant.

A century ago, logic was thought to be about the formalisation only of arguments that were universally valid, and indeed this constituted the principal grounds for criticism of the notion of induction by Hume and Popper. Now logic is known to enable the formalisation of defeasible reasoning, in which the evidence supports conclusions only tentatively, and the legitimacy of basing hypotheses on experimental results has been re-established.
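The flavour of defeasible reasoning can be conveyed by a minimal sketch, assuming a deliberately simple representation of default rules as (premise, conclusion, defeater) triples; the predicate names and the helper function are purely illustrative, not part of any particular defeasible-logic system.

```python
# A minimal sketch of defeasible reasoning: a default rule licenses
# its conclusion tentatively, and the conclusion is withheld when an
# explicit defeater is among the known facts. All names illustrative.

def defeasible_conclusions(facts, defaults):
    """Apply each default (premise, conclusion, defeater) unless the
    defeater is explicitly known."""
    conclusions = set(facts)
    for premise, conclusion, defeater in defaults:
        if premise in conclusions and defeater not in conclusions:
            conclusions.add(conclusion)
    return conclusions

# Classic example: birds normally fly, unless known not to.
defaults = [("bird", "flies", "not_flies")]

print(sorted(defeasible_conclusions({"bird"}, defaults)))
# the tentative conclusion "flies" is drawn
print(sorted(defeasible_conclusions({"bird", "not_flies"}, defaults)))
# the new evidence defeats it, so "flies" is not concluded
```

The point of the sketch is the contrast with classical consequence: adding information (the defeater) removes a conclusion, which no monotonic logic permits.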

Logic as a specification tool for multi-agent systems, and as a
foundation for efficient or automated reasoning in those systems, may
conveniently be distinguished from more traditional foundational and
theoretical areas by the term *applied logic*.

**Defeasible Reasoning**

The AI Laboratory's current research programme in applied logic focuses on the areas of defeasible reasoning, epistemic logic (the logic of knowledge), and belief and knowledge change. The idea of belief change is that agents who incorporate tentative conclusions into their belief sets may need subsequently to retract these in light of new evidence. And even if beliefs are not tentative but deeply entrenched, in other words are knowledge, subsequent events in the world surrounding agents may still require them to update that knowledge. How should such belief and knowledge changes be accomplished? And how does the answer depend on such parameters as context and resource-boundedness of agents?
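The retraction-and-incorporation cycle described above can be sketched in a few lines, assuming beliefs are represented as propositional literals; this is a crude illustration only, since genuine belief-change operators (e.g. AGM-style revision with epistemic entrenchment, as studied in the Laboratory's work) act on logically closed theories rather than finite sets of literals.

```python
# A minimal sketch of belief revision over propositional literals.
# Revising by new information first retracts any directly
# contradictory belief (a crude form of contraction), then adds the
# new information. All names are illustrative.

def negate(literal):
    """Map p to ~p and ~p back to p."""
    return literal[1:] if literal.startswith("~") else "~" + literal

def revise(beliefs, new_info):
    """Incorporate new_info consistently into the belief set."""
    revised = {b for b in beliefs if b != negate(new_info)}
    revised.add(new_info)
    return revised

beliefs = {"bird", "flies"}
beliefs = revise(beliefs, "~flies")  # new evidence: it does not fly
print(sorted(beliefs))               # "flies" has been retracted
```

Even this toy version exposes the central questions: which beliefs to give up when several retractions would restore consistency, and how the answer should depend on entrenchment, context, and the agent's resource bounds.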

**Meyer, T.A., Labuschagne, W.A. and Heidema, J.** Refined epistemic entrenchment. *Journal of Logic, Language, and Information (JoLLI)* 9:237-259, 2000.

**van Ditmarsch, H.P.** Descriptions of game actions. *Journal of Logic, Language, and Information (JoLLI)* 11:349-365, 2002.

**Participating Members:**