People can represent facts about other people's mental states: for instance, facts about what other agents believe or desire. How are these facts represented in the brain? In this talk I will discuss some ideas I am currently exploring.
Representing facts about mental states in a neural network poses several problems. Firstly, there is no accepted model of how facts in general are represented in the brain. Secondly, facts about mental states have a recursive structure, in which one fact is nested inside another (e.g. [John believes that [his neighbour is friendly]]). Neural network models of nested propositions are particularly tricky, because there are no obvious equivalents of the variable-binding operations available in symbolic logic. Finally, facts about mental states express relations between agents and propositions ('propositional attitudes') whose semantics is notoriously hard to capture. When I say that 'John believes P', the embedded proposition P has a different status from ordinary facts about the world. (For instance, suppose John's neighbour is Jack the Ripper, but John doesn't know this. John believes that his neighbour is friendly, but certainly not that Jack the Ripper is friendly.) In the talk, I will discuss some ideas about how to solve these problems, which have their origins in a neural network model of language learning and language processing.
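The points about nesting and referential opacity can be made concrete with a small sketch. The following toy Python code is not from the talk or from any particular neural model; all names and data structures in it are illustrative assumptions. It shows a nested proposition as a recursive structure, and shows why substituting co-referring names is valid for ordinary world facts but fails inside a belief context.

```python
# Toy illustration (not the talk's model): nested propositions and
# the opacity of belief contexts. All names here are made up.

# A nested proposition: [John believes that [his neighbour is friendly]]
nested = ("believes", "john", ("friendly", "johns_neighbour"))

# Suppose, in the world, John's neighbour IS Jack the Ripper.
same_person = {"johns_neighbour": "jack_the_ripper"}

# Ordinary world facts are transparent: a fact about an individual
# holds under any co-referring name for that individual.
world_facts = {("friendly", "johns_neighbour")}

def holds(fact, facts):
    """True if the fact, under any co-referring name, is in the set."""
    pred, arg = fact
    variants = {arg, same_person.get(arg, arg)}
    variants |= {k for k, v in same_person.items() if v == arg}
    return any((pred, a) in facts for a in variants)

# John's belief store is opaque: beliefs are about his representations
# of individuals, so substitution of identicals does NOT apply.
johns_beliefs = {("friendly", "johns_neighbour")}

def believes(fact, beliefs):
    return fact in beliefs  # exact match only: an opaque context

assert holds(("friendly", "jack_the_ripper"), world_facts)       # transparent
assert believes(("friendly", "johns_neighbour"), johns_beliefs)  # held
assert not believes(("friendly", "jack_the_ripper"), johns_beliefs)  # not held
```

The sketch makes the talk's challenge visible: a symbolic system can keep the belief store separate and match structures exactly, but a neural network has no obvious analogue of this variable binding or of the boundary around the opaque context.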
Last modified: Thursday, 16-Sep-2010 13:35:02 NZST