For my doctoral thesis, I developed a way of transforming decision trees into multilayer perceptrons. As a result, I get pleasingly small neural networks that train quickly and are more accurate than the original decision trees.
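To make the idea concrete, here is a minimal sketch of one standard tree-to-network construction (each internal node becomes a threshold hidden unit, and the output layer combines node indicators into a class decision). This is an illustration of the general approach, not necessarily the specific method described in the talk; the tree, weights, and function names are made up for the example.

```python
import numpy as np

# A tiny hardcoded decision tree over 2 features, for illustration:
#   if x0 > 0.5: class 1
#   else: class 1 if x1 > 0.3 else class 0
def tree_predict(x):
    if x[0] > 0.5:
        return 1
    return 1 if x[1] > 0.3 else 0

# Hidden layer: one unit per internal node, firing when that node's
# test passes. Unit i computes step(W1[i] . x + b1[i]).
W1 = np.array([[1.0, 0.0],    # encodes the test x0 > 0.5
               [0.0, 1.0]])   # encodes the test x1 > 0.3
b1 = np.array([-0.5, -0.3])

# Output layer: in general, one unit per leaf ANDs the tests along that
# leaf's path. For this tree the class-1 region simplifies to
# (x0 > 0.5) OR (x1 > 0.3), so a single unit suffices.
W2 = np.array([1.0, 1.0])
b2 = -0.5

step = lambda z: (z > 0).astype(float)

def net_predict(x):
    h = step(W1 @ x + b1)          # indicator of each node test passing
    return int(step(W2 @ h + b2))  # combined class decision

# The network reproduces the tree exactly on these points:
for x in [np.array([0.7, 0.0]), np.array([0.2, 0.9]), np.array([0.2, 0.1])]:
    assert net_predict(x) == tree_predict(x)
```

In practice one would replace the hard step function with a sigmoid so the initialised weights can be refined by gradient descent, which is where the accuracy gains over the original tree would come from.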
Why should that be? What is the perceptron doing that the decision tree isn't? Is the perceptron doing anything that can't be done by logistic regression, or k-nearest neighbours, or support vector machines?
In this talk, I'll explain my tree-to-perceptron method, and show how well it performs empirically. I'll also talk about when you should expect perceptrons to do better, and when you shouldn't.
Last modified: Tuesday, 23-Sep-2008 13:37:36 NZST