Computer Science


Lech Szymanski


Room: Owheo Building, Room 249
Phone: +64 3 479 5691

My research interests are machine learning, deep representation learning, and connectionist models. Our state-of-the-art computational models can learn from examples, but not without heavy involvement from an expert user, who must make critical decisions about system architecture, parameters, and appropriate representation. My ultimate objective is to make machine learning a tool that the average user can easily use. For that to happen, we need to develop truly autonomous machine learning algorithms, capable of forming an appropriate model for the task at hand entirely on their own.

Before my PhD, which I completed in 2012, I worked as a software engineer for a wireless telecommunications company in Ottawa, Canada. My background is in computer and electrical engineering, with a focus on embedded programming and digital signal processing. My interest in artificial neural network models dates back to summer employment as an undergraduate student, developing data-analysis programs in a neurobiology laboratory at the National Research Council Canada. Since then I have worked on several aspects of modelling and machine learning, including speech recognition, classification, learning theory, and object recognition from images.


Selected publications

Hierarchical structure from motion optical flow algorithms to harvest three-dimensional features from two-dimensional neuro-endoscopic images, R. Johnson, L. Szymanski, S. Mills, Journal of Clinical Neuroscience, Volume 22, Issue 2, 378-382, 2015.

Deep networks are effective encoders of periodicity, L. Szymanski, B. McCane, IEEE Transactions on Neural Networks and Learning Systems, Volume 25, Issue 10, 1816-1827, 2014.

Practical use of SELinux for enhancing the security of web applications, Part 1: Using Type Enforcement security, L. Szymanski, D. Eyers, Department of Computer Science, University of Otago, Tech. Rep. OUCS-2014-02, 2014.

Learning in deep architectures with folding transformations, L. Szymanski, B. McCane, Proceedings of the International Joint Conference on Neural Networks (IJCNN), 1-8, 2013.

Push-pull separability objective for supervised layer-wise training of neural networks, L. Szymanski, B. McCane, Proceedings of the International Joint Conference on Neural Networks (IJCNN), 23-30, 2012.

Last updated: 14 Apr 2022