Winter School on Machine Learning – WISMAL 2022
Nicolai Petkov, program director
Nicola Strisciuglio, publicity
Carlos Travieso, local organisation
The Winter School on Machine Learning will take place on 23-25 March 2022.
This short winter school consists of several tutorials that present different techniques of Machine Learning. Each tutorial is two hours, introducing the main theoretical concepts and giving typical practical applications and their implementation in popular programming environments.
Participation in the winter school is free of charge for registered participants in APPIS 2022. The number of participants is limited to 70, and early registration is encouraged to secure a place.
The fee for participation is 200 Euro (before 14th January) and includes free registration to APPIS 2022.
Registration for the Winter School on Machine Learning is open and can be completed at the URL:
IMPORTANT: Please register as a participant in APPIS and, after confirming your email address, indicate in the second step of the registration form whether you are going to participate in both WISMAL and APPIS or in WISMAL only.
Download WISMAL Program
Machine Intelligence on Graphs – Danilo Mandic
Abstract. The current availability of powerful computers and huge data sets is creating new opportunities in computational mathematics to bring together concepts and tools from graph theory, machine learning and signal processing, creating Data Analytics on Graphs. In discrete mathematics, a graph is merely a collection of points (nodes) and lines connecting some or all of them. The power of such graphs lies in the fact that the nodes can represent entities as diverse as the users of social networks or financial market data, and that these can be transformed into signals which can be analyzed using data analytics tools. In this talk, we aim to provide a comprehensive introduction to advanced data analytics on graphs, which allow us to move beyond standard regular sampling in time and space and facilitate modelling in many important areas, including communication networks, computer science, linguistics, social sciences, biology, physics, chemistry, transport, town planning, financial systems, personal health and many others. Graph topologies will be revisited from a modern data analytics point of view, and we will then proceed to establish a taxonomy of graph networks. With this as a basis, we show how the spectral analysis of graphs allows even the most challenging machine learning tasks, such as clustering, to be performed in an intuitive and physically meaningful way. Unique aspects of graph data analytics will be outlined, such as their benefits for processing data acquired on irregular domains, their ability to finely tune statistical learning procedures through local information processing, the concepts of random signals on graphs and graph shifts, the learning of graph topology from data observed on graphs, and the confluence with deep neural networks, multi-way tensor networks and Big Data. Extensive examples are included to render the concepts more concrete and to facilitate a greater understanding of the underlying principles.
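To make the connection between graph spectra and clustering concrete, the following is a minimal sketch (not material from the tutorial) of spectral bisection with NumPy: a toy graph with two communities is partitioned by the sign of the Fiedler vector, the eigenvector of the graph Laplacian with the second-smallest eigenvalue.

```python
import numpy as np

# Tiny graph with two obvious communities: nodes 0-2 and 3-5,
# joined by a single bridge edge (2, 3).
A = np.zeros((6, 6))
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
for i, j in edges:
    A[i, j] = A[j, i] = 1

D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # combinatorial graph Laplacian

# Eigenvectors of L, sorted by eigenvalue; the second-smallest
# (Fiedler) vector encodes the coarsest cut of the graph.
vals, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]

# Partition nodes by the sign of the Fiedler vector.
cluster = (fiedler > 0).astype(int)
print(cluster)  # nodes 0-2 land in one group, nodes 3-5 in the other
```

The same idea, applied to more than the second eigenvector followed by k-means in the embedded space, is the standard spectral clustering recipe touched on in the abstract.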
Danilo P. Mandic is a Professor in signal processing at Imperial College London, UK, and has been working in the areas of adaptive signal processing and bioengineering. He is a Fellow of the IEEE and a member of the Board of Governors of the International Neural Networks Society (INNS). He has more than 300 publications in journals and conferences. Prof. Mandic has received the 2019 Dennis Gabor Award from the International Neural Networks Society (for outstanding achievements in neural engineering) and the President's Award for Excellence in Postgraduate Supervision at Imperial. He has authored the research monographs “Recurrent Neural Networks for Prediction”, Wiley 2001, “Complex Valued Nonlinear Adaptive Filters: Noncircularity, Widely Linear and Neural Models”, Wiley 2009, and “Tensor Networks for Dimensionality Reduction and Large Scale Optimisation”, Now Publishers, 2017. He is a 2018 recipient of the Best Paper Award in IEEE Signal Processing Magazine, for his paper “Tensor Decompositions for Signal Processing Applications”. His work related to this talk is a series of three articles entitled “Data Analytics on Graphs”, published in Foundations and Trends in Machine Learning, December 2020.
Consensus Learning – Xiaoyi Jiang
Abstract. Consensus problems in various forms have a long history in computer science. In pattern recognition, for instance, there are no context- or problem-independent reasons to favor one classification method over another. Combining multiple classification methods into a consensus decision can therefore compensate for the erroneous decisions of one classifier through the decisions of the others. In practice, ensemble methods have turned out to be an effective means of improving classification performance in many applications. In general, this principle corresponds to combining multiple models into one consensus model, which helps, among other things, to reduce the uncertainty in the initial models. Consensus learning can be formulated and studied in numerous problem domains; ensemble classification is just one special instance. This tutorial will present an introduction to consensus learning. In particular, the focus will be on the formal framework of so-called generalized median computation, a mathematical way to formalize the intuitive expectation of averaging that is applicable to arbitrary domains. The concept of this framework, theoretical results, and computation algorithms will be discussed. A variety of applications in pattern recognition and other fields will be shown to demonstrate the power of consensus learning.
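As a minimal illustration of the ensemble instance mentioned in the abstract (a sketch, not tutorial code; the classifier predictions below are made up), majority voting combines the label vectors of several classifiers into one consensus labelling; under the Hamming distance this vote is exactly the generalized median of the label vectors.

```python
from collections import Counter

# Hypothetical predictions from three classifiers on five samples.
preds = [
    ["cat", "dog", "dog", "cat", "bird"],
    ["cat", "dog", "cat", "cat", "bird"],
    ["dog", "dog", "dog", "cat", "cat"],
]

def majority_vote(predictions):
    """Consensus label per sample: the most frequent vote."""
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*predictions)]

consensus = majority_vote(preds)
print(consensus)  # ['cat', 'dog', 'dog', 'cat', 'bird']
```

Each classifier errs on a different sample, yet the consensus labelling corrects all three individual mistakes, which is the compensation effect the abstract describes.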
Convolutional Neural Networks for Image Processing – Markus van Almsick, Algorithms R&D, Wolfram Research
To understand neural network design, you have to understand the data being processed. In this tutorial you will learn about the properties of image and video data and how these properties shape the design and training of convolutional neural networks. You will learn how to peek into neural networks, how to visualise learned features and how to detect where things go wrong. The talk will end with a range of entertaining real-world application examples.
The live coding in this tutorial is done in Mathematica and the Wolfram Language, a comprehensive software system for mathematical and scientific computation. The language contains a high-performance neural network framework with CPU and GPU support. Constructing and training networks often requires only a few lines of code, putting deep learning in the hands of even non-expert users. To obtain a free Mathematica demo version for the tutorial, follow the link https://www.wolfram.com/mathematica/trial/
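For readers without Mathematica, the core operation of a convolutional layer can be sketched in plain Python with NumPy (this is illustrative code, not material from the tutorial; the hand-crafted Sobel-like kernel stands in for the kind of edge detector a trained layer often ends up learning):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation of a grayscale image,
    the building block of a convolutional layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge image: dark left half, bright right half.
img = np.zeros((5, 5))
img[:, 3:] = 1.0

# A Sobel-like kernel responding to vertical edges.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

response = conv2d(img, sobel_x)
print(response)  # large values where the window overlaps the edge
```

The feature map `response` is zero over the flat region and large where the kernel straddles the edge, which is exactly the kind of learned-feature visualisation the tutorial demonstrates in the Wolfram Language.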
Markus van Almsick studied physics at the Technical University of Munich and received his PhD in Biomedical Image Processing from the Technical University of Eindhoven, the Netherlands. He has been a member of the Theoretical Biophysics group at the Beckman Institute for Advanced Science and Technology at the University of Illinois and he has worked for the Max Planck Institute for Biophysics in Frankfurt, Germany. Since 1988 he has been a consultant for Wolfram Research and for the last 12 years he has been a member of their image processing and computer vision team.
Clustering – Kerstin Bunte
With modern digitalisation and sensor technology the amount of data is increasing every year.
Tools for unsupervised exploratory data analysis are highly desirable to find interesting patterns.
One example concept is the task of clustering, which aims to find groups of objects that are more similar to each other than to those belonging to other groups.
Generally, this unsupervised grouping is an ill-posed problem that raises non-trivial questions such as:
- What would be a good cluster?
- What is a suitable definition of similarity?
- How many clusters are present?
A vast number of cluster analysis techniques has emerged; exemplary methods from hierarchical, prototype-based and density-based clustering will be introduced and discussed.
The presented concepts and methods will be illustrated in terms of benchmark problems and selected real world applications.
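As a taste of the prototype-based family, here is a minimal sketch of Lloyd's k-means in NumPy (illustrative only, not tutorial material; the two synthetic blobs are made-up data): prototypes are alternately used to assign points and then moved to the mean of their assigned points.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: prototype-based clustering."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest prototype (center).
        d = np.linalg.norm(X[:, None] - centers[None, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each prototype to the mean of its assigned points.
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels, centers

# Two well-separated synthetic blobs around (0, 0) and (5, 5).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)),
               rng.normal(5, 0.3, (20, 2))])
labels, centers = kmeans(X, k=2)
```

Note how the sketch quietly answers the three questions above by fiat: a "good" cluster is one with a nearby prototype, similarity is Euclidean distance, and k is supplied by the user. The tutorial's hierarchical and density-based methods make different choices on each point.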