Marco Aiello, University of Stuttgart
The Web Was Done by Amateurs: A Reflection on One of the Largest Collective Systems Ever Engineered

In 2012, Turing Award-winning computer scientist Alan Kay gave an interview in which he stated: "the Internet was done so well that most people think of it as a natural resource like the Pacific Ocean, rather than something that was man-made. […] The Web, in comparison, is a joke. The Web was done by amateurs." By looking at the history and present state of the Web, I will present arguments that support or refute Kay's statement. Questions such as "How did the Web come about? Who are the heroes behind it? How did it evolve into what it is today?" will be addressed. The material is based on the book "The Web Was Done by Amateurs", published in June 2018 by Springer Nature.

João M. Tavares, University of Porto
Segmentation of Complex Medical Images – Algorithms and Applications
The computational segmentation of complex medical images is very challenging, and it is typically addressed using, for example, deformable models based on statistical, geometric, or physical modelling, and/or machine learning techniques. Examples of current applications include the segmentation of skin lesions, lungs, heart, blood vessels, brain, and ear, among other structures.
In this lecture, algorithms that we have developed to segment medical images acquired with different imaging modalities will be described, and their use in real cases will be discussed.
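As a rough illustration of what segmenting an image means computationally, the sketch below segments a synthetic image by global thresholding and connected-component labelling. It is only a toy example on assumed synthetic data, not the deformable-model or learning-based methods described in the lecture.

# Minimal, self-contained sketch: segmentation by global thresholding and
# connected-component labelling. Toy illustration only; the synthetic image
# stands in for real medical data, and this is NOT the speaker's method.
import numpy as np
from scipy import ndimage

# Synthetic 2D "image": a bright circular structure on a noisy background.
rng = np.random.default_rng(0)
y, x = np.mgrid[0:128, 0:128]
image = 0.2 * rng.standard_normal((128, 128))
image[(x - 64) ** 2 + (y - 64) ** 2 < 20 ** 2] += 1.0

# 1. A global threshold separates candidate foreground from background.
mask = image > 0.5

# 2. Connected-component labelling groups foreground pixels into regions.
labels, n_regions = ndimage.label(mask)

# 3. Keep the largest region as the segmented structure.
sizes = ndimage.sum(mask, labels, index=range(1, n_regions + 1))
segmentation = labels == (np.argmax(sizes) + 1)

print(f"{n_regions} candidate regions; largest covers {segmentation.sum()} pixels")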

Pasquale Foggia, University of Salerno
Graph algorithms for very large graphs and their applications to bioinformatics and social network analysis
Graphs are a common data representation for structured information, and in the past 40 years many researchers in the Structural Pattern Recognition field have proposed graph-based algorithms for tasks such as segmentation, classification, clustering, pattern search, and learning. However, the large majority of these algorithms are practically usable only on small graphs, say up to a few hundred, or rarely a few thousand, nodes. In recent years, two increasingly important application fields have motivated the need to extract information from very large graph structures: bioinformatics, especially proteomics and genomics, and social network analysis. In this talk, we will discuss the applicability of commonly used graph-based techniques to the very large graphs obtained from these domains, and will present some methods and algorithms specifically devised for large graphs, with examples of their application. We will also discuss the challenges we will have to face to keep up with the increasing availability of ever larger structured data.
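As a concrete, small-scale illustration of graph-based pattern search, the hedged sketch below counts embeddings of a triangle pattern in a synthetic 2,000-node graph using NetworkX's VF2-based matcher. This is a generic library example on assumed synthetic data, not the speaker's own algorithms; its cost grows rapidly with graph size, which is precisely the scalability problem the talk addresses.

# Minimal sketch of graph pattern search: find occurrences of a small query
# pattern inside a larger graph with NetworkX's VF2-based GraphMatcher.
# Generic library example only; exact matching like this does not scale to
# graphs with millions of nodes, which motivates the methods in the talk.
import networkx as nx
from networkx.algorithms import isomorphism

# "Large" host graph standing in for a protein-interaction or social network.
host = nx.barabasi_albert_graph(n=2000, m=3, seed=42)

# Query pattern: a triangle (3-clique), a common motif in network analysis.
pattern = nx.complete_graph(3)

matcher = isomorphism.GraphMatcher(host, pattern)

# Count subgraph isomorphisms (each triangle is reported once per automorphism).
n_matches = sum(1 for _ in matcher.subgraph_isomorphisms_iter())
print(f"Found {n_matches} pattern embeddings in a graph with "
      f"{host.number_of_nodes()} nodes and {host.number_of_edges()} edges")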