Learning and reasoning for distributed computing continuum ecosystems

Schahram Dustdar, Vienna University of Technology (TU Wien)

A captivating set of hypotheses from the field of neuroscience suggests that human and animal brain mechanisms result from a few powerful principles. If proven accurate, these hypotheses could yield a deep understanding of how humans and animals manage to cope with unpredictable events and exercise imagination. Modern distributed systems also deal with uncertain scenarios, where environments, infrastructures, and applications are widely diverse. In the scope of IoT-Edge-Fog-Cloud computing, leveraging these neuroscience-inspired principles and mechanisms could help build more flexible solutions able to generalize across different environments.

Brief Bio
Schahram Dustdar is a Full Professor of Computer Science at TU Wien, Austria, where he heads the Research Division of Distributed Systems. He holds several honorary positions: the University of Southern California (USC), Los Angeles; Monash University in Melbourne; Shanghai University; Macquarie University in Sydney; and University Pompeu Fabra, Barcelona, Spain. From December 2016 until January 2017 he was a Visiting Professor at the University of Sevilla, Spain, and from January until June 2017 he was a Visiting Professor at UC Berkeley, USA. From 1999 to 2007 he worked as the co-founder and chief scientist of Caramba Labs Software AG in Vienna (acquired by ProjectNetWorld AG), a venture-capital-co-funded software company focused on software for collaborative processes in teams. He is a co-founder of edorer.com (an EdTech company based in the US) and co-founder and chief scientist of Sinoaus.net, a Nanjing, China-based R&D organization focusing on IoT and Edge Intelligence. He is founding co-Editor-in-Chief of ACM Transactions on Internet of Things (ACM TIoT) as well as Editor-in-Chief of Computing (Springer). He is an Associate Editor of IEEE Transactions on Services Computing, IEEE Transactions on Cloud Computing, ACM Computing Surveys, ACM Transactions on the Web, and ACM Transactions on Internet Technology, and serves on the editorial boards of IEEE Internet Computing and IEEE Computer. Dustdar is the recipient of multiple awards, including the IEEE TCSVC Outstanding Leadership Award (2018), the IEEE TCSC Award for Excellence in Scalable Computing (2019), ACM Distinguished Scientist (2009), ACM Distinguished Speaker (2021), and the IBM Faculty Award (2012). He is an elected member of the Academia Europaea: The Academy of Europe, where he currently chairs the Informatics Section, as well as an IEEE Fellow (2016), an Asia-Pacific Artificial Intelligence Association (AAIA) Fellow (2021), and the AAIA president (2021).

Cognitive Robotics: perception in the wild

Alessia Saggese , University of Salerno

The combination of robotics and artificial intelligence has been described as one of the most promising marriages of our century. Within this context, a cognitive robot is a robot able to acquire data from the surrounding environment, to reason about it by means of AI algorithms, and to perform the best possible action. A “social robot” is a cognitive robot able to autonomously navigate its environment and to dialogue with an interlocutor in a socially acceptable way. During this talk we will discuss the challenges of designing a social robot, together with a real implementation: MIVIABot, the social robot designed and developed by the researchers of the MIVIA Lab, based on the ROS framework and on cutting-edge algorithms for perception, reasoning, and action. We will also present the algorithms designed for visual perception, in particular for face biometrics, and for audio analysis.


Brief Bio
Alessia Saggese is an Associate Professor at the University of Salerno, Italy. In 2010 she received the Laurea degree cum laude in Computer Engineering from the University of Salerno (Italy). In 2014 she received a double Ph.D. degree in electronic and computer engineering from the University of Salerno (Italy) and from ENSICAEN, University of Caen Basse-Normandie (France). Her thesis was awarded by the Italian-French University (UIF-UFI) within the Vinci Project framework, and also received the “Best Thesis Award” from GIRPR, the Italian Chapter of the IAPR. Her research mainly concerns computer vision and artificial intelligence algorithms for video and audio surveillance applications, cognitive robotics, and autonomous driving.

Automatic welding in shipbuilding using artificial vision

Pedro Galindo, University of Cadiz

In this talk, the development of an automatic welding system using artificial vision for shipbuilding is described. The system has been developed over the last few years in collaboration with Navantia, the leading shipbuilding company in Spain. Navantia is a state-owned company serving both the military and civil sectors, and it occupies a leading role in technologically advanced military shipbuilding in Europe and worldwide. Traditionally, the shipbuilding industry has relied on the labour of a large number of skilled workers, who worked for many hours on dangerous and highly demanding tasks such as welding, cutting, and painting. To remain competitive, European companies must offer products of the highest quality at a competitive price. Navantia achieves this goal thanks to its constant commitment to innovation and technology: it has a powerful technical office and makes extraordinary efforts in Research and Development with the aim of always offering the latest products and services. Back in 2012, Navantia acquired two MIG welding robots mounted on an 8-meter-high mobile gantry crane at its Puerto Real site in Spain. However, using this equipment for production proved difficult, as the programming process was complex, time-consuming, and quite expensive in terms of highly specialized man-hours. A further project was therefore developed in collaboration with the University of Cádiz to improve productivity and programming speed using the available infrastructure. The complete solution was originally conceived and undertaken by two research groups at the University of Cádiz, (i) Intelligent Systems and (ii) Applied Robotics, each with more than 20 years of experience in Industry 4.0-related projects (in areas such as Robotics, Artificial Intelligence, Image Processing, Data Analysis, and Simulation).
In a final stage, artificial vision has been used to completely automate welding in the Navantia shipbuilding environment, allowing a significant reduction in manufacturing time and labour costs.

Brief Bio
Pedro L. Galindo is a Full Professor of Computer Science and Engineering and leader of the “Intelligent Systems” group at the University of Cádiz, Spain. His main research interest is the application of Artificial Intelligence and advanced Image Processing techniques to real problems, in both research and industry. In the research field he applies these techniques to the extraction of quantitative information from electron microscopy images at the atomic level. In industry, he has led several Industry 4.0 projects with large companies such as Airbus (a leader in the aerospace industry), Navantia (the largest shipbuilding company in Spain), and Acerinox (one of the largest stainless steel producers in the world) in different fields (Robotics, Artificial Vision, and Big Data). He is also an honorary visiting professor at the Department of Physics at the University of York (UK) for his contributions to the development of software for the design of semiconductors at the atomic level. He is the author of the PPA software for peak detection and strain analysis from electron microscopy images. This software is commercialized by HREM Research (Japan), the leading company in electron microscopy image processing software, and has represented the largest source of incoming royalties for the University of Cádiz since 2008.

Hearables: Enabling Technologies for Lifelong Learning in E-Health

Danilo Mandic, Imperial College London

Future health systems require the means to assess and track the neural and physiological function of a user over long periods of time, and in the community. Hearables is a recent concept which exploits the constant position of the ear relative to the brain and vital organs to investigate the possibility of recording the Electroencephalogram (EEG), Electrocardiogram (ECG), respiration, temperature, pulse, and activity from the ear canal. We focus on our own pioneering and patented work on Ear-EEG, Ear-ECG, and multimodal electro-mechanical collocated sensing, which provides robust measurement of both neural activity and vital signs during most daily activities in the wild. This framework opens avenues for the subsequent use of a number of machine learning paradigms, from lifelong learning to Big Data, in some of the most pressing problems we face, such as care for the elderly, management of chronic diseases outside the clinic, and dealing with hearing deficits and head trauma. A brief outline of some candidate machine learning technologies in this context is provided, including Big Data paradigms, the role of tensor decompositions, and a way to learn the “transfer function” between human brain responses and motor actions in a tensor decomposition framework.

Brief Bio
Prof. Mandic is a Fellow of the IEEE, President of the International Neural Networks Society (INNS), a member of the Big Data Chapter within INNS, and a member of the IEEE SPS Technical Committee on Signal Processing Theory and Methods. He has received five best paper awards in Brain-Computer Interface research, runs the Financial Machine Intelligence Lab at Imperial, and has about 600 publications in international journals and conferences. He has authored two research monographs on neural networks, Recurrent Neural Networks for Prediction (Wiley, 2001) and Complex Valued Nonlinear Adaptive Filters: Nonlinearity, Widely Linear and Neural Models (Wiley, 2009). He has also co-authored a two-volume monograph, Tensor Networks for Dimensionality Reduction and Large Scale Optimisation (Foundations and Trends in Machine Learning, 2016, 2017), and a three-volume monograph on Data Analytics on Graphs (Foundations and Trends in Machine Learning, 2020). Prof. Mandic has given a number of keynote speeches and tutorials at foremost international conferences (IJCNN, ICASSP), and has received the President's Award for Excellence in Postgraduate Supervision at Imperial. In terms of applications of his work, he is a pioneer of Hearables, a radically new in-the-ear-canal system for the recording of the Electroencephalogram (EEG) and vital signs. This work has appeared in IEEE Spectrum and MIT Technology Review and has won several awards.