We are happy to have five distinguished professionals as our keynote speakers for QoMEX 2018. Please find more information on the keynotes and the speakers below.

Keynote speech on May 29: by Aljosa Smolic, Trinity College Dublin, Ireland, on “Content Creation for AR, VR, and Free Viewpoint Video”

Keynote speech on May 30: by Paul Verschure, Universitat Pompeu Fabra, Spain, on “Building Multimodal User Experience on Brain Theory: Experiments with the Active Learning in Digitally Enriched Spaces Paradigm in Neurorehabilitation, Human-Robot Interaction, and Cultural Heritage”

Keynote speech on May 31: by Alberto Messina, RAI, Italy, on “Improving user experience on web TV through automated content analysis and organisation”

Keynote speech on May 31: by Ioannis Katsavounidis, Netflix, USA, on “Optimizing on-demand streaming video quality @ NETFLIX”

Keynote speech on June 1: by Jörgen Gustafsson, Ericsson, Sweden, on “QoE in the world of 5G”


Title: Content Creation for AR, VR, and Free Viewpoint Video

Augmented reality (AR) and virtual reality (VR) are among the most important technology trends these days. Major industry players are making huge investments, and vibrant activity can be observed in the start-up scene and academia. The elements of the ecosystem seem mature enough for broad adoption and success. However, the availability of compelling content can become a limiting factor. This talk will address this content gap for AR/VR and present solutions developed in the V-SENSE team at TCD, i.e., 3D reconstruction of dynamic real-world scenes and their interactive visualization in AR/VR.

Speaker: Prof. Aljosa Smolic

Prof. Smolic is the SFI Research Professor of Creative Technologies at Trinity College Dublin (TCD). Before joining Trinity, Prof. Smolic was with Disney Research Zurich as Senior Research Scientist and Head of the Advanced Video Technology group, and with the Fraunhofer Heinrich-Hertz-Institut (HHI), Berlin, also heading a research group as Scientific Project Manager. At Disney Research he led over 50 R&D projects in the area of visual computing that have resulted in numerous publications and patents, as well as technology transfers to a range of Disney business units. Prof. Smolic served as Associate Editor of the IEEE Transactions on Image Processing and the Signal Processing: Image Communication journal. He was Guest Editor for the Proceedings of the IEEE, IEEE Transactions on CSVT, IEEE Signal Processing Magazine, and other scientific journals. His research group at TCD, V-SENSE, works on visual computing, combining computer vision, computer graphics and media technology to extend the dimensions of visual sensation, with a specific focus on immersive technologies such as AR, VR, free viewpoint video, 360/omni-directional video, and light fields.


Title: Building Multimodal User Experience on Brain Theory: Experiments with the Active Learning in Digitally Enriched Spaces Paradigm in Neurorehabilitation, Human-Robot Interaction, and Cultural Heritage

Speaker: Prof. Paul Verschure

Paul received both his MA and PhD in psychology. His scientific goal is to find a unified theory of mind, brain and body through the use of synthetic methods, and to apply such a theory to the development of novel cognitive technologies. Paul has pursued his research at different institutes in the US (the Neurosciences Institute and The Salk Institute, both in San Diego) and Europe (University of Amsterdam, University of Zurich, the Swiss Federal Institute of Technology (ETH) and Universitat Pompeu Fabra in Barcelona).
Paul works on biologically constrained models of perception, learning, behavior and problem solving that are applied to wheeled and flying robots, interactive spaces and avatars. The results of these projects have been published in leading scientific journals including Nature, Science, PLoS and PNAS. In addition to his basic research, he applies concepts and methods from the study of natural perception, cognition and behavior to the development of interactive creative installations and intelligent immersive spaces. Since 1998, he has, together with his collaborators, created a series of 25 public exhibits, of which the most ambitious was the exhibit “Ada: Intelligent Space” for the Swiss national exhibition Expo.02, which was visited by 560,000 people. The most recent one was the Multimodal Brain Orchestra, which premiered at the closing ceremony of the EC Future and Emerging Technologies conference in Prague in April 2009.
Paul leads SPECS, a multidisciplinary group of over 30 pre-doctoral, doctoral and post-doctoral researchers that include physicists, psychologists, biologists, engineers and computer scientists supported by his own technical and administrative staff.

Title: Improving user experience on web TV through automated content analysis and organisation

This speech will touch upon various aspects of Quality of Experience for broadcasters, specifically from the point of view of RAI. Starting from the latest advancements brought by standards and technologies in the area of perceived quality (HDR, HFR, VR), the speech will focus on the quality of experience that customers have with online media services and how this experience can be improved and enhanced through the use of Artificial Intelligence technologies and smart applications.

Speaker: Dr. Alberto Messina

Alberto Messina started as a research engineer with RAI in 1996, after completing his MS thesis on objective quality evaluation of MPEG-2 video. After starting his career as a designer of RAI’s Multimedia Catalogue, he has been involved in several internal and international research projects in digital archiving, automated documentation, and automated production. His current interests range from file formats and metadata standards to content analysis and information extraction algorithms. An R&D coordinator since 2005, he leads research on Automated Information Extraction & Management/Information and Knowledge Engineering, and is the author of more than 80 publications. He has extensive collaborations with national and international research institutions, through research projects and student tutorship. He holds a PhD in Business and Management, with a specialisation in the area of Computer Science. He has been an active member of several EBU Technical projects, and now leads the EBU Strategic Programme on Media Information Management. He has worked in many European funded projects, including PrestoSpace, PrestoPrime, TOSCA-MP, the IBC Award-winning VISION Cloud, BRIDGET and currently MULTIDRONE. He has served on the Programme Committee of many international conferences, including Web Intelligence 2009-2013 and 2016, Machine Learning and Applications 2009-2013, MMM 2012, and CIKM 2016. He has been an ACM Professional Member since 2005 and was Contract Professor of Multimedia Archival Techniques at Politecnico di Torino from 2012 to 2015. He actively participates in international standardisation bodies, mainly EBU and MPEG, where he contributed to MPEG-A, MPEG-7 and MPEG-21 extensions.

Title: Optimizing on-demand streaming video quality @ NETFLIX

Ensuring a high quality of experience for 125 million members in over 190 countries is our mission, and at Netflix we use and develop multiple technologies to achieve that goal. We will focus on the encoding task, which includes three major steps – inspect, encode, validate – and show how we do it at scale. We will first present VMAF, the perceptual quality metric developed at Netflix, which is used to assess the quality of our encodes and is also an important building block in assessing members’ streaming quality. We will then show how we optimize video codec parameters to achieve the highest possible quality. Of particular importance is how we address the video resolution/bitrate tradeoff, and how we are willing to spend enormous amounts of CPU time in order to achieve the best video quality for our members. We will also expand on how we are using royalty-free codecs, such as VP9 and the new AV1 codec, and finally present a system developed at Netflix that ties everything together, called the Dynamic Optimizer.
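As a toy illustration only (not Netflix’s published implementation), the per-title/per-shot rate-quality selection idea behind a system like the Dynamic Optimizer can be sketched as a greedy bitrate allocation: each shot has several candidate encodes with a measured bitrate and quality score (e.g., VMAF), and the budget is spent where it buys the most quality per extra bit. All numbers and names below are illustrative assumptions.

```python
# Toy sketch of per-shot rate-quality selection (illustrative, NOT the
# actual Dynamic Optimizer): greedily upgrade the encode of whichever
# shot offers the best quality gain per extra kbps, within a budget.

def select_encodes(shots, budget_kbps):
    """shots: one list per shot of (bitrate_kbps, quality) candidates,
    each sorted by increasing bitrate. Returns one chosen index per shot."""
    choice = [0] * len(shots)                 # start every shot at the cheapest encode
    spent = sum(s[0][0] for s in shots)       # total bitrate of the cheapest selection
    while True:
        best = None                           # (quality gain per extra kbps, shot index)
        for i, s in enumerate(shots):
            j = choice[i]
            if j + 1 < len(s):                # a higher-quality candidate exists
                extra_bits = s[j + 1][0] - s[j][0]
                gain = s[j + 1][1] - s[j][1]
                if spent + extra_bits <= budget_kbps and (
                    best is None or gain / extra_bits > best[0]
                ):
                    best = (gain / extra_bits, i)
        if best is None:                      # no affordable upgrade left
            return choice
        i = best[1]
        spent += shots[i][choice[i] + 1][0] - shots[i][choice[i]][0]
        choice[i] += 1

# Hypothetical data: a complex action shot benefits more from extra bits
# than an easy static shot, so the budget flows to it first.
shots = [
    [(300, 70.0), (600, 85.0), (1200, 90.0)],  # complex shot
    [(300, 88.0), (600, 93.0), (1200, 94.0)],  # easy shot
]
print(select_encodes(shots, budget_kbps=1200))  # → [1, 1]
```

The real system operates on convex hulls of per-shot rate-distortion points across resolutions and codec parameters; this sketch only conveys the core intuition that bits are allocated where they improve perceptual quality the most.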


Speaker: Dr. Ioannis Katsavounidis

Ioannis Katsavounidis received the Diploma (B.S./M.S.) degree from the Aristotle University of Thessaloniki, Greece, in 1991 and the M.S. and Ph.D. degrees from the University of Southern California, Los Angeles, in 1992 and 1998 respectively, all in Electrical Engineering. From 1996 to 2000, he worked in Italy as an engineer for the high-energy physics department of the California Institute of Technology. From 2000 to 2007, he worked at InterVideo, Inc., in Fremont, CA, as Director of Software for advanced technologies, in charge of MPEG-2, MPEG-4 and H.264 video codec development. Between 2007 and 2008, he served as CTO of Cidana, a mobile multimedia software company in Shanghai, China, covering all aspects of DTV standards and codecs. From 2008 to 2015 he was an associate professor with the Department of Electrical and Computer Engineering at the University of Thessaly in Volos, Greece, teaching undergraduate and graduate courses in signals, controls, image processing, video compression and information theory. He is currently a senior research scientist at Netflix, working on video quality and video codec optimization problems, where he contributed to the development of VMAF and led the development of the Dynamic Optimizer. His research interests include image and video quality, compression and processing, information theory and software-hardware optimization of multimedia applications.

Title: QoE in the world of 5G

Communication networks are rapidly evolving, and around the corner is 5G, with even greater capacity, performance, and features that will enable wireless communication for a huge number of new services, for both consumers and enterprises. At the same time, AI, or Machine Intelligence, is transforming virtually all industry segments and is rapidly becoming an integrated part of life for all of us. What is the role of QoE in this new world? How can these services be managed, taking QoE into account? And what are the challenges for the QoE community? This keynote will address those questions and give inspiration for important QoE research questions that should be addressed in the world we are moving into.

Speaker: Dr. Jörgen Gustafsson

Jörgen Gustafsson is a research manager at Ericsson Research, heading a research team in the areas of machine learning and QoE. The research is applied to a number of areas, such as media, manufacturing, operations support systems/business support systems, the Internet of Things and more. He joined Ericsson in 1993. He is co-rapporteur of Question 14 in ITU-T Study Group 12, where leading and global standards on parametric models and tools for multimedia quality assessment are being developed, including the latest standards on quality assessment of adaptive streaming. He holds an M.Sc. in computer science from Linköping University, Sweden.