|
Keynote Speakers

Ifigeneia Mavridou, Tilburg University, Netherlands


Dr. Ifigeneia Mavridou is an Assistant Professor at Tilburg University, Netherlands. She brings a unique perspective to the field of immersive technologies and embedded sensing for emotion recognition, drawing on experience in both industrial and academic environments. Previously, at Emteq Labs in Brighton, she pioneered innovative multimodal wearables designed for off-the-shelf virtual reality headsets. She currently coordinates the Extended Reality Lab at MINDLABS, a dynamic multidisciplinary partnership involving research institutions, the local municipality, and businesses. Her extensive publications explore the intersection of extended realities, human factors, and artificial intelligence.

Speech Title: From Sensors to Sensitivity: Affective State Detection for Enhancing Human Computer Interaction in Virtual Reality

Abstract: We are navigating an era of rapid technological advances and sensor miniaturisation, yet we face unprecedented challenges. As researchers, we are responsible not only for developing novel solutions that meet usability criteria but also for ensuring the reliability, sensitivity, and replicability of these detection systems. In the last six years, there has been a significant push towards expanding the ICT and XR sectors, with efforts to create tools that assist professionals, creators, and researchers in further leveraging these technologies. Detecting the user's state for enhanced human-computer interaction has become a crucial area of focus, especially with the surge in emotion-detecting systems. The implications and challenges of false interpretations have underscored the importance of accuracy in these systems. How can we remain true to our mission of delivering effective and safe detection solutions? This keynote will explore existing solutions and trends in user tracking within virtual reality settings, addressing the challenges of calibration, multimodal synchronisation, and signal artifacts. We will discuss the critical role of context in understanding intent and subjective responses, and why extended reality technologies can serve as the ultimate laboratory tool.



Jian Chang, Bournemouth University, UK


Jian Chang is a professor in computer animation at the National Centre for Computer Animation (NCCA), Bournemouth University, UK. The NCCA is recognised as the UK's leading educational and research base for computer animation. His research has focused on physics-based animation, motion synthesis, novel HCI (eye tracking, gesture control, and haptics), serious games, and VR/AR applications. He is keen to exploit novel computing techniques for cross-disciplinary research and applications, which has led to international research collaboration and joint funding bids. His research has attracted over £2 million in funding from sources including EPSRC, the Royal Society, EU FP7, EU H2020, Interreg France (Channel) England, Innovate UK, and HEIF. He is a founder and director of CfACTs, an international postdoctoral research training centre for innovation in creative technologies, funded by the EU H2020 Marie Skłodowska-Curie Actions Cofund scheme. He has published over 130 peer-reviewed papers (including in high-impact venues such as CVPR, SIGGRAPH Asia, ACM TOG, IEEE TVCG, and Pattern Recognition), co-edited three books, served as programme co-chair of Computer Graphics International 2019, and has been a programme committee member for over 20 international conferences.

Speech Title: Tools Serving Innovation in Heritage Applications and Medical Training - VR/AR Solutions

Abstract: A wide range of digital technologies, including Virtual Reality (VR) and Augmented Reality (AR), have applications in both cultural heritage sites and medical training environments. These technologies offer immersive experiences that enhance visitor engagement and educational value for users. In the realm of cultural heritage, VR/AR tools can serve different target groups, deliver captivating experiences, and open up new business models to boost revenue. Similarly, in medical training, VR/AR applications provide realistic and interactive simulations that allow medical professionals to practice and refine their skills in a safe and controlled environment. These technologies support a range of training scenarios, from surgical procedures to patient interaction, enhancing the overall quality and effectiveness of medical education.


Tom Durrant, CTO and Co-Founder of VividQ


Tom is CTO and Co-Founder of VividQ, where he leads technology and product development for 3D displays, focusing on computer-generated holography. He was recognised on the Forbes 30 Under 30 list in technology. He holds degrees in mathematics from the University of Cambridge and the University of Oxford.

Speech Title: Bring VR Experiences to Life with 3D Display

Abstract: VR has had a promising start but has yet to reach the consumer mainstream. We will consider whether true 3D display could solve the remaining challenges and produce an experience compelling enough to become as popular as TVs and PCs.