Prof. Nadia Magnenat Thalmann
Nanyang Technological University, Singapore; University of Geneva, Switzerland
Member of the Swiss Academy of Engineering Sciences. Professor Nadia is the Director of the Institute for Media Innovation (IMI) at Nanyang Technological University, Singapore, and the founder and director of MIRALab, an interdisciplinary research group at the University of Geneva. She served as Vice-Rector of the University of Geneva (2003–2006). She is Editor-in-Chief of The Visual Computer (Springer Verlag), co-Editor-in-Chief of Computer Animation and Virtual Worlds (Wiley), and an associate editor of several other scientific journals.
Professor Nadia Magnenat Thalmann is a computer graphics scientist and roboticist, and is the founder and head of MIRALab at the University of Geneva. She chaired the Institute for Media Innovation at Nanyang Technological University (NTU), Singapore, until July 2021. Nadia Magnenat Thalmann received an MS in Psychology, an MS in Biology, and a Master's in Biochemistry from the University of Geneva. She obtained a PhD in Quantum Physics in 1977 from the same university. She started her career as an Assistant Professor at Laval University in Canada, then became a Professor at the University of Montreal, where she remained until 1988. In 1989, she moved to the University of Geneva, where she founded the interdisciplinary laboratory MIRALab. Thalmann has authored and co-authored more than 600 papers in the areas of virtual humans, social robots, VR, AR, and 3D simulation of human articulations. She has participated in more than 45 European research projects and has initiated several of them. She has served the computer graphics community by creating the Computer Animation and Social Agents (CASA) Conference as well as the Computer Graphics International (CGI) Conference in Geneva, both of which are internationally well-known annual conferences. She is Editor-in-Chief of the journal The Visual Computer, published by Springer, Germany, and co-Editor-in-Chief of the Computer Animation journal published by Wiley, UK. Professor Thalmann has received more than 30 honours and awards, such as "Woman of the Year" for early pioneering contributions to computer graphics in Montreal (1987). More recently, she was awarded a Doctor Honoris Causa in Natural Sciences from Leibniz University Hannover (2009), an Honorary Doctorate from the University of Ottawa (2010), and a Career Achievement Award from the Canadian Human Computer Communications Society in Toronto (2012). The same year, she received the prestigious Humboldt Research Award in Germany and the Eurographics Distinguished Career Award.
Nadine, her social robot, has received more than 1.2 million video views online and has been covered in over 200 international media publications. Professor Thalmann is a life member of the Swiss Academy of Engineering Sciences.
(Online Talk) Speech Title: 3D Modelling of Digital Patients and Surgeons
Abstract: During the last decade, we anatomically modelled the human hip and knee, including bones, ligaments, muscles, and the flow of molecules. We have developed a full methodology, from MRI data for each patient to kinematic modelling, and have demonstrated case studies with virtual ballerinas and soccer players. The work is performed in collaboration with medical doctors. Our ongoing research is to model virtual digital surgeons. The first surgical act that all medical students need to master is the incision and suturing of the skin. For now, this is mostly practised on cadavers with the assistance of skilled real surgeons. In our ongoing work, we are modelling the gestures of both surgeons and students (postures and hand gestures) while performing the incision and suturing of the skin. We model the skilled real surgeons' gestures so that a virtual surgeon can teach medical students how to perform the right gestures. Motion capture of gestures, machine learning models, and training datasets are being developed. In our presentation, we will show our methodology and the first results we have obtained so far.
Prof. Xiaosong Yang, Bournemouth University, United Kingdom
Prof Yang is currently a Professor at the National Centre for Computer Animation, Bournemouth University, United Kingdom. He has over 30 years' experience in research, education, and professional practice in computer animation, machine learning, data mining, digital health, virtual reality, and surgery simulation. He is passionate about promoting technical innovation in AI for animation production and has successfully integrated new techniques into the animation production pipeline. He has produced more than 90 peer-reviewed publications, including many in prestigious journals and conferences. As PI and Co-I, he has secured over 35 research grants from the European Commission, the Arts and Humanities Research Council, the British Academy, Leverhulme, the British Council, the Newton Fund, Innovate UK, Wessex AHSN, and the Higher Education Innovation Fund. Prof Yang has supervised over 30 PhD students, postdocs, research assistants, and international visiting scholars. He has extensive experience managing teams of both artists and technicians, and collaborates closely with companies such as Disney Research, Double Negative, and MPC. He is a member of the Peer Review College of the Arts and Humanities Research Council (AHRC) UK, and a funding reviewer for AHRC, EPSRC, MRC, the German Research Foundation, and the Natural Sciences and Engineering Research Council of Canada (NSERC). Prof Yang has served as conference program chair (CGI 2012, CASA 2020/2022, ICVR 2023) and as a reviewer for many top journals (TVCG, ACM ToG, C&G, Signal Processing, Pattern Recognition, Neurocomputing, IEEE Access) and conferences (SIGGRAPH, Eurographics, ISMAR, PG, CGI, etc.).
(Onsite Talk) Speech Title: Virtual Production – Extended Reality for a Film Director to Tell a Story
Abstract: While many of us were locked down in our homes, a silent revolution was underway in film and television production worldwide, with every major production house shifting to this new technology. The system developed by the Industrial Light & Magic team for the production of The Mandalorian, which garnered numerous awards in recognition of its innovative and standard-setting Virtual Production practices, has rapidly been adopted across the sector, leading to significant skills shortages.
Virtual Production (VP) defines a set of new production practices in which practitioners work in, and interact directly with, a virtual set. VP reduces the need to move crews and equipment to location and enables remote working in Virtual Reality, reducing COVID-19 risks, the environmental footprint, and production costs. Virtual Production combines the use of Virtual Reality (VR), Augmented Reality (AR), Computer-Generated Imagery (CGI), motion capture, camera tracking, and game engine technology with LED walls and intelligent lighting systems, enabling creators to see their scenes unfold as they are composed and captured on set. Virtual Production can be a gateway to the Metaverse, and the blend of the two has a great deal of potential in online games, live events, film, fashion, and business.
In 2022, we secured government funding from the Arts and Humanities Research Council of the United Kingdom to conduct market research on the Virtual Production industries of the UK and China. The project is called "UK-China Research and Innovation Collaboration in Cloud-based Virtual Film Production". We have interviewed many VP experts and practitioners from both academia and leading industry players. Our presentation will cover our research findings, including the technical advantages of VP, existing challenges, and potential solutions arising from collaboration between the two countries. The final market analysis report is available here.