From the Platonic notion of a world of ideas and a world of the senses to computer graphics simulations across the Mixed Reality continuum, the principal aim has been the same: to invent an empirical realism that sustains its inextricably interwoven strands of the ideal and the sensual, the virtual and the real, through pragmatic and structuralist narratives. This process is exemplified in cinematic film narratives, where the composition of real and virtual dynamic elements (such as compositing virtual creatures into real video scenes) has been evolving at an ever-growing pace. Recent developments in computer graphics hardware and algorithms now allow consistent, believable compositions in the interactive equivalent of cinematic compositing: Augmented Reality (AR) simulations. For such interactive, real-time AR synergies to occur, both the geometric and the illumination registration of the virtual elements with the real scene are crucial. The focus of this thesis is the research for such an illumination model, one that allows consistent compositions in the AR 'world of the senses' and, conversely, in the VR 'ideal' world, specifically for virtual character simulations. We focus on virtual humans because the real-time simulation of dynamic virtual characters in AR scenes has so far eluded research. Furthermore, most real-time VR illumination models for virtual humans have been based either on extensions of local illumination or on ad-hoc approaches, and rarely on physically plausible illumination models for such deformable, dynamic and multi-hierarchical geometric meshes. In light of these narratives involving virtual characters in Mixed Realities, we derive two algorithms that build on our observations of virtual human topology and its response to environment illumination, in a quest for a physically plausible model. Our first algorithm extends the latest developments in low-frequency (diffuse, continuous) Precomputed Radiance Transfer models to virtual characters in AR; a minimal sketch of this shading scheme follows below. Our second algorithm presents an all-frequency (including glossy, discontinuous lighting) virtual character illumination model, inspired by the key-and-fill cinematographic lighting setup used in live-action movies. We finally present two virtual heritage case studies involving our combined MR illumination model: a) a desktop VR simulation and b) a mobile AR on-site experience, both involving fully simulated virtual characters re-enacting digital narratives via illumination registration with captured 'natural' light.
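
To make the first algorithm's core idea concrete, here is a minimal sketch of diffuse Precomputed Radiance Transfer (PRT) shading, assuming the standard low-order spherical-harmonic (SH) formulation; the names (shade_vertices, transfer, env_sh) are illustrative placeholders, not the thesis implementation:

    import numpy as np

    # Diffuse PRT sketch. Offline, each vertex stores a transfer
    # vector: the projection of its cosine-weighted, shadowed
    # visibility onto a low-order SH basis. At run time, the
    # environment light (e.g. captured from the real AR scene) is
    # projected onto the same basis, and shading reduces to one dot
    # product per vertex -- cheap enough for deformable characters.

    N_BANDS = 3                    # SH bands; 3 bands -> 9 coefficients
    N_COEFFS = N_BANDS * N_BANDS

    def shade_vertices(transfer, env_sh, albedo):
        """Exit radiance per vertex for a diffuse PRT model.

        transfer : (n_vertices, N_COEFFS, 3) precomputed transfer vectors
        env_sh   : (N_COEFFS, 3) SH projection of the environment light
        albedo   : (n_vertices, 3) diffuse surface color
        """
        # Per-vertex, per-channel dot product of transfer and light.
        radiance = np.einsum('vkc,kc->vc', transfer, env_sh)
        return albedo * np.maximum(radiance, 0.0)

    # Example: one vertex lit by a constant white environment.
    transfer = np.zeros((1, N_COEFFS, 3))
    transfer[0, 0, :] = 1.0        # only the constant SH term responds
    env_sh = np.zeros((N_COEFFS, 3))
    env_sh[0, :] = 1.0
    albedo = np.array([[0.8, 0.6, 0.5]])
    print(shade_vertices(transfer, env_sh, albedo))

Because the light is represented in the same basis as the precomputed transfer, an environment map captured live from the real scene can relight the character each frame at the cost of a few dot products per vertex.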


    Title:

    An Illumination Registration Model for Dynamic Virtual Humans in Mixed Reality



    Publication date:

    2006-01-01


    Remarks:

    doi:10.13097/archive-ouverte/unige:155590



    Type of media:

    Theses


    Type of material:

    Electronic Resource


    Language:

    English



    Classification:

    DDC: 629