Being able to animate plausible hair is essential for achieving realistic virtual humans. Hair, however, is one of the most difficult features to model: a full head of hair consists of more than 100 000 interacting strands, i.e. anisotropic, constant-length elastic fibers that collide with each other and with the body, even in rest postures. Both the static shape and the dynamic motion of hair emerge from these complex interactions, which are too numerous to be computed individually.
This talk first presents the basic methodology developed in Computer Graphics for animating virtual hair. We then discuss two alternative approaches for capturing hair self-interactions, namely volumetric versus wisp-based methods. The remainder of the talk focuses on two recent contributions: the "adaptive wisp tree" method, which handles complexity by automatically adapting the level of detail of the hair model during animation, and the super-helix model, a stable representation for strands which accurately captures both straight and curly hair, enabling larger time steps. This last model, enhanced by adapted collision processing and rendering techniques, is validated through side-by-side comparison with real hair.
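As a purely illustrative sketch (not the authors' formulation): a strand segment with constant curvature kappa and torsion tau traces a circular helix, the geometric primitive underlying piecewise-helical strand models. The function name and parameters below are hypothetical; the code samples points along such a helix, parametrized by arc length.

```python
import math

def helix_points(kappa, tau, length, n):
    """Sample n points along an arc-length-parametrized circular helix
    with constant curvature kappa (> 0) and torsion tau.
    Larger kappa gives a tighter curl; tau = 0 gives a planar circle arc."""
    a = kappa / (kappa**2 + tau**2)   # helix radius
    b = tau / (kappa**2 + tau**2)     # axial rise per unit turn angle
    c = math.sqrt(a * a + b * b)      # = 1 / sqrt(kappa^2 + tau^2)
    pts = []
    for i in range(n):
        s = length * i / (n - 1)      # arc length along the strand
        t = s / c                     # turn angle at arc length s
        pts.append((a * math.cos(t), a * math.sin(t), b * t))
    return pts
```

Because the parametrization is unit-speed, the polyline length of a finely sampled strand matches the prescribed `length`, which is what makes such a representation naturally inextensible (constant-length fibers, as noted above).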
Last modified: Thursday, 28-Jul-2005 17:23:30 NZST
This page is maintained by the seminar list administrator.