After presenting its technique for cloning the human face at SIGGRAPH, part of an effort to produce more realistic audio animatronics, Disney Research Zurich has released this video taking a closer look at the process, which we began discussing here last month.
The video starts off with an overview of the patent application we previously described. Essentially, the project aims to correct issues with traditional audio animatronics, in which the synthetic skin is stretched as actuators contort it to form various expressions. By using an array of high-definition cameras for markerless motion capture, the system can accurately determine how a specific synthetic skin material, such as silicone, should be cut in varying thicknesses and attached to the animatronic skeleton so that the desired expressions are replicated precisely, down to the wrinkle level. The video then goes on to give a full demonstration of the process, from scanning the subject, to producing the mold, to comparing the original actor with his audio animatronic counterpart.
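At its core this is an inverse problem: find the skin-thickness distribution whose simulated deformation best matches the captured expressions. The sketch below is a deliberately simplified illustration of that idea, not Disney's actual method; a made-up linear model stands in for a real physical simulation of the silicone, and the patch count and thickness bounds are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy stand-in for the real pipeline: a linear map from per-patch skin
# thickness to surface displacement. In practice this would be a physical
# simulation of the silicone, and the target would come from the
# markerless multi-camera capture of the actor's expression.
num_patches, num_vertices = 8, 50
A = rng.normal(size=(num_vertices, num_patches))      # hypothetical model
true_thickness = rng.uniform(2.0, 10.0, num_patches)  # "ground truth"
target_surface = A @ true_thickness                   # captured expression

def fabrication_error(thickness):
    # Squared distance between the simulated surface and the scan.
    return np.sum((A @ thickness - target_surface) ** 2)

# Start from a uniform 5 mm skin and let the optimizer vary each patch
# within (invented) plausible fabrication limits.
result = minimize(fabrication_error, np.full(num_patches, 5.0),
                  method="L-BFGS-B", bounds=[(1.0, 20.0)] * num_patches)
print(np.round(result.x - true_thickness, 3))  # should be near zero
```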
While Disney Research Zurich prepares to present its face cloning for audio animatronic use at SIGGRAPH today, Disney Research Pittsburgh is demonstrating its own new technology, which can turn any isolated plant into an interactive experience by letting computers detect where a human touches it.
Dubbed ‘Botanicus Interacticus: Interactive Plant Technology,’ the system is based on the Touché sensing technology introduced earlier this year and requires only a single electrical wire inserted into the soil. The wire transmits a frequency sweep between 0.1 and 3 MHz, which allows the area where the plant is touched to be estimated without causing any damage to the plant itself.
Gestures such as sliding fingers along a stem or touching specific leaves, as well as user proximity and the degree of contact, can be detected and mapped to computer-controlled functions. Disney Research hopes the technology (which works equally well with artificial plants) can be used to encourage interaction between people and their environments, as well as with each other, by ‘enhancing living, working and social spaces to make them responsive, intelligent and adaptive.’
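For the curious, here is a toy Python sketch of how such swept-frequency sensing could drive gesture recognition. Everything here is hypothetical: the measure_sweep function stands in for the actual sensing hardware, and a simple nearest-template classifier stands in for whatever recognition method the researchers actually use.

```python
import numpy as np

# Toy sketch of swept-frequency touch sensing (not Disney's code).
# One electrode in the soil injects a sweep; different grasps load the
# plant's electrical properties differently, so each gesture yields a
# distinctive amplitude profile across frequencies.
freqs = np.linspace(0.1e6, 3e6, 200)  # 0.1-3 MHz sweep, per the article

def measure_sweep(gesture, noise=0.02):
    """Hypothetical stand-in for the hardware: returns the amplitude
    response of the plant circuit for a simulated gesture."""
    centers = {"no_touch": 2.4e6, "one_finger": 1.5e6, "grasp": 0.6e6}
    profile = np.exp(-((freqs - centers[gesture]) / 4e5) ** 2)
    return profile + np.random.default_rng().normal(0, noise, freqs.size)

# Calibration: record one labeled sweep per gesture.
templates = {g: measure_sweep(g, noise=0.0)
             for g in ("no_touch", "one_finger", "grasp")}

def classify(sweep):
    # Nearest-template classifier over the whole frequency profile.
    return min(templates, key=lambda g: np.linalg.norm(sweep - templates[g]))

print(classify(measure_sweep("grasp")))  # -> 'grasp'
```

The classified gesture could then be mapped to any computer-controlled response, which is the role the Pepper's Ghost projection plays in the exhibit described below.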
‘Botanicus Interacticus’ is being demonstrated at SIGGRAPH in an exhibit that uses the Pepper’s Ghost illusion to project a computer-generated response onto sample plants including bamboo, orchid, cactus and snake plant, with each plant presenting its own unique interactive and visual character.
Traditional motion capture techniques use cameras to meticulously record the movements of actors inside studios, enabling those movements to be translated into digital models. But by turning the cameras around — mounting almost two dozen outward-facing cameras on the actors themselves — scientists at Disney Research, Pittsburgh (DRP), and Carnegie Mellon University (CMU) have shown that motion capture can occur almost anywhere — in natural environments, over large areas and outdoors.
Motion capture makes possible scenes such as those in “Pirates of the Caribbean: Dead Man’s Chest,” where the movements of actor Bill Nighy were translated into a digitally created Davy Jones with octopus-like tentacles forming his beard. But body-mounted cameras enable capture of motions, such as running outside or swinging on monkey bars, that would be difficult — if not impossible — otherwise, said Takaaki Shiratori, a post-doctoral associate at DRP.
“This could be the future of motion capture,” said Shiratori, who will make a presentation about the new technique today (Aug. 8) at SIGGRAPH 2011, the International Conference on Computer Graphics and Interactive Techniques in Vancouver. As video cameras become ever smaller and cheaper, “I think anyone will be able to do motion capture in the not-so-distant future,” he said.
Other researchers on the project include Jessica Hodgins, DRP director and a CMU professor of robotics and computer science; Hyun Soo Park, a Ph.D. student in mechanical engineering at CMU; Leonid Sigal, DRP researcher; and Yaser Sheikh, assistant research professor in CMU’s Robotics Institute.
The wearable camera system makes it possible to reconstruct the relative and global motions of an actor thanks to a process called structure from motion (SfM). Takeo Kanade, a CMU professor of computer science and robotics and a pioneer in computer vision, developed SfM 20 years ago as a means of determining the three-dimensional structure of an object by analyzing the images from a camera as it moves around the object, or as the object moves past the camera.
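As a concrete illustration, the core SfM building block — recovering a camera's motion from its own images — can be sketched with standard computer-vision tooling. The snippet below uses OpenCV to estimate the relative rotation and translation between two frames from a single moving camera; the frame filenames and the intrinsic matrix K are placeholders, and this is a generic textbook recipe rather than the researchers' pipeline.

```python
import cv2
import numpy as np

# Placeholder intrinsics for a hypothetical calibrated body-mounted camera.
K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])

# Two consecutive frames from the same camera (placeholder filenames).
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# Detect and match features between the two frames.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# The essential matrix encodes the relative rotation R and (unit-scale)
# translation t between the two camera poses; RANSAC rejects bad matches.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("relative rotation:\n", R, "\ntranslation direction:", t.ravel())
```

Chaining such pairwise estimates over time, across all of the body-mounted cameras, is what yields the rough global trajectories described next.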
In this application, SfM is not used primarily to analyze objects in a person’s surroundings, but to estimate the pose of the cameras on the person. Researchers used Velcro to mount 20 lightweight cameras on the limbs and trunk of each subject. Each camera was calibrated with respect to a reference structure, and each person then performed a range-of-motion exercise that allowed the system to automatically build a digital skeleton and estimate the positions of the cameras with respect to that skeleton.
SfM is used to estimate the rough position and orientation of the limbs as the actor moves through an environment, and to collect sparse 3D information about the environment that can provide context for the captured motion. These rough estimates serve as the initial guess for a refinement step that optimizes the configuration of the body and its location in the environment, producing the final motion capture result.
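Here is a minimal sketch of what such a refinement step can look like, under invented assumptions: a two-joint planar arm stands in for the full skeleton, and a least-squares solver adjusts joint angles so the predicted camera positions match the (simulated) SfM measurements.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy refinement: SfM gives rough 3D positions for the cameras strapped
# to each limb; refinement finds joint angles whose forward kinematics
# best explain those measurements. All values below are hypothetical.
len_upper, len_lower = 0.3, 0.25  # limb lengths in meters

def camera_positions(angles):
    a, b = angles
    elbow = np.array([len_upper * np.cos(a), len_upper * np.sin(a)])
    wrist = elbow + np.array([len_lower * np.cos(a + b),
                              len_lower * np.sin(a + b)])
    return np.concatenate([elbow, wrist])  # cameras at elbow and wrist

true_angles = np.array([0.8, -0.5])
sfm_measurement = camera_positions(true_angles) + 0.01  # noisy SfM estimate

def residual(angles):
    # Mismatch between the skeleton's predicted camera positions and
    # the positions SfM reported.
    return camera_positions(angles) - sfm_measurement

# Initialize from a rough pose and refine.
fit = least_squares(residual, x0=true_angles + 0.3)
print(np.round(fit.x, 3))  # close to the true joint angles
```

The real system solves a far larger version of this problem, jointly over the whole skeleton, all 20 cameras and every frame, which is why processing is so expensive.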
The quality of motion capture from body-mounted cameras does not yet match the fidelity of traditional motion capture, Shiratori said, but it should improve as the resolution of small video cameras increases.
The technique requires a significant amount of computational power; a minute of motion capture now can require an entire day to process. Future work will include efforts to find computational shortcuts, such as performing many of the steps simultaneously through parallel processing.
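As a loose illustration of the parallelization idea, and assuming for simplicity that each frame could be refined independently (which the real pipeline's temporal coupling may not allow), per-frame work could be distributed across worker processes:

```python
from concurrent.futures import ProcessPoolExecutor

def refine_frame(frame_index):
    # Stand-in for the expensive per-frame pose optimization.
    return frame_index, sum(i * i for i in range(10_000))

if __name__ == "__main__":
    # Farm frames out to worker processes instead of solving them
    # one after another.
    with ProcessPoolExecutor() as pool:
        results = dict(pool.map(refine_frame, range(240)))
    print(len(results), "frames refined")
```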
For more information and to see a video, visit the project website at http://drp.disneyresearch.com/projects/mocap/.
ACM SIGGRAPH announces the launch of the Learning Challenge at SIGGRAPH 2010 – an open competition sponsored by Disney Research with the goal of finding new and creative ways to use technology to make learning fun for children. Based on the principle that fun and learning should not be contradictory, teams are asked to develop an engaging, computer-based learning application that will delight, inspire, and reveal key learning concepts for children ages 7-11.
The learning application must be a layered activity that moves a child from minimal knowledge to active knowledge in one or more learning concepts via entertaining interactions on computers. The subject matter should be in the areas of math, art, science, music, or reading/writing and involve at least one of 10 key learning concepts.
“Pushing the boundaries of computer graphics and interactive techniques is a core part of SIGGRAPH,” says Terrence Masson, SIGGRAPH 2010 Conference Chair from Northeastern University. “We are thrilled that Disney Research has chosen SIGGRAPH 2010 as the location for such a noble competition with the goal of improving youth education through the use of technology and creativity. We anticipate a fantastic response from both the academic and professional communities.”
The competition is open to individuals or teams (from collegiate students working with faculty advisors to working professionals) who must submit work by 7 June 2010. A complete submission includes a one-page abstract, one representative image suitable for use in promotional materials, and up to six supplementary images and/or a maximum five-minute supplementary video. The submission will be judged by a jury of industry leaders and experts.
Twenty finalists will receive travel grants of $1,500 per team and free SIGGRAPH 2010 registration. The winners will be announced 28 July at SIGGRAPH 2010 in Los Angeles and will be eligible to receive a $10,000 cash prize, Disney R&D Tours, Disney Animation Tours, and Walt Disney Studio Tours.
For complete details, visit www.learningchallenge2010.com.