With a patent application titled simply ‘Role-Play Simulation Engine,’ Disney Parks may be looking to use its NextGen technology base to cash in on the cosplay and LARPing — that’s Live Action Role-Playing — crazes and bring a brand new experience to its theme parks.
The patent application allows for guests to participate in ‘long-form role play’ events in which they interact with performers employed by the park to engage them in role-playing activities. The performers don’t even need to be human: they can be audio animatronics, for example, or something as simple as a video screen that triggers in response to the guest’s arrival.
All of this information — prompts to performers and responses by guests — is fed into an electronic device/system known as the planner, or game master. The planner then decides how the rest of the role-playing session should play out based on how the guest has responded to the game thus far.
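As a rough illustration of what such a planner might look like, here is a minimal sketch in Python. The class name, the story graph, and the response labels are all invented for illustration; the patent application does not specify any implementation.

```python
# Hypothetical sketch of the 'planner'/game-master concept: a state machine
# that advances the session based on guest responses. Everything here
# (states, responses, structure) is an illustrative assumption.

class Planner:
    def __init__(self):
        # Each state maps a guest response to the next scene to trigger.
        self.story = {
            "intro":  {"accept_quest": "forest", "decline": "farewell"},
            "forest": {"found_clue": "castle", "gave_up": "farewell"},
            "castle": {"solved_riddle": "finale", "failed": "forest"},
        }
        self.state = "intro"
        self.history = []  # record of (state, response) for later decisions

    def record_response(self, response):
        """Advance the session based on the guest's latest response."""
        self.history.append((self.state, response))
        next_state = self.story.get(self.state, {}).get(response)
        if next_state:
            self.state = next_state
        return self.state

planner = Planner()
planner.record_response("accept_quest")   # guest agrees to play
planner.record_response("found_clue")
print(planner.state)                      # castle
```

The `history` list stands in for the session record the patent describes, which performers encountered later in the day could consult via the planner.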
The idea is to take what already exists, such as the World Showcase Players, to a much larger scale, allowing the guest to roam the park(s) freely while the session continues, encountering multiple performers along the way, all of whom are kept apprised of the session via the planner device. In addition to session-specific information, the planner will also make additional information about the guest available to the performers to help carry out the performance.
The guest can even make use of special props in their role-play activities, which can of course employ RFID- or GPS-enabled devices that allow the planner/game master/performers to track a guest’s location throughout the park and respond accordingly.
Role Play Simulation Engine is US patent application #20130066608 and was developed by Asa Kalama, Cory Rouse, Michael Ilardi and Reid Swanson.
While Walt Disney World’s billion-dollar NextGen project has been no secret for quite some time, along with many of its aspects such as extensive use of RFID, the company itself has remained famously mum about the extent of the project, something we have been discussing for well over a year.
A new article from the New York Times goes into the technology, now officially known as MyMagic+ and My Disney Experience (terms we have been applying for nearly a year), and some of the experiences it will offer. Some of the recently confirmed technology will allow characters to deliver personalized experiences, a topic we have spoken about quite a bit, including in this article on just what Disney will learn about its guests, from which the NYT quotes some of our commenters. Back in October, we also looked at additional NextGen technology which describes just how MyMagic+ will function inside the parks and negotiate multiple guests with multiple ‘entitlements,’ including characters who will greet guests by name (as the NYT article also suggests).
As extensive as the NYT article appears to be, it’s our contention that it’s still just a drop in the bucket compared with what the full MyMagic+ experience will entail, such as ‘Achievements’ (discussed in our Big Brother article), a mobile app that will let family members at home virtually join the trip and interact, including purchasing real in-park gifts, personalized and interactive elements at attractions like ‘it’s a small world,’ and much, much more.
Disney Research Pittsburgh has just released the video below, which demonstrates one of its latest projects: an audio animatronic robot that can interact with people by playing catch with them. The system uses an off-the-shelf Microsoft Kinect (according to the video’s narration) along with an external camera system (ASUS Xtion PRO LIVE) to locate balls, and a Kalman filter to predict each ball’s destination and timing. Not only can the robot track a person’s position and size by the location of their head, it can also attempt to move its hand to catch the ball. If the robot misses the catch, it’s fully aware of the miss and even responds with one of several humorous animations to elicit a response from the person interacting with it.
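For the curious, a Kalman filter of the kind the narration mentions can be sketched in a few lines. This is a minimal, assumed single-axis (height) version with gravity as the control input; the frame rate, noise values, and measurements are all invented, and the real system tracks full 3D trajectories.

```python
import numpy as np

# Minimal constant-velocity-plus-gravity Kalman filter for one axis (height),
# illustrating how noisy ball sightings can be fused and rolled forward to
# predict arrival. All matrices and noise values are illustrative assumptions.

dt = 1 / 30.0                      # camera frame interval (30 fps assumed)
g = -9.81                          # gravity, m/s^2

F = np.array([[1, dt], [0, 1]])    # state transition: [position, velocity]
B = np.array([0.5 * dt**2, dt])    # control input applying gravity each step
H = np.array([[1.0, 0.0]])         # we only measure position
Q = np.eye(2) * 1e-4               # process noise (assumed)
R = np.array([[1e-2]])             # measurement noise (assumed)

x = np.array([2.0, 3.0])           # initial height 2 m, upward velocity 3 m/s
P = np.eye(2)                      # initial state covariance

def kalman_step(x, P, z):
    """One predict/update cycle given a height measurement z."""
    x_pred = F @ x + B * g
    P_pred = F @ P @ F.T + Q
    y = z - H @ x_pred                       # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + (K @ y).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Fuse a few noisy height sightings, then roll the model forward
# (predict-only, no measurements) to estimate where the ball will be.
for z in [2.09, 2.16, 2.21]:
    x, P = kalman_step(x, P, np.array([z]))
future = x.copy()
for _ in range(5):                 # pure prediction, 5 frames ahead
    future = F @ future + B * g
print(round(float(future[0]), 2))  # estimated height 5 frames from now
```

The prediction step is what lets the robot start moving its hand before the ball arrives, rather than reacting to each sighting.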
Disney Research has also been able to use the system to successfully juggle up to three balls at a time, with a professional juggler as the participant.
According to the video and its description, Disney Research is hopeful this work leads to fully interactive experiences between guests and audio animatronics in environments such as theme parks, while keeping guests a safe distance from the robots.
Stop us if you’ve heard this one: When is a character not a character? When they’re an ‘experience delivery system.’ Get it? Okay, maybe it’s not that funny, or clever, but according to a patent application for a system known as ‘Managing Experience State to Personalize Destination Visits,’ it’s the future truth — and it’s a key element to the MyMagic+/My Disney Experience coming to the Walt Disney World Resort as part of its NextGen experience.
Though it lacks any shocking revelations, the patent application answers one of the most pressing questions raised since we began sharing information about the project. Completely apart from FASTPASS+, which is available to families just as the regular FASTPASS system is, this is more in line with the “it’s a small world” real-world avatar; the question being: if multiple people are entitled to a customization, how does Disney decide who gets it?
And so then you have this: the Experience State Management System. And it goes a little something like this.
First, the familiar. Guests will have the reusable and personalized (for an upcharge) MagicBand, which uses RFID technology and serves as the key to virtually everything: unlocking hotel room doors (for which Disney is aggressively updating all the locks as you read this), holding park admission media (though both of these are optional depending on the circumstances), and holding access to FASTPASS+ enabled attractions. Readers, however, will be installed virtually everywhere, and it is no overstatement to suggest that the system is capable of identifying guests virtually anywhere in the parks. We also know that with readers identifying guests as they enter attractions, the system can provide supplementary information to a cast member, who can now greet a guest by name, wish them a happy birthday even if there’s no button, congratulate them on their graduation, or acknowledge any other celebration imaginable, so long as it’s noted in their CRS database.
That’s where the ESMS really comes in, because it too will have access to all of this information. Not only will it be able to see what entitlements a guest is set to receive, it will also be able to record and reference entitlements already distributed. It can therefore use guest history to decide who among a group of guests will receive the special attention at any single experience, whether within the same family or across different groups. For example, an attraction could be configured to wish someone a happy birthday when it detects them in a group. But what if there are two guests celebrating a birthday that day? Perhaps one of them was already recognized for it earlier in the day, so the system will decide to honor the other guest. The system can weigh several other factors as well, and if it determines there is a statistical tie, it can make a random decision to balance the numbers going forward. It will also use biological information such as age and gender to determine which experiences are available to a particular guest.
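The arbitration step described above, preferring a guest who hasn't yet been recognized and falling back to a random pick on a tie, can be sketched very simply. The field names and data layout below are our own assumptions, not anything from the patent application.

```python
import random

# Hypothetical sketch of the ESMS arbitration logic: among guests entitled
# to an experience, prefer one who has not yet received it today, and break
# ties at random. All field names are illustrative assumptions.

def pick_guest(guests, experience, rng=random):
    """guests: list of dicts with 'name', 'entitlements' (set),
    and 'received' (set of experiences already delivered today)."""
    eligible = [g for g in guests if experience in g["entitlements"]]
    if not eligible:
        return None
    fresh = [g for g in eligible if experience not in g["received"]]
    pool = fresh or eligible          # statistical tie: fall back to everyone
    chosen = rng.choice(pool)
    chosen["received"].add(experience)   # record it for later decisions
    return chosen["name"]

guests = [
    {"name": "Ana", "entitlements": {"birthday"}, "received": {"birthday"}},
    {"name": "Ben", "entitlements": {"birthday"}, "received": set()},
    {"name": "Cal", "entitlements": set(),        "received": set()},
]
print(pick_guest(guests, "birthday"))   # Ben (Ana was already recognized)
```

Updating the `received` set is the key move: it is what lets the next attraction down the path make a different choice for the same group.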
Aside from a talking character (note: we do not say face character) being able to greet guests by name upon entering a room, several other potential uses are suggested by the patent application. Special upcharge experiences include birthday acknowledgements, or a pirate experience in which the guest is automatically acknowledged as a pirate throughout the parks in any number of ways, including being visually morphed into a pirate, dressed like a pirate, etc. The same technique could be applied to make the guest appear to be most anything, such as a movie star or athlete.
Although the patent application doesn’t explicitly mention it, the ESMS is also likely to play a role in Achievements, which we also previously spoke about to some extent.
For further reading, you can view the patent application in its entirety here.
In an age where Kinect and PlayStation Eye/Move are encouraging less traditional interaction with video game consoles and cameras are a mainstay in virtually everything, including most mobile devices, one man at Disney Interactive sees video game systems moving even further from the familiar path and letting console games make their own decisions based on — you guessed it — physical appearance.
Both ‘System and method for number of players determined using facial recognition’ (US Patent Application 20120214585) and ‘Gender and age based gameplay through face perception’ (US Patent Application 20120214584) list Phillippe Paquet as the sole inventor and offer to leverage existing technology in interesting ways.
The patent application titles pretty much describe the concepts and, to be honest, conceptual is mostly what these appear to be at this time. Though the technology is there, as Disney’s own Imagineers have demonstrated the capability of identifying and tracking individuals in a crowd, the implementations and practical applications seem to be lacking.
In short, Paquet envisions video game consoles (or virtually any device, including vehicle simulators) being able to identify the number of participants, distinguishing active players from spectators, as well as the approximate ages and genders of the players. Furthermore, he anticipates gameplay will change automatically, catering to what the system learns of its users. While a fascinating prospect, this would encourage locking players into stereotypical gender roles, which could be a problem even assuming the gender detection is 100% accurate. Even the example of using age to determine difficulty is questionable, since skillsets — particularly when it comes to video games — vary with the individual rather than the number of growth rings inside their body.
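To make the idea concrete, here is a toy sketch of the decision layer the applications describe: take whatever a face-detection pipeline reports and derive game settings from it. The detector output format, the `attentive` flag, and the age cutoff are all invented for illustration (and the cutoff embodies exactly the dubious assumption criticized above).

```python
# Illustrative sketch of the patents' concept: derive player count and game
# settings from face-detection output. The input format and the
# age-to-difficulty rule are assumptions, not anything from the filings.

def configure_game(faces):
    """faces: list of dicts like {'age': 9, 'attentive': True} from a
    hypothetical camera pipeline; returns basic game settings."""
    players = [f for f in faces if f["attentive"]]   # spectators are ignored
    youngest = min((f["age"] for f in players), default=None)
    return {
        "player_count": len(players),
        # The questionable age-based difficulty mapping the article critiques:
        "difficulty": "easy" if youngest is not None and youngest < 10
                      else "normal",
    }

print(configure_game([{"age": 9, "attentive": True},
                      {"age": 34, "attentive": True},
                      {"age": 61, "attentive": False}]))
# {'player_count': 2, 'difficulty': 'easy'}
```

Note how one nine-year-old in frame drags the whole session to "easy", which is precisely the kind of blanket inference the article questions.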
Interestingly enough, it wasn’t too long ago that Face.com (now owned by Facebook) had developed significant facial recognition software which was even able to attempt to determine an individual’s age. Thus it would appear that Disney’s main concern is simply to stake out the uncharted territory in case it becomes a reality. As a final bullet point of interest, the patent applications were filed about a month before Face.com revealed its age-identification technology to the public.
After presenting its technique for cloning the human face in an effort to produce more realistic audio animatronics at SIGGRAPH, Disney Research Zurich has released this video which takes a closer look at the process, which we began discussing on here last month.
The video starts off by giving an overview of the patent application which we previously described. Essentially the project aims to correct issues with traditional audio animatronics in which the synthetic skin is stretched as actuators contort it to form various expressions. By using an array of high definition cameras to produce marker-less motion capture, the system can accurately determine how a specific synthetic skin material, such as silicone, should be cut in terms of varying thicknesses and attached to the animatronic skeleton so that the desired expressions are replicated precisely, down to the wrinkle level. The video then goes on to give a full demonstration of the process, from scanning the subject, to producing the mold, to comparing the original actor with his audio animatronic counterpart.
While Disney Research Zurich prepares to present its face cloning for audio animatronic use at SIGGRAPH today, Disney Research Pittsburgh is demonstrating its own new technology, which can turn any isolated plant into an interactive experience by allowing computers to detect where a human touches the plant.
Dubbed ‘Botanicus Interacticus: Interactive Plant Technology,’ the technology, which is based on the Touche technology introduced earlier this year, requires only a single electrical wire inserted in the soil. The wire transmits a frequency sweep between 0.1 and 3 MHz, which allows the area in which the plant is touched to be estimated without causing any damage to the plant itself.
Gestures such as sliding fingers, touching specific leaves, user proximity or amount of touch can be detected and mapped to perform computer-controlled functions. Disney Research hopes the technology (which works equally well with artificial plants) can be used to encourage activity between people and their environments as well as each other by ‘enhancing living, working and social spaces to make them responsive, intelligent and adaptive.’
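The gesture detection described above can be thought of as profile matching: each sweep of frequencies yields a response curve, and different touches reshape that curve in characteristic ways. Below is a heavily simplified sketch of that idea; the template values, gesture names, and nearest-template matching are all invented stand-ins (the actual research uses a trained classifier on real sweep data).

```python
import math

# Hedged sketch of swept-frequency touch sensing: a sweep yields a response
# profile, and different gestures produce different profile shapes, which we
# match against stored templates. All values below are made up.

TEMPLATES = {
    "no_touch":   [0.9, 0.8, 0.7, 0.6, 0.5],
    "one_leaf":   [0.9, 0.6, 0.4, 0.5, 0.5],
    "full_grasp": [0.5, 0.3, 0.2, 0.2, 0.3],
}

def classify(profile):
    """Return the gesture whose template is closest (Euclidean) to the sweep."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda name: dist(profile, TEMPLATES[name]))

print(classify([0.88, 0.62, 0.41, 0.52, 0.48]))   # one_leaf
```

The matched gesture is what would then be mapped to a computer-controlled function, such as triggering the visual response at the demonstration exhibit.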
‘Botanicus Interacticus’ is being demonstrated at SIGGRAPH at an exhibit that uses the Pepper’s Ghost illusion to project a computer-generated response to samples including bamboo, orchid, cactus and snake plant, with each plant presenting its own unique interactive and visual character.
In this day and age, when 3D scans of human faces are turned into exciting keepsakes, such as the Disney/LucasFilm Star Wars Weekends experience ‘Carbon Freeze Me’ (in which guests could receive a replica of themselves frozen in carbonite a la Han Solo) and the upcoming ‘I Am A Princess’ (which builds on a previous test in which guests could have a princess doll made in their likeness), technology is becoming a key player in even the most traditional of trades.
Therefore it shouldn’t come as a surprise that one of the pioneering technologies employed by The Walt Disney Company is being updated in a fascinating new way that will attempt to make audio animatronic figures rival the most advanced 3D, high definition screens. The ominous-sounding ‘Physical Face Cloning’ patent application (US 2012/0185218) seeks to improve upon the decades-old theme park experience by using some complicated algorithms to produce the most life-like audio animatronic figures to date.
Based on the listed location of the majority of the inventing team, the project appears to come out of Disney Research in Zurich, Switzerland. Disney Research recently came into the limelight with its technology dubbed Touche, which allows users to control devices by gestures.
The problem with today’s audio animatronic figures, according to the patent, is that they require enlisting a team of animators, sculptors and other experts to create a face and skeletal system able to produce a realistic set of human expressions, as relayed through a layer of artificial skin. Taking the guesswork out of the process, the new system could simply use motion capture technology to record the human subject’s face making various expressions and, via some decidedly non-simple mathematical formulas, generate the perfect layer of silicone rubber skin (or whichever material is desired) of varying thickness, along with directions for attaching said skin to the skeleton, so that when the skin is stretched and manipulated on the figure to form the desired expressions, it provides the most realistic visuals possible.
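At its core, this is an inverse problem: search for the skin parameters that make the simulated figure best reproduce the captured expressions. The real method optimizes a full physics simulation of deforming silicone; the toy below collapses that to a single thickness parameter, a fake linear deflection model, and invented measurements, purely to show the shape of the fitting step.

```python
# Toy illustration only: the actual method optimizes a full physical skin
# simulation. Here, a 1-D stand-in where a patch's deflection under actuation
# scales inversely with its thickness, and we grid-search the thickness that
# best reproduces the 'captured' deflections. All numbers are invented.

CAPTURED = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # (actuator force, observed deflection)

def deflection(force, thickness, k=0.5):
    """Simplistic linear skin model: stiffer (thicker) skin deflects less."""
    return force / (k * thickness)

def fit_thickness(samples):
    """Grid-search the thickness minimizing squared deflection error."""
    best, best_err = None, float("inf")
    for t in [x / 100 for x in range(50, 300)]:   # 0.50 .. 2.99 (arbitrary units)
        err = sum((deflection(f, t) - d) ** 2 for f, d in samples)
        if err < best_err:
            best, best_err = t, err
    return best

print(round(fit_thickness(CAPTURED), 2))   # → 1.0
```

In the actual research the "deflection model" is a nonlinear finite-element simulation and the unknowns are thickness values across the whole face, but the optimize-to-match-capture loop is the same basic idea.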
UPDATE #1: 7/25/12 – Disney Research will discuss the new technology at SIGGRAPH 2012 on August 9. More information here.
Disney Studio All Access Finally Launching This Month? (Introducing Disney Everywhere’s Movie Cloud)
Now that Disney (NYSE:DIS) has finally gotten its ‘TV Everywhere’ initiative off the ground, as we first reported earlier this month — with even more networks such as ABC Family on the way, along with cable providers beyond Comcast — the focus now shifts to Disney’s extensive film collection.
An announcement made last week on Disney Movies Online has raised some eyebrows, causing some to ponder if DSAA/Keychest’s time has finally arrived. Certainly the changes coming to DMO on June 27 are worth the contemplation: accounts for those under 13 not permitted; accounts only for United States users; and a slew of films that won’t be available for viewing online for the foreseeable future.
More curiously, however, and perhaps more to the point, are the following domain names very recently registered by the company: DISNEYANYWHERE.COM, DISNEYEVERYWHERE.COM, DISNEYMOVIECLOUD.COM, DISNEYMOVIESANYWHERE.COM, DISNEYMOVIESEVERYWHERE.COM.
While it’s possible Disney Studio All Access may finally reveal itself to the world this summer (see our sneak peek for more details), what we’re more likely to see is an interim phase in which Disney Movies Online simply goes mobile on Apple iOS and Android devices.
As far as DSAA is concerned, Disney has officially been maintaining a ‘wait and see’ attitude, monitoring the successes and failures of UltraViolet, the only competing service. UltraViolet has managed to rack up more than 3 million accounts since its debut, most of them the result of a push campaign by Walmart this year, but the service continues to confuse and disappoint its customer base.