Viewers in Microsoft Drama Would Interrogate Suspects

Viewers would be able to control the outcome of a television drama with gestures and voice commands if Microsoft Corp. succeeds in commercializing an interactive TV invention published in a U.S. patent on Tuesday.

While programmers such as Showtime, NBC, Fox and HBO have introduced interactive overlays and second-screen content for primetime series in recent years, the Microsoft invention focuses on how viewers could alter the outcome or sub-plot of a series designed for interactivity from its inception.

“The story may relate to a crime drama where detectives are about to interrogate a suspect. A user may perform a predefined gesture indicating that they wish to interact with the story application to interrogate the suspect. The user may ask questions, and may perform the interrogation in different ways and with different attitudes. These actions may be limited in scope, for example by a script from which the user can select questions. Alternatively, it may be any questions the user wishes to ask,” Microsoft states in the patent.

Microsoft Senior Director of Hardware Engineering Andrew Fuller is named as lead inventor on the patent, titled “Natural user input for driving interactive stories.”

Abstract: A system and method are disclosed for combining interactive gaming aspects into a linear story. A user may interact with the linear story via a natural user interface (NUI) system to alter the story and the images that are presented to the user. In an example, a user may alter the story by performing a predefined exploration gesture. This gesture brings the user into the 3-D world of the displayed image. In particular, the image displayed on the screen changes to create the impression that a user is stepping into the 3-D virtual world to allow a user to examine virtual objects from different perspectives or to peer around virtual objects.

Patent Claims:

1. In a system comprising a computing environment coupled to a capture device for capturing user motion and an audiovisual device for displaying images and/or providing audio, a method of combining interactive gaming aspects into a linear story, comprising: a) presenting the linear story via the audiovisual device using at least one of images and an audio narrative, the linear story having a default story and an altered story, the default story including a default set of images and/or narrative that is presented to a user if a user does not interact with the story; b) detecting at least one of a movement or a voice command by a user via the capture device; and c) altering the linear story to the altered story where at least one of movement or voice command are detected in said step b), the linear story being altered in said step c) by presenting at least one of the additional images and additional narrative to a user via the audiovisual device.
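Claim 1's core logic can be sketched in a few lines: present the default story unless the capture device detects a movement or voice command, in which case the altered story's additional images and narrative are presented. This is a minimal illustration, not the patent's implementation; all names and scene labels are invented.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class LinearStory:
    default_scenes: List[str]   # default set of images/narrative (step a)
    altered_scenes: List[str]   # additional images/narrative for the altered story

def present_story(story: LinearStory, detected_input: Optional[str]) -> List[str]:
    """Return the scene sequence to display.

    detected_input is None when no movement or voice command is
    perceived (step b); otherwise the linear story is altered (step c)
    by presenting the additional scenes.
    """
    if detected_input is None:
        return story.default_scenes                     # unaltered linear story
    return story.default_scenes + story.altered_scenes  # altered story

story = LinearStory(
    default_scenes=["intro", "interrogation", "ending_a"],
    altered_scenes=["user_question", "ending_b"],
)

print(present_story(story, None))             # default story plays through
print(present_story(story, "gesture:point"))  # altered story adds scenes
```

In the crime-drama example from the patent, the detected input might be the predefined gesture signaling that the viewer wants to interrogate the suspect.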

2. The method of claim 1, wherein the linear story includes images having story scenes defined by three-dimensional descriptions of the scene in virtual machine space, said step c) of altering the linear story comprising the step d) of presenting additional images showing a scene from the default set of images from a different three dimensional viewing perspective within the virtual machine space.

3. The method of claim 2, the additional images presented in said step d) creating the impression of a user moving into the scene.

4. The method of claim 2, the additional images presented in said step d) creating the impression of a user examining an object from the scene more closely.

5. The method of claim 2, the additional images presented in said step d) creating the impression of a user examining an object from the scene from a different perspective.

6. The method of claim 2, the additional images presented in said step d) creating the impression of a user looking behind an object within the scene.

7. The method of claim 1, wherein said step c) of altering the linear story comprises the step e) of a user interacting with a character displayed within a scene.

8. The method of claim 1, wherein said step c) of altering the linear story occurs where it is determined that a movement and/or voice command is not an interaction that alters the linear story, but a predefined event occurs which alters the linear story, the predefined event relating to receiving the linear story a predetermined number of times without altering the story.

9. The method of claim 1, wherein the linear story includes an audio narrative and images, said step a) of presenting the linear story comprising the steps of: a1) a user voicing the narrative, a2) matching the user-voiced narrative to corresponding images of the linear story, and a3) presenting the images at a pace determined by a pace of the user-voiced narrative.

10. The method of claim 1, wherein the linear story includes an audio narrative and images presented to a user, said step a) of presenting the linear story comprising the steps of: a1) a third party voicing the narrative, the third party not present with the user and the third party’s voice provided as the narrative via a speaker in a vicinity of the user, a2) matching the third party-voiced narrative to corresponding images of the linear story, and a3) presenting the images at a pace determined by a pace of the third party-voiced narrative.
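Claims 9 and 10 describe pacing the images to a voiced narrative, whether spoken by the user or by a remote third party. A rough sketch of that scheduling step, with invented segment texts and durations, might look like this:

```python
from typing import Dict, List, Tuple

def schedule_images(narrative_segments: List[Tuple[str, float]],
                    segment_to_image: Dict[str, str]) -> List[Tuple[float, str]]:
    """Given (segment_text, spoken_duration_seconds) pairs recognized from
    the voiced narrative, return (start_time, image) pairs so each image
    appears when its matching narrative segment begins (steps a2 and a3)."""
    schedule = []
    t = 0.0
    for text, duration in narrative_segments:
        schedule.append((t, segment_to_image[text]))
        t += duration  # a slow reader stretches the timeline; a fast one compresses it
    return schedule

segments = [("once upon a time", 2.0), ("a detective arrived", 3.5)]
images = {"once upon a time": "scene1.png", "a detective arrived": "scene2.png"}
print(schedule_images(segments, images))  # [(0.0, 'scene1.png'), (2.0, 'scene2.png')]
```

The durations here would in practice come from speech recognition timing; the point of the claims is simply that the images track the narrator's pace.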

11. A processor-readable storage medium for programming a processor to perform a method of combining interactive gaming aspects into a linear story, comprising: a) presenting the linear story via an audiovisual device using at least one of images and an audio narrative, the linear story presented as a complete story, beginning to end and including a default set of images, in the event no interaction by a user is perceived by a capture device monitoring user movements; b) detecting user interaction with the story by a user via a capture device to alter the linear story of the default set of images; c) altering the linear story to a story branch by presenting images in addition to or instead of the default set of images to a user via the audiovisual device if user interaction is received in said step (b) to alter the linear story of the default set of images; and d) scoring a user’s interaction where the interaction corresponds to awarding or taking away a predetermined number of points based on how the user interacts with the story.
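Claim 11's step (d) adds a game-like scoring layer: a predetermined number of points is awarded or deducted based on how the user interacts. A minimal sketch, with interaction names and point values invented for illustration:

```python
# Hypothetical point table: each interaction type carries a
# predetermined award or deduction (these values are not from the patent).
POINT_TABLE = {
    "polite_question": +10,
    "aggressive_question": -5,
}

def score_interactions(interactions, starting_score=0):
    """Apply the predetermined award or deduction for each interaction."""
    score = starting_score
    for interaction in interactions:
        score += POINT_TABLE.get(interaction, 0)  # unlisted interactions score nothing
    return score

print(score_interactions(["polite_question", "aggressive_question", "polite_question"]))
# 10 - 5 + 10 = 15
```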

12. The processor-readable storage medium of claim 11, wherein the linear story includes images having story scenes defined by three-dimensional descriptions of the scene in virtual machine space, said step c) of altering the linear story comprising the step e) of presenting additional images showing a scene from the default set of images from a different three dimensional viewing perspective within the virtual machine space.

13. The processor-readable storage medium of claim 11, wherein said step b) comprises the step f) of a user taking over at least partial control of a character displayed as part of the linear story.

14. The processor-readable storage medium of claim 13, wherein said step f) comprises the step of a user controlling movement of a character displayed as part of the linear story in a monkey-see-monkey-do fashion, and/or a user controlling talking of the character.

15. The processor-readable storage medium of claim 11, wherein said steps a) and c) comprise the step of displaying the linear story and/or story branch in at least one of still-image panels, dynamic computer graphics animation and linear video.

16. A system for combining interactive gaming aspects into a linear story, comprising: an audiovisual device operable to present at least one of images and an audio narration; an image capture device operable to capture at least one of image and audio data from a user; and a computing environment coupled to the audiovisual device and image capture device, the computing environment operable to: a) present the linear story via the audiovisual device using at least one of images and an audio narrative, the linear story presented as a complete story, beginning to end and including a default set of images, in the event no interaction by a user is perceived by the capture device; b) detect an exploration gesture via the capture device; c) branch from the linear story to a story branch upon identifying the exploration gesture in said step b), the branch including: c1) sensing a point on the audiovisual device indicated by the user to be a desired viewing perspective, and c2) displaying the virtual object from the viewing perspective indicated in step c1).
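Claim 16's branch can be pictured as mapping a point the user indicates on the screen (by head position per claim 17, or by hand per claim 18) to a camera perspective from which the virtual object is redrawn. The geometry below is deliberately simplified and every name is illustrative:

```python
def screen_point_to_camera_angle(x: float, screen_width: float) -> float:
    """Map a horizontal screen point to a viewing angle in degrees:
    left edge -> -45, centre -> 0, right edge -> +45 (step c1)."""
    return (x / screen_width - 0.5) * 90.0

def branch_on_gesture(gesture: str, indicated_x: float, screen_width: float = 1920.0):
    """Return the rendering state: the linear story by default, or a
    branched view of the virtual object when the exploration gesture
    is identified (steps b and c)."""
    if gesture != "exploration":
        return ("linear_story", 0.0)
    angle = screen_point_to_camera_angle(indicated_x, screen_width)
    return ("explore_object", angle)   # display from the indicated perspective (step c2)

print(branch_on_gesture("none", 960.0))          # stays in the linear story
print(branch_on_gesture("exploration", 1920.0))  # object viewed from +45 degrees
```

Claim 19's return path would be the inverse: a "finished examining" gesture flips the state back to `linear_story`.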

17. The system of claim 16, the computing environment operable to sense the point on the audiovisual device indicated by the user by the capture device sensing a position of the user’s head.

18. The system of claim 16, the computing environment operable to sense the point on the audiovisual device indicated by the user by the capture device sensing a point indicated by the user’s hand.

19. The system of claim 16, the computing environment operable to branch back to the linear story when the user gestures that they are finished examining the virtual object.

20. The system of claim 16, wherein the user is able to augment a score associated with the user’s experience in interacting with the linear and branched story.