Verizon Tackles 3D Virtual Event Viewing

Verizon won a U.S. patent on Tuesday for an invention that details how it could build “immersive 3D virtual environments” to allow subscribers to view live events captured from multiple angles at stadiums and arenas.


Waltham, Mass.-based Verizon systems integration engineer and SeaChange veteran Jeffrey Davison is named as lead inventor on the patent, titled “Virtual event viewing.”

Abstract: A method may include receiving a number of video feeds for a live event from video capture devices located at an event venue. A three-dimensional model of the event may be generated based on received video feeds. A request to view a virtual event corresponding to the live event may be received from a user device. The 3D model may be forwarded to the user device. A virtual representation of the event may be output based on the 3D model. A request may be received to manipulate a view within the virtual representation. A modified virtual representation of the event may be output based on the request.
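
For readers who want to see the shape of what's being claimed, here is a minimal sketch of that pipeline in Python. To be clear, every name in it (VideoFeed, ServiceProvider, manipulate_view, and so on) is invented for illustration; the patent specifies behavior, not an implementation, and leaves the 3D reconstruction step entirely open.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the pipeline described in the abstract. All names
# are invented for illustration; nothing here comes from the patent itself.

@dataclass
class VideoFeed:
    camera_id: str                            # one capture device at the venue
    frames: list = field(default_factory=list)

@dataclass
class Model3D:
    event_id: str
    feeds: list                               # source feeds the model was built from

def generate_3d_model(event_id: str, feeds: list) -> Model3D:
    # The patent does not name a reconstruction technique; multi-view
    # reconstruction from the synchronized feeds is one plausible reading.
    return Model3D(event_id=event_id, feeds=feeds)

class ServiceProvider:
    """Receives venue feeds, builds the model, forwards it on request."""

    def __init__(self) -> None:
        self.models: dict = {}

    def ingest(self, event_id: str, feeds: list) -> None:
        self.models[event_id] = generate_3d_model(event_id, feeds)

    def forward_model(self, event_id: str) -> Model3D:
        return self.models[event_id]

class UserDevice:
    """Outputs the virtual representation and handles view requests."""

    def __init__(self) -> None:
        self.model = None
        self.view = (0.0, 0.0, 0.0)           # virtual camera position

    def request_virtual_event(self, provider: ServiceProvider, event_id: str) -> None:
        self.model = provider.forward_model(event_id)

    def manipulate_view(self, x: float, y: float, z: float) -> str:
        # A user request to reposition the view yields a modified virtual
        # representation rendered from the new camera position.
        self.view = (x, y, z)
        return f"rendering {self.model.event_id} from viewpoint {self.view}"

if __name__ == "__main__":
    provider = ServiceProvider()
    provider.ingest("game-001", [VideoFeed("cam-1"), VideoFeed("cam-2")])

    device = UserDevice()
    device.request_virtual_event(provider, "game-001")
    print(device.manipulate_view(10.0, 2.5, -3.0))
```

Note the design the claims imply: the whole 3D model is forwarded to the user device, so changing the view is handled client-side rather than by requesting a new video stream from the provider for every camera move.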


Claims:

  1. A computer-implemented method comprising: receiving a number of video feeds for a live event from video capture devices located at an event venue, wherein the received number of video feeds include motion capture data from motion capture assisting elements associated with at least one participant in the live event; generating a three-dimensional (3D) model of the live event based on the received number of video feeds, wherein generating the 3D model includes extracting the motion capture data from the received number of video feeds; receiving, from a user device, a request to view a virtual event corresponding to the live event; forwarding the 3D model to the user device; outputting, by the user device, a virtual representation of the live event based on the 3D model; receiving, by the user device, a user request to manipulate a view within the virtual representation; outputting, by the user device, a modified virtual representation of the live event based on the user request; receiving, by the user device and from the user, audio content corresponding to at least a portion of the modified virtual representation; storing, by the user device, the at least the portion of the modified virtual representation of the live event and the audio content for subsequent outputting, wherein the stored at least the portion of the modified virtual representation depicts a particular point of view within the 3D model based on the user request and the audio content corresponds to the particular point of view; and mapping data from the at least the portion of the modified virtual representation of the live event into another 3D virtual environment, wherein thematic appearance or character information is mapped onto one or more of event, object, or participant elements of the virtual event based on input from a user.
  2. The computer-implemented method of claim 1, wherein the number of video feeds comprises a number of video feeds from different angles and perspectives relative to the live event.
  3. The computer-implemented method of claim 1, further comprising: receiving event information relating to the number of video feeds; and generating the 3D model of the live event based on received video feeds and the event information, wherein generating the 3D model includes selecting and navigating the 3D model based on the event information.
  4. The computer-implemented method of claim 3, wherein the event information comprises information regarding identities of event elements.
  5. The computer-implemented method of claim 1, wherein receiving the number of video feeds, generating the 3D model, and forwarding the 3D model are performed: in real-time in relation to the live event, or following completion of the live event.
  6. The computer-implemented method of claim 1, further comprising: obtaining information regarding elements in the 3D model, wherein the elements in the 3D model comprise participants, teams, or objects.
  7. The computer-implemented method of claim 1, further comprising: inserting advertisement information into the 3D model when forwarding the 3D model to the user device.
  8. The computer-implemented method of claim 1, further comprising: forwarding the 3D model to a third party for insertion into a virtual environment associated with the third party.
  9. The computer-implemented method of claim 1, wherein receiving the request to manipulate a view within the virtual representation comprises receiving a request to position a virtual camera within the virtual representation.
  10. The computer-implemented method of claim 1, further comprising: receiving, by the user device, a user request to share the stored at least the portion of the modified virtual representation with another user; and transmitting, by the user device, the stored at least the portion of the modified virtual representation to the other user based on the request.
  11. The computer-implemented method of claim 1, wherein generating the 3D model includes extracting the motion capture data, further comprises: extracting the motion capture data based on radio frequency identification (RFID) tags associated with event elements.
  12. A system comprising: a service provider device; and a user device connected to the service provider device via a network, wherein the service provider device is configured to: receive a number of video feeds for a live event from video capture devices located at an event venue, wherein the received number of video feeds include motion capture data from motion capture assisting elements associated with at least one participant in the live event; generate a three-dimensional (3D) model of the live event based on the received number of video feeds, wherein generating the 3D model includes extracting the motion capture data from the received number of video feeds; receive, from the user device, a request to view a virtual event; and forward the 3D model to the user device, and wherein the user device is configured to: output a virtual representation of the live event based on the 3D model; receive a request to manipulate a view within the virtual representation; output a modified virtual representation of the live event based on the request; receive audio content corresponding to at least a portion of the modified virtual representation; store the at least the portion of the modified virtual representation of the live event and the audio content for subsequent outputting, wherein the stored at least the portion of the modified virtual representation depicts a particular point of view within the 3D model based on the request and the audio content corresponds to the particular point of view; and map data from the at least the portion of the modified virtual representation of the live event into another 3D virtual environment, wherein thematic appearance or character information is mapped onto one or more of event, object, or participant elements of the virtual event based on input from a user.
  13. The system of claim 12, wherein the number of video feeds comprise a number of video feeds from different angles and perspectives relative to the live event.
  14. The system of claim 12, wherein the service provider device is further configured to: receive identity information regarding an identity of event elements relating to the number of video feeds; and generate the 3D model of the live event based on received video feeds and the received identity information.
  15. The system of claim 12, wherein the service provider device is further configured to: obtain additional information regarding elements in the 3D model, wherein the elements in the 3D model comprise participants, teams, or objects.
  16. The system of claim 12, wherein the user device is further configured to: receive a request to position a virtual camera within the virtual representation; and output the modified virtual representation of the live event based on the request.
  17. The system of claim 12, wherein the user device is further configured to: receive a user request to share the stored at least the portion of the modified virtual representation with another user; and transmit the stored at least the portion of the modified virtual representation to the other user based on the request.
  18. A non-transitory computer-readable medium having stored thereon sequences of instructions which, when executed by at least one processor, cause the at least one processor to: receive a number of video feeds for a live event from video capture devices located at an event venue, wherein the received number of video feeds include motion capture data from motion capture assisting elements associated with at least one participant in the live event; generate a three-dimensional (3D) model of the live event based on the received number of video feeds, wherein generating the 3D model includes extracting the motion capture data from the received number of video feeds; receive, from a user device, a request to view a virtual event corresponding to the live event; forward the 3D model to the user device; output a virtual representation of the live event based on the 3D model; and receive, from a user of the user device, a request to manipulate a view within the virtual representation; output a modified virtual representation of the live event based on the request; receive, from the user of the user device, audio commentary corresponding to the modified virtual representation of the live event; store at least a portion of the modified virtual representation of the live event and the audio commentary corresponding to the modified virtual representation for subsequent viewing, wherein the stored at least the portion of the modified virtual representation depicts a particular point of view within the 3D model based on the request and the audio commentary corresponds to the particular point of view; and map data from the at least the portion of the modified virtual representation of the live event into another 3D virtual environment, wherein thematic appearance or character information is mapped onto one or more of event, object, or participant elements of the virtual event based on input from a second user.
  19. The non-transitory computer-readable medium of claim 18, wherein the instructions further cause the at least one processor to: receive a user request to share the stored at least the portion of the modified virtual representation with another user; and transmit the stored at least the portion of the modified virtual representation and the audio commentary corresponding to the modified virtual representation to the other user based on the request.
  20. The non-transitory computer-readable medium of claim 18, wherein the instructions further cause the at least one processor to: receive a request to position a virtual camera within the virtual representation; and output the modified virtual representation of the live event based on the request.
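
Several of the claims above (1, 10, 17, and 19 in particular) revolve around recording a portion of the modified representation together with user audio commentary keyed to the same point of view, then sharing that bundle with another user. The sketch below illustrates just that pairing; again, the structures are hypothetical, not taken from the patent.

```python
from dataclasses import dataclass

# Invented illustration of the store-and-share limitations in claims 1, 10,
# 17, and 19: a clip of the modified representation is kept together with
# audio commentary tied to the same point of view.

@dataclass(frozen=True)
class StoredClip:
    viewpoint: tuple      # particular point of view within the 3D model
    frames: tuple         # portion of the modified virtual representation
    commentary: bytes     # user-supplied audio for that viewpoint

class ClipLibrary:
    def __init__(self) -> None:
        self.clips: list = []

    def record(self, viewpoint: tuple, frames: tuple, commentary: bytes) -> StoredClip:
        # Store video and audio as one unit so later playback replays the
        # commentary from the identical viewpoint.
        clip = StoredClip(viewpoint, frames, commentary)
        self.clips.append(clip)
        return clip

    def share(self, clip: StoredClip, recipient: str) -> None:
        # Transmit the stored clip, commentary included, to another user
        # on request.
        print(f"sending clip at viewpoint {clip.viewpoint} to {recipient}")

if __name__ == "__main__":
    library = ClipLibrary()
    clip = library.record((10.0, 2.5, -3.0), frames=(), commentary=b"...")
    library.share(clip, "friend@example.com")
```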