How Crowd Sourcing Video Could Transform TV

With more consumers beginning to purchase wearable cameras, it may not be long before TV viewers are able to experience events ranging from football games to weddings from any point of view.

That’s one of the visions of inventor Chad Huston, who has been quietly building a patent portfolio that leverages advances in augmented reality, virtual reality and GPS technology.

Huston, who has previously developed golf applications that use GPS, has expanded into location-based content. He told The Donohue Report that he believes the concept could even appeal to major sports leagues such as the NFL.

His latest invention, “System, Method and Device Including a Depth Camera for Creating A Location Based Experience,” appeared in a U.S. patent application that became public last week.

While some sports leagues and rights holders may resist allowing consumers at stadium events to capture and broadcast video with mobile devices or goggles, Huston says content owners should embrace the idea.

“I guarantee, the NFL, if they get a cut of it, is going to want to make money,” said Huston, co-founder of Austin, Texas-based startup SoLoMoAr (Social / Local / Mobile / Augmented reality). Asked about potential rights issues, Huston posed the question: would a league such as the NFL prefer to “sell one seat on the 50-yard line, or 30,000 [virtual seats] for $10 apiece?”

Huston predicts that augmented reality and virtual reality programming will take off within the next few years.

“I think 2016 will just be eye opening for the average consumer with all of the AR and VR applications that will come out,” Huston told The Donohue Report.

Huston and SoLoMoAr co-founder Chris Coleman are named as inventors on the patent application, which was published on Oct. 8. 

Abstract: A system, method, and device for creating an environment and sharing an experience using a plurality of mobile devices having a conventional camera and a depth camera employed near a point of interest. In one form, random crowdsourced images, depth information, and associated metadata are captured near said point of interest. Preferably, the images include depth camera information. A wireless network communicates with the mobile devices to accept the images, depth information, and metadata to build and store a 3D model of the point of interest. Users connect to this experience platform to view the 3D model from a user selected location and orientation and to participate in experiences with, for example, a social network.
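
In plainer terms, the abstract describes a pipeline: crowdsourced RGB and depth captures, each tagged with the device’s location and camera orientation, are uploaded over a wireless network and fused server-side into a 3D model that remote users can explore. Below is a minimal Python sketch of that data flow; every class, field, and function name is a hypothetical illustration, not taken from the patent.

```python
from dataclasses import dataclass

# Hypothetical capture record combining the conventional camera frame,
# the depth camera frame, and the metadata the claims call out
# (device location plus camera orientation).
@dataclass
class CaptureRecord:
    rgb_image: bytes      # conventional camera frame
    depth_map: bytes      # depth camera frame (e.g., time of flight)
    latitude: float       # device location from GPS
    longitude: float
    heading_deg: float    # camera orientation
    timestamp: float

def ingest(records: list[CaptureRecord]) -> dict:
    """Server-side sketch: group uncoordinated crowdsourced captures
    by approximate location so they can be fused into a 3D model."""
    model: dict = {}
    for rec in records:
        # Bucket captures to roughly 10 m cells by rounding coordinates.
        key = (round(rec.latitude, 4), round(rec.longitude, 4))
        model.setdefault(key, []).append(rec)
    # A real system would register the depth maps into a shared
    # coordinate frame and fuse them into a point cloud or mesh,
    # which the experience platform would then serve to viewers.
    return model
```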

Claims:

1. A system for creating and sharing an environment comprising: a network for receiving images and metadata from a plurality of devices each having a depth camera employed near a point of interest to capture image data and associated metadata near said point of interest, wherein the associated metadata for said image data includes a location of the device and an orientation of the camera; an image processing server connected to the network for receiving said image data and metadata, wherein the server processes the image data and metadata to build a 3D model of one or more targets proximate the point of interest based at least in part on said image data; an experience platform connected to the image processing server for storing the 3D model of one or more targets, whereby users can connect to the experience platform to view the point of interest from a user selected location and orientation, and view the 3D model of one or more targets.

2. The system of claim 1 wherein the network includes wireless access and some of the devices are mobile, wherein the random images are crowdsourced from users of the mobile devices.

3. The system of claim 1, wherein the experience platform includes a plurality of images associated with locations near the point of interest and a user connected to the experience platform can view images associated with a user selected location and orientation.

4. The system of claim 1, wherein the processing server is operable to stitch a number of images together to form a panorama.

5. The system of claim 4, wherein a user connected to the experience platform can view panoramas associated with a user selected location and orientation.

6. The system of claim 2, wherein said images include advertising based on context.

7. A method for creating an environment for use with a location based experience, comprising: capturing random image data and associated metadata near a point of interest with a plurality of mobile devices accompanying a number of crowdsource contributors, each mobile device having a depth camera, wherein the associated metadata for said image data includes a location of the mobile device and an orientation of the camera; communicating said random image data and metadata from said mobile devices to a wireless network; receiving said random image data and metadata at an image processing server connected to the network; and processing the image data to determine the location of one or more targets in the image data and to build a 3D model of one or more targets near the point of interest, including creating one or more panoramas associated with a number of locations near the point of interest.

8. The method of claim 7, wherein the depth camera is a time of flight camera, a structured light sensor, or a plenoptic camera.

9. The method of claim 7, wherein said processing step includes using an existing 3D model of a target and enhancing said existing 3D model using said random captured images and metadata.

11. The method of claim 7, wherein said random images are crowdsourced from a plurality of contributors without coordination among said users.

12. A portable device for assisting in the use of a 3D model of a point of interest, comprising: a GPS receiver for determining the position of the device near the point of interest; a conventional optical camera for capturing an image of the point of interest; a depth camera for capturing depth data associated with said image; memory for storing metadata associated with said image or depth data or both; a communication link to an experience server, operable to load information relevant to the point of interest to the device, based at least in part on the position of the device and metadata; and a display operable to view a perspective view of said point of interest from said device position proximate said point of interest, said display operable to show at least some of said load information as an artificial reality (“AR”) message.

13. The device of claim 12, wherein said portable device comprises glasses that include a transparent display operable to view at least a portion of said point of interest and to view said artificial reality message.

14. A method of sharing content in a location based experience, comprising: capturing a plurality of random images and associated metadata near a point of interest, including depth information; processing the captured images, depth information, and associated metadata to build a 3D model of one or more targets near said point of interest; storing the 3D model of one or more targets in an experience platform connected to a network; accessing the experience platform using the network to access the 3D model of one or more targets; selecting a location and orientation near said point of interest; and viewing the 3D model of one or more targets using the selected location and orientation.

15. The method of claim 14, including viewing the 3D model of one or more targets and an advertisement based on context.

16. The method of claim 14, wherein said random images are captured by crowdsource from users equipped with mobile devices without coordination of targets or time of acquisition among users.

17. The method of claim 16, wherein at least one of the mobile devices includes a depth camera comprising a time of flight camera, a structured light sensor, or a plenoptic camera.

18. The method of claim 14, wherein said processing step includes receiving an existing 3D model of a target and enhancing said preexisting 3D model using said captured random images and metadata.

19. The method of claim 12, wherein said viewing step includes wearing goggles and viewing a point of interest with at least some portion of the point of interest enhanced with artificial reality.

20. The method of claim 12, wherein said viewing step includes a user remote from said selected location.
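
Claims 4 and 5 cover stitching the crowdsourced images into panoramas tied to locations near the point of interest. The patent does not specify an implementation, but a minimal sketch of that stitching step using OpenCV’s high-level Stitcher (file names here are placeholders) might look like this:

```python
import cv2

# Placeholder inputs: crowdsourced frames captured near one location.
frames = [cv2.imread(p) for p in ("view1.jpg", "view2.jpg", "view3.jpg")]

# OpenCV's high-level stitcher estimates homographies, warps the
# frames, and blends them into a single panorama.
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    # In the claimed system, the panorama would be stored on the
    # experience platform, indexed by capture location, so a user
    # can request the view for a selected position and orientation.
    cv2.imwrite("panorama.jpg", panorama)
else:
    print(f"Stitching failed with status {status}")
```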