AT&T Jolts Video Compression

Offering a glimpse at how it may be able to distribute dozens of live TV networks to mobile devices and LTE-connected TVs, AT&T has come up with a new approach to video compression that involves synthesizing portions of video frames on the receiving device.

James Pratt, lead member of the big data technical staff at AT&T Labs, is named as lead inventor on the patent application, "Video Share Model-Based Video Fixing," which was published on Thursday. By building models that can synthesize content on user devices, AT&T says it could reduce congestion from both video programming and video phone calls.

Abstract: Systems and methods for model-based video fixing are disclosed. A video can be retrieved and analyzed to determine if any portion of the video can be represented by a model. If a portion that can be modeled is identified, a model that approximates the portion can be specified, the portion can be removed from the video, and instructions for modeling the video can be formatted. The video and the instructions can be transmitted to a receiving device, which can synthesize the model and the received video to generate a model-based video. Systems for providing the model-based video fixing are also disclosed.
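In rough terms, the flow the abstract describes could look like the sketch below. This is only an illustration of the general idea, not AT&T's implementation; every name (ModelSpec, prepare_payload, MODEL_LIBRARY, and so on) is hypothetical, and the toy "model" simply repaints a uniform region rather than synthesizing real content.

```python
# A minimal, hypothetical sketch of the flow the abstract describes: the sender
# finds a portion of a frame it can represent with a model, strips that portion
# out, and ships the remainder plus synthesis instructions; the receiver then
# rebuilds the frame from both. Every name here is illustrative, not from the
# patent, and the toy "model" simply repaints a uniform region.
from dataclasses import dataclass, field


@dataclass
class ModelSpec:
    """Instruction telling the receiver what to synthesize and where."""
    model_id: str    # key into a model library both endpoints hold
    region: tuple    # (row, col, height, width) of the portion that was removed
    params: dict = field(default_factory=dict)


@dataclass
class ModelBasedPayload:
    remainder: list        # frame with the modeled portion blanked out
    instructions: list     # ModelSpec entries for the receiver to act on


# Toy "model library": each entry synthesizes pixel values from parameters.
MODEL_LIBRARY = {"flat_background": lambda params: params["value"]}


def prepare_payload(frame):
    """Sender side: recognize a modelable portion, remove it, format instructions."""
    height, width = len(frame), len(frame[0])
    top_values = {v for row in frame[: height // 2] for v in row}
    instructions = []
    if len(top_values) == 1:                      # uniform top half => modelable
        value = top_values.pop()
        for r in range(height // 2):              # remove the modeled portion
            for c in range(width):
                frame[r][c] = 0
        instructions.append(
            ModelSpec("flat_background", (0, 0, height // 2, width), {"value": value})
        )
    return ModelBasedPayload(remainder=frame, instructions=instructions)


def render_payload(payload):
    """Receiver side: synthesize the model and merge it with the remainder."""
    frame = payload.remainder
    for spec in payload.instructions:
        synthesized = MODEL_LIBRARY[spec.model_id](spec.params)
        r0, c0, h, w = spec.region
        for r in range(r0, r0 + h):
            for c in range(c0, c0 + w):
                frame[r][c] = synthesized
    return frame


if __name__ == "__main__":
    # 4x4 frame: a uniform "sky" on top, arbitrary detail on the bottom.
    original = [[7, 7, 7, 7], [7, 7, 7, 7], [1, 2, 3, 4], [5, 6, 7, 8]]
    payload = prepare_payload([row[:] for row in original])
    assert render_payload(payload) == original
```

In the patent's vocabulary, prepare_payload stands in for the sending side (a handset or a model-based video system) and render_payload for the receiving device's synthesis step; a real system would additionally run the remainder through a conventional codec, which the sketch omits.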

Patent Application

Claims:

  1. A method comprising: receiving, at a processor executing a video application, video data associated with a smart phone that is in communication with a wireless network; recognizing, by the processor, a portion of the video data that can be modeled; determining, by the processor, a model that can be used to represent the portion of the video data; determining, by the processor, a location of the smart phone; verifying, by the processor, the model based upon the location of the smart phone; removing, by the processor, the portion of the video data that can be modeled from the video data; and transmitting, by the processor, a remainder of the video data to a communication device in communication with the wireless network.
  2. The method of claim 1, further comprising: compressing, by the processor, the remainder of the video data; and formatting, by the processor, instructions that are used by the communication device to synthesize the model and the remainder of the video data.
  3. The method of claim 1, wherein the remainder of the video data is transmitted with instructions that identify the portion and the model.
  4. The method of claim 1, wherein the smart phone receives the video data during a video sharing session.
  5. The method of claim 4, wherein the video sharing session comprises a two-way video sharing session.
  6. The method of claim 5, wherein the wireless network comprises a cellular network.
  7. The method of claim 1, wherein the processor is located at the smart phone.
  8. The method of claim 1, wherein the processor is located at a model-based video system.
  9. The method of claim 1, wherein the smart phone captures the video data, and wherein the location of the smart phone comprises a location at which the video data is captured.
  10. A device comprising: a processor; and a memory that stores instructions that, when executed by the processor, cause the processor to perform operations comprising receiving video data, recognizing a portion of the video data that can be modeled, determining a model that can be used to represent the portion of the video data, determining a location at which the video data is captured, verifying the model based upon the location at which the video data is captured, removing the portion of the video data that can be modeled from the video data, and transmitting a remainder of the video data to a communication device in communication with a wireless network.
  11. The device of claim 10, wherein the device comprises a smart phone, and wherein the location comprises a location at which the smart phone is located when the video data is captured.
  12. The device of claim 10, wherein the device comprises a model-based video system.
  13. The device of claim 10, wherein the video data is captured during a video sharing session.
  14. The device of claim 13, wherein the video sharing session comprises a two-way video sharing session, and wherein the wireless network comprises a cellular network.
  15. The device of claim 10, wherein verifying the model comprises determining if the model is consistent with the location.
  16. A method comprising: receiving, at a processor, video data associated with a first communication device that is in communication with a wireless network; recognizing, by the processor, a portion of the video data that can be modeled; determining, by the processor, a model that can be used to represent the portion of the video data; determining, by the processor, a location of the first communication device; removing, by the processor, the portion of the video data that can be modeled from the video data; compressing, by the processor, a remainder of the video data; formatting, by the processor, instructions that are used by a second communication device to synthesize the model and the remainder of the video data; and transmitting, by the processor, the remainder of the video data to the second communication device via the wireless network.
  17. The method of claim 16, wherein the wireless network comprises a cellular network.
  18. The method of claim 17, wherein the first communication device comprises a smart phone, wherein the smart phone captures the video data, and wherein the location of the first communication device comprises a location at which the smart phone captures the video data.
  19. The method of claim 16, wherein the first communication device receives the video data and transmits the remainder of the video data during a two-way video sharing session with the second communication device.
  20. The method of claim 16, wherein the first communication device stores the model, and wherein the second communication device stores the model.
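One distinctive element of claims 1, 9, 15, and 18 is that the model is verified against the location at which the video was captured before it is used. The claims do not say how that check works; a simple geofence test is one plausible reading, sketched below with entirely hypothetical names, coordinates, and thresholds.

```python
# Hypothetical illustration of "verifying the model based upon the location"
# (claims 1 and 15). A candidate model is tagged with the place it depicts, and
# it is only accepted if the capture location falls inside that geofence. The
# scheme, names, and numbers are assumptions, not details from the patent.
from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt


@dataclass
class CandidateModel:
    model_id: str
    latitude: float        # where the modeled scene (e.g. a landmark) sits
    longitude: float
    radius_km: float       # how far from that point the model is still valid


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))


def model_consistent_with_location(model, capture_lat, capture_lon):
    """Roughly claim 15: accept the model only if it fits the capture location."""
    return haversine_km(model.latitude, model.longitude,
                        capture_lat, capture_lon) <= model.radius_km


if __name__ == "__main__":
    skyline = CandidateModel("skyline/atlanta", 33.749, -84.388, radius_km=25.0)
    print(model_consistent_with_location(skyline, 33.76, -84.39))   # True: shot nearby
    print(model_consistent_with_location(skyline, 40.71, -74.00))   # False: shot in NYC
```

Read this way, a model of, say, a city skyline would only be substituted into footage actually shot near that skyline, which keeps the synthesized output plausible and guards against applying the wrong model.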