Stealth Startup Stitches Mobile Live Event Video

Keep an eye on San Diego, Calif.-based startup OIM Squared, whose Stitch Live iOS app is designed to drive consumers to concerts and sporting events, where they are encouraged to shoot and share video with friends on their iPhones and iPads.


Stitch CEO and co-founder Eric Williams told The Donohue Report Thursday that Stitch will “directly sponsor” 100 artists over the next year. Stitch, with offices in San Diego and Skaneateles, N.Y., will soon open an office in New York City, Williams added.

Stitch flipped the switch on its Stitch Live app last month. The app gives users “access to exclusive content before, during, and after the show that can’t be found anywhere else. Go backstage and be where only Stitch Live can take you,” Stitch writes in a description posted on Apple’s iTunes Store.

A patent issued recently to OIM Squared, Stitch’s privately held parent company, indicates Stitch is also looking to add e-commerce capabilities to its app. Its invention would rely in part on generating “hotspot packages” related to reference objects that it is able to identify in video programming.

One of the things that differentiates Stitch is that it has created a “single algorithm to sort out and identify objects,” Williams said Thursday.

Inventors named on the patent, titled “Interactive content generation,” include Williams, Bertin Cordova-Diba, Jose Munguia and Tyson Simon. It was filed on July 2 and is related to a patent application filed in July 2014.

Abstract: Generation of interactive content. In an embodiment, a representation of candidate object(s) in content of a digital media asset is received. For each of the candidate object(s), feature(s) of the candidate object are compared to corresponding feature(s) of a plurality of reference objects to identify reference object(s) that match the candidate object. For each of the matched candidate object(s), a hotspot package is generated. The hotspot package may comprise a visual overlay which comprises information associated with the reference object(s) matched to the respective candidate object.
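To make the abstract's terminology concrete, here is a minimal Python sketch of one plausible shape for a "hotspot package": a display position tied to a candidate object plus overlay information about the matched reference objects. The class and field names are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class ReferenceMatch:
    """Information about one reference object matched to a candidate object."""
    object_id: str
    title: str
    description: str
    price: Optional[float] = None   # present when the reference object is a product
    image_url: Optional[str] = None

@dataclass
class HotspotPackage:
    """Visual overlay plus a display position tied to a candidate object."""
    candidate_id: str
    display_position: Tuple[int, int]   # derived from the candidate object's position
    overlay_matches: List[ReferenceMatch] = field(default_factory=list)
```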


Claims: 

1. A computer-implemented method that comprises using at least one hardware processor to: receive a representation of one or more candidate objects in content of a digital media asset, wherein the digital media asset comprises a video, and wherein the representation of one or more candidate objects comprises a first frame of the video; for each of the one or more candidate objects, compare one or more features of the candidate object to one or more corresponding features of a plurality of reference objects to identify one or more reference objects that match the candidate object; for each of the one or more candidate objects that is matched to one or more reference objects, generate a hotspot package, wherein the hotspot package comprises a visual overlay and a display position that is based on a position of the candidate object, and wherein the visual overlay comprises information associated with the one or more reference objects matched to the candidate object; and, after generating a hotspot package for each of the one or more candidate objects that is matched to one or more reference objects, receive a second frame of the video that is subsequent in time to the first frame of the video, determine whether the second frame represents a different scene than the first frame, when the second frame is determined to represent a different scene than the first frame, for each of one or more candidate objects in the second frame, compare one or more features of the candidate object to one or more corresponding features of a plurality of reference objects to identify one or more reference objects that match the candidate object, and, for each of the one or more candidate objects in the second frame that is matched to one or more reference objects, generate a hotspot package, and, when the second frame is not determined to represent a different scene than the first frame, for each hotspot package generated for the one or more candidate objects that are matched to one or more reference objects, determine a change in a position of the candidate object, for which the hotspot package was generated, from a frame preceding the second frame to the position of the candidate object in the second frame, and update the display position of the hotspot package based on the determined change in position of the candidate object.
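Claim 1 boils down to a per-frame loop: when a scene change is detected, re-run object matching and build fresh hotspot packages; otherwise, track each candidate object and shift its hotspot's display position. The following rough Python sketch of that control flow reuses the hypothetical HotspotPackage shape above; detect_scene_change, match_objects, track_object, and build_package are stand-in helpers, not anything named in the patent.

```python
def process_frame(prev_frame, frame, hotspot_packages,
                  detect_scene_change, match_objects, track_object, build_package):
    """One iteration of the per-frame loop described in claim 1 (illustrative)."""
    if detect_scene_change(prev_frame, frame):
        # New scene: re-run matching against the reference objects and build
        # a fresh hotspot package for every candidate object that matches.
        return [build_package(candidate, matches)
                for candidate, matches in match_objects(frame)]
    # Same scene: keep the existing packages; shift each display position by
    # the tracked movement of its candidate object between the two frames.
    for package in hotspot_packages:
        dx, dy = track_object(package.candidate_id, prev_frame, frame)
        x, y = package.display_position
        package.display_position = (x + dx, y + dy)
    return hotspot_packages
```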

2. The method of claim 1, wherein the representation of one or more candidate objects are received in a visual query that is received from a network device via at least one network.

3. The method of claim 2, further comprising using the at least one hardware processor of the network device to generate the visual query, wherein generating the visual query comprises: displaying the content of the digital media asset; receiving a selection of a portion of the displayed content via a user operation; and generating the visual query based on the selected portion of the displayed content.
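Claim 3 has the client build the visual query from a region the user selects in the displayed content. A hedged sketch, assuming the selection arrives as a pixel rectangle over a NumPy frame and that the returned dict layout is purely illustrative:

```python
import numpy as np

def generate_visual_query(frame: np.ndarray, selection: tuple) -> dict:
    """Crop the user-selected region and package it as a visual query.

    `selection` is assumed to be (x, y, width, height) in pixel coordinates;
    the dict layout is an illustrative assumption, not the patent's format.
    """
    x, y, w, h = selection
    crop = frame[y:y + h, x:x + w].copy()
    return {"candidate_image": crop, "bounds": selection}
```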

4. The method of claim 1, wherein the representation of one or more candidate objects comprises, for each of the one or more candidate objects, an image of the candidate object.

5. The method of claim 4, wherein comparing one or more features of the candidate object to one or more corresponding features of a plurality of reference objects comprises deriving the one or more features of the candidate object from visual attributes in the image of the candidate object.

6. The method of claim 5, wherein deriving the one or more features of the candidate object from visual attributes in the image of the candidate object comprises detecting the candidate object in the image of the candidate object prior to deriving the one or more features of the candidate object.

7. The method of claim 6, wherein detecting the candidate object in the image of the candidate object comprises: normalizing the image of the candidate object; and determining a boundary that surrounds the candidate object within the image of the candidate object.

8. The method of claim 7, wherein normalizing the image of the candidate object comprises: removing image noise from the image of the candidate object; converting the image of the candidate object to a photometric invariant color space; and converting the image of the candidate object to one or both of a predefined size and predefined aspect ratio.
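Claim 8's normalization step (denoise, convert to a photometric-invariant color space, resize to a fixed size and aspect ratio) could look roughly like the OpenCV sketch below. HSV is used here only as a stand-in for a photometric-invariant color space; the patent does not say which space is used.

```python
import cv2

def normalize_image(image_bgr, size=(256, 256)):
    """Normalize a candidate-object image along the lines of claim 8."""
    denoised = cv2.fastNlMeansDenoisingColored(image_bgr, None, 10, 10, 7, 21)
    invariant = cv2.cvtColor(denoised, cv2.COLOR_BGR2HSV)  # stand-in color space
    return cv2.resize(invariant, size)  # fixed size and aspect ratio
```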

9. The method of claim 7, wherein determining a boundary that surrounds the candidate object within the image of the candidate object comprises: segmenting the image of the candidate object into regions; merging similar neighboring regions until no similar neighboring regions remain; and determining a boundary around one or more remaining regions as the boundary that surrounds the candidate object.

10. The method of claim 9, wherein merging similar neighboring regions comprises, for a pair of neighboring regions: calculating a variation between the pair of neighboring regions; merging the pair of neighboring regions when the variation is less than a threshold, and not merging the pair of neighboring regions when the variation is greater than the threshold.
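Claims 9 and 10 describe a bottom-up segmentation: split the image into regions, then repeatedly merge neighboring regions whose variation falls below a threshold until no similar neighbors remain. A toy Python sketch of the merge loop, with regions represented by mean colors and a deliberately simple variation measure (both assumptions):

```python
import numpy as np

def variation(region_a, region_b):
    """Illustrative variation measure: distance between mean region colors."""
    return float(np.linalg.norm(region_a["mean"] - region_b["mean"]))

def merge_similar_regions(regions, neighbors, threshold=15.0):
    """Merge neighboring regions until none fall below the variation threshold.

    `regions` maps region id -> {"mean": np.array, "pixels": int};
    `neighbors` is a set of (id_a, id_b) pairs.  Both are assumptions about
    how the segmentation output might be represented.
    """
    merged = True
    while merged:
        merged = False
        for a, b in sorted(neighbors):
            if a in regions and b in regions and variation(regions[a], regions[b]) < threshold:
                na, nb = regions[a]["pixels"], regions[b]["pixels"]
                regions[a]["mean"] = (regions[a]["mean"] * na + regions[b]["mean"] * nb) / (na + nb)
                regions[a]["pixels"] = na + nb
                del regions[b]
                # relabel b's neighbor links to point at the merged region a
                neighbors = {(x if x != b else a, y if y != b else a)
                             for x, y in neighbors if (x, y) != (a, b)}
                neighbors = {(x, y) for x, y in neighbors if x != y}
                merged = True
                break
    return regions
```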

11. The method of claim 1, wherein comparing one or more features of the candidate object to one or more corresponding features of a plurality of reference objects to identify one or more reference objects that match the candidate object comprises, for each of the plurality of reference objects: for each of the one or more features of the candidate object, comparing the feature of the candidate object to a corresponding feature of the reference object, and generating a feature score based on the comparison of the feature of the candidate object to the corresponding feature of the reference object; and generating a matching score based on each feature score.

12. The method of claim 11, wherein the one or more features of the candidate object comprise a plurality of features of the candidate object, wherein the one or more corresponding features of the reference object comprise a plurality of features of the reference object, and wherein comparing one or more features of the candidate object to one or more corresponding features of a plurality of reference objects to identify one or more reference objects that match the candidate object further comprises: determining a weight for each feature score; and weighting each feature score according to the weight determined for that feature score; wherein the matching score is based on each weighted feature score.

13. The method of claim 12, wherein the plurality of features comprise two or more of a color feature, a texture feature, a shape feature, and a keypoints feature.
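Claims 11 through 13 combine per-feature scores (color, texture, shape, keypoints) into a single weighted matching score. A minimal sketch, assuming each comparator returns a similarity in [0, 1] and that the weights are supplied by the caller; neither the comparators nor the weights are specified by the patent:

```python
def matching_score(candidate_features, reference_features, comparators, weights):
    """Weighted combination of per-feature similarity scores (claims 11-13).

    `comparators` maps a feature name ("color", "texture", "shape",
    "keypoints") to a function returning a similarity in [0, 1].
    """
    total, weight_sum = 0.0, 0.0
    for name, compare in comparators.items():
        score = compare(candidate_features[name], reference_features[name])
        total += weights[name] * score
        weight_sum += weights[name]
    return total / weight_sum if weight_sum else 0.0
```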

14. The method of claim 1, wherein, for each of the one or more candidate objects that is matched to one or more reference objects, the matched one or more reference objects comprise an object that is identified as the candidate object.

15. The method of claim 14, wherein the visual overlay of each hotspot package comprises one or more of an image, title, description, and price associated with the object that is identified as the candidate object.

16. The method of claim 1, wherein, for each of the one or more candidate objects that is matched to one or more reference objects, the matched one or more reference objects comprise an object that is visually similar to the candidate object.

17. The method of claim 16, wherein the visual overlay of each hotspot package comprises a representation of each object, from the matched one or more reference objects, that is visually similar to the candidate object.

18. The method of claim 1, wherein generating a hotspot package for each of the one or more candidate objects that is matched to one or more reference objects comprises, for each of the one or more reference objects matched to the candidate object: retrieving information associated with the reference object using an identifier of the reference object; and incorporating the information into the hotspot package.

19. The method of claim 18, wherein each of the plurality of reference objects represents a product, and wherein the information associated with the one or more reference objects comprises one or more of an image, a title, a description, and a price.
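Claims 18 and 19 look up information about each matched reference object by its identifier and fold it into the hotspot package. A sketch, with a simple in-memory dict standing in for whatever product catalog the system actually queries:

```python
def build_hotspot_package(display_position, matched_ids, catalog):
    """Retrieve product info for each matched reference object (claims 18-19).

    `catalog` is a hypothetical dict of reference-object id -> product record
    containing image, title, description, and price.
    """
    overlay = []
    for object_id in matched_ids:
        record = catalog.get(object_id, {})
        overlay.append({
            "image": record.get("image"),
            "title": record.get("title"),
            "description": record.get("description"),
            "price": record.get("price"),
        })
    return {"display_position": display_position, "overlay": overlay}
```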

20. The method of claim 1, further comprising using the at least one hardware processor to generate a navigation hotspot package, wherein the navigation hotspot package comprises a visual overlay, and wherein the visual overlay comprises one or more inputs for one or both of searching and selecting each of the hotspot packages for the one or more candidate objects that were matched to one or more reference objects.

21. The method of claim 1, further comprising using the at least one hardware processor to embed each hotspot package with the digital media asset, wherein embedding each hotspot package into the digital media asset comprises: generating an asset template; embedding a viewer for the digital media asset into the asset template; and, for each hotspot package, generating a hotspot package template, and embedding the hotspot package template into the asset template.

22. The method of claim 21, wherein the asset template and each hotspot package template are generated in a markup language.
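Claims 21 and 22 embed the hotspot packages by generating an asset template in a markup language, embedding a viewer for the digital media asset, and adding one template per hotspot package. A hedged sketch that emits plain HTML (the patent only says "a markup language", so HTML and the element layout are assumptions):

```python
def render_asset_template(video_url, hotspot_packages):
    """Generate a markup asset template with an embedded viewer and one
    block per hotspot package (claims 21-22)."""
    hotspot_blocks = "\n".join(
        '  <div class="hotspot" style="left:{x}px; top:{y}px">{title}</div>'.format(
            x=pkg["display_position"][0],
            y=pkg["display_position"][1],
            title=pkg["overlay"][0]["title"] if pkg["overlay"] else "",
        )
        for pkg in hotspot_packages
    )
    return (
        '<div class="asset-template">\n'
        f'  <video src="{video_url}" controls></video>\n'
        f"{hotspot_blocks}\n"
        "</div>"
    )
```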

23. The method of claim 1, wherein the visual overlay of each hotspot package for the one or more candidate objects that were matched to one or more reference objects comprises one or more inputs for initiating a purchase for at least one of the matched one or more reference objects.

24. The method of claim 1, further comprising using the at least one hardware processor to, for each of the one or more candidate objects that is matched to one or more reference objects: determine a position of the candidate object in the content of the digital media asset; and generate a hotspot, wherein the hotspot comprises a visual indication to be overlaid at a hotspot position in the content of the digital media asset corresponding to the determined position of the candidate object, and wherein the hotspot is associated with the hotspot package generated for the candidate object.

25. The method of claim 24, wherein the visual indication of each hotspot is selectable via a user operation so as to toggle the associated hotspot package between a visible and invisible state.

26. The method of claim 1, wherein comparing one or more features of a candidate object to one or more corresponding features of a plurality of reference objects is performed according to a first feature-matching algorithm defined by a first feature-matching software module, and wherein the method further comprises using the at least one hardware processor to: receive a second feature-matching software module, defining a second feature-matching algorithm, via an interface; and, subsequently, compare one or more features of a candidate object to one or more corresponding features of a plurality of reference objects according to the second feature-matching algorithm defined by the second feature-matching software module, instead of the first feature-matching algorithm defined by the first feature-matching software module.

27. The method of claim 1, wherein determining whether the second frame represents a different scene than the first frame is performed according to a first scene-change-detection algorithm defined by a first scene-change-detection software module, wherein determining the change in position of the candidate object is performed according to a first object-tracking algorithm defined by a first object-tracking software module, and wherein the method further comprises using the at least one hardware processor to: receive a second scene-change-detection software module, defining a second scene-change-detection algorithm, via an interface, and, subsequently, determine whether a frame represents a different scene than a preceding frame according to the second scene-change-detection algorithm defined by the scene-change-detection software module, instead of the first scene-change-detection algorithm defined by the first scene-change-detection software module; and receive a second object-tracking software module, defining a second object-tracking algorithm, via an interface, and, subsequently, determine the change in position of the candidate object according to the second object-tracking algorithm defined by the second object-tracking software module, instead of the first object-tracking algorithm defined by the first object-tracking software module.
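Claims 26 and 27 allow the feature-matching, scene-change-detection, and object-tracking algorithms to be swapped at runtime by loading replacement software modules through an interface. One common way to sketch that in Python is a registry of callables that later modules can overwrite; the registry and slot names are assumptions, not the patent's interface:

```python
class AlgorithmRegistry:
    """Holds the currently active algorithm modules (claims 26-27).

    Replacing an entry swaps the algorithm used from that point on,
    without touching the surrounding pipeline.
    """
    def __init__(self, feature_matcher, scene_change_detector, object_tracker):
        self.feature_matcher = feature_matcher
        self.scene_change_detector = scene_change_detector
        self.object_tracker = object_tracker

    def replace(self, name, module):
        """Install a replacement module received via the interface."""
        if name not in ("feature_matcher", "scene_change_detector", "object_tracker"):
            raise ValueError(f"unknown algorithm slot: {name}")
        setattr(self, name, module)

# Example: swap in a second feature-matching algorithm at runtime.
# registry.replace("feature_matcher", second_feature_matcher)
```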

28. The method of claim 1, further comprising using the at least one hardware processor to embed each hotspot package with the digital media asset.

29. A system comprising: at least one hardware processor; and one or more software modules that are configured to, when executed by the at least one hardware processor, receive a representation of one or more candidate objects in content of a digital media asset, wherein the digital media asset comprises a video, and wherein the representation of one or more candidate objects comprises a first frame of the video, for each of the one or more candidate objects, compare one or more features of the candidate object to one or more corresponding features of a plurality of reference objects to identify one or more reference objects that match the candidate object, for each of the one or more candidate objects that is matched to one or more reference objects, generate a hotspot package, wherein the hotspot package comprises a visual overlay and a display position that is based on a position of the candidate object, and wherein the visual overlay comprises information associated with the one or more reference objects matched to the candidate object, and, after generating a hotspot package for each of the one or more candidate objects that is matched to one or more reference objects, receive a second frame of the video that is subsequent in time to the first frame of the video, determine whether the second frame represents a different scene than the first frame, when the second frame is determined to represent a different scene than the first frame, for each of one or more candidate objects in the second frame, compare one or more features of the candidate object to one or more corresponding features of a plurality of reference objects to identify one or more reference objects that match the candidate object, and, for each of the one or more candidate objects in the second frame that is matched to one or more reference objects, generate a hotspot package, and, when the second frame is not determined to represent a different scene than the first frame, for each hotspot package generated for the one or more candidate objects that are matched to one or more reference objects, determine a change in a position of the candidate object, for which the hotspot package was generated, from a frame preceding the second frame to the position of the candidate object in the second frame, and update the display position of the hotspot package based on the determined change in position of the candidate object.

30. A non-transitory computer-readable medium having instructions stored thereon, wherein the instructions, when executed by a processor, cause the processor to: receive a representation of one or more candidate objects in content of a digital media asset, wherein the digital media asset comprises a video, and wherein the representation of one or more candidate objects comprises a first frame of the video; for each of the one or more candidate objects, compare one or more features of the candidate object to one or more corresponding features of a plurality of reference objects to identify one or more reference objects that match the candidate object; for each of the one or more candidate objects that is matched to one or more reference objects, generate a hotspot package, wherein the hotspot package comprises a visual overlay and a display position that is based on a position of the candidate object, and wherein the visual overlay comprises information associated with the one or more reference objects matched to the candidate object; and, after generating a hotspot package for each of the one or more candidate objects that is matched to one or more reference objects, receive a second frame of the video that is subsequent in time to the first frame of the video, determine whether the second frame represents a different scene than the first frame, when the second frame is determined to represent a different scene than the first frame, for each of one or more candidate objects in the second frame, compare one or more features of the candidate object to one or more corresponding features of a plurality of reference objects to identify one or more reference objects that match the candidate object, and, for each of the one or more candidate objects in the second frame that is matched to one or more reference objects, generate a hotspot package, and, when the second frame is not determined to represent a different scene than the first frame, for each hotspot package generated for the one or more candidate objects that are matched to one or more reference objects, determine a change in a position of the candidate object, for which the hotspot package was generated, from a frame preceding the second frame to the position of the candidate object in the second frame, and update the display position of the hotspot package based on the determined change in position of the candidate object.
