Microsoft Envisions HoloLens Dating Use Case

If Microsoft succeeds in commercializing the invention detailed in a patent published on Tuesday, a consumer wearing a pair of HoloLens augmented reality glasses may be able to walk into a bar and immediately identify the patrons whose marital status is “single.”

Former Microsoft director Kevin Geisner is named as lead inventor on the patent, titled “Providing contextual personal information by a mixed reality device.”

Abstract: The technology provides contextual personal information by a mixed reality display device system being worn by a user. A user inputs person selection criteria, and the display system sends a request for data identifying at least one person in a location of the user who satisfies the person selection criteria to a cloud-based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view; an identifier and a position indicator of the person in the location are output if not. Directional sensors on the display device may also be used for determining a position of the person. Cloud-based executing software can identify and track the positions of people based on image and non-image data from display devices in the location.
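The abstract describes a simple request/response loop: the headset sends the user's person selection criteria and location to a cloud service with access to user profiles, receives identification data sets back, and renders either an in-view tag or a direction hint. Here is a minimal Python sketch of that loop; every name in it (PersonalIdSet, find_matches, the sample profiles) is invented for illustration, since the patent discloses no API.

```python
from dataclasses import dataclass

@dataclass
class PersonalIdSet:
    person_id: str       # identifier unique to the location
    in_field_of_view: bool
    position: tuple      # position indicator within the location

def find_matches(location, criteria):
    """Stand-in for the cloud-based service with access to user profile data."""
    profiles = [
        {"location": "bar-42", "marital_status": "single",
         "id_set": PersonalIdSet("patron-17", True, (2.0, 1.5))},
        {"location": "bar-42", "marital_status": "married",
         "id_set": PersonalIdSet("patron-23", True, (4.0, 0.5))},
    ]
    return [p["id_set"] for p in profiles
            if p["location"] == location
            and all(p.get(k) == v for k, v in criteria.items())]

# Device-side loop from the abstract: tag people in view, point to the rest.
for match in find_matches("bar-42", {"marital_status": "single"}):
    if match.in_field_of_view:
        print(f"overlay identifier on {match.person_id} at {match.position}")
    else:
        print(f"direction hint toward {match.person_id} at {match.position}")
```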


Claims

1. One or more processor-readable storage devices having instructions encoded thereon for causing one or more software controlled processors to execute a method for providing location-relevant contextual personal information by a mixed reality display device system, the method comprising:
receiving and storing person selection criteria having been provided by a user wearing a mixed reality display device of the system, the person selection criteria being for identifying another person who satisfies the person selection criteria;
sending a request including a location of the user and the person selection criteria to a personal information service engine executing on one or more remote computer systems for a personal identification data set for each person sharing the location and satisfying the person selection criteria, the location being one shared by the user and one or more other persons such that face to face meetings can occur at the location between the user and the one or more other persons, the location including a scene in a field of view of the mixed reality display device;
receiving at least one personal identification data set from the personal information service engine for a person sharing the location;
determining whether the person associated with the at least one personal identification data set is in the field of view of the mixed reality display device;
responsive to the person associated with the at least one personal identification data set not being currently within the field of view of the mixed reality display device, determining a position of the person within the location, and outputting data which indicates the out-of-field-of-view position of the person within the location; and
responsive to the person associated with the at least one personal identification data set being in the field of view, outputting data which identifies the in-field-of-view position and identity of the person in the field of view.
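The last two limitations of claim 1 branch on whether the matched person is inside the display's field of view. One way to read "outputting data which indicates the out-of-field-of-view position" is as a relative bearing from the user's gaze; the sketch below assumes that reading, plus a hypothetical 60-degree horizontal field of view that the patent does not specify.

```python
import math

FOV_DEGREES = 60.0  # assumed horizontal field of view; not given in the patent

def relative_bearing(user_xy, heading_deg, person_xy):
    """Angle from the user's gaze to the person, in degrees, in (-180, 180]."""
    dx = person_xy[0] - user_xy[0]
    dy = person_xy[1] - user_xy[1]
    raw = math.degrees(math.atan2(dy, dx)) - heading_deg
    return (raw + 180.0) % 360.0 - 180.0

def output_for(user_xy, heading_deg, person_xy, person_id):
    angle = relative_bearing(user_xy, heading_deg, person_xy)
    if abs(angle) <= FOV_DEGREES / 2:        # in-field-of-view limitation
        return f"in view: tag {person_id} at {angle:+.0f} deg"
    side = "left" if angle > 0 else "right"  # positive = counterclockwise = left
    return f"out of view: {person_id} is {abs(angle):.0f} deg to your {side}"

print(output_for((0, 0), 90.0, (1, 5), "patron-17"))  # in view, slightly right
print(output_for((0, 0), 90.0, (5, 0), "patron-23"))  # 90 deg to the right
```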

2. The one or more processor-readable storage devices of claim 1 further comprising: determining a ready-for-face-to-face personal interaction status for an in-field-of-view person associated with the at least one personal identification data set based on audio and image data captured by the mixed reality display device system; and outputting a personal interaction status stored for the person.

3. The one or more processor-readable storage devices of claim 2 wherein the ready-for-face-to-face personal interaction status identifies one or more of the following:
none;
that interaction status is that associated with a greeting;
that interaction status is that associated with a conversation;
that interaction status is that associated with a conversation related to personal information selection criteria;
that interaction status is that associated with a conversation related to user profile data; and
that interaction status is that associated with a gesture acknowledgement.

4. The one or more processor-readable storage devices of claim 2 wherein the determining of the ready-for-face-to-face personal interaction status for the in-field-of-view person associated with the at least one personal identification data set based on audio and image data captured by the mixed reality display device system further comprises:
identifying one or more interactions from the in-field-of-view person within a time period of the person coming into the field of view of the mixed reality display device;
identifying one or more user physical actions within the time period of the one or more interactions while the in-field-of-view person is an object of interest;
identifying one or more personal interaction status candidates which may apply based on the one or more interactions and the one or more user physical actions within the time period; and
selecting as a personal interaction status the candidate indicating the highest level of ready-for-face-to-face personal interaction.
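Claim 4 resolves competing candidates by picking the one "indicating the highest level of ready-for-face-to-face personal interaction." Claim 3 lists the possible statuses but not their ranking, so the numeric ordering in this sketch is an assumption.

```python
STATUS_LEVEL = {                       # assumed ranking; the claims list the
    "none": 0,                         # statuses (claim 3) but not their order
    "gesture_acknowledgement": 1,
    "greeting": 2,
    "conversation": 3,
    "conversation_re_user_profile": 4,
    "conversation_re_selection_criteria": 5,
}

def select_status(candidates):
    """Claim 4: pick the candidate with the highest readiness level."""
    return max(candidates, key=STATUS_LEVEL.__getitem__, default="none")

print(select_status(["greeting", "gesture_acknowledgement", "conversation"]))
# -> conversation
```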

5. The one or more processor-readable storage devices of claim 1 wherein outputting data which identifies the in-field-of-view position and identity of the person in the field of view further comprises: displaying an identification providing visual indicator registered to the in-field-of-view position of the person in a display of the mixed reality display device.

6. The one or more processor-readable storage devices of claim 1 wherein outputting data which identifies the in-field-of-view position and identity of the person in the field of view further comprises: playing audio including a personal information item of the identified in-field-of-view person which satisfies the person selection criteria through at least one earphone of the mixed reality display device system.

7. The one or more processor-readable storage devices of claim 1 further comprising: responsive to the person associated with the at least one personal identification data set not being in the field of view, displaying a personal identifier and a position within the location of the person associated with the at least one personal identification data set.

8. A method for providing location-relevant contextual personal information to a head mounted, mixed reality, display device of a mixed reality system, the method comprising:
receiving a request indicating a location of a user of a head mounted, mixed reality display device of the system and indicating person selection criteria for at least one personal identification data set for each of other persons sharing the location with the user and satisfying the person selection criteria, the shared location being such that face to face meetings can occur at the location between the user and one or more of the other persons at the location, the location including a scene in a field of view of the mixed reality display device of the user;
determining whether there is a person sharing the location and satisfying the indicated person selection criteria based on accessible user profile data;
responsive to there being a person sharing the location and satisfying the indicated person selection criteria, providing at least one personal identification data set for the person;
determining whether the person associated with the provided at least one personal identification data set is currently within the field of view of the head mounted, mixed reality, display device of the user;
responsive to the person not being currently within the field of view of the display device, determining a position of the person within the location, and sending an indication of the position in the at least one personal identification data set; and
responsive to the person being currently within the field of view of the display device of the user, sending the at least one personal identification data set including a positional indicator and a personal identifier for the person in the field of view.

9. The method of claim 8 wherein determining whether there is a person sharing the location and satisfying the person selection criteria based on accessible user profile data further comprises: searching user profile data of one or more persons having a relationship link in the user profile data of the user for data which satisfies the person selection criteria and for which the user has been permitted access.
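Claim 9 narrows the server-side search to people with a relationship link to the requesting user, and to profile data "for which the user has been permitted access." Below is a hedged sketch of that double filter; the profile schema (links, per-user permission sets) is invented for illustration.

```python
def permitted_matches(user_id, profiles, criteria):
    """Yield linked persons whose user-readable fields satisfy the criteria."""
    for other_id in profiles[user_id]["links"]:        # relationship links only
        other = profiles[other_id]
        readable = other["permissions"].get(user_id, set())
        if all(field in readable and other["fields"].get(field) == wanted
               for field, wanted in criteria.items()): # permitted AND matching
            yield other_id

profiles = {
    "alice": {"links": ["bob"], "permissions": {}, "fields": {}},
    "bob":   {"links": [], "permissions": {"alice": {"marital_status"}},
              "fields": {"marital_status": "single"}},
}
print(list(permitted_matches("alice", profiles, {"marital_status": "single"})))
# -> ['bob']
```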

10. The method of claim 8 wherein the personal identifier is an object identifier unique for the location.

11. The method of claim 8 wherein determining whether the person associated with the at least one personal identification data set is currently within a field of view of the user head mounted, mixed reality, display device system further comprises:
receiving image data of one or more users in the location;
assigning a unique object identifier to each of the one or more users;
tracking a position of each of the one or more users within the location based on an image mapping of the location; and
determining whether an object associated with the unique object identifier assigned to the person is within the field of view of the user display device system based on correlating image data received from at least one field of view camera of the user display device system and the position of the person in the image mapping of the location.
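Claim 11 has the cloud service assign each person in the location a unique object identifier and track positions against an image mapping; field-of-view membership then reduces to correlating those identifiers with the ones recognized in the viewer's camera frames. A minimal registry along those lines (the recognition step itself is elided, and all names are hypothetical):

```python
import itertools

class LocationTracker:
    """Cloud-side registry of per-location object ids and tracked positions."""
    def __init__(self):
        self._ids = itertools.count(1)
        self.positions = {}        # object id -> (x, y) in the image mapping

    def register(self):
        return f"obj-{next(self._ids)}"   # unique object identifier (claim 11)

    def update(self, object_id, xy):
        self.positions[object_id] = xy    # tracked against the image mapping

    def in_view(self, object_id, visible_ids):
        # Correlate the tracked object with identifiers recognized in the
        # viewer's field-of-view camera frames.
        return object_id in visible_ids

tracker = LocationTracker()
patron = tracker.register()
tracker.update(patron, (2.0, 1.5))
print(patron, tracker.in_view(patron, {patron}))   # obj-1 True
```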

12. The method of claim 8 wherein determining a position of the person within the location further comprises:
determining whether there is a head mounted, mixed reality, display device system associated with the personal identifier of the person in the location; and
responsive to there being a display device system associated with the personal identifier of the person in the location, retrieving from the display device system associated with the personal identifier field of view image data, identifying the position of the person within the location based on image mapping data of the location and the field of view image data of the display device system associated with the personal identifier, identifying a position of the user head mounted, mixed reality, display device system, generating virtual data for directing the user to the person in the location, and sending the virtual data to the user display device system.

13. The method of claim 12 further comprising:
responsive to there not being a head mounted, mixed reality, display device system associated with the personal identifier of the person in the location, determining whether there is a second head mounted, mixed reality, display device system associated with another personal identifier in the location having the person in its field of view; and
responsive to there being a second head mounted, mixed reality display device system having the person in its field of view, retrieving field of view image data from the second display device system, identifying the position of the person within the location based on image mapping data of the location and the field of view image data from the second display device system, identifying a position of the user display device system, generating virtual data for directing the user to the person in the location, and sending the virtual data to the user display device system.

14. The method of claim 13 further comprising:
responsive to there not being a head mounted, mixed reality, display device system associated with the personal identifier of the person in the location, determining whether there is a second head mounted, mixed reality, display device system associated with another personal identifier in the location having non-image location data indicating a relative position of the person from a position of the second display device system; and
responsive to there being a second head mounted, mixed reality, display device system having the non-image location data indicating the relative position of the person from the second display device system, identifying the position of the second display device system within the location based on image mapping data of the location and field of view image data of the second display device system, identifying the position of the person within the location based on the relative position of the person from the second display device system and the position of the second display device system within the location, identifying a position of the user display device system, generating virtual data for directing the user to the person in the location, and sending the virtual data to the user display device system.

15. The method of claim 14 wherein the non-image location data is data from one or more directional sensors on the second display device system.
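Claims 12 through 15 form a fallback chain for locating an out-of-view person: use that person's own headset if they wear one, else a second headset that has them in its field of view, else a second headset's directional (non-image) sensor reading combined with that headset's own mapped position. A sketch of the chain, with all data structures hypothetical:

```python
def locate(person_id, devices):
    own = devices.get(person_id)        # claim 12: the person's own headset
    if own is not None:
        return own["position"]
    for dev in devices.values():        # claim 13: a headset that sees them
        if person_id in dev["sees"]:
            return dev["sees"][person_id]
    for dev in devices.values():        # claims 14-15: directional sensors
        if person_id in dev["sensor_offsets"]:
            dx, dy = dev["sensor_offsets"][person_id]
            x, y = dev["position"]
            return (x + dx, y + dy)     # device position + sensed offset
    return None                         # person cannot be positioned

devices = {
    "carol": {"position": (6.0, 2.0), "sees": {},
              "sensor_offsets": {"patron-17": (-1.0, 0.5)}},
}
print(locate("patron-17", devices))   # -> (5.0, 2.5), via carol's sensor reading
```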

16. A head mounted, mixed reality display device system for providing contextual personal information comprising:
a mixed reality display positioned by a head mounted support structure worn by a user;
at least one front facing camera positioned on the support structure for capturing image data of a field of view of the mixed reality display;
one or more directional sensors attached to the support structure, each having a sensor position with reference to a body part of the user and transmitting an identity data set including the sensor position;
one or more software controlled processors communicatively coupled to the at least one front facing camera for receiving the image data of the field of view;
the one or more software controlled processors being communicatively coupled to a remote computer system executing a personal information service engine for sending a request with person selection criteria and a location of the user wearing the head mounted support structure, for receiving a personal identification data set of one or more other persons sharing the location with the user and satisfying the person selection criteria, and for determining whether the person associated with the personal identification data set is in the field of view of the mixed reality display, where the location shared by the user and the one or more other persons is such that face to face meetings can occur at the location between the user and the one or more other persons and the location includes a scene in a field of view of the mixed reality display of the user; and
at least one image generation unit communicatively coupled to the one or more software controlled processors and optically coupled to the mixed reality display for, responsive to the person associated with the personal identification data set being in the field of view, tracking virtual data to an in-field-of-view position of the person, and responsive to the person associated with the personal identification data set not being currently within the field of view, displaying image data which indicates a position of the person within the location but outside the field of view.
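Claim 16 is the apparatus claim: a head-mounted display, a front-facing camera, directional sensors each tied to a body position, software-controlled processors, and an image generation unit. Sketched below as plain types, with field names invented and claim 20's sensor kinds folded into a comment:

```python
from dataclasses import dataclass, field

@dataclass
class DirectionalSensor:
    kind: str           # claim 20: antenna, infrared, GPS, RF, access point,
    body_position: str  # cellular, or wireless USB; tied to a body part

@dataclass
class HeadMountedSystem:
    display: str = "see-through mixed reality display"
    front_camera: str = "front facing field-of-view camera"
    sensors: list = field(default_factory=list)

rig = HeadMountedSystem(sensors=[DirectionalSensor("infrared", "left temple")])
print(rig.sensors[0].kind)   # infrared
```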

17. The system of claim 16 further comprising:
a microphone supported by the support structure for capturing audio data and being communicatively coupled to the one or more software controlled processors for sending audio data;
one or more eye tracking assemblies positioned by the support structure for capturing image data of each eye and communicatively coupled to the one or more software controlled processors for sending image data of each eye;
the one or more software controlled processors identifying a person object of interest based on the image data of each eye and image data of the field of view of the mixed reality display; and
the one or more software controlled processors determining a personal interaction status for the person object of interest based on audio and image data captured of the person object of interest and a personal interaction rule set stored in an accessible memory.

18. The system of claim 16 further comprising:
a microphone supported by the support structure for capturing audio data and being communicatively coupled to the one or more software controlled processors for sending audio data;
the one or more software controlled processors identifying a person object of interest based on a gesture of the user recognized in image data of the field of view of the mixed reality display; and
the one or more software controlled processors determining a personal interaction status for the person object of interest based on audio and image data captured of the person object of interest and a personal interaction rule set stored in an accessible memory.

19. The system of claim 17 further comprising: the one or more software controlled processors storing in an accessible memory a personal interaction status for the person satisfying the person selection criteria; and the at least one image generation unit displaying a personal interaction status for the person satisfying the person selection criteria responsive to user input.
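Claims 17 and 18 pick a "person object of interest" from gaze or gesture, then determine interaction status against "a personal interaction rule set stored in an accessible memory." The patent does not disclose the rules themselves, so the toy rule set below, matched against cues extracted from audio and image data, is purely illustrative.

```python
RULES = [  # (predicate over observed cues, resulting status); first match wins
    (lambda cues: "speech" in cues and "eye_contact" in cues, "conversation"),
    (lambda cues: "wave" in cues, "gesture_acknowledgement"),
    (lambda cues: "hello" in cues, "greeting"),
]

def interaction_status(cues):
    """Apply the stored rule set to cues from captured audio and image data."""
    for predicate, status in RULES:
        if predicate(cues):
            return status
    return "none"

print(interaction_status({"speech", "eye_contact"}))  # -> conversation
print(interaction_status({"wave"}))                   # -> gesture_acknowledgement
```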

20. The system of claim 16 wherein the one or more directional sensors comprise at least one of the following:
a directional antenna;
an infrared device;
a Global Positioning System (GPS) device;
a radio frequency device;
a network access point;
a cellular telecommunication based device; and
a wireless Universal Serial Bus device.