US 10,838,484 C1 (12,652nd)
Visual aura around field of view
Alysha Naples, Ft. Lauderdale, FL (US); Jonathan Lawrence Mann, Seattle, WA (US); and Paul Armistead Hoover, Bothell, WA (US)
Filed by Magic Leap, Inc., Plantation, FL (US)
Assigned to MAGIC LEAP, INC., Plantation, FL (US)
Reexamination Request No. 90/019,207, May 10, 2023.
Reexamination Certificate for Patent 10,838,484, issued Nov. 17, 2020, Appl. No. 15/491,571, Apr. 19, 2017.
Claims priority of provisional application 62/325,685, filed on Apr. 21, 2016.
Ex Parte Reexamination Certificate issued on Jul. 16, 2024.
Int. Cl. G06F 3/01 (2006.01); G06F 3/04815 (2022.01); G06V 20/20 (2022.01); G06V 40/18 (2022.01)
CPC G06F 3/011 (2013.01) [G06F 3/012 (2013.01); G06F 3/013 (2013.01); G06F 3/04815 (2013.01); G06V 20/20 (2022.01); G06V 40/193 (2022.01)]
OG exemplary drawing
AS A RESULT OF REEXAMINATION, IT HAS BEEN DETERMINED THAT:
Claims 1, 11, 15 and 24 are determined to be patentable as amended.
Claims 2-10, 12-14, 16-23 and 25-27, dependent on an amended claim, are determined to be patentable.
New claims 28-31 are added and determined to be patentable.
1. A system for providing an indication of a hidden interactable object in a three-dimensional (3D) environment of a user, the system comprising:
a display system of a wearable device configured to present a three-dimensional view to a user and permit a user interaction with objects in a field of regard (FOR) of a user, the FOR comprising a portion of the environment around the user that is capable of being perceived by the user via the display system, wherein the wearable device is configured to be head mounted, and wherein the display system is coupled to the wearable device such that the display system moves in conjunction with movements of the head of the user, wherein the display system comprises a first light field display for a first eye of the user and a second light field display for a second eye of the user, and wherein the first and second light field displays are configured to render a visual representation of an aura;
a sensor configured to acquire data associated with a pose of the user; and
a hardware processor in communication with the sensor and the display system, the hardware processor programmed to:
determine a pose of the user based on the data acquired by the sensor;
determine a field of view (FOV) of the user through the display system based at least partly on the pose of the user, the FOV comprising a portion of the FOR that is capable of being perceived at a given time by the user via the display system;
identify a hidden interactable object, wherein the hidden interactable object comprises a virtual object within the FOV of the user that the user cannot directly perceive via the display system [ because the virtual object is obscured from the user by an obscuring object] ;
access contextual information associated with the hidden interactable object, wherein the contextual information comprises a location of the hidden interactable object in relation to the pose of the user;
determine a visual representation of the aura associated with the hidden interactable object based on the contextual information that indicates the location of the hidden interactable object to the user, the visual representation comprising:
a first visual representation of the aura associated with a first FOV of the first eye; and
a second visual representation of the aura associated with a second FOV of the second eye; and
render the visual representation of the aura associated with the hidden interactable object such that at least a portion of the visual aura perceivable by the user is on an edge of the FOV of the user by:
rendering the first visual representation of the aura by the first light field display at a first edge of the first FOV; and
rendering the second visual representation of the aura by the second light field display at a second edge of the second FOV, wherein the second visual representation does not match stereoscopically with the first visual representation and is rendered at a depth that is different than other virtual content rendered by the display system.
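The certificate does not specify any algorithm for the per-eye edge placement recited in claim 1; as one illustrative sketch only (all names, the field-of-view half-angle, and the interpupillary distance are assumptions, not part of the claim), the two distinct, non-stereoscopically-matched aura positions could be computed by clamping each eye's bearing to the hidden object onto that eye's FOV edge:

```python
import math

# Hypothetical sketch of the per-eye aura placement of claim 1.
# HALF_FOV and IPD are illustrative assumed values.
HALF_FOV = math.radians(20.0)   # assumed per-eye half field of view
IPD = 0.063                     # assumed interpupillary distance (meters)

def bearing(eye_pos, yaw, obj_pos):
    """Bearing of the object relative to the eye's gaze direction (2D)."""
    dx, dy = obj_pos[0] - eye_pos[0], obj_pos[1] - eye_pos[1]
    a = math.atan2(dy, dx) - yaw
    return math.atan2(math.sin(a), math.cos(a))  # wrap to (-pi, pi]

def aura_placements(head_pos, yaw, obj_pos):
    """Clamp each eye's bearing to its FOV edge, yielding two distinct
    (non-stereoscopically-matched) angles, one per light field display."""
    placements = []
    for side in (-1.0, +1.0):  # left eye, right eye
        # offset each eye perpendicular to the gaze direction
        ex = head_pos[0] + side * (IPD / 2) * math.cos(yaw + math.pi / 2)
        ey = head_pos[1] + side * (IPD / 2) * math.sin(yaw + math.pi / 2)
        b = bearing((ex, ey), yaw, obj_pos)
        placements.append(max(-HALF_FOV, min(HALF_FOV, b)))
    return placements
```

Because the two eye positions differ, the two clamped angles generally differ as well, which is consistent with the claim's requirement that the second visual representation "does not match stereoscopically with the first."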
11. The system of claim 1, wherein the hidden interactable object comprises a virtual object that becomes perceivable in response to a user interface operation.
15. A method for providing an indication of an interactable object in a three-dimensional (3D) environment of a user, the method comprising:
under control of a wearable device having a display system configured to present a three-dimensional (3D) view to a user and permit a user interaction with objects in a field of regard (FOR) of a user, the FOR comprising a portion of the environment around the user that is capable of being perceived by the user via the display system, wherein the wearable device is configured to be head mounted and wherein the display system is coupled to the wearable device such that the display system moves in conjunction with movements of the head of the user; a sensor configured to acquire data associated with a pose of the user; and a hardware processor in communication with the sensor and the display system, wherein the display system comprises a first light field display for a first eye of the user and a second light field display for a second eye of the user, and wherein the first and second light field displays are configured to render a visual representation of an aura:
determining a field of view (FOV) of the user through the display system based at least partly on the pose of the user, the FOV comprising a portion of the FOR that is capable of being perceived at a given time by the user via the display system;
identifying a hidden interactable object, wherein the hidden interactable object comprises a virtual object within the FOV of the user that the user cannot directly perceive via the display system [ because the virtual object is obscured from the user by an obscuring object] ;
accessing contextual information associated with the hidden interactable object, wherein the contextual information comprises a location of the hidden interactable object in relation to the pose of the user;
determining a first visual representation of a first aura associated with a FOV of the first eye based on the contextual information that indicates the location of the hidden interactable object to the user;
determining a second visual representation of a second aura associated with a FOV of the second eye based on the contextual information that indicates the location of the hidden interactable object to the user, wherein the first visual representation of the aura is different from the second visual representation of the aura;
rendering the first visual representation of the first aura by the first light field display at a first edge of a first FOV associated with the first eye; and
rendering the second visual representation of the second aura by the second light field display at a second edge of the second FOV associated with the second eye.
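The amended claims and new claims 28-31 turn on an object being hidden "because the virtual object is obscured from the user by an obscuring object" (physical or virtual). As a purely illustrative sketch of such an occlusion condition, with hypothetical names and simplified 2D geometry not drawn from the certificate, an object inside the FOV could be flagged as hidden when the line of sight to it intersects an obscuring object:

```python
# Hypothetical occlusion test for the "hidden interactable object"
# condition of claims 15 and 28-31. Geometry and names are illustrative.

def segment_hits_sphere(p0, p1, center, radius):
    """True if the segment p0->p1 passes within `radius` of `center` (2D)."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    fx, fy = p0[0] - center[0], p0[1] - center[1]
    seg_len2 = dx * dx + dy * dy
    # parameter of the closest point on the segment, clamped to [0, 1]
    t = 0.0 if seg_len2 == 0 else max(0.0, min(1.0, -(fx * dx + fy * dy) / seg_len2))
    cx, cy = p0[0] + t * dx - center[0], p0[1] + t * dy - center[1]
    return cx * cx + cy * cy <= radius * radius

def is_hidden(user_pos, obj_pos, obscurers):
    """An interactable object is treated as hidden when any obscuring
    object (each a (center, radius) pair) blocks the line of sight."""
    return any(segment_hits_sphere(user_pos, obj_pos, c, r) for c, r in obscurers)
```

Claims 28-31 distinguish only whether the obscuring object is physical or virtual; a test like the one above is indifferent to that distinction, which would instead be a property of the occluder's source data.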
24. The method of claim 15, wherein the hidden interactable object comprises a virtual object that becomes perceivable in response to a user interface operation.
[ 28. The system of claim 1, wherein the obscuring object is a physical obscuring object.]
[ 29. The system of claim 1, wherein the obscuring object is a virtual obscuring object.]
[ 30. The method of claim 15, wherein the obscuring object is a physical obscuring object.]
[ 31. The method of claim 15, wherein the obscuring object is a virtual obscuring object.]