Patent Analysis of "Light field light source orientation method for augmented reality and virtual reality and front-end device"

Updated: 12 June 2019

Patent Registration Data

Publication Number: US10002464
Application Number: US15/381137
Application Date: 16 December 2016
Publication Date: 19 June 2018
Current Assignee: PACIFIC FUTURE LIMITED
Original Assignee (Applicant): PACIFIC FUTURE LIMITED
International Classification: G09G5/00, G09G5/10, G06T19/00, G06F3/01
Cooperative Classification: G06T19/006, G06F3/011, G09G5/10, G09G2320/0646, G09G2320/0666
Inventors: THEW, IAN ROY; LEE, KIEN YI

Patent Images

This patent contains figures and images illustrating the invention and its embodiments.


Abstract

The present invention relates to a light field light source orientation method for augmented reality and virtual reality, and to a front-end device. The method comprises: A. identifying a target marker; B. tracking the target marker; C. analyzing the pixel color disc on the marker object; D. analyzing color-difference zones on the marker object and analyzing the cast shadow to calculate the direction of the environmental light source; E. pushing the light-source direction data to an augmented reality object; and F. compensating and adjusting the imaging of the augmented reality object. The present invention can collect surrounding environmental factors, such as the light source direction, so as to project a computer-generated object into a real environment with a shadow consistent with reality, making the augmented reality environment more realistic and giving the object a realistic shadow effect.


Claims

1. A light field light source orientation method for augmented reality and virtual reality, comprising the steps of: identifying a target mark as follows: collecting image data through a front-end device and identifying the collected image data, the front-end device being a mobile phone or helmet glasses; tracking the target mark as follows: tracking the position of a certain object in the image data by the front-end device; obtaining the surface shape and feature of the certain object, and obtaining a data value of a color space of the certain object, the data value comprising: hue-color essential attributes, saturation-color purity of 0-100% value and lightness and brightness of 0-100% value; analyzing a grey zone on the certain object as follows: analyzing, by the front-end device, a BLOB of the certain object, judging a refraction angle, length and brightness of the grey zone of the certain object according to the BLOB, and determining a light source direction in a place where the front-end device is located, a distance between the front-end device and a light source, and the intensity of the light source according to the refraction angle, the length and the brightness of the grey zone; pushing the data to a light field of a virtual object as follows: pushing the data into the light field of the virtual object after converting the data into identifiable data of the front-end device according to the data value of the color space of the certain object, the light source direction, the distance between the front-end device and the light source and the intensity of the light source; and compensating and adjusting the imaging of the virtual object: adjusting the virtual object and a shadow of the virtual object through the data value of the color space of the certain object, the light source direction, the distance between the front-end device and the light source and the intensity of the light source, so that a virtual scenario is more realistic and clear.

2. The method of claim 1, wherein the step of analyzing a grey zone on the certain object is as follows: converting the refraction angle, the length and the brightness into identifiable codes of the front-end device; determining the light source direction in the place where the front-end device is located according to identification codes of the refraction angle; determining the distance between the front-end device and the light source according to identification codes of the length of the grey zone; and determining the intensity of the light source according to identification codes of the brightness of the grey zone.

3. The method of claim 1, comprising steps of adjusting brightness, hue and saturation of the virtual object through the data value of the color space of the certain object, the light source direction, the distance between the front-end device and the light source and the intensity of the light source; adjusting the refraction angle, the length and the brightness of the shadow of the virtual object according to the light source direction, the distance between the front-end device and the light source and the intensity of the light source; and the virtual scenario is more realistic and clear.

4. A front-end device, the front-end device comprising a mobile phone or helmet glasses comprising: a processor; a memory coupled with the processor, wherein the processor is configured to execute programmed instructions stored in the memory for a target mark identifying process used for collecting image data and identifying the collected image data; a target mark tracking process used for tracking the position of a certain object in the image data, obtaining the surface shape and feature of the certain object, and obtaining a data value of a color space of the certain object, the data value comprising: hue-color essential attributes, saturation-color purity of 0-100% value and lightness and brightness of 0-100% value; a grey zone analyzing process used for analyzing a BLOB of the certain object, judging a refraction angle, length and brightness of the grey zone of the certain object according to the BLOB, and determining a light source direction in a place where the front-end device is located, a distance between the front-end device and a light source, and the intensity of the light source according to the refraction angle, the length and the brightness of the grey zone; a data pushing process used for pushing the data into the light field of the virtual object after converting the data into identifiable data of the front-end device according to the data value of the color space of the certain object, the light source direction, the distance between the front-end device and the light source and the intensity of the light source; and a compensation and adjustment process used for adjusting the virtual object and a shadow of the virtual object through the data value of the color space of the certain object, the light source direction, the distance between the front-end device and the light source and the intensity of the light source, so that a virtual scenario is more realistic and clear.

5. The front-end device of claim 4, wherein the grey zone analyzing process is used for converting the refraction angle, the length and the brightness into identifiable codes of the front-end device; determining the light source direction in the place where the front-end device is located according to identification codes of the refraction angle; determining the distance between the front-end device and the light source according to identification codes of the length of the grey zone; and determining the intensity of the light source according to identification codes of the brightness of the grey zone.

6. The front-end device of claim 4, wherein the compensation and adjustment process is used for adjusting brightness, hue and saturation of the virtual object through the data value of the color space of the certain object, the light source direction, the distance between the front-end device and the light source and the intensity of the light source; adjusting the refraction angle, the length and the brightness of the shadow of the virtual object according to the light source direction, the distance between the front-end device and the light source and the intensity of the light source; and the virtual scenario is more realistic and clear.



Description

CROSS REFERENCE TO RELATED APPLICATIONS

The present application claims the benefit of Chinese patent application No. 201611086214.0, filed on Nov. 29, 2016, which is hereby incorporated by reference in its entirety.

TECHNICAL FIELD

The present invention relates to the technical field of computer simulation, in particular to a light field light source orientation method and system for augmented reality and virtual reality.

BACKGROUND

In augmented reality at the present stage, a virtual object and the light field of the real environment lack direct interaction and corresponding adjustment to make the object look real, including changes to the object's shadow and the degree of integration of the object's surface. Interaction with the real environment can only be conducted through two modes: a hardware photo sensor, or GPS and position data. Augmented reality at the present stage has the following disadvantages:

Early-stage 3D data collection for the virtual object is extremely demanding, and the large volume of data to be processed slows the reaction speed. User devices such as AR helmets and mobile-end devices face high hardware demands and high cost. Content production is complex, subject to many constraints, and inefficient. Improving device capability through hardware alone makes devices oversized and degrades the user experience.

SUMMARY

The main purpose of the present invention is to provide a light field light source orientation method for augmented reality and virtual reality and a front-end device. The present invention can collect surrounding environmental factors, such as the light source direction, so as to project a computer-generated object into a reality environment to possess a shadow consistent with that in the reality, so that an augmented reality environment is more realistic and restores a real effect of the object.

To solve the technical problem, the present invention adopts the following technical solution:

The present invention provides a light field light source orientation method for augmented reality and virtual reality, comprising steps:

A. identifying a target mark as follows: collecting image data through a front-end device and identifying the collected image data;

B. tracking the target mark as follows: tracking the position of a certain object in the image data by the front-end device; obtaining the surface shape and feature of the certain object, and locking a pixel color disc of the certain object through combined establishment of algorithm binding and engine matching;

C. analyzing the pixel color disc on the certain object as follows: collecting the locked pixel color disc of the certain object, and analyzing and obtaining a data value of the pixel color disc;

D. analyzing a grey zone on the certain object as follows: analyzing, by the front-end device, a BLOB of the certain object, judging a refraction angle, length and brightness of the grey zone of the certain object according to the BLOB, and determining a light source direction in a place where the front-end device is located, a distance between the front-end device and a light source, and the intensity of the light source according to the refraction angle, the length and the brightness of the grey zone;

E. pushing the data to a light field of a virtual object as follows: pushing the data into the light field of the virtual object after converting the data into identifiable data of the front-end device according to the data value of the pixel color disc of the certain object, the light source direction, the distance between the front-end device and the light source and the intensity of the light source; and

F. compensating and adjusting the imaging of the virtual object: adjusting the virtual object and a shadow of the virtual object through the data value of the pixel color disc of the certain object, the light source direction, the distance between the front-end device and the light source and the intensity of the light source, so that a virtual scenario is more realistic and clear.

Preferably, the data value of the pixel color disc comprises: hue (the essential color attribute), saturation (color purity, a 0-100% value) and lightness and brightness (each a 0-100% value).

Preferably, step D is as follows: converting the refraction angle, the length and the brightness into identifiable codes of the front-end device; determining the light source direction in the place where the front-end device is located according to identification codes of the refraction angle; determining the distance between the front-end device and the light source according to identification codes of the length of the grey zone; and determining the intensity of the light source according to identification codes of the brightness of the grey zone.

Preferably, the front-end device is a mobile phone or helmet glasses.

Preferably, the method also comprises steps of: adjusting the brightness, hue and saturation of the virtual object through the data value of the pixel color disc of the certain object, the light source direction, the distance between the front-end device and the light source and the intensity of the light source; and adjusting the refraction angle, the length and the brightness of the shadow of the virtual object according to the light source direction, the distance between the front-end device and the light source and the intensity of the light source, so that the virtual scenario is more realistic and clear.

Embodiments of the present invention also provide a front-end device, comprising:

a target mark identifying unit used for collecting image data and identifying the collected image data;

a target mark tracking unit used for tracking the position of a certain object in the image data, obtaining the surface shape and feature of the certain object, and locking a pixel color disc of the certain object through combined establishment of algorithm binding and engine matching;

a pixel color disc analyzing unit used for collecting the locked pixel color disc of the certain object, and analyzing and obtaining a data value of the pixel color disc;

a grey zone analyzing unit used for analyzing a BLOB of the certain object, judging a refraction angle, length and brightness of the grey zone of the certain object according to the BLOB, and determining a light source direction in a place where the front-end device is located, a distance between the front-end device and a light source, and the intensity of the light source according to the refraction angle, the length and the brightness of the grey zone;

a data pushing unit used for pushing the data into the light field of the virtual object after converting the data into identifiable data of the front-end device according to the data value of the pixel color disc of the certain object, the light source direction, the distance between the front-end device and the light source and the intensity of the light source; and

a compensation and adjustment unit used for adjusting the virtual object and a shadow of the virtual object through the data value of the pixel color disc of the certain object, the light source direction, the distance between the front-end device and the light source and the intensity of the light source, so that a virtual scenario is more realistic and clear.

Preferably, the grey zone analyzing unit is used for converting the refraction angle, the length and the brightness into identifiable codes of the front-end device; determining the light source direction in the place where the front-end device is located according to identification codes of the refraction angle; determining the distance between the front-end device and the light source according to identification codes of the length of the grey zone (shadow); and determining the intensity of the light source according to identification codes of the brightness of the grey zone.

Preferably, the data value of the pixel color disc comprises: hue (the essential color attribute), saturation (color purity, a 0-100% value) and lightness and brightness (each a 0-100% value).

Preferably, the compensation and adjustment unit is used for adjusting the brightness, hue and saturation of the virtual object through the data value of the pixel color disc of the certain object, the light source direction, the distance between the front-end device and the light source and the intensity of the light source, and for adjusting the refraction angle, the length and the brightness of the shadow of the virtual object according to the light source direction, the distance between the front-end device and the light source and the intensity of the light source, so that the virtual scenario is more realistic and clear.

Preferably, the front-end device comprises a mobile phone or helmet glasses.

Through implementation of the technical solution of the present invention, the following beneficial effects are obtained. The method and the device provided by the present invention solve the problems of over-demanding front-end data collection hardware, costly data transmission devices and poor user experience of display-end helmet devices; solve the problems that the virtual object cannot be integrated into the light field of the real environment, that the shadow cannot be accurately tracked, that the light ray angle is inaccurate and that the displayed object appears to float in the air; solve the problem of serious color difference caused by the influence of the light field on color; and solve the problem of high manufacturing cost when the virtual object is limited to a digitized image and a projected real object. By detecting the light source direction of the real environment, the object projected from the virtual augmented environment is made consistent with the real shadow direction, the reflective refraction angle and the brightness are adjusted, and the realism of the projected object in the real environment is increased. Without a powerful front-end data collection device, complicated early-stage production or costly terminal display hardware, light field reproduction for the virtual object is completed, and the virtual object can be integrated into the light field of the real environment in real time, so that the user's naked-eye experience is closer to reality.

DESCRIPTION OF THE DRAWINGS

FIG. 1 is a flow chart of a method provided by the embodiments of the present invention; and

FIG. 2 is a structural schematic diagram of a device provided by the embodiments of the present invention.

Realization of the purpose, functional characteristics and advantages of the present invention will be further described in combination with the embodiments and with reference to the drawings.

DETAILED DESCRIPTION

To make the purpose, the technical solution and the advantages of the present invention more clear, the present invention will be further described below in detail in combination with the drawings and the embodiments. It should be understood that specific embodiments described herein are only used for explaining the present invention, not used for limiting the present invention.

The embodiments of the present invention provide a light field light source orientation method for augmented reality and virtual reality. As shown in FIG. 1, the method comprises steps:

A. identifying a target mark as follows: collecting image data through a front-end device and identifying the collected image data; in the present embodiment, more specifically, the front-end device is a mobile phone or helmet glasses;

B. tracking the target mark as follows: tracking the position of a certain object in the image data by the front-end device; the certain object is an object in the image; for example, a bird or a tree on a mountain in a landscape painting can act as a certain object; subsequent description contents in the present embodiment are based on the certain object; obtaining the surface shape and feature of the certain object, and locking a pixel color disc of the certain object through combined establishment of algorithm binding and engine matching;

C. analyzing the pixel color disc on the certain object as follows: collecting the locked pixel color disc of the certain object, and analyzing and obtaining a data value of the pixel color disc; in the present embodiment, more specifically, the data value of the pixel color disc comprises: hue (H, the essential color attribute), saturation (S, color purity, a 0-100% value) and lightness (V) and brightness (L), each a 0-100% value.
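Step C can be illustrated with a minimal sketch of extracting such color-disc values from tracked pixels. The helper name `color_disc_values` and the averaging strategy are illustrative assumptions, not taken from the patent; the 0-100% scaling follows the data value defined above.

```python
# Sketch of step C: derive hue / saturation / lightness values, with
# saturation and lightness rescaled to the 0-100% range the patent uses.
# Pixels are assumed to arrive as 8-bit RGB triples.
import colorsys

def color_disc_values(rgb_pixels):
    """Average the HSV values of the tracked pixels and rescale
    saturation and value to 0-100%."""
    n = len(rgb_pixels)
    h_sum = s_sum = v_sum = 0.0
    for r, g, b in rgb_pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        h_sum += h
        s_sum += s
        v_sum += v
    return {
        "hue_deg": 360.0 * h_sum / n,         # essential color attribute
        "saturation_pct": 100.0 * s_sum / n,  # color purity, 0-100%
        "lightness_pct": 100.0 * v_sum / n,   # brightness, 0-100%
    }

# Example: a small patch of pure red pixels.
print(color_disc_values([(255, 0, 0)] * 4))
```

A real implementation would run over the pixels locked by the tracking step B rather than a hand-built list.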

D. analyzing a grey zone on the certain object as follows: analyzing, by the front-end device, a BLOB (binary large object) of the certain object, judging a refraction angle, length and brightness of the grey zone of the certain object according to the BLOB, and determining a light source direction in a place where the front-end device is located, a distance between the front-end device and a light source, and the intensity of the light source according to the refraction angle, the length and the brightness of the grey zone; in the present embodiment, more specifically, step D is as follows: converting the refraction angle, the length and the brightness into identifiable codes of the front-end device; determining the light source direction in the place where the front-end device is located according to identification codes of the refraction angle; determining the distance between the front-end device and the light source according to identification codes of the length of the grey zone (shadow); and determining the intensity of the light source according to identification codes of the brightness of the grey zone;
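The geometry behind step D can be sketched as follows. This is a simple heuristic under stated assumptions — the patent's identification-code scheme is not specified in detail, and the function name, the shadow-vector input and the 0-1 intensity heuristic are all illustrative.

```python
# Sketch of step D: infer light-source parameters from a detected
# shadow (grey zone), assuming the shadow vector and object height
# are already known from the BLOB analysis.
import math

def light_from_shadow(shadow_vec, object_height, shadow_brightness):
    """shadow_vec: (dx, dy) from the object's base to the shadow tip.
    object_height: height of the tracked object, same units.
    shadow_brightness: mean grey-zone brightness, 0.0 (black) to 1.0.
    Returns (azimuth_deg, elevation_deg, intensity)."""
    length = math.hypot(*shadow_vec)
    # The light lies in the direction opposite the cast shadow.
    azimuth_deg = math.degrees(math.atan2(-shadow_vec[1], -shadow_vec[0])) % 360
    # A longer shadow means a lower source: tan(elevation) = height / length.
    elevation_deg = math.degrees(math.atan2(object_height, length))
    # Darker shadows suggest a stronger, more dominant source (heuristic).
    intensity = 1.0 - shadow_brightness
    return azimuth_deg, elevation_deg, intensity
```

For example, a shadow cast due east from an object of equal height yields a light source due west at 45 degrees elevation; the shadow length stands in for the distance cue the patent derives from the grey-zone length codes.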

E. pushing the data to a light field of a virtual object as follows: pushing the data into the light field of the virtual object (an image formed by projecting the certain object) after converting the data into identifiable data of the front-end device according to the data value of the pixel color disc of the certain object, the light source direction, the distance between the front-end device and the light source and the intensity of the light source; and

F. compensating and adjusting the imaging of the virtual object as follows: adjusting the virtual object and a shadow of the virtual object through the data value of the pixel color disc of the certain object, the light source direction, the distance between the front-end device and the light source and the intensity of the light source, so that the virtual scenario is more realistic and clear. In the present embodiment, more specifically, step F is as follows: adjusting the brightness, hue and saturation of the virtual object through the data value of the pixel color disc of the certain object, the light source direction, the distance between the front-end device and the light source and the intensity of the light source; and adjusting the refraction angle, the length and the brightness of the shadow of the virtual object according to the light source direction, the distance between the front-end device and the light source and the intensity of the light source, so that the virtual scenario is more realistic and clear.
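The compensation of step F can be sketched as blending the virtual object's color values toward the measured environment and scaling its shadow from the light parameters. All field names (`hue_deg`, `elevation_deg`, `intensity`), the blend weight and the shadow-length formula are assumptions for illustration, not the patent's actual adjustment rules.

```python
# Sketch of step F: compensate a virtual object's (hue, sat%, light%)
# and its shadow using environment measurements from steps C and D.
import math

def compensate(virtual_hsv, env, weight=0.5):
    """Blend virtual color values toward the environment and derive
    shadow length/darkness. `env` carries hue_deg, saturation_pct,
    lightness_pct, elevation_deg and intensity (all illustrative)."""
    h, s, v = virtual_hsv
    adjusted = (
        (1 - weight) * h + weight * env["hue_deg"],
        (1 - weight) * s + weight * env["saturation_pct"],
        (1 - weight) * v + weight * env["lightness_pct"] * env["intensity"],
    )
    # Shadow length (in object heights) grows as the light drops
    # toward the horizon: length = 1 / tan(elevation).
    shadow_len = 1.0 / max(math.tan(math.radians(env["elevation_deg"])), 1e-3)
    shadow_darkness = env["intensity"]  # stronger light, darker shadow
    return adjusted, shadow_len, shadow_darkness
```

A renderer would then draw the shadow at the azimuth from step D with this length and darkness, which is what keeps the virtual object's shadow consistent with real shadows in the scene.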

The method provided by the above embodiment solves the problems of over-demanding front-end data collection hardware, costly data transmission devices and poor user experience of display-end helmet devices; solves the problems that the virtual object cannot be integrated into the light field of the real environment, that the shadow cannot be accurately tracked, that the light ray angle is inaccurate and that the displayed object appears to float in the air; solves the problem of serious color difference caused by the influence of the light field on color; and solves the problem of high manufacturing cost when the virtual object is limited to a digitized image and a projected real object. By detecting the light source direction of the real environment, the object projected from the virtual augmented environment is made consistent with the real shadow direction, the reflective refraction angle and the brightness are adjusted, and the realism of the projected object in the real environment is increased. Without a powerful front-end data collection device, complicated early-stage production or costly terminal display hardware, light field reproduction for the virtual object is completed, and the virtual object can be integrated into the light field of the real environment in real time, so that the user's naked-eye experience is closer to reality.

Embodiments of the present invention also provide a front-end device. As shown in FIG. 2, the front-end device comprises:

a target mark identifying unit used for collecting image data and identifying the collected image data;

a target mark tracking unit used for tracking the position of a certain object in the image data, obtaining the surface shape and feature of the certain object, and locking a pixel color disc of the certain object through combined establishment of algorithm binding and engine matching;

a pixel color disc analyzing unit used for collecting the locked pixel color disc of the certain object, and analyzing and obtaining a data value of the pixel color disc;

a grey zone analyzing unit used for analyzing a BLOB (binary large object) of the certain object, judging a refraction angle, length and brightness of the grey zone of the certain object according to the BLOB, and determining a light source direction in a place where the front-end device is located, a distance between the front-end device and a light source, and the intensity of the light source according to the refraction angle, the length and the brightness of the grey zone;

a data pushing unit used for pushing the data into the light field of the virtual object (an image formed by projecting the certain object) after converting the data into identifiable data of the front-end device according to the data value of the pixel color disc of the certain object, the light source direction, the distance between the front-end device and the light source and the intensity of the light source; and

a compensation and adjustment unit used for adjusting the virtual object and a shadow of the virtual object through the data value of the pixel color disc of the certain object, the light source direction, the distance between the front-end device and the light source and the intensity of the light source, so that a virtual scenario is more realistic and clear.
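The grey zone (shadow) analysis performed by these units can be sketched as a simple blob scan over a greyscale image. This is a hedged illustration only: the threshold value, the centroid-based direction estimate and the mean-brightness intensity proxy are assumptions, not the patented algorithm:

```python
import math

def analyze_grey_zone(gray, object_xy, shadow_threshold=60):
    """Hypothetical sketch: find the shadow BLOB in a greyscale image
    (a list of pixel rows) and infer light direction, a distance proxy
    and a light intensity proxy from it."""
    shadow = [(x, y, v) for y, row in enumerate(gray)
              for x, v in enumerate(row) if v < shadow_threshold]
    if not shadow:
        return None
    cx = sum(p[0] for p in shadow) / len(shadow)   # shadow blob centroid
    cy = sum(p[1] for p in shadow) / len(shadow)
    ox, oy = object_xy
    # The light comes from the side opposite the shadow's offset from the object.
    direction_deg = math.degrees(math.atan2(oy - cy, ox - cx))
    length = math.hypot(cx - ox, cy - oy)          # longer shadow -> lower/farther light
    brightness = sum(p[2] for p in shadow) / len(shadow)  # darker blob -> stronger light
    return direction_deg, length, brightness
```

On a synthetic frame with a dark patch below-right of the tracked object, this returns a light direction pointing up-left, which matches the intuition that the shadow falls away from the light.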

In the above embodiment, more specifically, the grey zone analyzing unit is used for converting the refraction angle, the length and the brightness into identification codes identifiable by the front-end device; determining the light source direction in the place where the front-end device is located according to the identification code of the refraction angle; determining the distance between the front-end device and the light source according to the identification code of the length of the grey zone (shadow); and determining the intensity of the light source according to the identification code of the brightness of the grey zone.
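The conversion into identification codes might, purely for illustration, quantize each measurement into a small integer bin; the bin widths and code ranges below are arbitrary assumptions, not taken from the patent:

```python
def encode_measurements(angle_deg, length_px, brightness):
    """Hypothetical encoding of shadow measurements as device-identifiable codes."""
    angle_code = int(angle_deg % 360) // 10           # 36 direction codes (10-degree bins)
    length_code = min(int(length_px) // 50, 15)       # 16 distance bins, 50 px wide
    brightness_code = min(int(brightness) // 16, 15)  # 16 intensity bins (8-bit grey)
    return angle_code, length_code, brightness_code
```

Coarse codes like these keep the pushed data compact, which fits the patent's stated goal of avoiding heavyweight front-end hardware.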

In the above embodiment, more specifically, the data value of the pixel color disc comprises: hue (H), the essential color attribute; saturation (S), the color purity as a 0-100% value; and brightness (V) and lightness (L), each as a 0-100% value.
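These four values correspond to the standard HSV and HSL color models, and can be read for a single pixel with Python's standard colorsys module; the function name and the percentage scaling here are illustrative:

```python
import colorsys

def pixel_color_values(r, g, b):
    """Read H, S, V and L for one 8-bit RGB pixel, scaled like the patent's values."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    # HSL lightness from the same pixel, via rgb_to_hls (note the H-L-S order)
    _, l, _ = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    return {"hue_deg": h * 360, "saturation_pct": s * 100,
            "value_pct": v * 100, "lightness_pct": l * 100}
```

Pure red, for instance, yields hue 0, full saturation and value, and 50% lightness, illustrating why the patent tracks V and L as separate 0-100% values.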

In the above embodiment, more specifically, the compensation and adjustment unit is used for adjusting the brightness, hue and saturation of the virtual object through the data value of the pixel color disc of the certain object, the light source direction, the distance between the front-end device and the light source and the intensity of the light source, and for adjusting the refraction angle, the length and the brightness of the shadow of the virtual object according to the light source direction, the distance between the front-end device and the light source and the intensity of the light source, so that the virtual scenario is more realistic and clear.
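As a minimal sketch of the brightness part of this compensation, an inverse-square falloff can scale the virtual object's brightness by the sensed light intensity and distance; the 0.2 ambient floor and the clamping are assumptions, not taken from the patent:

```python
def compensate_object_brightness(base_pct, light_intensity, distance_m):
    """Hedged sketch: scale a virtual object's brightness (0-100%) by an
    inverse-square falloff of the sensed light, keeping some ambient light."""
    falloff = light_intensity / max(distance_m, 1.0) ** 2
    scale = 0.2 + 0.8 * min(falloff, 1.0)   # ambient floor plus clamped direct term
    return min(100.0, base_pct * scale)
```

Doubling the distance to the light quarters the direct term, so the object dims toward its ambient floor rather than vanishing, which keeps the composited scene plausible.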

In the above embodiment, more specifically, the front-end device is a mobile phone or helmet glasses.

The device provided by the above embodiment solves the problems of overly demanding front-end data collection hardware, the high cost of the data transmission device and the poor user experience of helmet devices at the display end; it solves the problems that the virtual object cannot be integrated into the light field of the reality environment, that the shadow cannot be accurately tracked, that the light ray angle is inaccurate and that the displayed object appears to float in the air; it solves the problem of serious color difference caused by the influence of the light field on color; and it solves the problem of high manufacturing cost arising when the virtual object is limited to a digitized image and a projected real object. By detecting the light source direction of the reality environment, the object projected from the virtual augmented environment is kept consistent with the real shadow direction, the reflective refraction angle and the brightness are adjusted, and the realism of the projected object in the reality environment is increased. Light field restoration of the virtual object is completed without a powerful front-end data collection device, complicated early manufacture or costly terminal display hardware, so the virtual object can be integrated into the light field of the reality environment in real time and the naked-eye experience of the user is closer to reality.

The above only describes preferred embodiments of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and the principle of the present invention shall be contained within the protection scope of the present invention.


