Patent Analysis of

Method and apparatus to perform correlation-based entropy removal from quantized still images or quantized time-varying video sequences in transform

Updated: 12 June 2019

Patent Registration Data

Publication Number

US10021423

Application Number

US15/189242

Application Date

22 June 2016

Publication Date

10 July 2018

Current Assignee

ZPEG, INC.

Original Assignee (Applicant)

ZPEG, INC.

International Classification

H04N19/625,H04N19/14,H04N19/129,H04N19/91,H04N19/18

Cooperative Classification

H04N19/625,H04N19/124,H04N19/129,H04N19/14,H04N19/91

Inventor

WESTWATER, RAYMOND JOHN

Patent Images

This patent contains figures and images illustrating the invention and its embodiment.


Abstract

Pure transform-based technologies, such as the DCT or wavelets, can leverage a mathematical model based on one or a few parameters to generate the expected distribution of the transform components' energy, and generate ideal entropy removal configuration data continuously responsive to changes in video behavior. Construction of successive-refinement streams is supported by this technology, permitting response to changing channel conditions. Lossless compression is also supported by this process. The embodiment described herein uses a video correlation model to develop optimal entropy removal tables and an optimal transmission sequence based on a combination of descriptive characteristics of the video source, enabling independent derivation of said optimal entropy removal tables and optimal transmission sequence on both the encoder and decoder sides of the compression and playback process.


Claims

1. A method comprising: under control of one or more processors configured with executable instructions, receiving uncompressed visual data; measuring a plurality of characteristics of the uncompressed visual data, comprising: determining at least one measured variance associated with the uncompressed visual data, the at least one measured variance including one averaged value for each group of frames; determining at least one measured correlation coefficient associated with the uncompressed visual data, the at least one measured correlation coefficient including an averaged value for each of the groups of frames respectively corresponding to each of the groups of frames used to determine the at least one measured variance; specifying an orthogonal transform; specifying a block size associated with the orthogonal transform; specifying one or more quantizing coefficients associated with the specified orthogonal transform; determining calculated variances for the one or more quantizing coefficients from the at least one measured variance and the at least one measured correlation coefficient; applying the orthogonal transform to the uncompressed visual data to produce multiple blocks of transformed visual data; quantizing individual blocks of the transformed visual data by the quantizing coefficients to generate blocks of the quantized transformed visual data; calculating a probability distribution for the individual blocks of the quantized transformed visual data, each of the blocks of the quantized transformed visual data having the block size and wherein the calculating the probability distribution is based at least in part on the measured characteristics of the uncompressed visual data and the at least one of the calculated variances, the measured characteristics includes the at least one measured variance associated with the uncompressed visual data and the at least one measured correlation coefficient associated with the uncompressed visual data; entropy 
encoding the blocks of the quantized transformed visual data based at least in part on a relative probability of each of the blocks of the quantized transformed visual data to generate entropy-coded blocks of the quantized transformed visual data; and transmitting or storing the plurality of characteristics of the uncompressed visual data and the entropy-coded blocks of the quantized transformed visual data.

2. The method of claim 1, further comprising: collecting the individual blocks of the quantized transformed visual data into symbols based on a frequency band.

3. The method of claim 1, wherein the at least one measured variances of the uncompressed visual data includes the averaged value for each of the blocks of the transformed visual data and the at least one correlation coefficients includes one averaged value per frame.

4. The method of claim 1, wherein the at least one measured variances of the uncompressed visual data includes one averaged value for each of the blocks of the transformed visual data and the at least one correlation coefficients includes one averaged value for each of the blocks of the transformed visual data.

5. The method of claim 1, wherein the at least one measured variances of the uncompressed visual data includes one averaged value for each of the blocks of the transformed visual data and the at least one correlation coefficients includes one averaged value per dimension of each of the blocks of the transformed visual data.

6. The method of claim 1, wherein the at least one measured variances of the uncompressed visual data includes one averaged value for each of the blocks of the transformed visual data and the at least one correlation coefficients includes one averaged value per dimension of each of the blocks of the transformed visual data.

7. The method of claim 1, wherein the at least one measured variances of the uncompressed visual data includes one averaged value per dimension for each group of frames and the at least one correlation coefficients includes one averaged value per dimension for each of the groups of frames.

8. The method of claim 1, wherein the at least one measured variances of the uncompressed visual data includes one averaged value per dimension for each of the blocks of the transformed visual data and the at least one correlation coefficients includes one averaged value per dimension for the each of the blocks of the transformed visual data.

9. The method of claim 1, further comprising transmitting or storing the at least one measured variance of the uncompressed visual data and the at least one measured correlation coefficient of the uncompressed visual data with the plurality of characteristics of the uncompressed visual data and the entropy-coded blocks of the quantized transformed visual data.

10. The method of claim 1, wherein the orthogonal transform is a discrete cosine transform.

11. The method of claim 1, wherein the uncompressed visual data comprises a two-dimensional still image.

12. The method of claim 1, wherein the block size comprises the entire image.

13. The method of claim 1, wherein the uncompressed visual data comprises three-dimensional video data.

14. The method of claim 1, wherein the block size is a number of frames by a size of a single frame.

15. The method of claim 1, further comprising organizing the one or more quantizing coefficients into a decreasing order based on a component variance associated with each of the blocks of the transformed visual data.

16. The method of claim 1, further comprising organizing the one or more quantizing coefficients into a decreasing order based on a component variance.

17. The method of claim 1, further comprising organizing the one or more quantizing coefficients into bands of equal weight based on an order of decreasing component variance.

18. The method of claim 1, wherein the entropy encoding the blocks of the quantized transformed visual data is based at least in part on a Huffman coding.

19. The method of claim 1, wherein the entropy encoding the blocks of the quantized transformed visual data is based at least in part on an arithmetic coding.

20. The method of claim 1, wherein the transform is an orthonormal wavelet.

21. The method of claim 1, wherein the at least one measured correlation coefficient is calculated using the formula:

ρ_{X(X+1)} = Σ_{i=0}^{N−2} (p_i − μ_X)(p_{i+1} − μ_Y) / √( (Σ_{i=0}^{N−2} p_i² − μ_X) · (Σ_{i=0}^{N−2} p_{i+1}² − μ_Y) ).

22. The method of claim 1, wherein the at least one measured variance is equal to Σ p_i²/N − (Σ p_i/N)².
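For illustration, the statistics of claims 21 and 22 can be sketched in Python. The function names are hypothetical, and the correlation is computed in the standard Pearson lag-1 form, which may differ slightly in normalization from the claim's exact expression:

```python
import math

def measured_variance(p):
    # Claim 22: sum(p_i^2)/N - (sum(p_i)/N)^2
    n = len(p)
    return sum(x * x for x in p) / n - (sum(p) / n) ** 2

def measured_correlation(p):
    # Lag-1 correlation between adjacent samples p_i and p_(i+1)
    x, y = p[:-1], p[1:]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den if den else 0.0
```

A perfectly linear ramp yields a correlation of 1.0; natural images typically measure lag-1 correlations above 0.9, which is what makes transform coding effective.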

23. A method comprising: determining a plurality of characteristics of uncompressed visual data, the plurality of the characteristics including at least one measured variance associated with the uncompressed visual data, the at least one measured variance including one averaged value for each group of frames and at least one measured correlation coefficient associated with the uncompressed visual data, the at least one correlation coefficient including one averaged value for each of the groups of frames respectively corresponding to each of the groups of frames used to determine the at least one measured variance; and specifying an orthogonal transform; specifying a block size associated with the orthogonal transform; specifying one or more quantizing coefficients associated with the specified orthogonal transform; determining calculated variances for the one or more quantizing coefficients from the at least one measured variance and the at least one measured correlation coefficient; applying the orthogonal transform to the uncompressed visual data to produce multiple blocks of transformed visual data; quantizing individual blocks of the transformed visual data by the quantizing coefficients to generate blocks of the quantized transformed visual data; calculating a probability distribution for the individual blocks of the quantized transformed visual data, each of the blocks of the quantized transformed visual data having the block size and wherein the calculating the probability distribution is based at least in part on the at least one measured characteristics of the uncompressed visual data and at least one of the calculated variances; entropy encoding the blocks of the quantized transformed visual data based at least in part on a relative probability of each of the blocks of the quantized transformed visual data to generate entropy-coded blocks of the quantized transformed visual data; and sending the plurality of characteristics of the uncompressed visual 
data and the entropy-coded blocks of the quantized transformed visual data to another device.

24. The method of claim 23, wherein the at least one measured variances of the uncompressed visual data includes one averaged value of a dimension for each of the multiple blocks of the transformed visual data and the at least one correlation coefficients includes one averaged value for a dimension of each of the multiple blocks of the transformed visual data.

25. The method of claim 23, wherein the at least one measured variances of the uncompressed visual data includes one averaged value for each of the multiple blocks of the transformed visual data and the at least one correlation coefficients includes one averaged value per dimension of each of the multiple blocks of the transformed visual data.

26. The method of claim 23, further comprising collecting the individual blocks of the transformed visual data into symbols based on a frequency band associated with individual ones of the plurality of transmission sequences.

27. The method of claim 23, wherein the at least one measured correlation coefficient is calculated using the formula:

ρ_{X(X+1)} = Σ_{i=0}^{N−2} (p_i − μ_X)(p_{i+1} − μ_Y) / √( (Σ_{i=0}^{N−2} p_i² − μ_X) · (Σ_{i=0}^{N−2} p_{i+1}² − μ_Y) ).

28. The method of claim 23, wherein the at least one measured variance is equal to Σ p_i²/N − (Σ p_i/N)².

29. A method comprising: under control of one or more processors configured with executable instructions, receiving uncompressed visual data; measuring a plurality of characteristics of the uncompressed visual data, the plurality of characteristics including at least one measured variance associated with the uncompressed visual data, the at least one measured variance including one averaged value for each group of frames, and at least one measured correlation coefficient associated with the uncompressed visual data, the correlation coefficient including one averaged value for each of the groups of frames respectively corresponding to each of the groups of frames used to determine the at least one measured variance; applying an orthogonal transform to the uncompressed visual data to produce multiple blocks of transformed visual data, each of the blocks of the transformed visual data having a specified block size; quantizing individual blocks of the transformed visual data by specified quantizing coefficients to generate blocks of quantized transformed visual data, each of the blocks of the transformed visual data having the specified block size; and calculating a probability distribution for individual blocks of the quantized transformed visual data, each of the blocks of the quantized transformed visual data having the block size and wherein the calculating the probability distribution is based at least in part on the measured characteristics of the uncompressed visual data.

30. The method of claim 29, further comprising: entropy encoding the blocks of the quantized transformed visual data based at least in part on the relative probability of each of the blocks of the quantized transformed visual data to generate entropy-coded blocks of the quantized transformed visual data; and transmitting or storing the plurality of characteristics of the uncompressed visual data and the entropy-coded blocks of quantized transformed visual data.


Claim Tree

  • 1
    1. A method comprising:
    • under control of one or more processors configured with executable instructions, receiving uncompressed visual data
    • measuring a plurality of characteristics of the uncompressed visual data, comprising: determining at least one measured variance associated with the uncompressed visual data, the at least one measured variance including one averaged value for each group of frames
    • determining at least one measured correlation coefficient associated with the uncompressed visual data, the at least one measured correlation coefficient including an averaged value for each of the groups of frames respectively corresponding to each of the groups of frames used to determine the at least one measured variance
    • specifying an orthogonal transform
    • specifying a block size associated with the orthogonal transform
    • specifying one or more quantizing coefficients associated with the specified orthogonal transform
    • determining calculated variances for the one or more quantizing coefficients from the at least one measured variance and the at least one measured correlation coefficient
    • applying the orthogonal transform to the uncompressed visual data to produce multiple blocks of transformed visual data
    • quantizing individual blocks of the transformed visual data by the quantizing coefficients to generate blocks of the quantized transformed visual data
    • calculating a probability distribution for the individual blocks of the quantized transformed visual data, each of the blocks of the quantized transformed visual data having the block size and wherein the calculating the probability distribution is based at least in part on the measured characteristics of the uncompressed visual data and the at least one of the calculated variances, the measured characteristics includes the at least one measured variance associated with the uncompressed visual data and the at least one measured correlation coefficient associated with the uncompressed visual data
    • entropy encoding the blocks of the quantized transformed visual data based at least in part on a relative probability of each of the blocks of the quantized transformed visual data to generate entropy-coded blocks of the quantized transformed visual data
    • and transmitting or storing the plurality of characteristics of the uncompressed visual data and the entropy-coded blocks of the quantized transformed visual data.
    • 2. The method of claim 1, further comprising:
      • collecting the individual blocks of the quantized transformed visual data into symbols based on a frequency band.
    • 3. The method of claim 1, wherein
      • the at least one measured variances of the uncompressed visual data includes the averaged value for each of the blocks of the transformed visual data and the at least one correlation coefficients includes one averaged value per frame.
    • 4. The method of claim 1, wherein
      • the at least one measured variances of the uncompressed visual data includes one averaged value for each of the blocks of the transformed visual data and the at least one correlation coefficients includes one averaged value for each of the blocks of the transformed visual data.
    • 5. The method of claim 1, wherein
      • the at least one measured variances of the uncompressed visual data includes one averaged value for each of the blocks of the transformed visual data and the at least one correlation coefficients includes one averaged value per dimension of each of the blocks of the transformed visual data.
    • 6. The method of claim 1, wherein
      • the at least one measured variances of the uncompressed visual data includes one averaged value for each of the blocks of the transformed visual data and the at least one correlation coefficients includes one averaged value per dimension of each of the blocks of the transformed visual data.
    • 7. The method of claim 1, wherein
      • the at least one measured variances of the uncompressed visual data includes one averaged value per dimension for each group of frames and the at least one correlation coefficients includes one averaged value per dimension for each of the groups of frames.
    • 8. The method of claim 1, wherein
      • the at least one measured variances of the uncompressed visual data includes one averaged value per dimension for each of the blocks of the transformed visual data and the at least one correlation coefficients includes one averaged value per dimension for the each of the blocks of the transformed visual data.
    • 9. The method of claim 1, further comprising
      • transmitting or storing the at least one measured variance of the uncompressed visual data and the at least one measured correlation coefficient of the uncompressed visual data with the plurality of characteristics of the uncompressed visual data and the entropy-coded blocks of the quantized transformed visual data.
    • 10. The method of claim 1, wherein
      • the orthogonal transform is a discrete cosine transform.
    • 11. The method of claim 1, wherein
      • the uncompressed visual data comprises a two-dimensional still image.
    • 12. The method of claim 1, wherein
      • the block size comprises the entire image.
    • 13. The method of claim 1, wherein
      • the uncompressed visual data comprises three-dimensional video data.
    • 14. The method of claim 1, wherein
      • the block size is a number of frames by a size of a single frame.
    • 15. The method of claim 1, further comprising
      • organizing the one or more quantizing coefficients into a decreasing order based on a component variance associated with each of the blocks of the transformed visual data.
    • 16. The method of claim 1, further comprising
      • organizing the one or more quantizing coefficients into a decreasing order based on a component variance.
    • 17. The method of claim 1, further comprising
      • organizing the one or more quantizing coefficients into bands of equal weight based on an order of decreasing component variance.
    • 18. The method of claim 1, wherein
      • the entropy encoding the blocks of the quantized transformed visual data is based at least in part on a Huffman coding.
    • 19. The method of claim 1, wherein
      • the entropy encoding the blocks of the quantized transformed visual data is based at least in part on an arithmetic coding.
    • 20. The method of claim 1, wherein
      • the transform is an orthonormal wavelet.
    • 21. The method of claim 1, wherein
      • the at least one measured correlation coefficient is calculated using the formula: ρ_{X(X+1)} = Σ_{i=0}^{N−2} (p_i − μ_X)(p_{i+1} − μ_Y) / √( (Σ_{i=0}^{N−2} p_i² − μ_X) · (Σ_{i=0}^{N−2} p_{i+1}² − μ_Y) ).
    • 22. The method of claim 1, wherein
      • the at least one measured variance is equal to Σ p_i²/N − (Σ p_i/N)².
  • 23
    23. A method comprising:
    • determining a plurality of characteristics of uncompressed visual data, the plurality of the characteristics including at least one measured variance associated with the uncompressed visual data, the at least one measured variance including one averaged value for each group of frames and at least one measured correlation coefficient associated with the uncompressed visual data, the at least one correlation coefficient including one averaged value for each of the groups of frames respectively corresponding to each of the groups of frames used to determine the at least one measured variance
    • and specifying an orthogonal transform
    • specifying a block size associated with the orthogonal transform
    • specifying one or more quantizing coefficients associated with the specified orthogonal transform
    • determining calculated variances for the one or more quantizing coefficients from the at least one measured variance and the at least one measured correlation coefficient
    • applying the orthogonal transform to the uncompressed visual data to produce multiple blocks of transformed visual data
    • quantizing individual blocks of the transformed visual data by the quantizing coefficients to generate blocks of the quantized transformed visual data
    • calculating a probability distribution for the individual blocks of the quantized transformed visual data, each of the blocks of the quantized transformed visual data having the block size and wherein the calculating the probability distribution is based at least in part on the at least one measured characteristics of the uncompressed visual data and at least one of the calculated variances
    • entropy encoding the blocks of the quantized transformed visual data based at least in part on a relative probability of each of the blocks of the quantized transformed visual data to generate entropy-coded blocks of the quantized transformed visual data
    • and sending the plurality of characteristics of the uncompressed visual data and the entropy-coded blocks of the quantized transformed visual data to another device.
    • 24. The method of claim 23, wherein
      • the at least one measured variances of the uncompressed visual data includes one averaged value of a dimension for each of the multiple blocks of the transformed visual data and the at least one correlation coefficients includes one averaged value for a dimension of each of the multiple blocks of the transformed visual data.
    • 25. The method of claim 23, wherein
      • the at least one measured variances of the uncompressed visual data includes one averaged value for each of the multiple blocks of the transformed visual data and the at least one correlation coefficients includes one averaged value per dimension of each of the multiple blocks of the transformed visual data.
    • 26. The method of claim 23, further comprising
      • collecting the individual blocks of the transformed visual data into symbols based on a frequency band associated with individual ones of the plurality of transmission sequences.
    • 27. The method of claim 23, wherein
      • the at least one measured correlation coefficient is calculated using the formula: ρ_{X(X+1)} = Σ_{i=0}^{N−2} (p_i − μ_X)(p_{i+1} − μ_Y) / √( (Σ_{i=0}^{N−2} p_i² − μ_X) · (Σ_{i=0}^{N−2} p_{i+1}² − μ_Y) ).
    • 28. The method of claim 23, wherein
      • the at least one measured variance is equal to Σ p_i²/N − (Σ p_i/N)².
  • 29
    29. A method comprising:
    • under control of one or more processors configured with executable instructions, receiving uncompressed visual data
    • measuring a plurality of characteristics of the uncompressed visual data, the plurality of characteristics including at least one measured variance associated with the uncompressed visual data, the at least one measured variance including one averaged value for each group of frames, and at least one measured correlation coefficient associated with the uncompressed visual data, the correlation coefficient including one averaged value for each of the groups of frames respectively corresponding to each of the groups of frames used to determine the at least one measured variance
    • applying an orthogonal transform to the uncompressed visual data to produce multiple blocks of transformed visual data, each of the blocks of the transformed visual data having a specified block size
    • quantizing individual blocks of the transformed visual data by specified quantizing coefficients to generate blocks of quantized transformed visual data, each of the blocks of the transformed visual data having the specified block size
    • and calculating a probability distribution for individual blocks of the quantized transformed visual data, each of the blocks of the quantized transformed visual data having the block size and wherein the calculating the probability distribution is based at least in part on the measured characteristics of the uncompressed visual data.
    • 30. The method of claim 29, further comprising:
      • entropy encoding the blocks of the quantized transformed visual data based at least in part on the relative probability of each of the blocks of the quantized transformed visual data to generate entropy-coded blocks of the quantized transformed visual data
      • and transmitting or storing the plurality of characteristics of the uncompressed visual data and the entropy-coded blocks of quantized transformed visual data.

Description

BACKGROUND

The present invention relates generally to compression of still image and moving video data, and more particularly to calculating the statistics of the quantized transform representation of the video from the variances and correlations measured in pixel space. Once the statistical behavior of the video is modeled, the video streams can be collected into successive refinement streams (progressive mode), and the probabilities of the constructed symbols can be calculated for the purposes of entropy removal. The measured variances and correlations suffice to reconstruct the compressed video streams for any frame or group of frames.
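The modeling step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes a one-dimensional block, an AR(1) autocorrelation model R_ij = σ²·ρ^|i−j| built from the measured pixel variance σ² and correlation ρ, and an orthonormal DCT-II, under which the variance of transform component k is the k-th diagonal entry of C·R·Cᵀ:

```python
import math

def dct_matrix(n):
    # Rows are orthonormal DCT-II basis vectors
    rows = []
    for k in range(n):
        s = math.sqrt((1.0 if k == 0 else 2.0) / n)
        rows.append([s * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                     for i in range(n)])
    return rows

def component_variances(pixel_variance, correlation, n=8):
    # Autocorrelation of the pixels under an AR(1) model
    R = [[pixel_variance * correlation ** abs(i - j) for j in range(n)]
         for i in range(n)]
    C = dct_matrix(n)
    # Variance of transform component k is (C R C^T)_kk
    out = []
    for k in range(n):
        CRk = [sum(C[k][i] * R[i][j] for i in range(n)) for j in range(n)]
        out.append(sum(CRk[j] * C[k][j] for j in range(n)))
    return out
```

Because the transform is orthonormal, the component variances sum to n times the pixel variance; for highly correlated input most of that energy concentrates in the low-frequency components, which is exactly what the transmission-order and entropy-removal steps exploit.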

SUMMARY

In accordance with one aspect of the invention, a method is provided for the optimal rearrangement of components into a transmission stream based on the calculated variance of individual quantized transform components from the measured variance and correlation of the raw untransformed visual samples.

A second aspect of the invention provides a method for the optimal calculation of entropy reduction tables for a transmission stream based on the calculated symbol probabilities based on the calculated probability distributions of individual quantized transform components.

A final aspect of the invention provides a method for the parallel construction of transmission stream rearrangement, symbol construction and entropy tables between compressing apparatus and decompressing apparatus via communication of the measured variances and correlations of the raw untransformed visual samples.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 depicts a prior-art compressor decomposed into the steps of transformation, quantization, transmission order sequencing, symbol collection, and entropy removal.

FIG. 2 depicts a prior-art compressor featuring per-block transmission order sequencing.

FIG. 3 depicts a prior-art compressor featuring three forms of progressive transmission order encoding: spectral selection, successive refinement, and hierarchical.

FIG. 4 depicts a prior-art compressor featuring various means of communication of entropy coding tables; pre-shared, in-band, and multiple tables.

FIG. 5 depicts a typical embodiment of the current invention into a compression apparatus and a decompression apparatus.

FIG. 6 depicts the steps typically required of a compression unit in order to perform block-by-block compression.

FIG. 7 illustrates hierarchical subband decomposition and compression.

FIG. 8 illustrates the calculations required to model per-quantized transform component variance from pixel variance and pixel correlation.

FIG. 9 illustrates the calculations required to model per-quantized transform component variance from pixel variance and pixel correlation.

DETAILED DESCRIPTION

As depicted in FIG. 1, most prior-art still image and motion video compression algorithms perform a similar sequence of steps: an input stream 1010 is transform coded 1020, after which a process of motion estimation 1030 followed by equal-weight quantization 1040 or a process of visually-weighted quantization 1050 takes place, the resulting data is sequenced into transmission order 1060, symbols are collected 1070, and an entropy removal step 1080 results in a compressed data stream 1090. The essential innovation of the current invention is in the area of prediction of statistical behavior, which influences the process of transmission order sequence, symbol collection, and entropy removal. The current invention does not address the topic of motion estimation.

The JPEG zig-zag transmission order illustrated in FIG. 2 is a standard prior-art means of sequencing quantized coefficients using a fixed pattern based on the average of statistics collected across a variety of sample content. The zig-zag pattern visits coefficients in order of increasing probability of a zero value, and is applied to one fixed-size block of the image at a time.
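As a sketch, the zig-zag scan positions can be generated by walking the anti-diagonals of the block and reversing every other one (function name hypothetical):

```python
def zigzag_order(n=8):
    # Walk anti-diagonals d = i + j, reversing direction on even
    # diagonals, so coefficients are visited in the JPEG zig-zag order.
    out = []
    for d in range(2 * n - 1):
        cells = [(i, d - i) for i in range(n) if 0 <= d - i < n]
        out.extend(cells if d % 2 else reversed(cells))
    return out
```

The fixed pattern approximates decreasing coefficient energy for typical content; the invention instead derives the ordering from the modeled per-component variances of the actual source.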

As depicted in FIG. 3, the prior-art JPEG-2000 standard implements various forms of progressive transmission, including spectral selection (FIG. 3a), successive refinement (FIG. 3b), and hierarchical (FIG. 3c). FIG. 3a depicts a plurality of two 8×8 quantized transform blocks 3010 covered by a plurality of three spectral bands, of which spectral band 3020 is typical. Data is collected within the band across all blocks 3030, symbols are collected from the data within each band, and each band is then entropy coded and transmitted.

FIG. 3b depicts a plurality of two 8×8 quantized transform blocks represented by a typical 2×2 entry 3110. The 2×2 entry of eight-bit numbers is divided into two successive refinement bands of four-bit representation, one of which is depicted 3120. The first four bits are collected across transform blocks into a transmission stream 3130 from which symbols will be collected and entropy coding will take place. The second four bits are similarly collected into transmission stream 3140.
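The bit-plane split of FIG. 3b can be sketched as follows (hypothetical helper names; assumes unsigned eight-bit magnitudes divided into two four-bit bands):

```python
def split_refinement(values, low_bits=4):
    # Coarse stream carries the high bits, refinement stream the low bits
    mask = (1 << low_bits) - 1
    high = [v >> low_bits for v in values]
    low = [v & mask for v in values]
    return high, low

def merge_refinement(high, low, low_bits=4):
    # Decoder recombines the two streams losslessly
    return [(h << low_bits) | l for h, l in zip(high, low)]
```

A decoder that has received only the coarse stream can already reconstruct each value to within 2^low_bits; the refinement stream restores the exact values.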

FIG. 3c depicts a first-transmitted low-resolution image 3210, followed by a second-transmitted medium-resolution image 3220, and a final high-resolution image 3230. Each separate-resolution image is used to create its own transmission stream.

FIG. 4 depicts typical prior-art means of communicating entropy encoding statistics between compressing and decompressing apparatuses. It should be noted that these entropy statistics may be represented directly as a table of relative probabilities for the purposes of arithmetic encoding, or as Huffman tables.

The original JPEG specification provides for a pre-defined entropy pre-shared encoding table as depicted in FIG. 4a. A preshared table 4020 is known to compressor 4010, and is used to generate the compressed data stream 4030. The preshared table 4050 known to the decompressor 4040 is used to decompress the received data stream. Pre-shared tables are intended to provide good compression based on the collection of statistics for a large collection of images.

As illustrated in FIG. 4b, tables may be dynamically calculated and embedded in the transmission stream. A compressor 4110 calculates a table 4120, which it then transmits in-band 4130 in the compressed transmission stream 4140. The decompressor 4150 reads an in-band table 4160 and uses it to decompress the following compressed stream. This strategy enables better compression at the overhead cost of hundreds to thousands of bytes.

Each JPEG-2000 progressive transmission approach described above in FIG. 3 requires assembly of symbols over each progressive decomposition step (spectral band, successive bit representation, or resolution), giving different symbols and symbol distributions. As depicted in FIG. 4c, a JPEG-2000 compressor 4210 calculates up to four entropy coding tables 4220 which it then transmits in-band 4230 in the compressed transmission stream 4240. The decompressor 4250 reads the in-band tables 4260 and uses them, as selected by each progressive stream, to decompress the following compressed data. If the tables are calculated to reflect typical progressive stream behavior, the tables may potentially be reusable.

Much effort has been expended on incremental increases in the efficiency of communicating entropy coding statistics between compressing and decompressing apparatuses, but no significant advances can be claimed over the prior-art techniques described herein. The current invention discloses a far more efficient means of developing entropy tables independently in compressor and decompressor.

FIG. 5 depicts a compression apparatus 5010 and a decompression apparatus 5020. Said compression apparatus 5010 is fed a sequential stream of visual data 5110, and factors said sequential stream of visual data 5110 into a plurality of multidimensional subblocks 5120. Said plurality of multidimensional subblocks 5120 is processed singly or jointly by a correlation measurement unit 5130 to produce a flow of measured variance values and measured correlation values to the decompression apparatus 5210 and a duplicate flow of measured variance values and measured correlation values to a compressor 5140. A compression unit 5150 uses said duplicate flow of measured variance values and measured correlation values to a compressor 5140 and said plurality of multidimensional subblocks 5120 to produce a compressed stream to the decompression apparatus 5220. Said decompression apparatus 5020 is comprised of a decompressor 5310 which processes said flow of measured variance values and measured correlation values to the decompression apparatus 5210 and said compressed stream to the decompression apparatus 5220 to produce a plurality of reconstructed multidimensional subblocks 5320.

FIG. 6 depicts a decomposition of said compression apparatus 6010 into typical processing steps used to perform individual block-by-block compression. Said flow of measured variance values and measured correlation values to the decompression apparatus 5210 results in a set of variance values and correlation values in the x, y and z directions valid for one subblock 6110 of said plurality of multidimensional subblocks. Said set of measured variance values and measured correlation values in the x, y and z directions valid for one subblock 6020 is processed through a step 6030 which calculates the variances for said quantized transform components of said one subblock 6110. In a further processing step 6040, said calculated variances for said quantized transform components of said one subblock from said step 6030 are used to calculate relative probabilities for each symbol.

The quantized transform components of said one subblock 6110 of said plurality of multidimensional subblocks are processed through a step 6120 to reorder quantized transform components into order of greatest probability of zero (lowest variance). Said step 6120 uses said calculated variances for said quantized transform components from said step 6030 to perform its sort processing.
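The reordering of step 6120 might be sketched as below (illustrative Python; the text does not specify whether the most-likely-zero components lead or trail, so this sketch places them at the tail, which favors long zero runs for the subsequent symbol collection):

```python
def reorder_by_variance(components, variances):
    """Sort flattened component indices by descending calculated variance,
    so the lowest-variance (most-likely-zero) components cluster at the
    tail. Returns the reordered components and the index order used."""
    order = sorted(range(len(components)), key=lambda i: -variances[i])
    return [components[i] for i in order], order

reordered, order = reorder_by_variance([5, 0, 9, 0], [1.0, 0.1, 4.0, 0.05])
# reordered == [9, 5, 0, 0]; order == [2, 0, 1, 3]
```

Because both encoder and decoder derive the same calculated variances, the same sort order can be reproduced on each side without transmitting it.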

Said reordered quantized transform components are then processed through a step 6130 of collection of said reordered quantized transform components into symbols. Each said collected symbol is then processed through a step 6140 of entropy coding of said symbol into a short sequence of bits. Said step 6140 uses said calculated relative probabilities for each symbol from said step 6040 in its entropy-removing calculations.

Said short sequence of bits is finally processed through an aggregation step 6150 to concatenate generated bit sequences into a transport stream.

FIG. 7 depicts a typical implementation of the hierarchical type of progressive transmission. A sequential stream of visual data 7010 is subsampled from said sequential stream of visual data 5110. Said subsampled sequential stream of visual data 7010 is factored into a plurality of multidimensional subblocks 7020. Said plurality of multidimensional subblocks 7020 is then processed subblock by subblock by said compression unit 5150 to produce a sequence of compressed bits for transmission.

Once said subsampled sequential stream of visual data 7010 has been processed through said compression unit 5150, a higher-resolution sequential stream of visual data less subband data 7110 may be processed. Said higher-resolution sequential stream of visual data less subband data 7110 is comprised of said sequential stream of visual data 5110 in which each and every coefficient comprising said subsampled sequential stream of visual data 7010 is set to 0 with a variance of 0. Said higher-resolution sequential stream of visual data less subband data 7110 is factored into a plurality of multidimensional subblocks 7120. Said plurality of multidimensional subblocks 7120 is then processed subblock by subblock by said compression unit 5150 to produce a sequence of compressed bits for transmission.

FIG. 8 illustrates the calculations required to model per-quantized-transform-component variance from pixel variance and pixel correlation. A matrix DCTx 8010 is comprised of the individual constants of the discrete cosine transform convolution. Said matrix DCTx 8010 is shown as the 4×4 discrete cosine transform convolution, but may in practice be composed of any orthonormal transform. Similar matrices DCTy and DCTz (in the case of three-dimensional said multidimensional subblocks 7120) will assume the length of each dimension of said multidimensional subblocks 7020.

A covariance matrix Apixel,x 8020 is composed of the multiplication of said measured pixel variance in the x direction by the autocorrelation matrix derived from said measured pixel correlation in the x direction. Similar matrices Apixel,y and Apixel,z (in the case of three-dimensional said multidimensional subblocks 7120) will utilize the measured pixel variance, pixel correlations and length of each dimension of the said multidimensional subblocks 7020.

A DCT covariance matrix Ax 8030 is calculated as the product of said matrix DCTx 8010, said covariance matrix Apixel,x 8020, and the transpose of said matrix DCTx 8010.

The variance of the quantized transform component 8040 of index u,v,w within said multidimensional subblocks 7020, σ²_u,v,w, is calculated as the product of the trace of said DCT covariance matrix Ax 8030 with the trace of said DCT covariance matrix Ay (and with the trace of said DCT covariance matrix Az if said multidimensional subblocks 7020 are three-dimensional), divided by the quantizer value for said quantized transform component 8040 of index u,v,w within said multidimensional subblocks 7020.
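The FIG. 8 model can be sketched in Python with NumPy. The first-order autocorrelation form ρ^|i−j|, the use of equal statistics in x and y, and the final scaling by the quantizer are assumptions of this sketch, not a definitive implementation of the invention:

```python
import math
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix; rows are the transform basis vectors."""
    d = np.zeros((n, n))
    for u in range(n):
        scale = math.sqrt(1.0 / n) if u == 0 else math.sqrt(2.0 / n)
        for x in range(n):
            d[u, x] = scale * math.cos((2 * x + 1) * u * math.pi / (2 * n))
    return d

def component_variances(n, pixel_var, rho, quantizers):
    """Variance of each quantized DCT component of an n x n block,
    modeling the pixels as a first-order autoregressive field with
    variance pixel_var and adjacent-pixel correlation rho (equal
    statistics assumed in the x and y directions for brevity)."""
    idx = np.arange(n)
    # Covariance matrix A_pixel: variance times autocorrelation rho**|i-j|
    a_pixel = pixel_var * rho ** np.abs(idx[:, None] - idx[None, :])
    d = dct_matrix(n)
    c = d @ a_pixel @ d.T            # DCT-domain covariance for one axis
    diag = np.diag(c)                # per-index transform variances
    # Separable model: combine the x and y diagonals, then scale by the
    # quantizer as the text describes (the exact scaling is an assumption).
    var = np.outer(diag, diag) / pixel_var
    return var / quantizers
```

One useful sanity check: with unit quantizers, the component variances sum to n² times the pixel variance, since an orthonormal transform preserves total energy.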

FIG. 9 illustrates the process of calculating symbol probabilities. The maximum number of bits N_MAX,u,v,w 9010 required to encode any said quantized transform component of index u,v,w within said quantized transform subblock is calculated as the rounded-up integer of the logarithm base 2 of the product of the number of bits representing each pixel N_IN and the square root of the product of the lengths of said multidimensional blocks, divided by the quantizer Q_u,v,w of said quantized transform component.

The probability P_u,v,w(x==0) 9020 that any quantized transform component of index u,v,w within said quantized transform subblock is 0 is calculated from the cumulative distribution function of a normal distribution with expectation of 0 and variance equal to that of said quantized transform component of index u,v,w within said quantized transform.

The probability P_u,v,w(log2(x)==n) 9030 that any quantized transform component of index u,v,w within said quantized transform subblock has n bits in its representation is calculated from the cumulative distribution function of a normal distribution with expectation of 0 and variance equal to that of said quantized transform component of index u,v,w within said quantized transform.
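The two probabilities 9020 and 9030 can be evaluated with the normal CDF, for example as below (a Python sketch; the half-step rounding interval used to map a quantized magnitude to its bit length is an assumption of this sketch):

```python
import math

def norm_cdf(x, sigma):
    """CDF of a zero-mean normal distribution with standard deviation sigma."""
    return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

def p_zero(sigma):
    """P(component quantizes to 0): mass of N(0, sigma^2) on (-0.5, 0.5)."""
    return norm_cdf(0.5, sigma) - norm_cdf(-0.5, sigma)

def p_nbits(n, sigma):
    """P(component needs exactly n bits): mass with magnitude in
    [2**(n-1) - 0.5, 2**n - 0.5), counting both tails."""
    lo, hi = 2.0 ** (n - 1) - 0.5, 2.0 ** n - 0.5
    return 2.0 * (norm_cdf(hi, sigma) - norm_cdf(lo, sigma))
```

The intervals tile the real line, so p_zero plus the p_nbits terms over all n sums to one, giving a proper probability distribution for entropy coding.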

A typical symbol S_u,v,w(r, b) 9040, comprised of a run length of r zeros followed by a non-zero value of length b, is calculated as a conditional probability over the symbols in the order of said rearrangement of said quantized transform components within said quantized transform subblock. The probability of the ith quantized transform component following quantized transform component index u,v,w within said quantized transform subblock being 0 is written P_(u,v,w)+i(x==0). The probability of the rth quantized.
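Assuming independence of successive reordered components (an assumption of this sketch, not stated in the text), the run-length symbol probability might then be assembled as:

```python
def p_symbol(r, b, p_zero_seq, p_nbits_seq):
    """Probability of a symbol (r, b): r zeros followed by a non-zero
    value of b bits. p_zero_seq[i] is the zero probability of the i-th
    following component; p_nbits_seq[i][b] is its b-bit probability."""
    p = 1.0
    for i in range(r):
        p *= p_zero_seq[i]        # run of r zeros
    return p * p_nbits_seq[r][b]  # terminated by a b-bit non-zero value

p = p_symbol(2, 1, [0.5, 0.5, 0.5], [{1: 0.25}] * 4)
# p == 0.5 * 0.5 * 0.25 == 0.0625
```

These per-symbol probabilities are what the entropy coder of step 6140 would consume, and both sides can compute them from the same transmitted variance and correlation statistics.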

While the present invention has been described in its preferred version or embodiment with some degree of particularity, it is understood that this description is intended as an example only, and that numerous changes in the composition or arrangements of apparatus elements and process steps may be made within the scope and spirit of the invention. In particular, rearrangement and recalculation of statistics may be made to support various modes of progressive transmission, including spectral banding or bitwise refinement. Further, pixel statistics may be measured and transmitted on a per-block or global basis, and may be measured in each dimension or averaged across all dimensions. Block sizes may also be taken to be as large as the entire frame, as would be typical when using the wavelet transform.

With regard to the processes, systems, methods, heuristics, etc. described herein, it should be understood that, although the steps of such processes, etc. have been described as occurring according to a certain ordered sequence, such processes could be practiced with the described steps performed in an order other than the order described herein. It further should be understood that certain steps could be performed simultaneously, that other steps could be added, or that certain steps described herein could be omitted. In other words, the descriptions of processes herein are provided for the purpose of illustrating certain embodiments, and should in no way be construed so as to limit the claimed invention.

Accordingly, it is to be understood that the above description is intended to be illustrative and not restrictive. Many embodiments and applications other than the examples provided would be apparent to those of skill in the art upon reading the above description. The scope of the invention should be determined, not with reference to the above description, but should instead be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. It is anticipated and intended that future developments will occur in the arts discussed herein, and that the disclosed systems and methods will be incorporated into such future embodiments. In sum, it should be understood that the invention is capable of modification and variation and is limited only by the following claims.

All terms used in the claims are intended to be given their broadest reasonable constructions and their ordinary meanings as understood by those skilled in the art unless an explicit indication to the contrary is made herein. In particular, use of singular articles such as “a,” “the,” and “said” should be read to recite one or more of the indicated elements unless a claim recites an explicit limitation to the contrary.


More Patents & Intellectual Property

PatSnap Solutions

PatSnap solutions are used by R&D teams, legal and IP professionals, those in business intelligence and strategic planning roles and by research staff at academic institutions globally.

PatSnap Solutions
Search & Analyze
The widest range of IP search tools makes getting the right answers and asking the right questions easier than ever. One click analysis extracts meaningful information on competitors and technology trends from IP data.
Business Intelligence
Gain powerful insights into future technology changes, market shifts and competitor strategies.
Workflow
Manage IP-related processes across multiple teams and departments with integrated collaboration and workflow tools.
Contact Sales
Clsoe
US10021423 Method perform 1 US10021423 Method perform 2 US10021423 Method perform 3