Patent Analysis of "Adaptive coding of a prediction error in hybrid video coding"

Updated: 12 June 2019

Patent Registration Data

Publication Number

US10021424

Application Number

US14/227922

Application Date

27 March 2014

Publication Date

10 July 2018

Current Assignee

NARROSCHKE, MATTHIAS; MUSMANN, HANS-GEORG

Original Assignee (Applicant)

NARROSCHKE, MATTHIAS; MUSMANN, HANS-GEORG

International Classification

H04N7/12, H04N19/129, H04N19/13, H04N19/12, H04N19/124

Cooperative Classification

H04N19/65, H04N19/70, H04N19/12, H04N19/124, H04N19/126

Inventor

NARROSCHKE, MATTHIAS; MUSMANN, HANS-GEORG

Patent Images

This patent contains figures illustrating the invention and its embodiments.


Abstract

The present invention relates to a method for coding a video signal using hybrid coding, comprising: reducing temporal redundancy by block based motion compensated prediction in order to establish a prediction error signal, deciding whether to transform the prediction error signal into the frequency domain, or to maintain the prediction error signal in the spatial domain for encoding.


Claims

1. A method, comprising: receiving a video signal;coding the video signal using hybrid coding, the coding including: reducing temporal redundancy by block based motion compensated prediction in order to establish a prediction error signal, deciding whether to code the transformed signal resulting from transforming the prediction error signal into the frequency domain, or to code the prediction error signal in the spatial domain, coding the transformed signal in the frequency domain in response to deciding to code the transformed signal, and coding the prediction error signal in the spatial domain in response to deciding to code the prediction error signal in the spatial domain, wherein coding the prediction error signal in the spatial domain includes coding samples of the prediction error signal in the spatial domain, and coding the transformed signal in the frequency domain includes coding transform coefficients of the transformed signal in the frequency domain using a same method as used for coding the samples of the prediction error signal in the spatial domain, wherein using the same method includes coding according to CABAC or CAVLC; and transmitting or storing the coded prediction error signal.

2. The method according to claim 1, wherein coding the samples of the prediction error signal in the spatial domain and coding the transform coefficients of the transformed signal in the frequency domain both use a same context modeling.

3. The method according to claim 1, wherein: the prediction error signal and the transformed signal each includes a plurality of blocks having a block size, the deciding includes deciding whether to code each block of the transformed signal into the frequency domain, or code each block of the prediction error signal in the spatial domain, and coding the transformed signal includes using a transform having a size equal to the block size.

4. The method according to claim 3, wherein the block size is 4×4.

5. The method according to claim 1, wherein: the prediction error signal includes prediction error samples provided in blocks, the transformed signal includes transform coefficients provided in blocks, coding the prediction error signal in the spatial domain includes scanning the prediction error samples of each block of prediction error samples according to a first scanning order, and coding the transformed signal in the frequency domain includes scanning the transform coefficients of each block of transform coefficients according to a second scanning order that is different from the first scanning order.

6. The method according to claim 1, wherein the prediction error signal includes prediction error samples provided in blocks, and coding the prediction error signal in the spatial domain includes scanning the prediction error samples of each block according to a scanning order and providing signaling information indicating the scanning order used for the block.

7. A non-transitory computer readable medium including instructions that cause one or more computers to implement a method comprising: producing a coded video signal that includes coded information of a prediction error signal, the producing including coding the prediction error signal partially in the spatial domain and partially in the frequency domain, wherein the coding is performed according to the method of claim 1.

8. The non-transitory computer readable medium according to claim 7, wherein the producing includes producing information relating to the domain in which a slice, or a macroblock, or a block is coded, in particular information whether a slice, or a macroblock, or a block is coded in the spatial domain or in the frequency domain.

9. The non-transitory computer readable medium of claim 8, comprising at least one of a slice_fd_sd_coding_flag, a mb_fd_sd_coding_flag, and a fd_or_sd_flag information relating to the coding used for a slice, a macroblock, and a block, respectively.

10. A method, comprising: receiving a transmitted or stored, coded video signal comprising coded video data that include coded frequency domain data representing a prediction error signal of the coded video signal in the frequency domain and coded spatial domain data representing a prediction error signal of the coded video signal in the spatial domain,decoding the coded video signal using hybrid decoding, the decoding including: decoding the coded frequency domain data to obtain decoded frequency domain data, decoding the coded spatial domain data to obtain decoded spatial domain data, performing an inverse transform of the decoded frequency domain data from the frequency domain into the spatial domain, and skipping an inverse transform of the decoded spatial domain data, wherein decoding the coded spatial domain data includes decoding samples of the coded spatial domain data, and decoding the coded frequency domain data includes decoding transform coefficients of the coded frequency domain data using a same method as used for decoding the samples of the coded spatial domain data, wherein using the same method includes decoding according to CABAC or CAVLC, and outputting the decoded video signal.

11. The method according to claim 10, wherein decoding the samples of the coded spatial domain data and decoding the transform coefficients of the coded frequency domain data both use a same context modeling.

12. The method according to claim 10, wherein: the coded spatial domain data and coded frequency domain data include a plurality of coded blocks having a block size, and decoding the coded frequency domain data includes using an inverse transform having a size equal to the block size.

13. The method according to claim 12, wherein the block size is 4×4.

14. The method for decoding according to claim 10, wherein: the coded spatial domain data include prediction error samples in a first scanning order, the coded frequency domain data include transform coefficients received in a second scanning order, decoding the prediction error samples into spatial domain blocks is performed according to the first scanning order, and decoding the transform coefficients is performed according to the second scanning order.

15. An apparatus, comprising: a receiver for receiving a video signal,a coder for coding the video signal using hybrid coding, the coder including: temporal redundancy reduction means for reducing temporal redundancy by block based motion compensated prediction in order to establish a prediction error signal, transforming means for transforming the prediction error signal into a transformed signal in the frequency domain, adaptive control means for deciding whether to code a transformed signal resulting from transforming the prediction error signal into the frequency domain, or to code the prediction error signal in the spatial domain, and coding means for coding the transformed signal in the frequency domain in response to deciding to code the transformed signal, and for coding the prediction error signal in the spatial domain in response to deciding to code the prediction error signal in the spatial domain, wherein the coding means are configured to code samples of the prediction error signal in the spatial domain, and to code transform coefficients of the transformed signal in the frequency domain using a same method as used for coding the samples of the prediction error signal in the spatial domain, wherein using the same method includes coding according to CABAC or CAVLC, and a transmitter for transmitting the coded video signal or storage for storing the coded video signal.

16. The apparatus according to claim 15, wherein the coding means are configured to code the samples of the prediction error signal in the spatial domain and to code the transform coefficients of the transformed signal in the frequency domain both by using a same context modeling.

17. An apparatus, comprising: a receiver for receiving a transmitted or stored, coded video signal comprising coded video data that include coded frequency domain data representing a prediction error signal of the coded video signal in the frequency domain and coded spatial domain data representing a prediction error signal of the coded video signal in the spatial domain,a decoder for decoding the coded video signal using hybrid decoding, the decoder including: decoding means for decoding the coded frequency domain data to obtain decoded frequency domain data, and for decoding the coded spatial domain data to obtain decoded spatial domain data, adaptive control means for adaptively determining whether a portion of the coded video data represents coded frequency domain data or coded spatial domain data, and inverse transforming means for performing an inverse transform of the decoded frequency domain data from the frequency domain into the spatial domain, wherein the adaptive control means are adapted to skip the inverse transforming means for the decoded spatial domain data, wherein the decoding means are configured to decode samples of the coded spatial domain data, and to decode transform coefficients of the coded frequency domain data using a same method as used for decoding the samples of the coded spatial domain data, wherein using the same method includes decoding according to CABAC or CAVLC, and an output for outputting the decoded video signal.

18. The apparatus according to claim 17, wherein the decoding means are configured to decode the samples of the coded spatial domain data and to decode the transform coefficients of the coded frequency domain data both by using a same context modeling.



Description

BACKGROUND

Technical Field

The invention relates to a method of coding and decoding, a coder and a decoder, and data signals using adaptive coding of the prediction error.

Description of the Related Art

Current standardized video coding methods are based on hybrid coding. Hybrid coding provides a coding step in the time domain and a coding step in the spatial domain. First, the temporal redundancy of video signals is reduced by using a block based motion compensated prediction between the image block to be coded and a reference block, determined by a motion vector, from an image that has already been transmitted. The remaining prediction error samples are arranged in blocks and transformed into the frequency domain, resulting in a block of coefficients. These coefficients are quantised and scanned according to a fixed and well-known zigzag scanning scheme, which starts with the coefficient representing the DC value. In a typical representation, this coefficient is positioned among the low frequency coefficients in the top left corner of a block. The zigzag scanning produces a one-dimensional array of coefficients, which is entropy-coded by a subsequent coder. The coder is optimised for an array of coefficients with decreasing energy. Since the order of coefficients within a block is predetermined and fixed, the zigzag scanning produces an array of coefficients of decreasing energy if the prediction error samples are correlated, and the subsequent coding step may be optimised for this situation. For this purpose, the latest standard H.264/AVC proposes Context-Based Adaptive Binary Arithmetic Coding (CABAC) or Context-Adaptive Variable-Length Coding (CAVLC). However, the coding efficiency of the transform is high only if the prediction error samples are correlated. For samples that are only marginally correlated in the spatial domain, the transform is less efficient.
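As an illustrative sketch of the prior-art serialisation described above (not taken from the patent text itself), the fixed zigzag scan visits a 4×4 block of transform coefficients starting at the DC coefficient in the top left corner, so that correlated prediction errors yield an array of roughly decreasing magnitude:

```python
# Zigzag visiting order for a 4x4 coefficient block, as (row, col) pairs,
# starting at the DC coefficient (0, 0) in the top-left corner.
ZIGZAG_4X4 = [
    (0, 0), (0, 1), (1, 0), (2, 0),
    (1, 1), (0, 2), (0, 3), (1, 2),
    (2, 1), (3, 0), (3, 1), (2, 2),
    (1, 3), (2, 3), (3, 2), (3, 3),
]

def zigzag_scan(block):
    """Serialise a 4x4 coefficient block into a 1D list in zigzag order."""
    return [block[r][c] for r, c in ZIGZAG_4X4]

# For correlated prediction errors, energy concentrates in the low
# frequencies, so the scanned array tends toward decreasing magnitude.
coeffs = [
    [50, 20,  5,  0],
    [18,  8,  2,  0],
    [ 4,  2,  0,  0],
    [ 0,  0,  0,  0],
]
scanned = zigzag_scan(coeffs)
```

The resulting one-dimensional array is what the subsequent entropy coder (CABAC or CAVLC) is optimised for.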

BRIEF SUMMARY

It is an object of the present invention to provide a coding and decoding method, respective coders and decoders, data signals, as well as corresponding systems and semantics, for coding and decoding video signals that are more efficient than the prior art.

According to an aspect of the present invention, a method for coding a video signal is provided being based on hybrid coding. The method comprises the steps of reducing temporal redundancy by block based motion compensated prediction in order to establish a prediction error signal, and deciding whether to transform the prediction error signal into the frequency domain, or to maintain the prediction error signal in the spatial domain.

According to a corresponding aspect of the present invention, a coder is provided, which is adapted to apply hybrid coding of a video signal. The coder includes means for reducing the temporal redundancy by block based motion compensated prediction in order to establish a prediction error signal, and means for deciding whether to transform the prediction error signal into the frequency domain, or to maintain the prediction error signal in the spatial domain. According to this aspect of the invention, a concept and corresponding apparatuses, signals and semantics are provided for deciding adaptively whether to process the prediction error signal in the frequency or in the spatial domain. If the prediction error samples are only weakly correlated, the subsequent steps of coding the samples directly may be more efficient and would lead to a reduced data rate compared to coding the coefficients in the frequency domain. Therefore, an adaptive deciding step and adaptive control means to make the decision are implemented by the present invention. Accordingly, in view of the prediction error signal, it is decided whether to use a frequency domain transform or to maintain the prediction error signal in the spatial domain. The subsequent coding mechanisms may be the same as for the frequency domain, or they may be adapted especially to the needs of the samples in the spatial domain.

According to another aspect of the invention, the method for coding a video signal, and in particular the deciding step, is based on a cost function. Generally, the decision whether to use the coefficients in the frequency domain or the samples in the spatial domain may be based on various kinds of deciding mechanisms. The decision may be made for all samples within a specific portion of a video signal at once, or e.g. for a specific number of blocks, macroblocks, or slices. The decision may be based on a cost function, for example a Lagrange function. The costs are calculated both for coding in the frequency domain and for coding in the spatial domain, and the coding with the lower costs is chosen.
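The Lagrangian decision described above can be sketched as follows; this is a minimal illustration assuming placeholder distortion and rate values, not the patent's exact measurement procedure:

```python
def rd_cost(distortion, rate, lagrange_param):
    """Lagrangian rate-distortion cost J = D + lambda * R."""
    return distortion + lagrange_param * rate

def choose_domain(d_freq, r_freq, d_spat, r_spat, lagrange_param):
    """Return 'frequency' or 'spatial', whichever has the lower RD cost.

    The costs are computed for both coding alternatives, and the domain
    with the lower cost is selected (frequency domain on ties).
    """
    j_freq = rd_cost(d_freq, r_freq, lagrange_param)
    j_spat = rd_cost(d_spat, r_spat, lagrange_param)
    return 'frequency' if j_freq <= j_spat else 'spatial'
```

The same comparison can be made per block, per macroblock, or per slice, depending on the granularity at which the decision is signalled.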

According to another aspect of the present invention, the cost function includes the rate distortion costs for the coding in the spatial and in the frequency domain. According to still another aspect of the invention, the rate distortion costs may be calculated by the required rate and the resulting distortion weighted by a Lagrange parameter. Further, the distortion measure may be the mean square quantisation error or the mean absolute quantisation error.

According to an aspect of the present invention, the samples in the spatial domain may be coded by essentially the same methods as being used for the coefficients in the frequency domain. These methods may include the CABAC or CAVLC coding methods. Accordingly, only little or no adaption of the coding mechanisms is necessary, if the adaptive control means decide to switch between the frequency and the spatial domain. However, it might also be provided to use different coding schemes for the coefficients in the two domains.

According to another aspect of the invention, a method for coding a video signal is provided, which is based on hybrid coding. According to this aspect of the invention, the temporal redundancy is reduced by block based motion compensated prediction, and the samples of the prediction error signal are provided in the prediction error block in the spatial domain. The samples are scanned from the prediction error block in order to provide an array of samples in a specific order. According to this aspect of the invention, the scanning scheme is derived from a prediction error image or a prediction image. This takes account of the fact that the prior-art zigzag scan for the frequency domain may not be the most efficient scanning order for the spatial domain. Therefore, an adaptive scanning scheme is provided, which takes account of the distribution and the magnitude of the samples in the spatial domain. The scanning scheme may preferably be based on a prediction error image or a prediction image, considering the most probable positions of the samples having the highest magnitude and of the samples that are most probably zero. The coding gain in the frequency domain is mainly based on the phenomenon that the low frequency components have larger magnitudes and most of the high frequency coefficients are zero, so that a very effective variable code length coding scheme like CABAC or CAVLC may be applied. In the spatial domain, by contrast, the samples having the highest magnitude may be located anywhere within a block. However, as the prediction error is usually highest at the edges of a moving object, the prediction image or the prediction error image may be used to establish the most efficient scanning order.

According to an aspect of the present invention, the gradients of the prediction image may be used to identify the samples with large magnitudes. The scanning order follows the gradients within the prediction image in their order of magnitude. The same scanning order is then applied to the prediction error image, i.e. the samples in the prediction error image in the spatial domain.
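A gradient-derived scan order of this kind can be sketched as follows. This is an illustrative assumption about the details: the gradient is approximated by first differences, and positions are visited in order of decreasing gradient magnitude of the prediction image; the patent does not fix these particulars here.

```python
def gradient_magnitudes(pred_block):
    """Approximate per-sample gradient magnitude of the prediction image
    using horizontal and vertical first differences (edge samples are
    replicated at the block border)."""
    n = len(pred_block)
    grads = {}
    for r in range(n):
        for c in range(n):
            gx = pred_block[r][min(c + 1, n - 1)] - pred_block[r][c]
            gy = pred_block[min(r + 1, n - 1)][c] - pred_block[r][c]
            grads[(r, c)] = abs(gx) + abs(gy)
    return grads

def derive_scan_order(pred_block):
    """Positions sorted by decreasing gradient magnitude of the prediction."""
    grads = gradient_magnitudes(pred_block)
    return sorted(grads, key=lambda pos: -grads[pos])

def scan(error_block, order):
    """Apply the derived order to the spatial-domain prediction error."""
    return [error_block[r][c] for r, c in order]

# A vertical edge in the prediction image: the derived scan visits the
# edge column first, where large prediction errors are most likely.
pred = [
    [0, 0, 10, 10],
    [0, 0, 10, 10],
    [0, 0, 10, 10],
    [0, 0, 10, 10],
]
order = derive_scan_order(pred)
```

The same order is then applied to the prediction error block, so coder and decoder can derive it identically from data both sides possess.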

Further, according to still another aspect of the present invention, the scanning scheme may be based on a motion vector in combination with the prediction error image of the reference block. The scan follows the magnitudes of the prediction error in decreasing order.

According to one aspect of the invention, the scanning scheme is derived from a linear combination of the gradient of the prediction image and the prediction error image of the reference block in combination with a motion vector.

According to another aspect of the present invention, a specific code for the coding mechanisms, as for example CABAC or the like is used based on separately determined probabilities for the coefficients in the frequency domain or the samples in the spatial domain. Accordingly, the well-known prior art coding mechanisms may be adapted at least slightly in order to provide the most efficient coding mechanism for the spatial domain. Accordingly, the switching mechanism being adaptively controlled in order to code either in the spatial or in the frequency domain may be further adapted to switch the subsequent coding steps for the samples or coefficients in the respective domains.

According to an aspect of the present invention, a method for coding a video signal is provided including a step of quantising the prediction error samples in the spatial domain by a quantiser, which has either subjectively weighted quantisation error optimisation or mean squared quantisation error optimisation. According to this aspect of the invention, the quantiser used for quantising the samples in the spatial domain may be adapted to take account of the subjectively optimal visual impression of a picture. The representative levels and decision thresholds of the quantiser may then be adapted based on corresponding subjective or statistical properties of the prediction error signal.
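As a minimal sketch, assuming a uniform midtread quantiser (the patent leaves the representative levels and thresholds open to subjective or statistical optimisation), the spatial-domain quantisation and its mean squared error can be expressed as:

```python
def quantise(sample, step):
    """Uniform midtread quantisation: map a sample to a level index."""
    return int(round(sample / step))

def reconstruct(level, step):
    """Inverse quantisation: map a level index back to its representative."""
    return level * step

def mean_squared_error(samples, step):
    """Mean squared quantisation error over a list of samples; this is the
    quantity a mean-squared-error-optimised quantiser would minimise."""
    errors = [(s - reconstruct(quantise(s, step), step)) ** 2 for s in samples]
    return sum(errors) / len(errors)
```

A subjectively weighted variant would replace the squared error with a perceptually weighted measure and shift the decision thresholds accordingly.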

Further, the present invention relates also to a decoding method and a decoding apparatus in accordance with the aspects set out here above. According to an aspect of the present invention, a decoder is provided including adaptive control means for adaptively deciding whether an input stream of a coded video signal represents the prediction error signal of the coded video signal in the spatial domain or in the frequency domain. Accordingly, the decoder according to this aspect of the present invention is adapted to decide, for an incoming data stream, whether the prediction error signal is coded in the frequency or in the spatial domain. Further, the decoder provides respective decoding means for each of the two domains.

Further, according to still another aspect of the present invention, the decoder comprises a scan control unit for providing a scanning order based on a prediction signal or a prediction error signal. The scan control unit according to this aspect of the invention is adapted to retrieve the necessary information about the scanning order, in which the incoming samples of a block have been scanned during coding the video signals. Further, the decoder may comprise all means in order to inverse quantise and inverse transform the coefficients in the frequency domain or to inverse quantise the samples in the spatial domain. The decoder may also include a mechanism to provide motion compensation and decoding. Basically, the decoder may be configured to provide all means in order to implement the method steps corresponding to the coding steps explained here above.
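The decoder-side routing described above can be sketched as follows; the helper functions are illustrative stand-ins for the inverse quantisation and inverse transform blocks, not the patent's apparatus:

```python
def decode_block(payload, domain, inv_quantise, inv_transform):
    """Route one coded block through the decoder: frequency-domain data
    ('fd') passes through the inverse transform, while spatial-domain
    data ('sd') skips it, as decided by the adaptive control means."""
    samples = inv_quantise(payload)
    if domain == 'fd':
        samples = inv_transform(samples)   # frequency -> spatial domain
    return samples                         # spatial-domain prediction error
```

In both branches the result is a spatial-domain prediction error block, which is then added to the motion compensated prediction.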

According to still another aspect of the present invention, a data signal representing a coded video signal is provided, wherein the coded information of the prediction error signal in the data signal is partially coded in the spatial domain and partially coded in the frequency domain. This aspect of the invention relates to the coded video signal, which is a result of the coding mechanisms as set out above.

Further, according to still another aspect of the invention, the data signal may include side information indicating the domain in which a slice, a macroblock, or a block is coded, in particular information whether a slice, a macroblock or a block is coded in the spatial or in the frequency domain. As the adaptive control according to the present invention provides that the prediction error signal is coded either in the spatial domain or in the frequency domain, corresponding information must be included in the coded video signal. Therefore, the present invention also provides specific information indicating the domain in which a specific portion, such as a slice, macroblock, or block, has been coded. This aspect of the invention also takes account of the possibility that a whole macroblock or a whole slice may be coded in only one of the two domains. If, for example, an entire macroblock is coded in the spatial domain, this may be indicated by a single flag or the like. Likewise, a whole slice may be coded only in the frequency or in the spatial domain, and a corresponding indicator could be included for the whole slice in the data stream. This results in a decreased data rate and a more efficient coding of the side information.
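The hierarchical side information can be sketched as below. The flag names mirror those in claim 9 (slice_fd_sd_coding_flag, mb_fd_sd_coding_flag, fd_or_sd_flag), but the container structure is an illustrative assumption, not the patent's bitstream syntax:

```python
def domain_for_block(slice_flag, mb_flags, block_flags, mb_idx, blk_idx):
    """Resolve the coding domain ('fd' or 'sd') for one block.

    slice_flag:  slice_fd_sd_coding_flag; a decisive value covers the
                 whole slice, None defers to finer levels.
    mb_flags:    dict of mb_fd_sd_coding_flag per macroblock index;
                 a missing entry defers to the block level.
    block_flags: dict of fd_or_sd_flag per (macroblock, block) index.
    """
    if slice_flag is not None:
        return slice_flag                   # whole slice in one domain
    mb_flag = mb_flags.get(mb_idx)
    if mb_flag is not None:
        return mb_flag                      # whole macroblock in one domain
    return block_flags[(mb_idx, blk_idx)]   # per-block decision
```

Signalling the decision at the coarsest level that applies is what yields the reduced side information rate mentioned above.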

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

The aspects of the present invention are explained with respect to the preferred embodiments which are elucidated by reference to the accompanying drawings.

FIG. 1 shows a simplified block diagram of an encoder implementing aspects according to the present invention,

FIG. 2 shows a simplified block diagram of a decoder implementing aspects of the present invention,

FIG. 3 shows a scanning scheme according to the prior art,

FIG. 4 shows scanning schemes according to the present invention, and

FIG. 5 illustrates the parameters used for an optimised quantiser according to the present invention.

FIG. 6 shows a simplified representation of the measured mean absolute reconstruction error of a picture element in the case of the subjectively weighted quantisation in the frequency domain in FIG. 6(a) and in the spatial domain in FIG. 6(b).

DETAILED DESCRIPTION

FIG. 1 shows a simplified block diagram of an encoder according to the present invention. The input signal 101 undergoes a motion estimation, based on which a motion compensated prediction is carried out in order to provide a prediction signal 104, which is subtracted from the input signal 101. The resulting prediction error signal 105 is transformed into the frequency domain 106 and quantised by a quantiser 107 optimised for the frequency domain coefficients. The output signal 120 of the quantiser 107 is passed to an entropy coder 113, which provides the output signal 116 to be transmitted, stored, or the like. By means of an inverse quantisation block 110 and an inverse transformation block 111, the quantised prediction error signal 120 is further used for the next prediction step in the motion compensated prediction block 103. The inverse quantised and inverse DCT transformed prediction error signal is added to the prediction signal and passed to a frame memory 122, which stores preceding images for the motion compensated prediction block 103 and the motion estimation block 102. In addition to the prior art, the present invention suggests an adaptively controlled mechanism 115 to switch between the frequency and the spatial domain for coding the prediction error signal 105. The adaptive control means 115 produces the signals and parameters controlling the adaptive change between the frequency and the spatial domain. Accordingly, an adaptive control information signal 121 is applied to the two switches, which switch between positions A and B. If the coding is carried out in the frequency domain, the two switches are in position A; if the spatial domain is used, the switches are in position B. Further, the side information signal 121, i.e. the information about which of the domains has been used for coding a picture, is also passed to the entropy coder 113.
Accordingly, appropriate information for the decoder is included in the data stream. In parallel to the frequency transform, via an alternative path, the prediction error signal 105 is passed to the quantiser 109. This quantisation block 109 provides optimised quantisation of the prediction error signal 105 in the spatial domain. The quantised prediction error signal 124 in the spatial domain may be passed to a second inverse quantisation block 112 and further back to the motion compensated prediction block 103. Additionally, there is a scan control block 114 receiving either the motion vector 123 and the inverse quantised prediction error signal 118, or the prediction signal 104 via connection 119. Block 117 serves to encode the motion information.

The adaptation control block 115 decides whether a block is to be coded in the frequency or in the spatial domain, and it generates corresponding side information to indicate the domain. The decision is based on the rate distortion costs of coding in the spatial domain and of coding in the frequency domain: the domain having the lower rate distortion costs is selected. For example, the rate distortion costs C are calculated from the required rate R and the resulting distortion D weighted by a Lagrange parameter L: C = L*R + D. As a distortion measure, the mean squared quantisation error may be used, but other measures, such as the mean absolute quantisation error, are also applicable. As Lagrange parameter L, the Lagrange parameter commonly used for the coder control of H.264/AVC may be used: L = 0.85*2^((QP-12)/3). Alternative methods for determining the rate distortion costs are possible.
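The domain decision described above can be sketched in a few lines. Only the cost formula C = L*R + D and the H.264/AVC Lagrange parameter come from the text; the rate and distortion values passed in below are illustrative placeholders, since a real encoder would measure them per block for each of the two coding paths.

```python
def lagrange_parameter(qp: int) -> float:
    """Lagrange parameter commonly used for H.264/AVC coder control."""
    return 0.85 * 2.0 ** ((qp - 12) / 3.0)

def rd_cost(rate_bits: float, distortion: float, qp: int) -> float:
    """Rate distortion cost C = L*R + D (D: e.g. mean squared quantisation error)."""
    return lagrange_parameter(qp) * rate_bits + distortion

def choose_domain(rate_fd, dist_fd, rate_sd, dist_sd, qp):
    """Select the domain with the lower rate distortion cost."""
    cost_fd = rd_cost(rate_fd, dist_fd, qp)
    cost_sd = rd_cost(rate_sd, dist_sd, qp)
    return 'frequency' if cost_fd <= cost_sd else 'spatial'

# Illustrative block: the spatial path needs fewer bits at similar distortion.
print(choose_domain(rate_fd=96, dist_fd=40.0, rate_sd=80, dist_sd=42.0, qp=26))
```

The ties-to-frequency choice in `choose_domain` is arbitrary; the text only requires that the domain with lower cost is selected.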

The adaptation control 115 can alternatively control the coding method in other ways, for example based on the prediction signal, on the correlation in the prediction error, or on the domain in which the prediction error is coded at the motion compensated position of already transmitted frames.

FIG. 2 shows a simplified block diagram of a decoder architecture according to aspects of the present invention. The coded video data is input to two entropy decoding blocks 201 and 202. The entropy decoding block 202 decodes motion compensation information, such as motion vectors. The entropy decoding block 201 applies the inverse of the coding mechanism used in the coder, for example decoding according to CABAC or CAVLC. If the encoder uses a different coding mechanism for the coefficients or for the samples in the spatial domain, the corresponding decoding mechanism is to be used in the corresponding entropy decoding block. The entropy decoding block 201 produces the appropriate signals to switch between positions A and B, i.e. to select either the inverse quantisation path for the spatial domain, namely the inverse quantisation block 206, or the path for the frequency domain, namely the inverse quantisation block 203 and the inverse transform block 204. If the prediction error is represented in the frequency domain, the inverse quantisation block 203 and the inverse transformation block 204 apply the corresponding inverse operations. As the samples in the spatial domain have been arranged in a specific order in accordance with a scan mechanism according to aspects of the present invention, a scan control unit 205 provides the correct order of the samples for the entropy decoding block 201. If the encoding has been carried out in the spatial domain, the inverse transform block 204 and the inverse quantisation block 203 are bypassed in favour of the inverse quantisation block 206. The switching between the frequency and the spatial domain (i.e. between positions A and B of the switches) is controlled by the side information sent in the bitstream and decoded by the entropy decoding block 201.
Further, the inverse quantised signal in the spatial domain, or the inverse quantised and inverse transformed signal in the frequency domain, is summed with the motion compensated prediction picture in order to provide the decoded video signal 210. The motion compensation is carried out in block 209 based on previously decoded video signal data (previous pictures) and motion vectors. The scan control unit 205 uses either the prediction image 208, or the prediction error signal 207 in combination with the motion vector 212, to determine the correct scan sequence of the coefficients. The scan mechanism may also be based on both pictures, i.e. the prediction error picture and the prediction picture. As explained for the coding mechanism with respect to FIG. 1, the scan sequence during coding may be based on a combination of the prediction error information 207 and the motion compensation vectors. Accordingly, the motion compensation vectors may be passed via a path 212 to the scan control unit 205. Further, in correspondence with FIG. 1, there is a frame memory 211 storing the necessary previously decoded pictures.

FIG. 3 shows a simplified diagram illustrating the zigzag scan order according to the prior art. The coefficients, which are the result of a transform into the frequency domain (for example a DCT), are arranged in a predetermined order as shown in FIG. 3 for a four by four block. These coefficients are read out in a specific order, such that the coefficients representing the low frequency portions are located in the first, left positions of a one-dimensional array; the further to the right in the array, the higher the corresponding frequencies of the coefficients. As blocks to be coded often contain mainly low frequency content, the high frequency coefficients, or at least a majority of them, are zero. This can effectively be used to reduce the data to be transmitted, for example by replacing a long sequence of zeros with a single piece of information indicating the number of zeros.
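A minimal sketch of this prior-art zigzag readout follows. The diagonal traversal matches the common 4x4 zigzag pattern; the block contents are invented for illustration.

```python
def zigzag_order(n: int = 4):
    """Zigzag scan positions (row, col) for an n x n transform block."""
    order = []
    for s in range(2 * n - 1):                  # s indexes the anti-diagonals
        rows = range(min(s, n - 1), max(0, s - n + 1) - 1, -1)
        diag = [(r, s - r) for r in rows]       # bottom-left to top-right
        if s % 2:                               # odd diagonals run the other way
            diag.reverse()
        order.extend(diag)
    return order

def scan(block):
    """Read a 2-D block into a 1-D array following the zigzag order."""
    return [block[r][c] for r, c in zigzag_order(len(block))]

# Typical quantised transform block: energy concentrated at low frequencies.
coeffs = [[7, 3, 1, 0],
          [2, 1, 0, 0],
          [1, 0, 0, 0],
          [0, 0, 0, 0]]
print(scan(coeffs))  # low-frequency values first, trailing zeros grouped last
```

The trailing run of zeros in the printed array is exactly what run-length style coding of the number of zeros exploits.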

FIG. 4 shows a simplified illustrative example of a scan mechanism according to an aspect of the present invention. FIG. 4(a) shows the magnitudes of the gradients in the prediction image for one block. The value at each position of the block represents the gradient of the prediction image of the current block at that position. The gradient itself is a vector consisting of two components representing the gradient in the horizontal and the vertical direction. Each component may be determined as the difference of the two neighbouring samples, or by the well-known Sobel operator taking six neighbouring samples into account. The magnitude of the gradient is the magnitude of this vector. If two values have the same magnitude, a fixed or predetermined scan order may be applied. The scanning order follows the magnitudes of the gradient values in the block as indicated by the dotted line. Once the scanning order within the gradient prediction image is established, the same scanning order is applied to the quantised prediction error samples, which are shown in FIG. 4(b). If the quantised samples in the spatial domain of the block shown in FIG. 4(b) are arranged in a one-dimensional array, as indicated on the left side of FIG. 4(b), in accordance with the scanning order established from the magnitudes of the gradients in the prediction image, the samples having a high value are typically arranged first in the array, i.e. in the left positions, and the right positions are filled with zeros as indicated in FIG. 4(b).
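The gradient-controlled scan can be sketched as follows. The simple clamped-difference gradient used here stands in for whichever operator (neighbouring-sample difference or Sobel) an implementation chooses, ties are broken by a fixed raster order as the text allows, and the example blocks are invented: a vertical edge in the prediction image with the quantised prediction error concentrated along it.

```python
def gradient_magnitudes(pred):
    """Per-sample gradient magnitude from horizontal/vertical sample
    differences (a simple stand-in for e.g. the Sobel operator)."""
    n = len(pred)
    mag = [[0.0] * n for _ in range(n)]
    for r in range(n):
        for c in range(n):
            gx = pred[r][min(c + 1, n - 1)] - pred[r][max(c - 1, 0)]
            gy = pred[min(r + 1, n - 1)][c] - pred[max(r - 1, 0)][c]
            mag[r][c] = (gx * gx + gy * gy) ** 0.5
    return mag

def gradient_scan_order(pred):
    """Positions sorted by decreasing gradient magnitude; a fixed
    (raster) order resolves positions with equal magnitude."""
    mag = gradient_magnitudes(pred)
    n = len(pred)
    positions = [(r, c) for r in range(n) for c in range(n)]
    return sorted(positions, key=lambda p: (-mag[p[0]][p[1]], p[0], p[1]))

def scan_samples(error_block, order):
    """Apply the scan order derived from the prediction image to the
    quantised prediction error samples."""
    return [error_block[r][c] for r, c in order]

pred = [[0, 0, 8, 8]] * 4            # vertical edge in the prediction image
err = [[0, 3, -2, 0],                # quantised error concentrated at the edge
       [0, 2, -1, 0],
       [0, 1, -1, 0],
       [0, 1, 0, 0]]
order = gradient_scan_order(pred)
print(scan_samples(err, order))      # non-zero edge samples come first
```

As in FIG. 4(b), the large samples land in the left positions of the one-dimensional array and the zeros collect at the right.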

Instead of a scan controlled by the gradient, other scans can also be applied, e.g. a predefined scan, a scan controlled by the quantised prediction error of already transmitted frames in combination with a motion vector, or combinations thereof (the scan control corresponds to blocks 114 and 205 as explained with respect to FIG. 1 and FIG. 2). In the case of a scan controlled by the prediction error signal in combination with a motion vector, the scan follows, in decreasing order, the magnitudes of the quantised prediction error samples of the block to which the motion vector of the current block refers.

If the motion vector points to fractional sample positions, the required quantized prediction error samples may be determined using an interpolation technique. This may be the same interpolation technique as used for the interpolation of the reference image in order to generate the prediction samples.

In the case that the scan is controlled by the combination of the prediction image and the prediction error image together with a motion vector, linear combinations of the magnitudes of the gradients and of the quantised prediction error samples of the block to which the motion vector of the current block refers are calculated, and the scan follows the values of these linear combinations. In addition, the method for the scan determination can be signalled for segments of the sequence, e.g. for each frame, for each slice, or for a group of blocks. According to typical standard processing, the motion compensation vectors are already taken into account when the prediction image is determined.

According to another aspect of the present invention, the scanning order may also be based on the prediction error picture in combination with a motion vector. Further, combinations of the gradient principle as explained above and the prediction error picture are conceivable.

FIG. 5 shows a simplified illustration of the definition of an optimised quantiser according to aspects of the present invention. The three parameters a, b, and c are the parameters used to adapt the quantiser. In H.264/AVC, rate distortion optimised quantisers for the coefficients are applied with two different distortion measures: the first is the mean squared quantisation error, the second is the subjectively weighted quantisation error. Correspondingly, two quantisers for the prediction error samples are developed. Since the distribution of the prediction error is close to a Laplacian distribution, a scalar dead-zone plus uniform threshold quantiser is used in the case of mean squared quantisation error optimisation. FIG. 5 illustrates the parameters a, b, and c of the quantisation and inverse quantisation.

Table 1 shows the parameters a, b, and c which may advantageously be used for commonly used QPs (quantisation parameters) in the H.264/AVC coding scheme; they are the optimised parameters for mean squared quantisation error optimisation. However, this is only an example, and different or additional parameters may be useful for different applications.


TABLE 1

      Mean squared quantisation      Subjectively weighted quantisation
      error optimisation             error optimisation
QP    a       b      c               r1    r2    r3    r4    r5
23    9.6     1.6    2.7             0     11    28    46    66
26    14.8    1.4    4.8             0     14    36    58    110
29    22.2    1.4    6.9             0     20    54    92    148
32    30.2    1.4    9.3             0     28    76    130   220
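As an illustration of the dead-zone plus uniform threshold quantiser with the Table 1 parameters, the sketch below assumes that a is the uniform step size, b the dead-zone threshold, and c the first reconstruction level. This mapping is an assumption made for illustration only, since the patent defines a, b and c via FIG. 5, which is not reproduced here.

```python
def quantise(x, a, b):
    """Dead-zone plus uniform threshold quantisation of one sample.
    ASSUMED mapping: a = step size, b = dead-zone threshold."""
    sign = -1 if x < 0 else 1
    mag = abs(x)
    if mag < b:                      # dead zone around zero
        return 0
    return sign * (int((mag - b) // a) + 1)

def reconstruct(level, a, c):
    """Inverse quantisation, ASSUMING the first reconstruction level
    lies at c and further levels follow in uniform steps of a."""
    if level == 0:
        return 0.0
    sign = -1 if level < 0 else 1
    return sign * (c + (abs(level) - 1) * a)

# QP = 23 parameters from Table 1: a = 9.6, b = 1.6, c = 2.7.
level = quantise(5.0, a=9.6, b=1.6)
print(level, reconstruct(level, a=9.6, c=2.7))
```

With a Laplacian-like prediction error, placing the first reconstruction level c well below the interval midpoint reduces the mean squared error, which is consistent with c < a in every Table 1 row.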

For subjectively weighted quantisation error optimisation, a non-uniform quantiser is proposed with representative levels ri and -ri and decision thresholds in the middle of adjacent ri; these levels are also shown in Table 1. Since large prediction errors occur at edges, visual masking may be exploited: large quantisation errors may be allowed at edges, and only small ones where the image signal is flat. H.264/AVC may use up to 52 different QPs, whereas Table 1 shows only four; for the remaining QPs, Table 1 has to be extended accordingly. The basic idea for determining the appropriate representative levels ri and -ri is explained below with respect to FIG. 6.
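The non-uniform quantiser just described can be sketched directly from Table 1: representative levels ri and -ri, with decision thresholds midway between adjacent levels. The QP = 26 row is used here purely as an example.

```python
import bisect

# Representative levels r1..r5 for QP = 26 (Table 1); 0 is also a level.
LEVELS_QP26 = [0, 14, 36, 58, 110]

def quantise_subjective(x, levels=LEVELS_QP26):
    """Map x to the nearest representative level +/- ri, with decision
    thresholds in the middle of adjacent representative levels."""
    sign = -1 if x < 0 else 1
    mag = abs(x)
    thresholds = [(lo + hi) / 2 for lo, hi in zip(levels, levels[1:])]
    idx = bisect.bisect_right(thresholds, mag)
    return sign * levels[min(idx, len(levels) - 1)]

# 20 lies between the thresholds (0+14)/2 = 7 and (14+36)/2 = 25.
print(quantise_subjective(20))  # -> 14
```

Values beyond the last threshold clip to the outermost level, which is the natural behaviour when Table 1 lists a finite set of representatives.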

FIG. 6 shows a simplified representation of the measured mean absolute reconstruction error of a picture element in the case of subjectively weighted quantisation in the frequency domain (FIG. 6(a)) and in the spatial domain (FIG. 6(b)). The measured mean absolute reconstruction error of subjectively weighted quantisation in the frequency domain is shown as a function of the absolute value of the prediction error. For the spatial domain, the representative levels ri are adjusted such that the mean absolute reconstruction error is the same for quantisation in the frequency and in the spatial domain with respect to the quantisation intervals in the spatial domain. As an example, the values r1, r2, r3, and r4 for QP=26 as indicated in Table 1 are also shown in FIG. 6(b). As a rule of thumb, a representative level ri approximately doubles if QP increases by 6. The quantiser design can also exploit other features of the visual system. Furthermore, quantisers can be used to create a quantisation error with properties different from those of the H.264/AVC quantisers.

Entropy Coding of the Quantised Samples in the Spatial Domain

According to an aspect of the present invention, entropy coding in the spatial domain may be based on the same methods as used for the quantised coefficients in the frequency domain. For the H.264/AVC standard, two preferred entropy coding methods are CABAC and CAVLC. According to this aspect of the present invention, instead of the quantised coefficients in the frequency domain, the quantised samples in the spatial domain are coded by the above mentioned methods. As explained above, the scanning order may be changed in order to provide the same data reduction as in the frequency domain. As set out above, the scan in the spatial domain may be controlled by the magnitude of the gradient of the prediction image signal at the same spatial position. According to this principle, the samples to be coded are arranged in order of decreasing gradient magnitude, as already explained with respect to FIGS. 4(a) and (b). Other scan mechanisms may also be applied, as set out above. Further, separate codes, which in the case of CABAC means separate probability models, may be used for the spatial domain according to aspects of the present invention. The code, and in the case of CABAC the initialisation of the probability models, may be derived from the statistics of the quantised samples. The context modelling in the spatial domain may be done in the same way as in the frequency domain.

Coding of the Side Information

The adaptive control means explained with respect to FIG. 1 generates the information relating to the domain in which a block is to be coded. The block size may be four by four or eight by eight picture elements, according to the size of the transform. However, according to different aspects of the present invention, other block sizes independent of the size of the transform may be applied. According to an aspect of the present invention, the side information includes specific flags which indicate whether the coding domain has adaptively been changed during coding. If, for example, all blocks of a slice are coded in the frequency domain, this may be indicated by a specific bit in the coded video data signal. This aspect of the invention may also relate to the blocks of a macroblock, which may all be coded in one domain or distributed over the two domains. Further, the concept according to the present aspect of the invention may be applied to macroblocks, and information may be included in the data stream which indicates whether at least one block of a macroblock is coded in the spatial domain. Accordingly, the flag Slice_FD_SD_coding_flag may be used to indicate whether all blocks of the current slice are coded in the frequency domain, or whether at least one block is coded in the spatial domain. This flag may be coded by a single bit. If at least one block of the slice is coded in the spatial domain, the flag MB_FD_SD_coding_flag may indicate for each individual macroblock of the current slice whether all blocks of the current macroblock are coded in the frequency domain, or whether at least one block is coded in the spatial domain. This flag may be coded conditioned on the flags of the already coded neighbouring blocks to the top and to the left.
If at least one block of a macroblock is coded in the spatial domain, the flag FD_or_SD_flag may indicate for each block of the macroblock to be coded whether the current block is coded in the frequency or in the spatial domain. This flag may be coded conditioned on the flags of the already coded neighbouring blocks to the top and to the left. Alternatively, the side information may also be coded conditioned on the prediction signal, or on the prediction error signal in combination with a motion vector.
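The hierarchical flag structure above can be sketched as follows. The nested `domains` input (per macroblock, a list of 'FD'/'SD' markers per block) and the convention that a flag value of 1 means "at least one block in the spatial domain" are assumptions made for illustration; context-conditioned coding of the flags is omitted.

```python
def encode_side_information(domains):
    """Emit the hierarchy of side-information flags for one slice.
    `domains` is a hypothetical structure: per macroblock, a list
    of 'FD'/'SD' strings, one per block."""
    bits = []
    slice_has_sd = any('SD' in mb for mb in domains)
    bits.append(('Slice_FD_SD_coding_flag', int(slice_has_sd)))
    if slice_has_sd:
        # Macroblock flags are only sent when the slice-level flag is set.
        for mb in domains:
            mb_has_sd = 'SD' in mb
            bits.append(('MB_FD_SD_coding_flag', int(mb_has_sd)))
            if mb_has_sd:
                # Block flags are only sent within mixed macroblocks.
                for block_domain in mb:
                    bits.append(('FD_or_SD_flag', int(block_domain == 'SD')))
    return bits

# All blocks in the frequency domain: a single slice-level bit suffices.
print(encode_side_information([['FD', 'FD'], ['FD', 'FD']]))
```

This shows the rate saving claimed in the text: when a whole slice (or macroblock) stays in one domain, the lower-level flags are never transmitted.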

Syntax and Semantics

According to this aspect of the present invention, an exemplary syntax and semantics allowing the incorporation of the aspects of the present invention into the H.264/AVC coding scheme is presented. The flag Slice_FD_SD_coding_flag may be introduced in the slice_header as shown in Table 2. The flag MB_FD_SD_coding_flag may be sent in each macroblock_layer as shown in Table 3. In the residual_block_cabac it may be signalled by the flag FD_or_SD_flag whether frequency domain coding or spatial domain coding is applied for the current block, as shown in Table 4 below. A similar scheme may be applied for the prediction error coding in other video coding algorithms.


TABLE 2

slice_header( ) {                                  C    Descriptor
  ...
  Slice_FD_SD_coding_flag                          2    u(1)
  ...
}


TABLE 3

macroblock_layer( ) {                              C    Descriptor
  ...
  if (Slice_FD_SD_coding_flag == 1) {
    MB_FD_SD_coding_flag                           2    u(1), ae(v)
  }
  ...
}


TABLE 4

residual_block_cabac( ) {                          C    Descriptor
  ...
  if (Slice_FD_SD_coding_flag == 1 &&
      MB_FD_SD_coding_flag == 1) {
    FD_or_SD_flag                                  3/4  u(1), ae(v)
    if (FD_or_SD_flag == 1) {
      Code_Prediction_error_in_spatial_domain
    } else {
      Code_Prediction_error_in_frequency_domain
    }
  }
  ...
}
