Intel Dishes On What Makes H.265 Worth Waiting For

Posted on March 5, 2013 1:20 PM by Jamie Fletcher

Following up on January’s H.265 announcement, we had a chat with Intel to better understand some of the developments and enhancements made to the upcoming codec compared to the venerable H.264 (AVC). Although Intel isn’t the one conducting the development of H.265 (HEVC), the company has been keeping in close contact with those who are; given the nature of its business, it pretty well has to. We might not see on-CPU logic for H.265 anytime soon, but for Intel, the sooner things are sorted out, the better.

One of the most impressive aspects of H.265 so far has been its ability to look great at lower bit-rates, and much of that compression improvement comes from the larger block size used for image analysis; H.264 uses blocks of 16×16 pixels, while H.265 uses up to 64×64. Spatial transforms (the removal of information we can’t see or have difficulty seeing) are improved in the upcoming codec as well, using 32×32 transforms instead of H.264’s 8×8. Also, the more memory you have available, the faster you can encode (due to all of the extra inter-prediction modes used for motion detection).
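To get a feel for what the larger block size means in practice, here’s a quick back-of-the-envelope calculation (the function and figures are our own illustration, not something from Intel) of how many blocks an encoder has to manage per 1080p frame:

```python
import math

def blocks_per_frame(width, height, block):
    """Number of square coding blocks needed to tile a frame
    (partial blocks at the edges still count as full blocks)."""
    return math.ceil(width / block) * math.ceil(height / block)

# H.264's 16x16 macroblocks vs. H.265's largest 64x64 coding units,
# for a 1920x1080 frame:
print(blocks_per_frame(1920, 1080, 16))  # 8160 blocks
print(blocks_per_frame(1920, 1080, 64))  # 510 blocks
```

Fewer, larger blocks mean that flat, low-detail regions of a frame can be described far more compactly, which is a big part of where the bit-rate savings come from.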

H.265 (HEVC)

Another major improvement is the use of multi-threading within single frames; this means multiple threads can work on the same frame simultaneously, instead of one thread per frame. Fortunately, this applies to both encoding and decoding. On the decode side, there are two different methods of threading within a frame. The first uses tiles, where the frame is divided into rectangles and each thread handles its own tile; the second is something called “wavefront”, where each thread handles a single row of blocks – the bigger the picture, the more threads you can use.
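The wavefront idea can be sketched with a little scheduling arithmetic. In this toy model (our own simplification, not HEVC’s actual wavefront implementation), a block may start once the block two columns ahead of it in the row above has finished, so block (row, col) lands in wave col + 2×row:

```python
def wavefront_schedule(rows, cols, lag=2):
    """Group a grid of blocks into 'waves' that can run in parallel:
    block (r, c) belongs to wave c + lag*r, so each row trails the
    row above it by `lag` blocks."""
    waves = {}
    for r in range(rows):
        for c in range(cols):
            waves.setdefault(c + lag * r, []).append((r, c))
    return [waves[w] for w in sorted(waves)]

schedule = wavefront_schedule(rows=4, cols=8)
print(len(schedule))                  # 14 waves in total
print(max(len(w) for w in schedule))  # at most 4 blocks (one per row) in flight
```

This is why bigger pictures parallelize better under wavefront: more rows of blocks means more threads can be kept busy at once.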

One thing made clear was the scalability of the new codec. H.265 achieves greater compression for the same image quality at higher resolutions; the 40-50% bit-rate reductions over H.264 really apply at 1080p through 4K. Standard-definition and 720p movies won’t see the same level of compression, instead landing somewhere in the region of 30% versus H.264. So you will still see bit-rate improvements at these lower resolutions, just not to the same extent as at 4K and 8K.
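Putting those quoted ranges into numbers, a rough estimator might look like the following (the midpoint percentages are our assumption; real savings vary heavily with content and encoder settings):

```python
def hevc_bitrate_estimate(avc_kbps, resolution):
    """Estimate the H.265 bit-rate needed for quality similar to a
    given H.264 stream, using assumed midpoints of the reductions
    quoted above: ~30% for SD/720p, ~45% for 1080p/4K."""
    savings = {"sd": 0.30, "720p": 0.30, "1080p": 0.45, "4k": 0.45}
    return avc_kbps * (1 - savings[resolution])

# An 8,000 kbps 1080p H.264 stream might only need ~4,400 kbps in H.265:
print(hevc_bitrate_estimate(8000, "1080p"))
# A 4,000 kbps 720p stream drops to roughly 2,800 kbps:
print(hevc_bitrate_estimate(4000, "720p"))
```

The takeaway: the higher the resolution, the more the new codec pays off, which is exactly why it matters for 4K streaming.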

So, the big question we all have: what about the hardware? Unfortunately, there really isn’t much detail yet, as it’s still rather early. There are a few things we can piece together, though. Hardware companies had input into the H.265 standard and saw certain things implemented to ease adoption, including bandwidth optimizations to minimize power consumption. The decoder should also work just fine in software on modern PC systems (one of the new MacBook Pro “Retina” models was able to decode a 4K video at 60 FPS in software without a problem, using an early revision of the decoder). We may see driver updates for existing GPUs to help off-load the decoding, but that’s a matter for the hardware companies, as well as licensing.

4K Resolution Comparison
“What is 4K?” Credit: Jamie Fletcher – Techgage

Speaking of licensing, details there are also still up in the air. H.265 is likely to follow a similar system to H.264, in which case there will be a patent pool held by MPEG LA, and commercial products (such as hardware decoders) will need to license its use, though it’ll likely remain free for end-users and Internet-streaming services. There may be the possibility of a shared pool for both H.264 and H.265, but that’s just a pipe dream. We just have to wait and see.

So, there is plenty to be excited about. Encoders will have a great deal of options to play with (35 intra prediction modes in H.265, instead of just 9 in H.264, for example) to eke out the lowest bit-rates possible, and decoders will likely see an influx of hardware over the coming months. Finally, 4K resolution streaming may in fact become a reality, without overburdening our poor, penniless service providers.

H.265? It’s the best thing since sliced frames… Yeah, I just did that.


  • Leo

    In the screenshot above there are still subtle differences between the two codecs. Looking at the man’s suit or the tiling (especially in the highlights) shows that H.264 has finer detail, albeit at 2x the bit-rate. I wonder what bit-rate HEVC needs to really match H.264 pixel for pixel. 30% less?

    • http://techgage.com/ Jamie Fletcher

      In general, there is less detail in the HEVC shot, like it’s had a blur filter (or excessive AA) applied. When I saw the original image (the one above is scaled down), the AVC shot had more detail, but also more artifacts, similar to having too much edge sharpening. With the right tweaks, and possibly a longer encode time, similar detail levels could be achieved in HEVC without the artifacts, while keeping approximately the same bit-rate.

      The 30% reduction you mention is still possible for a very similar level of quality, though I would prefer to maintain detail at a higher bit-rate. Just remember, though, that only the bitstream has been standardized (the format the decoder must understand); we now have to wait for the encoders to be polished up, and then who knows what kind of quality we could get from them. Developers can do anything they want to the encoders, as long as the result is a legal bitstream for the decoder.
