View Full Version : AVCHD video at higher bitrates



Arkady Bolotin
December 2nd, 2010, 09:47 AM
AVCHD (Advanced Video Coding High Definition) uses MPEG-4 AVC/H.264 video coding for compression. The maximum allowed video bitrate for AVCHD, as we know, is 24 Mbit/s.

So, the question is: if an AVCHD-based camera recorded footage at the maximum allowed bitrate, can we go beyond this limit? In other words, can increasing the video bitrate in postproduction produce new information (i.e. better quality) in AVCHD video?

Theoretically speaking, the answer to this question may well be yes.

Given such features of AVC as multi-picture inter-picture prediction, lossless macroblock coding, loss resilience, and an improved quantization design, it is possible that decompressing AVCHD video and then recompressing it at a higher bitrate can produce higher quality.

Without starting a technical dispute here, I just want to show some results I got. Please observe the following two fragments: the first has been cut from the original footage recorded at approx. 21 Mbit/s, and the second from the postproduction movie rendered at approx. 80 Mbit/s.

As you can see, the central element of the ornament is more evident in the high bitrate fragment.

Robin Davies-Rollinson
December 2nd, 2010, 12:22 PM
I don't see how you can put in detail that wasn't there in the first place...

Robert Young
December 2nd, 2010, 01:30 PM
I agree - you can't produce information that was not there to begin with.
However, decompressing your original footage and recoding it into a higher-bitrate, higher-level (10-bit, 12-bit, 4:2:2) digital intermediate codec for editing gives you more redundancy and "digital headroom" for applying extensive effects, color correction/grading, etc., and can result in a better-quality image in the final delivery format. Otherwise, I doubt that there is any serious advantage.

Arkady Bolotin
December 2nd, 2010, 04:29 PM
Robin:

The (additional) information does not appear out of nothing; rather, it was unseen in the first place and is revealed in the process of rendering at a higher bitrate.

Let me take an example: imagine that we have the number 25.63452985032(7)… If the processing power of a PC is limited to a certain extent, then instead of that number we read just 26. However, if the processor could somehow be pushed further, we would get a more accurate approximation of the original number, say, 25.635.
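
To put the analogy in concrete terms, here is a literal sketch of it in Python (an illustration only; note that the finer readings are possible only because the full-precision value is still stored somewhere to round from):

x = 25.63452985032           # the "original" number
for digits in (0, 3, 6):     # more "brain power" = more digits read
    print(round(x, digits))  # prints 26.0, then 25.635, then 25.63453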

Something analogous happens with AVCHD video. It is severely compressed, so to be played back it has to be decompressed first. Where this decompression takes place depends on the video bitrate. The video could be decompressed completely by the processor on the fly, i.e. during replay. Or it could be partially decompressed by the rendering process at a higher bitrate and then played back with a lesser burden on the processor (i.e. more accurately, with a smaller number of artifacts).

Robert:

I believe I have already answered your first argument.

I agree that, besides what I just said, decompressing could also increase redundancy and in doing so facilitate extensive video editing.

Regarding your doubt about the serious advantage of all this, let me respectfully decline to respond. The reason is plain: I do not like vague or fuzzy words, or the concepts behind such words. Please clarify what you mean by serious advantage, and then I will eagerly respond. To me it’s simple: if I can see what I couldn’t see before, that’s an advantage.

John Abbey
December 2nd, 2010, 05:53 PM
I have been using a firmware-hacked Panasonic GH1 and now get somewhere in the 40 Mbps range, and to my eye it looks fantastic.

Robert Young
December 2nd, 2010, 06:25 PM
I have been using a firmware-hacked Panasonic GH1 and now get somewhere in the 40 Mbps range, and to my eye it looks fantastic.

Are you referring to the recording data rate?

Robert Young
December 2nd, 2010, 06:53 PM
Regarding your doubt about the serious advantage of all this, let me respectfully decline to respond. The reason is plain: I do not like vague or fuzzy words, or the concepts behind such words. Please clarify what you mean by serious advantage, and then I will eagerly respond. To me it’s simple: if I can see what I couldn’t see before, that’s an advantage.

All theory aside, the way I usually compare various workflow ideas is to carry them out to the final delivery format to see if there is any difference and, if so, what really looks best. At the end of the day, it is the image quality of the final delivery product that really matters.
So you can take the original AVCHD clip, a higher-bitrate version, and a DI version (like Cineform), render them all out to (for example) Blu-ray, put it on a BD, and watch on a high-quality large-screen HDTV. If you can see a difference, you've got your answer. If you can't see the difference, then it doesn't make any difference.

Bruce Dempsey
December 2nd, 2010, 06:58 PM
Hey Arkady, I've just received a Blackmagic Intensity Shuttle to capture from the HDMI port on my AVCHD-based camera (CX550).
Should be fantastic-looking video of my figure-skating subjects. The bitrate of uncompressed video is the astronomically high figure of 155.52 Mbps (standard SI units), but I plan on capturing as MJPEG so as to realistically store the footage long enough to trim and encode to Blu-ray disc. The figure-skating routines usually do not exceed 4 minutes in length, so it's manageable, I hope.
I must say I enjoy reading your comments.

Arkady Bolotin
December 2nd, 2010, 08:12 PM
It’s four o’clock in the morning here, so I’ll be brief.

Robert, it is simple indeed: if I could not see any difference, there would be no post and I would not have the pleasure of this talk with you.

Bruce, thank you! I think you understand what I’m trying to do… And my congratulations on your new Blackmagic capture board!

Dave Blackhurst
December 2nd, 2010, 09:14 PM
If I may be so bold, I think Arkady may be on to something... if we were talking about a single "still" frame, there are ways to enhance the amount of "data" through interpolation (there was an interesting thread here on DVi at one time about an algorithm that could extract virtually infinite detail from a small digital sample).

With video, if my understanding is correct, the "goal" of compression is to keep as much data as possible, retaining the pixels that stay the same within a frame as much as possible and only generating the "different" information. For the sake of argument, I'd think it's safe to postulate that within all the "same" information is a certain level of "change" which is in fact recorded in the intermediate frames, and if one were to, after decompressing the data stream, utilize that additional data across frames, it should in theory be quite possible to enhance the individual frames to a substantial extent.

I think what is being proposed is that "post decompression" it is possible to recover levels of information which upon "recompression" at a higher bitrate, no longer have to be discarded, thus resulting in a "better" picture.

I've always been fascinated with compression algorithms, and how one can basically toss huge portions of the data, yet still manage to re-create the "original" from what would logically seem to be a flawed (due to "loss of information") digital file. The math goes way over my head, but the concept is still fascinating!

Robin Davies-Rollinson
December 3rd, 2010, 12:59 AM
Well, never let it be said that I don't have an open mind.
I'm going to look into this today, but I want to try it with a test card to see if there will indeed be better linear resolution if I render out at, say, 80 Mbps.
Well done Arkady for giving us something to think about!
One thing, Arkady: how did you export the stills?

Robert Young
December 3rd, 2010, 02:42 AM
Here's the part I have difficulty with - Arkady is decompressing his raw 24 Mbps AVCHD and transcoding/recompressing it to a higher-bitrate format, then comparing the transcode to the original footage.
Is this not the same as transcoding 24 Mbps AVCHD to a DI such as Cineform 100 Mbps .avi? The transcoded footage looks great; it's 10-bit or even 12-bit, 4:2:2, and so forth, and will be more "lossless" downstream.
But even the companies who develop and sell DI software do not claim that their product will produce an image that is "better" than the original footage. Their only claim is to preserve the original image quality throughout the editing process all the way to the final delivery format.
If Arkady's assertion is correct, then it has somehow been completely missed by all of the engineers, software developers, & video professionals who work with these issues on a daily basis, all with the goal of trying to squeeze every fractional improvement possible out of the image data.

Bruce Dempsey
December 3rd, 2010, 04:46 AM
Sony's PMB has yielded the best-looking stills from an m2ts file for me when extracting action shots of moving subjects. (I do not know what Arkady did for his stills.)

Something similar occurs in the HDV world. How is it that when a scene is recorded as HDV and downconverted to SD, then played back on an upconverting DVD player, it looks better than a straight SD recording played back on the same upconverting DVD player?

Arkady Bolotin
December 3rd, 2010, 08:39 AM
Hi guys, thank you to everyone who has participated in this discussion so far.

Regarding all your queries, allow me to respond to them not individually but on the main points.

1. How I did this.

For postproduction I used Sony Vegas Pro 9.0e, choosing as the rendering template the settings of either Windows Media Video V11 or MainConcept MPEG-2 at variable bitrate (actually, I varied only the peak bitrate). I tried different rates, including 25, 40, and 80 Mbit/s.

To take snapshots I used VLC media player (ver. 1.50), and the frame fragments presented in this post were made partly in Windows 7 Paint and partly in Adobe Photoshop.

2. Why it is possible to uncover (recover) additional details by rendering AVCHD footage at higher bitrates.

I think a few factors – not just one – might play a role here. One of them, as I said before, is the reduced processor load during the replay of a less-compressed movie.

Another, as Dave pointed out, might be the utilization of additional data across frames, resulting in substantial enhancement of the remaining frames.

Also, it could be that the inter-frame compression algorithm plays its part. In the course of rendering, this algorithm (together with the block-matching algorithm) can find a matching block with so little prediction error that, once rendered, the overall size of the motion vector plus the prediction error is smaller than the size of a raw encoding.
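
As a toy illustration of that block-matching idea (not the actual AVC motion search - just a minimal exhaustive sum-of-absolute-differences sketch over made-up frame data):

import numpy as np

def best_match(ref, block, top, left, search=4):
    # Exhaustive SAD search for `block` in `ref` around (top, left)
    h, w = block.shape
    best_sad, best_vec = float("inf"), (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y and 0 <= x and y + h <= ref.shape[0] and x + w <= ref.shape[1]:
                sad = int(np.abs(ref[y:y+h, x:x+w].astype(int) - block.astype(int)).sum())
                if sad < best_sad:
                    best_sad, best_vec = sad, (dy, dx)
    return best_vec, best_sad

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (64, 64), dtype=np.uint8)  # "previous" frame
cur = np.roll(prev, (2, 3), axis=(0, 1))               # same content, shifted down 2 and right 3
vec, sad = best_match(prev, cur[16:24, 16:24], 16, 16)
print(vec, sad)  # (-2, -3) 0: the motion vector alone predicts the block perfectly,
                 # so the encoder stores a tiny vector instead of raw pixels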

3. Why it has been completely missed by all of the engineers, software developers, & video professionals who work with these issues on a daily basis, all with the goal of trying to squeeze every fractional improvement possible out of the image data.

No, it hasn’t. Improving video quality by means of increasing bitrates is a well-known and well-studied theoretical approach.

The problem here is practicality. The maximum bitrate of each video standard (24 MBps for AVCHD, 25 MBps for HDV, 40 MBps for Blu-ray Disc) is not determined purely theoretically but in accordance with the corresponding media. Thus, 24 MBps bitrate is the upper limit for flash memory media, 25 Mbit/s is the maximum bitrate for magnetic tape, and 40 Mbit/s is the maximum for optical disc media.

This means that even if we manage to squeeze out some additional detail by increasing the bitrate, the resultant video could be played back only on above-mainstream computers with 3.0 GHz or better processors and RAID-type hard drives.

Guy McLoughlin
December 3rd, 2010, 11:53 AM
The problem here is practicality. The maximum bitrate of each video standard (24 MBps for AVCHD, 25 MBps for HDV, 40 MBps for Blu-ray Disc) is not determined purely theoretically but in accordance with the corresponding media. Thus, 24 MBps bitrate is the upper limit for flash memory media, 25 Mbit/s is the maximum bitrate for magnetic tape, and 40 Mbit/s is the maximum for optical disc media.

This means that even if we manage to squeeze out some additional detail by increasing the bitrate, the resultant video could be played back only on above-mainstream computers with 3.0 GHz or better processors and RAID-type hard drives.

OK, you've lost me here...

40 Mbps is only 40 megabits per second, which translates to 5 MB per second of data. Even my crappiest USB key can handle a 7 MB per second data transfer rate.

Modern SATA hard drives in most computers built in the past two years can handle a data transfer rate of 70-110 MB per second (560-880 megabits per second).
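
For anyone who wants to check the arithmetic, a quick sanity check in Python (1 byte = 8 bits):

for mbit_s in (17, 24, 25, 35, 40, 80):
    print(f"{mbit_s} Mbit/s = {mbit_s / 8:.2f} MB/s")
# 24 Mbit/s -> 3.00 MB/s and 40 Mbit/s -> 5.00 MB/s; even 80 Mbit/s is only
# 10 MB/s, well within what ordinary hard drives and flash media can sustain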

The problem with higher AVCHD bitrates is that they are not part of the standard set by Panasonic and Sony, and thus the encoded video files may see very little benefit from the additional data they have to work with.

I expect the AVCHD standard to evolve, and hope that 4:2:2 color is implemented at some point soon, as I think 4:2:2 color will have a bigger impact on the finished results than just using a higher data rate.

Dave Blackhurst
December 3rd, 2010, 02:19 PM
Guy -
This was the reason compression was originally NECESSARY - it wasn't THAT long ago that I was speaking with someone about editing/producing AUDIO on a computer and was told there simply wasn't enough horsepower or capacity at the time to do it at a cost-effective price point...

Editing DV was, I'm sure, a challenge at first, and I certainly remember the groans from my computer the first time I tried to edit HDV... and the resultant upgrade. AVCHD taxed the machine I had the first time I tried to edit THAT, so of course an upgrade became necessary...

BUT, back to my point, if one has virtually infinite storage/bandwidth/processing/graphics display power, in theory no compression would be needed, as you could simply display a representation of ALL the 1's and 0's.

BIG IF! And not something that one can expect unless your budget comes from "black projects" military funding... As a practical matter, content and programs have an amazing ability to expand to fill the latest hard drives and use the full capacity of the latest hardware, but they MUST be able to run on reasonably up-to-date systems. I had to upgrade the in-laws' machine, as their "adequate" one (for e-mail and docs and pictures, to some extent) totally choked on video... adequate simply was not quite up to date for modern video display/playback.

Back to Arkady's theory - compression is sometimes called "lossless", but typically is "lossy", simply because in order to compress you have to have an algorithm that is in all likelihood imperfect, and some method of reducing the number of 1's and 0's while retaining enough of the original information to offer an "adequate" playback of the original source.

What constitutes "adequate" varies greatly; my camera has multiple bitrate settings and even DV options. I set the highest on the theory that the more data points, the more accurate and less potentially "noisy" (more signal, less noise) my file will be when I go to work with it and play it back.

SO, presuming that a decompressed 24 Mbps data stream has captured a sufficiently detailed "snapshot" but can now reference adjoining "snapshots" in order to add additional information while recompressing, it's reasonable to suspect that you could produce a "better" series of "snapshots", with fewer compromises in data integrity.

It would also result in a significantly larger file size and potential incompatibilities with playback devices (I can burn a BR file to a regular DVD, and I've had good luck playing it back ONCE I lowered the bitrate to around 17 Mbps - the 8 Mbps version looked too degraded to use, IMO).

The "goal" is a balance at each stage of maximum "signal" (data points) with the least amount of loss or noise introduced into the workflow. The burning of an SD DVD by editing/rendering from the original HD files is a good example - if you convert the HD files to SD, THEN edit/render, you started with a significantly lower number of data points.

Generally I like to keep the bitrates/signal levels as "hot" as possible without overloading the inputs/sensor/bandwidth/hardware. What we can do NOW vs. what we could do 5 years ago (let alone 10 or 15, or heaven forbid the dark ages 25+ years ago!) is significant due to improvements in computers generally.

I like to tell my kids about how my first computer had a 30 MEGABYTE hard drive and like 256K of RAM (and don't forget those big computing devices they kept in the basement when I was in college - they always had to feed these strange beasts all these "cards", which all the computer techs used to carry around in boxes... large heavy boxes...).

Remember the now totally useless 1.2M floppy??? Fortunately I just found some cheap card readers on eBay that fit that slot, W7 immediately wants to grab onto the fast flash cards and use them to increase performance!

My Sony CX550 has 64 GIGABYTES, My main box has got 3+ terabytes of storage, and even on the "obsolete" hand me down machines, 2GB of RAM...

Compression is a "necessary evil", but that doesn't mean one has to just accept that it's fully optimized; thus Arkady's line of experimentation is interesting and valid.

David Heath
December 3rd, 2010, 05:38 PM
The problem here is practicality. The maximum bitrate of each video standard (24 MBps for AVCHD, 25 MBps for HDV, 40 MBps for Blu-ray Disc) is not determined pure theoretically but in accordance with the corresponding media. Thus, 24 MBps bitrate is the upper limit for flash memory media, 25 Mbit/s is the maximum bitrate for magnetic tape, and 40 Mbit/s is the maximum for optic disk media.

This means that even if we manage to squeeze some additional detail by the increase of bitrate, the resultant video could be played back only on above-the-mainstream computers powered with 3.0 GHz or better processors and equipped with RAID-type hard drives.
No, Arkady, that's not true. Firstly, you're confusing MegaBITS (written Mbs) with MegaBYTES (written MBs), and Mbs and MBs mean very different things. (In this context, 1 MBs = 8 Mbs.)

AVC-HD has a maximum data rate of 24Mbs (NOT 24MBs) - but that's not the upper limit for flash memory media by far - even SDHC will go several times higher, Compact Flash a lot faster still, and P2 and SxS into the hundreds of Mbs.

As far as the basic argument goes, video compression works by discarding the least significant data, hopefully insignificant enough that its loss won't be noticed - at least for casual viewing. But once discarded, that's it - it's gone. To use your analogy, if we start with 25.63452985032, the act of (lossy) compression may truncate it to 25.6345299 - and that's it. Feed that number into the best computer in the world and it can never know that the next few digits should be ....5032. As far as this goes, that's been lost for ever.
....presuming that a decompressed 24 Mbps data stream has captured a sufficiently detailed "snapshot" but can now reference adjoining "snapshots" in order to add additional information while recompressing, it's reasonable to suspect that you could produce a "better" series of "snapshots", with fewer compromises in data integrity
What you're suggesting is effectively interpolation. It's true that it may make something "look" better superficially, but it won't - can't - add back in data that's been lost.

As an example, let's think of a still photo, 1,000x1,000 pixels in size, viewed as an image of 10"x10" and full of good detail. Now we downscale to an image of just 100x100 pixels but still want to view a 10"x10" image. There are two basic ways we could do it.

The first would be to simply make each pixel fit an area of 0.1"x0.1". It would work - but have a very blocky look to it.

Better would be to upscale the image back to 1,000x1,000 - effectively "guessing" nine values in between the ones we know along each axis. That would smooth out the blockiness and make it more viewable, but it wouldn't get back the detail that had been lost. It would look nicer to the eye - but be much softer than the original.

Mathematically, think of a string of numbers which may represent brightness levels along a line across an image - let's say 1, 1, 1, 1.4, 1.6, 1.9, 2, 2, 2 and so on. Now we "compress" it by removing half of the values to get 1, 1, 1.6, 2, etc. Can anyone without access to the original ever work out what the in-between numbers were?

A first attempt may be to simply repeat each number twice - so 1, 1, 1, 1, 1.6, 1.6, 2, 2, etc. Well, not good. Interpolation can do better (averaging the before and after values), giving 1, 1, 1, 1.3, 1.6, 1.8, 2, 2, etc. - a closer representation, but still not correct. You can never replace real data once it's been discarded.
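
To make the toy example concrete, here is a short runnable sketch using the same numbers: it "compresses" by dropping every other value, then reconstructs by repetition and by interpolation, and reports how far each reconstruction strays from the original.

orig = [1.0, 1.0, 1.0, 1.4, 1.6, 1.9, 2.0, 2.0, 2.0]
kept = orig[::2]                       # "compress": keep every other value

repeat = [v for v in kept for _ in (0, 1)][:len(orig)]   # repeat each value twice

interp = [kept[0]]                     # interpolate: average neighbouring pairs
for a, b in zip(kept, kept[1:]):
    interp += [(a + b) / 2, b]
interp = interp[:len(orig)]

for name, rec in (("repeat", repeat), ("interpolate", interp)):
    err = max(abs(a - b) for a, b in zip(orig, rec))
    print(name, rec, "worst error:", round(err, 2))
# Neither reconstruction recovers 1.4 or 1.9 exactly - that data is gone.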

Now real video compression is far more complicated, obviously, and DCT etc. techniques are far cleverer than simply dropping samples, let alone getting into frame-by-frame techniques. But the basic principle remains - once you discard data, there's no getting it back. Sorry.

As far as AVC-HD goes, then Guy is absolutely right. You could encode at a higher datarate - but then it wouldn't be AVC-HD! The implication may be that an NLE (say) wouldn't be able to cope with it.

Is the spec likely to be extended? I think it's quite likely to end up with a mode to cover 1080p/50, but I doubt anything else. The real question is what would be the point? If you compare it to (say) XDCAM 35Mbs, AVC-HD may *THEORETICALLY* be able to match it for quality - but at the expense of a lot of complexity, processing power etc. The gain then becomes (for a given quality) a smaller file size, the disadvantage far greater complexity.

Now that 35Mbs can be easily recorded to cheap media like SDHC, the incentive to compress the video ever harder becomes far less. The original rationale behind developing AVC-HD was to enable decent quality on cheap memory - not have to use P2 or SxS. Once memory advances to the point where a good quality codec can be recorded to cheap memory (think of the Canon cameras and 50Mbs 422 XDCAM to Compact Flash) much of the rationale behind AVC-HD dies. The advantages aren't worth the disadvantages.

Robert Young
December 3rd, 2010, 05:47 PM
Why it has been completely missed by all of the engineers, software developers, & video professionals who work with these issues on a daily basis, all with the goal of trying to squeeze every fractional improvement possible out of the image data.

No, it hasn’t. Improving video quality by means of increasing bitrates is a well-known and well-studied theoretical approach.

Exactly my point.
Recompressing source footage to higher bitrates has been studied extensively and is the basis of many commercial products, and although the higher-bitrate codec provides many benefits, it does not produce images of better quality than the original footage. That seems to be the consensus of the people who work in this field, and of those who design and sell this type of product.

Robert Young
December 3rd, 2010, 06:13 PM
Something similar occurs in the HDV world. How is it that when a scene is recorded as HDV and downconverted to SD, then played back on an upconverting DVD player, it looks better than a straight SD recording played back on the same upconverting DVD player?

This is a completely different situation altogether.
You are recording at a high resolution (HDV - lots of information captured) and converting to a lower-rez, lower-data-rate format (DVD), versus recording at a low resolution (DV) and converting to a different low-rez, lower-data-rate format (DVD).
Of course the footage sourced at high resolution will make a better-looking DVD.

Arkady Bolotin
December 3rd, 2010, 06:36 PM
Guy:

Regarding your first argument, comparing the maximum bitrate for Blu-ray disc (40 MBps) with your “crappiest USB key”, which can handle a 7 MB per second data transfer rate, I must say you are mixing up two things.

In digital multimedia, bitrate represents the encoding level (the rate of compression): the lower the bitrate, the higher the compression level (and vice versa). So, 40 MBps for Blu-ray means that after data compression, each second of BD video playback uses only 5 Mbytes of data.

Meanwhile, the data transfer rate (DTR) is the amount of data moved from one place to another in a given time. So, the speed with which your USB key can transfer information is 7 MB per second.

Regarding your remark about 4:2:2 chroma subsampling: its implementation might come true sooner than you expect. Even the standardization of H.264/AVC (first completed in May 2003) has extensions enabling Y’CbCr 4:2:2 and Y’CbCr 4:4:4. Right now, these options are included only in the highest profiles of the format (Hi422P and Hi444P).

David:

When you said that 24 Mbit/s “is not the upper limit for flash memory media by far - even SDHC will go several times higher, Compact Flash a lot faster still, and P2 and SxS into the hundreds of Mbs", you unfortunately made the same mistake as Guy: you mixed up the compression level of the AVCHD standard and the DTR of the memory media.

Your second argument, which states that “video compression works by discarding the least significant data”, holds true only in the case of lossy data compression.

Meanwhile, lossless data compression allows the original data to be reconstructed exactly from the compressed data. And, as we know, the H.264/AVC compression (used by AVCHD) is very close to lossless.

Robert:

You’re talking semantics again. What does it mean that “although the higher-bitrate codec provides many benefits, it does not produce images of better quality than the original footage”?

AVCHD video at higher bitrates might not be practical (or commercially applicable), but it still provides more detailed images. Isn’t that “better quality”?

Dave:

Very well written, Dave. A nice essay; I really enjoyed reading it. Kudos!

Robert Young
December 3rd, 2010, 07:46 PM
Robert:

You’re talking semantics again. What does it mean that “although the higher-bitrate codec provides many benefits, it does not produce images of better quality than the original footage”?

AVCHD video at higher bitrates might not be practical (or commercially applicable), but it still provides more detailed images. Isn’t that “better quality”?

I'm not sure we are talking about the same thing:
If you mean recording at higher bitrates, of course the quality is better.
I've been under the assumption that we were discussing recording at lower bitrates and then transcoding/recompressing that original footage to a higher bitrate.
The point that David and I are making is that recompressing low-bitrate footage to a higher-bitrate format does not improve the image quality. It does not retrieve or reconstruct any data that was lost in the original acquisition compression.

Guy McLoughlin
December 3rd, 2010, 09:07 PM
If you compare it to (say) XDCAM 35Mbs, AVC-HD may *THEORETICALLY* be able to match it for quality

It's been established by many people evaluating the new Panasonic AF-100 camera (and by Barry Green using a Panasonic AVCHD recorder connected to a Sony EX-1 camera) that Panasonic's implementation of its 24 Mbs AVCHD codec is superior to Sony's implementation of the 35 Mbs XDCAM codec. You seem to be very stubborn about not acknowledging this when it's been shown over and over again. I have yet to see ONE example where Sony XDCAM 35 Mbs produces a better image than Panasonic AVCHD 24 Mbs.

- but at the expense of a lot of complexity, processing power etc. The gain then becomes (for a given quality) a smaller file size, the disadvantage far greater complexity.

Totally agree with you here. There is no free lunch, so AVCHD decompression requires a lot of processing power. With the release of the AF-100 camera less than a month away, I expect Apple will finally have to make Final Cut Pro AVCHD-friendly. (All of the worst examples of AVCHD editing that I've seen were done by FCP users.)

Once memory advances to the point where a good quality codec can be recorded to cheap memory (think of the Canon cameras and 50Mbs 422 XDCAM to Compact Flash) much of the rationale behind AVC-HD dies. The advantages aren't worth the disadvantages.

I disagree here. I expect AVCHD to continue to be extended over the next couple of years, and eventually see an AVCHD 4:2:2 35/50 Mbs standard added to the mix. ( Panasonic will eventually build a "big brother" to the AF-100 )

Guy McLoughlin
December 3rd, 2010, 09:23 PM
In digital multimedia, bitrate represents the encoding level (the rate of compression): the lower the bitrate, the higher the compression level (and vice versa). So, 40 MBps for Blu-ray means that after data compression, each second of BD video playback uses only 5 Mbytes of data.

The compression level directly relates to the amount of space on disk required to store a still or video image. For playback, the media containing the video file has to maintain a sustained minimum data transfer rate equal to the bitrate of the stream to guarantee proper playback. Thus a video file compressed at 40 Mbs requires playback media that can deliver a sustained 40 Mbs data transfer rate to deliver a proper image (assuming the rest of the electronic processing path can handle this amount of data).

Many of the new Blu-ray players have a USB port, so you can indeed copy your BD video file to a USB key and play it at full resolution on your TV set. Panasonic Blu-ray players even understand the AVCHD file format, so you can pop an SDHC card straight from your camera into the SDHC slot on the player and play the footage you shot seconds earlier (no additional editing/processing required).

Arkady Bolotin
December 4th, 2010, 08:07 AM
Robert:

I very much regret to say that I suspect either you haven’t read my response to David’s comments or, if you have, you read it carelessly.

Nevertheless, let me repeat my arguments.

Your statements that “recompressing low-bitrate footage to a higher-bitrate format does not improve the image quality” and that “it does not retrieve or reconstruct any data that was lost in the original acquisition compression” would be correct under the following two conditions:

1. The H.264/AVC compression (used by the AVCHD format) was of the lossy type, which discards (loses) some of the data in order to achieve compression.

2. The snapshot fragments at the beginning of this thread did not exhibit any difference; or those fragments did not exist at all (nor did my experiments).

However, both these conditions are false. First, by its results the H.264/AVC compression can be classed as lossless; second, the fragments do exist and they do show evidence of improvement in the detail department.


Guy:

Yes, it’s correct that there is a correlation between the maximum multimedia bitrate and the data transfer rate (DTR) of the corresponding media (if this is what you are trying to say).

For example, the Blu-ray disc max bitrate of 40 Megabit/s means that the corresponding media – an optical disc – must provide a data transfer speed of not less than 5 Megabytes per second.

Analogously, the AVCHD max bitrate of 24 Megabit/s means that the media – flash memory – must supply data at a speed of not less than 3 Megabytes per second.

David Heath
December 4th, 2010, 01:40 PM
David:

When you said that 24 Mbit/s “is not the upper limit for flash memory media by far - even SDHC will go several times higher, Compact Flash a lot faster still, and P2 and SxS into the hundreds of Mbs", you unfortunately made the same mistake as Guy: you mixed up the compression level of the AVCHD standard and the DTR of the memory media.
No, no mistake by either of us. The answer you quote above was a direct response to your earlier statement that ”Thus, 24 MBps bitrate is the upper limit for flash memory media, 25 Mbit/s is ………….” You are clearly referring to flash memory – not AVC-HD – and that is what Guy and I responded to.

Also note that AVC-HD is not (as you seem to imply) uniquely tied to flash memory anyway. The spec (AVCHD - Wikipedia, the free encyclopedia (http://en.wikipedia.org/wiki/AVCHD#Specifications) ) says:
Developed jointly by Sony and Panasonic, the format was announced in 2006 ........AVCHD is a file-based format and ........., video can be recorded onto DVD discs, hard disk drives, non-removable solid-state memory and removable flash memory..............
However, both these conditions are false. First, by its results the H.264/AVC compression can be classed as lossless; second, the fragments do exist and they do show evidence of improvement in the detail department.
However good AVC-HD may be, I really don’t think it can be called “lossless” by any stretch of the imagination. How great the losses are will vary hugely with picture content. But if your point about it being “lossless” were correct, then surely your whole argument falls down anyway? If 24Mbs AVC-HD is lossless, how can it be improved on by re-encoding to a higher bitrate? “Lossless” implies “perfect”, and how can you improve on that?

Examining the samples you show more closely, I’m not really sure either is much different from the other. And are you certain they both correspond to EXACTLY the same frame? They are different – but I’m not convinced one is actually BETTER than the other. Likewise, have you compared a succession of frames throughout that shot? Without doing that, there’s no way to eliminate differences caused by random pixel-level variations during recompression.
I expect AVCHD to continue to be extended over the next couple of years, and eventually see an AVCHD 4:2:2 35/50 Mbs standard added to the mix. ( Panasonic will eventually build a "big brother" to the AF-100 )
At heart, H264 and MPEG2 are built on the same foundation. H264 has an additional range of tricks it can call on to help, but the improvement they give is not linear with bitrate. The tricks work far better at lower bitrates and give far less relative improvement at higher bitrates.

So a higher-than-24Mbs version of AVC-HD will progressively lose its theoretical advantage over comparable MPEG2, whilst being far more difficult to work with. The more the bitrate gets upped, the less point there is to AVC over MPEG2.
If a big brother to the AF100 emerges, I’d expect it to be AVC-Intra 100 to SDXC cards, which potential “big brother” users would find far more useful. (And SDXC – not SDHC – already has the speed required for 100Mbs.)

Dave Blackhurst
December 4th, 2010, 01:42 PM
It struck me that the single linear digit isn’t a good analogy, except on the most simplistic level... a more complex example is necessary to explain why, with “interpolation”, it is possible to “recreate” otherwise “lost” data.

Even in a still photo, you have not a linear series of numbers (each pixel is represented not by A digit from 0-9 but by a SERIES of digits) but rather a matrix of numbers. In other words, there is not just “X” but “X” and “Y”, and except at the very edges, where there aren’t adjacent pixels on all sides, each pixel has EIGHT adjacent pixels from which to potentially draw information. Each pixel is not entirely independent but is related to the surrounding pixels - part of any algorithm is discarding “redundant” information while leaving a marker to “restore” that information upon decompression by referencing the surrounding pixel data.

I believe each pixel, being composed of multiple bits, would in effect also have a third, depth-like dimension.

So in order to understand the potential more accurately, one needs to envision that EACH pixel has a given depth and eight adjacent “information sets” of some depth. It doesn’t take very long to realize the data volume we are dealing with, and why compression becomes a necessity in order to deal with that volume.

We all realize that in the single linear digit example, once a number is GONE, it’s GONE... but for the sake of argument, when dealing with compression schemes you have to look at it three-dimensionally - IOW, the algorithm looks for numbers (data bits) which can be “reduced” because of redundancy and recreated from the surrounding data points during decompression. The higher the level of compression (or the lower the level of horsepower/hamsters available to process the data), the more data points must be discarded, and thus the greater the reduction in the quality of the eventual reconstruction.

If we ignore the “edges” (the worst case being a corner, with 7 adjacent points, or an edge with 11), each digital pixel “data point” actually has 26 directly adjacent points in a 3-dimensional (X, Y, & Z) matrix... depending on the efficiency and accuracy of the compression scheme (as noted, they do vary), you can see where there is much more information to draw from - and that’s before you begin to consider that the surrounding pixels provide a degree of information that can be utilized, to lesser and degrading degrees, the farther away they are from the “target pixel”...
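
(Those adjacency counts are easy to verify - a quick sketch that counts the grid cells touching a given cell, the cell itself excluded:)

def neighbours(positions_per_axis):
    # 3 positions along an axis if the cell is interior, 2 if it sits on a boundary
    n = 1
    for p in positions_per_axis:
        n *= p
    return n - 1  # exclude the cell itself

print(neighbours([3, 3]))     # 8  - interior pixel in 2-D
print(neighbours([3, 3, 3]))  # 26 - interior point in 3-D
print(neighbours([2, 2, 2]))  # 7  - 3-D corner
print(neighbours([3, 2, 2]))  # 11 - 3-D edge
print(neighbours([3, 3, 2]))  # 17 - 3-D face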

Add to that the fact that as soon as we speak of video, we are talking not about one snapshot but about 30 per second, in motion, which adds another very complex set of dimensions... and I think I just gave myself a headache!

Keep in mind that things like face tracking, autofocus, exposure, etc. are crunching these data points in REAL TIME, or at least as fast as the little processing hamsters can run, in order to deliver the best possible results!



I'm hoping my offspring, who seem to excel in mathematics, are better prepared for the digital world - I understand films, emulsions, stuff I can handle and "see"... once you start converting everything into 1's and 0's, it gets so much harder to get a handle on!!

I am finding this discussion fascinating; I think we are actually much on the same page, as there is an acknowledgement that all AVCHD is not necessarily "alike" - I've been shooting AVCHD cameras for a while, the first ones IMO were "better" or equal to HDV, but as things have advanced, they have allowed me to get much better footage, often in much tougher shooting conditions, than any HDV/tape camera I could afford - some of that is hardware, but I have to presume that algorithms are also being tweaked to improve performance.

My opinion is that it's taken a few generations to get to where, even with dedicated pixel peeping, it's hard to argue with the performance... but we're close... at the least we are approaching the point of diminishing returns under the existing standards.

Before one declares the current state of the art “the best that can be achieved”, one should consider that man isn’t supposed to fly any more than the humble honeybee... it’s only a matter of time, creative thinking, and dedicated effort before the “impossible” becomes reachable... barriers are meant to be broken!

Remember too that the scientists/engineers in the lab must eventually be told to release their “monsters” to pay the bills; otherwise they’d always be tweaking and improving and we’d never get to play with the new toys... and so next year’s new toys will almost always bring a year’s worth of creativity and tweaking to the table... such an exciting and fun time to be dealing with technology!

Robert Young
December 4th, 2010, 03:43 PM
...would be correct under the following two conditions:

1. The H.264/AVC compression (used by the AVCHD format) was of the lossy type, which discards (loses) some of the data in order to achieve compression.

2. The snapshot fragments at the beginning of this thread did not exhibit any difference; or those fragments did not exist at all (nor did my experiments).

I disagree with your assumptions:
1) AVCHD is not considered a lossless codec - in fact, most consider it to be very lossy. It does not tolerate repeated recompression well at all, which is why it's often converted to a DI for extensive or multi-system editing. Decompressing AVCHD from the original file and recompressing it again to AVCHD - no matter what the data rate - will actually cause you to lose quality.
2) Honestly, I don't think I can actually see any significant quality difference in the two frame grabs from your original post.
My main point throughout this discussion is that the notion of recompressing original footage to a higher data rate is exactly what occurs when we use a DI codec for editing. It is established knowledge that the high-data-rate DI image is not of higher quality than the original lower-data-rate acquisition footage.
The companies that develop and sell DI products readily acknowledge this fact.
Clearly, if there were even a slight chance that the image quality were improved by this process, these companies would be marketing their products extensively on that basis.

Arkady Bolotin
December 4th, 2010, 04:51 PM
David:

I must confess I like your arguments; they’re logical, smart, and mostly flawless. Even so, I found a few of them to be wrong.

First, regarding your definition of lossless compression as “perfect”: I have to say lossless does not mean perfect, it means “without loss” (simply because, without a proper definition, the word “perfect” has no meaning).

Second, to your question of how I can improve on that, I answer this: by partly decompressing AVC-compressed video (rendering it at higher bitrates) we lessen the processor load and, as a result, the number of artifacts during replay.

Third, concerning your reservation about whether we can classify AVC compression as lossless: Technically speaking we cannot, but since this compression design includes many lossless compression algorithms, in many cases we may expect the exact (or almost exact) reconstruction of data compressed with H.264/AVC. In which cases, and to what extent, it would be exact would – as you rightly put it – apparently depend on the footage content.

Regarding your observation of the frame fragments, I have nothing to add. As you understand, it was virtually impossible to extract the same frame from two different clips, so your assumption that they differ just because of random pixel variation must stand.


Dave:

You did it again! Very, very nice! Even though I have studied it for many years, never before has the theory of data compression seemed so appealing to me! For that I want to thank you!


Robert:

Sorry, Robert, I noticed your reply only at the moment I finished my responses to the previous posters. But I believe most of what I have written to David is relevant to your comments. If not, let me know and I’ll answer them later. Again, my sincere apologies.

David Heath
December 4th, 2010, 07:19 PM
First, regarding your definition of lossless compression as “perfect”: I have to say lossless does not mean perfect, it means “without loss”.....
I can't help feeling that's just playing with words, sorry.
By partly decompressing AVC-compressed video (rendering it at higher bitrates) we lessen the processor load and, as a result, the number of artifacts during replay.
But that's not going to happen like that. If the processor can't cope it will stutter or drop frames - not just introduce a few more artifacts. And what basis do you have for saying that higher bitrate AVC will need less processor power than lower bitrate AVC?
Technically speaking we cannot, but since this compression design includes many lossless compression algorithms, in many cases we may expect the exact (or almost exact) reconstruction of data compressed with H.264/AVC.
It may include some lossless algorithms, but it includes a lot of lossy ones as well! How much loss will depend on the bitrate (the lower the bitrate, the greater the loss), but you can be sure there will be some.
Regarding your observation of the frame fragments, I have nothing to add. As you understand, it was virtually impossible to extract the same frame from two different clips, so your assumption that they differ just because of random pixel variation must stand.
Well, if the comparisons offered aren’t of the same frame, pre- and post-recompression, I don’t think any sensible conclusion can be drawn at all. How can you be sure that the differences between the two you’ve shown are due to your theory, and not simply to randomness and noise? Frankly, that’s a far more likely explanation for what you’re seeing.

If you can’t make a same-frame comparison (by looking at a frame, say, 15 frames after a vision cut?), the only other way to do a meaningful test would be to take a random sample of (say) 10 original frames and a random sample of 10 recompressed frames. To make it scientific, you’d then need to present all 20 randomly to someone who didn’t know which was which, and ask them to choose the 10 they thought were best.

If they picked out the 10 recompressed frames, I might start to get interested. I'm not holding my breath, though.
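
Setting that test up is trivial in code - a sketch, with placeholder filenames standing in for the exported frames and a coin-flipping stand-in for the viewer:

import random

originals    = [f"orig_{i:02d}.png" for i in range(10)]  # 10 original frames
recompressed = [f"rec_{i:02d}.png" for i in range(10)]   # 10 recompressed frames

pool = originals + recompressed
random.shuffle(pool)                   # the viewer must not know which is which
print("Pick the 10 best-looking frames from:", pool)

picks = set(random.sample(pool, 10))   # stand-in for the viewer's actual choices
hits = len(picks & set(recompressed))
print(f"{hits}/10 picks were recompressed frames")
# Chance alone averages 5/10; only a consistent 9-10/10 over repeated
# runs would suggest the recompressed frames really do look better.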

Dave - yes, my analogy was highly simplistic. But it does hold true IN PRINCIPLE for real-world systems. What you are describing is lossless compression, and yes, it’s obviously possible. But the level of compression being spoken of here is far, far higher than can be achieved losslessly, certainly for any real-world images. The trick is to do it in such a way that the most insignificant details get discarded first - ideally, only the ones that the eye is unlikely to notice anyway, though even that may reduce the scope for post manipulation.
........there is an acknowledgement that all AVCHD is not necessarily "alike" - I've been shooting AVCHD cameras for a while, the first ones IMO were "better" or equal to HDV, but as things have advanced, they have allowed me to get much better footage, often in much tougher shooting conditions, than any HDV/tape camera I could afford - some of that is hardware, but I have to presume that algorithms are also being tweaked to improve performance.
Now that is without question. The AVC-HD spec allows various “tricks” to be used on top of basic MPEG2 techniques to improve the quality/bitrate ratio - but how many are used is not specified. The spec only fully defines the DECODER, not the coder. An early AVC coder may not have been much better than basic MPEG2.

Hardly surprisingly, cheaper cameras aren't likely to be as good as more expensive ones, and the coders improve with general technical advances. I fully expect the AF100 to have a better coder than the HMC150, for example - technology moves on.

In the UK that was demonstrated recently with the HD broadcasts when the BBC got new hardware coders. (In the UK, HD is broadcast as H264.) They were able to reduce the bitrate substantially with not much impact on overall quality.

Dave Blackhurst
December 5th, 2010, 01:13 AM
In the end it's all a matter of degree...

You've got to remember that "reality" is not 1's and 0's, but rather a highly complex multidimensional environment... one that no two people can perceive in "exactly" the same way. ANY device or methodology to "capture" or record "reality" introduces another complex series of variables...

IOW, "perfect" doesn't exist. You're merely creating a representation, with some degree of degradation or enhancement at various stages.

I don't think the deficiencies of technology/flaws in reproduction/losses of data points count in the end - if the CONTENT has value or "moves" the viewer, it doesn't matter to the "average audience member" if it looks like poo, or has flaws...

While it's a noble task to improve upon technology and max out the capabilities of our "toys", one should remember that human perception fills in most of the "holes" anyway, sometimes in amusing ways (optical illusions illustrate how "easy" it is to fool the eye).

Most compression and interpolation schemes are just another way to achieve the most efficient balance between image quality and the capabilities of the hardware, while "fooling" the eye of the viewer in a pleasing way... and are inherently "flawed", regardless of whether mathematically they are "lossless" or "lossy".

Over time, with increases in hardware capabilities and improvements in software, we should get closer to "perfect", or at least the perception thereof. It still will come down to the skill and vision of the human talent that will determine the actual value of the end product.

Arkady Bolotin
December 5th, 2010, 05:46 AM
David:

Thank you for staying in this discussion so long… Now back to your arguments.

1. I was disappointed in you when you said that “If the processor can't cope it will stutter or drop frames - not just introduce a few more artifacts”. No, David, it is not that simple. Decompressing on the fly (i.e. during replay) is a much more complicated process. In the case of near-100-percent overload, yes, frame dropping and stuttering may take place, but what if the load is less than 100%?

2. To the question “What basis do you have for saying that higher bitrate AVC will need less processor power than lower bitrate AVC?” I answer on the basis of the compression rate: the higher the bitrate, the lower the compression rate of the media.

3. When you argued about the proportion of lossless and lossy algorithms within the AVC compression design, you unfortunately disappointed me even more: to say that AVC has a lot of lossy compression is one hell of an exaggeration.

4. You were right when you expressed doubt about whether the difference between the two fragments was real and not introduced by statistical noise. So, I have prepared another pair of fragments, cut from the original footage (AVCHD, 24 Mbit/s) and the rendered clip (VC-1 format, maximum bitrate 82 Mbit/s).

Look closely at the window frames of the mosque and the sky near the lamppost. As you can see, the movie rendered at 80 Mbit/s has fewer artifacts and more subtle detail.

Ron Evans
December 5th, 2010, 08:05 AM
It seems to me that the main difference is brought out by the way the stills are extracted. Both streams are long-GOP, but transcoding to a higher bitrate will yield more information in each frame for the still extraction to use. There are big differences in how my various NLEs extract stills, with Vegas at the bottom of the list. I don't have VLC to compare. PMB does the best; it has a specific feature for still extraction which may well use a different approach.

The video playback may well be the same, unless the transcoding employed some upscaling - like the Toshiba super scaling used to upconvert SD to HD - which produces results similar to the stills shown. I have Cineform and also use Canopus HQ; neither of these shows any difference in video playback from the original AVCHD.

Ron Evans.

Arkady Bolotin
December 5th, 2010, 08:49 AM
That is a valuable point, Ron. Really, the still-extraction process may contribute significantly to the apparent difference.

My answer is this: since I use the same software (VLC media player) to extract stills and to play back the clips, the same mechanism that might yield more information during still extraction should also be in play during replay. Thus, whatever the reason, with the increase in bitrate we get improved footage.

Regarding VLC itself, I can say it has a plethora of internal codecs, including an H.264/MPEG-4 AVC encoder (x264) that is far more advanced than what Windows 7 Media Player has.

Dave Blackhurst
December 5th, 2010, 01:40 PM
The experience I had with burning BR to a regular DVD was that if you exceed the bitrate the media/hardware is capable of playing back, it will TRY, but then slowly grind to a halt and just give up. The BR templates in Vegas have two “default” bitrates, 25 and 8 Mbps - to get the best results, I shot for what I understand is the maximum bitrate for a regular “red” DVD disc, which is 18 Mbps.

What is important to note is that a disc burned at 8 Mbps looked like poo - it may as well have been SD, as it suffered significant “lost” information in order to “fit” the 8 Mbps stream. A 25 Mbps disc would attempt playback and look good, only to “choke”, but the “optimized” bitrate will both play back and look quite good (the source AVCHD was from Sony consumer cams at 17 Mbps max). Once the bitrate was lowered to within spec, playback was fine and smooth, without trouble.

If you have storage media capable of higher bitrates, along with playback equipment capable of that bitrate, then per my earlier postings you can reduce the needed compression and, in theory, increase the amount of detail and the smoothness of the individual frames.

I think the question has been: “is it possible to ‘restore/extract’ information encoded in the lower-bitrate data stream upon decompression if, when you recompress, you use a HIGHER bitrate?” Arkady has experimented, and it would seem possible. I suspect that the “math” would under some circumstances make this possible, depending on the efficiency and accuracy at many points along the chain. AVCHD seems to be pretty robust, and the manufacturers are definitely squeezing better and better image quality out of it...

From a practical standpoint, I'm pretty sure you won't see spec changes until the hardware is fully capable of RELIABLE playback. The simple reason is that if you release a spec for a codec/format but no one can play it back reliably, you've got trouble. The 1080/60p from the latest 700-series consumer Panasonics is an example - it looks amazing, but it's taken a while for people to figure out how to process it!

From a "practical" standpoint, I'd think entities like law enforcement, advanced imaging sciences, and espionage would likely already be aware of the possibilities, and have the $$$ to spend on such "enhancement" capabilities. The increase in detail and smoothness would probably not even be noticed by 99.99% of the viewing population, but we'll see it one of these days as the technology trickles down.

Dave Blackhurst
December 5th, 2010, 03:10 PM
It suddenly occurred to me that there's a "variable" here that will drastically affect the possible improvements...

MOVEMENT - we all are aware that motion is a problem, and that Codecs tend to "break down" when there's a lot of fine motion in the frame.

Arkady - your samples look like they are from frames with minimal movement, where the compression is LEAST likely to have to discard data. In order to squeeze in the added data from movement between frames, the CODEC has to toss increasingly large amounts of information.

IOW, if you have a relatively static shot (or a series of frames) with little movement (basically a “still”), the codec can compress at nearly 100% efficiency - the repetitive data comprises a large component of the adjacent frames... I'd expect in this case the probability of restoring “lost” detail might be pretty high.

Now, let's take a series of frames with, say, 25% non-redundant information (I'm just tossing out random percentages, as I don't know where breakdown occurs) - the codec is still able to compress relatively well, maintaining the bitrate AND the level of detail, but by nature more will be “lost”, because the non-repetitive information must take up more “space”, or bits.

Bump that to maybe 50% moving/non-redundant, and I'm guessing the bitrate is maxing out and beginning to strain to retain detail...

Let's say at 75% the CODEC is now past the point of diminishing returns and is tossing data just to stay under the “max bitrate” threshold; you start to get “noise”, “distortion”, etc.

If 100% of the frame is moving/non-redundant (fast pans, bouncy handheld footage), you've got a big hairy MESS, as the CODEC is trying to empty a bathtub with a teaspoon... the “data” overwhelms the ability to compress it in real time!

I'm going to postulate that you wouldn't be able to “recover” ANY additional detail as the motion component of the footage increasingly “stresses” the compression algorithm - there's probably a “curve” that could be plotted of how much “overhead” the CODEC has at a given percentage of movement in the frame; at some point you literally hit the ceiling as the motion increases, and the image quality begins to degrade beyond the acceptable.


Just thought this was worth adding to the mix. The moral: don't whip-pan, be conscious of what motion in the frame is doing, and, oh yeah, keep in mind that shallow DoF “helps”, as blurry, non-detailed backgrounds require less data than highly detailed, infinite-DoF shots...

Arkady Bolotin
December 5th, 2010, 03:54 PM
Dave, your last comments were right on the nail! Even if I tried to wrap up the moral of this discussion myself, I could hardly do better.

Yes, it’s absolutely correct that the H.264/AVC video format (used by AVCHD) is capable of nearly lossless coding. In practical terms this means that, under the right circumstances, we can expect the exact (or almost exact) restoration of the compressed data.

Therefore, it is possible to decrease the compression of the recorded AVCHD video first (by rendering it into some suitable format) and play it back afterward.

But the problem here, as Dave elegantly put it, is reliable media for the playback of such high-bitrate video. Ask yourself how you can run 80-Mbit-per-second video, and I am sure the only answer you will find right now is a pretty fast hard drive coupled with a powerful processor. Zero distribution options, no archiving.
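
For a sense of scale, here is the back-of-envelope arithmetic behind that "pretty fast hard drive" remark (just a sketch):

    # What an 80 Mbit/s stream actually demands from the playback chain.
    bitrate_mbit = 80
    mb_per_sec = bitrate_mbit / 8              # 10 MB/s sustained read
    gb_per_hour = mb_per_sec * 3600 / 1024     # ~35 GB per hour of footage
    print(f"Sustained read: {mb_per_sec:.0f} MB/s")
    print(f"Storage: {gb_per_hour:.0f} GB per hour")

Ten megabytes per second is manageable for a hard drive; it's the realtime decoding, and the lack of any delivery medium, that hurt.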

So, until that wonderful moment when the industry comes up with a commercially viable way to play back high-bitrate video, it will live only in our computers.

P.S. Just saw your last addition. Unfortunately, I have no more time to write, but, in general, yes, you guessed right: I used a tripod when I filmed these scenes (and no panning, no zooming).

David Heath
December 5th, 2010, 04:20 PM
3. When you argued about the percentage of lossless and lossy algorithms within the AVC compression design, you unfortunately disappointed me even more: to say that AVC has a lot of lossy compression is one hell of an exaggeration.
But that's not what I said. Look back and you'll see the words are "It may include some lossless algorithms, but it includes a lot of lossy ones as well!" - the spec has lossy algorithms as well as lossless ones. How lossy the compression itself is depends on factors such as bitrate - if it's very high there'll be little loss, if low (as when AVC is used for web video) the losses will be very large. What you can't say is "by its results the H.264/AVC compression can be classed as lossless one".

Look, the latest pictures you've posted show visible artifacts - that's the whole reason you've linked to them. Surely that itself is proof that H.264/AVC is lossy?
4. You were right when you expressed doubt about whether the difference between the two fragments was real and not introduced by statistical noise. So, I have prepared another pair of fragments, cut from the original footage (AVCHD, 24 Mbit/s) and the rendered clip (VC-1 format, maximum bitrate 82 Mbit/s).

Look closely at the window frames of the mosque and the sky near the lamppost. As you can see, the movie rendered at 80 Mbit/s has fewer artifacts and more subtle details.
If they aren't the same frame, original and recompressed, there is really little point in comparing them. Any differences seem pretty slight anyway.
I think the question has been "is it possible to 'restore/extract' information encoded in the lower bitrate data stream upon decompression if when you recompress you use a HIGHER bitrate"? Arkady has experimented, and it would seem possible. I suspect that the "math" would under some circumstances make this possible, depending on the efficiency and accuracy at many points along the chain. AVCHD seems to be pretty robust, and the manufacturers are definitely squeezing better and better image quality out of it...
They are squeezing better and better out of it by improvements to the CODERS - making them more and more complex, and making use of more of the defined toolkit. But the DECODERS have to be more or less defined. Any transcode means decoding to uncompressed, then recompression - which inevitably must mean a further loss. Once the original coder has lost something, that's it. The act of initial decompression must get you the best that's possible - anything else you then do to it can only preserve it or make it worse.
Yes, it’s absolutely correct that the H.264/AVC video format (used by AVCHD) is capable of nearly lossless coding. In practical terms this means that, under the right circumstances, we can expect the exact (or almost exact) restoration of the compressed data.
No, that's not correct. AVC-HD may give good results under normal usage, but it's nowhere near lossless.

Look, if that argument was correct (that AVC-HD gave nearly lossless coding) then how can you gain anything anyway by reencoding!?! If AVC-HD is lossless, why not just stick with this "lossless" solution? How can you improve on the original if it's as you say?

It's human nature to want to discover a quick magic fix, but here all I can see is extreme optimism. Sorry.

Ron Evans
December 5th, 2010, 06:14 PM
That is a valuable point, Ron. Really, the process of still extraction may contribute significantly to the apparent difference.

My answer is this. Since I use the same software (VLC media player) to extract stills and to play back the clips, the same mechanism which might yield more information during still extraction should also be in play during clip replay. Thus, whatever the reason, with the increase of bitrate we get improved footage.

The still extraction mechanism may be very different from the realtime playback mechanism, as is the case with Sony MBS and some of the in-camera still functions in AVCHD cameras. Still extraction has the opportunity to analyse over many frames and potentially increase resolution, depending on whether there is any, or only slight, movement in the image.
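
As a minimal sketch of why multi-frame analysis can beat a single-frame grab (this shows only the noise-averaging part; real tools also align frames and can exploit sub-pixel shifts):

    import numpy as np

    # With a (nearly) static subject, averaging N noisy captures of the
    # same scene cuts random noise by roughly sqrt(N).
    rng = np.random.default_rng(0)
    truth = rng.uniform(0, 255, size=(4, 4))          # the "ideal" still
    frames = [truth + rng.normal(0, 12, truth.shape)  # 16 noisy captures
              for _ in range(16)]

    single = np.abs(frames[0] - truth).mean()
    stacked = np.abs(np.mean(frames, axis=0) - truth).mean()
    print(f"one frame:    mean error {single:.2f}")
    print(f"16 averaged:  mean error {stacked:.2f}")  # roughly 4x smaller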

Ron Evans

Robert Young
December 6th, 2010, 01:57 AM
Arkady
Regarding the two mosque shots:
To my eye, the original shot (24 Mbit/s) looks better.
The 80 Mbit/s image has a little less contrast, and that may give the appearance of more detail in the darks, but the same detail is in the 24 Mbit/s shot as well if you look closely.
If you decompress AVCHD and recompress again to AVCHD, even at a higher data rate, you will lose additional data.
Unfortunately, that's the bottom line.

Arkady Bolotin
December 6th, 2010, 06:31 AM
My initial intention was to avoid (by all means) a technical lecture on AVCHD and video compression essentials. Besides, this thread has rather run its course.

However, after the last comments by David and Robert, I see that such a lecture is necessary (or at least that it would be helpful).

Therefore, people, prepare for the hard stuff. If you have already felt a jolt of boredom - sorry, you can skip this and consider the discussion closed.


H.264/AVC compression (used by AVCHD) was developed over a period of about four years. The roots of this standard lie in the ITU-T’s H.26L project initiated by the Video Coding Experts Group (VCEG).

The H.264/AVC has been developed to address primarily the following needs:
1. Use more than 8 bits per sample of source video accuracy.
2. Use higher resolution for color (i.e., to use 4:2:2 or 4:4:4).
3. Use very high bit rates.
4. Use very high resolution.
5. Achieve very high fidelity – even representing some parts of the video losslessly.

The main principle of video compression applied in H.264/AVC is this. Each picture is compressed by partitioning it as one or more slices; each slice consists of macroblocks, which are blocks of 16x16 luma samples with corresponding chroma samples. Each macroblock is also divided into sub-macroblock partitions for motion-compensated prediction. The prediction partitions can have seven different sizes – 16x16, 16x8, 8x16, 8x8, 8x4, 4x8 and 4x4.

The hierarchy of a video sequence, from sequence down to samples, is:

sequence → pictures → slices → macroblocks → macroblock partitions → sub-macroblock partitions → blocks → samples

Thus, the basic unit of the encoding or decoding process is the macroblock.
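
As a quick illustration of the scale involved, here is a sketch for a standard 1920x1080 frame (H.264 actually codes 1080 lines as 1088, the next multiple of 16, and crops the extra lines on output):

    import math

    # Macroblocks per 1080-line frame.
    w, h, mb = 1920, 1080, 16
    mbs_x = math.ceil(w / mb)   # 120
    mbs_y = math.ceil(h / mb)   # 68 (1080 padded to 1088 lines)
    print(f"{mbs_x} x {mbs_y} = {mbs_x * mbs_y} macroblocks per frame")  # 8160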

Slices in a picture are compressed by using the following coding tools:
1. "Intra" spatial (block based) prediction
2. "Inter" temporal prediction – block based motion estimation and compensation
3. Interlaced coding features
4. Picture Adaptive Frame Field
5. MacroBlock Adaptive Frame Field
6. Lossless representation capability
7. 8x8 or 4x4 integer inverse transform
8. Residual color transform for efficient RGB coding
9. Scalar quantization

However, the main tool is Lossless Entropy coding. Entropy coding is the coding technique that replaces data elements with coded representations, which can result in significantly reduced data size without data loss.
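
To see lossless entropy coding in action, here is a small round trip using Python's zlib. DEFLATE (LZ77 plus Huffman) is not the CAVLC/CABAC actually used in H.264 - it is only an analogy - but the principle is the same: frequent patterns get shorter codes, and the decode is bit-for-bit exact.

    import zlib

    # Redundant data compresses heavily; the round trip loses nothing.
    data = (b"\x10" * 4000) + bytes(range(256)) * 4
    packed = zlib.compress(data, level=9)
    assert zlib.decompress(packed) == data        # lossless by definition
    print(f"{len(data)} bytes -> {len(packed)} bytes "
          f"({len(data) / len(packed):.1f}:1), nothing lost")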

A picture coded (compressed) by H.264/AVC can contain different slice types, and slices come in two basic kinds – reference and non-reference.

What’s going on during replay of the video?

It’s important to remember that the fidelity of the playback is constrained by the processing power, the memory size, and other parameters of the AVCHD (H.264/AVC) decoder. Picture size, frame rate, and bitrate play the main role in determining those requirements. In particular, the higher the bitrate, the higher the fidelity can be.

If the processing power (or the memory size) isn’t up to the task (or the bitrate is constrained to a low limit), the fidelity of playback deteriorates (by throwing away some of the macroblocks or even whole picture slices). This, of course, has nothing to do with the faithfulness of the compressed video itself, which can be restored (decompressed) in some other way.


I hope this crash introduction to coding techniques answers all the questions raised earlier in the course of the thread.

For the purpose of the lecture, I relied mainly on the papers “The H.264/AVC Advanced Video Coding Standard: Overview and Introduction to the Fidelity Range Extensions” by Gary Sullivan et al. (Microsoft Corp., 2004), and “Subjective quality evaluation of H.264/AVC FRExt for HD movie content” by T. Wedi and Y. Kashiwagi (Joint Video Team document JVT-L033, 2004).

Dave Blackhurst
December 6th, 2010, 10:26 AM
Arkady -
I think you are perhaps missing one aspect that others have mentioned. To quote Yoda, "there is no try, only do or do not" when it comes to playback - if the equipment or media is not up to handling the data stream, it falls apart or crashes entirely.

I think what you're saying is that at a given bitrate, regardless of whether it can be played back, there might be additional image information that can be extracted. My "wrench" post pretty well explains the answer is "maybe", and why that would be true.

Let's say a "raw" data stream would represent, for the sake of argument, 100 Mbps - but you have to get that down to 17 Mbps to be able to play it back, so a significant reduction is required. If the data is sufficiently redundant, that reduction is relatively easy - the more "new" and different data there is, the more difficult the task becomes, and the less likely you are to find any "restorable" data.

I saw the little bits of noise in the mosque shot, particularly around the light - that's one of those "artifacts" that personally drives me up the wall: the sky is a smooth tone, and doesn't have a flock of mosquitoes. As for the window detail, I thought the 80 Mbit/s sample was a tad cleaner - this is one of those "judgement calls" on image quality. I've seen people prefer an image with crushed blacks, which tends to give a more contrasty overall image with less shadow detail - I myself lean the other way, preferring more gradations of shadow even though the overall image looks less contrasty. Just a personal preference.

Arkady Bolotin
December 6th, 2010, 11:36 AM
Dave:

I agree completely with you – my phrase “If the processing power … isn’t up to the task” is an example of bad wording. It was supposed to be read as “If the H.264 decoder’s power … isn’t up to the task…”

Philosophically speaking, this entire dispute around high bitrates can be reduced to one question: given a particular AVCHD-based original video, what would be the optimal bitrate for the rendered movie? In other words, how can we choose the right bitrate for rendering?

This is an area open for experiments and taste contests.

Robert Young
December 6th, 2010, 12:19 PM
Philosophically speaking, this entire dispute around high bitrates can be reduced to one question: given a particular AVCHD-based original video, what would be the optimal bitrate for the rendered movie? In other words, how can we choose the right bitrate for rendering?

It sounds like maybe you are referring to an optimal bitrate for obtaining the best preview imagery within an editing system. That is a reasonable goal, and again there are existing solutions - the most common is to convert the raw AVCHD to a high-bitrate DI if needed.
When you say "the optimal bitrate for the rendered movie", this means something different to me.
The movie (edited sequence) is usually rendered to a specific delivery format for distribution - Blu-ray, web, DVD, etc. - and those bitrates are determined by the pre-existing specs of the various formats.

Arkady Bolotin
December 6th, 2010, 12:44 PM
Robert:

Yes, industry standards are one thing, but the optimal bitrate is another. Even within the same standard (say, Blu-ray disc), you can choose different bitrates (within a certain range) for rendering.

David Heath
December 6th, 2010, 01:28 PM
My initial intention was to avoid (by all means) a technical lecture on AVCHD and video compression essentials. Besides, this thread has rather run its course.
I had intended not to bother with any more answers, but..... well, one last time.....
However, the main tool is Lossless Entropy coding. Entropy coding is the coding technique that replaces data elements with coded representations, which can result in significantly reduced data size without data loss.
Just what evidence can you quote to support the claim that that is the *main* tool? It's certainly not my understanding. It's one tool, but only one amongst many - and most are likely to involve some loss.

Think about it. Uncompressed HD video runs at around 1 Gbit/s at 8-bit depth. AVC-HD is about 20 Mbit/s (average). That's roughly a 50:1 average compression ratio! There is simply no way any system can achieve that and still be truly "lossless" on normal pictures. It's a tribute that it manages it at all without the lost data being visibly missed under normal viewing.
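
The back-of-envelope version, assuming 1920x1080 at 25 frames per second with 4:2:2 chroma at 8 bits (2 bytes per pixel on average):

    # Rough sanity check on the compression ratio.
    w, h, fps, bytes_per_px = 1920, 1080, 25, 2
    raw_mbit = w * h * bytes_per_px * 8 * fps / 1e6   # ~829 Mbit/s
    avchd_mbit = 20                                    # typical average
    print(f"raw ~{raw_mbit:.0f} Mbit/s -> ratio ~{raw_mbit / avchd_mbit:.0f}:1")

That works out to roughly 40:1, and nearer 50:1 for 10-bit or 4:4:4 sources; either way, the order of magnitude is the point.
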
If the processing power (or the memory size) isn’t up to the task (or the bitrate is constrained to a low limit), the fidelity of playback deteriorates (by throwing away some of the macroblocks or even whole picture slices). This, of course, has nothing to do with the faithfulness of the compressed video itself, which can be restored (decompressed) in some other way.


I hope this crash introduction to coding techniques answers all the questions raised earlier in the course of the thread.
No, far from it. The first paragraph implies that whatever bitrate the original video is compressed at, the original can always be reconstructed by a powerful enough computer. If true, the expectation would be that playback quality would vary widely with the power of the hardware it's replayed on.

But that is not the case in practice. Quality is independent of the power of the decoding computer - until a lower threshold is approached, when playback first becomes stuttery, then fails completely. The compression quality is a function of the individual coder and bitrate - not the replaying equipment.

At AVC-HD bitrates, H264 will have to discard information at the coder, and once lost, it's lost.

Arkady Bolotin
December 6th, 2010, 04:17 PM
Oy vey, David, give me a break…

You discard my statements as baseless, and instead offer yours - equally deniable and unsubstantiated…

Think about it. The problem here is not logic or wordplay; we are arguing about something that can only be proved or disproved by experiments and observations.

Neither you nor I possess any real knowledge about what AVCHD coder Sony or any other manufacturer uses in their cameras, what algorithms they apply, which H.264 profile they employ, or how lossless or lossy the encoded video might be in various situations.

Anyway, thank you for answering. I am glad you are still here.

David Heath
December 6th, 2010, 05:02 PM
You discard my statements as baseless, and instead offer yours - equally deniable and unsubstantiated…

Think about it. The problem here is not logic or wordplay; we are arguing about something that can only be proved or disproved by experiments and observations.
OK - it's an observation that a computer with low power won't play back AVC-HD at all. Increase the processing power, and at a certain threshold it will start to play with stuttering, then with a little more power play cleanly. (I think Dave Blackhurst confirms this?) No matter how much more powerful a processor you then put on the task, it won't make any better job of the decoding quality-wise - the coding quality is a function of bitrate for any given coder.

Try that as an experiment for yourself. Play your samples on a range of computers with different powers and see if you notice any quality difference.

That's why I feel my statements are not unsubstantiated, and ARE based on observations. (As well as theory.)

I'll let everybody else make up their own mind.

Arkady Bolotin
December 6th, 2010, 07:03 PM
You said “No matter how much more powerful a processor you then put on the task, it won't make any better job of the decoding quality-wise - the coding quality is a function of bitrate for any given coder”.

But this is an exact confirmation of what I did: I made two clips in different video formats – AVCHD (M2T) and VC-1 (WMV) – at different bitrates – 24 Mbit/s and 80 Mbit/s respectively – and showed that “the coding quality is a function of bitrate”, i.e. that the quality of the 80 Mbit/s WMV clip is higher.

Look, David, we could go on like this forever: your argument – my argument, and so on, and so on…

I think we should stop right here. I am sure everyone made up his mind a long time ago.

Dave Blackhurst
December 6th, 2010, 09:57 PM
There are TWO variables here, as we've been discussing - the horsepower and hardware, AND the encoding bitrate. If something is recorded at a lower bitrate, it'll play back on lower grades of hardware, but it will not look as good as a higher-bitrate stream, and the likelihood of ANY extra detail being available to extract is pretty small.

Based on what I understand of compression, it should, under some circumstances, be possible to pull additional detail out of a decompressed stream, and in theory use a higher bitrate to "keep" that detail in a more intact form.

I think the main problem with the theory is that one needs to know how well movement in the frame is dealt with before one can even know whether the endeavor is worth the hassle. My suspicion is that the specifications for the standard are set up to be as "graceful" as possible in handling detail and movement with "average" hardware under "average" shooting conditions - and under those "average" conditions, the possible improvement would be minimal.

Arkady Bolotin
December 8th, 2010, 04:40 PM
To decipher the “mystery” of the more detailed appearance of the high-bitrate clips rendered from the original AVCHD video, two words matter most. Those words are transcoding and interpolation.

Transcoding is the conversion between different encoding formats. Naturally, transcoding between two lossy formats – be it decoding and re-encoding to a different codec or to a different bitrate – causes generation loss (i.e. loss of video quality).

However, if lossy-compressed video is transcoded to a lossless or uncompressed format, the conversion can technically be qualified as lossless, because no information is lost.

In practical terms, this means that transcoding video from AVCHD at 24 Mbit/s to VC-1 at 80 Mbit/s shouldn’t produce any (perceptible) video loss. In other words, the original video clip and the transcoded one should have the same video quality.
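
For anyone who wants to try the workflow, here is a sketch of the decode-once, re-encode-high step driven through ffmpeg from Python. The file names are hypothetical, and since ffmpeg does not encode VC-1, a high-bitrate H.264 intermediate stands in for my 80 Mbit/s WMV render; the point is only that the re-encode gets a generous bitrate, so it adds as little further generation loss as possible:

    import subprocess

    # Decode the 24 Mbit/s AVCHD original once and re-encode it at a much
    # higher bitrate. This cannot create new information; at best it
    # preserves what the decoder produced.
    subprocess.run([
        "ffmpeg",
        "-i", "clip_from_camera.m2ts",   # hypothetical AVCHD source
        "-c:v", "libx264",
        "-b:v", "80M",                    # generous target bitrate
        "-c:a", "copy",                   # pass the audio through untouched
        "high_bitrate_intermediate.mp4",
    ], check=True)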

So why do the VC-1 clips appear more detailed? It is due to the bicubic interpolation utilized in the VC-1 video codec. By constructing new sub-pixels, bicubic interpolation brings about the appearance of fractionally filled pixels (i.e. smoothness) in the picture. In other words, the bicubic algorithm (which is lacking in the AVCHD format) increases apparent sharpness (i.e. the detail level).
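
For the curious, the cubic-convolution kernel that "bicubic" resampling is built on looks like this (the classic Keys kernel with a = -0.5 - a generic sketch, not VC-1's exact sub-pixel filter):

    # New fractional samples are synthesized from the four nearest
    # neighbours, which reads on screen as extra smoothness.
    def cubic_weight(x, a=-0.5):
        x = abs(x)
        if x <= 1:
            return (a + 2) * x**3 - (a + 3) * x**2 + 1
        if x < 2:
            return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
        return 0.0

    def interp_1d(samples, t):
        """Value at fractional position t from the 4 nearest samples."""
        i = int(t)
        return sum(samples[max(0, min(len(samples) - 1, i + k))]
                   * cubic_weight(t - (i + k)) for k in range(-1, 3))

    pixels = [10, 10, 200, 200]      # a hard edge
    print(interp_1d(pixels, 1.5))    # 105.0 - a brand-new in-between value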

The downside of bicubic interpolation is that it reduces contrast: that is why the rendered clips look less contrasty.

Thus, transcoding video from AVCHD at 24 Mbit/s to the (more advanced) VC-1 at 80 Mbit/s is merely a video enhancement. That’s it, nothing more, nothing less.


POST SCRIPTUM:

Robert and David, I think I owe you both a sincere apology.

David, I feel very sorry. You were right, I was wrong. Instead of paying attention to your arguments, I drowned them in empty demagoguery.

My sincere apologies also go to everyone who felt mystified by those high-bitrate clips. I was bewildered by them as well. But that is called being human.