March 20th, 2006, 09:45 PM | #16 |
Major Player
Join Date: Mar 2005
Location: UT
Posts: 945
What he said!!
Leave it to Chris to just swoop in and make a salient point! I totally appreciate what Steve is trying to get to the bottom of, but in the end, all anyone worth their salt with one of these cameras cares about is the result of the mechanism. Yes, I do care what that mechanism actually is, but only so much. In the end, do the results sell or not? There still seems to be something very good but unexplained about 24F. Maybe Canon is working on a patent for it and it's a relatively new deinterlace technique.

BTW Steve, I think MPEG Streamclip has a "2D FIR (Finite Impulse Response) filter" of sorts for deinterlacing and conversion. I'd like to try that with 50i H1 material and see if the results come anywhere close to 25F mode. The only problem is I'm pretty sure the program doesn't offer all the motion-adaptive deinterlacing that Compressor does, and I'm not sure whether Compressor has a 2D FIR. I'll look into it, but please keep us posted on your further research.
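To make the "2D FIR" idea a little more concrete, here is a minimal sketch of the simplest form of FIR deinterlacing: the missing field lines are synthesized by filtering neighboring lines with a short vertical tap set. This is my own toy illustration (a plain 2-tap [0.5, 0.5] filter), not a claim about what MPEG Streamclip or Compressor actually implement; real filters have more taps and often horizontal/temporal support too.

```python
import numpy as np

def fir_deinterlace_upper(frame):
    """Keep the upper field; rebuild lower-field (odd) lines with a 2-tap vertical FIR."""
    out = frame.astype(float).copy()
    h = frame.shape[0]
    for y in range(1, h, 2):                  # odd (lower-field) lines
        above = out[y - 1]
        below = out[y + 1] if y + 1 < h else out[y - 1]  # clamp at bottom edge
        out[y] = 0.5 * above + 0.5 * below    # average the surrounding field lines
    return out

# A vertical ramp: row y has value y everywhere.
interlaced = np.tile(np.arange(8, dtype=float)[:, None], (1, 4))
print(fir_deinterlace_upper(interlaced)[1])   # row 1 rebuilt from rows 0 and 2
```

On a smooth ramp the interpolated line matches the original exactly; on moving content the averaging softens detail, which is why motion-adaptive schemes exist.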
March 20th, 2006, 10:06 PM | #17 | |
Regular Crew
Join Date: Dec 2003
Location: Los Angeles, CA
Posts: 69
I'm not negating Barlow's reading of Steve's post, just pointing out how mine differed. Hoping more for a cancelling-out effect!
March 20th, 2006, 11:54 PM | #18 |
Starway Pictures
Join Date: Jul 2005
Location: Studio City
Posts: 581
If my responses have come across as a reaction to Steve's post, then I apologize; that was not my intention.
I think my "rant" was more a response to a general consensus, over several years, that there is an anti-Canon bias in the filmmaking community, which I don't understand.
March 21st, 2006, 07:35 AM | #19 |
Major Player
Join Date: Jun 2004
Location: McLean, VA United States
Posts: 749
Deinterleaving is indeed frequently done with FIR filters, though often in the "vertical-temporal" domain, and I wouldn't be too surprised to discover that Canon has come up with some new twist on this that they wish to keep proprietary. Since we are free to speculate, here's my particular guess.

Successful deinterleaving depends on being able to look at a small part of the image and tell whether it has moved from sampling instant to sampling instant, and how far it has moved vertically and horizontally, so that the lower field can be shifted into alignment with the upper. Does this ring a bell? It should, because that is exactly what an MPEG encoder needs to do. The difference is that with encoding you compute the residual and send that along with the motion vector, whereas with deinterleaving you shift the moved part back into alignment with the other field.

Now, thinking about how to measure movement, it occurred to me that if you take two successive upper fields, DFT (not DCT) them, and conjugate-multiply the results, the phase of the product will give you the offsets (integrate vertically and do a linear fit for phase; the slope of the phase is the horizontal offset, then do the same for the vertical offset). Multiplying by the conjugate phase slopes and taking the inverse transform (some issues with edge effects, possibly solvable by proper zero stuffing) will shift the second field by the amount of movement, so parts that really did move will land on top of where they were in the first field, and parts that didn't will be misaligned. Taking the difference between the two separates the moving from non-moving parts (the difference is small where the model is good and larger where it isn't), so the moving parts of the recorded lower field can now be shifted by half the measured offset and combined with an unshifted copy of the parts that didn't move, generating a lower field with the moving parts in the right places.
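The conjugate-multiply trick described above is standard phase correlation, and it's easy to sanity-check in a few lines. This toy NumPy sketch (my own illustration, not a claim about Canon's implementation) recovers a global circular shift between two "fields" from the phase of the cross-power spectrum; instead of the linear phase fit, it takes the inverse transform, which turns the phase ramp into an impulse at the shift:

```python
import numpy as np

def estimate_shift(a, b):
    """Estimate the integer (dy, dx) translation of b relative to a via phase correlation."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    cross = B * np.conj(A)               # conjugate multiply
    cross /= np.abs(cross) + 1e-12       # keep only the phase (the ramp encodes the shift)
    corr = np.fft.ifft2(cross).real      # phase ramp becomes an impulse at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # unwrap circular indices into signed shifts
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
field1 = rng.random((64, 64))
field2 = np.roll(field1, shift=(3, -5), axis=(0, 1))   # simulate motion between fields
print(estimate_shift(field1, field2))                  # → (3, -5)
```

A real deinterlacer would, as the post says, do this per block rather than globally, and cope with edge effects where the circular-shift model breaks down.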
This is equivalent to "weave" where there is no motion and to "bob" where there is, except that it suggests both can be done at once, and that "bob" can be adaptively weighted to the amount of motion, as opposed to using a fixed set of coefficients, as is apparently the practice. Note that the frequency and spatial domains are dual, so my guess could be implemented in either domain, but time-varying coefficients would be required in the latter. So my WAG is that Canon have done something like this, cleverly combining deinterleaving with MPEG encoding (note that the DCT, which is required for MPEG encoding, is closely related to the real part of the DFT).

A note on dB: a one-stop decrease in sensitivity means twice the light is required for the same signal-to-noise ratio: 10*log10(2) ≈ 3 dB in terms of light energy. True, summing two CCD cells each producing 1 volt of signal gives 2 volts, which is a 6 dB increase (20*log10 in the case of voltage), but the noise voltages from the two cells will also add (though incoherently), resulting in 3 dB more noise. The improvement in SNR is thus 6 - 3 = 3 dB if cells are combined. This reasoning applies if the summation is done before gamma correction. If done after, it's a different ballgame.
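The dB bookkeeping in that last paragraph is easy to verify with the standard power and voltage formulas (generic arithmetic, nothing camera-specific):

```python
import math

one_stop        = 10 * math.log10(2)   # doubling light energy: ~3 dB (power quantity)
voltage_gain    = 20 * math.log10(2)   # summing two coherent 1 V signals: ~6 dB (voltage quantity)
noise_gain      = 10 * math.log10(2)   # uncorrelated noise adds in power: ~3 dB
snr_improvement = voltage_gain - noise_gain

print(round(one_stop, 2), round(voltage_gain, 2), round(snr_improvement, 2))
# → 3.01 6.02 3.01
```

So pairing cells buys back almost exactly one stop of SNR, which matches the 6 - 3 = 3 dB figure above.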
March 21st, 2006, 01:12 PM | #20 |
Trustee
Join Date: Mar 2004
Location: Milwaukee, WI
Posts: 1,719
How would that work for SDI, which has no MPEG-2 compression? The 24F processing must happen before the MPEG-2 stage, so it can split off to SDI on one branch and to the MPEG-2 encoder on the other. Are the DIGIC DSP and the MPEG-2 encoder both doing some of the same things for the MPEG-2 version?
Here is a good way to test the damn thing for those with the H1: lock the camera down and use the remote control to zoom in and out, recording 60i and 24F a few times. Since the remote zooms at a fixed speed, it should be pretty easy to match up a 24F version and a 60i version, which would let us compare exact motion between the two. While we're at it, maybe somebody with a Decklink system could also capture via SDI to see what is going on. The chroma channels in 24F could tell us a lot. Try to shoot a scene with lots of detail and color that won't change while you zoom in and out.
March 21st, 2006, 05:25 PM | #21 | |
HDV Cinema
Join Date: Feb 2003
Location: Las Vegas
Posts: 4,007
But that means the results are only available after encoding! So I think, as was pointed out, the SDI output rules this out. Still, it seems only a matter of time before someone uses the encoder to do smart things to video, especially with AVC, where objects are tracked very closely.

RE dB: so 6 dB gain but only a 3 dB increase in S/N. Right?

I found the logic error in my model that caused it to estimate decreased sensitivity in 24F mode. Bad logic = GIGO. I will start a new, clean 24F thread tonight since this one is getting very messy. Thank you for participating!
__________________
Switcher's Quick Guide to the Avid Media Composer >>> http://home.mindspring.com/~d-v-c |
March 21st, 2006, 05:36 PM | #22 |
Major Player
Join Date: Jun 2004
Location: McLean, VA United States
Posts: 749
We know that the MPEG encoder is running when the SDI output is active, because one can pull a tape at the same time one is taking output from SDI. So if I'm right (and I have no real reason to think I am), there should be no problem with SDI. The deinterleaving machine runs in either case and feeds parallel paths: one to the rest of the MPEG processor, and the other to the SDI processor.
Roger on the dB. |
March 21st, 2006, 09:20 PM | #23 | |
Wrangler
Join Date: May 2003
Location: Eagle River, AK
Posts: 4,100
__________________
Pete Bauer The most beautiful thing we can experience is the mysterious. It is the source of all true art and science. Albert Einstein Trying to solve a DV mystery? You may find the answer behind the SEARCH function ... or be able to join a discussion already in progress! |
March 21st, 2006, 11:05 PM | #24 |
Built the VanceCam
Join Date: Apr 2004
Location: Prescott Valley, AZ
Posts: 109
Another 24F "Technique"?
Based on all the comments about how great the image is, perhaps there is no "deinterlacing" at all. Since there are three CCDs, in the 24F (48 Hz) mode they could invert the phase of the clock on the GREEN CCD. Then the odd field (rows) of the RED and BLUE CCDs would "see" the same image at the same time as the even field of the GREEN CCD. Now every frame contains incomplete but accurate "progressive" image information. No "motion sensing" required!
Then the signal processing consists of deriving some luminance info from the RED and BLUE signals and some chrominance info from the GREEN signal. Not 100% accurate, but simpler, and probably a better image result than a motion-sensing deinterlace scheme.
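As a toy illustration of the clock-inversion idea (pure speculation on my part, just like the scheme above), here is how one frame's rows would be covered if the R/B CCDs read the even rows while an inverted-clock G CCD read the odd rows at the same instant: every row of the frame gets sampled at one moment in time, with no row left for a deinterlacer to invent.

```python
import numpy as np

rows, cols = 8, 4
scene = np.arange(rows * cols, dtype=float).reshape(rows, cols)  # the scene at instant t

rb_even = scene[0::2]   # R and B CCDs: even-row field, read at time t
g_odd   = scene[1::2]   # G CCD with inverted field clock: odd-row field, also at time t

frame = np.empty_like(scene)
frame[0::2] = rb_even   # even rows carry native R/B samples (G interpolated)
frame[1::2] = g_odd     # odd rows carry native G samples (R/B interpolated)

print("all rows sampled at t:", np.array_equal(frame, scene))
```

The cost, as the post notes, is that each row has only some channels sampled natively; the missing channels per row would have to be estimated spatially rather than temporally.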
March 22nd, 2006, 12:34 AM | #25 | |
HDV Cinema
Join Date: Feb 2003
Location: Las Vegas
Posts: 4,007
March 22nd, 2006, 12:41 AM | #26 |
Obstreperous Rex
He doesn't control that function. I do. If you feel the need to revise a post after the window of time has expired for editing it, you can either post a follow-up indicating the revision, or contact me directly and I'll do it for you.
March 22nd, 2006, 02:59 AM | #27 | |
HDV Cinema
Join Date: Feb 2003
Location: Las Vegas
Posts: 4,007
Even rows: R + B + (G from the row above)
Odd rows: G + (R + B from the row above)

I see a couple of issues: 1) you need R + 2G + B for Y; 2) it's not clear the system would generate 800 TVL for static detail and 540 TVL for dynamic detail.
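On issue (1), the mismatch is easy to see numerically: standard luma weights green roughly twice as heavily as red and blue, so an equal-weight mix of the three channels misestimates Y. These are textbook Rec. 601 coefficients, nothing specific to the H1:

```python
r, g, b = 0.0, 1.0, 0.0                    # a saturated green pixel

y_equal = (r + g + b) / 3                  # naive equal weighting
y_rough = (r + 2*g + b) / 4                # the R + 2G + B approximation
y_601   = 0.299*r + 0.587*g + 0.114*b      # Rec. 601 luma

print(round(y_equal, 3), round(y_rough, 3), round(y_601, 3))
# → 0.333 0.5 0.587
```

So a row carrying only R + B (or only G) can't produce correctly weighted luma on its own, which is exactly the point above.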
March 22nd, 2006, 01:11 PM | #28 |
Regular Crew
Join Date: Dec 2003
Location: Los Angeles, CA
Posts: 69
Interesting. So would this be the moral equivalent of vertically green-shifting 1440x540? If so, are there known artifacts of green shifting that could be used to determine whether this is the case? For example, on material that includes no green, it seems you'd see a visible loss of vertical resolution. A red or blue res chart, perhaps?
It does seem that if this were the case, Canon would have simply claimed progressive rather than muddying the waters with their 24F nomenclature. As I recall, in their promo literature they did a strange thing where they sort of claimed true progressive for SD and then used slightly watered-down language to describe their HD 24F mode. I assumed it was because there are enough pixels in a single field to generate the SD frame. It seems like if they were comfortable with that, they'd have been OK calling a green-shifted field true progressive as well.
March 22nd, 2006, 03:51 PM | #29 |
Built the VanceCam
Join Date: Apr 2004
Location: Prescott Valley, AZ
Posts: 109
I think if they had claimed True Progressive from interlaced CCDs, that would cause a credibility problem, regardless of the method used.
March 22nd, 2006, 07:28 PM | #30 | |
Obstreperous Rex
The Pixel Shift process is a good thing, not a bad thing: it creates more sampling points per pixel, or in other words, higher resolution. How much "green" there is has nothing to do with it. And it's interesting how Canon used a technique several years ago very similar to the Panasonic HVX200's. The original Canon XL1 employed Pixel Shift in both axes to produce DV at 720x480, which is a 345,600-pixel matrix, from CCDs that had only 250,000 effective pixels each. Nobody made a big deal about that back in 1998, but now suddenly it's a federal case when Panasonic does the exact same thing with the HVX. The reason for all this pointless measurebating is that there are too many people talking about these cameras and not enough people actually using them.

At any rate, please put that sophomoric term "green shift" out of its misery and call it what it is: Pixel Shift. That's how the industry refers to it, and that's how any decent, self-respecting video geek should refer to it too.
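The XL1 numbers quoted above check out as simple arithmetic (pixel counts are the ones from the post itself):

```python
ccd_pixels = 250_000      # effective pixels per CCD (XL1, per the post)
dv_grid    = 720 * 480    # DV sample grid the camera must deliver

print(dv_grid, round(dv_grid / ccd_pixels, 2))
# → 345600 1.38
```

So each CCD alone is well short of the output grid; the half-pixel offset between the green CCD and the red/blue pair supplies the extra sampling points that close the gap.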