August 29th, 2007, 10:43 AM | #46 |
Inner Circle
Join Date: Feb 2007
Location: Tucson AZ
Posts: 2,211
|
I sure wish someone from Blackmagic would read these posts and give a good answer.
"Inquiring minds want to know", so to speak. I've had a few e-mails from the blackmagic tech support folks on a different question - maybe I should send them a link to this thread and see if i can get a response! |
August 29th, 2007, 11:01 AM | #47 |
New Boot
Join Date: Apr 2006
Location: Port Stanley, Canada
Posts: 23
|
Test HDV vs Component HD 100
I did a test using Adobe CS3 to capture analog component 1280 x 720 8-bit 60p through a BM Studio card, and at the same time recorded 30p to tape, then imported that through the 1394 port into Avid Liquid. The difference is very noticeable. I posted an mp4 file (because of file bandwidth issues) to show people what they can expect to get out of this combination. Even the HDV portion of the mp4 has more resolution than the analog component capture.
http://www.portstanleynews.com/TV/JVCTest.mp4 I will do another test with a Canon HV20 to check the difference between HDV and HDMI. |
August 29th, 2007, 01:16 PM | #48 |
Inner Circle
Join Date: Feb 2007
Location: Tucson AZ
Posts: 2,211
|
Doug,
Thanks much! The more I think about it, the more reasonable this sounds. I was in error in my previous post talking about the relative quality of converter-box A-to-D vs. Blackmagic A-to-D vs. camera A-to-D, because the camera is already digital from the get-go; to produce analog it would first have to do a D-to-A conversion. So inputting analog effectively adds a D-to-A step followed by an A-to-D step, whereas going via FireWire the whole process stays digital. It would be really nice if there were a block diagram showing what happens where in the path within the camera.

Thinking about it this way, I suspect that a lot of chroma info has already been thrown away - or in fact never captured from the sensor - so expecting to get better color or anything else by picking up the analog outputs is probably just wishful thinking. The analog output really isn't the direct path through the camera, I think. The intended path in the camera is straight to MPEG-2 and then to tape, so this is probably the path with the best components and the best algorithms, and hence probably the best quality. At least this is what occurs to me on rethinking.

And again, sincere thanks for actually making the test! (Were you also going to post the capture via Blackmagic, by the way?)

Last edited by Jim Andrada; August 29th, 2007 at 01:18 PM. Reason: Fix typo |
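A toy illustration of that reasoning (a minimal Python sketch, not a model of the actual camera or Blackmagic electronics - the analog noise level is a made-up assumption, purely to show why a D-to-A / A-to-D detour can only lose information relative to staying digital):

Code:
import random

LEVELS = 256          # 8-bit video code values
NOISE = 0.7           # assumed analog noise, in code-value units (made up)

def d_to_a(code):
    # Ideal reconstruction of an 8-bit code, plus a little analog noise.
    return code + random.gauss(0.0, NOISE)

def a_to_d(level):
    # Re-quantize the "analog" value back to 8 bits in the capture device.
    return max(0, min(LEVELS - 1, round(level)))

random.seed(1)
codes = [random.randrange(LEVELS) for _ in range(100_000)]

# Straight digital path (tape/FireWire): the code values pass through unchanged.
# Analog detour: D-to-A in the camera, then A-to-D in the capture card.
round_trip = [a_to_d(d_to_a(c)) for c in codes]

changed = sum(1 for a, b in zip(codes, round_trip) if a != b)
print(f"{changed / len(codes):.1%} of samples changed by the D-A-D round trip")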
August 29th, 2007, 02:52 PM | #49 |
Trustee
Join Date: Sep 2005
Location: Gilbert, AZ
Posts: 1,896
|
Thanks Doug,
You did this comparison with the BM Studio card. I'm wondering how different the results would be using the Intensity Pro card's component inputs. It seems hard to believe the A/D conversion can be this bad. |
August 29th, 2007, 03:29 PM | #50 |
Inner Circle
Join Date: Feb 2007
Location: Tucson AZ
Posts: 2,211
|
Doug, Sorry - I didn't wait long enough to see the picture switch to component.
Now I see how it looks - pretty bad indeed! |
August 30th, 2007, 11:56 AM | #51 |
Trustee
Join Date: Sep 2005
Location: Gilbert, AZ
Posts: 1,896
|
Doug,
This article mentions what you're seeing using the Studio's component inputs: http://www.dv.com/reviews/reviews_it...leId=196603088 It also suggests capturing at a higher bit depth to reduce softening - for example, capturing at 10-bit for 8-bit video. Have you tried this? |
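For what it's worth, here is a minimal sketch of one reason extra working precision can help even when the source is only 8-bit: any adjustment applied afterwards rounds to finer steps, so fewer gradient values collapse together. The gain value below is made up for illustration, not anything the article or the card actually applies.

Code:
def distinct_levels(bits, gain=0.82):
    # Apply a small (hypothetical) gain adjustment to an 8-bit ramp and count
    # how many distinct output code values survive at the given working depth.
    scale = (2 ** bits) - 1
    return len({round(v / 255.0 * gain * scale) for v in range(256)})

print("distinct levels, 8-bit working depth :", distinct_levels(8))    # 210
print("distinct levels, 10-bit working depth:", distinct_levels(10))   # 256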
August 30th, 2007, 12:22 PM | #52 |
Inner Circle
Join Date: Feb 2007
Location: Tucson AZ
Posts: 2,211
|
It also mentions using the card in conjunction with firewire (from a port not on the card)
This doesn't immediately make a lot of sense to me because I can't figure out what connection there would be between the card and a separate port on the computer. Does anyone know what he's talking about and how the card would be in the path for a firewire capture? |
September 11th, 2007, 11:04 AM | #53 | |
Regular Crew
Join Date: Dec 2006
Location: Menlo Park, CA
Posts: 53
|
Quote:
Video capture is inherently analog (light is not digital). A CCD registers the light hitting it by converting it to an electrical (i.e. analog) signal; that is the sole purpose of the CCD. So, with that signal being analog, using Jim's words, the camera is really ANALOG from the "get go." An A/D converter is then introduced into the signal path to sample the signal and convert it to a digital one before recording to tape or otherwise routing the signal to an HD-SDI connector, etc. (In comparison, the old analog cameras actually recorded the ELECTRICAL signal straight to tape.)

So you can see that it would be relatively simple to route the electrical/analog signal around the A/D converter to the component (or composite) out and feed this "uncompressed" signal to another device, which can compress it to a different specification than is otherwise required for recording HDV to mini-DV tape (with its limitation of a data stream of less than 25 Mbps). Hence the potential for recording "uncompressed" from these sources, as long as you have the right equipment to ingest the full bandwidth of the electrical signal. Obviously, the end quality of your new recording is completely dependent on the spec and quality of your A/D conversion path.

And just a note about FireWire output - FireWire only carries a digital signal, and its bandwidth limitations constrain it such that, in this case, it can only carry an HDV (i.e. compressed) signal. This signal is indeed post A/D conversion - the same signal that gets laid to mini-DV tape in an HDV camera.

There is a great ebook about the JVC ProHD series available on the JVC site that explains in detail how these cameras work. (It does cost $40, but I found it worthwhile and, no, I don't get anything for recommending it.) There is also a good white paper about the JVC philosophy and their choice of codecs, etc. (It's free!) Both can be found on the http://pro.jvc.com web site. Navigate to the page about the GY HD-110 and click on the "Color Brochures" button. Both are there. (I tried to post a direct link but it doesn't stay intact. Sorry.)

Hope that is helpful.

Terry |
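For a sense of scale, a rough back-of-the-envelope calculation (a sketch only; it uses the 1280x720 60p frame size and the sub-25 Mbps limit quoted in this thread, counts active pixels only, and ignores blanking and audio):

Code:
width, height, fps = 1280, 720, 60
bits_per_pixel = 8 + 8        # 8-bit luma plus 8 bits/pixel of 4:2:2 chroma (Cb + Cr)

uncompressed_mbps = width * height * fps * bits_per_pixel / 1e6
hdv_ceiling_mbps = 25         # the tape/FireWire data-stream limit mentioned above

print(f"uncompressed 4:2:2 component capture: ~{uncompressed_mbps:,.0f} Mbit/s")
print(f"HDV stream ceiling                  : {hdv_ceiling_mbps} Mbit/s")
print(f"implied compression                 : ~{uncompressed_mbps / hdv_ceiling_mbps:.0f}:1 or more")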
|
September 11th, 2007, 11:44 AM | #54 |
Inner Circle
Join Date: Feb 2007
Location: Tucson AZ
Posts: 2,211
|
Terry,
Actually, there is no such thing as digital. Everything is analog (this may be the dirty little secret of the computer age!) and digital is a fantasy! However, it is a very convenient and useful fantasy/abstraction - particularly for me, as I've been making a living exploiting this fantasy since 1959! So of course I agree with you that everything in the camera is a set of analog states, such as charge levels on the sensor and in the various components.

However, I still think it unlikely that the camera ever has the pure analog signal available to route around all the digital processing. In particular, as JVC says elsewhere on the web site, the data from the sensors is sampled in two halves, i.e. right and left of the sensors, to prevent overheating of the sensor chip. I think it is very unlikely that this would be compatible with maintaining the possibility of a pure analog flow through the camera, since the word "sampled" seems to imply (at least to me) that the data are already in the process of being digitized. There are also a lot of things such as white balance going on that I doubt are being performed in analog circuitry - digital processes for these functions are so much more compact and cheap these days.

I'd really love to find out what the real data flow in the camera is, but absent some kind of block diagram from JVC, I think we're all in speculation mode! |
September 11th, 2007, 01:42 PM | #55 | |
Regular Crew
Join Date: Dec 2006
Location: Menlo Park, CA
Posts: 53
|
Quote:
We agree (I think) that the signal coming out of the component output IS analog (by definition, as stated by the camera manufacturers, and as testified to by common usage). There have been component (YCC) signals since before video cameras were digital. This signal carries electrical "components" of one luma and two chroma. Since the engineers designing the camera presumably intended to include this capability, why would they choose to digitize the signal (A/D conversion) and then reverse the process (D/A conversion) in order to allow for this? It would almost certainly result in lower quality and would require much more circuitry. So, while it might be a bit of an engineering "inconvenience" to tap the analog signal just after the CCD, it can't be any more arduous and inconvenient than the multiple-conversion process. Can it? Why not believe that they purposely designed the circuitry to tap the analog signal and route it to the component out?

As for the white balance function - that is an adjustment that is actually applied to the chips themselves to balance the sensitivity to specific colors, so, no, it wouldn't require digital processing. It is essentially "adjusting" the CCD chips to tweak their light sensitivity - just as boosting the gain is. After all, you can't alter the WB (or gain) from the camera after capture, even within the component signal (tweaking it in post doesn't count because that's a different process), so why is that part hard to believe?

So, with the HD110/200 there are two signal paths through the camera. One routes the analog signal to the component out - a full uncompressed YCC signal ready for A/D conversion by an external device. The other routes through the A/D conversion chipset to be "sampled" and thus converted. It is here (in the A/D conversion) where the two halves are sampled separately - yes, to prevent overheating - and result in a digital signal. Perhaps where we might be getting hung up is whether these two processes are embedded on the same tiny circuit board or not. They might be on the same "chip," but they are not the same circuitry.

My contention - based on what I believe to be facts and logic - is that, since an analog signal IS captured by the CCD (and can be electrically tapped at that point), and a high-quality analog signal IS available at the component output, there is little reason to suspect that the pathway from one to the other is more complex than it needs to be. The quality of the component signal is purely a product of the quality of the CCD - not the A/D conversion. And it happens that JVC has high-quality CCDs and thus a very good component signal.

And to finish the explanation - the digital route in the 110/200 goes from the capture CCD, through the A/D conversion process, and THEN is compressed to be laid to tape AND sent out the FireWire. Hence the ability to record an HDV signal simultaneously to a hard-drive capture device. Note that the sampling process and the compression process are different and separate processes! (Which is one reason the HD200 creates a better digital signal and is more expensive.) The HD-250 offers one additional path: after capture by the CCD and conversion by the A/D converter, but BEFORE compression, the signal is routed to the HD-SDI port, which is an uncompressed version of the digital signal. (The HD-SDI port, if I am not mistaken, is the sole difference between the HD-200 and the HD-250.)

To be quite clear, the "sampling" does a certain amount of data reduction simply by the limits of the A/D conversion process, but this is NOT the compression we are talking about when we talk about HDV compression and the resultant 4:2:0 color space. That compression is driven by the HDV/MPEG-2 standard as agreed to by the industry, which in turn is driven by the need to keep the signal bandwidth low enough to be recorded to the same mini-DV tape that became the standard in the SD DV era (before HD and HDV).

Again - I hope that helps. If you still think I am "speculating," I guess I'll have to accept that. Unless, of course, there are any known facts to dispute my "speculation." :-)

Best, Terry |
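To put numbers on the sampling side of that distinction, a small sketch - plain sample counting for a 1280x720 frame, nothing camera-specific:

Code:
# Raw sample counts per 1280x720 frame for the common chroma-sampling schemes
# (sampling structure only -- separate from the MPEG-2 compression that then
# squeezes the 4:2:0 stream down to fit the mini-DV tape bandwidth).
width, height = 1280, 720
luma = width * height

chroma_samples = {
    "4:4:4": luma * 2,      # Cb and Cr both at full resolution
    "4:2:2": luma,          # Cb and Cr halved horizontally
    "4:2:0": luma // 2,     # Cb and Cr halved horizontally and vertically
}

for scheme, chroma in chroma_samples.items():
    print(f"{scheme}: {luma:,} luma + {chroma:,} chroma = {luma + chroma:,} samples per frame")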
|
September 11th, 2007, 02:44 PM | #56 |
Inner Circle
Join Date: Jan 2006
Posts: 2,290
|
Dear Terry and Jim, reading your posts brings to mind a quote from "Butch Cassidy and the Sundance Kid". To paraphrase the quote: WHO ARE YOU GUYS??
|
September 11th, 2007, 03:40 PM | #57 |
Regular Crew
Join Date: Dec 2006
Location: Menlo Park, CA
Posts: 53
|
Not sure I understand the nature of the question. I am assuming a certain amount of sarcasm. Did we intrude on a private conversation?
|
September 11th, 2007, 03:56 PM | #58 |
It's my understanding that the digital data available (after compression) via the FireWire port or the tape is 4:2:0, 8-bit. The analog data available from the component ports is sampled at 4:2:2 and delivered at the equivalent of 10-bit. This implies an A-D process directly from the CCD block, which JVC advertises as 12-bit. The digitized signal, at this point, follows one of two paths:
1- it undergoes compression for tape or FireWire delivery (the HDV specification is 4:2:0, 8-bit);
2- it is converted once more, from D to A, for analog delivery to the component ports. Once again, the processing engine is maintained at 4:2:2, 10-bit. |
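If that description is right, the per-pixel difference between the two paths (before MPEG-2 compression even enters the picture) works out roughly as follows - a sketch using only the sampling figures given above:

Code:
# Bits per pixel on the two paths described above, prior to any MPEG-2
# compression (a back-of-the-envelope sketch, not a spec).

def bits_per_pixel(bit_depth, chroma_per_luma):
    # 1 luma sample per pixel, plus Cb+Cr expressed as samples per luma sample
    return bit_depth * (1 + chroma_per_luma)

hdv_path       = bits_per_pixel(8, 0.5)    # 4:2:0, 8-bit  (tape / FireWire path)
component_path = bits_per_pixel(10, 1.0)   # 4:2:2, 10-bit (processing engine / component out)

print(f"4:2:0  8-bit: {hdv_path:.0f} bits per pixel")
print(f"4:2:2 10-bit: {component_path:.0f} bits per pixel")
print(f"ratio       : {component_path / hdv_path:.2f}x more data per pixel on the component path")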
|
September 11th, 2007, 06:26 PM | #59 |
Inner Circle
Join Date: Jan 2006
Posts: 2,290
|
|
September 11th, 2007, 06:52 PM | #60 |
Inner Circle
Join Date: Feb 2007
Location: Tucson AZ
Posts: 2,211
|
50 years ago I couldn't even spell "enjinear" and now I is one!
Not an engineer, just an old guy who was majoring in physics and chemistry, somehow got into computer programming in 1959, and has been living around the damned things ever since. Once upon a time, long long ago, I wrote firmware for some unique military systems and have hung around with hardware designers for - decades. Long enough to pick up the concepts, even if I couldn't design a circuit if my life depended on it. I'm really just trying to formulate a model in my head of how these cameras really work and where in the process various things like compression, up/down-conversion, and sampling take place. Everything I say is probably wrong (as my wife likes to remind me), so don't take any of this too seriously. |