June 29th, 2007, 09:40 AM | #31 |
Trustee
Join Date: Mar 2004
Location: Milwaukee, WI
Posts: 1,719
|
A few people on this forum have done tests showing 24F to have only about a 10% loss of resolution. The loss could simply be due to the fact that interlaced video has a forced 1080 lines of resolution, since every other line comes from a different moment in time. Regardless of how soft the image is, there are 1080 unique lines, and depending on what is being shot that can sometimes give the illusion of more detail. 24p, on the other hand, is meant to be smooth across the lines to make a clean picture, so there are smoother transitions between each line.
For example, take a shot of a rock. In interlaced video each field may be soft, but lines 1, 2, 3, 4 and 5 each have a clear edge. In progressive the lines blend together to form a smooth edge on the rock and are not forced to carry unique detail on every other line. The HVX200 is widely regarded as a progressive camera even though its sensor has only 540 vertical pixels. 540 progressive pixels with pixel shift and 540 pixels doing whatever F mode does are not going to be all that different. In fact, I have always said that 24F behaves more like pixel shifting the 540 lines, so you get a lot more than a simple frame double (a toy sketch of the pixel-shift idea follows below). Cameras such as the Sony Z1 with Cineframe do frame-double their footage, and 24F has far more detail than Cineframe does. Pros who have shot 24F next to a CineAlta F900 and said the two were very close are not fools; they know what they are talking about. I'm sorry, but I will take the word of a cinematographer over an engineer any day. After all, we are shooting video, not building cameras. Last edited by Thomas Smet; June 29th, 2007 at 06:46 PM. |
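A toy 1-D sketch of the spatial-offset ("pixel shift") idea referred to in the post above: two coarse samplings of the same detail, offset by half a sample, interleave into one finer sampling. The signal, sample counts and NumPy usage are purely illustrative assumptions, not a description of what any particular camera does.

```python
import numpy as np

# Fine detail we want to resolve (purely illustrative signal)
x = np.linspace(0.0, 1.0, 1000, endpoint=False)
signal = np.sin(2 * np.pi * 8 * x)

# Two coarse samplings of the same detail, offset by half a "pixel"
sensor_a = signal[0::100]      # 10 samples, "sensor A"
sensor_b = signal[50::100]     # 10 samples, "sensor B", shifted by half a sample pitch

# Interleaving the two sets yields 20 effective samples across the same span,
# which is the basic idea behind using offset sensors to raise effective resolution
combined = np.empty(sensor_a.size + sensor_b.size)
combined[0::2] = sensor_a
combined[1::2] = sensor_b
print(combined.size)           # 20
```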
June 29th, 2007, 06:16 PM | #32 | |
Major Player
Join Date: Apr 2006
Location: Melbourne, Australia
Posts: 826
|
Quote:
Originally, it seemed easy. You had the Sony F900, which had 1080 horizontal rows (or lines) of pixels (I've never used one, but I've always believed it to have 1080 lines). The image is captured in one instant in time and is therefore called "progressive" (as opposed, of course, to interlaced, which captures the image at two separate instants in time). It was called 1080p. Then you had the Panasonic Varicam, which has 720 horizontal lines of pixels, all captured at the same instant in time. It was called 720p. When I bought my JVC GY-HD101E two years ago, it had 720 horizontal lines of pixels, all captured in the same instant in time, and was also called 720p.

At that stage, my precise definition of 1080p was "1080 horizontal lines of pixels, all captured in the same instant of time". No longer. My first departure from this definition came with the announcement on a Panasonic webpage (prior to the camera's release) that the HVX200 recorded "1080p". I became very curious as to how this could be so when I later found out that the HVX200 sensor only had 540 horizontal lines of pixels. This has since been expanded on (thanks to very helpful posts by Jan Crittenden and Barry Green over in the HVX forum): a 1080i signal is scanned with 1080p embedded within it, and that 1080p signal can be extracted. And the discussion earlier in this thread about the Canon and 1080p helped me realize just how much the definition had changed. (Or perhaps it hasn't changed and I've simply had an incorrect definition all along.)

And thanks to the excellent debate between Steve and Chris, I have come to a new realization about this definition. The definition of "progressive scan" has shifted its emphasis away from "progressive" (WHAT is being captured, how many lines on the sensor) and towards "scan" (HOW it is being captured [scanned] and processed afterwards). Perhaps it is just a natural evolution. Originally you had 1080 lines of pixels that were scanned, processed and delivered to the NLE in a final form of 1080 horizontal lines. So it didn't really matter: WHAT was captured was exactly the same as HOW it was captured, processed and delivered in its final form (in terms of horizontal lines). But once a variation was introduced - sensors with fewer than 1080 or 720 lines, coupled with an effort to deliver a final product of 1080 or 720 lines to the NLE - the emphasis of the definition had to fall on one side or the other. Perhaps two terms should be introduced to delineate between the WHAT and the HOW.

There are many, many other criteria for selecting a camera as far as I am concerned (lenses, form factor, image manipulation and control through the menu structure, compression, storage, how the image "looks", sound input connections, etc.). But I still think it helps to seek more precision with these definitions. And I reserve the right to totally change my opinion on this tomorrow if more information comes to light! |
|
July 2nd, 2007, 03:27 AM | #33 |
HDV Cinema
Join Date: Feb 2003
Location: Las Vegas
Posts: 4,007
|
*** Interlace Scan Dual-line CCDs in Progressive Mode ***
Interlace Scan Dual-line CCDs can be operated in progressive mode, as long as the full-frame read-out occurs at no more than half the field rate. In this case, a 1080 camera CCD block can output 50Hz or 60Hz using interlace scanning as well as 25Hz or 30Hz using progressive scanning. You will note that the 24fps rate is lower than either of the latter values. So, if a 1080i camera uses Interlace Scan Dual-line CCDs, it can also be switched to capture progressive video at 24fps/25fps/30fps.

--------

The Canon reads out 540 lines at 50fps/60fps AFTER the image has been passed through a low-pass Row-Pair Summation filter IN each CCD, so output resolution is about 400 lines per FIELD. With a Kell factor of 0.87, the field resolution is about 350 TVL-ph. Total frame resolution is thus about 700 TVL-ph. Measured vertical resolution for the Canon is, in fact, 700 TVL-ph.

Assuming the Canon read out 1080 lines at 30fps or 24fps AFTER the image had been passed through a low-pass Row-Pair Summation filter IN each CCD, output resolution would be about 810 lines per frame. With a Kell factor of 0.95, the frame resolution would be about 770 TVL-ph. This number is greater than the Canon's interlace number and, therefore, cannot be the way the Canon works!

That leaves two options: field-doubling and ADAPTIVE deinterlacing. Chris claims field-doubling is NOT used. By that he means that all 1080 lines from the CCDs are used -- not 540 lines. Were an adaptive deinterlacer used on the 810-line frame output by the CCDs -- assuming the deinterlacer preserves on average about 80% of the frame's resolution -- the Canon's vertical resolution would be about 564 TVL-ph. (And 80% is just about right.) Now, if you compare the H1's interlace vertical resolution (700) and its F-mode vertical resolution (564), you'll find the loss of resolution to be about 20% -- which is what's been claimed. So all is well -- no field-doubling and only a 20% loss in vertical resolution between modes.

----------------------

Because the deinterlacer is adaptive -- were the resolution test to be performed with the camcorder being moved slightly to simulate motion -- such a test would yield 540 TVL-ph. Those who read the tests know Adam did, in fact, "perturb" the cameras during his resolution tests -- as he does ALL cameras. He does this -- and I fully agree -- because it realistically checks a camera's ability to capture motion rather than simply score well in a static resolution test.

It penalizes those who use any kind of pixel-shift technology -- and read this carefully -- to increase resolution from sensors that physically have fewer pixels than the recording format. Thus, it does not penalize the Canon in interlace mode. And it doesn't seem to affect Sony's V1 interpolation system. And, of course, it doesn't penalize the JVC 720p camcorders.

It also penalizes those who use any kind of "processing" to get frames without interlace artifacts -- rather than progressive-sampling sensors. Because Canon does use such a "process," Canon cannot call its video "24p." You can call this unfair, but I doubt Adam will cease. And neither will I, because it allows my math model to estimate "dynamic measured resolution" from "sensor resolution" with an average error of less than one pixel across eight HD cameras. Moreover, it does not penalize those who use green-shift to "super sample" an image. It simply ignores the super-sampling, which is fine because, by definition, "super" means more pixels than needed for the format.
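A minimal sketch that just replays the arithmetic above. The row-pair-summation loss (540 lines per field to about 400; 1080 per frame to about 810), the Kell factors (0.87 interlace, 0.95 progressive) and the 80% deinterlacer figure are the poster's model assumptions; reading the 564 figure as 810 lines × 0.87 × 0.80 is my interpretation, included only because it reproduces the number.

```python
# Interlace mode: 540 lines/field -> ~400 after row-pair summation, Kell 0.87
interlace_field = 400 * 0.87            # ~348 TVL-ph per field
interlace_frame = 2 * interlace_field   # ~696 TVL-ph, matches the measured ~700

# Hypothetical full progressive read-out: 1080 -> ~810 lines/frame, Kell 0.95
progressive_frame = 810 * 0.95          # ~770 TVL-ph (ruled out in the post)

# Adaptive deinterlacing of the ~810-line frame, keeping ~80% of its resolution
deinterlaced_frame = 810 * 0.87 * 0.80  # ~564 TVL-ph

loss = 1 - deinterlaced_frame / interlace_frame
print(round(interlace_frame), round(progressive_frame),
      round(deinterlaced_frame), f"{loss:.0%}")   # 696 770 564 19%
```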
__________________
Switcher's Quick Guide to the Avid Media Composer >>> http://home.mindspring.com/~d-v-c |
September 6th, 2008, 11:59 AM | #34 | |
Major Player
Join Date: Mar 2007
Location: california North and South
Posts: 642
|
Quote:
|
|
September 7th, 2008, 03:49 PM | #35 |
Regular Crew
Join Date: Aug 2006
Location: Paris France
Posts: 89
|
Yes, 50p and 60p help a lot, but you are a bit mistaken: slow and fast pans work well; it's the mid-range of pan speeds that we've become used to that produces judder at the slower frame rates. I like 50p and 60p, and for 25p I simply adjust my shooting methods and avoid the judder. Use prime lenses with the JVC adaptor and an inverted image (200/250 series) and you can throw the background out of focus, which cuts the judder. If the DOF is shallow and you follow your subject, judder/stutter all but disappear; adjust your pan rate and it is gone.
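A quick back-of-envelope sketch of why the same pan judders at 25p but smooths out at 50p: the per-frame image displacement halves at the higher rate. The pan rate, field of view and frame width used here are illustrative assumptions, not figures from the post.

```python
def displacement_px(pan_deg_per_sec, fps, h_fov_deg=45.0, frame_width_px=1280):
    """Horizontal image shift per frame, in pixels, for a steady pan."""
    deg_per_frame = pan_deg_per_sec / fps
    return deg_per_frame * (frame_width_px / h_fov_deg)

for fps in (25, 50):
    print(f"{fps}p: ~{displacement_px(10.0, fps):.0f} px jump per frame")
# 25p: ~11 px jump per frame
# 50p: ~6 px jump per frame
```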
|
September 14th, 2008, 03:24 PM | #36 |
Regular Crew
Join Date: Jul 2005
Location: Boston, MA
Posts: 73
|
So, I'd like to pose some thoughts/questions about this thread, and try to tie it back to the original question.
1) Are cameras that shoot 24p better for film transfers? I mean specifically cameras within a prosumer price range, which I think was part of the original intent of the post. The discussion here seems to have diverged from that point and broken apart into separate discussions about a) 1080i vs. 720p and b) the idea that people who ask videographers for 24p just read it in a magazine and have no idea what they're actually asking for.

2) The notion of "film look" seems to be defined a little differently in this thread than I have seen elsewhere on dvinfo. Specifically, I'm seeing references to the stutter caused by a quickly moving (panning) camera, whereas other places discuss things like image softness and DoF. I just don't think that anyone clamoring for a "film look" is clamoring for stuttering.

You're all more experienced than I am, so please set me straight on these questions/comments. |
September 14th, 2008, 04:05 PM | #37 |
Regular Crew
Join Date: May 2006
Location: Sydney, Australia
Posts: 91
|
The transfer process from video to film requires that the image be converted to progressive scan. This is not a perfect or simple procedure for interlaced footage; therefore, video originally shot as true progressive has an advantage and is preferred for the filmout process.
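To illustrate why the conversion is lossy for interlaced material, here is a minimal sketch of the crudest possible deinterlace (a "bob": keep one field and line-double it), which throws away half the vertical samples. Real filmout converters use far more sophisticated, often motion-adaptive, methods; the NumPy code below is only an illustration under that simplifying assumption.

```python
import numpy as np

def bob_deinterlace(interlaced_frame):
    """Naive 'bob': keep the top field and duplicate each line to restore full height.
    Half the vertical detail in the interlaced frame is simply discarded."""
    top_field = interlaced_frame[0::2, :]          # even-numbered lines only
    return np.repeat(top_field, 2, axis=0)         # line-double back to full height

frame = np.random.rand(1080, 1920)                 # stand-in for a 1080i frame
progressive = bob_deinterlace(frame)
print(progressive.shape)                           # (1080, 1920), but only 540 unique lines
```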
Phil Balsdon Sydney, Australia |
September 14th, 2008, 05:21 PM | #38 | |
Inner Circle
Join Date: Dec 2005
Location: New York City
Posts: 2,650
|
Quote:
24p is ideal for a video-to-film transfer for obvious reasons: there is a one-to-one transfer without the interpolation issues that come from differing frame rates. However, 24p shot as video can make film look like video in certain situations.

Film has a long history, with all the technical and artistic achievements that come with a long history. The nature of chemical image recording creates a different image than electronic image recording. Almost all of our great recorded visual entertainment is on film; that's what people are used to. Video is playing catch-up with film, and everyone wants video to look as close to film as possible for a "quality" look.

Stutter has always been a problem when shooting 24 frames, and most cinematographers have developed techniques to avoid it. 60i does not have a stutter problem, and most videographers need to learn how to adjust their camera movements to compensate when they start shooting 24p, and even 30p.

Film has a softness, partly from the grain, which is chaotic and changes from frame to frame, that lends a fantasy element to the image. Video records with a regular pattern that the image has to fit into. Also, electronic image pick-up has historically coped poorly with high-contrast images, creating unnatural edges, though this has been improving steadily over the years.

Shallow DOF is something associated with a film look. It is more of a purposeful decision made by scores of filmmakers over the years. You can have film just as sharp as video and, guess what, it starts to look like video. You can have shallow DOF in video and people feel like they are watching film (especially if you are shooting 30p).

The fact is that you have to make a decision based on the production, its present needs and its future needs. I just completed a concert DVD. It was shot in HD at 30p for the film-like look the producer wanted. Why not 24p? This production is never going to film, ever, and I didn't want my cameramen worrying about stutter or have to think about 24-to-60i issues. 30p fits into 60i perfectly. And if the unlikely decision to project it is ever made, it'll look great on Blu-ray in 30p and the audience will be happy.

The original question was whether 720p has enough resolution. For broadcast, DVD and Blu-ray, yes. For a film transfer, yes, but it looks more like Super 16mm than 35. 1080p is sharper, but the sharpness does not always translate into a more film-like image.
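A small sketch of the cadence point: 30p drops into 60i with every frame occupying exactly two fields, while 24p needs an uneven 2:3 pulldown. The helper function and frame labels are hypothetical illustrations of the standard cadences, not anything taken from the post.

```python
def to_fields(frames, pattern):
    """Expand progressive frames into interlaced fields using a repeat pattern."""
    fields = []
    for i, frame in enumerate(frames):
        fields.extend([frame] * pattern[i % len(pattern)])
    return fields

frames = ["A", "B", "C", "D"]
print(to_fields(frames, [2]))     # 30p -> 60i: A A B B C C D D (even cadence)
print(to_fields(frames, [2, 3]))  # 24p -> 60i: A A B B B C C D D D (2:3 pulldown)
```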
__________________
William Hohauser - New York City Producer/Edit/Camera/Animation |
|