February 25th, 2007, 05:44 PM | #1
Regular Crew
Join Date: Feb 2007
Location: Leeds UK
Posts: 94
Please clear up a few things for me.
There are a few questions I have from reading different topics; I hope somebody can explain a few things for me. It will also help other people who are new and need a little help.
My questions: What's all this pull down 2.3.3 thing I see some of you discussing? What's cc? Why does everybody keep talking about progressive and deinterlacing? What's the difference in the footage, and why do we want it? Why is everybody obsessed with 24p? What does this 35mm lens adapter for the Z1 and the Canon HD actually do to the image? Sorry if some of these are daft questions, but I really don't know. Thanks in advance.
February 25th, 2007, 07:04 PM | #2
American Society of Cinematographers
Join Date: Jan 2003
Location: Los Angeles, CA
Posts: 123
You really should spend some time doing a Google search on these terms. I'll give you the quick answers:
NTSC (standard definition video) in North America runs at 60 fields per second (actually 59.94), mainly because our power supply is 60 Hz. This means that the video image is made up of 60 fields, each field containing every other line of the picture, the next field containing the in-between lines; two fields make up every frame of video. The fields are run in interlaced scan: the TV raster draws each line of the field from top to bottom and then goes back and draws in the in-between lines from the next field. This is called "60i".

Film projection is 24 frames per second -- an entire frame is shown at once, and the film camera captures an entire frame at once. Progressive-scan video works the same way; whole frames are displayed one at a time, not as fields. Your computer monitor is a progressive-scan monitor.

If you want to show material shot at 24 fps on film or at 24P on video, but display it on a 60i monitor, you have to convert 24 frames into 60 fields. Since every frame in interlaced-scan video is made up of two fields, simply splitting each "P" frame or film frame into two fields only yields 48, so you then have to add the equivalent of 12 more redundant fields every second in order to add up to 60 -- you do this by repeating some of the existing fields. You have to insert these repeated fields in a pattern that makes it hard to see the occasional repeated field, hence the 3:2 pattern, called "pulldown". I won't go into the exact pattern.

Why shoot at 24P or 24 fps? Because that's the standard for film production, so if you're trying to make video look more like film, that's one place to start (not the only place). It only really affects motion reproduction, and it's not about "better" or "worse", just about mimicking 24 fps film.

I think "cc" is color-correction.

You can take material shot in interlaced scan and deinterlace it to convert it to progressive scan, if you are planning on showing it on film or on a progressive-scan display device.

The 35mm adaptor is about recreating the shallower-focus look of 35mm photography. A 1/3" CCD camera produces on average a very deep-focus look compared to 35mm; this is because the small target area requires very short focal length lenses to achieve the same field of view as longer focal lengths on a 35mm camera.
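To make the counting concrete, here is a minimal sketch of 2:3 pulldown in Python. It is only an illustration under simplifying assumptions: the frame labels and function name are placeholders, and it shows the alternating 2-then-3 field counts without reproducing the exact field order or field dominance of a real pulldown cadence.

```python
# Simplified 2:3 pulldown sketch: split each 24p frame into two fields and
# repeat one extra field on every other frame, so 24 frames become 60 fields.

def pulldown_2_3(frames):
    """Expand progressive frames into interlaced fields using an
    alternating 2, 3, 2, 3... repeat pattern (field order simplified)."""
    fields = []
    for i, frame in enumerate(frames):
        upper = (frame, "upper")   # every other picture line
        lower = (frame, "lower")   # the in-between lines
        if i % 2 == 0:
            fields += [upper, lower]          # this frame fills 2 fields
        else:
            fields += [upper, lower, upper]   # this frame fills 3 fields
    return fields

one_second = [f"frame{n:02d}" for n in range(24)]   # 24 frames of "film"
print(len(pulldown_2_3(one_second)))                # 60 fields
```

Twelve frames contribute two fields and twelve contribute three, which is where the extra 12 redundant fields per second come from.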
__________________
David Mullen, ASC
Los Angeles
February 25th, 2007, 07:22 PM | #3
Obstreperous Rex
Andy, you've just received an excellent reply from someone whose work we can see in movie theaters across the nation this week (at least in the US, not sure when it's released in the UK): http://www.dvinfo.net/conf/showthread.php?t=87536
February 25th, 2007, 08:25 PM | #4
Trustee
Join Date: Mar 2006
Location: Montreal, Quebec
Posts: 1,585
David,
That was a long, patient and detailed answer to a question that would have been answered with RTFM by many. A pleasure to read. Congrats on the film opening.

Vito
February 26th, 2007, 02:15 AM | #5
Trustee
Join Date: Nov 2005
Location: Honolulu, HI
Posts: 1,961
"what does this 35mm lens adapter for the z1 and the canon hd actually do to the image?"
In addition to the ability to have a shallow depth of field, there seems to be an increase in the exposure latitude due to the diffusion element. There is a correlation between the amount of diffusion and the increase in latitude. A nice side benefit of only having the subject in focus is that the background and foreground decrease in detail which allows HDV cameras to reduce compression artifacts. Blurred footage won't have the huge change in pixels from one frame to the next, so more data is available for the sharp areas. The downsides are increased complexity and loss of available light. |
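As a rough illustration of the deep-focus point made above, here is a small sketch of the field-of-view arithmetic. It assumes ballpark sensor widths (about 24.9 mm for a Super 35 frame, about 4.8 mm for a 1/3" chip); the exact figures vary by camera and format.

```python
# Rough field-of-view arithmetic: how short a lens a 1/3" chip needs to
# match the framing of a given focal length on a 35mm-sized frame.
# The sensor widths below are approximate, assumed values.

SUPER35_WIDTH_MM = 24.9     # approx. Super 35 frame width
THIRD_INCH_WIDTH_MM = 4.8   # approx. 1/3" CCD image area width

def matching_focal_length(focal_mm_on_35mm):
    """Focal length on the 1/3" chip giving roughly the same horizontal
    field of view as focal_mm_on_35mm does on the 35mm-sized frame."""
    crop_factor = SUPER35_WIDTH_MM / THIRD_INCH_WIDTH_MM   # about 5x
    return focal_mm_on_35mm / crop_factor

# A 50 mm lens on 35mm corresponds to roughly a 10 mm lens on a 1/3" chip
# for the same framing; that much shorter focal length is why so much of
# the shot stays in focus. The adapter lets the camera photograph the image
# a real 35mm-format lens throws onto a ground glass, keeping that lens's
# shallower depth of field.
print(round(matching_focal_length(50), 1))   # ~9.6 mm
```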
February 27th, 2007, 05:34 PM | #6
Regular Crew
Join Date: Feb 2007
Location: Leeds UK
Posts: 94
Thanks guys, very informative. And Marcus, thanks... er, you lost me on the first paragraph lol, but much appreciated anyway. I'm learning little by little :)