Sony F3 vs RED? - Page 7 at DVinfo.net
Sony XDCAM PMW-F3 CineAlta
HD recording with a Super35 CMOS Sensor.

Old January 19th, 2011, 12:28 AM   #91
Major Player
 
Join Date: Nov 2002
Location: Tokyo
Posts: 898
F3 ...

Quote:
Originally Posted by Giuseppe Pugliese
This is why they have threaded mounting holes on the top of the camera near where the EVF would be. This is a build-it-the-way-you-want-it kind of camera. I've had the F3 on my shoulders with a standard shoulder mounting kit and it worked just fine. All I needed to do was add an EVF or LCD and you're done.

They designed it like a handycam for a reason... If they built it like a red camera where its just a brick, it would cannibalize their upper end. DP's might take it too seriously and end up having the F3 on productions where the 9000PL might be used.

It was made to look a little consumer for a reason, this is not a mistake or poor judgment. The limitations are there so they don't ruin an entire line of cameras that cost much more.

Face it, if the F3 had the exact same specs and insides, but came in a metal housing, with no ugly little viewfinder on the back and just an option for a proper EVF, this camera would be taken just as seriously as the Alexa. Once an F3 has the 444 option enabled it's really a no brainer. This is a powerful camera, but it was put into a consumer body so they don't hurt their sales. They might also be coming out with another F4 type camera that could be just what I described above... who knows.
It does little good to defend the design when most of us are concerned with having usable elements on the camera. I thought the EX3's viewfinder design was just about perfect when it came out, and I still do. We can't speculate on what Sony had in mind when they designed the F3, but many have commented over the years on how useless most back-end viewfinders are on prosumer and pro cameras. I can see that a consumer handycam-style body with a back-end viewfinder makes it possible to hide the fact that a camera is being used professionally; in fact, my first documentary on China was shot on Sony Hi8 cameras back in the last century for just that purpose. The F3 is a great camera and I think it's going to be highly successful. I might even buy one at some point myself, but there should have been an option to remove the viewfinder and fit another type in its stead.
__________________
Sony EX3, Panasonic DVX 100, SG Blade, Nanoflash, FCP 7, MacBookPro intel.
http://www.deanharringtonvisual.com/
Dean Harrington
Old January 19th, 2011, 03:26 PM   #92
Inner Circle
 
Join Date: Jul 2002
Location: Centreville Va
Posts: 1,828
Sounds like the upcoming RedRock Micro EVF is going to be a popular option if Sony enables simultaneous output of HDMI and SDI.
__________________
Boycott Guinness, bring back the pint!!!
Joe Carney
Old January 19th, 2011, 05:06 PM   #93
Regular Crew
 
Join Date: Mar 2009
Location: NYC/CA
Posts: 34
Please give me more details...

Quote:
Originally Posted by Alister Chapman
I'm sure if Sony used the pixel count as a measure of resolution as Red do, the F3 would be approaching 3K or more. Sony's F35 has 12.4 million pixels to achieve 1920x1080 resolution. Compare that to the 8 MP of Red One used for "4k" or the 13.8 MP that Epic use for the headline figure of 5k. Pixel count does not equal resolution with bayer sensors. I also just noticed that Epic requires a whopping 60 Watts!

I don't know the pixel count for the F3, but as it is a Bayer pattern I expect (and hope) it will be considerably higher than 1920x1080.
Could you please help me with Red's marketing of "4K"? It drives me nuts when producers confront me with it.


Arriflex 35 III, Aaton A-minima, Canon 5DM2 and 7D (I feel so cheap)
Michael Carmine
Old January 20th, 2011, 10:50 AM   #94
Inner Circle
 
Join Date: Dec 2005
Location: Bracknell, Berkshire, UK
Posts: 4,957
The term 4k started in film when a film maker would scan the frames of the film using a single row scanner that was 4,096 pixels wide. Each line of the film was scanned 3 times, once each through a red, green and blue filter, so each line was made up of three 4K scans, a total of just under 12k per line. Then the next line would be scanned in the same manner all the way to the bottom of the frame. For a 35mm 1.33 aspect ratio film frame (4x3) that equates to roughly 4K x 3K. So the end result is that each 35mm film frame is sampled using 3 (RGB) x 4k x 3k, or 36 million samples. That is what 4k originally meant, a 4k x 3k x3 intermediate file.

Putting that into Red One perspective, it has a sensor with 8 million pixels, so the highest possible sample size would be 8 million samples. Red Epic: 13.8 million. But it doesn't stop there, because Red (like the F3) use a Bayer sensor where the pixels have to sample the 3 primary colours. As the human eye is most sensitive to resolution in the middle of the colour spectrum, twice as many of these pixels are used for green as for red and blue. So you have an array made up of blocks of 4 pixels, BG above GR.

Now all video cameras (at least all correctly designed ones) include a low pass filter in the optical path, right in front of the sensor. This is there to prevent moire that would be created by the fixed pattern of the pixels or samples. To work correctly and completely eliminate moire and aliasing you have to reduce the resolution to close to half that of the sample rate. So if you had a 4K sensor the resolution would need to be dropped to around 2K to avoid aliasing altogether. BUT a 4k bayer sensor is in effect a 2K Green sensor combined with a 1K Red and 1K Blue sensor, so where do you put the low pass cut-off? If you set it to satisfy the Green channel you will get strong aliasing in the R and B channels. If you put it so there would be no aliasing in the R and B channels the image would be very soft indeed. So camera manufacturers will put the low pass cut-off somewhere between the two leading to trade offs in resolution and aliasing. This is why with bayer cameras you often see those little coloured blue and red sparkles around edges in highly saturated parts of the image. It's aliasing in the R and B channels. This problem is governed by the laws of physics and optics and there is very little that the camera manufacturers can do about it.

In the real world this means that a 4k bayer sensor cannot resolve more than about 1.5k without aliasing issues. Compare this with a 3 chip design with separate RGB sensors. With three 1920x1080 pixel sensors, even halving the resolution with the low pass filter to eliminate any aliasing in all the channels you should still get at least 1k. That's one reason why bayer sensors, despite being around since the 70's and being cheaper to manufacture than 3 chip designs (which have their own issues created by big thick prisms), have struggled to make serious inroads into professional equipment. This is starting to change now as it becomes cheaper to make high quality, high pixel count sensors, allowing you to add ever more pixels to get higher resolution, like the F35 with its (non bayer) 14.4 million pixels.

This is a simplified look at what's going on with these sensors, but it highlights the fact that 4k does not mean 4k; in fact it doesn't even mean 2k. The laws of physics prevent that.

After all that, those that I have not lost yet are probably thinking: well hang on a minute, what about that film scan, why doesn't that alias when there is no low pass filter there? Well, two things are going on. One is that the dynamic structure of all those particles used to create a film image, which is different from frame to frame, reduces the fixed pattern effects of the sampling; this makes the aliasing totally different from frame to frame, so it is far less noticeable. The other is that those particles are of a finite size, so the film itself acts as the low pass filter, because its resolution is typically lower than that of the 4k scanner.

Until someone actually does some resolution tests or Sony releases the data, we are a bit in the dark as to the pixel count. IF it resolves around 1000 TVL, which is about the limit for a 1920x1080 camcorder, then it should have a 3.5k sensor or thereabouts.
__________________
Alister Chapman, Film-Maker/Stormchaser http://www.xdcam-user.com/alisters-blog/ My XDCAM site and blog. http://www.hurricane-rig.com
Alister Chapman
Old January 21st, 2011, 08:23 AM   #95
Major Player
 
Join Date: Sep 2008
Location: Vancouver, Canada
Posts: 975
Thanks Alister. That is the most cogent write up on the Bayer process I have read to date.
Andrew Stone
Old January 21st, 2011, 11:57 AM   #96
Inner Circle
 
Join Date: Dec 2005
Location: Bracknell, Berkshire, UK
Posts: 4,957
I have reviewed what I wrote and realised that some of it may appear incorrect as I have mixed up pixel resolution and TVL/ph resolution in the same sentence in a few areas so I have re-written it and it should make more sense.

First let's clarify a couple of terms. Resolution can be expressed in two ways. It can be expressed as pixel resolution, i.e. how many individual pixels can I see. Or as line pairs or TVL/ph, i.e. how many individual lines can I see. If you point a camera at a resolution chart, what you are talking about is line pairs, or at what point can I no longer discern one black line from the next. For the black lines to be separated there must be white in between, so TVL/ph is a combination of BOTH the black and white lines and will always be a lot less than the "pixel" resolution. With video cameras TVL/ph is the normally quoted term, while pixel resolution is often quoted for film replacement cameras. I believe the TVL/ph term to be preferable as it is a true measure of the visible resolution of the camera.

The term 4k started in film with the use of 4k digital intermediate files for post production and compositing. The exposed film is scanned using a single row scanner that is 4,096 pixels wide. Each line of the film is scanned 3 times, once each through a red, green and blue filter, so each line is made up of three 4K pixel scans, a total of just under 12k per line. Then the next line is scanned in the same manner all the way to the bottom of the frame. For a 35mm 1.33 aspect ratio film frame (4x3) that equates to roughly 4K x 3K. So the end result is that each 35mm film frame is sampled using 3 (RGB) x 4k x 3k, or 36 million samples. That is what 4k originally meant, a 4k x 3k x3 intermediate file.
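To make the scan arithmetic concrete, here's a quick Python sketch (4096 x 3072 is my assumed pixel count for the "4K x 3K" scan of a 1.33:1 frame; the round 4k x 3k figures give the 36 million quoted):

```python
# Sample counts for a 3-pass (R, G, B) 4k film scan - illustrative numbers only.
scan_width = 4096    # pixels per scanned line
scan_lines = 3072    # roughly 3k lines for a 4x3 35mm frame
channels = 3         # one scan each through red, green and blue filters

samples_per_line = scan_width * channels            # the "just under 12k per line"
total_samples = scan_width * scan_lines * channels  # the ~36 million total

print(samples_per_line, total_samples)  # 12288 37748736
```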

Putting that into Red One perspective, it has a sensor with 8 million pixels, so the highest possible sample size would be 8 million samples. Red Epic: 13.8 million. But it doesn't stop there, because Red (like the F3) use a Bayer sensor where the pixels have to sample the 3 primary colours. As the human eye is most sensitive to resolution in the middle of the colour spectrum, twice as many of these pixels are used for green as for red and blue. So you have an array made up of blocks of 4 pixels, BG above GR.
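The per-channel pixel budget of a Bayer array can be spelled out in a few lines (the 8 MP figure is the Red One number from above; the function name is mine):

```python
# Each 2x2 Bayer block holds two green pixels, one red and one blue,
# so green gets half the total pixel count and red and blue a quarter each.
def bayer_channel_counts(total_pixels):
    return {"G": total_pixels // 2, "R": total_pixels // 4, "B": total_pixels // 4}

red_one = bayer_channel_counts(8_000_000)
print(red_one)  # {'G': 4000000, 'R': 2000000, 'B': 2000000}
```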

Now all video cameras (at least all correctly designed ones) include a low pass filter in the optical path, right in front of the sensor. This is there to prevent moire that would be created by the fixed pattern of the pixels or samples. To work correctly and completely eliminate moire and aliasing you have to reduce the pixel resolution of the image falling on the sensor to less than that of the pixel sample rate. You don't want fine details that the sensor cannot resolve falling on to the sensor, because the missing picture information will create strange patterns called moire and aliasing.

It is impossible to produce an Optical Low Pass Filter that has an instant cut off point and we don't want any picture detail that cannot be resolved falling on the sensor, so the filter cut-off must start below the sensor resolution increasing to a total cut off at the pixel resolution. Next we have to consider that a 4k bayer sensor is in effect a 2K Green sensor combined with a 1K Red and 1K Blue sensor, so where do you put the low pass cut-off?
As information from the four pixels in the bayer pattern is interpolated left/right/up/down, there is arguably some room to have the low pass cut-off above the 2k of the green channel, but this can lead to problems when shooting objects that contain lots of primary colours. If you set the low pass filter to satisfy the Green channel you will get strong aliasing in the R and B channels. If you put it so there would be no aliasing in the R and B channels the image would be very soft indeed. So camera manufacturers will put the low pass cut-off somewhere in between, a little above or below the green cut-off, leading to trade-offs in resolution and aliasing. This is why with bayer cameras you often see those little coloured blue and red sparkles around edges in highly saturated parts of the image. It's aliasing in the R and B channels. This problem is governed by the laws of physics and optics and there is very little that the camera manufacturers can do about it.
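For anyone who wants to see the underlying sampling effect rather than take it on trust, here is a small Python illustration of why detail above the low pass cut-off has to go: a frequency above the Nyquist limit (half the sample rate) produces exactly the same samples as a lower "folded" frequency, which is what aliasing is. The numbers are arbitrary, chosen only for the demo:

```python
import math

fs = 100.0             # sample rate (think: pixels per unit distance)
f_real = 70.0          # real detail, above the Nyquist limit of fs/2 = 50
f_alias = fs - f_real  # folds back to 30 and masquerades as coarser detail

for n in range(16):
    t = n / fs
    s_real = math.cos(2 * math.pi * f_real * t)
    s_alias = math.cos(2 * math.pi * f_alias * t)
    assert abs(s_real - s_alias) < 1e-9  # indistinguishable once sampled
```

Once the samples are identical, no amount of clever processing can tell the two frequencies apart, which is why the filtering has to happen optically, before the sensor.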

In the real world this means that a 4k bayer sensor cannot resolve more than about 1.5k TVL/ph (3k pixels ish) without serious aliasing issues. Compare this with a 3 chip design with separate RGB sensors. With three 1920x1080 pixel sensors, even with a sharp cut-off low pass filter to eliminate any aliasing in all the channels you should still get at least 1k TVL/ph. That's one reason why bayer sensors, despite being around since the 70's and being cheaper to manufacture than 3 chip designs (with their own issues created by big thick prisms), have struggled to make serious inroads into professional equipment. This is starting to change now as it becomes cheaper to make high quality, high pixel count sensors, allowing you to add more pixels to get higher resolution, like the F35 with its (non bayer) 14.4 million pixels.

This is a simplified look at what's going on with these sensors, but it highlights the fact that 4k does not mean 4k; in fact it doesn't even mean 2k TVL/ph. The laws of physics prevent that. In reality even the very best 4k pixel bayer sensor should NOT be resolving more than 3k pixels, or about 1.5k TVL/ph. If it is, it will have serious aliasing issues.

After all that, those that I have not lost yet are probably thinking: well hang on a minute, what about that film scan, why doesn't that alias when there is no low pass filter there? Well, two things are going on. One is that the dynamic structure of all those particles used to create a film image, which is different from frame to frame, reduces the fixed pattern effects of the sampling; this makes the aliasing totally different from frame to frame, so it is far less noticeable. The other is that those particles are of a finite size, so the film itself acts as the low pass filter, because its resolution is typically lower than that of the 4k scanner.

Until someone actually does some resolution tests or Sony releases the data, we are a bit in the dark as to the pixel count. IF it resolves around 1000 TVL/ph, which is about the limit for a 1920x1080 camcorder, then it should have a 3k sensor or thereabouts.

Last edited by Alister Chapman; January 21st, 2011 at 12:32 PM.
Alister Chapman
Old January 21st, 2011, 06:25 PM   #97
Inner Circle
 
Join Date: Jan 2006
Posts: 2,699
There's only one extra thing that I think you may need to add to that, Alister, and it's the definition of TVL/ph - which I understand to be "TV line pairs/horizontal".

If we are talking about 1920x1080, the "pixel resolution" you talk about will (theoretically) be just that: 1920x1080. It follows that the equivalent figures expressed in line pairs are 960 horizontally and 540 vertically. The important thing to realise is that 960 lp horizontally and 540 lp vertically both refer to lines the same distance apart on a chart, albeit at 90 degrees to each other.

Hence the introduction of lph - line pairs referenced to the horizontal. What this means is that resolving a pair of lines a given distance apart will always be given a fixed value, regardless of whether they are vertical or horizontal lines - or even diagonal. So, on the vertical axis, a resolution of 540 lp, will be exactly the same thing as 960 lph.

This all becomes especially important when charts are used with circular resolution bands, or zone plates. It means that a ring can be given a unique lph figure which is equally valid at any point around the ring.

It follows that for a 1920x1080 recording system, the maximum resolution that can be got is 960 lph. If anyone claims to see more than that, they must be seeing aliasing.
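If it helps, that bookkeeping can be written out in a couple of lines of Python (following the convention described above, as I read it: a vertical-axis lp figure is scaled by the aspect ratio to reference it to the horizontal):

```python
h_pixels, v_pixels = 1920, 1080

h_lp = h_pixels // 2                  # 960 line pairs across the width
v_lp = v_pixels // 2                  # 540 line pairs down the height
v_lph = v_lp * h_pixels // v_pixels   # scale by 16:9: 540 lp vertical == 960 lph

print(h_lp, v_lp, v_lph)  # 960 540 960
```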
David Heath
Old January 22nd, 2011, 01:47 AM   #98
Major Player
 
Join Date: Nov 2002
Location: Tokyo
Posts: 898
Dave and Alister ...

Thanks much for this clarification.
Dean Harrington
Old January 22nd, 2011, 07:56 AM   #99
Inner Circle
 
Join Date: Dec 2005
Location: Bracknell, Berkshire, UK
Posts: 4,957
Thanks David for adding that. One issue is that TVL/ph and lph can measure a little higher than half the horizontal pixel count, because resolution is measured at the extinction point of the line pair, i.e. the point where you can no longer see one black pixel separated from the next on the chart. This implies that the white pixels can no longer be seen (or measured) either, so you're actually looking at less than 2 pixels. When you measure using a scope you are looking for the point where the white and black lines both become 50% grey. That's why it is not impossible to see a measured lph resolution slightly higher than half of the pixel resolution.
Alister Chapman
Old January 31st, 2011, 09:17 AM   #100
RED Problem Solver
 
Join Date: Sep 2003
Location: Ottawa, Canada
Posts: 1,365
Quote:
Originally Posted by Alister Chapman
In the real world this means that a 4k bayer sensor cannot resolve more than about 1.5k TVL/ph (3k pixels ish) without serious aliasing issues. Compare this with a 3 chip design with separate RGB sensors. With three 1920x1080 pixel sensors, even with a sharp cut-off low pass filter to eliminate any aliasing in all the channels you should still get at least 1k TVL/ph. That's one reason why bayer sensors despite being around since the 70's and being cheaper to manufacture than 3 chip designs (with their own issues created by big thick prisms) have struggled to make serious inroads into professional equipment. This is starting to change now as it becomes cheaper to make high quality, high pixel count sensors allowing you to add more pixels to get higher resolution, like the F35 with it's (non bayer) 14.4 million pixels
1.5k l/ph for a 16:9 array is 1500 * 16/9 = 2666 horizontal resolution. Actual measurements of 4k RED performance are around 3.2k (with negligible aliasing), which, reversing the calculation, would lead to ~1800 l/ph. The issue of where to set the optical low pass filter "correctly" is shown to be a non-issue by the low aliasing nature and higher measured resolution of the images produced by the system, in comparison to other Bayer pattern based systems, three-chip systems and RGB stripe systems.
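The two conversions above, written out in Python so the arithmetic is easy to check (16:9 assumed; integer truncation gives the 2666 figure):

```python
def lph_to_horizontal(lph):
    """l/ph figure -> horizontal pixel resolution (16:9 frame)."""
    return lph * 16 / 9

def horizontal_to_lph(h):
    """horizontal pixel resolution -> l/ph figure (16:9 frame)."""
    return h * 9 / 16

print(int(lph_to_horizontal(1500)))  # 2666, matching "1500 * 16/9 = 2666"
print(horizontal_to_lph(3200))       # 1800.0, the ~1800 l/ph from a measured 3.2k
```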

1k l/ph on a 3 chip + prism system is, for the reasons you explain above about optical low pass filters, only achievable with quite visible aliasing. 1k l/ph is 1.9k horizontal resolution, very close to the actual pixel resolution of the sensors, with nowhere near enough "room" for any reasonable amount of optical filtering to work in. Good optical filtering, as you point out, is a necessary component for all types of cameras, and will necessarily reduce resolution when implemented properly. This is tough when you only have as many samples as are necessary to produce your HD image, as you're now in a real battle between soft and aliasy.

Optical low pass filters don't come in sharp-cut-off varieties. I wish they did as they'd make life oh-so-much easier. The lack of control over the roll-off independent of the strength of the low pass is due to the lack of negative photons. Such darkons would make lighting so much easier too :-)

The F35 is RGB stripe, with 12 million pixels used to make the image in a 1920x1080x3x2 array. Although it uses significantly more pixels in its colour filter array, it manages to produce strong vertical luma aliasing and strong horizontal chroma moire rainbows, due to the RGB stripe pattern coupled with a low strength OLPF.

When talking about camera systems, it's vitally important to properly measure with a good high resolution circular zone plate to show resolution, aliasing and MTF performance of the system. All these factors go hand in hand and are readily visible and comparable with a single image shot on these charts. They are an invaluable tool in camera analysis.
Graeme Nattress
Old February 1st, 2011, 02:28 PM   #101
Inner Circle
 
Join Date: Dec 2005
Location: Bracknell, Berkshire, UK
Posts: 4,957
Zone plates don't tell the full story with Bayer sensors as they are looking at a mix of the R, G and B channels. They do not, as a result, show up all the issues that occur in areas of highly saturated colour. This is where Bayer sensors tend to fall behind 3 chip designs, as the resolution is not equal in the R, G and B channels. No matter how you tell it, the resolution in the R and B channels is half that of the G channel, and that presents the potential for serious moire issues in saturated colours. With "4k" cameras this is a little nonsensical, as many people are drawn to 4k for shoots that involve compositing and green screen work, where uniform resolution in all the colour channels is advantageous.

Nyquist's theorem demands that the frequency being sampled must be no more than half the sample rate to eliminate moire, so for a 4k bayer sensor to have no moire in the green channel the frequency cut-off should be 2k. But designers cheat: they assume there will never be a high frequency pure green image falling on the sensor, so they allow the cut-off of the LPF to sit above 2k, relying on cross talk between colours. For most situations they get away with it, but some simple scenes, like vivid green foliage, can cause havoc.

An OLPF doesn't have to stop light or reduce it to zero, so negative photons or darkons are not required. An OLPF simply prevents the frequency rising past the cut-off. The light still passes through, almost unattenuated, just frequency limited, or for want of a better word "blurred". In effect a pair of black and white lines above the OLPF cut-off would still be seen through the filter, only they would be seen as mid grey. A good birefringent OLPF can have a fairly sharp cut-off.
Alister Chapman
Old February 1st, 2011, 02:47 PM   #102
RED Problem Solver
 
Join Date: Sep 2003
Location: Ottawa, Canada
Posts: 1,365
Alister, that's why we use multi-colour zone plates of primary R,G,B and black-and-white. They show up all issues that we're talking about here.

To say "the resolution in the R and B channels is half that of the G channel" is an over-simplification because if the colour has any change in brightness then it will be pulling that detail from the green pixels on the sensor. Even in the worst possible case, you will still have more measured resolution and lower aliasing in R, G and B than a 1920x1080 3 chip camera system.

There is potential for moire in all camera systems - but control over chroma moire on a bayer pattern sensor is not hard with some good algorithms and OLPF design, and the results are fine for extensive compositing use. Pulling keys is not something we hear VFX houses complaining about.

With sampling theory, to avoid aliasing you must have at least twice as many samples as the frequency you wish to sample. In other words, you must have at least twice as many samples as line pairs you wish to sample, which means you must have at least as many samples as lines you wish to sample. In practice, moire in our camera systems is utterly negligible and much lower than that with 3 chip HD systems and RGB stripe systems.

The requirements of sampling theory to avoid aliasing are very much harder to achieve in 3 chip systems where, say, you have three 1920x1080 sensors on your prism. For an OLPF to achieve much reduction in MTF at 1920 you will necessarily reduce MTF at lower spatial frequencies, and you will see a blurry image. In practice, a weaker OLPF is used which allows through a stronger MTF at 1920, producing a sharper image and allowing stronger aliasing too. The problem being that you cannot use a sensor of the final resolution you wish to capture, have an image that measures that same resolution, and not have aliasing.

When you put "4K" in quotes, you should also be putting "HD" in quotes as when such cameras are measured they either produce a resolution less than 1920x1080, or they have strong aliasing issues, or in the case of cameras that line skip, they have both low measured resolution and strong aliasing issues.
Graeme Nattress
Old February 2nd, 2011, 07:34 AM   #103
Inner Circle
 
Join Date: Dec 2005
Location: Bracknell, Berkshire, UK
Posts: 4,957
Quote:
Originally Posted by Graeme Nattress
To say "the resolution in the R and B channels is half that of the G channel" is an over-simplification because if the colour has any change in brightness then it will be pulling that detail from the green pixels on the sensor.
This depends on the colour of the subject. In a perfect bayer sensor pure red, no matter how bright, would only fall on the red pixels. However the perfect bayer sensor does not exist, as there is much leakage through the other colour filters, so some of the red leaks through to the green and blue (which has a detrimental effect on colourimetry). But this red leakage will be at a much reduced level, so moire and aliasing will still occur, albeit at a reduced level.

Quote:
Originally Posted by Graeme Nattress
Even in the worst possible case, you will still have more measured resolution and lower aliasing in R, G and B than a 1920x1080 3 chip camera system.
But the bottom line is that the Red One at 4K does not resolve twice the resolution of the majority of 1920x1080 (sub 2k) camcorders, many of which have negligible aliasing. So when comparing 1080, 2k and 4k cameras it is important to know what these numbers mean.

Quote:
Originally Posted by Graeme Nattress
There is potential for moire in all camera systems - but control over chroma moire on a bayer pattern sensor is not hard with some good algorithms and OLPF design, and the results are fine for extensive compositing use. Pulling keys is not something we hear VFX houses complaining about.
The best software algorithms in the world can only guess at what data is missing in an under sampled image. Sure they might be close enough most of the time, but they won't always get it right. OLPF design with a Bayer sensor is a compromise because there is a difference between the way the 3 primary colours are sampled.

Quote:
Originally Posted by Graeme Nattress
With sampling theory, to avoid aliasing, you must have at least twice as many samples as the frequency you wish to sample. In other words, you must have at least twice as many samples as line pairs you wish to sample, which means you must have at least as many samples as lines you wish to sample.
Yes, I said that.

Quote:
Originally Posted by Graeme Nattress
The requirements of sampling theory to avoid aliasing are very much harder to achieve in 3 chip systems where say for instance you have three 1920x1080 sensors on your prism. For an OLPF to achieve much reduction in MTF at 1920 you will necessarily reduce MTF at lower spatial frequencies and you will see a blurry image. In practise, a weaker OLPF is used which allows through a stronger MTF at 1920, producing a sharper image and allowing stronger aliasing too. The problem being that you cannot use a sensor of the final resolution you wish to capture, have an image that measures that same resolution and not have aliasing.
OLPF design for a 3 chip camera compared to a comparable bayer sensor is much simpler, as you only have to cater for a single cut-off frequency because each colour is sampled at the same resolution. You're mixing up 4k bayer and 1920x1080 3 chip in the same comparison, which is confusing to anyone reading this. As you yourself state, you cannot use a sensor of the final resolution you wish to capture, yet Red like to call the Red One a 4K camera. Most HD camcorders are referred to as either 720P or 1080P/I cameras. The difference is that Red does not achieve even close to 4K resolution, while most 1080P camcorders get pretty damn close to 1k resolution. I don't see Sony claiming the F3 to be a 3.5K camera just because it has more pixels than the true resolution.
Alister Chapman
Old February 2nd, 2011, 08:11 AM   #104
RED Problem Solver
 
Join Date: Sep 2003
Location: Ottawa, Canada
Posts: 1,365
Alister, you're missing the point that, practically speaking, the RED One has lower levels of aliasing and moire than HD cameras. I know. I've measured them. Yes indeed there are theoretical issues with Bayer pattern systems (as there are theoretical issues with all camera systems), yet in practical measured circumstances on real world cameras they are negligible.

Colorimetry issues with Bayer pattern sensors are easily handled by appropriate colour correction matrices and measured colorimetry errors are as low or lower than 3 chip cameras I have measured. Cross colour leakage leads to advantages for colorimetry under discontinuous light sources though, so it can be rather useful given the amount of discontinuous sources in use.

Is there a majority of sub-2k camcorders that have such great measured resolution and aliasing results? Looking through Alan Roberts' published zone plates of such cameras, I see significantly more aliasing than I would class as "negligible".

Sure, OLPFs are compromises on Bayer sensors, just as they're compromises on 3 chip systems, where you still have to balance aliasing against resolution. And because you're trying to achieve 1920 out of a sensor with 1920 pixels, this will lead to more aliasing, as a much weaker OLPF is generally used. The theoretical issue with setting an OLPF to avoid chroma moire on a Bayer sensor is just that: theoretical. With a good demosaic algorithm the visibility of chroma moire is so reduced as to be a non-issue. Theoretical camera design is very different to practical camera design.

The issue with OLPF design for a sensor in a 3 chip design is that generally the sensor will have just as many pixels as the measured resolution that is desired, as in 1920 pixels across with the hope of being able to measure 1920 lines across an image. It's pretty obvious from this that if an OLPF is strong enough to reduce the MTF at 1920 to zero, you will not be able to measure 1920 resolution and the image will appear soft. Similarly, if you relax the OLPF to allow through a good MTF at 1920, you will allow aliasing to occur. This is the crux of the issue with optical filters and sensors. It is a battle you face with every sensor design type.

Now, for a 3-chip system, the answer would be to oversample: have three sensors of 2880 x 1620 (a 1.5x oversample), set the OLPF for negligible MTF at 2880, then use a good downsample filter to achieve a sharp 1920x1080 image with negligible aliasing. The extra costs are higher-resolution sensors, potentially lower dynamic range, and a lot of extra horsepower for the downsampling filter. However, the results would be visually excellent in the areas we're discussing: measured resolution and aliasing. Back in the standard-def days there were 3-chip systems that oversampled, and they did have superb results.
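A minimal sketch of that oversample-then-downsample idea, using an ideal FFT-based downsample as a stand-in for the "good downsample filter" (real cameras would use practical FIR filters, not a brick-wall FFT; the numbers 2880 and 1920 are from the post):

```python
import numpy as np

def downsample(signal, out_n):
    """Band-limited downsample via FFT truncation: keep only the
    frequencies the output sample count can represent."""
    spec = np.fft.rfft(signal)
    kept = spec[: out_n // 2 + 1] * (out_n / len(signal))
    return np.fft.irfft(kept, n=out_n)

sensor_n, out_n = 2880, 1920      # 1.5x oversample, deliver 1920 samples
x = np.arange(sensor_n) / sensor_n

# Detail at 1100 cycles: above the output Nyquist (960) but below the
# sensor Nyquist (1440), so the oversampled sensor captures it cleanly...
captured = np.cos(2 * np.pi * 1100 * x)
delivered = downsample(captured, out_n)

# ...and the downsample filter removes it instead of folding it into a
# false low-frequency pattern; residual energy is numerical noise.
print(np.abs(delivered).max())    # ~0 (no alias in the delivered image)
```

Detail below 960 cycles passes through unchanged, which is why the oversampled system can be both sharp and alias-free where the 1:1 system has to pick one.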

The main comment that drew me to post in this thread is: "In the real world this means that a 4k bayer sensor cannot resolve more than about 1.5k TVL/ph (3k pixels ish) without serious aliasing issues", which is not the case. Practical, real-world resolution in such a system is around 3.2k pixels, which is around 1.8k l/ph. It is often the case that a 3-chip HD camera can measure a resolution of 1k l/ph, but because of the above issues with optical low-pass filters you will have stronger aliasing at such a resolution. The maximum resolution of an HD camera is 1080 l/ph, and there is no such thing as an optical filter strong enough to reduce the MTF at 1080 to near zero while passing good MTF at 1000.

What it comes down to is this: if you have x samples across your sensor and hope to measure x lines of resolution, you will get strong aliasing. To get negligible aliasing, aim to measure around 80% of x, or have at least 1.25x the number of samples of the resolution you wish to measure (more is better, but see above for the drawbacks). In both cases you're building in enough of a buffer for the OLPF to work in. OLPFs are by nature slow filters; they don't have sharp cut-offs. I wish they did, as it would make camera design a fair bit easier, but that's just the physics of them.
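Those rules of thumb can be put into numbers (illustrative helper functions; the 80% and 1.25x figures are the post's rules of thumb, and a 16:9 frame is assumed for the l/ph conversion):

```python
def clean_resolution(samples_across):
    """Approximate aliasing-free measurable resolution in pixels,
    using the ~80%-of-sample-count rule of thumb."""
    return 0.8 * samples_across

def samples_needed(target_resolution):
    """Samples required to measure a target resolution cleanly,
    using the ~1.25x rule of thumb."""
    return 1.25 * target_resolution

def l_ph(horizontal_pixels, aspect=16 / 9):
    """Convert horizontal resolution in pixels to lines per picture height."""
    return horizontal_pixels / aspect

px = clean_resolution(4096)         # a nominally "4k" sensor width
print(round(px), round(l_ph(px)))   # 3277 1843 -> the "3.2k / ~1.8k l/ph" figures
print(samples_needed(1920))         # 2400.0 samples for a clean 1920
```

The 2880-pixel oversampled design above comfortably clears the 2400-sample threshold for a clean 1920, which is why it works.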

Graeme
Graeme Nattress is offline   Reply With Quote
Old February 3rd, 2011, 03:31 AM   #105
Inner Circle
 
Join Date: Dec 2005
Location: Bracknell, Berkshire, UK
Posts: 4,957
Sadly, the zone plates I've seen from RED tests have been poorly executed, very often forgetting that to tell the true story you have to take the plate frequency out to at least 2x the camera resolution; you must go out past the extinction point. The ones I've seen say "hey, look, no aliasing", but you can still see all the rings clearly, so the frequency is not high enough to truly show the aliasing performance. In addition, the RED plates that I've seen do exhibit colour moire artefacts from well below the extinction point. Perhaps, Graeme, you have some links to correctly done tests?
__________________
Alister Chapman, Film-Maker/Stormchaser http://www.xdcam-user.com/alisters-blog/ My XDCAM site and blog. http://www.hurricane-rig.com
Alister Chapman is offline   Reply