Sony F3 vs RED?



Alister Chapman
February 1st, 2011, 02:28 PM
Zone plates don't tell the full story with Bayer sensors as they are looking at a mix of the R, G and B channels. As a result they do not show up all the issues that occur in areas of highly saturated colour. This is where Bayer sensors tend to fall behind 3 chip designs, as the resolution is not equal in the R, G and B channels. However you look at it, the resolution in the R and B channels is half that of the G channel, and that presents the potential for serious moire issues in saturated colours. With "4k" cameras this is a little nonsensical, as many people are drawn to 4k for shoots that involve compositing and green-screen work, where uniform resolution in all the colour channels is advantageous.

Nyquist's theorem demands that the highest frequency in the image be no more than half the sampling frequency to eliminate moire, so for a 4k Bayer sensor to have no moire in the green channel the frequency cut-off should be 2k. But designers cheat: they work on the assumption that there will never be a high-frequency pure green image falling on the sensor, so they allow the cut-off of the LPF to sit above 2k, relying on cross-talk between colours. For most situations they get away with it, but some simple scenes, like vivid green foliage, can cause havoc.
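As a rough illustration of that folding (toy numbers of my own, nothing to do with any particular sensor), sampling a grating at less than twice its frequency makes the detail reappear at a lower, false frequency:

```python
import numpy as np

# A 1-D "grating" of 100 line pairs per unit length (purely illustrative figure).
grating_freq = 100.0

def grating(x):
    return 0.5 + 0.5 * np.cos(2 * np.pi * grating_freq * x)

# Sample it at only 160 samples per unit length -- below the Nyquist
# requirement of 2 * 100 = 200 samples -- so aliasing must occur.
sample_rate = 160.0
x = np.arange(0.0, 1.0, 1.0 / sample_rate)
samples = grating(x)

# The detail folds down to |sample_rate - grating_freq| = 60 cycles/unit:
spectrum = np.abs(np.fft.rfft(samples - samples.mean()))
freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
print("strongest frequency in the sampled data:", freqs[np.argmax(spectrum)])  # ~60, not 100
```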

An OLPF doesn't have to stop light or reduce it to zero, so negative photons or darkons are not required. An OLPF simply prevents the spatial frequency rising past the cut-off. The light still passes through, almost unattenuated, just frequency limited or, for want of a better word, "blurred". In effect a pair of black and white lines above the OLPF cut-off would still be seen through the filter, only they would be seen as mid grey. A good birefringent OLPF can have a fairly sharp cut-off.
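To see the "mid grey, not darkness" point in code (again just a toy model, with a box blur standing in for a real birefringent filter):

```python
import numpy as np

# Alternating black/white samples: detail above the sensor's Nyquist limit.
fine_detail = np.tile([0.0, 1.0], 512)

# Model the OLPF as a simple 4-tap box blur (a real birefringent filter is a
# 2- or 4-spot beam splitter, but a box kernel shows the principle).
olpf_kernel = np.ones(4) / 4.0
filtered = np.convolve(fine_detail, olpf_kernel, mode="same")

print("before filter: min %.2f max %.2f" % (fine_detail.min(), fine_detail.max()))
print("after filter:  min %.2f max %.2f" % (filtered.min(), filtered.max()))
# Away from the edges the filtered pattern sits at ~0.5 everywhere: mid grey,
# no moire, and no "negative photons" -- light is redistributed, not removed.
```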

Graeme Nattress
February 1st, 2011, 02:47 PM
Alister, that's why we use multi-colour zone plates of primary R,G,B and black-and-white. They show up all issues that we're talking about here.

To say "the resolution in the R and B channels is half that of the G channel" is an over-simplification because if the colour has any change in brightness then it will be pulling that detail from the green pixels on the sensor. Even in the worst possible case, you will still have more measured resolution and lower aliasing in R, G and B than a 1920x1080 3 chip camera system.

There is potential for moire in all camera systems - but control over chroma moire on a bayer pattern sensor is not hard with some good algorithms and OLPF design, and the results are fine for extensive compositing use. Pulling keys is not something we hear VFX houses complaining about.

With sampling theory, to avoid aliasing, you must have at least twice as many samples as the frequency you wish to sample. In other words, you must have at least twice as many samples as line pairs you wish to sample, which means you must have at least as many samples as lines you wish to sample. In practice, moire in our camera systems is utterly negligible and much lower than that with 3chip HD systems and RGB stripe systems.

The requirements of sampling theory to avoid aliasing are very much harder to achieve in 3 chip systems where say for instance you have three 1920x1080 sensors on your prism. For an OLPF to achieve much reduction in MTF at 1920 you will necessarily reduce MTF at lower spatial frequencies and you will see a blurry image. In practice, a weaker OLPF is used which allows through a stronger MTF at 1920, producing a sharper image and allowing stronger aliasing too. The problem being that you cannot use a sensor of the final resolution you wish to capture, have an image that measures that same resolution and not have aliasing.
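That trade-off can be put into rough numbers with a toy model of a 2-spot birefringent OLPF, whose MTF is |cos(pi * f * d)| for spot separation d (purely illustrative, not any real camera's filter design):

```python
import numpy as np

def olpf_mtf(f_cycles_per_pixel, spot_separation_pixels):
    """MTF of an idealised 2-spot birefringent OLPF: |cos(pi * f * d)|."""
    return np.abs(np.cos(np.pi * f_cycles_per_pixel * spot_separation_pixels))

nyquist = 0.5  # cycles/pixel for a sensor with as many pixels as lines you want to measure

for d in (1.0, 0.7):  # spot separations in pixels (illustrative values)
    print("separation %.1f px: MTF at Nyquist %.2f, at half-Nyquist %.2f"
          % (d, olpf_mtf(nyquist, d), olpf_mtf(nyquist / 2, d)))
# d = 1.0 nulls the Nyquist frequency completely but costs ~30% of the MTF at
# half-Nyquist (a visibly softer image); d = 0.7 keeps mid frequencies sharper
# but lets a substantial response (~0.45) -- and therefore aliasing -- through at Nyquist.
```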

When you put "4K" in quotes, you should also be putting "HD" in quotes as when such cameras are measured they either produce a resolution less than 1920x1080, or they have strong aliasing issues, or in the case of cameras that line skip, they have both low measured resolution and strong aliasing issues.

Alister Chapman
February 2nd, 2011, 07:34 AM
To say "the resolution in the R and B channels is half that of the G channel" is an over-simplification because if the colour has any change in brightness then it will be pulling that detail from the green pixels on the sensor.
This depends on the colour of the subject. In a perfect Bayer sensor pure red, no matter how bright, would only fall on the red pixels. However the perfect Bayer sensor does not exist, as there is much leakage through the other colour filters, so some of the red leaks through to the green and blue (which has a detrimental effect on colourimetry). But this red leakage will be at a much reduced level, so moire and aliasing will still occur, albeit at a reduced level.

Even in the worst possible case, you will still have more measured resolution and lower aliasing in R, G and B than a 1920x1080 3 chip camera system.
But the bottom line is that Red One at 4K does not resolve twice the resolution of the majority of 1920x1080 (sub-2k) camcorders, many of which have negligible aliasing. So when comparing 1080, 2k and 4k cameras it is important to know what these numbers mean.

There is potential for moire in all camera systems - but control over chroma moire on a bayer pattern sensor is not hard with some good algorithms and OLPF design, and the results are fine for extensive compositing use. Pulling keys is not something we hear VFX houses complaining about.
The best software algorithms in the world can only guess at what data is missing in an undersampled image. Sure they might be close enough most of the time, but they won't always get it right. OLPF design with a Bayer sensor is a compromise because there is a difference between the way the 3 primary colours are sampled.

With sampling theory, to avoid aliasing, you must have at least twice as many samples as the frequency you wish to sample. In other words, you must have at least twice as many samples as line pairs you wish to sample, which means you must have at least as many samples as lines you wish to sample.
Yes, I said that.

The requirements of sampling theory to avoid aliasing are very much harder to achieve in 3 chip systems where say for instance you have three 1920x1080 sensors on your prism. For an OLPF to achieve much reduction in MTF at 1920 you will necessarily reduce MTF at lower spatial frequencies and you will see a blurry image. In practice, a weaker OLPF is used which allows through a stronger MTF at 1920, producing a sharper image and allowing stronger aliasing too. The problem being that you cannot use a sensor of the final resolution you wish to capture, have an image that measures that same resolution and not have aliasing.
OLPF design for a 3 chip camera compared to a comparable Bayer sensor is much simpler, as you only have to cater for a single cut-off frequency because each colour is sampled at the same rate. You're mixing up 4k Bayer and 1920x1080 3 chip in the same comparison, which is confusing to anyone reading this. As you yourself state, you cannot use a sensor of the final resolution you wish to capture, yet Red like to call the Red One a 4K camera. Most HD camcorders are referred to as either 720P or 1080P/I cameras. The difference is that Red does not achieve even close to 4K resolution, while most 1080P camcorders get pretty damn close to 1k resolution. I don't see Sony claiming the F3 to be a 3.5K camera just because it has more pixels than the true resolution.

Graeme Nattress
February 2nd, 2011, 08:11 AM
Alister, you're missing the point that practically speaking, the RED One has lower levels of aliasing and moire than HD cameras. I know. I've measured them. Yes indeed there are theoretical issues with Bayer pattern systems (as there are theoretical issues with all camera systems), yet in practical measured circumstances on real world measured cameras they are negligible.

Colorimetry issues with Bayer pattern sensors are easily handled by appropriate colour correction matrices and measured colorimetry errors are as low or lower than 3 chip cameras I have measured. Cross colour leakage leads to advantages for colorimetry under discontinuous light sources though, so it can be rather useful given the amount of discontinuous sources in use.

Is there a majority of sub-2k camcorders that have such great measured resolution and aliasing results? Looking through Alan Roberts' published zone plates of such cameras, I see significantly more aliasing than I would class as "negligible".

Sure OLPFs are compromises on Bayer sensors, just as they're compromises on 3 Chip systems also, where you still have to balance aliasing / resolution. And because you're trying to achieve 1920 out of a sensor with 1920 pixels, this will lead to more aliasing as a much weaker OLPF is generally used. The theoretical issue with setting an OLPF to avoid chroma moire on a Bayer sensor is just that - and with a good demosaic algorithm the visibility of chroma moire is so reduced as to be a non-issue. Theoretical camera design is very different to practical camera design.

The issue with OLPF design for a sensor in a 3 chip design is that generally the sensor will have just as many pixels as the measured resolution that is desired - as in 1920 pixels across and the hope is to be able to measure 1920 lines across an image. It's pretty obvious from this that if an OLPF is strong enough to reduce the MTF at 1920 to zero, you will not be able to measure 1920 resolution and the image will appear soft. Similarly, if you relax the OLPF to allow through a good MTF at 1920, you will allow aliasing to occur. That is the crux of the issue with optical filters and sensors. It is a battle you face with every sensor design type.

Now, for a 3 chip system, the answer would be to oversample. Have three sensors of 2880 x 1620 (oversample by 1.5) set the OLPF for negligible MTF at 2880, then use a good downsample filter to achieve a sharp image at 1920x1080 with negligible aliasing. The extra costs are higher resolution sensors, potentially lower dynamic range and a lot of extra horse power for the good downsampling filter. However the results would be visually excellent in the areas we're discussing - measured resolution and aliasing. Back in the standard def days, there were three chip systems that over-sampled and they did have superb results.
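A rough sketch of that oversample-then-downsample idea, using a Gaussian pre-filter as a stand-in for a proper windowed-sinc downsampling filter (illustrative only, not how any actual camera implements it):

```python
import numpy as np
from scipy import ndimage

# Hypothetical oversampled sensor output: one 2880 x 1620 channel.
oversampled = np.random.rand(1620, 2880).astype(np.float32)

# Gaussian pre-filter sized for a 1.5x reduction, then resample to 1920x1080.
# (A windowed-sinc filter would be better; a Gaussian keeps the sketch short.)
prefiltered = ndimage.gaussian_filter(oversampled, sigma=0.6)
downsampled = ndimage.zoom(prefiltered, 1 / 1.5, order=1)

print(downsampled.shape)   # (1080, 1920)
```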

The main comment that drew me to post in this thread is: "In the real world this means that a 4k bayer sensor cannot resolve more than about 1.5k TVL/ph (3k pixels ish) without serious aliasing issues" which is not the case. Practical real world resolution in such a system is around 3.2k, which is around 1.8k l/ph. That a 3-chip HD camera can measure a resolution of 1k l/ph is often the case, but because of the above issues with optical low pass filters you will have stronger aliasing at such a resolution. Max resolution in l/ph of an HD camera is 1080 l/ph. There is no such thing as an optical filter that is strong enough to reduce MTF at 1080 to near zero while allowing through good MTF at 1000.

What it comes down to is that if you have x samples across on your sensor and hope to measure x lines of resolution you will get strong aliasing. To get negligible aliasing you probably want to aim to measure around 80% of x, or have at least 1.25x (more is better, but see above for drawbacks) the number of samples of the resolution you wish to measure. In both cases you're building in enough of a buffer to allow for an OLPF to work in. OLPFs by their nature are slow filters. They don't have sharp cut-offs. I wish they did, as it would make camera design a fair bit easier, but that's just the way the physics of them is.
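Those two rules of thumb are the same arithmetic viewed from either end; for example (numbers purely illustrative):

```python
# Measure ~80% of your sample count, or carry ~1.25x the samples you want to measure.
samples_across = 1920
safe_measured_lines = 0.8 * samples_across      # ~1536 lines measurable without strong aliasing
samples_needed_for_1920 = 1.25 * 1920           # ~2400 samples to measure a clean 1920 lines
print(safe_measured_lines, samples_needed_for_1920)
```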

Graeme

Alister Chapman
February 3rd, 2011, 03:31 AM
Sadly the zone plates I've seen from Red tests have been poorly executed, very often forgetting that to tell the true story you have to take the plate frequency out to at least 2x the camera resolution. The ones I've seen say hey look no aliasing, but you can still see all the rings clearly, so the frequency is not high enough to truly show the aliasing performance; you must go out past the extinction point. In addition the Red plates that I've seen do exhibit colour moire artefacts from well below the extinction point. Perhaps, Graeme, you have some links to correctly done tests?

Graeme Nattress
February 3rd, 2011, 06:41 AM
Like these examples: Red and Moiré? - REDUSER.net (http://www.reduser.net/forum/showthread.php?p=661604&highlight=zone+plate#post661604)? They're only showing a small section of the chart that goes out to 2k. The full charts (that go out to 4k) are often shown to visitors to the RED studios in full resolution so they can see the full effect.

Alister Chapman
February 3rd, 2011, 12:51 PM
Yes I've seen those. They don't go out far enough on the Red to show if it's aliasing or not in the luma, but they do show noticeable colour moire, typical of a bayer sensor starting at about 1k. If you download the image and boost the saturation a bit the colour moire becomes clearly visible.

Certainly the F35 produces strong aliasing in that example.

Graeme Nattress
February 4th, 2011, 07:30 AM
Not seeing much in the way of any chroma moire here.

Have you shot the F3 on a zone plate yet? I'd be keen to see the results if you do, especially in comparison to its big brother, the F35, which is pretty poor on the aliasing front as noted.

Tom Roper
February 4th, 2011, 08:48 AM
Resolution can be expressed two ways. It can be expressed as pixel resolution, ie how many individual pixels can I see. Or as line pairs or TVL/ph, or how many individual lines can I see. If you point a camera at a resolution chart, what you are talking about is line pairs, or at what point can I no longer discern one black line from the next. For the black lines to be separated there must be white in between, so TVL/ph is a combination of BOTH the black and white lines and will always be a lot less than the "pixel" resolution. With video cameras TVL/ph is the normally quoted term, while pixel resolution is often quoted for film replacement cameras. I believe the TVL/ph term to be preferable as it is a true measure of the visible resolution of the camera.


There's only one extra thing that I think you may need to add to that, Alister, and it's the definition of TVL/ph - which I understand to be "TV line pairs/horizontal".

If we are talking about 1920x1080, the "pixel resolution" you talk about will be (theoretically) just that - 1920x1080. It follows that you can expect the equivalent figures expressed in line pairs to be 960 horizontally, and 540 vertically. Important thing to realise is that talking about 960lp horizontally, and 540lp vertically are both referring to lines the same distance apart on a chart - albeit at 90 degrees to each other.

Hence the introduction of lph - line pairs referenced to the horizontal. What this means is that resolving a pair of lines a given distance apart will always be given a fixed value, regardless of whether they are vertical or horizontal lines - or even diagonal. So, on the vertical axis, a resolution of 540 lp, will be exactly the same thing as 960 lph.

This all becomes especially important when charts are used with circular resolution bands, or zone plates. It means that a ring can be given a unique lph figure which is equally valid at any point around the ring.

It follows that for a 1920x1080 recording system, the maximum resolution that can be achieved is 960 lph. If anyone claims to see more than that, they must be seeing aliasing.

Thanks David for adding that. One issue is that TVL/ph and lph can be a little higher than 1/2 the horizontal pixels because it is measured as the extinction point of the pair of pixels, ie the point where you can no longer see one black pixel separated from the next on the chart. This implies that the white pixels can no longer be seen (or measured), so you're actually looking at less than 2 pixels. When you measure using a scope you are looking for the point where the white and black lines both become 50% grey. That's why it is not impossible to see a measured lph resolution slightly higher than half of the pixel resolution.

Yikes! No!

TVL/ph is not "line pairs horizontal."

A TVL is either a dark line or a light line, not a pair.

TVL/ph is "TV lines per picture height." For TV it was expressed as the number of lines, either vertical or horizontal, (light or dark) that could be resolved inside a circle with a diameter equal to the vertical dimension of the frame.

1080 is Nyquist, not 960.

TVL resolution can be expressed at MTF50, or any other level between 0 and 100%. The resolution number varies accordingly.

For photographic lenses and film, resolution can be expressed as lp/mm, line pairs per millimeter.

I'm sorry for interjecting here, and if you feel I'm wrong feel free to correct me. I'm not sure how it affects the discussion for 4k/2k/1k bayer filters, but the term TVL/ph has a defined meaning. The use of TVL/ph (tv lines/pic height) is helpful because it removes the aspect ratio of the frame size from being a factor in the discussion of horizontal resolution versus vertical resolution.

If you had perfect theoretical resolution for a 1920x1080 sensor, you would have 1080 TVL in both axis, but you would measure 1920 lines across the frame width.

Alister Chapman
February 4th, 2011, 01:05 PM
Thanks for jumping in and putting us straight.

I don't think it changes the argument. But good to have the terms corrected. Of course while TVL/ph is, as you say, individual lines, to be able to see one line from the next you do have to be able to see or measure the complementary line. Not sure how you would express MTF below 50%, as once you get down to 50% grey any further MTF reduction would just be more of the same 50% grey, unless I've missed something?

If you boost the saturation a little of the zone plates on the Red site the colour moire is plainly visible. I have not yet seen a zone plate from an F3; it will be interesting to see.

Graeme Nattress
February 4th, 2011, 02:00 PM
If you take your zone plate and plot a scanline you'll see a series of sine waves increasing in frequency and decreasing in amplitude from the centre out. Although printed with equal amplitude, the imaging system will see them with reducing amplitude - this is the MTF of the system.

If we call the largest peak to peak amplitude on our plot 100%, we can travel down the sines of increasing frequency until we get a peak to peak amplitude of 50% (of our largest amplitude) and now we've found our MTF50 point. At this frequency we're still seeing a strong MTF and a good figure for frequency here will appear visually as a sharp image. If we keep on going until our peak to peak is 0%, we now have mid-grey and no detail at all, although for limiting resolution I'd probably call it below ~10% as it gets hard to tell below that.
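To make that concrete, here is a rough sketch of the procedure (the chirp and the Gaussian MTF model are my own illustrative choices, not RED's actual measurement code):

```python
import numpy as np

# One scanline of a circular sine zone plate: a chirp printed at full contrast.
x = np.linspace(0, 1, 4000)
freq = 40 + 460 * x                        # 40 -> 500 cycles across the line (illustrative)
phase = 2 * np.pi * np.cumsum(freq) / len(x)
scanline = 0.5 + 0.5 * np.sin(phase)       # the printed chart: constant amplitude

# Toy camera MTF that falls with frequency (stand-in for lens + OLPF + sensor).
mtf_true = np.exp(-(freq / 300.0) ** 2)
captured = 0.5 + (scanline - 0.5) * mtf_true

# Peak-to-peak amplitude per window = the measured MTF (printed amplitude is 1.0).
window = 200
ptp = np.array([np.ptp(captured[i:i + window]) for i in range(0, len(x) - window, window)])
centre_freq = freq[window // 2::window][:len(ptp)]

mtf50_freq = centre_freq[np.argmax(ptp <= 0.5)]
print("MTF50 at roughly", round(mtf50_freq), "cycles")   # ~260 for this toy model

# Area under the measured MTF curve, which correlates with perceived sharpness:
area = np.sum(0.5 * (ptp[1:] + ptp[:-1]) * np.diff(centre_freq))
print("area under MTF curve:", round(area))
```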

Graeme

Alister Chapman
February 4th, 2011, 02:33 PM
I know, Graeme, I was getting muddled by Tom's references to both MTF50 and 50% grey in the same sentence. I was referring to the fact that once you get down to 50% grey (the extinction point) the frequency response becomes zero so can't be measured, or at least there is nothing to measure. So MTF50 is before you get to 50% grey. Getting muddled, as zero contrast, which is zero MTF, normally means 50% grey, assuming the lines or rings were originally at 100% and zero.

Doh.. head hurts. You have to be so careful how you read and how you express all this stuff.

Graeme Nattress
February 4th, 2011, 02:37 PM
It's tricky when you're dealing with 50% MTFs and mid grey (50% grey), but mid grey is when you get to 0% MTF :-) It's so much easier to show visually rather than in textual comments.

One nice thing you can do with your circular sine zone plate is plot MTF, and also you can look at the area under the MTF curve which strongly correlates with what we perceive as overall image sharpness.

Graeme

Tom Roper
February 4th, 2011, 03:26 PM
Gray (half way between white and black) would be extinction, 0% MTF.

I'm sorry, I didn't think that through. Thanks for the correction.

David Heath
February 4th, 2011, 04:40 PM
My own references were to lph, which I understood to be "line pairs/horizontal", and shouldn't really be confused with terminology along the lines of TVL etc.

In other words, for a 1920x1080 image, the reference is to 1920 LINES res horizontally ("either a dark line or a light line, not a pair") OR 960 LINE PAIRS. (Being able to resolve a white/black pair of lines.)

My understanding is that it's relevant as this is what most res charts are nowadays marked in? With the outer ring of an HD chart typically 1000 lph? If the scale was TVL/ph (tv lines/pic height), it would be given the value 2000.

Tom Roper
February 4th, 2011, 08:36 PM
This is a little tough to write from my Blackberry, but here goes:

Lph still refers to lines per pic height, not line pairs horizontal. LW/PH, LPH, TVL are all really expressions of the same quantity, lines not pairs.

400 TVL would mean if you had a 4x3 tv set, you would measure 200 black and 200 white lines from east to west over a length equal to the north-south dimension.

LPH is saying the same thing.

ISO 12233 charts for digital still cameras use LW/PH for the scale.

Analog tv EIA-1956 charts use TVL, but they are the same as LW/PH or LPH.

TVL is always stated as horizontal resolution, dating back to interlaced broadcast, where the vertical rez can be no better than the number of horizontal scan lines.

For progressive images, it's valid to state the vertical resolution the same way, TVL or LPH.

For film and photography, LP/mm (line pairs per millimeter) is sometimes used.

You can sometimes find a spec for Sony HD cams where they state the resolution as 1000 TV Lines. What that's saying, is you should be able to count 500 black and 500 white, vertically oriented lines, over a horizontal distance equal to the vertical dimension. That would also equate to 1778 lines if you counted them across the full width of the panel, and I suppose 889 line pairs to your way of counting.
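Those conversions are just aspect-ratio arithmetic; a couple of lines confirm the numbers (nothing camera-specific here, purely illustrative):

```python
def tvl_ph_to_full_width(tvl_ph, aspect_ratio=16 / 9):
    """TVL/ph counts lines over a distance equal to the picture height;
    scale by the aspect ratio to count them across the full frame width."""
    lines_across_width = tvl_ph * aspect_ratio
    line_pairs_across_width = lines_across_width / 2
    return lines_across_width, line_pairs_across_width

print(tvl_ph_to_full_width(1000))                      # (~1778, ~889): the numbers above
print(tvl_ph_to_full_width(400, aspect_ratio=4 / 3))   # the 4x3 example: ~533 lines full width
```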

Peter Moretti
February 8th, 2011, 04:59 AM
...
Putting that into Red One perspective, it has a sensor with 8 Million pixels, so the highest possible sample size would be 8 million samples. Red Epic 13.8 million. But it doesn't stop there because Red (like the F3) use a Bayer sensor where the pixels have to sample the 3 primary colours. ...

Alister,

Where are you getting that 8 Megapixel number from? Perhaps the old sensor? The latest Red One (MX) is 13.8 megapixels, FWIU.

CAMERAS / RED ONE (http://www.red.com/products/red-one)

Alister Chapman
February 8th, 2011, 08:32 AM
Yes that was for the old sensor. If I had meant MX I would have said MX. Still doesn't change the fact that Red like to boast about pixel count as resolution, which it is not. Yes MX should achieve higher resolution than the old sensor but it doesn't make it a "5k" camera in the true sense of the meaning. Working on the Red principle, Sony, Panasonic etc could claim a 1080 camcorder to be "2k" near as damn it, but they don't. All the headline talk of megapixels does is create confusion, as you have to factor in many other things, including sensor type, pixel offsets etc.

Peter Moretti
February 8th, 2011, 10:13 AM
Alister,

Come on, this is really not being fair to Red. First off, the Red One currently comes with the MX sensor. So you've been talking about a sensor that the camera does not even come with anymore. And owners with the old sensor can upgrade to the new one; MANY have.

But more than that, anyone w/ $25K to spend on a camera should pretty quickly be able to understand that Red is quoting the horizontal pixel count of the image area. They claim the actual measured luma resolution is about 3.2K. And any resolution chart you can pin up is going to show that the Red has significantly more resolution than any other digital camera. Please show an example if I'm wrong.

Where I do find Red misleading is the 4:4:4 claim. It's interpolated 4:4:4. You can claim "4K" or you can claim 4:4:4 at a lower resolution, but Red really can't claim both. If chroma resolution is less than luma resolution, then it's not 4:4:4.

Chris Hurd
February 8th, 2011, 12:16 PM
Let's be fair please. Any discussion of RED One should refer to its current shipping configuration which is in fact the MX sensor.

Alister Chapman
February 8th, 2011, 01:16 PM
Apologies to Red, I was not aware that Red One shipped with the MX sensor as standard. I thought a Red One with MX sensor was known as a Red One MX.

I do feel that many producers and directors, very often non-technical people, don't know the difference between a camera where the resolution is given as horizontal pixel count, actual resolution or image format.

If I proposed to many of them that I shot on a "2.5k" camera over a 1080p/i camera, few would realise that I was talking about the same thing (F3's speculated H pixel count). I agree that most owners know the difference (at least they should do), but very often the pressure to use this or that comes from producers that have read a headline number without really understanding what it means. Then the crew have to spend an age trying to educate said producer in the pros and cons of the different cameras/formats/workflows, only for the producer to come back with "but the front page headline says it's 5k".

Alister Chapman
February 8th, 2011, 04:12 PM
Where I do find Red misleading is the 4:4:4 claim. It's interpolated 4:4:4. You can claim "4K" or you can claim 4:4:4 at a lower resolution, but Red really can't claim both. If chroma resolution is less than luma resolution, then it's not 4:4:4.
And to be fair to Red, Sony and the F3 are no different.

Graeme Nattress
February 10th, 2011, 08:39 AM
Terminology like 4:4:4 and 4:2:2 refers to chroma subsampling of Y'CbCr data. It should not be used (although it often is, and indeed used pejoratively by people who know better) to refer to sensors, not least because sensors are not Y'CbCr, and sub-sampling implies there's a fully sampled RGB signal converted to Y'CbCr to produce a sub-sample from.

With camera systems, it does make sense to measure chroma and luma resolution both horizontal and vertical, but the 4:2:2 terminology is not the right thing to use to describe that.

Unfortunately, there's so many ways camera systems can work that single numbers or simple schemes can't really describe what is happening. Each scheme is a balance of factors as we've been discussing, and it's almost like you need an essay to describe things.

Things get more complex when you have an image that is not subsampled, but the RGB channels don't line up. Can you really call it 4:4:4? Sure you can, but it doesn't look the same as an RGB where the channels are in perfect alignment.

So - 4:4:4 says nothing about measured resolution, nothing about channel alignment and nothing about image quality. All it says is that the image is RGB or Y'CbCr with no chroma sub-sampling.
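As a concrete illustration of what the notation does describe, here is a minimal sketch of 4:4:4 versus 4:2:2 applied to an already-converted Y'CbCr image (illustrative code, not any camera's actual processing):

```python
import numpy as np

height, width = 1080, 1920
ycbcr = np.random.rand(height, width, 3).astype(np.float32)  # stand-in Y'CbCr frame

# 4:4:4 -> 4:2:2: keep every Y' sample, keep every other Cb/Cr sample horizontally.
y_full = ycbcr[:, :, 0]
cb_422 = ycbcr[:, ::2, 1]
cr_422 = ycbcr[:, ::2, 2]

print(y_full.shape, cb_422.shape, cr_422.shape)   # (1080, 1920) (1080, 960) (1080, 960)
# Nothing here says anything about how the RGB image was originally captured --
# the notation describes the signal, not the sensor.
```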

Graeme

Alister Chapman
February 11th, 2011, 01:58 AM
So - 4:4:4 says nothing about measured resolution, nothing about channel alignment and nothing about image quality. All it says is that the image is RGB or Y'CbCr with no chroma sub-sampling.

Graeme

But in most bayer systems, the sensor is sub sampling the chroma.

If the sensor is subsampling the aerial image B and R compared to G (Bayer matrix, 2x G samples for each R and B) then no matter how you interpolate those samples, the B and R are still sub sampled and data is missing. Potentially, depending on the resolution of the sensor, even the G may be sub sampled compared to the frame size. In my mind a true 4:4:4 system means one pixel sample for each colour at every point within the image. So for 4k that's 4k R, 4K G and 4K B. For a Bayer sensor that would imply a sensor with twice as many horizontal and vertical pixels as the desired resolution, or a 3 chip design with a pixel for each sample on each of the R, G and B sensors.
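To put those ratios into numbers (a quick illustration using a nominal 4K photosite count, not any particular camera's specification):

```python
width, height = 4096, 2160           # nominal "4K" Bayer sensor (illustrative)
total = width * height               # ~8.8 million photosites

green = total // 2                   # 2 greens per 2x2 Bayer block
red = blue = total // 4              # 1 red and 1 blue per block

print(f"total {total:,}  G {green:,}  R {red:,}  B {blue:,}")
# total 8,847,360  G 4,423,680  R 2,211,840  B 2,211,840
# A true "one sample per colour per output pixel" 4K image would need
# 3 x 8.8M = ~26.5M samples, which is the gap being described here.
```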

If it's anything less than that, then while the signal coming down the cable may have an even number of RGB data streams, the data streams won't contain even amounts of picture information for each colour; the resolution of the B and R channels will be lower than the green. So while the signal might be 4:4:4, the system is not truly 4:4:4. Up-converting the 4:2:2 output from a camera to 4:4:4 does not make it a 4:4:4 camera. This is no different to the situation seen with some cameras with 10 bit HDSDI outputs that only contain 8 bits of data. It might be a 10 bit stream, but the data is only 8 bit. It's like a TV station transmitting an SD TV show on an HD channel. The channel might call itself an HD channel, but the content is still SD even if it has been upscaled to fill in all the missing bits.

This is an issue within the industry in general. So called standards are not really standards, as every manufacturer will manipulate or obscure the meaning of the term to suit their marketing requirements. "HD Camcorder" might mean 720P or 1080i and it doesn't necessarily mean that the camera can actually resolve an HD image; all it seems to guarantee is that the output signal will be an HD signal. An f1.8 lens might only be f1.8 under certain conditions. Resolution might be expressed as 1000 Lines, but some manufacturers will measure horizontally while others use vertical or TVL/ph. Some boast 10 bit HDSDi, forgetting to mention that the top 2 bits are empty. You really have to look past the headline numbers and terms and look closely at what is really going on.

Steve Kalle
February 12th, 2011, 04:30 AM
Hi Alister,

Quick question: which cameras output 8 bits through a 10bit HD-SDI?

Graeme Nattress
February 12th, 2011, 02:33 PM
But in most bayer systems, the sensor is sub sampling the chroma.


And that has nothing to do with 4:4:4 or 4:2:2 as they are referring to subsequent chroma subsampling (or not) of a RGB image that has been converted to a luma / chroma space like Y'CbCr (so it's post gamma correction also).

If you want to refer to what a Bayer sensor does with regard to its ratio of green to red / blue pixels, then you can use the term Bayer Pattern sensor. If you wish to refer to chroma sub-sampling before transmission or as part of a codec, then 4:2:2 or 4:2:0 etc. are appropriate terminology.

As you point out, spec for pixel container doesn't determine measured resolution. Chroma sub-sampling notation doesn't determine measured chroma resolution.

Let's use the appropriate terminology and not apply a Y'CbCr chroma sub-sampling notation to a sensor with different ratios of R:G:B pixels.

Graeme

Alister Chapman
February 13th, 2011, 05:02 AM
4:4:4 can have either YCbCr or RGB color space. It is not always referring to chroma sub sampling or conversion to luma/chroma colorspace. The Sony F3 optional dual link output for example will be RGB 4:4:4 capable. HDCAM SR can use 4:4:4 RGB; there is NO color subsampling, nor colorspace conversion of the signal from the sensor. You can also refer to a sensor as 4:4:4 if it has a 1:1:1 ratio of RGB samples.

As I said. If the original data isn't there, you can fill your data pipe with as much data as you want, call it what you want, argue over the terminology all you want, but when the data comes out the end of your pipe the data still won't contain the missing picture information.

A camera is a system from lens to output. The output quality will be limited to the lowest common denominator, starting with the lens and working back through sensor, processing, encoding and output.

The implication, rightly or wrongly, in many end users' minds of having a camera with a 4:4:4 output is that the signal contains a full resolution 1:1:1 ratio of picture information for YCbCr or RGB. But as the ratio of Y:Cb:Cr or R:G:B off the sensor or out of the processor is not 1:1:1, due to the sub sampling of chroma compared to luma (or R, G, B) by the bayer sensor pattern, the picture information going into the 4:4:4 data encoder does not have a 1:1:1 ratio, and thus the picture information coming out of the 4:4:4 pipe will not have a 1:1:1 ratio. Anyone that says otherwise is trying to pull the wool over the end users' eyes.

Yes, strictly speaking the data is 4:4:4, but the content is not. The implication of 4:4:4 is that all-important 1:1:1 ratio of image data. You could even take a monochrome picture and encode its output as a 4:4:4 data stream; again the data would be 4:4:4 but it wouldn't be much good for chroma key. I'm sure anyone making such a camera would have a hard time justifying any 4:4:4 claim in such an extreme case.

If you refer to Sony's HDCAM format you will often see this annotated as 3:1:1. Would you regard this as correct or incorrect? The cameras and decks record an anamorphic 1440x1080 image with 3 Y samples for each Cb or Cr sample, but for compatibility reasons the signal that comes down the HDSDi cable is 1920x1080 4:2:2. So while the data coming down the HDSDi cable is a 4:2:2 signal the content is not and no amount of reconstruction will ever make the 3:1:1 content the same as real 4:2:2.

It's important to understand this and realise that just because the data analyser, tech specs or marketing literature says the signal is 4:4:4 it does not necessarily mean 1:1:1 sampling of the light entering the camera lens. In fact when used the way it is by many manufacturers in terms of final image quality it's actually pretty vague.

Graeme Nattress
February 13th, 2011, 07:39 AM
Neither RGB nor Y'CbCr is a colour space - they are ways of storing image data that can be in any of a multitude of colour spaces. A colour space is something like sRGB or REC709, or Adobe1998, and is defined by a transform from XYZ along with a white point.

To say "nor colorspace conversion of the signal from the sensor" would be incorrect. There are always quite a few processes that happen from the sensor to a viewable RGB image. A basic image processing system would be something like: sensor data -> (demosaic here if a Bayer CFA) -> black offset correction -> colour correction matrix -> gamma curve -> RGB Image. The colour correction matrix will be applying a colour space conversion from the native space of the camera sensor to REC709 colorimetry, for instance.

So, let's look at the original HDCAM: the recorded luma signal is 1440x1080, and the chroma is 1/3 of that at 480x1080, which is why it's 3:1:1. The 3, you'll note, does not refer to 1440 being 3/4 of 1920, but to 480 being 1/3 of the 1440. Now, the particular numbers for HDCAM work the other way too, referencing back to "4" as "full", but that screws up the meaning of 4:2:2 as it had been used: 16:9 recording on Digibeta would then, instead of being 4:2:2 as we know it, have to be something like 3:1.5:1.5, referencing back to the full RGB image before it got anamorphically squashed prior to recording. Similarly with 16:9 DV we'd have it recording as 3:1.5:0 (not to get distracted by how the "0" is used in 4:2:0, which is a mathematical abomination). Panasonic's DVCProHD follows this convention also, being luma reduced before recording to 960x720 (for 720p) from 1280x720, then the chroma is halved to 480 due to 4:2:2.

That there is no SDI transmission standard for the HDCAM native data luma and chroma sub-sampled format does imply that a 4:2:2 1920x1080 signal is created by the deck as an output signal to be transmitted over the SDI. At that point it is real 4:2:2 or else it couldn't be transmitted. 4:2:2 refers to the signal, not to its content, so to say that real 4:2:2 is a 4:2:2 signal which comes from a source with a high-enough resolution in luma and chroma to justify such appellation is not correct. It's either a valid 4:2:2 format or it's not formatted correctly. The notation assumes nothing about quality and, as noted above on 3:1:1 HDCAM, does not tell us the luma resolution at all, just how chroma is sub-sampled compared to luma.

"It's important to understand this and realise that just because the data analyser, tech specs or marketing literature says the signal is 4:4:4 it does not necessarily mean 1:1:1 sampling of the light entering the camera lens." That is correct. 4:4:4 refers to no chroma sub-sampling at a particular part of an image recording chain and that is that. Indeed, it has never referred to back to even as far back as the previous full RGB image (see anamorphic Digibeta case above) never mind light entering the lens!

Graeme

Peter Moretti
February 13th, 2011, 09:12 AM
Hi Alister,

Quick question: which cameras output 8 bits through a 10bit HD-SDI?

Canon's HDV cameras are an example.

Peter Moretti
February 13th, 2011, 09:19 AM
... There are always quite a few processes that happen from the sensor to a viewable RGB image. A basic image processing system would be something like:

sensor data -> (demosaic here if a Bayer CFA) -> black offset correction -> colour correction matrix -> gamma curve -> RGB Image.

The colour correction matrix will be applying a colour space conversion from the native space of the camera sensor to REC709 colorimetry, for instance. ...

Graeme,

Thanks for contributing to this discussion. May I ask in the case of the Red, where does compression take place? I'm assuming before colour correction matrix (so colour correction matrix -> gamma curve -> RGB Image all take place in RedCine-X, not in camera).

But then that begs the next question in my mind: what is "black offset correction"?

Graeme Nattress
February 13th, 2011, 09:22 AM
The analogue output of a pixel in a sensor will not produce zero volts at pure black (lens-cap-black), but instead there will be an offset voltage. Colorimetry math assumes linear colour data with black at zero, so a correction needs to be made.

In the RED, the REDCODE compression works directly on the raw data from the sensor. There's no demosaic or colour correction before compression. In REDCine-X, the REDCODE decodes back to raw, and from there the image processing pipeline works on the decompressed raw sensor data.

Graeme

Peter Moretti
February 13th, 2011, 09:50 AM
Right, Graeme, I know that the Red compresses before demosaic. It's kind of essential for a RAW workflow.

What I guess I was getting at is are there any steps that happen between sensor data and compression? It seems that black offset correction and maybe some other "low level" processing might/should happen before compression.

Thanks.

Graeme Nattress
February 13th, 2011, 09:53 AM
Black shading and pixel correction, then off to compression while still raw.

Graeme

Peter Moretti
February 13th, 2011, 10:07 AM
Some of the other cameras bake in values like WB at the AD stage at very high bit depths (like 14 or 16 bits). (Not to imply there is some deficiency in how Red does it.)

Am I remembering correctly that there will be a new RedCode coming out that will be 16-bits? And will this be for across the Red line or just Epic-X?

Thanks again.

Alister Chapman
February 13th, 2011, 10:12 AM
Neither RGB not Y'CbCr are colour spaces - they are ways of storing image data that can be in any of a multitude of colour spaces. A colour space is something like sRGB or REC709, or Adobe1998, and is defined by a transform from XYZ along with a white point.

Yes and no: by definition Y'CbCr is both a description of the way the luma and chroma signals are encoded and a family of non-linear encoded color spaces, which includes some of the ones you have outlined above. RGB is likewise both a description of the encoding method and a family of additive color spaces. RGB itself is a color space with unlimited gamut. Conversion between Y'CbCr and RGB color spaces can sometimes lead to an overall reduction in gamut.

It would perhaps have been clearer if I had used the term encoding instead of color space, but the argument remains the same.

If the source isn't up to it, 4:4:4 potentially brings no advantage over 4:2:2

Graeme Nattress
February 13th, 2011, 11:01 AM
I just noticed that the point I wanted to make and missed was that 4:2:2 is not a "colour space", although often referred to as one.

"If the source isn't up to it, 4:4:4 potentially brings no advantage over 4:2:2" - I know exactly what you mean here, but it's still a problematic statement because 4:2:2 is a chroma sub-sampling scheme rather than a quality statement. What you would do in this case is encode the same signal via different chroma subsamplings and then reconstruct them all back up to 4:4:4 and compare back to the original RGB. You'd then know which chroma sub-samplings would show in as a lower quality and which would not.

If we think of an HD camera image, it's colour space is REC709, and that is based on the RGB colour model, and it can be encoded as Y'CbCr which may use a 4:2:2 chroma sub-sampling.

With mapping between RGB and Y'CbCr encodings, there can be a loss of precision due to rounding of code values. There are also many Y'CbCr code values that don't map to valid RGB values, so you can get in the situation where if you start with an RGB encoded image and adjust it in Y'CbCr you may find that some of the adjusted values map back to invalid RGB values which would probably get clamped. If you keep it un-clamped floating point all the way, you can transform back and forth quite freely though.
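For example, using the standard BT.709 luma coefficients, a round trip through 8-bit quantised Y'CbCr doesn't always return the exact starting code values (sketch only; real systems differ in signal ranges and rounding rules):

```python
import numpy as np

Kr, Kb = 0.2126, 0.0722            # BT.709 luma coefficients
Kg = 1 - Kr - Kb

def rgb_to_ycbcr(r, g, b):
    y = Kr * r + Kg * g + Kb * b
    cb = (b - y) / (2 * (1 - Kb))
    cr = (r - y) / (2 * (1 - Kr))
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 2 * (1 - Kr) * cr
    b = y + 2 * (1 - Kb) * cb
    g = (y - Kr * r - Kb * b) / Kg
    return r, g, b

rgb = np.array([30, 200, 90]) / 255.0          # an arbitrary 8-bit colour
y, cb, cr = rgb_to_ycbcr(*rgb)
# Quantise the Y'CbCr to 8 bits (full range here, for simplicity), then invert.
y8, cb8, cr8 = [round(v * 255) / 255 for v in (y, cb + 0.5, cr + 0.5)]
back = np.array(ycbcr_to_rgb(y8, cb8 - 0.5, cr8 - 0.5))

print(np.round(back * 255).astype(int))        # [31 200 90]: not exactly the [30 200 90] we started with
```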

"RGB itself is a color space with unlimited gamut." no idea what you mean here.

Graeme

Alister Chapman
February 13th, 2011, 12:24 PM
If you have your xyz axis representing RGB and these go from zero to an infinite amount of R, G and B then the gamut range is infinite. So the basic undefined RGB color space has unlimited gamut. All the defined color spaces are then contained within this unlimited gamut.

Graeme Nattress
February 13th, 2011, 02:05 PM
If you have your xyz axis representing RGB and these go from zero to an infinite amount of R, G and B then the gamut range is infinite. So the basic undefined RGB color space has unlimited gamut. All the defined color spaces are then contained within this unlimited gamut.

So yes, if you do that you can have an unlimited amount of RGB code values, however, that doesn't give you an infinite gamut. The gamut is constrained by the representation of the RGB primaries in xy, which is finite. Of course, you can define arbitrarily "way out" primaries, but I don't see that as a practical consideration. If you've not defined the RGB space, "basic undefined RGB color space", then you don't have a colour space and don't have any gamut. The code values of the RGB only become colours when you define what they mean in terms of the primaries.
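For instance, taking the published Rec.709 primary chromaticities, the gamut triangle has a fixed, finite area in xy no matter how many (or how large) the RGB code values are allowed to be (a quick illustrative sketch):

```python
# Rec.709 primary chromaticities (CIE xy).
primaries = {"R": (0.640, 0.330), "G": (0.300, 0.600), "B": (0.150, 0.060)}

def triangle_area(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

area = triangle_area(*primaries.values())
print("Rec.709 gamut area in xy:", round(area, 4))   # ~0.112 -- finite, whatever the code-value range
```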

Graeme

Steve Cocklin
March 31st, 2011, 09:16 AM
Does anyone know if the F3 will be 100% content approved by broadcasters like the BBC and National Geographic?

I just found out that no, the F3 is not 100% content approved by the BBC and National Geographic; for more information read this blog: http://dylanreeve.com/videotv/high-definition/2010/sony-f3-is-not-hd.html

Jim Tittle
March 31st, 2011, 11:09 AM
To meet their specs, all you'd have to do is add a KiPro Mini or something similar. There's no mention of it in their specs, but I'm assuming that the BBC requires you to use a lens, too. (Not included with the F3).

Andrew Stone
March 31st, 2011, 12:56 PM
I just found out that no the F3 is not 100% content approved by the BBC and National Geographic; for more information read this blog; Sony F3 is not HD | Edit Geek (http://dylanreeve.com/videotv/high-definition/2010/sony-f3-is-not-hd.html)

Hi Steve,

Welcome to DVinfo. You will find that a good number of the people here who use XDCAM based cameras already own an outboard recorder that does over 50 Mbit 4:2:2 capture, most notably Convergent Design's nanoFlash unit. As such, this is not an issue for those of us with a nanoFlash or similar device.

There is a forum dedicated to the NanoFlash and the soon-to-be-released Gemini 444 uncompressed data recorder, and you will find lots of information on these sprinkled throughout this F3 forum and the EX1/EX3 forum here on DVinfo.

Brian Drysdale
March 31st, 2011, 01:00 PM
Sony regard the on-board recording as a proxy; they intend you to use an external recorder for the master recording.

Alan Roberts puts BBC camera settings in his assessment: http://thebrownings.name/WHP034/pdf/WHP034-ADD68_Sony_PMW-F3.pdf

Sophie Bucks
April 12th, 2011, 06:14 PM
I just found out that no the F3 is not 100% content approved by the BBC and National Geographic; for more information read this blog; Sony F3 is not HD | Edit Geek (http://dylanreeve.com/videotv/high-definition/2010/sony-f3-is-not-hd.html)

Hiya, I think you need to buy a Nanoflash or similar, then you're sorted for the BEEB.

Giuseppe Pugliese
April 13th, 2011, 07:50 AM
to kinda bring back the topic to the question F3 vs RED....


I just got back from a shoot in Africa... We would never have been able to get our film shot on budget and on time if we were shooting with the RED system. The F3 did NOT overheat. In fact, I tried to make it overheat... We were shooting in 110 degree weather. I left the camera out in the sun with no cover for hours while rolling... I never once got an error or heat issue. That would never have happened with the RED camera... Also, backing up on set was much quicker and needed way less hard drive space.

If I had picked the RED system over the F3, we just never would have made our days in that heat, under those conditions. I was in the most testing environments with that camera, and never once did the camera tell me "nope, I don't wanna work". I was shocked, and I would love to see what it DOES take to overheat that camera... African tropical weather and heat, with dust and rain and super high precipitation. It may feel plasticky, but it sure did handle the beating for a full month of shooting.

Thierry Humeau
April 13th, 2011, 03:12 PM
The F3 specs say 0°C to +40°C (+32°F to +104°F). I am pretty confident you can extend that range by 10% without worrying. As a matter of fact, in March, we were shooting at -20F in Yellowstone with a PMW-350 that is also rated at 0°C to +40°C (+32°F to +104°F) and had no problem at all.

Thierry.

Steve Kalle
April 13th, 2011, 04:01 PM
Is overheating really an issue with current Red cameras such as Red One MX or Epic? I thought I had read about these issues with the first Red One model but Red fixed most or all issues with the MX model. Yes, no?

On the flip side, I really can't believe most of what I read on the net these days due to the rabid fanboy-ism. There are a few individuals I trust to tell the truth but even they succumb to the hype occasionally.

However, something that drives me bonkers is a manufacturer dedicating an entire forum to the first 100 or so owners of a piece of equipment such as the forum with a separate thread for each Epic owner. Really?! I mean, come on! And then there are numerous people praising and envying those owners. It looks to me like Red took a page from the Apple playbook - ie, make a product and create crazy hype around it. I thought the Reality Distortion Field was strong with Apple but Red's RDF is impenetrable.

To be honest, my biggest annoyance is all the 'Resolution is King' and nothing-else-matters mentality. I guess the most successful film ever, Avatar, must have looked like crap because it was shot on small 3-chip 1080p cameras. To top it off, it was projected in IMAX and looked stunning!

Ok. My diatribe is over.

Now, about the F3 vs Red. The Red Epic is an amazing little camera and its light weight and small size allow huge cost savings in support gear from lower cost Steadicams to smaller jibs, cranes, tripods, etc...

For high-budget films and TVCs, if you can't afford the Alexa, then a Red is the best choice. However, if a DIT station is not in the budget, the F3 fits better in the overall workflow.

I still want to know what is better: the raw R3D format or 444 S-Log. Seeing as you must spend about $10,000 minimum for 444 S-Log recording (when Gemini is released), the total cost of ownership for both cameras is not very different.

For those looking to buy, I think the way Red has handled its customers is a big plus, ie., original Red One owners were offered great incentives for the updated MX version. I haven't seen Sony do anything like that for its customers.

One last quick point: Red does not have ND filters built-in and it has become evident that many many people want NDs built into the camera (thanks to the FS100).

No matter what, BOTH cameras produce AMAZING images.

Giuseppe Pugliese
April 13th, 2011, 08:06 PM
Any camera running in high temperatures will overheat, and the new RED cameras are still prone to it, as I've heard. On set we had two 7D cameras; they shut down and overheated in a matter of minutes. We couldn't even get the camera mounted in the car before it overheated.

I really love RED footage, but for me, the Epic is just kinda overkill and too small. My next camera after the F3 will most likely be a used Alexa, unless something better comes out between the F3 and Alexa market... I don't like the way RED handles the workflow, or the customer service for that matter. WAY too much ego and not enough actual help around the world. I know that if I'm shooting in Africa or Asia, I can have support with Sony or Arri... RED... well good luck with that.

When I had an issue with my cards, I called my Sony rep in the States, and within a few hours I was getting help from an engineer at Sony UK trying to help get my situation fixed. That's service.

David C. Williams
April 13th, 2011, 09:35 PM
I've seen one Epic, and it had the noisiest fan I've ever heard on a camera. It slowed to near inaudible when recording, but the fact it had to run so hard when not recording makes for interesting speculation on their usability in hot environments.

Dennis Dillon
April 18th, 2011, 06:53 PM
This thread has the most hits of all the threads on this camera.
I wonder why?

While working for Sony (Sony ICE, Independent DP team member) at the Sony NAB booth, a gentleman who I have never met informed me he was the President of the Red User Group. Sorry, I never took his name down. No disrespect, I literally met hundreds of DPs that week. I'm sure if he wishes to add to this he can.

He asked me about the camera and then introduced me to two DPs who are Red users and wanted to know more about the F3. After a bit of basic technical exchange regarding the workflow in 709 and S-Log, I demonstrated what I think is the most important feature of the F3. Sensitivity, period. After pumping up the gain to 18dB and indicating its ISO 6400 rating, their reactions were nothing but positive after seeing that there was little noise in the image.

Holding a Candle to the Sony PMW-F3 | CineTechnica (http://blog.abelcine.com/2011/04/07/holding-a-candle-to-the-sony-pmw-f3/)

We discussed the ability to apply custom LUTs to the 35 Mb proxy copy, and how that would ease the post session. We also discussed the RGB 10 bit uncompressed features, and how SR or the many third party compressed/uncompressed recorders were about to raise the bar for the common DP.
They told me they were going to buy two F3s for their next doc. Not saying one was better than the other, only indicating that the price point/performance/workflow level brought them to that decision.

Look, Sony has gone 4K, 65mm with the F65 and has handed down the F35 imager to the rest of us. I'm sure the DSP in the F35 is way ahead of the DSP in the F3 and the FS 100. Of course!!!

They are moving into 4K with a killer imager. It is an 8K sensor that puts out a 4K image. Red folks would ask me, is the F3 imager 4K? No, it is not; it is 3.5K. But we are talking about a sub-2K image as the standard for today and many years to come, until 4K TV is the norm. The comparison should be how Sony takes their 3.5K image, and how Red takes their stated 4K, to produce a sub-2K image.

I admit I have never shot with a Red One, and was very tempted to put down the 1k for the first issue. And I'm not in the know on the latest Red issues. So please add to this.

But I had to stick by what my clients then (and the same clients today) needed, and the workflow just did not fit with them, so I waited for the next iteration of a single-imager camera.

So Canon drops the 5D on us (an AP request), and many jump on it like flies on sh_t. Sorry Chris H.
Glad to see there were considerably fewer 5D geeks running around the NAB floor with modified mounts and Zeiss CPs attached. I mean really, how can you take that codec and noise level seriously? I would love to hear from the newly ordained 5D DPs who touted its low-light capabilities (this was all about spending less money on a good LD/grip and camera package) and see a head-to-head comparison with the F3 at 18dB/ISO 6400 and still say the DSLR is viable. Viable, yes, if your business plan is 200-400/day for an A camera. Weddings, anybody? Not a knock on wedding videographers. We are talking about many notches above that "I will love you till death" moment.


Say goodbye to the DSLR, period. (See the FS100: same sensor and less money after you factor in the extras needed to match the FS100's feature set.)


Argue this: if you have graded your 5D and had 12+ stops to work with, and no noise above ISO 1200, please send me frame grabs and the CC log, and I'll eat.... shut up. The F3 has a little noise at 18dB/ISO 6400, and you would have to blow it up x2 to see it.

So it's down to what your client needs: 8/10/14/16 bit. For now 8 bit is the norm. Tomorrow is another day. I'm sure, with all the budget issues facing broadcasters and production facilities, 10 bit/log will slowly make its way to where 8 bit is today.

Sorry for the rant.

Jacques Mersereau
April 19th, 2011, 08:29 AM
F3 vs RED

Regarding the Digital Cinema System Specification, which calls for a minimum of 2K (2048x1080), I would like to hear comments about the possibility of the Sony F3 being able to _output_ a 24fps or 48fps 2K signal for capture.

I believe RED can do this now.