View Full Version : Report: F3 looks almost the same as EX3



Alister Chapman
March 9th, 2011, 05:24 PM
I got about 950H 850V resolution at MTF50. That's not a bad set of numbers for a 1080 camera. My charts and test procedure are not perfect; with a better chart you would possibly see marginally higher numbers. What I can say though is that visually on a monitor it appears at least as sharp as my EX's, if not sharper. This appears to be because there is no noise to blur or soften the image.

Steve Kalle
March 9th, 2011, 06:18 PM
Resolution isn't the most important factor, so try not to get too hung up on it. Remember that many people have been praising the 5D & 7D; yet they can resolve barely more than an SD camera (500ish TVL). Btw, I am certainly NOT one of those praising those 2 cameras.

PS I'm on Alister's side about people's infatuation with Red's 4K/5K as 'resolution' when 4K only means the number of pixels. And yet, even Red misleads people by stating '5K resolution' on their website.

Peter Moretti
March 10th, 2011, 04:56 AM
I got about 950H 850V resolution at MTF50. That's not a bad set of numbers for a 1080 camera. My charts and test procedure are not perfect; with a better chart you would possibly see marginally higher numbers. What I can say though is that visually on a monitor it appears at least as sharp as my EX's, if not sharper. This appears to be because there is no noise to blur or soften the image.

Alister, I may be wrong, but I thought all cameras have a Nyquist band-limiting filter just before the output going to the A/D converter. I believe that means an absolute limit of about 1K TV lines for 1080 cameras. But that's not at MTF50, so the MTF50 number is going to be lower.

If that's correct, I believe getting significantly higher rez than an EX from any 1080P camera probably ain't gonna happen--please excuse my American, LOL.

Alister Chapman
March 10th, 2011, 06:40 AM
When measuring the resolution of a well designed video camera, you never want to see resolution that is significantly higher than HALF of the sensor's resolution. Why is this? Why don't I get 1920x1080 resolution from an EX1, which we know has 1920x1080 pixels? Why is the measured resolution often around half what you would expect?
In a well designed video camera there should be an optical low pass filter in front of the sensor that prevents anything above approximately half of the sensor's native resolution reaching it. This filter does not have an instantaneous cut-off; instead it attenuates fine detail by ever increasing amounts, centered somewhere around the Nyquist limit for the sensor. The Nyquist limit is normally half of the pixel count for a 3-chip camera, or somewhat less than that for a Bayer sensor. As a result the measured resolution gradually tails off somewhere around Nyquist, or half of the expected sensor resolution. But why?
It is theoretically possible for a sensor to resolve an image at its full pixel resolution. If you could line up the black and white lines on a test chart perfectly with the pixels of a 1920x1080 sensor, then you could resolve 1920x1080 lines. But what happens when those lines no longer line up perfectly with the pixels? Let's imagine that each line is offset by exactly half a pixel. What would you see? Each pixel would see half of a black line and half of a white line, so each pixel would see 50% black and 50% white, and its output would be mid grey. With the adjacent pixels all seeing the same thing, they would all output mid grey. So by shifting the image by half a pixel, instead of 1920x1080 black and white lines all we see is a totally grey frame.

As you continued to shift the chart relative to the pixels, say by panning across it, it would flicker between pin-sharp lines and grey. If the camera was not perfectly aligned with the chart, some of the image would appear grey or different shades of grey depending on the exact pixel-to-chart alignment, while other parts might show distinct black and white lines. This is aliasing. It's not nice to look at and can in effect reduce the resolution of the final image to zero. To counter this you deliberately reduce the system resolution (lens + sensor) to half the pixel resolution, so that it is impossible for any one pixel to see only one object: by blurring the image across two pixels you ensure that aliasing won't occur. It should also be noted that the same thing can happen with a display or monitor, so trying to show a 1920x1080 image on a 1920x1080 monitor can have the same effect.
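A rough numerical sketch of that half-pixel case (plain Python, no particular camera assumed): a chart carrying one line per pixel is averaged over each photosite, once aligned with the pixel grid and once shifted by half a pixel.

```python
def pixel_values(offset, n_pixels=8, sub=1000):
    """Average an alternating white/black line chart (one line per pixel width) over each photosite."""
    values = []
    for i in range(n_pixels):
        white = 0.0
        for k in range(sub):
            x = i + (k + 0.5) / sub + offset           # position on the chart, in pixel widths
            white += 1.0 if int(x) % 2 == 0 else 0.0   # even-numbered chart lines are white
        values.append(round(white / sub, 2))
    return values

print(pixel_values(0.0))   # aligned:           [1.0, 0.0, 1.0, 0.0, ...] full contrast
print(pixel_values(0.5))   # half-pixel offset: [0.5, 0.5, 0.5, 0.5, ...] uniform mid grey
```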
When I did my recent F3 resolution tests I used a term called the MTF, or modulation transfer function, which is a measure of the contrast between adjacent pixels. MTF50 is the point where the contrast between the black and white lines on the test chart falls to 50% of its maximum.
When visually observing a resolution chart you can see where the lines on the chart can no longer be distinguished from one another. This is the resolution vanishing point, typically somewhere around MTF15 to MTF5, i.e. the contrast between the black and white lines becomes so low that you can no longer tell one from the other. The problem is that because you are looking for the point where you can no longer see any difference, you are attempting to measure the invisible, so it is prone to gross inaccuracies. In addition, the contrast at MTF10, the vanishing point, will be very, very low, so in a real-world image you would often struggle to ever see fine detail at MTF10 unless it was a strong black and white edge.
So for resolution tests a more consistent result is obtained by measuring the point at which the contrast between the black and white lines on the chart falls to 50% of maximum, or MTF50 (as resolution decreases, so too does contrast). While MTF50 does not determine the ultimate resolution of the system, it gives a very reliable, repeatable performance indicator that is consistent from test to test. What it will tell you is how sharp one camera will appear compared to the next.
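For anyone who wants a number to attach to "contrast" here: one common way to express it is the Michelson contrast between the white-line and black-line pixel values, with MTF50 being the line frequency where that figure has dropped to half of its low-frequency value. A minimal sketch with made-up readings:

```python
def michelson_contrast(i_max, i_min):
    """Contrast between the white-line and black-line pixel values (0.0 to 1.0)."""
    return (i_max - i_min) / (i_max + i_min)

# Hypothetical readings as the chart lines get finer:
print(michelson_contrast(0.90, 0.10))   # 0.8 - coarse lines, near-maximum contrast
print(michelson_contrast(0.70, 0.30))   # 0.4 - half the maximum: this frequency would be MTF50
print(michelson_contrast(0.55, 0.45))   # 0.1 - down near the visual vanishing point
```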
As Nyquist is half the pixel resolution of the system, for a 1920 sensor anything over 960 LP/ph will potentially alias, so we don't want resolution above this. If you do see a higher number, the extinction resolution must be higher still, and so there must be a risk of aliasing. This is where seeing the MTF curve helps, as it's important to see how quickly the resolution is attenuated past MTF50.
With Bayer pattern sensors it's even more problematic due to the reduced pixel count for the R and B samples compared to G.
The resolution of the EX1 and F3 is excellent for a 1080 camera; cameras that boast higher than 960 LP/ph will have aliasing issues. Indeed the EX1/EX3 can alias in some situations, as does the F3. These cameras are right at the limits of what will allow for a good, sharp image at 1920x1080.

Peter Moretti
March 10th, 2011, 07:13 AM
Agree with everything you've said. But I believe that in addition to the OLPF there is a hard cut-off filter just before the A/D converter. This prevents sending a signal that could alias in the digital signal processing, b/c OLPFs aren't perfect, and in the case of a Bayer sensor the OLPF has to allow some red and blue aliasing.

A result of this hard-cut filter is an absolute limit on what the total resolution can be.


P.S. Well, I should say I agree with the tenor of everything you're saying. I believe you're not using LP/ph properly. If you mean line pairs per picture height, Nyquist is 1080/2 = 540. But your explanation of the role of the OLPF jibes with my understanding, FWIW.

Alister Chapman
March 10th, 2011, 07:19 AM
MTF plots for the camera don't show a hard cut off.

Peter Moretti
March 10th, 2011, 08:40 AM
Alister,

I could try to explain this away as dithering, interpolation, sharpening, codec distortion, electronic noise, setting the A/D frequency higher than twice the optical Nyquist frequency of the sensor, or the fact that the sensor's Nyquist rate is theoretical in that it assumes perfect fill, but I'm just not sure.

That said, I'm almost positive that the A/D includes what could be thought of as an electronic LPF; otherwise you'd have digital aliasing that has no picture value whatsoever. The net effect of this "ELPF" is that there is an exact governor on resolution coming off the sensor.

Glen Vandermolen
March 10th, 2011, 09:55 AM
So, a camera that shoots as good as an EX3, but has a cleaner image, less noise, better control over depth of field, a much greater variety of lens selection and is better in low light?

Ding, ding, ding! We have a winner!!!

David Heath
March 10th, 2011, 10:31 AM
I believe that in addition to the OLPF there is a hard cutoff filter just before the A/D converter. This prevents sending a signal that could alias in the digital signal processing b/c OLPF's aren't perfect, .........
I don't think so - by the time the image hits the sensor it's too late: it's the image on the chip and the sensor pattern that burn the aliases in. Put into the system a frequency higher than the Nyquist limit, and the output will wrap around the Nyquist frequency.

To take Alister's example further, think of putting in a signal that's even finer than the sensor pattern. Let's say 960x1.5 line pairs across the horizontal (so 1440 white lines and 1440 black). Move across horizontally, and pixel by pixel what you'll see will be:

Pixel 1 - first 66% white
Pixel 2 - last 66% white
Pixel 3 - last 33% white
Pixel 4 - first 33% white
Pixel 5 - first 66% white etc etc

Which is a light grey line two pixels wide, followed by a dark grey line for the next two pixels. Across the sensor width, you'll see 480 line pairs.

A frequency 1.5x the Nyquist limit has been turned into an alias 0.5x the Nyquist limit. And no filtering etc that comes after the sensor can distinguish between this alias and a genuine pattern of 480 line pairs.
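That walk-through can be checked numerically; a rough sketch (plain Python, assuming 1920 photosites across the width as in the example) that averages a 1440-line-pair chart over each photosite and counts the cycles that come out:

```python
N = 1920            # photosites across the width
PAIRS = 1440        # line pairs on the chart: 1.5x the 960-pair Nyquist limit
period = N / PAIRS  # chart period in pixel widths (4/3 of a pixel)

def pixel_value(i, sub=1000):
    """Average the chart (white for the first half of each period) over photosite i."""
    white = 0.0
    for k in range(sub):
        x = i + (k + 0.5) / sub
        white += 1.0 if (x % period) / period < 0.5 else 0.0
    return white / sub

print([round(pixel_value(i), 2) for i in range(8)])
# -> roughly [0.67, 0.67, 0.33, 0.33, 0.67, 0.67, 0.33, 0.33]: two light, two dark, repeating

row = [pixel_value(i) for i in range(N)]
cycles = sum(1 for i in range(1, N) if row[i - 1] < 0.5 <= row[i])
print(cycles)   # ~480: the 1.5x-Nyquist chart has folded to a 0.5x-Nyquist pattern of 480 line pairs
```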

Peter Moretti
March 10th, 2011, 11:08 AM
David,

I believe I understand what you're saying and it does make sense. Essentially the sensor's photosite grid itself acts as a "hard filter," so resolution above Nyquist gets broken down into a sub-Nyquist image.

I appreciate your and Alister's input, but I have to dig into this whole issue of a digital filter more, b/c I could swear there is more than one Nyquist filter in a camera design.

BTW, how do you explain Imatest numbers above Nyquist (which I realize in no way proves the existence of an "ELPF")?

http://www.dvinfo.net/forum/sony-xdcam-f3-cinealta/492524-aliasing-moire-2.html

Thanks.

Peter Moretti
March 10th, 2011, 01:54 PM
Okay guys, if you could, please watch Chapter 2.

Panavision (http://www.panavision.com/media-center)

At times 1:20, 6:00, 7:15, 8:00 and onward, Larry Thorpe talks about what I've been trying to describe. I fully leave open the possibility that he is not being entirely accurate. But I know I am now entirely confused.

Alister Chapman
March 10th, 2011, 02:41 PM
You can have resolution above Nyquist, but the issue is that anything above Nyquist can lead to aliasing with any kind of fixed or repeating pattern. It is the relationship between the light falling on the sensor and the pitch of the pixels that causes the aliasing, which is why it is all but impossible to eliminate through electronic processing or filtering. The aliasing occurs right at the point where the light hits the sensor, so the signal read from the pixels already contains aliased information, and an electronic low pass filter will do nothing other than further reduce the system resolution. This is why the DSLRs alias so badly. The OLPF is designed for high resolution stills where every pixel is used. When you start skipping pixels you create large voids between each light sample, and Nyquist ends up well below the OLPF cut-off. If it could be fixed through electronic processing then Canon etc. would have done it a long time ago, but it can't, because the aliases are baked into the signal being pulled out of the pixels and it's next to impossible for the processor to know what is real detail and what is aliasing.

Aliasing occurs at the sensor. You can take the 1000-line resolution image after processing and feed it into a 2000-line-capable signal chain without any issues; there is no advantage to be gained from an electronic low pass filter. If anything it is advantageous to pass as much resolution up the chain as possible once it's off the sensor, to allow for sharpening and detail correction, which increase the frequency content of the signal even though resolution is not actually increased.
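The pixel-skipping point can be illustrated with a toy one-dimensional example (Python with NumPy; the numbers are illustrative assumptions, not any particular DSLR's read-out). Fine detail that the full photosite grid records cleanly folds to a false coarse pattern the moment every third photosite is skipped, and nothing downstream can tell that alias from genuine detail:

```python
import numpy as np

n = 1920                                                 # photosites across the frame (illustrative)
x = np.arange(n)
chart = 0.5 + 0.5 * np.cos(2 * np.pi * 700 * x / n)      # fine detail: 700 cycles per frame width

full_read = chart          # every photosite read: 700 cycles is below the 960-cycle Nyquist limit
skipped = chart[::3]       # pixel/line skipping: only every third photosite is read out

def dominant_cycles(signal):
    """Cycles per frame width of the strongest non-DC component in the read-out."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    return int(np.argmax(spectrum))

print(dominant_cycles(full_read))   # 700 -> the detail is recorded faithfully
print(dominant_cycles(skipped))     # 60  -> folded to a false coarse pattern, baked into the samples
```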

Peter Moretti
March 10th, 2011, 03:00 PM
Alister, I really don't mean to be a pain in the ass about this, and I very much appreciate your replies. But the whole point of an "ELPF" is not to eliminate aliasing that's already occurred due to the sensor grid; it's to eliminate frequencies above the Nyquist limit of the A/D converter before they enter the A/D converter. The presence of such frequencies would create additional/new aliasing during the A/D phase.

A/D is digital sampling too, so it has its own Nyquist limit. What is tripping me up is the idea that the sensor does not pass on resolution above Nyquist. I thought it did, which is why I thought an "ELPF" is needed. If the sensor does pass on resolution above the Nyquist limit of the A/D, as I first thought, then of course an "ELPF" is needed, and Larry's comments make sense.

Again, I'm not trying to get into a pissing match with anyone. I'm just trying to hash this all out, and there aren't a lot of places to do that. So please take all this in the manner it's meant. ;)

And I leave open the real possibility that I've just jumbled this all up. But if you have the time, ponder this over some more and maybe watch the video I've linked to.

David Heath
March 10th, 2011, 03:18 PM
At times 1:20, 6:00, 7:15, 8:00 and onward, Larry Thorpe talks about what I've been trying to describe. I fully leave open the possibility that he is not being entirely accurate. But I know I am now entirely confused.
Ah, OK, he's talking about a far more general case than we normally consider, with different sampling for the sensor and for the digitisation. So there is an optical Nyquist limit set by the sensor dimensions; then he talks of a conversion to analogue and then signal digitisation, which will have its own Nyquist limit.

It's more normal now not to have the intermediate step, so the optical sampling matches the signal sampling. You want 1920 samples horizontally to be recorded? Easy, make a sensor with 1920 photosites across.

Come to a Bayer sensor, which could give more resolution than a 1920 system can handle. Hence the OLPF needs to stop detail greater than the sensor's optical Nyquist, and then a filter is needed to reduce the image detail before the downconvertor. I'd expect that filtration to be part of the downconversion process, and for it all to happen digitally.

As for the greater-than-Nyquist Imatest frequencies, I suspect those are the frequencies input to the system. So it will show the value of the output response, but you can also expect the output frequency to be mirrored around the Nyquist point. You can put frequencies in to the system that are greater than Nyquist, but you won't get such out - only sub-Nyquist aliases.

Nowadays, A/D convertors work on a photosite by photosite basis, not on the waveform as a whole basis as was the case digitising analogue signals.

Peter Moretti
March 10th, 2011, 03:40 PM
"You can put frequencies in to the system that are greater than Nyquist, but you won't get such out - only sub-Nyquist aliases."

David, okay let me be sure I follow you. Let's assume a monochromatic sensor (so we don't have to consider Bayer implications) of 1920 x 1080. Its Nyquist limit will be 1080 LW/ph, a.k.a. 540 LP/ph. And let's say it has no OLPF (so we easily allow aliasing frequencies to hit the sensor).

Now let's say we have fine hair detail in the image that is finer than the 540 LP/ph limit; call it 700 LP/ph. IIUC, that 700 LP/ph detail will only register as an alias at 380 LP/ph (2 x 540 LP/ph - 700 LP/ph). There will be no higher-than-Nyquist image information recorded by the sensor; it will only be recorded as an alias below Nyquist.

Is this correct? And please feel free to point out ANY errors I've made. Thanks much.


"Nowadays, A/D convertors work on a photosite by photosite basis, not on the waveform as a whole basis as was the case digitising analogue signals."

Perhaps he was talking exclusively about CCD's? Aren't they read line by line and use an off-chip A/D converter? This would also make sense as Canon and Sony up until very recently only made CCD pro video cameras.

Thanks again.

Dennis Dillon
March 10th, 2011, 04:13 PM
I see this thread has taken on more issues than just that the F3 matrix is easy to match to an EX3 matrix at 709. I can confirm that as well. We have had three weeks of using an F3 with an EX3 as well as a PDW-800. All matched very well. This is of course not comparing imager sizes and optical results. The point being, you can place any of these cameras together where they best suit the need. All EX3 and F3 HD-SDI outs went to a Nano at 50 Mb 422 and were dropped onto an optical disc via a U1. Sensitivity is the huge advantage of the F3, so we keyed to the 800 and ND'd the F3.
Still waiting for SR-1 and S-Log. This is when the F3 will stand apart from the above. That is when I will order another F3, both recording to SR.
I was in England and hooked up with Mike Tapa and purchased an adapter for my Zeiss ZFs. I have to say, on a quick test the Sony primes were just as sharp as the Zeiss lenses. After using both for a while now, it is really nice to have the larger focus rotation on the Sony primes. Focus was much more accurate. I also like the F-stop indicators being set off to the camera operator's side by Mike's adapter.

David Heath
March 10th, 2011, 04:51 PM
Is this correct? And please feel free to point out ANY errors I've made. Thanks much.
Looks spot on to me! I think you've got it.
Perhaps he was talking exclusively about CCD's? Aren't they read line by line and use an off-chip A/D converter? This would also make sense as Canon and Sony up until very recently only made CCD pro video cameras.
No. I feel you can consider four "dimensions" to the TV picture. It's a bit of a simplification, but......

Time, horizontal position, vertical position and brightness (or the electrical value corresponding to it). We'll ignore colour.

In any system, it's reasonable to consider time and vertical position as "digital" (or discrete) values, because of the frame by frame and line by line system.

In a tube camera, the horizontal position is inherently analogue by nature of the scanning process. In a chip camera, it can be seen as digital by nature of the discrete number of horizontal photosites.

As far as brightness goes, ALL systems are inherently analogue; the photosites individually read out an amount of charge corresponding to the incident light. That is what gets read into the A/D convertor, to give a digital value corresponding to the charge.

In position terms, neither horizontal nor vertical position is an "analogue" quantity that requires sampling. It's the positioning of the photosites in both directions that does that, and that's what defines spatial aliasing.

Gene Gajewski
March 10th, 2011, 05:04 PM
This is quite an interesting thread.

Having somewhat of a background in RF and audio digital signal processing, I still find it somewhat difficult to grasp the phenomenon of aliasing in optical sensors. But it is possible.

*Signal processing as most folks know it*

In terms of audio, things are quite clear: you are sampling a single sensor at a regular interval. Your data is a series of samples separated by time. Frequencies above Nyquist (half the sampling frequency) fold around and show up as unwanted alias frequencies (for an input between Nyquist and the sampling rate, alias frequency = sampling frequency - input frequency). All simple enough.
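A quick sanity check of that fold-around in the time domain (plain Python, arbitrary example frequencies): a 30 kHz tone sampled at 48 kHz produces exactly the same sample values as an 18 kHz tone, so once sampled the two are indistinguishable.

```python
import math

fs = 48_000           # sample rate in Hz, so Nyquist is 24 kHz
f_in = 30_000         # input tone above Nyquist
f_alias = fs - f_in   # expected alias: 18 kHz

for n in range(5):
    t = n / fs
    print(round(math.cos(2 * math.pi * f_in * t), 6),
          round(math.cos(2 * math.pi * f_alias * t), 6))
# each row prints the same value twice: the samples cannot tell the two tones apart
```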

*Pixellated sensor aliasing*

The situation here is quite different. We have many sensors, each one producing a single sample. The key point is that the *relationship* between the series of samples is not time but space. This means the sampling *interval* is based on pixel count (not time), thus the aliasing is spatially based, not time based.

However, the general techniques to anti-alias are the same. For a desired resolution you can:

1 - Increase the sampling rate.

In audio, this is the sampling frequency. In a pixellated optical sensor, it is the pixel count. If you use a sensor with greater than, say, 1920x1080 resolution, you will reduce aliasing in a 1920x1080 image. Simply put, this raises the Nyquist frequency and pushes aliasing further out of the desired range. (Of course, you still have to decimate the data to get the desired output resolution.)


2 - Decrease the resolution reaching the sampler to below Nyquist.

In audio, this can be done by modifying the sensor *before* the A/D stage, through either mechanical or electrical properties.

In a pixellated sensor, this is done by placing an optical low-pass filter in front of the sensor (i.e. before the sampling stage) - some sort of diffractive filter that begins the blur at around Nyquist.

Obviously, there are finer points and trade-off issues related to the implementation of either increased resolution or pre-sampling filtering (a conversation for some other day), but the basic issues are the same.
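A rough sketch of both remedies on a one-dimensional pattern (Python with NumPy; all numbers are illustrative assumptions, not any real camera). A 1100-cycle pattern is above the 960-cycle Nyquist limit of a 1920-photosite row, so sampling it directly creates a strong false pattern at 820 cycles; blurring it across roughly two photosites first (the OLPF's job) suppresses that alias, and so does sampling with twice the photosites, filtering digitally, and decimating back to 1920:

```python
import numpy as np

WIDTH = 1920       # output photosites across the frame
FINE = 1100        # chart detail in cycles per frame width (above the 960-cycle Nyquist limit)
OVER = 16          # dense grid standing in for the continuous optical image

x = np.arange(WIDTH * OVER) / (WIDTH * OVER)
scene = np.cos(2 * np.pi * FINE * x)                 # the image before any sampling

def sample(img, n_pixels, blur_pixels=0.0):
    """Average img over n_pixels equal photosites, optionally box-blurring first (crude OLPF stand-in)."""
    if blur_pixels:
        k = int(blur_pixels * len(img) / n_pixels)
        img = np.convolve(img, np.ones(k) / k, mode="same")
    return img.reshape(n_pixels, -1).mean(axis=1)

def amplitude_at(signal, cycles):
    """Amplitude of the component at `cycles` per frame width."""
    return np.abs(np.fft.rfft(signal))[cycles] / (len(signal) / 2)

direct = sample(scene, WIDTH)                    # no pre-filter
blurred = sample(scene, WIDTH, blur_pixels=2)    # ~2-photosite blur before sampling
print(round(amplitude_at(direct, 820), 2))       # ~0.54 - strong false detail at 820 cycles
print(round(amplitude_at(blurred, 820), 2))      # ~0.07 - alias largely suppressed

# Remedy 1: more photosites, then a digital low-pass and decimation down to 1920
dense = sample(scene, WIDTH * 2)                            # a 3840-photosite row sees 1100 cycles cleanly
smoothed = np.convolve(dense, np.ones(4) / 4, mode="same")  # simple digital low-pass
decimated = smoothed.reshape(WIDTH, 2).mean(axis=1)
print(round(amplitude_at(decimated, 820), 2))               # ~0.08 - alias also much reduced
```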

Something to consider: resolution in line pairs (MTF) is related to Nyquist, as both occur at half the sampling rate. Anybody who tells you their camera can resolve 1000 line pairs needs to have a sensor wider than 1920 photosites, and the glass to back it up with....

Alister Chapman
March 10th, 2011, 05:27 PM
"You can put frequencies in to the system that are greater than Nyquist, but you won't get such out - only sub-Nyquist aliases."

David, okay let me be sure I follow you. Let's assume a monochromatic sensor (so we don't have to consider Bayer implications) of 1920 x 1080. Its Nyquist limit will be 1080 LW/ph, a.k.a. 540 LP/ph. And let's say it has no OLPF (so we easily allow aliasing frequencies to hit the sensor).

Now let's say we have fine hair detail in the image that is finer than the 540 LP/ph limit; call it 700 LP/ph. IIUC, that 700 LP/ph detail will only register as an alias at 380 LP/ph (2 x 540 LP/ph - 700 LP/ph). There will be no higher-than-Nyquist image information recorded by the sensor; it will only be recorded as an alias below Nyquist.

Is this correct? And please feel free to point out ANY errors I've made. Thanks much.


I realise you've asked David to answer this, but I thought I'd try, although I have to admit this is pushing my knowledge to its limits. What you will see will ultimately depend on the alignment of the fine detail with the pixels on the sensor, so the answer is not clear cut. Going back to my original example, IF the detail lines up perfectly with the pixels you can resolve out to the full sensor resolution, but in the real world that's almost impossible to achieve. In practice what you can expect to see is the fine detail all the way up to the Nyquist frequency, plus some additional detail above Nyquist as well as instances of detail above Nyquist cancelling out or interfering with some of the detail below Nyquist. It's this mixture of additional detail plus cancelled-out detail that leads to the patterns we observe as moire and aliasing. The exact amount and structure will depend on the alignment or orientation between the pixels and the detail. This is why circular zone plates are so good at showing up aliasing: the circular pattern explores all pixel-to-detail orientations, and as all orientations are presented, the resulting aliases occupy all possible orientations, so the alias patterns are also circular. In addition, the location of the alias patterns on the zone plate tells us something about the pixel structure on the chip.
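To see why the zone plate is such a revealing test, here is a rough sketch (Python with NumPy and matplotlib, no particular camera modelled): generate a circular zone plate whose detail gets finer towards the edges, then sample it on a much coarser grid with no low-pass filtering. The false rings that appear in the coarse version are the circular alias patterns described above, and their layout is set entirely by the sampling grid:

```python
import numpy as np
import matplotlib.pyplot as plt

N = 1024                                           # dense grid standing in for the optical image
y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
zone = 0.5 + 0.5 * np.cos(400 * (x**2 + y**2))     # zone plate: spatial frequency rises with radius

coarse = zone[::4, ::4]    # sample every 4th point: a crude 256x256 "sensor" with no OLPF

fig, (a, b) = plt.subplots(1, 2, figsize=(10, 5))
a.imshow(zone, cmap="gray")
a.set_title("zone plate (dense grid)")
b.imshow(coarse, cmap="gray")
b.set_title("coarsely sampled, no low-pass: circular aliases")
for ax in (a, b):
    ax.set_xticks([])
    ax.set_yticks([])
plt.show()
```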




"Nowadays, A/D convertors work on a photosite by photosite basis, not on the waveform as a whole basis as was the case digitising analogue signals."

Perhaps he was talking exclusively about CCD's? Aren't they read line by line and use an off-chip A/D converter? This would also make sense as Canon and Sony up until very recently only made CCD pro video cameras.

Thanks again.

These days CCDs are read out on a pixel-by-pixel basis. If they weren't, it would be impossible to have pixel masking for hot pixels or pixel-to-pixel offsets for black level etc. In addition, modern A/D converters can easily operate at many times the frequency of the equivalent analogue signal, so the Nyquist frequency of the A/D should not limit resolution.

Gene Gajewski
March 10th, 2011, 05:45 PM
Alister's last post exemplifies the general information I posted about sampling and aliasing:

He notes that in the appearance of moire, the "amount and structure will depend on the alignment or orientation between the pixels", and this should reinforce that optical sampling issues are *spatially* based.

Also, "you can expect to see is the fine detail all the way up to the Nyquist frequency, plus some additional detail above Nyquist as well" is an indication of the fact that (optical) low-pass filters with a sharp falloff above Nyquist are difficult to make, and thus some moire will occur. The choice of filter, its falloff and its pass-through loss are all part of that wonderful science of filtering - in this case optical.

One last thought... any additional detail above Nyquist *is* always aliasing. This is why, for a given desired resolution, filter designers usually like to start a filter's cutoff slightly below Nyquist. However, since the filter is not a brick wall, some aliasing does get through. Increasing the pixel count makes a filter more effective for a given output resolution, since the aliasing occurs later in the filter's falloff and so is more attenuated.

Then again, some aliasing can be pleasing and may appear as an (artificially) sharper image.

Filter choice and characteristics are an *art*, not a science.

Alister Chapman
March 10th, 2011, 06:52 PM
I was looking at the charts I have on my laptop for some examples of how orientation affects aliasing. I don't have many as I'm away from home, but I did start looking at the chart I used when I did the MTF50 tests on the F3, and I found what I was looking for, but in a slightly unexpected way.

In the horizontal and vertical resolution scales you see nothing unusual for a Bayer sensor; you can see aliasing in the 800 and 900 LW/PH regions. But take a look at the 45 degree resolution wedge that I've circled in green. You can very clearly resolve the lines out to the limit of the wedge at 1000 LW/PH and there is no aliasing. This hints that there is greater sampling diagonally than horizontally or vertically. I'm a little surprised to see this and am wondering if it hints at a 45 degree tilt to the Bayer matrix as used on some of the Sony ClearVid sensors. Anyone else care to comment?

Gene Gajewski
March 10th, 2011, 07:15 PM
45 degree resolution....

Sounds like it may be algebraic:

Maybe something like the square root of the sum of the squares of the vertical and horizontal resolution, but I won't swear to it without further research...

For a 1920x1080 sensor, this would work out to a 45 degree resolution of around 1100 (roughly the square root of 960 squared plus 540 squared), *before* considering the low-pass filter, so it sounds about right, in line with Alister's observation.

Ray Bell
March 10th, 2011, 08:54 PM
Well, I guess if no one else is going to say it, I will have to...

The F3, as nice as it is, is too expensive... there..

The F3 should have replaced the EX3 and stayed at that price point...

As far as I can tell, Red seems to be establishing the price points of some of the new hardware coming out in the next gen of cameras... and Red is not cheap. Good quality, but you pay a premium for every Red item, from the top all the way down to the Red t-shirt/hat. It's good stuff, overpriced because we all want the higher quality equipment. Red has set the price point goals; the other companies will follow suit....

I think this year we are all going to see a trend, the trend that everything is going to cost an arm and a leg if you want it... again, the F3 is too expensive...

David C. Williams
March 10th, 2011, 11:16 PM
Well, I guess if no one else is going to say it, I will have to...

The F3, as nice as it is, is too expensive... there..

The F3 should have replaced the EX3 and stayed at that price point...

The EX3 and F3 are aimed at completely different markets; why would you want one to replace the other?

You seem to have some great misconceptions about what the F3 is. No one here is saying it's too expensive because no one here thinks that.

Seriously, if you can't make your money back and a lot more with an F3, you're choosing the wrong camera for your market.

David Heath
March 11th, 2011, 04:29 AM
In practice what you can expect to see is the fine detail all the way up to the Nyquist frequency, plus some additional detail above Nyquist as well as instances of detail above Nyquist cancelling out or interfering with some of the detail below Nyquist.
I know why you're saying that (it's a literal translation of a zone plate) but it's not quite true.

Any frequency above Nyquist will turn into aliasing, period. If the input frequency is Nyq.+X, the output frequency will be Nyq.-X.

The other feature of aliases is that if you look at a fine block of detail on a res chart, then if it's aliasing it will have two features. The number of lines in the image will be less than are actually on the chart. Pan across, and the lines will appear to move in the OPPOSITE DIRECTION to the pan! (And it's this fact that coders don't like, and why so much attention is paid to aliasing.)
But take a look at the 45 degree resolution wedge that I've circled in green. You can very clearly resolve the lines out to the limit of the wedge at 1000 LW/PH and there is no aliasing. This hints that there is greater sampling diagonally than horizontally or vertically. I'm a little surprised to see this and am wondering if it hints at a 45 degree tilt to the Bayer matrix as used on some of the Sony ClearVid sensors. Anyone else care to comment?
It's normal, if difficult to explain. The best I can do is to say that anything on a diagonal of a zone plate can be resolved into x,y components. Each of those will have a maximum value set by Nyquist. But the resultant will be greater than either by itself (a vector sum), and the maximum must then come for 45-degree diagonal lines, with a theoretical limit of 1.414x Nyquist (the square root of 2, by Pythagoras).

It's getting widely off topic, but it brings up an interesting point relating to the use of pixel shifting in both horizontal and vertical directions. If you look at a zone plate for something like an HVX200, you get the opposite effect - the diagonal resolution is LESS than the horizontal or vertical! You never get anything for nothing - Peter is robbed to pay Paul, and diagonal resolution is compromised for horizontal and vertical.

Getting back firmly to topic, then technicalities aside, the results for the F3 are broadly in line with what I'd expect - and overall pretty good. Slightly more aliasing than an EX3, but made up for in other ways (low noise etc.)

I don't argue with Ray Bell's point - expense. I think a lot of people are feeling that the F3 is too expensive, whereas the AF101 just isn't good enough quality-wise. Practically, that's why March 23rd promises to be such an important date, when the details of the large-format NXCAM get revealed. A lot of fingers are crossed that it will give much of the quality of the F3 at the price of the AF101. We'll have to wait and see.

Steve Kalle
March 11th, 2011, 12:01 PM
David & Ray,

I also agree, which is why I started a thread comparing the F3 with an Epic-S in a ready-to-shoot setup. If you want to record greater than 422, then the extra gear makes the F3 much more costly than an Epic-S. This doesn't mean I want the Red, because I am already invested in XDCAM EX, so my investment would be just the camera, lenses, mattebox and FF.

But for the person who is not invested and shoots TVCs and/or indie, the Red looks mighty appealing.