View Full Version : Reasons to go for C300 over RED Scarlet X?
Buba Kastorski November 25th, 2011, 01:45 PM The only differences between Epic and Scarlet are frame rate, 5k video, window size vs. res, etc., but the image "flavor" and quality of the two cameras are identical.
Well there you go, that's reason #1 to buy: "Epic" picture for a quarter of the price, and owning a camera that huge Hollywood blockbusters were shot with. Most likely I will never shoot anything bigger than a "Friendly Lu Lu Spa" commercial or "Giuseppe and Maria Wedding Highlights", but just knowing that I have that tool, and that I might shoot something better than that, is great. AND the picture is amazing,
(please don't tell me about the last Sundance and that 7D movie - i know)
Am I buying a dream? Maybe. But I promise, as soon as the C300 is available from any rental house I will put it side by side with the Scarlet X and shoot "donkey balls" out of both of them, and the 1D, and 5D, and EX1, looking for reasons to go for the C300,
unless of course someone else will do that before me :)
David Heath November 25th, 2011, 02:56 PM But one thing to consider is that the Bayer process was in part developed to overcome issues created by the use of a 2x2 sampled CFA (colour filter array) as used in the C300. Bayer has the big advantage in that it can help compensate for the leakage and cross-colour contamination caused by the imperfect colour filters in a CFA.........
As always, it rarely comes down to a single factor, and it's the same with Bayer versus direct read (as here). It's impossible to say flatly that "x is better than y".
In addition to Alister's comments, it's also worth pointing out that for given sensor dimensions (let's assume 100x100) true deBayering will squeeze a higher luminance resolution out. To a first approximation, typically about 80x80 in that case for luminance and 50x50 for chrominance. Hence the F3 sensor dimensions - such that 80% gives roughly 1920x1080. The C300 approach will give 50x50 for both luminance and chrominance.
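As a rough sketch of those approximations (the 0.8 and 0.5 factors are the first-approximation figures quoted above, not exact values, and the function names are mine):

```python
# Rough effective-resolution estimates for the two readout schemes.
# The 0.8 (luma) and 0.5 (chroma) factors are first approximations.

def debayer_resolution(width, height):
    """Approximate luma and chroma resolution after true deBayering."""
    return (round(width * 0.8), round(height * 0.8)), \
           (round(width * 0.5), round(height * 0.5))

def direct_read_resolution(width, height):
    """2x2 direct read: every 2x2 block becomes one RGB output pixel."""
    return (width // 2, height // 2), (width // 2, height // 2)

# F3-style sensor: ~2400x1350 photosites deBayered to roughly 1920x1080 luma
print(debayer_resolution(2400, 1350))       # ((1920, 1080), (1200, 675))

# C300-style sensor: 3840x2160 photosites read directly to 1920x1080
print(direct_read_resolution(3840, 2160))   # ((1920, 1080), (1920, 1080))
```

Note how the direct-read scheme gives equal luma and chroma resolution, while deBayering trades coarser chroma for finer luma.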
So does that mean I'm saying the C300 sensor arrangement is worse? Well no, and this is where it starts to get complicated.
Direct Read is far simpler to implement. And that means simpler (hence cheaper) electronics, and crucially lower power consumption. The key is in coupling it with a sensor of optimum dimensions - which the C300 does.
What I also foresee is a 2nd generation camera which ADDITIONALLY will deBayer the 4k sensor (or record it RAW) to give a 4k output. (OK, it'll be only about 80% of the res, but that's what most people refer to as 4k.) Or just switch it to 1080 direct read when that's most appropriate. The F3 sensor is less versatile - it has to be deBayered, and is then 1080, end of story.
....2x2 CFA has been around since the 90's yet is rarely used.
I think you may be surprised how much it's being used, but not mentioned. It's one of the better ways of deriving video from a still sensor with a high pixel count, and is exactly how I was told nearly a year ago that the AF100 does it. The difference from the C300 in these cases is that alternate 2x2 blocks are omitted horizontally and vertically, so only one block in four actually gets read. The sensor dimensions for the AF100 seem to be 4700x2644 (for 16:9 cropping), so 2350x1322 2x2 blocks. Read out on an alternate basis, that gives you 1175x661 blocks (and hence true resolution) - pretty well exactly what has been measured. It's a decent way of simply getting video from a designed-for-stills sensor, but it can't compete with a sensor specifically made for video.
The AF101 is the one I have the figures for, but I understand it's far from the only case of the principle. It's an acknowledged technique for video from stills cameras. (And far better than earlier techniques that skipped whole lines asymmetrically.)
David Knaggs November 25th, 2011, 06:43 PM ... Scarlet output = Epic output,
... but the image "flavor" and quality of the two cameras are identical.
What caused my reservations on this point were posts where Red One MX owners who had just received their early Epic-Ms were commenting on the improved images of the Epics, even though both cameras had MX sensors. The consensus eventually put it down to the improved electronics and signal processing of the Epic.
The fact that they are reducing a lot of the electronics and boards for the Scarlet (not to mention using sensors which didn't make the specs for the Epic) is what is currently giving me some doubts and reservations regarding the image quality and flavor of Scarlet and Epic being an exact match.
As I mentioned in an earlier post, I'm a big fan of impromptu-style shooting (by someone who really knows how to set up the camera, expose properly, etc.) with minimal or no grading as being a more genuine demonstration of the camera itself. The clip below is a perfect example. It's from an Epic, where an excellent shooter was standing on the beach and was waiting for 5 minutes for the rest of the crew to show up. So he took a few minutes of impromptu footage in the meantime. Admittedly, it's slow-motion (overcranked) - which the Scarlet can't do - but if someone can post Scarlet footage which matches the flavour and quality of the actual images in this clip, I'll be 100% sold on Chris's statement. And I sincerely hope that I will be!
RED EPIC and a spare 5 minutes on Vimeo
Regarding the C300, I'm most excited by the fact that it's a 4K sensor oversampling to 1080p. I'm a big fan of oversampling and I reckon that the C300 is likely to make sensational-looking images for this reason alone. Most posters in this thread don't seem to have noticed or mentioned this fact. My initial interest in the Red One all those years ago was the fact that you could take 4K images and oversample them into a 1080p output.
Well, the C300 does this without the massive files, extra computing power and extra computing time. It's a simple 50Mbps 4:2:2 onto CF cards, with a battery which (per Jim Martin) runs for 7 hours and costs $150. The recent price drop of the C300 (apparently?) makes for an even more compelling case.
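For anyone weighing card costs, that 50Mbps figure works out to quite manageable storage (back-of-envelope only - container overhead and audio are ignored):

```python
# Back-of-envelope storage math for a 50 Mbps codec
# (video payload only; container overhead and audio ignored).

def gb_per_hour(mbps):
    """Gigabytes of storage consumed per hour of recording."""
    return mbps * 1e6 * 3600 / 8 / 1e9

print(gb_per_hour(50))   # 22.5 -- so a 64 GB CF card holds roughly 2.8 hours
```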
However, the final image quality and "flavor" of the C300 won't be determined by oversampling alone. How good are its electronics and how good are the gamma controls and what sort of scene files or Picture Profiles are able to be constructed? Not to mention Alister's concern that the C300's sensor array might be prone to artifacting.
That's why I hope that Canon release a few C300s "into the wild" sooner rather than later. So that we can see what some good shooters can do with this camera. Without the colorists.
Don Miller November 26th, 2011, 10:28 AM I don't see how the C300 is oversampling, even if the term is just an analogy.
Tim Le November 26th, 2011, 11:20 AM 1) Tim, you don't need a side handle or module to power a brick. That is not accurate.
That is true. I should have been more precise in my wording. I was referring to the internal batteries and the RedVolts do require the side handle or a module. My point is the C300 has an internal battery bay and Scarlet/Epic does not. Therefore, the C300 has an additional size advantage over Scarlet when you consider this internal battery bay, which can power the C300 for 3 hours. In the behind-the-scenes footage of the C300 films, I noticed they used the internal battery very often. But this could be due to the camera needing 8.4V in and they didn't want to deal with stepping down the higher voltage bricks.
There are pros and cons to 2x2 CFA and Bayer. 2x2 CFA has been around since the '90s yet is rarely used. A really good example of the issues that can be caused when not using Bayer is Sony's F35, which has 2 pixels for each colour in a stripe array, yet has some pretty bad aliasing artefacts.
The C300's green photosites are offset half a photosite vertically and horizontally, which "cancels" out aliasing for that channel, according to Larry Thorpe. Does anyone know if other 2x2 CFA non-debayering sensors do that? I keep wondering what are the downsides to Canon's sensor design because it seems so simple and elegant.
The whole lack of a 10 bit output does ring alarm bells as to what bit depth the DSP is working at.
Canon says the DSP is working at 12 bits for the red and blue and 13 bits for the green. All the non-linear processing happens at that bit depth.
Emmanuel Plakiotis November 26th, 2011, 12:05 PM Without being an expert, I think that the Bayer filter addressed the technology inadequacies of a bygone era. The most modern sensors (F65, C300) are not Bayer and I don't think this trend is going to fade.
Many have pointed out that this new sensor readout "impersonates" a 3-chip camcorder. I wonder, isn't it more accurate to say a 4-chip camcorder?
Tim Le November 26th, 2011, 12:50 PM The C300's sensor still has a classic Bayer pattern. It's just not doing any de-Bayering to reconstruct the image. There are exactly 1920x1080 red, 1920x1080 blue and effectively 2x 1920x1080 green photosites. There are twice the number of green photosites because green wavelengths account for most of the visual information.
The chip has four parallel readouts to read out these photosites directly: one each for red, blue, green1 and green2. The two greens are then combined so you end up with 4:4:4 color sampling, much like a 3-chip camcorder. At least that's the way I understand it.
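A minimal numpy sketch of that readout, as I understand it (the block layout and the averaging of the two greens are my assumptions - Canon hasn't published the exact combining step):

```python
import numpy as np

def direct_read_2x2(mosaic):
    """Simulate the direct readout described above: each 2x2 Bayer block
    maps straight to one RGB output pixel, with no deBayer interpolation.

    Assumed block layout (illustrative only, not Canon's documented one):
        G1 R
        B  G2
    Averaging the two greens is also an assumption.
    """
    g1 = mosaic[0::2, 0::2]
    r  = mosaic[0::2, 1::2]
    b  = mosaic[1::2, 0::2]
    g2 = mosaic[1::2, 1::2]
    g = (g1 + g2) / 2.0
    return np.stack([r, g, b], axis=-1)  # (H/2, W/2, 3): full 4:4:4 RGB

# A 3840x2160 photosite mosaic yields a 1920x1080 RGB frame:
frame = direct_read_2x2(np.zeros((2160, 3840)))
print(frame.shape)  # (1080, 1920, 3)
```

The key point the sketch shows: every output pixel gets its own measured R, G and B, which is why the result is 4:4:4 like a 3-chip camera, with no reconstruction step at all.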
Chris Hurd November 26th, 2011, 02:16 PM Many have pointed out that this new sensor readout "impersonates" a 3-chip camcorder. I wonder, isn't it more accurate to say a 4-chip camcorder?
Sorry but no, that is not accurate. It impersonates a 3-chip design.
A three-chip camcorder samples green the same way as the C300 does. In a three-chip prism block, 25% of the incoming light goes to red, 25% to blue and 50% to green. Our eyes are more sensitive to the color green, and the green channel carries all of the luminance (brightness) info.
David Heath November 26th, 2011, 03:33 PM A three-chip camcorder samples green the same way as the C300 does. In a three-chip prism block, 25% of the incoming light goes to red, 25% to blue and 50% to green.
I have to disagree. In a 3 chip design, then all the red, all the green, and all the blue light get sent to their respective chips by the beamsplitter. So for each location, the relevant photosites get to respond to all the light corresponding to their filtered colour. Hence 100% of red light to the red channel, 100% of green light to the green channel etc.
With a Bayer filter, the individual photosite colour filters perform the spectrum separation, so three adjacent photosites (R,G, and B) are all that's really needed in principle. The second green photosite will just have the main effect of improving the noise figure in the green channel (and also improving spatial characteristics), and since that is what luminance is mostly derived from, that's no bad thing.
Don Miller November 27th, 2011, 09:45 AM There must be loss of photons in a beam splitter. Doing RGBG on a single sensor maintains the Bayer advantage of 50% green, while avoiding the cost of a beam splitter. In the images released so far the Canon doesn't seem to have any color artifacting issues at all, as far as I can see. So perhaps it's now possible to build very clean color filters onto CMOS. Certainly the large sensor size, compared to 1/3", must help in construction. I'm curious how anti-aliasing is handled in the C300.
The C300 has more green sensors than the F3 has total sensors. That should make for more real-world detail. The green readouts may be combined on chip so that the processor sees each pair as a single value. My understanding is that CMOS, unlike CCD, can include on-sensor functionality.
I assume Canon and Sony can build more sophisticated sensors than Red or Arri can source. It's a shame we can't see something close to a "raw" readout from the C300 sensor.
I expect with Red too much credit is given to "the brain" and not enough to the image processing that runs on the PC. In many ways Red and Canon have inverse strengths.
It's likely Scarlet output will be indistinguishable from Epic. It would probably cost Red more to cheapen up Scarlet than to build it with Epic engineering. They have plenty to do making their products the best they can be rather than engineering less image quality into Scarlet.
And besides, what does "the brain" do? In a Red, after the A/D conversion it only runs a compression scheme and dumps the bits out. None of that messy business of actually making a video file. Perhaps the monitor portion of a Scarlet won't be as good or sophisticated as an Epic's. But it doesn't seem there's a lot of opportunity for output quality to vary between the low and high models.
Epic will now only get the very best of the sensors. But they aren't going to be putting junk into Scarlet. They would hurt their brand doing that.
Tim Le November 27th, 2011, 11:29 AM It's likely Scarlet output will be indistinguishable from Epic. It would probably cost Red more to cheapen up Scarlet than to build it with Epic engineering. They have plenty to do making their products the best they can be rather than engineering less image quality into Scarlet.
And besides, what does "the brain" do? In a Red, after the A/D conversion it only runs a compression scheme and dumps the bits out. None of that messy business of actually making a video file. Perhaps the monitor portion of a Scarlet won't be as good or sophisticated as an Epic's. But it doesn't seem there's a lot of opportunity for output quality to vary between the low and high models.
Epic will now only get the very best of the sensors. But they aren't going to be putting junk into Scarlet. They would hurt their brand doing that.
I agree. Since RED cameras are just outputting and compressing raw data, the results should theoretically be identical. Graeme Nattress over at Reduser has said the image sensor is identical in Epic and Scarlet. They're not binning those chips--it's just the ASICs that are being binned. The consequences seem to be pretty much what they stated: there are limits in resolution and frame rates. There is one new limitation: HDMI and HDSDI will not work simultaneously on Scarlet. Epic has the same limitation now, but it's a firmware issue that might be resolved early next year. But it sounds like with Scarlet it's a processor limitation, which makes sense.
David Heath November 27th, 2011, 04:31 PM There must be loss of photons in a beam splitter.
Theoretically, no. (And practically, to a first approximation, also no.) If a photon of wavelength corresponding to "red" enters, it goes down the route to the sensor dedicated to the red channel. Same for green, blue.
With any single sensor device, if a photon of "red" light hits a red photosite it passes through the filter and registers, if it hits a blue or green photosite, it gets absorbed in the dye and hence is lost for photographic purposes.
Hence a 3-chip device is inherently more sensitive than any current single-chip one of the same size format.
In the images released so far the Canon doesn't seem to have any color artifacting issues at all as far as I can see. So perhaps it's now possible to build very clean color filters onto CMOS. ........... I'm curious how anti-aliasing is handled in the C300.
I wouldn't expect colour artifacting as the photosites for each output pixel are so close together. And what we don't know is what low pass filter Canon may use.
The C300 has more green sensors than the F3 has total sensors. That should make for more real world detail.
Not necessarily - it depends on how they get dealt with. If corresponding green values just get added together, that will improve sensitivity, but not detail. And remember, what is important is luminance detail. Because of the formula for deriving luminance (Y=0.3R+0.59G+0.11B, if I remember correctly :-)), it's true to say that the green channel is most important. But most is not the same as all.
The direct readout system used in the C300 ignores spatial differences between corresponding R, G, B photosites. In true deBayering, those spatial differences are not ignored - and blue and red photosites also contribute to luminance resolution. Not as much as green, but......
And that's why the more you go into it, the more difficult it becomes to think in terms of the headline numbers. I can't pretend to understand the most subtle points, but "more green sensors than the F3 has total sensors" is not necessarily true.
Maybe - maybe not.
Alister Chapman November 28th, 2011, 02:57 AM Sensor theory is one thing, real world performance is another. Prism designs are ver efficient at splitting the light into the R, G and B wavelengths. There is very little loss and very little leakage between colours. But the light has to pass through a very thick piece of glass and this then causes issues of it's own (flare being just one). Bayer is clever because it relies on the fact that most real world scenes don't contain discreet primary colours and 2x2 CFA is clever because it does not need to be de-bayered. Each system has advantages and each has disadvantages. For example bayer is known to alias in some colour frequencies due to sub sampling and CFA may have cross colour issues as there may be little compensation for the overlaps in the colour filters.
In the end, I'm sure all of the current crop of cameras will produce great images. The skill will be in exploiting the strengths of the overall package. Some have simpler workflows, some have better dynamic range but maybe at the expense of a more complex workflow. Thus the "package" becomes more important than just the sensor.
Don Miller November 28th, 2011, 10:38 AM ...............
Not necessarily - it depends on how they get dealt with. If corresponding green values just get added together, that will improve sensitivity, but not detail. And remember, what is important is luminance detail. Because of the formula for deriving luminance (Y=0.3R+0.59G+0.11B, if I remember correctly :-)), it's true to say that the green channel is most important. But most is not the same as all.
The direct readout system used in the C300 ignores spatial differences between corresponding R, G, B photosites. In true deBayering, those spatial differences are not ignored - and blue and red photosites also contribute to luminance resolution. Not as much as green, but......
And that's why the more you go into it, the more difficult it becomes to think in terms of the headline numbers. I can't pretend to understand the most subtle points, but "more green sensors than the F3 has total sensors" is not necessarily true.
Maybe - maybe not.
We can count photosites. The C300 has many more green. But you're right in that the total area (and sensitivity) should be equal between the two cameras. However, the C300 photosites are perfectly distributed. Calculating spatial differences makes up some of that difference in the F3, but it still has to have lower true resolution. You can't measure much less and get back to the same resolution.
Debayering helps with a lot of bad pixels, so I think the iPhone will debayer for many more generations. I think pro video debayers because that was historically the way to make less expensive cameras. It's remarkable what camera companies have been able to do with post-capture processing. But all other things being equal, starting with an image that resembles reality has to be better than what comes off a traditional Bayer sensor that is less than 4x the resolution of the final output.
I think the F3 debayers because that is what fits in Sony's low mid tier product line. I think Red debayers because that's the technology they could buy. I think the C300 doesn't debayer because they are now technically capable of doing it right, and don't have Sony's product cannibalization problem.
Of course, as a large corporation, Canon had to mess it up by squeezing it down to 50 Mbps and 8-bit. Which for me makes the Scarlet more interesting. Maybe shooting 4K for 1080p at somewhat higher than normal compression. I just need to get over owning 10-12 batteries.
I have no illusion that Epic or Scarlet will be meaningfully upgradable. The electronics aren't good enough. These are 3 year purchases. But that said I do agree with Alister that all of these products are likely very good. We've reached something of a golden age in lower cost high end DV. If all TV for the rest of the decade was shot with the F3 no viewer would request higher quality. I'm not even sure that anyone cares if projected video is more than excellent 1080p. My friends and family usually don't see a difference between cable SD and HD.
Peter Moretti November 28th, 2011, 11:35 AM FWICT, for all intents and purposes, it's a quad-HD Bayer pattern sensor (photosite color ratio of 1R:1B:2G).
It's not deBayered because the multiple readouts allow a full HD image to be reconstructed by simply combining the 4 photosites into one pixel. The F35 does a similar thing, only using a vertical stripe pattern, and it doesn't sample G twice.
What's left out in all this is that this is a very inefficient use of photosites and pixels. A good deBayering algorithm would increase resolution by quite a bit. But then you'd need a higher than 1080p frame size for the recorded image.
David Heath November 28th, 2011, 12:50 PM We can count photosites. The C300 has many more green. ........ You can't measure much less and get back to the same resolution.
Ah, but you're not taking into account the colour element of the subject. It helps to think of "input photosites" and "output pixels". And an output pixel has three values - one for luminance, and two for the two colour difference signals.
It's pretty easy to work out (to a first approximation) what will happen in the C300 case. Four "input photosites" go directly to make up one "output pixel" - end of story. Luminance resolution will be 1920x1080 - same as from a 3-chip system.
With deBayering it's different. If you deBayered the C300 chip, you firstly go to a 3840x2160 "output matrix" - each pixel having the three values for luminance and two chrominance. Here, for luminance, each "output pixel" will have the luminance value of its corresponding photosite - plus a percentage of the luminance of neighbouring pixels. The clever part is how the weighting gets done. That means a sort of averaging - which will mean the output luminance resolution must fall short of 3840x2160. But it will be far better than 1920x1080. The chrominance values for each "output pixel" must also be calculated - and it's easy to work out that this must be coarser than for luminance - hence a function of deBayering is better luminance resolution than chrominance. Just like the human eye!
So the next question is, for a chip that's going to be deBayered, how big does it have to be to match one read out in the same way as the C300's? For luminance, the answer is "about 25% bigger" in each direction, so nominally about 2400x1350. Now isn't that strange!? Very close to what the F3 actually is! Clever people, these engineers - put the right numbers into the number cruncher, and it's pretty obvious what they will say should be done!
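To illustrate the "sort of averaging" point, here's a deliberately naive bilinear deBayer sketch (an RGGB layout and uniform 3x3 averaging are assumed for simplicity; real algorithms use much smarter, edge-aware weightings):

```python
import numpy as np

def bilinear_debayer(mosaic):
    """Naive bilinear deBayer for an assumed RGGB mosaic
    (R at even/even, B at odd/odd, G elsewhere).

    Each output channel value is the average of the same-colour samples
    in the surrounding 3x3 neighbourhood -- the averaging that keeps
    luma resolution below the raw photosite count.
    """
    h, w = mosaic.shape
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)
    out = np.zeros((h, w, 3))
    for ch, mask in enumerate((r_mask, g_mask, b_mask)):
        plane = np.where(mask, mosaic, 0.0)
        weight = mask.astype(float)
        # Sum known samples (and their count) over each 3x3 window;
        # np.roll wraps at the edges, which is fine for a demo.
        num = sum(np.roll(np.roll(plane, dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        den = sum(np.roll(np.roll(weight, dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        out[..., ch] = num / den
    return out

# Sanity check: a flat grey mosaic reconstructs to flat grey exactly.
flat = bilinear_debayer(np.full((8, 8), 0.5))
print(np.allclose(flat, 0.5))  # True
```

The sketch makes the trade visible: you get a full-resolution output grid, but every value is a neighbourhood average, so true resolution lands somewhere below the photosite count - hence the ~80% rule of thumb.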
It's why I quite agree with Alister when he predicts that for resolution at any rate, there won't be a lot to choose between the cameras. I do wait to see what the aliasing will be like for the C300 - especially for chroma, and especially out of band. I'm not going to try and predict that one..... :-)
Emmanuel Plakiotis November 28th, 2011, 12:54 PM What's left out in all this is that this is a very inefficient use of photosites and pixels. A good deBayering algorithm would increase resolution by quite a bit. But then you'd need a higher than 1080p frame size for the recorded image.
It might be inefficient resolution-wise, but in every other respect it's optimal. From what I've read so far it's not a bad decision. Which brings back the haunting question: why did Canon take so much pain to create a superb color image and then not allow the end user to take full advantage of it...
Tim Le November 28th, 2011, 01:47 PM It might be inefficient resolution-wise, but in every other respect it's optimal. From what I've read so far it's not a bad decision. Which brings back the haunting question: why did Canon take so much pain to create a superb color image and then not allow the end user to take full advantage of it...
I agree, Canon's method seems to be optimized for an HD signal, and the C300 is only an HD camera. De-Bayering has disadvantages too, such as reconstruction errors and the computational power necessary to do the de-Bayering. Part of the reason the C300 uses less power is that it doesn't need to de-Bayer. Less power means less heat, so the cooling system can run silently. Also, the C300's DIGIC DV III chip is expecting direct color readouts like those from the 3-chip sensors in the XF305, and the C300's sensor emulates this. It really was the best, most elegant solution for Canon, IMO.
Emmanuel, I think you're referring to the 8-bit limitations? The reason is just a practical one: Canon did not have a DSP chip that could handle a 10-bit baseband at the time the C300 was developed. All they had was the DIGIC DV III. However, this chip does process the sensor data at 12 and 13 bits before being conformed to 8-bits. We have to remember Canon isn't in the same situation as Sony or Panasonic, who have an established line of CineAlta and Varicam cameras that have the 10 bit infrastructure already developed.
Peter Moretti November 28th, 2011, 08:15 PM Another reason for 8-bit is that there is a trade-off in terms of resolution quality and color quality at work. FWIU, at low bit rates (and 50Mbps is probably on the border of this characterization), it's better to use those bits for resolution than for color depth.
Now it's true that when DR is large, 8 bits can lead to more color banding. But even XDCAM-EX, which is 35Mbps 4:2:0 8-bit, can accurately record the expanded range that the Hypergammas provide. Now it might crap out w/ S-Log or whatever the F65 spits out.
But I don't believe that C-Log is as aggressive a log curve as is S-Log, since Vincent Laforet and his editor said that C-Log looks good w/o using a LUT or adding a color correction layer on top of it.
Don Miller November 29th, 2011, 08:22 AM If 8 million photosites is too much, Canon could have gone with a less dense sensor and the XF100 chipset. Perhaps we'll get that in a C100.
It is interesting that in these three cameras - C300, F3 and Scarlet - we have three of the current best sensor designs for producing a 1080p file. We should be able to learn something from that spectrum of products.
Emmanuel Plakiotis November 29th, 2011, 09:56 AM Emmanuel, I think you're referring to the 8-bit limitations? The reason is just a practical one: Canon did not have a DSP chip that could handle a 10-bit baseband at the time the C300 was developed. All they had was the DIGIC DV III. However, this chip does process the sensor data at 12 and 13 bits before being conformed to 8-bits. We have to remember Canon isn't in the same situation as Sony or Panasonic, who have an established line of CineAlta and Varicam cameras that have the 10 bit infrastructure already developed.
Tim,
You believe the DSP is to blame for not having 10bit from the HDSDI?
Tim Le November 29th, 2011, 10:06 AM Tim,
You believe the DSP is to blame for not having 10bit from the HDSDI?
Yes, that's what Larry Thorpe says in this interview:
HD Magazine - HD Mag - Canon's Larry Thorpe on the C300's design andorigins (http://www.definitionmagazine.com/journal/2011/11/9/canons-larry-thorpe-on-the-c300s-design-and-origins.html)
At about 3:10 he talks about why it can't do 10-bits, but he also says in the future Canon will move from 8-bits. Lots of other interesting tidbits in that interview, as well.
David Heath November 29th, 2011, 01:23 PM Another reason for 8-bit is that there is a trade-off in terms of resolution quality and color quality at work......... it's better to use those bits for resolution than for color depth.
There is also a trade off between bitdepth and compression. (For the same data rates.)
And banding can be brought on by too much compression, as well as over manipulation with low bit depth.
And in the case of bit depth, it's a direct correlation: you'd have to up the bitrate by 25% for comparable compression. (All other factors kept the same.)
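That 25% is simply the ratio of the bit depths (a trivial sketch; 50 Mbps is used only as the familiar example from this thread):

```python
# Raw (pre-compression) video data scales linearly with bit depth, so
# holding the compression ratio constant, moving from 8-bit to 10-bit
# needs 10/8 = 1.25x the bitrate.

def equivalent_bitrate(base_mbps, old_depth, new_depth):
    """Bitrate needed at new_depth for the same compression ratio."""
    return base_mbps * new_depth / old_depth

print(equivalent_bitrate(50, 8, 10))   # 62.5 -- Mbps for 10-bit at the same ratio
```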