why 1440x1080
Collyn Tabor January 5th, 2005, 03:47 PM We are looking at using a couple of FX-1s but are concerned about the 1440x1080. What is the deal here? Do the cameras output at 1920x1080, or do you just have a blank strip on each side? Also, 1440x1080 is 4:3, so how can Sony say the camera is 16:9? Line doubling? Stretching?
Collyn
John Gaspain January 5th, 2005, 03:57 PM I have been wrong before, but this is my best guess.
I was wondering the same thing and concluded that the pixel aspect ratio is 1.333 to 1, therefore giving it 16x9. The pixels are rectangular, not square as you would assume.
Hope this helps.
Barry Green January 5th, 2005, 05:03 PM John is correct; the FX1 uses wide rectangular pixels.
HDV records at 1440 x 1080, but the component output up-rezzes so you get a proper 16x9 image.
Actually, just about all HD recording systems record at less than 1920. HDCAM uses 1440 x 1080, and DVCPRO-HD records at 1280x1080.
Collyn Tabor January 5th, 2005, 05:16 PM The brochure I just downloaded from Sony's pro site says that the 730 and 750 both have 1920 x 1080 effective picture elements. Typo on the DVCPRO-HD? Should be 720 instead of 1080, right?
Collyn
Joel Corral January 5th, 2005, 05:29 PM very easy the 1440x1080 is @ 1.333 pixel aspect ratio.
so 1.333 x 1440 = 1919.52 or 1920x1080.
so indeed it is @ 1920 x 1080 square pixels or 1440 x 1080 1.333 pixel aspect ratio.
:)
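Joel's arithmetic is easy to verify; here is a minimal check in Python (nothing camera-specific, just the 4/3 pixel aspect ratio applied to the stored frame size):

```python
# HDV stores 1440x1080 with a 4:3 pixel aspect ratio (PAR):
# each stored pixel is displayed 4/3 as wide as it is tall.
stored_w, stored_h = 1440, 1080

display_w = stored_w * 4 // 3    # exact integer arithmetic
print(display_w)                 # 1920
print(display_w / stored_h)      # 1.777..., i.e. 16:9
```

Using the exact fraction 4/3 rather than a truncated 1.333 keeps the result at exactly 1920.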
Barry Green January 5th, 2005, 09:50 PM The camera images at 1920x1080, but the HDCAM recording system downsamples to 1440x1080 before recording. Maybe HDCAM SR records a full 1920x1080, but if so I think it's about the only tape system that does. All the others downsample.
HDCAM downsamples from 1920 down to 1440, and then on playback it'll interpolate new pixels up to 1920 again. That's a 33% increase.
DVCPRO-HD downsamples from 1920 down to 1280, and on playback it'll interpolate back up to 1920, which is a 50% increase.
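Barry's percentages can be confirmed with a quick sketch (assuming simple horizontal scaling back to a 1920-wide raster):

```python
def upscale_increase(stored_w, target_w=1920):
    """Percent increase in horizontal samples when interpolating back up."""
    return (target_w - stored_w) / stored_w * 100

print(round(upscale_increase(1440), 1))  # 33.3 -> HDCAM / HDV
print(round(upscale_increase(1280), 1))  # 50.0 -> DVCPRO-HD
```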
Gabriele Sartori January 6th, 2005, 12:39 AM "We are looking at using a couple of the FX-1's but are concerned with the 1440x1080. What is the deal here?"
Because it is cheaper. Also, the camera doesn't have a native 1440x1080-pixel CCD. They use a technique called pixel shifting, which is the biggest marketing BS of all time. It is interpolation, or "upscaling." That is the reason why this wonderful camera is head and shoulders above the JVC for everything but resolution...
Bye
Davi Dortas January 6th, 2005, 12:44 AM <<<-- Originally posted by Collyn Tabor : We are looking at using a couple of the FX-1's but are concerned with the 1440x1080. What is the deal here? Do the camera's output to 1920x1080 or do you just have a blank on each side? Also 1440 x 1080 is 4:3, how does sony say that the camera is 16:9? Line doubline? Stretching?
Collyn -->>>
Well, you should look further. The FX-1's CCDs only record 960x1080 real pixels. This then gets stretched and shifted around to 1920x1080 on playback.
Dylan Pank January 6th, 2005, 06:23 AM <<<-- Originally posted by Davi Dortas : Well you should look for more. The FX-1 CCD only record 960x1080 real pixels. This then get stretched and shifted around to 1920x1080 on playback. -->>>
Actually each of the CCDs is 960*1080, but the cam uses pixel shift to create an effective resolution of 1440*1080.
Since MPEG2 uses 4:2:0 colour resolution, there's no significant loss compared to true 1440.
Joonas Kiviharju January 7th, 2005, 01:38 AM Collyn Tabor wrote: "Typo on the DVCPRO-HD? should be 720 instead of 1080 right?"
No, I believe it's not a typo. If I remember correctly DVCPRO-HD is 1280x1080 in 1080i, and 960x720 in 720p. But it has a colour space of 4:2:2, which is a lot better than the 4:2:0 of HDV. (Somebody correct me if I'm wrong.)
Toke Lahti January 7th, 2005, 03:38 AM <<<-- Originally posted by Barry Green: Maybe HDCAM SR records a full 1920x1080, but if so I think it's about the only tape system that does. -->>>
How about D5-HD and D6 "Voodoo"?
And D9 (= DVHS?)?
Chris Hurd January 7th, 2005, 04:00 AM << They use a technique called pixel-shifting that is the biggest marketing bs of all times. >>
Absolutely false. Pixel shift technology has been around for years and years; it was first introduced by Panasonic and used in some of their higher-end professional video cameras. It most definitely is NOT "marketing bs"; it is very real and it works very well.
There is a lot of "marketing bs" in this business, but pixel shift technology is quite the opposite of that. Instead it is a definite benefit. The proof is readily apparent in the image.
Dylan Pank January 7th, 2005, 06:09 AM <<<-- Originally posted by Joonas Kiviharju : Collyn Tabor wrote: "Typo on the DVCPRO-HD? should be 720 instead of 1080 right?"
No, I believe it's not a typo. If I remember correctly DVCPRO-HD is 1280X1080 in 1080i, and 960X720 in 720p. But it has a colour space of 4:2:2, which is alot better than 4:2:0 of the HDV. (Somebody correct me if I'm wrong.) -->>>
At present there are no DVCPRO-HD cameras that support 1080i; the cameras are all 720p, but can record at 50/60fps, so the 1080i format is used where producers would rather edit interlaced.
<<<-- Originally posted by Toke Lahti : And D9(=DVHS?)? -->>>
D-VHS uses, AFAIK, pretty much the same specs as DVB HD and is similar to HDV (i.e. 1440x1080i or 1280x720p, MPEG2 at up to 28Mbps), but with additional audio options (AC3 and DTS multi-channel surround, with audio datarates of up to 1.5Mbps on the latter).
Graham Hickling January 7th, 2005, 10:05 AM Dylan, If one had an HDV camera and a DVHS deck, is there any straightforward way to add AC3 audio to the HDV footage (during editing) and then get it back onto tape via the deck?
i.e., it would be simple enough to mux the video and AC3 streams....... but would the deck accept it as input?
Gabriele Sartori January 7th, 2005, 10:40 AM "Absolutely false. Pixel shift technology has been around for years and years and years, it was first introduced by Panasonic and used in some of their higher-end professional video cameras. It most definitely is NOT "marketing bs," it is very real and it works very well."
Since I'm the marketing VP of a high-tech company, I'm glad someone still believes with such passion in marketing messages. I'm perfectly aware that P.S. has been around forever; the Canon GL1, XL1 etc. had it as well. Now, since you know everything, explain to the public how it works (I know, the green channel is used for the luminance, I know) and why it is mathematically different from a good interpolation. There is nothing mechanically shifting anything in these cameras, it is better you know that... It is just interpolation! Or do you believe they invent pixels out of nothing?
Dylan Pank January 7th, 2005, 11:04 AM <<<-- Originally posted by Graham Hickling : Dylan, If one had an HDV camera and a DVHS deck, is there any straightforward way to add AC3 audio to the HDV footage (during editing) and then get it back onto tape via the deck?
i.e., it would be simple enough to mux the video and AC3 streams....... but would the deck accept it as input? -->>>
To be honest I don't know, as I don't have access to a D-VHS deck.
However, in theory it should be possible to mux an HDV stream with an AC3 track and create an MPEG-TS out of it. On the Mac side you'd use FFmpegX; I'm not so sure about the PC side of things. Although it only supports the creation of 2-channel stereo, it could pass through a 5.1 AC3 stream created in other software (A.Pack, for example), which should be transferable to D-VHS with Virtual DVHS.
For more information you'd be better off haunting some of the www.videohelp.com forums, where I believe they do this sort of thing a lot. I've read that people have backed up captured HDV streams onto D-VHS without too much trouble.
Gabriele, P.S. is not mere B.S. (though I'm not necessarily touting it as the world's greatest method for creating video images). It works by offsetting one of the CCDs (the green one, IIRC), which effectively increases the luminance channel resolution. DV (and HDV) sample the colour-difference channels (R-Y and B-Y) at reduced resolution, so the fact that those channels have lower resolution is irrelevant.
Where you seem to misunderstand is in this idea that "the green channel is used for the luminance" - in fact, all the channels are.
Basically
0-0-
-0-0 (green)
+
-0-0
0-0- (red or blue)
=
0000
0000
That's my extremely non-technical understanding of it.
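Dylan's diagram amounts to interleaving two sample grids offset by half a pixel. A toy numeric version (illustrative only; 960 is the per-chip horizontal photosite count mentioned earlier in the thread, and real cameras interpolate rather than simply merge):

```python
# Horizontal positions sampled by each chip, in units of one pixel pitch.
n = 960
green_positions   = [i for i in range(n)]        # 0, 1, 2, ...
shifted_positions = [i + 0.5 for i in range(n)]  # 0.5, 1.5, 2.5, ... (red/blue)

# Merging the grids yields twice as many distinct horizontal sample
# positions, which is where the "effective resolution" claims come from.
merged = sorted(green_positions + shifted_positions)
print(len(merged))  # 1920
```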
Toke Lahti January 7th, 2005, 01:34 PM Gabriele, I've understood that "green is used for luminance" applies to 1-chip cameras, which have double the amount of green pixels compared to red and blue with a Bayer filter.
I'd illustrate pixel shift like this:
yx 1 2 3 4 5
1 rb g rb g rb
2 rb g rb g rb
So the camera gets 1920 horizontal samples with three 960x1080 chips.
Of course there has to be interpolation between pixels when luminance is calculated; otherwise, e.g. with a full green screen, you would get green-black stripes. So acuity isn't as good as with real 1920x1080 chips, but the pixel size is bigger, so sensitivity is better.
With a 1920x1080 1/3" chip, one pixel's area would be a quarter of that of an HDCAM-style 2/3" chip. A 960x1080 pixel's area is half that of 2/3", so it's still somewhat usable.
How's the FX1's sensitivity in ASA? 200?
Barry Green January 7th, 2005, 08:02 PM How's the FX1's sensitivity in ASA? 200?
Under low-light conditions, it's about 160. Over most of the exposure scale it should be at least 200. I only tested its response at the lowest light levels, but if the exposure curve is similar to the DVX's, it should pick up a half stop when not at the lowest light levels.
Gabriele Sartori January 8th, 2005, 01:53 AM Toke: "I've understood that green is used for luminance applys to 1-chip cameras, which have double amount of green pixels compared to red and blue with bayer filter"
AND
Dylan: "Where you seem to misunderstand is in this idea that "the green channel is used for the luminance" - in fact, all the channels are."
Whoever said that only green is for luminance? Green, as you said, Dylan, is for "enhancing the luminance resolution"; that is what I was referring to, anticipating some lecturing on the green channel and its offset.
For Toke: I'm not so ignorant, I know perfectly well that no Bayer filter is present in a 3-CCD camera. Anybody knows that a 3-CCD camera has a dichroic prism that splits the light in three, and each CCD takes one fundamental color.
What I was saying is that although the FX1 is the greatest camera in its price range, it has a barbaric way of generating artificial resolution. First they have the BS (because it is just a tiny bit more than BS, and I'll tell you why) pixel shifting, then they have a brute interpolation in order to get to 1080i. They start with 3x 1-megapixel chips and they obtain the equivalent of a 2-megapixel camera. Basically the resolution is artificially doubled overall. Barbaric.
About the pixel shifting: clearly they shift the 3rd CCD (usually green; here is where the green comes from) by half a pixel, but then there is a huge amount of interpolation going on in order to readjust everything into a two-dimensional plane with sequential pixels and 4:2:0 or 4:1:1 coding. Such interpolation needs some amount of low-pass filtering in order to avoid artifacts, and here most of the gain is lost; but finally there are pixels that you can count and that you can sell, marketing-wise.
Your mileage may vary, but I would rather have less low-pass filtering (there to avoid Nyquist and many other artifacts) and my 3 CCDs at their native resolution.
I think 3-CCD cameras will eventually go away. It is much better to use the same amount of semiconductor material and make a single bigger CCD with the right resolution. I could accept less than 3x the size in order to compensate for the lower yield of a bigger chip.
The equivalent pixel size of a 3-CCD camera is bigger because of the three CCDs, but the gain in number of photons hitting the pixels is not real, since you are dividing the available light by 3 with the prism.
The JVC HD10 is inferior to the Sony from many points of view, but philosophically, in my opinion, what they did is better: a single CCD with native, real resolution instead of BS resolution. That it is BS is also verifiable by putting an HD1/10 against a Sony FX1. The Sony wins everywhere except the ability to resolve detail (in good light conditions), where the JVC, which in theory has 50% of the Sony's resolution, is about equivalent or a bit better. So is the Sony resolution real, or just marketing BS?
Anhar Miah January 8th, 2005, 06:52 AM Sorry, but I don't understand this whole issue of 1-CCD vs 3-CCD. As you know, in theory 3-CCD has a 1:1 ratio to start off with; what happens after is up to the DSP (interpolation etc.), and that's a different issue. The idea that 1 CCD is better I find strange, because you will always need a bigger image sensor, since you can never get a 1:1 colour ratio (hence the Bayer pattern).
Of course a 3-CCD system has some negative issues, but you know what, in the end it's a working system.
I'll give you an example: say you had the choice between a 35mm CMOS camera (1 image sensor) and another camera (everything else equal) with 3 CMOS imagers - which one would you pick?
I do agree with you about the electronic jiggery-pokery that's going on; that's not right.
Toke Lahti January 8th, 2005, 08:33 AM Sorry, Gabriele, for underestimating your knowledge. Somehow I always get tricked into the stereotype that marketing = no technical understanding...
I would also welcome cameras with one big chip in this price range.
And the main reason would be DOF. Another thing might be that the body could be physically a little bit shorter.
Also maybe a slightly cheaper price for not having a prism.
Gain with 1 chip is not noticeably better than with 3 chips. As you might know from physics, a prism doesn't eat any light, and both have these color filters in front of the cells.
And with 1-chip there is of course a lot of interpolation going on.
To have the same resolution as 3*2Mp (no interpolation), one needs an 8Mp chip.
So, with a 4/3" one-chipper one would get the same sensitivity and resolution as with 3*2/3".
I believe that the 4/3" size (a bit like Olympus) would be a sweet spot in cell size. With 16:9 the width would be 19.2mm, compared to 9.6mm for 2/3", 12.3mm for Super 16 and 24mm for Super 35.
I'd like to have just one camera for all kinds of shooting, and I think 35mm's DOF is just too hard for doco work without a focus puller.
With 4/3" one could have the same sensitivity as modern HDCAMs, e.g. ≈ 400 ASA. Problems might come with the aperture. At HD resolution the FX1 has only f2.8 at the tele end. 2/3" zoom lenses usually have f2, and 35mm zoom lenses are around f3, but they cost a fortune.
Just to think about lower-priced lenses, one can compare still camera lenses. 10x zoom lenses in kino size are around f4 at wide and f5.6 at tele. 4/3" is much smaller than 35mm, so the aperture might be something in between.
So a 4/3" camera at ASA 400 with an f2.8 zoom lens might have a reasonable price, and its sensitivity would be double that of the FX1 and half that of HDCAMs.
1" camera could use 16mm lenses...
Why aren't these kinds of cameras available?
Aren't there enough prosumers and professionals in the world who would like to have this kind of camera for, let's say, $10k?
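Toke's size comparison can be made concrete with rough arithmetic. The sensor widths below are the ones he quotes; the photosite-area formula is just width divided by pixel count, squared, and the 1920 pixel count is illustrative:

```python
# Approximate 16:9 image widths in mm, from the post above.
widths = {'2/3"': 9.6, 'super16': 12.3, '4/3"': 19.2, 'super35': 24.0}

def pixel_area_um2(width_mm, h_pixels):
    """Area of one photosite in square microns, given horizontal pixel count."""
    pitch_um = width_mm * 1000 / h_pixels
    return pitch_um ** 2

for name, w in widths.items():
    print(name, round(pixel_area_um2(w, 1920), 2))

# Linear size doubles from 2/3" to 4/3" (9.6mm -> 19.2mm), so photosite
# area - and roughly light gathered per photosite - quadruples at equal
# resolution.
```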
Toke Lahti January 8th, 2005, 08:42 AM <<<-- Originally posted by Anhar Miah: I'll give you an example say you had the choice of having a 35mm cmos camera (1 image sensor) and at the same time you had another camera (everything else equal) it had 3*cmos images, which one would you pick? -->>>
If the image's resolution and DOF (CMOS size) are the same, who cares?
Usually they are not, so you have to pick one.
Gabriele Sartori January 8th, 2005, 09:47 AM Toke, I agree 100% with almost everything you are saying, I totally disagree just with this:
"To have same resolution than 3*2Mp (no interpolation), one needs 8Mp chip."
That is not true, Toke; you can't have it both ways. I understand where you are coming from: 3x2 = 6, (1x8) - LP filtering = 6, but it doesn't work that way. On an 8MP single-CCD camera you have the equivalent of 6-7MP, depending on how aggressive the low-pass filtering, AA filtering etc. are. On a 3-CCD camera with 2MP you still get 2MP of luminance resolution if no tricks are applied. Only if you shift all 3 CCDs, doing a double pixel shift, can you actually get 6MP, but then you have to filter. You have just made a sort of equivalent of a single-CCD 6MP camera with bigger pixels.
Yes, a prism doesn't lose per se, but when you split the light, let's say in 2 (not with a prism, with another device), 50% of the photons go one side and 50% go the other (the total is the same); if the splitter is perfect you have 3dB less light on each output. On the prism, what I will now call the attenuation is higher than 3dB; I should do the math, probably 4.5. In the end almost nothing gets lost, and from the losses point of view I consider it not better, and probably not worse, than a single CCD.
We should look at high-end photo cameras: why don't they make them with 3 CCDs? Is the quality any worse with only one? They are spectacular!!
The reason why I would prefer a single bigger CCD is, like you, DOF, plus less electronic processing if the size/pixel count is right.
There are cost reasons involved as well, but I don't think the price we pay is related to cost. The price is just a marketing positioning scheme, so nothing would change for us.
My ideal beast would be a native single big sensor (CCDs are great but are slow to read) 1920x1080 60p camera. I like your reasoning about the current digital photo camera evolution. The only thing the existing sensors are missing is reading speed. Once they fix that and build a big-mama sensor into video cameras, the result will be incredible. I bet you they don't do it because they are afraid to give away too much quality for cheap.
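The decibel figures above can be pinned down. An ideal even n-way splitter attenuates each output by 10·log10(n) dB; a real dichroic prism instead routes each color band to its own chip, so this even-split number is a worst case, not what the prism actually costs:

```python
import math

def split_loss_db(n):
    """Attenuation per output of an ideal even n-way beam splitter."""
    return 10 * math.log10(n)

print(round(split_loss_db(2), 2))  # 3.01 dB for a 2-way split
print(round(split_loss_db(3), 2))  # 4.77 dB for a 3-way split
```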
Toke Lahti January 8th, 2005, 08:34 PM <<<-- Originally posted by Gabriele Sartori: Toke, I agree 100% with almost everything you are saying, I totally disagree just with this:
"To have same resolution than 3*2Mp (no interpolation), one needs 8Mp chip." -->>>
My typo: should be 4Mp.
<<<-- Yes a prism don't lose per-se but when you split the light let say in 2 (not with a prism, with another device), 50% of the photons go one side, 50% go the other side (total is the same) if the splitter is perfect you have 3dB less light on each output. On the prism what I will now call the attenuation is higher than 3dB< I should do the math, probably 4.5. At the end almost nothing get lost and I consider under the losses point of view not better, probably not worse than a single CCD. -->>>
Ah, I wasn't thinking. With a prism the beam is first split and then color filtered. With Bayer there is only color filtering. So with 3 chips one loses 1.5 stops?
<<<-- We should look ad high end photo-camera, why they don't do them with 3CCD? -->>>
Actually there were 3-CCD pro digital still cameras back in the day, when big chips weren't available.
<<<-- My ideal beast would be a native single big sensor (CCD are great but are slow to read) 1920x1080 60P camera. -->>>
Then you might like this upcoming box-hd from Sumix based on Altasens chips (same as Kinetta is supposedly using).
Have you read those DIY hd threads here in dvinfo?
<<<-- Once they fix that and they build a big-mama sensor on the video cameras the result will be incredible. -->>>
They already have those with the Genesis, Dalsa, D20 etc., but production runs of those cameras will be at most a couple of hundred units per model, so the price will not come down with them.
To get prices down, I think there would have to be a multipurpose camera for both stills and motion from companies like Canon, Nikon or Kodak, which don't have older expensive models to protect.
I really haven't thought about what kind of camera body would be optimal for both still and motion shooting...
There are also big issues with affordable data transfer and recording systems. There is a need for good visually and mathematically lossless compression optimized for raw Bayer pictures. And then all you need to carry with you is a battery-powered NAS with a few TBs and gigabit Ethernet to the camera...
Balazs Rozsa January 9th, 2005, 11:59 AM >>With prism beam is first splitted and then color filtered. With bayer there is only color filtering. So with 3chip one looses 1.5 stops?
With a prism the splitting and the filtering happen at the same time, so the loss of light is very low. With a Bayer filter about two-thirds of the light is blocked in the filters (for an individual green pixel, for example, the green light gets through and the rest is lost). So if you built a 3-CCD and a 1-CCD camera using the same CCDs, the 3-CCD camera would be about three times more sensitive to light than the other one.
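Balazs's three-to-one claim can be sketched with a crude photon budget (illustrative assumptions: white light divides evenly into three color bands, ideal filters and mirrors, identical CCDs):

```python
# Bayer sensor: each photosite sits behind one color filter that passes
# roughly 1/3 of the spectrum and absorbs the other 2/3.
bayer_utilization = 1 / 3

# Dichroic prism: each band is reflected toward its own CCD rather than
# absorbed, so ideally every photon reaches one of the three chips.
prism_utilization = 3 * (1 / 3)

print(round(prism_utilization / bayer_utilization, 1))  # 3.0
```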
Ken Hodson January 9th, 2005, 04:25 PM Hey gents,
I would like to recommend this Steve Mullen article. It does a good job of explaining the pros and cons of both.
Have a read and let us know what you think.
http://videosystems.primediabusiness.com/ar/video_ccd_counting_needed/
Gabriele Sartori January 9th, 2005, 04:31 PM "With a prism the splitting and the filtering happens at the same time, so the loss of light is very low."
It is not exactly like that, but there is a fundamental truth: with 3 CCDs the light is sampled 3 times in the same place, so there is theoretically 3x the amount of available light. It isn't quite like that, because the colors are not 1/3+1/3+1/3, but it gives an idea.
About the prism, there are two factors at play: one is the efficiency of the prism, the other is the splitting of the light. The first depends on the quality of the prism and will never be 100%; the second is a physical phenomenon, and when you split the light in 3, on average each beam is about 4dB less powerful than the original (here too it is not a precise 1/3; this is just for the sake of conversation). This is not important, though, because the green light, for example, will go to the green CCD almost untouched. With a single CCD this happens as well, but the light that doesn't belong to a certain color is unused, so it is lost.
VERY IMPORTANT though: a similar implementation (and losses) assumes that no pixel shifting is present, hence a 3-CCD 2MP (per CCD) camera will have a maximum resolution of 2MP, not 4, not 6. It will be more sensitive to light, though.
As I said, you can't have it both ways.
Toke Lahti January 9th, 2005, 06:27 PM <<<-- Originally posted by Balazs Rozsa: With a prism the splitting and the filtering happens at the same time, so the loss of light is very low. With a bayer filter about two-third of the light is blocked in the filters (for a green individual pixel for example the green light gets through, the rest is lost). So if you build a 3 CCD and a 1 CCD camera using the same CCDs, the 3CCD camera is about three times more sensitive to light than the other one. -->>>
Well, aren't there the same color filters in front of the 3-CCD block's cells as in front of a 1-CCD cell's pixels? Or how else does the green CCD get only green, etc.?
Thanks Ken for the link.
One thing in the article puzzles me: "Three chips and an optical-prism add bulk and cost. The prism also limits the maximum F-stop of the lens."
Zeiss DigiPrimes have f1.5. How limited is that?
Zeiss Superspeeds have f1.2. I don't see the difference as that remarkable.
Gabriele Sartori January 9th, 2005, 06:40 PM "I would like to recomend this Steve Mullen article"
I saw this article in the past. It is honest and well written. I'm not sure where he gets the number for the low-pass filter slope that allows him to calculate real vs. theoretical pixel-count rendition; I disagree on some numbers, but it is not too important. The article is fair in explaining that fundamentally 3-CCD solutions aren't necessary anymore (if a single bigger sensor is used, I would add). He also talks about F limitations with the prism; I'm not sure what he is referring to. I can only conclude that 3 CCDs today is a marketing call, and it is even more evident because, while we see better and better single-CCD cameras with all the new Bayer filters/methodologies (like the JVC, for example), some companies are going for no reason in the opposite direction (like some $1K 3-CCD consumer cameras, for example).
The truth is that common belief credits the 3-CCD solution with better colors, while the main advantage is actually in the luminance. Usually when light is very low, cameras start to lose saturation; for an equivalent 3-CCD camera this happens quite a bit later, and it is probably one of the co-factors convincing people that 3 CCDs = better colors. In my opinion 3 CCDs = more photons usable for luminance, and for this reason I hope that one day they build prosumer cameras with a single big sensor.
New sensors with better filters and higher use of the green channel (59% of the light intensity) are closing the gap in luminance with 3 CCDs; a bigger sensor would definitely close the gap. The standard is either 4:1:1 or 4:2:0, so in terms of color information a single CCD is already almost overkill.
Gabriele Sartori January 9th, 2005, 06:52 PM "So if you build a 3 CCD and a 1 CCD camera using the same CCDs, the 3CCD camera is about three times more sensitive to light than the other one. -->>>
Well, isn't there same color filters in front of 3ccd block's cells than in front of 1ccd cell's pixels? Or how else green ccd gets only green etc.?"
The information is split but is recombined in the electrical domain, so almost nothing is really lost. Also, even if there is a filter for each channel, most of the light that reaches that channel already belongs to the right color, so almost nothing is lost. The attenuation exists but is regained later in the electrical domain. Naturally, if you make full use of this great opportunity you can't do tricks like "pixel shifting", so the resolution is determined simply by the pixel count of one of the 3 CCDs.
2+2+2 = 2 (you get as a gift extra color information, which you have to decimate anyway, and extra luma information, which is nice since you have more photons per pixel, and that is a good thing to have).
Remember that in the single CCD not much information is lost either, but for different reasons. In the end there is a bit more signal in the pixels of a 3-CCD camera, but the difference is probably 1 to 3 dB vs. the 1-CCD camera. Very little in voltage, but still important when you are at the limits. Almost undetectable 95% of the time. (I am referring to the same CCD size.)
Toke Lahti January 10th, 2005, 02:11 AM I think that a fair comparison between 1 CCD and 3 CCDs would be 4/3" 4Mp vs. 3x2/3" 2Mp.
With these you get the same resolution, pixel size and amount of light per pixel.
I would guess that single 4Mp chip is cheaper than three 2Mp's plus prism.
The only problem I see with a bigger imager size is that there are no fast zoom lenses available.
Why is this?
Can someone elaborate?
Is it just that, e.g. with ENG lenses, picture quality is so bad at f1.8 that there are no such zoom lenses for a 35mm imager?
(Panavision has a zoom with T2.3, but that's about it...)
If you want a fast zoom, let's say 10x at f2, is the imager size limit really 2/3", and why?
A far-future dream would be a camera with an imager having so much resolution and sensitivity that in low-light conditions one could just use a portion of the imager and a faster lens...
Balazs Rozsa January 10th, 2005, 04:19 AM <<<-- Originally posted by Toke Lahti : ... Well, isn't there same color filters in front of 3ccd block's cells than in front of 1ccd cell's pixels? Or how else green ccd gets only green etc.? -->>>
Some surfaces of the prism have a number of coating layers that let only certain wavelengths pass, based on interference (dichroic mirrors). The rest of the spectrum is bounced back and continues its way to the other CCDs. This is a more efficient way of utilising the incoming light than a Bayer filter, where the unwanted portion of the light is just lost in the filter.
Balazs Rozsa January 10th, 2005, 04:54 AM <<<-- Originally posted by Gabriele Sartori : "VERY IMPORTANT though, a similar implementation (and losses) assumes that no pixel shifting is present hence a 3CCD 2MP (per CCD) camera will have a maximum resolution of 2MP not 4, not 6. It will be more sensitive to the light though.
As I said, you can't have it both way. -->>>
I don't see a reason why you could not use pixel shifting and still take advantage of the better sensitivity of 3 CCDs. By pixel shifting you can increase the resolution while the amount of light per pixel remains the same.
One interesting camera is the JVC 8-megapixel camera that uses four 2-megapixel CCDs (a red, a blue and two green). It uses pixel shifting in both the horizontal and vertical directions. The resulting pixel color arrangement and count are quite similar to a single 8-megapixel Bayer sensor, so the resolution you can produce is about the same. Still, the individual pixels get more light with the pixel-shifted camera.
Toke Lahti January 10th, 2005, 09:25 AM I'm not sure if we should move this conversation under the topic " I don't believe 3-CCDs are needed anymore", but I'd like to question that is dichroic prism somehow more efficient than bayer filter?
Here's a nice illustration of 3ccd block:
http://en.wikipedia.org/wiki/Dichroic_prism
Let's take as an example white light coming in.
In a 3-CCD block the first mirror bounces blue light (1/3 of the whole) and the blue CCD registers it.
How does this differ from a blue-filtered pixel in a Bayer CCD, where the blue filter blocks 2/3 of the whole, letting 1/3 pass?
(And we are still comparing a 4/3" 1-CCD with a 3x2/3" 3-CCD, so the pixel size is the same.)
Then a little bit more about resolution: especially if YUV sampling is used, where chroma resolution is less than luminance, 4Mp is as good as three 2Mp's. How about full RGB resolution?
What are the best debayer algorithms, how do they work, and how efficient are they?
What resolution does a 1-CCD camera need to match the acuity of 3x2Mp with a full RGB (4:4:4) signal?
Balazs Rozsa January 10th, 2005, 11:39 AM <<<-- Originally posted by Toke Lahti : Let's take as an example white light coming in.
In 3ccd first mirror bounces blue light (1/3 of the whole) and blue ccd registers it.
How does this differ from blue filtered pixel in bayer ccd, where blue filter filters 2/3 of the whole resulting 1/3 to pass?
(And we are still comparing 4/3" 1ccd with 3x2/3" 3ccd, so the pixel size is the same.) -->>>
If you use a bigger lens with a bigger CCD, of course you will have better light sensitivity.
But if you use a single 4/3" CCD, you need a lens with twice the focal length compared to a 2/3" CCD. If you scale an f2.8 lens up to an f2.8 lens with twice the focal length, the weight increases 8 times. And this lens will deliver a greater overall amount of light to your sensor, so your camera can catch up in sensitivity. But with a similarly sized big lens, a 3-CCD camera would be even more sensitive.
I don't say, however, that if you add together the price of the imaging block and the lens, the 1-CCD camera will be more expensive (of course only in the case of lenses with a mild zoom range). A 2/3" prism must be expensive. A cheap single-chip 4/3" video camera would be cool.
<<<-- Then a little bit more about resolution: especially if yuv sampling is used where chroma resolution is less than luminance one 4Mp is as good as three 2Mp's. How about full rgb resolution?
What are the best debayer algorithms, how do they work and how efficient they are?
What is the needed resolution for 1ccd to get same acuity than 3x2Mp with full rgb (4:4:4) signal? -->>>
There are few RGB cameras available especially with a single sensor. In the case of the Panavision Genesis they put 11.5 MPixels to generate and record a 2MPixel RGB 4:4:4 image.
Toke Lahti January 10th, 2005, 04:41 PM <<<-- Originally posted by Balazs Rozsa: A 2/3" prism must be expensive. A cheap single chip 4/3" video camera would be cool. -->>>
I've also understood that it is too hard to use a prism with chips bigger than 2/3" or pixels smaller than HDCAM's, because of thermal expansion etc.
4/3" appears very tempting.
40-150mm F3.5-4.5 for Olympus is $280.
By adding one 0 to the price I bet it could be 15-150mm/f2.8...
Mass market is the key thing.
Canon HJ11x4.7B/T2.1 for 2/3" weighs 1.6 kg and costs $23k.
<<<-- There are few RGB cameras available especially with a single sensor. In the case of the Panavision Genesis they put 11.5 MPixels to generate and record a 2MPixel RGB 4:4:4 image. -->>>
Genesis' brochure says: "12.4 mega pixel, true RGB sensor (not Bayer pattern)".
Whatta heck?
Of course Panavision will add resolution when they develop better ways to record it.
Andre Jesmanowicz January 14th, 2005, 01:02 AM Slow down.
A single-chip camera is equivalent to a 4-chip camera with every chip having FOUR times fewer pixels! Two chips are green, shifted by 1/2 pixel (!!!), one is green and one is blue (also shifted). If you believe in single-chip camera resolution, you have to believe in the increased resolution of a 3-chip camera (except with one green chip, not two). Study a diagram of the JVC chip at work. It is equivalent, at least horizontally, to the operation of shifted sensors in 3-chip cameras.
That simple,
Andre
Gabriele Sartori January 14th, 2005, 02:38 AM "A single chip camera is equivalent to 4-chip camera with every chip having FOUR times less pixels!"
you are so wrong!
Andre Jesmanowicz January 15th, 2005, 12:11 AM The other two chips are blue and red (not green - sorry).
If you ever had a chance to see an enlarged CCD sensor you would find that there are spaces between active pixels. Theoretically, one could put another pixel between every two active ones, but it is not possible in practice. Now, use a 4-way prism, put four sensors shifted by half the pixel spacing, up, down and right, and you get an effective sensor with double the resolution, vertically and horizontally. The Sony HD camera has 1080 pixels vertically, so only a horizontal shift is needed.
Is it clear now?
Andre
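The half-pixel-shift idea above can be shown in one dimension: two sensors sampling the same signal, offset by half a pixel pitch, interleave into a grid of twice the density. A toy sketch (illustrative only; it ignores optical low-pass filtering and the finite pixel aperture):

```python
import numpy as np

def sample(signal, pitch, offset=0.0, length=10.0):
    # Point-sample a 1-D signal at positions offset, offset + pitch, ...
    xs = np.arange(offset, length, pitch)
    return xs, signal(xs)

sig = lambda x: np.sin(2 * np.pi * 0.35 * x)     # detail near one chip's Nyquist

x_a, s_a = sample(sig, pitch=1.0)                # sensor A
x_b, s_b = sample(sig, pitch=1.0, offset=0.5)    # sensor B, half-pixel shift

# Interleaving the two sensors yields samples every 0.5 units -- the same
# grid a single sensor with twice the pixel count would give.
order = np.argsort(np.concatenate([x_a, x_b]))
x_combined = np.concatenate([x_a, x_b])[order]
print(np.diff(x_combined))                       # uniform spacing of 0.5
```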
Valeriu Campan January 15th, 2005, 12:54 AM Andre,
Fuji have a small pixel in between the large pixels. It can be activated for capturing a little more highlight detail:
http://www.dpreview.com/news/0301/03012202fujisuperccdsr.asp
Gabriele Sartori January 15th, 2005, 02:38 AM "Theoretically, one could put another pixel between every two active but it is not possible in practice"
It doesn't work like that. You may not know that a CCD has an array of microlenses so that almost all of the area from pixel to pixel is used. The effective active area of a pixel is indeed smaller, but thanks to the microlenses almost all of the area catches light, with no gaps in between, unless gaps are left intentionally, in which case some light is lost, with all the consequences......
Only the Sigma SD9 and the Kodak 14n still cameras lacked microlenses, since microlenses can produce aberrations in certain conditions (with some old optics). Both cameras have been discontinued and reintroduced to the market with a layer of microlenses.
Toke Lahti January 15th, 2005, 04:16 AM <<<-- Originally posted by Andre Jesmanowicz: Is it clear now? -->>>
Very clear, as it has been.
Remember that no camera uses 4ccd construction.
If you use pixel shifting with all the CCDs, the resolution of this 4-CCD system would be identical to a 1-CCD system that uses the same amount of silicon, e.g. 4x 1/3" 1Mp compared to 1x 2/3" 4Mp.
What is the benefit of having 4ccd's then?
You want to have deep DOF?
When digital cinema reaches its prime (after some decades) there might be a need for multi-imager systems again: if the imagers are already big enough, one might want more resolution and sensitivity but not less DOF.
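The "same amount of silicon" comparison above is just area arithmetic; a rough check using the usual textbook active-area figures for these formats (nominal values; actual chips vary by manufacturer):

```python
# Nominal active areas for common sensor formats (textbook figures; an
# assumption -- actual "1/3-inch" and "2/3-inch" chips vary by maker).
third_inch = (4.8, 3.6)    # mm, "1/3-inch" type
two_thirds = (8.8, 6.6)    # mm, "2/3-inch" type

def area(size_mm):
    w, h = size_mm
    return w * h

ratio = area(two_thirds) / area(third_inch)
print(round(ratio, 2))                          # 3.36 -- 2/3" vs 1/3" area
print(4 * area(third_inch), area(two_thirds))   # 69.12 vs 58.08 mm^2
# Four 1/3" chips and one 2/3" chip use silicon of the same order, which
# is the sense in which the two systems trade off evenly.
```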
Andre Jesmanowicz January 16th, 2005, 01:27 AM I want nothing. It was a simple explanation of how one can get increased resolution by shifting the green chip in 3-CCD cameras.