Wayne Morellini
July 29th, 2005, 01:53 AM
I have been giving it some further thought, and I think it might actually be possible to pixel shift RAW (uncompressed native mask output) single-chip Bayer footage (or other patterns). Normally, pixel shifting Bayer would be difficult, because each pixel has only one real primary (the others are interpolated), so there are no overlapping colours to shift. But if the frame is made up from subsamples of a higher-resolution frame, then there can be real colour data for all the primaries on each pixel to shift around (note: I am not saying this is as good as three-chip pixel shift).
Resulting pixels are made up of:
RGRGRG   R.R.R.   ......              .G.G.G
GBGBGB = ...... & .B.B.B   Green is:  G.G.G.
RGRGRG   R.R.R.   ......              .G.G.G
GBGBGB   ......   .B.B.B              G.G.G.
RGRGRG   R.R.R.   ......              .G.G.G
This means that sensors that only output low resolution, combined (binned) down from a high-resolution sensor at normal frame rates, might be suitable for pixel shifting up to higher resolution. The downsides are that the output is preferably 4:4:4 and uncompressed, and you have to figure out how to get the sensor to send out the combined pixels shifted.
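The decomposition above can be sketched in code. This is a minimal illustration of splitting an RGGB Bayer mosaic into its three non-overlapping colour planes; the array sizes and values are made up for the example, not from any real sensor readout.

```python
import numpy as np

# A tiny 6x6 Bayer mosaic (RGGB layout); values are just stand-in counts.
mosaic = np.arange(36).reshape(6, 6)

# Site masks for each primary in an RGGB layout:
# even rows run R G R G ..., odd rows run G B G B ...
r_mask = np.zeros((6, 6), dtype=bool)
g_mask = np.zeros((6, 6), dtype=bool)
b_mask = np.zeros((6, 6), dtype=bool)
r_mask[0::2, 0::2] = True   # R on even rows, even cols
b_mask[1::2, 1::2] = True   # B on odd rows, odd cols
g_mask[0::2, 1::2] = True   # G on even rows, odd cols
g_mask[1::2, 0::2] = True   # G on odd rows, even cols

# Each photosite carries exactly one primary: the planes never overlap,
# and together they cover every pixel. Green has twice the sites.
assert not np.any(r_mask & b_mask) and not np.any(r_mask & g_mask)
print(np.sum(r_mask), np.sum(g_mask), np.sum(b_mask))  # 9 18 9
```

The point of the sketch is that no single site holds more than one real primary, which is why plain Bayer output has nothing overlapping to shift.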
So how do the figures stack up? To get the resolution you are after, you have to start with a sensor of at least that resolution. Output will have to be 1/3rd resolution in each direction (plus a bit of overlap), i.e. 1/9th the pixel count, unless you want to do half-pixel shift.
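The arithmetic works out like this. The sensor dimensions below are hypothetical numbers picked purely to illustrate the 1/3-per-direction, 1/9-pixel-count relationship described above.

```python
# Hypothetical sensor: say the full mosaic is 1920x1080 photosites.
sensor_w, sensor_h = 1920, 1080

# If each output pixel combines a 3x3 block of sensor pixels, the
# binned output is 1/3 the resolution in each direction...
out_w, out_h = sensor_w // 3, sensor_h // 3
print(out_w, out_h)   # 640 360

# ...which is 1/9 of the total pixel count.
ratio = (out_w * out_h) / (sensor_w * sensor_h)
print(ratio * 9)      # 1.0
```

So the binned frames coming off the sensor are small; the pixel-shift reconstruction is what would claw the resolution back.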
The interesting thing is how 4:2:0 progressive works. If it can be made to diagonally stagger its reds and blues, like pixel shifting, you should be able to partly reconstruct 4:4:4 SD colour space by using pixel shifting methods. I.e.:
RR
RBB
BB
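The stagger idea can be illustrated with a coverage count. This is only a sketch of the geometry, assuming (hypothetically; standard 4:2:0 sites its chroma on one fixed grid) that the chroma sampling grid could be offset diagonally between two frames.

```python
import numpy as np

# Two hypothetical chroma sampling grids on an 8x8 patch of full-res
# positions: one at even rows/cols, one shifted diagonally by one pixel.
h, w = 8, 8
grid_a = np.zeros((h, w), dtype=bool)
grid_b = np.zeros((h, w), dtype=bool)
grid_a[0::2, 0::2] = True   # chroma sites, frame A
grid_b[1::2, 1::2] = True   # same grid, diagonally staggered, frame B

# Each grid alone samples 1/4 of the positions (4:2:0 chroma density);
# the union of the staggered pair samples half, with no overlap.
print(grid_a.mean(), grid_b.mean(), (grid_a | grid_b).mean())  # 0.25 0.25 0.5
```

Which is the whole appeal: the staggered grids never land on the same site, so every staggered sample adds new chroma information rather than repeating an old position.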
I am not saying that pixel shift is perfect, but it is a good compromise.
I recently made an assertion on the forum about filming pure primaries/complementaries breaking down Bayer and pixel shift schemes (due to a lack of data on the alternative colour primary pixels and planes), but I was wrong, and have figured something else out. For each primary colour filter, if you let some of the non-target colours bleed through, all pixels have a chance to register all colours. All you have to do is either work out the level of each colour that should be there from normal debayer methods and subtract the wrong colours, and/or keep the bleed-through so low that it only occupies the least significant bits of a pixel and can be filtered out in debayering by clipping the lower bits. For instance, with 12-bit pixels, the filter lets non-target colours through only on the least significant one or two bits. This leaves a good 10 bits of pixel data for colour correction, but each pixel still records the existence of the other primary or complementary colours. You could keep the alternative-colour bleed-through so low that the last two bits register only no light, 25%, 50%, 75%, or maximum saturation. So when somebody comes along with monochrome lighting, say a bright red or purple sweater, the camera can still record accurately where the detail in the sweater is, and where its edges end.
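The bit arithmetic above can be sketched as follows. The split is illustrative of the idea only; `split_pixel` and the 10/2-bit division are assumptions for the example, not any real sensor's behaviour.

```python
def split_pixel(raw12):
    """Separate a hypothetical 12-bit sample into the 10-bit target-colour
    signal and the 2 LSBs where cross-colour bleed-through is allowed to
    live. Clipping those LSBs in debayering discards the bleed."""
    main = raw12 >> 2       # top 10 bits: the filter's target primary
    bleed = raw12 & 0b11    # bottom 2 bits: coarse bleed-through level
    return main, bleed

# A "red" photosite looking at a pure blue subject: the red signal is
# zero, but the bleed bits still register that light landed there.
raw = (0 << 2) | 0b11       # no target colour, maximum bleed-through
main, bleed = split_pixel(raw)
print(main, bleed)          # 0 3
```

So even under monochrome lighting, every photosite can report "something is here" in its low bits while the top 10 bits stay clean for colour correction.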