View Full Version : Sumix 2/3" 1920x1080 CMOS


Pages : 1 2 3 4 5 6 7 8 [9] 10 11

Jose A. Garcia
May 30th, 2008, 10:47 AM
Or Farhad, perhaps Sumix could do something for DIYers?

That'd be great too. Imagine if Sumix developed a multipurpose, visually lossless RAW hardware encoder with 26-pin Micron/Aptina compatibility (to plug in ANY headboard) and, say, DVI out. Add basic internal software to control the sensor and that's all. The board controls width, height and fps. Color correction can be done in post because we're shooting RAW. The hardware encoder compresses whatever comes from the sensor, so you can have any possible resolution and framerate, restricted only by the encoder speed or the sensor itself. The DVI out can drive an LCD with an adapter. Aptina (Micron) sensors are cheap, pretty much standard, fast, and have great latitude and sensitivity. If you want 35mm DOF, add an adapter. You can also buy the only 16x12mm Aptina sensor available (with 200fps at full resolution and 2352x1728 pixels) to get real cinema 35mm DOF.

That'd be the most basic DIY filmmaking kit ever. Now we just need Farhad to say it can be done. They already have the lossless hardware encoder inside the 12A2C camera.

Farhad, if you develop that hardware encoder and add something like DVI in, Composite in and a few more, you can actually sell it for multiple different purposes, not just DIY cameras.

Juan M. M. Fiebelkorn
May 30th, 2008, 03:59 PM
What sensor is that?

Never heard of it. Are you somehow related to Micron?

Jose A. Garcia
May 30th, 2008, 05:47 PM
No, I'm not. I know I'm always talking about Micron sensors but I think they're pretty good for their price. I tested their MT9P031 sensor last year for my first digital cinema camera project and found it to be really good in terms of color, latitude, low light...

I'm attaching the flyer for the sensor I'm talking about. The only problem is the price: more than 1,000 euros for the bare sensor and about 10,000 euros for the demo board. It doesn't say anything about the headboard.

Paul Curtis
May 31st, 2008, 03:21 AM
No, I'm not. I know I'm always talking about Micron sensors but I think they're pretty good for their price. I tested their MT9P031 sensor last year for my first digital cinema camera project and found it to be really good in terms of color, latitude, low light...

I'm attaching the flyer for the sensor I'm talking about. The only problem is the price: more than 1,000 euros for the bare sensor and about 10,000 euros for the demo board. It doesn't say anything about the headboard.

Jose,

This is an example of a sensor that looks good but that I don't think is practical for digital cinema. It's really a machine vision sensor, and it does that job very well.

- It's 4:3. Not ideal, but no great problem.
- Internal 10-bit? That seems like pretty low processing.
- The 7-micron pixel size is good, but with no mention of fill factor, the sensitivity isn't as good as it could be. A Bayer mask will reduce that even more.
- Here's the big issue though: 16mm x 12mm is in no-man's land as far as lenses go. No S16 lens will cover that, and 35mm lenses will have a crop factor (try to find a 35mm lens wider than 14mm, and then find a fast one). I used the 14mm Kodak sensor and struggled to find C-mount lenses that just about covered it. I even tried a Canon 10-22mm EF-S lens with an adapter, which worked but was impractical for focusing and aperture adjustment.

Hence the APS-C size: it has the lenses needed. No matter how good the sensor, without suitable, quality glass it's blind. The solution has to be homogeneous.

For focus issues, something like the Birger mount for electronic lenses could be developed so that pulling focus becomes practical (and wireless focus too). So, some hardware to drive the AF and aperture via a separate controller.

It would be a great idea if Sumix did as you say and developed a processing board for LUT conversion and real-time compression. If it was designed broadly in the first place, it could even be applied to existing cameras through their HD-SDI ports (one box does many things).

cheers
paul

Jose A. Garcia
May 31st, 2008, 07:58 AM
What about this one?

http://download.cypress.com.edgesuite.net/design_resources/datasheets/contents/lupa_4000_8.pdf

It's a square sensor, but its width is close to APS-C size. It says 15fps at full res, which could be 25-30fps at 1152 or 858 pixels of vertical resolution.

Anyway we would need a way to control the sensor.

Régine Weinberg
June 1st, 2008, 06:05 AM
Well done. The APS-format sensor is THE idea, but it's a mass market product for the photo industry.

Let's face it: the Nikon D3 is the only full-frame sensor they have made since the D1. Why? Fewer, bigger pixels on a full-frame chip are the only way to get less digital noise. You can take pictures at up to a crazy 12000 ISO, which you need for shooting live acts or indoor shoots in small bars, etc. The sister product, the D300, has much the same features but an APS chip, and its ISO can only be pushed up to 3200, at an asking price that's a fraction of the D3's.

APS is a billion-unit mass market product, and we don't even need 12 Mpix. Our dream chip would be 5K with big pixels, for low noise, at 30 fps. It could be done, sure. But we are simply not a multi-million-unit market, and nobody will ever make such a crazy niche product; it makes no sense in marketing terms.

The money behind RED is Oakley, no kidding. It's the same story as Ubuntu: not perfect, but very nice to use. Not exactly what we are crying out for, but the best we can get.

John Wyatt
June 1st, 2008, 09:27 AM
In the struggle to come up with a low-cost, high-quality camera system, for people who would rather be filming than building portable computers for their cameras, is it possible to make some brief lists? To decide what we would like (would need some discussion), what we have now (would need some research), and what exactly needs work (would need some community!). If we can agree on what we reasonably want, we can quickly find out what's already available, and then concentrate on the problems left to be solved in order to bring it into reality.

I know Jose and Daniel have been trying to achieve this with the help of the Sumix Corporation on this very thread, but I'm still not clear in my own mind what the problems are and what we should be looking at as a priority. It's not helped by the sheer variety of issues: turnkey or open source software, limitations of frame grabbers, how to power a micro computer with camera batteries, the best lens mount, ever-changing technology and standards for motherboards and sensors, preferred onboard storage strategies, RAW or compressed, 2K or aim for the future with 4K, LCD monitors you can't see in daylight, minimum orders for sensors and non-disclosure agreements. It's endless.

The fact that SI have battled the same issues and come up with a camera I'd love to own but will probably never be able to afford makes me wonder if it's ever going to be possible, before this decade is out (it really does seem like we're trying to get to the Moon sometimes!), to finally get the camera we need to make no-budget movies. It is such a deep problem that you can easily get caught up in the building and almost forget why you wanted a camera in the first place. But perhaps a big problem like this can be broken up into smaller parts and solved by different people.
Maybe a community mission (like Linux) can solve it; I just wonder how long it will take before we are overtaken by events, and non-filmmakers are shooting their kids on the lawn with that 3K camera...

Régine Weinberg
June 1st, 2008, 09:41 AM
http://www.toshiba.com/taisisd/indmed/products/prod_detail_ikhd1.jsp
look at this

Daniel Lipats
June 1st, 2008, 10:27 AM
There is much more to getting a complete camera than just hunting for the perfect sensor. The Sumix camera project is a very good example of this. The hardware, software, and optical aspects all need to be addressed; a weakness in any of these areas makes it all just an expensive paperweight.

There have been a variety of problems with getting a Sumix-based camera running: obstacles in communication, technical matters, and implementation.

I have reached the limits of resources to invest into this project. I will take my losses and quit. Tonight I will be requesting a return of the camera to Sumix.

I regret having to do this but I don't see any other options. This is just not going in the direction I had in mind.

Jose A. Garcia
June 1st, 2008, 10:38 AM
Régine, I've seen that camera many times. It's quite expensive, has 3 CCDs (I'd rather have just one sensor) and does just 1080i.

Paul Curtis
June 2nd, 2008, 01:49 AM
What about this one?

http://download.cypress.com.edgesuite.net/design_resources/datasheets/contents/lupa_4000_8.pdf

It's a square sensor, but its width is close to APS-C size. It says 15fps at full res, which could be 25-30fps at 1152 or 858 pixels of vertical resolution.

Anyway we would need a way to control the sensor.

Jose,

Yes, it's quite interesting, but the QE (quantum efficiency) and the FF (fill factor) together seem very low, and those determine how good the sensor is at converting photons to charge. So despite the 12-micron pixels, there must be a lot of on-pixel circuitry cutting that result down. A CCD, on the other hand, uses the whole pixel, so QE tends to be much higher.

You must remember that by the time you've whacked a Bayer mask over it, the sensitivity nosedives again.

That's one benefit of 3-chip systems: not only are they more sensitive, but you can also apply analogue gain to the channels individually, which makes for better colour balancing (tungsten vs daylight).

Even so, it's an interesting sensor heading in the right direction...

I'm going to read the datasheet in more detail once I've had more coffee.

cheers
paul

Paul Curtis
June 2nd, 2008, 02:19 AM
In the struggle to come up with a low cost high quality camera system, for people who would rather be filming than building portable computers for their cameras, is it possible to make some brief lists? To decide what we would like (would need some discussion), what we have now (would need some research), and what exactly needs work (would need some community!).

John,

It's a nice idea, but I'm not sure how practical it is. There are so many components, and each component is evolving at a different rate (PC hardware changes very quickly, solid state is becoming more practical, etc.). Or new third-party hardware makes something previously impossible possible (I'm thinking of the hard drive recorders about to appear, or SxS as a recording format).

It seems that every now and again some intrepid folk start off on a camera project, learn a lot and give up. One day I'm sure someone will crack it, but I suspect the personal cost would be higher than just buying a camera; it's a work of passion more than finance.

This forum is a great resource, and anyone starting out should go back and read old threads, then 'stand on the shoulders of giants' and learn from past mistakes.

SI have sunk an enormous amount of time (money) into software development. I doubt an individual has that resource; SI's development cost, at least, is split amongst many camera sales. I would hope that Jason et al are in it for the long term and that their software evolves along with the sensors that are available.

On the software side, leveraging existing solutions is the only way forward: using an After Effects workflow (doing log to linear), or even the Adobe raw converter (which has been used for VFX work anyway). The new Adobe DNG format could be the one to watch because it includes the kind of RAW metadata that is required for moving images. Or perhaps CineForm RAW will open up and become a public standard.

But these solutions are for an individual system; you couldn't build a business selling a camera to third parties this way. They might get you through a single production job; come the next one, and the playing field has moved again.

I hope to document my findings from when I had a go at all this, but I'm in the middle of a production right now, so time is limited!

cheers
paul

Farhad Towfiq
June 2nd, 2008, 03:54 AM
Hello all,

We are understanding what filmmakers need a little better. We will try to make the software much easier to use before encouraging another DIY integrator to touch the camera head. Extra software features, like an automatic look-up table, have already been added. In the end, this camera head is meant to be used by artists who want to choose their own settings and compose their own look-up tables. Its purpose is to allow talented and technically capable filmmakers to differentiate their work by controlling the sensor and post-processing themselves. The extra effort can pay off for them not only in producing a unique result, but also in offering their integration work to other filmmakers for a price. Michael Dell was putting together computers for himself when his friends started asking to purchase them from him, and that became his Dell computer business.

As far as the sensor is concerned, Altasens is still by far superior to anything else in latitude and signal-to-noise ratio. The reason Red is pushing a higher pixel count is that it is the easier technology for them. They are playing Houdini on you, as he upped the challenge by suggesting that he would open the safe from the inside. The real challenge is larger, more light-absorbing, less noisy pixels, not more of them.

Gottfried Hofmann
June 2nd, 2008, 08:49 AM
There is much more to getting a complete camera than just hunting for the perfect sensor. The Sumix camera project is a very good example of this. The hardware, software, and optical aspects all need to be addressed; a weakness in any of these areas makes it all just an expensive paperweight.

There have been a variety of problems with getting a Sumix-based camera running: obstacles in communication, technical matters, and implementation.

I have reached the limits of resources to invest into this project. I will take my losses and quit. Tonight I will be requesting a return of the camera to Sumix.

I regret having to do this but I don't see any other options. This is just not going in the direction I had in mind.

This is sad. I'd like to know more about the current problems as I am very close to buying one. Same goes to Jose...

Biel Bestue
June 2nd, 2008, 01:06 PM
This is sad. I'd like to know more about the current problems as I am very close to buying one. Same goes to Jose...

Yeah, I agree, I'd like to know too.

Daniel Lipats
June 2nd, 2008, 01:35 PM
Well, in hopes of coming up with solutions, I will demonstrate...

One of the problems I faced when shooting with this camera is controlling light. It seems to be very sensitive to slight changes in light intensity. The picture below does a good job of demonstrating this:

http://www.dreamstonestudios.com/personal/daniel/SMX12A2C/images/compare.jpg

The shot was intentionally composed to demonstrate the topic. It's lit with a small light kit, plus a PAR outside the window aimed at the vase.

The HV20 image copes with the scene just fine: there are no extreme highlights, and detail is very well preserved on the corners of the table as well as under the vase. But the image is more or less flat; there is not a lot of difference between the brightest and darkest parts of the table.

The Sumix image, on the other hand, is very different. There is a huge contrast difference across the table: a big highlight under the vase, and corners that go dark as the light intensity falls. The Altasens sensor picks up a big difference between the darkest and lightest parts of the table. It can detect slight changes in light intensity, but it cannot handle a wide range of brightness.

Unfortunately, this makes shooting anything more complicated, requiring more complex light setups and a higher degree of control. This is especially true outdoors; you have to stay within a small range of light intensity.

Strangely enough, the images posted on the Silicon Imaging website for the SI-2K contradict my results. Without knowing what kind of light setup was used in those shots, it's difficult to get a good idea of exactly how differently the camera behaves, but it seems to have much less trouble with a range of light intensities.

Jose A. Garcia
June 2nd, 2008, 05:18 PM
I had the same problem. I don't know why our tests are so different from the SI-2K tests. The two cameras have different Altasens sensors, though: the SI-2K sensor is of course 2K and is listed as "Professional Broadcast", while the Sumix one is listed as "Videoconferencing". I'm quite sure that's where the main differences are.

Solomon Chase
June 2nd, 2008, 06:58 PM
This is mainly a software issue: it is not mapping the values correctly. I have calibrated SMX12A2C footage, and after tweaking it looks to have similar dynamic range to the HV20, if not more.

The other part of the problem IS hardware related, but Farhad already mentioned it is due to a bad IR cut filter, and that has been fixed.

Daniel Lipats
June 2nd, 2008, 07:51 PM
Solomon,

Do you mean tweaking as a post process to a RGB file?

Solomon Chase
June 2nd, 2008, 09:07 PM
Solomon,

Do you mean tweaking as a post process to a RGB file?

Remapping the levels with curves, and fixing the red/green color spectrum bleeding. I used After Effects.

Send me raw HD-size grabs from the Sumix and HV20 and I can match them. (Of course, the color skew from the bad IR cut filter intrinsically messes up the color matrix, but it can still look decent.)

You have to expose for the top end (highlights) to get a perfect match, though. It looks like the exposure on the Sumix is set a little too high in your example JPG.


Here's an example:
http://solomonchase.com/sumix/cc1.jpg

Noah Yuan-Vogel
June 2nd, 2008, 10:12 PM
I'm under the impression that the contrast issue you're talking about just has to do with recording and display gamma. You might just be seeing the linear response to light that any such sensor has; you'd need to use a LUT to gamma-encode the images so they look correct on most monitors for normal viewing. I remember the same thing happening on my Sumix M73. I think you need gamma encoding of around 0.45 to compensate for the normal 2.2 gamma of most displays. I believe DCI specifies 1/2.6 encoding with a 2.6 display gamma, which is closer to the human eye, so it probably keeps more relevant information and displays more naturally; someone correct me if I'm wrong. I'm sure SI's camera just compensates for this automatically, as any video camera would. Of course, these machine-vision cameras don't have these settings built in even though they have the same or similar sensors; it just takes a bit more programming.
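Noah's gamma point can be sketched in a few lines of Python (a hypothetical illustration, not Sumix or SI code): build a lookup table that raises linear sensor values to roughly 1/2.2 so they display correctly on a 2.2-gamma monitor.

```python
def make_gamma_lut(bits=8, encode_gamma=1 / 2.2):
    # Map each linear code value v to round(max * (v / max) ** encode_gamma).
    # Raising to ~0.45 compensates for a display gamma of ~2.2.
    max_code = (1 << bits) - 1
    return [round(max_code * (v / max_code) ** encode_gamma)
            for v in range(max_code + 1)]

lut = make_gamma_lut()
# Mid-tones are lifted: linear ~18% grey (code 46) encodes to code 117,
# near the middle of the display range; black and white are unchanged.
encoded = [lut[v] for v in (0, 46, 255)]  # → [0, 117, 255]
```

A 10-bit sensor would use bits=10 and a 1024-entry table; applying the same table per pixel is exactly the kind of job a hardware LUT, like the one Noah mentions on the M73, is for.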

Daniel Lipats
June 2nd, 2008, 10:19 PM
I can do better than that. Here is the video from which the screen capture was taken.

It is in CineForm RAW format, captured in real time with StreamPix 3, set to FilmScan1 quality, on a quad core (Q6600) system. You will see scanlines; this is because my (cheap) GigE network adapter is unable to keep up with the signal.

http://dreamstonestudios.com/personal/daniel/SMX12A2C/video/SMX12A2C.avi

Here is the full resolution HV20 image:
http://dreamstonestudios.com/personal/daniel/SMX12A2C/video/hv20.jpg

Solomon Chase
June 2nd, 2008, 11:31 PM
I can do better than that. Here is the video from which the screen capture was taken.

It is in CineForm RAW format, captured in real time with StreamPix 3, set to FilmScan1 quality, on a quad core (Q6600) system. You will see scanlines; this is because my (cheap) GigE network adapter is unable to keep up with the signal.

http://dreamstonestudios.com/personal/daniel/SMX12A2C/video/SMX12A2C.avi

Here is the full resolution HV20 image:
http://dreamstonestudios.com/personal/daniel/SMX12A2C/video/hv20.jpg

Noah is right... If you had the new IR cut filter and had underexposed another stop, these images could have looked identical. Here is what I was able to do:

Daniel Lipats
June 3rd, 2008, 12:18 AM
My original reply explained that my camera's filter had already been replaced with a new one.

However, I'm not sure if there is an even newer version or not.

You may be right about underexposing it a bit more, but it's heavily underexposed already. If I had underexposed any more, the only part of the table you could make out would have been the highlight. I no longer have the HV20 around to test with, so there is no way to be sure.

The HV20 image already contains a lot more data in the shadows. Dropping the exposure more on the Sumix image may have lost it all.

Farhad Towfiq
June 3rd, 2008, 12:53 AM
Daniel,

As I mentioned when we sent you the replacement, it was a temporary one. The filter you have on the SMX-12A2C is still an absorptive filter that is not quite suitable for the Altasens sensor. We use this filter on the Micron sensor with good results, as Micron has a narrower range of colors. For Altasens we are preparing interference (reflective) filters with a scratch-resistant hafnium coating. The design of the head has also changed so the filter can be screwed in and out in a few seconds with your fingers, without using a tool.

Daniel Lipats
June 3rd, 2008, 01:02 AM
Farhad,

When can we expect the updated filter and software to be ready? Do you think they could make substantial improvements?

Farhad Towfiq
June 3rd, 2008, 01:21 AM
I'll check the progress tomorrow and let you know. I expect the end of June is reasonable, but you cannot count on it, as the same person coating the filters is also preparing beamsplitters for interferometers.

Paul Curtis
June 3rd, 2008, 01:56 AM
You may be right about underexposing it a bit more, but it's heavily underexposed already. If I had underexposed any more, the only part of the table you could make out would have been the highlight. I no longer have the HV20 around to test with, so there is no way to be sure.

I think this highlights (no pun intended :) a fundamental difference with these DIY cameras. The signal you get out is much closer to what the sensor saw, whereas in any grown-up camera an enormous amount of colour processing happens before anyone sees the image (and not just a simple LUT either). In the DIY world there are no software tools that handle this processing. Sure, you can use AE and get most of the way there (the Cineon plug-in is quite useful), but most of these cameras need different LUTs for different exposures, and colour fixing as well. Take Vos was doing a lot of work on this for his Pike camera head, and I'm not sure how he got on in the end.

That's one component all DIYers would need: perhaps an AE plug-in that aids in calibration and demosaicing.

SI must have done this work, along with all the software tools to aid in focusing and controlling the image. It's no mean feat, and it fully justifies the cost of their solution.

CCDs are easier than CMOS, as the raw image out is much closer to the final one.

Also, in Daniel's lens tests (the resolution charts), the particular lenses being tested really weren't that good. And whilst some of the machine vision lenses do seem to produce nice images, a lot of them are very substandard (after all, they're mostly CCTV lenses). The Fujinons are nice, and I never managed to test the Schneiders, which look good on paper.

Obviously, with the 2/3" size of the Sumix you could use DigiPrimes (as SI demonstrate), despite the fact that they're really designed for prism systems. But then you'll need PL-to-C-mount adapters and probably a head case redesign. Also, who has a set of DigiPrimes lying around? If you did, you'd probably have a good camera too... The alternative is the huge array of 16mm lenses, but a lot of these won't cover the sensor fully and in fact really aren't that good resolution-wise (I tried Cookes and Switars).


cheers
paul

Noah Yuan-Vogel
June 3rd, 2008, 08:31 AM
I don't think this should be that much of an issue. I'm certain Sumix has the ability to do LUTs in hardware, as they did with my M73. Perhaps it just isn't implemented yet, or Daniel hasn't gone to the trouble of programming it yet. This isn't stuff that should be done in After Effects in a normal workflow.

Paul, what makes you say raw images from a CCD are easier? I'm not sure the differences are significant. Every sensor out there naturally has a linear response to light; whether the camera automatically corrects for colorspace/gamma etc. is up to the manufacturer/programmers.

Farhad Towfiq
June 3rd, 2008, 08:41 AM
Daniel,

By enabling hardware gamma correction, you can compress the high end of the brightness range and avoid saturation.

The new firmware also allows altering the exposure time frame by frame, producing a short-exposure frame alongside each normal-exposure frame, so variation in high-intensity spots can be captured.
All the extra possible software niceties are difficult to implement, publish and support as standard features, but someone with strong programming skills can easily turn the flexibility of the sensor/firmware to their own advantage.
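The alternating normal/short exposure that Farhad describes lends itself to a simple highlight-recovery merge. A hypothetical Python sketch (the real firmware and frame format are not documented here; frames are modelled as flat lists of linear values):

```python
def merge_exposures(long_frame, short_frame, ratio, clip=255):
    # Where the normal (long) exposure has clipped, substitute the
    # short-exposure reading scaled up by the exposure ratio.
    return [sv * ratio if lv >= clip else lv
            for lv, sv in zip(long_frame, short_frame)]

# With a 4x-shorter second exposure, a clipped pixel (255) is replaced
# by the short frame's reading of 80, i.e. a linear value of 320:
hdr = merge_exposures([100, 255, 30], [25, 80, 7], ratio=4)  # → [100, 320, 30]
```

The merged values exceed the original bit depth, so a scheme like this only pays off if the pipeline after it (gamma curve, codec) can carry the extended range.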

Daniel Lipats
June 3rd, 2008, 08:59 AM
Farhad,

I have no such options. I have not received any new software; I'm still using the original Sumix software that the camera came with.

Paul Curtis
June 3rd, 2008, 08:59 AM
I don't think this should be that much of an issue. I'm certain Sumix has the ability to do LUTs in hardware, as they did with my M73. Perhaps it just isn't implemented yet, or Daniel hasn't gone to the trouble of programming it yet. This isn't stuff that should be done in After Effects in a normal workflow.

Paul, what makes you say raw images from a CCD are easier? I'm not sure the differences are significant. Every sensor out there naturally has a linear response to light; whether the camera automatically corrects for colorspace/gamma etc. is up to the manufacturer/programmers.

Can you apply different LUTs at different exposures (a 3D LUT?)

The CCD vs CMOS thing is an observation. Every 'raw' image out of a CMOS sensor that I've personally seen looks desaturated, and the colour red is usually off (even the RED camera images seem very flat). CCD, on the other hand, seems more faithful to the scene. Whether this is an issue of pixel size (perhaps, because CMOS has on-pixel electronics reducing the photosite size compared to a similar CCD), or whether it's fundamentally down to a difference in materials, I don't know. If anyone can agree with or refute my observations, I'd like to hear more and understand why.

cheers
paul

Daniel Lipats
June 3rd, 2008, 09:00 AM
Noah,

The image I originally posted was not white balanced, and no LUT was set. This is because the image was recorded as CineForm RAW: white balance and LUT adjustments are not baked into the video, you can adjust them in post. It's only metadata.

The video file I provided is a RAW file with the Bayer data preserved. This is sensor data without any processing; my understanding is that the only thing permanent about it is the exposure. If you like, you can apply LUTs to this file and see what you can do.
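The "white balance as metadata" idea can be illustrated with a small sketch (hypothetical Python, not the CineForm SDK): the decoder applies per-channel gains at playback, leaving the recorded data untouched, so the balance can be changed freely in post.

```python
def apply_white_balance(rgb_pixels, gains):
    # Apply per-channel gains at decode time; the recorded RAW data stays
    # as shot, only this metadata changes between playbacks.
    r_gain, g_gain, b_gain = gains
    return [(r * r_gain, g * g_gain, b * b_gain) for r, g, b in rgb_pixels]

# A warm cast is neutralised on playback without re-recording anything:
balanced = apply_white_balance([(256, 128, 64)], gains=(0.5, 1.0, 2.0))
# → [(128.0, 128.0, 128.0)]
```

In practice the gains would be derived from a grey reference in the scene, and the same principle extends to the LUT: it is just another decode-time transform stored alongside the footage.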

Daniel Lipats
June 3rd, 2008, 01:20 PM
Here are a few more videos. Most of them are just random tests.

Shot with StreamPix 3 using the CineForm RAW codec, at FilmScan1 quality, on a Core 2 Quad system.

Note that the scanlines and any other visible artifacts are due to the cheap network card being unable to keep up with the bandwidth.
[Please forgive the shake. The camera and touchscreen LCD were mounted on a not very stable support.]

http://www.dreamstonestudios.com/personal/daniel/SMX12A2C/video/1.avi
http://www.dreamstonestudios.com/personal/daniel/SMX12A2C/video/2.avi
http://www.dreamstonestudios.com/personal/daniel/SMX12A2C/video/3.avi
http://www.dreamstonestudios.com/personal/daniel/SMX12A2C/video/5.avi
http://www.dreamstonestudios.com/personal/daniel/SMX12A2C/video/6.avi
http://www.dreamstonestudios.com/personal/daniel/SMX12A2C/video/7.avi
http://www.dreamstonestudios.com/personal/daniel/SMX12A2C/video/underExposed.avi

http://www.dreamstonestudios.com/per...2C/video/8.avi
http://www.dreamstonestudios.com/per...2C/video/9.avi
http://www.dreamstonestudios.com/per...2C/video/10.avi

Farhad Towfiq
June 3rd, 2008, 03:23 PM
Daniel,

For new software, please send an email to our support.
I suggest, if you decide to continue with this project, keeping it as simple as possible. Mechanical integration, lenses, computers, displays, power supplies, programming, third-party software, and endless testing are just too much for a single person. For a special effects team with a reasonable budget this activity makes sense. It is funny that Jose was/is contemplating starting from a bare sensor.

Jose A. Garcia
June 3rd, 2008, 04:46 PM
I'm not so crazy...

Sometimes I like to think I can do things like creating a fullHD camera from scratch just by myself, but those thoughts don't last.

What I'm actually considering is buying the camera again once I have a little money to spare. My production team is keeping the HV20, and I'm always looking for things to do in my free time. I still think we can develop a great low-cost digital cinema cam.

Gottfried Hofmann
June 4th, 2008, 07:53 AM
I suggest if you decide to continue with this project to keep it as simple as possible. Mechanical integration, lenses, computers, display, power supply, programming, third party software, and endless testing is just too much for a single person.

Well, I just want to attach a lens, plug the cam into a notebook and start recording. So no mechanical integration, display, or power supply; just software (maybe programming) and endless testing ;)

Jason Rodriguez
June 4th, 2008, 02:22 PM
Hi Daniel,

I'm looking at all your RAW files, and it appears as though the physical column arrangement of the sensor is reversed when it's being recorded.

So for example, your column alignment rather than being column:

1, 2, 3, 4, 5, 6 . . .

is more like 3, 1, 2, 6, 5, 4 . . .

This is causing some significant aliasing artifacts that shouldn't be there. This arrangement would cause the RAW encoder to think the Bayer pattern phase is still the same, but the column mis-alignment will cause some serious issues.

It could also simply be a pairwise column mismatch (rather than the three-way out-of-order arrangement above), meaning instead of column:

1, 2, 3, 4, 5, 6 . . .

it's

2, 1, 4, 3, 6, 5 . . .

This would cause a Bayer phase reversal and bad aliasing as well. If the Bayer phase was guessed correctly, you would be getting the correct color, but the aliasing would not be removed.

The point is that your images are demonstrating aliasing that shouldn't be there. They should be much smoother.

Thanks,

Jason
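The second mis-ordering Jason describes (2, 1, 4, 3, 6, 5 . . .) is a pairwise column swap, which is cheap to undo in software once diagnosed. A hypothetical sketch:

```python
def unswap_column_pairs(row):
    # Swap each even/odd column pair back: 2,1,4,3,6,5 → 1,2,3,4,5,6.
    # Restoring the order also restores the Bayer phase that the swap
    # reversed. Assumes an even number of columns.
    fixed = row[:]
    for i in range(0, len(fixed) - 1, 2):
        fixed[i], fixed[i + 1] = fixed[i + 1], fixed[i]
    return fixed

# Columns recorded as 2,1,4,3,6,5 come back in sensor order:
order = unswap_column_pairs([2, 1, 4, 3, 6, 5])  # → [1, 2, 3, 4, 5, 6]
```

The first mis-ordering (3, 1, 2, 6, 5, 4 . . .) would need its own permutation table, but the principle is the same: reorder the columns before the encoder sees them.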

Daniel Lipats
June 4th, 2008, 03:05 PM
Jason,

Thank you for your analysis. This is something we have been working on. The videos are indeed showing aliasing:
http://dreamstonestudios.com/personal/daniel/SMX12A2C/images/aliasing.jpg

The aliasing is most prominent when CineForm is set to the "Playback - Quality" setting.
When using "Playback - Fast", the aliasing is gone.

Hopefully this is something that will be fixed soon.

Thanks a lot.

Jason Rodriguez
June 5th, 2008, 06:00 AM
The reason you don't see it as much on "Fast" is that the algorithm there is a simple bilinear demosaic... still, even with bilinear, I'm seeing it on my machine here, and the resolution loss is very evident.

Thanks,

Jason
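A "simple bilinear demosaic" of the kind Jason mentions fills in each missing colour by averaging the nearest neighbours of that colour. A minimal sketch for the green channel (hypothetical, interior pixels only):

```python
def bilinear_green(bayer, x, y):
    # Average the four green neighbours above, below, left and right of a
    # red/blue site -- the core of a simple bilinear demosaic. Edge
    # handling is omitted, so (x, y) must be an interior pixel.
    return (bayer[y - 1][x] + bayer[y + 1][x] +
            bayer[y][x - 1] + bayer[y][x + 1]) / 4

raw = [[0, 10, 0],
       [20, 99, 40],   # 99 sits on a red/blue site; its green is interpolated
       [0, 30, 0]]
g = bilinear_green(raw, 1, 1)  # → (10 + 30 + 20 + 40) / 4 = 25.0
```

This averaging is what softens the image on "Fast": every interpolated value is a low-pass blend of its neighbours, which masks the column-order aliasing but also costs resolution, exactly the trade-off Jason observes.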

Farhad Towfiq
June 5th, 2008, 04:29 PM
Daniel, does the codec software confuse the order of the colors? We do not see this problem in our Bayer data and deBayered images. Perhaps the codec is assuming a different GRGB order than the sensor is putting out.

Daniel Lipats
June 5th, 2008, 07:53 PM
Farhad,

When selecting the CineForm RGB codec, you have to specify the pixel order. I have tried all 4 pixel order settings; this one is the only one that works at all. It's also the same pixel order as in the documentation.

I don't think there is a problem in the Sumix Bayer data. The videos the Sumix software outputs seem to display correctly.

I am working with NorPix to resolve this issue.

Daniel Lipats
June 17th, 2008, 11:31 PM
Here is a bit of an update...

First of all, the bug with CineForm has been fixed. New versions of CineForm should work fine now. I have not had a chance to test it yet.

I did have a chance to test some new software from Sumix for the camera tonight. It's getting better: it has more features and is becoming easier to use. It has a new RAW capture mode, along with the old AVI one. I was very happy to see zoom options, so now the video can be scaled down to fit on lower resolution monitors without cropping.

However, it still lacks a few basic features which are very important to our needs. Here they are summarized:

1) Preview
When recording, the video preview shuts down. There's no way to watch what you're capturing.

2) Recording
You still have to specify ahead of time how many frames you will record; for example, the default is 100. Once you hit record, the software captures 100 frames and, when complete, dumps them to the hard drive. I can see the need for this because a single hard drive is simply too slow. A RAID setup would be required for real-time capture to disk.

Right now the Sumix software supports an uncompressed AVI format. If a 3-second uncompressed AVI video is ~600 MB, then 60 minutes of video is about ~720 GB.

I think we have to consider some form of compression.
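The back-of-envelope arithmetic behind those figures (taking the ~600 MB per 3 seconds quoted above as given, without assuming a particular pixel format):

```python
# Sustained data rate implied by the quoted clip size, and the
# resulting storage for an hour of footage.
clip_mb, clip_s = 600, 3
rate_mb_s = clip_mb / clip_s           # MB per second sustained
hour_gb = rate_mb_s * 3600 / 1000      # 60 minutes, in GB
print(rate_mb_s, hour_gb)              # 200.0 MB/s, 720.0 GB
```

A 200 MB/s sustained write is well beyond a single 2008-era hard drive, which is consistent with the buffered capture-then-dump design and the case for RAID or compression.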

Biel Bestue
June 18th, 2008, 05:18 PM
But we should still have the option to shoot uncompressed; having the choice is always welcome.

Peter Moretti
June 19th, 2008, 12:07 AM
Can you apply different LUTs at different exposures (a 3D LUT)?

The CCD vs. CMOS thing is an observation. Every 'raw' image out of a CMOS sensor I've personally seen looks desaturated, and the color red is usually off (even the Red camera images seem very flat). CCD, on the other hand, seems more faithful to the scene. Whether this is an issue of pixel size (perhaps), because the CMOS has onboard electronics in each pixel reducing the size compared to a similar CCD, or whether it's fundamentally because of the difference in materials, I don't know. If anyone can agree with or refute my observations, I'd like to hear more and understand why.

cheers
paul

Paul, I would imagine this has to do with the fact that most of the CMOS cameras are single-sensor designs. They use a Bayer sensor pattern that over-represents green at the expense of red and blue, hence raw images will look greenish and desaturated.

On the other hand, most of the CCD cameras are three-sensor designs, so you are getting the full amount of raw color.

This may be the explanation for what you are seeing.
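The "over-represents green" point is easy to make concrete: in a Bayer mosaic, half the photosites are green and only a quarter each are red and blue. A small NumPy sketch (the 1080-line frame size is just for illustration):

```python
import numpy as np

# One RGGB tile, tiled out to a full-frame mosaic of color labels.
tile = np.array([["R", "G"],
                 ["G", "B"]])
frame = np.tile(tile, (540, 960))      # 1080 x 1920 label mosaic
fractions = {c: (frame == c).sum() / frame.size for c in "RGB"}
print(fractions)                       # R: 0.25, G: 0.5, B: 0.25
```

This 2:1:1 sampling mirrors the eye's luminance sensitivity, but it means two thirds of each output pixel's color is interpolated rather than measured, unlike a three-sensor design.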

Jason Rodriguez
June 19th, 2008, 12:46 AM
Not quite true . . . just look at the Viper, a 3-CCD camera that still has green as its most sensitive color. Also, the Andromeda modification for the DVX100 showed magenta highlights, where you find that the green has clipped before the red and blue channels.

The "green" can be removed very simply by white balancing, though. So when you see a greenish RAW image, it simply means no white balance has been applied.
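One simple way to do that is a gray-world balance: scale each channel so its mean matches the green channel's mean. A NumPy sketch (gray-world is an assumed method here, the post only says "white-balancing"; the random image and 1.5x green boost are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.uniform(0.0, 1.0, (4, 4, 3))   # tiny synthetic RGB frame
img[..., 1] *= 1.5                       # simulate a green-heavy raw image

# Gray-world gains: bring every channel's mean up to green's mean.
gains = img[..., 1].mean() / img.mean(axis=(0, 1))
balanced = img * gains
print(np.round(balanced.mean(axis=(0, 1)), 6))  # all three means now equal
```

Green gets a gain of exactly 1, and the red/blue gains lift the other channels to match, which is why the cast disappears without touching the underlying RAW data.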

Thanks,

Jason

Daniel Lipats
July 13th, 2008, 03:16 PM
I have been playing with different settings, and I'm finally starting to get some good, clean images from the Sumix camera. It's giving my Panasonic DV camera a run for its money in terms of dynamic range and sensitivity.

Here are some tests I shot this morning, set at 12-bit, 24fps, 0 gain. Unfortunately I forgot to white balance! So they all have a bit of a green tint to them, which I tried to remove with some post work.
http://dreamstonestudios.com/delete/frame63_zeiss.jpg
http://dreamstonestudios.com/delete/frame_lens2.jpg

Here is a still from the same setup taken with a Panasonic 3CCD DV camera:
http://dreamstonestudios.com/delete/IMGA0245.JPG
http://dreamstonestudios.com/delete/IMGA0241.JPG

I think my next move should be getting my hands on a real HD C-mount lens. The images are a bit soft, and I think they can be improved a lot.

I'm starting to like the results.

Noah Yuan-Vogel
July 13th, 2008, 06:03 PM
Good to see you are still working on this. The images look nice, but I can't tell if the edge blurring is from poor lenses or odd bokeh. And is it just me, or does it look like there is still some infrared light being picked up? I remember finding that without correct IR filtration on my old Sumix camera, images had a similar reddish tint and some diffusion from the IR light not focusing correctly. It doesn't look as bad on yours compared to my old camera, but it does still appear to be there, at least to my eyes. I'm not even sure how you can compare to your Panasonic (GS series?) DV camera... :P

Daniel Lipats
July 13th, 2008, 06:56 PM
Noah,

I think it's just poor lenses. Something is odd about this Zeiss one especially. The picture with the "lens2" prefix is sharper. I'm looking to spend $300-$500 on a decent C-mount prime, but I can't find anything I like. I don't want to make another uneducated purchase; I have spent over a grand on poor lenses already.

The reddish tint is actually from my rushed color correction attempt. It hurt the color quality a bit but does look better; it was pretty green. I will be shooting more tests and will make sure to white balance.

Yeah, I think it's a GS150. I wanted to show what the scene looks like through a CCD camera, and it's the only one I have here at home. I hope to put it head to head with prosumer HD cameras soon, but first I need better optics and a faster network card.

Biel Bestue
July 16th, 2008, 05:17 AM
Daniel, how is the camera performing at low light levels? At what point does grain appear?