View Full Version : 4:4:4 10bit single CMOS HD project
Rob Scott June 11th, 2004, 08:25 AM <<<-- Originally posted by Laurence Maher :
What I'm trying to figure out is, just how do i get my footage into Mac and FCP for editing, as I'm sick of PC non-stable systems. -->>>
OK, I'm not sure when this would be possible (it would require a 4- or 8-drive array inside a small enough box, etc.) ... But consider this possibility:
1 - Self-contained camera unit that captures to raw files
2 - Download the (large) raw files to your Mac
3 - Overnight run processing software to do Bayer filtering, color correction, gamma, etc. and compress to Aspect HD
4 - Delete the raw files and use the Aspect HD files as your "masters"
Obin Olson June 11th, 2004, 08:35 AM Mike, let's stay in the here and now. I have a camera system... it shoots NOW... that's what this thread is about. I understand what you want for a system, but I am not selling a system. I am building one, and if you wanted one you could just follow in my footsteps and build one from what I am doing. Let's keep this on-track and about what we need NOW for our camera system(s) so we can go SHOOT, and stop the idle chit-chat.
Steve Nordhauser June 11th, 2004, 08:40 AM I agree with both of you. Rob needs to plan the future a bit since he is considering a large outlay of time (Rob, you've got mail) and needs a roadmap to cover where he is going. The work needs to be methodical, building up. Obin, you are right that basic functionality will be all that is required for a first usable release.
Mike Metken June 11th, 2004, 08:45 AM Obin,
agree. I just wanted to stress the fact that inexpensive field recording is available now straight into your NLE. The camera based recorder is further down the road.
Rob,
I think that Aspect HD only works with Windows and it is 8 bit. Prospect HD is 10 bit but for the next 4 months will only be sold packaged with Boxx RT.
Mike
Rob Scott June 11th, 2004, 08:54 AM <<<-- Originally posted by Mike Metken :
I think that Aspect HD only works with Windows and it is 8 bit. -->>>
Crap. Wasn't thinking about that. What about DVCPRO HD for FCP? Can you get that codec separately?
<<<-- Originally posted by Steve
...needs a roadmap to cover where he is going. The work needs to be methodical, building up. -->>>
Exactly. I'm going to start in Windows, with just basic CameraLink capture, preview, Bayer and output to 16-bit TIFF. Period.
But I'm trying to choose cross-platform tools so that the software could be easily ported to Mac OS X and Linux. I'm also going to try to develop it in a modular fashion so that parts of it could be used in embedded "firmware" inside a camera box itself.
(Unfortunately, IIRC, the CameraLink SDK doesn't support Mac OS at all. Just Windows and Linux.)
Obin Olson June 11th, 2004, 09:00 AM that codec is 8 bit
Mike! C'mon, man, are you going to drag your AVID onto the set with all this software loading the system and try to hack a recording system out of it?? This is getting silly... we are NOT going to use our NLE computer TOWER to capture with. I am building a VERY small "capture" box that is FAR cheaper than my NLE system and MUCH better for this task!!
Rob, I was up till 4am last night reading, and it sure looks like you could (in the future) build a box with an FPGA that could be programmed to capture and spit out image files. Don't let this stray you from the path of BASIC software... but think how awesome that would be in the future: no PC at all, just camera, FPGA system and disk drives!
Mike Metken June 11th, 2004, 09:10 AM Rob,
DVCPRO HD is 8 bit also. I don't think that Panasonic would sell it separately, maybe only as some super expensive hardware solution.
10 bit gets expensive. When Prospect HD becomes available it may be $5K for the software alone. David Newman would know better.
I'd be satisfied with 8 bit Aspect HD. It is 4:2:2, up to 1440x1080p. You get the same or better quality than CineAlta recording. Major studio film productions have used that.
Obin,
I don't want to drag a computer to the set either. But you need a large monitor anyway. How are you going to hook it up to your camera?
Mike
Obin Olson June 11th, 2004, 09:16 AM 8bit sucks; what you see is what you get with 8bit. 10bit is a lot more like film: you can push it in post. We need to stay with 10bit, UNLESS we want to shoot at a high framerate, in which case we can use 8bit for more fps recording if needed.
Mike Metken June 11th, 2004, 09:19 AM Obin,
If you want 10 bit, then Mac with FCP HD is the best. Boxx RT with Prospect HD is over $25K.
Mike
Obin Olson June 11th, 2004, 09:24 AM Mike, FYI, you can open and color correct with Combustion, After Effects, etc. and then downrez and compress for editing in Premiere Pro.
Rob Lohman June 11th, 2004, 12:55 PM Obin: I thought we always needed to shoot at higher framerates due to the rolling shutter? Or can we fix this in some other way? What did you find in regards to the FPGA solution you are talking about? Rob and I have been talking a bit about FPGAs through e-mail.
Rob Scott June 11th, 2004, 01:10 PM <<<-- Originally posted by Rob Lohman : Obin: I thought we always needed to shoot at higher framerates
due to the rolling shutter? Or can we fix this in some other way?
-->>>
This is AIUI ("As I Understand It") ...
You can shoot at a standard frame rate (24 fps). Reading out at the maximum possible speed takes far less than 1/24 of a second, so you put "blanking" time between frames. This simulates a standard 1/48 second shutter speed (or faster) when you are shooting 24 fps.
I calculate that reading a frame from the chip actually takes about 1/70 second. Breaking this down into decimals ...
1/70 sec = 0.0143 sec
1/24 sec = 0.0417 sec
So, to capture at 24 fps you do the following steps ...
capture a frame in 0.0143 seconds
blank for 0.0274 seconds
capture the next frame ... etc.
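A quick back-of-the-envelope sketch of those steps (the 1/70 s readout figure is the estimate from this post, not a measured value):

```python
# Timing for 24 fps capture with blanking.
# ASSUMPTION: the chip reads out a full frame in ~1/70 s, per the
# estimate above.
READOUT_S = 1.0 / 70    # time to read one frame off the chip
TARGET_FPS = 24

frame_period = 1.0 / TARGET_FPS    # 0.0417 s between frame starts
blank = frame_period - READOUT_S   # dead time inserted after readout

print(f"readout {READOUT_S:.4f} s + blank {blank:.4f} s = {frame_period:.4f} s")
```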
Does this sound right, Steve?
Steve Nordhauser June 11th, 2004, 01:16 PM Although it is possible to adjust the vertical blanking time somewhat, what you proposed hasn't been attempted yet. It might take some firmware changes in the camera. What Obin was proposing is based on the rolling shutter artifact. Each line is sequentially read from top to bottom. That means that on any image there is a temporal difference of one frame time from the top line to the bottom. The faster you go, the less the difference. Obin wants to read out at twice the frame rate he needs and toss every other frame. This will get him the desired rate with no timing adjustments, and cut the single-frame readout time in half.
I will have to think about what Rob suggested though - extending vertical blanking for a full frame time.....hmmmm.
Rob Scott June 11th, 2004, 01:23 PM <<<-- Originally posted by Steve Nordhauser :
Obin wants to readout at twice the frame rate he needs and toss every other frame. -->>>
OK, then, it would be ... ?
capture a frame in 0.0143 seconds
blank for 0.0065 sec
capture a frame in 0.0143 seconds - throw it away
blank for 0.0065 sec
et cetera...
I think that would be equivalent. I will have to read the documentation and figure out at what resolution it's possible to program the readout and blanking times.
Rob Lohman June 11th, 2004, 01:35 PM Somehow I have a very hard time understanding this rolling shutter thing. I understand how you normally want to wait and capture a frame to get a certain frame rate.
I was, however, perhaps incorrectly assuming that since this is all digital I would simply get a frame in the frame grabber's buffer. I thought that was the main reason it was there. In other words, ask for 24 fps and you get a buffer that gets filled with information 24 times a second.
How to sync all of this is a question indeed.
Rob Scott June 11th, 2004, 01:44 PM <<<-- Originally posted by Rob Lohman : ... I would simply get a frame in the frame grabber's buffer. I thought that was the main reason it was there. In other words, ask for 24 fps and you get a buffer that gets filled with information 24 times a second. -->>>
I think you can control it more finely than that by setting the vertical blanking interval (etc.). I'm not quite certain exactly how this works, 'cause I haven't read through the docs thoroughly yet. I'm going to shut up now because I am at the limit of my knowledge about this. :-)
Steve Nordhauser June 11th, 2004, 01:59 PM Mainly, this is a question of what you want to know, at what level of detail. Once you get to the PC side, yes, you request a frame and you get the next frame appearing in a buffer. But, because what the camera is doing will influence your imaging, you might want to understand the camera side.
Basically, there are two types of sensor shutters - full frame (also called global or asynchronous) and rolling (synchronous). Full frame works like film exposure - expose the entire surface, stop exposing, and then read out. With few exceptions (Micron Truesnap being one, and some CCD architectures), the two are sequential, so you can't expose for a full frame time. But there is no difference in time between a pixel at the top of an image and one at the bottom. A pencil held vertically and moved horizontally will be blurred by the motion during exposure time, but that is all. To do 24fps with a reasonable exposure on an IBIS5, you need to expose for 1/48 sec and read out in 1/48th of a sec.
Rolling shutter is different. Each line is read and reset in turn; you roll down through the image. You can also roll a second reset to get different exposure times, but there is always one frame time of difference from the top line to the bottom. Most objects don't fill the screen, so the effect isn't too pronounced on smaller objects. Our pencil, though, will have both blur from motion during exposure and a slanted look due to the time difference. On the Micron, you can expose for up to 1/24th of a second.
Obin's plan is to run the Micron at 48fps, take every other frame and chuck it. You now have 24fps. You also have frames that read out in 1/48th of a sec, minimizing the rolling shutter effect. Rob suggested instead having a longer blanking time (dead time between frames) - the regular blanking plus 1/48th of a sec - so no second frame is read out only to be thrown away later. That lowers the average data rate and reduces the storage requirement.
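As a hedged numeric sketch of the per-line readout Steve describes - assuming a 720-line frame, per the 1280x720 resolution discussed in this thread - the temporal skew works out like this:

```python
# Rolling-shutter skew: each line is read in sequence, so a line's
# capture time is offset from the top of the frame in proportion to
# its position. ASSUMPTION: 720 active lines (1280x720 capture).
LINES = 720

def line_offset(line, readout_s, lines=LINES):
    """Temporal offset of `line` relative to the frame's top line."""
    return (line / lines) * readout_s

# Reading out at 48 fps instead of 24 fps halves the top-to-bottom skew,
# which is why Obin reads at 48 and discards every other frame:
skew_24 = line_offset(LINES, 1.0 / 24)   # ~41.7 ms of slant at 24 fps readout
skew_48 = line_offset(LINES, 1.0 / 48)   # ~20.8 ms at 48 fps readout
```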
Jason Rodriguez June 11th, 2004, 02:01 PM <<<-- Originally posted by Obin Olson : Rob I was up till 4am last night reading shit and it sure looks like you could (in the future) build a box with a FPGA that could be programmed to capture and spit out image files...don't let this stray you from the path if BASIC software..but think how awesome that would be in the future..no PC at all jsut camera FPGA system and disk drives! -->>>
It's called the Kinetta HD camera. Should be shipping by the end of the fall/winter.
Obin Olson June 11th, 2004, 02:54 PM guys I have some footage to show...I am very impressed
I will encode it to windowsmedia 9 HD
Mike Metken June 11th, 2004, 03:06 PM Guys,
Just a little thought. You should look at this from the commercial point of view. If you create a small storage unit that could be made to work with box (POV) cameras, you can really cash in on this.
There is a Sony HDC-X300 1080p camera that costs $15K. Canon HD auto focus/manual lens for it is $7K. It outputs HD SDI. If you can make the storage box work with this camera, you've made a Sony F900 killer.
That's why I think that this should be as mainstream as possible. If the world out there is all HD SDI, this project should be too.
If you make your storage HD SDI and sell 2,000 of them at $2K profit each to the industry, you've made $4,000,000 and you can buy each of us one of the complete systems as a little tip. You normally tip 15%. In your case we'll settle for a lot less.
My idea is to keep to industry standards. The same as for the storage goes for those industrial cameras.
You do something non-standard and you could infringe on somebody's patent without even realizing it. But if the whole industry does it and there are no patent labels on the other cameras, you're most likely OK.
Mike
Obin Olson June 11th, 2004, 03:19 PM link to HD wmv file... framerate got jacked somehow... dunno
www.dv3productions.com/Video Clips/HD-test.wmv
Dennis Jakobsen June 11th, 2004, 03:39 PM That looks very impressive...
Could easily have fooled me into thinking it was film. The contrast seems great. The colors seem very natural, and clean...
Hope you are able to fix the framerate...
Jason Rodriguez June 11th, 2004, 03:43 PM Obin,
Very cool stuff.
BTW, is the "shutter" speed too high? That's what it seems like from the looks of things.
Also, I take it you were recording at 8-bit instead of 10-bit? Just wondering, because again, there's some really harsh clipping in the highlights. There's no knee on these cameras - everything, I assume, is perfectly linear - so there's nothing to simulate the S-curve of film. So when you're slamming into the highlights, you are literally "slamming" right into them - and boy can they hurt! :-)
Mike Metken June 11th, 2004, 03:52 PM I don't think that you can have a linear output unless you are using something like 16 bit sampling. You need to have the knee at least for the highlights.
Mike
Jason Rodriguez June 11th, 2004, 03:56 PM <<<-- Originally posted by Dennis Jakobsen : Could easily have fooled me into thinking it was film. The contrast seems great. -->>>
Whoa there, I sure hope not, not with those highlights. Now I think when we get into 10 or 12-bit that we'll be talking about quality there. But for right now I see the promise, but not exactly the results. Then again, I totally understand that this is a project in the works, and I'm really excited to see where this is going.
BTW Obin, that's a really nice board you found. One question though: will this require a huge power supply like normal PCs (300W+)? If so, that could definitely take away the portability aspect of things.
Also, where are you doing your Bayer conversion right now? Inside XCAP? Rob, if you're going to write software, what type of algorithm are you planning to incorporate? I think right now the weakest link is the Bayer algorithms out there, which produce either 'steppy' or 'striped-dot' edges, and/or blue/orange color aliasing. Actually, normal DSLR cameras do this too when they reach the limiting resolution of the chip, but the conversion software has a special "false color filter" (at least that's what Canon calls it) that removes the blue/orange color moire problems wherever they do occur. So I think a good algorithm (even the best algorithms will produce color aliasing when the limit of the sensor is reached) combined with this approach (false color filtering) should give some really nice results.
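For illustration only, here is a minimal bilinear demosaic sketch (Python/NumPy, assuming an RGGB mosaic layout) - the simplest of the algorithms in question, and the kind most prone to the 'steppy' edges and color aliasing described above; the better algorithms interpolate along detected edges instead:

```python
import numpy as np

def box3(a):
    """3x3 box sum with zero padding (avoids a SciPy dependency)."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def demosaic_bilinear(raw):
    """Bilinear demosaic of an RGGB mosaic. raw: 2-D array -> HxWx3 RGB."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    r_mask = np.zeros((h, w), dtype=bool)
    b_mask = np.zeros((h, w), dtype=bool)
    r_mask[0::2, 0::2] = True       # red photosites
    b_mask[1::2, 1::2] = True       # blue photosites
    g_mask = ~(r_mask | b_mask)     # green photosites (the other half)
    for ch, mask in ((0, r_mask), (1, g_mask), (2, b_mask)):
        known = np.where(mask, raw, 0.0)
        # each output pixel = average of the known same-color neighbors
        rgb[..., ch] = box3(known) / np.maximum(box3(mask.astype(float)), 1e-9)
    return rgb
```

Averaging straight across edges like this is exactly what smears fine detail into the false colors Jason mentions; a "false color filter" pass would then suppress the residual blue/orange moire.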
Hi Steve,
Are you saying that with global shutters you can't expose for the entire duration of the frame, like a 360 degree shutter on film? For instance, on the Panasonic, Sony, etc. you can turn the shutter "off", so that you are exposing for the entire duration of the frame, i.e., 1/24th of a second at 24fps. This produces a lot of motion blur. Now, if you can't do this with a global shutter, then none of the current HD cameras is using a global shutter; they must be using a rolling shutter, unless their CCDs are set up with a full frame that's masked over black - I've seen CCDs like that, where one part is exposed and the other part is blanked over. But I don't believe most are like that, so they probably have rolling shutters. If so, I've never seen what would appear like strange "slanting" artifacts, typically because there's so much motion blur when you have the "shutter" open that long. Does anybody know anything else about this?
Obin,
Is MicroATX small enough? 9.6" by 9.6" is pretty big IMHO - at least too big to make a nice box that you can carry on your shoulder without it sticking up way too high. Mini-ITX seems better at around 6.8x6.8, but there's nothing that size with a 64-bit PCI-X slot. Hmm, oh well . . .
Obin Olson June 11th, 2004, 04:57 PM Jason, I want you to know this is ALL 8 bit, and 8 bit SUCKS hard... BUT even with that in mind, take a look at the truck jib-like shot: the truck was almost totally dark in the foreground and the background is hard sunlight. EVEN in 8 bit I was able to pull the truck UP so you could see it. In 10bit you have so much more, BUT even in 8 bit that is darn good in my book. Notice the shot with me in it (yeah, that is me after 2 hours of sleep last night) - that shot was exposed IN 8 bit for my face, and you see the background is GONE. If I had exposed like I did for the truck, then we would still have the background and could pull my face up and out of the blacks. Try that with the Varicam! Bottom line: EVEN in 8 bit this thing is amazing in how much you can pull from the darks. This comes from the fact that it's 4:4:4, so the darks have NO compression in them. If only I could capture 10bit 24fps!!!!!!!!!!!! One thing I notice is this chip (CMOS) has a MUCH softer look than CCD chips do when you hit the upper limits. I like it!
it's great to see some footage at last eh?(even 8 stinkin bits!!)
I am going with microATX. It's good enough, and I am betting that a microATX board with PCI-X is $1,000 or more because only one company makes one! So I will go with standard PCI and dual SATA for a 2-disk RAID, using a 3GHz P4, 2 gigs of RAM, 1 IDE OS disk and 2 SATA drives inside a microATX case, plus a 7-inch 1024x768 display that mounts on the camera and a 2nd "control" display for the system - maybe 17-inch 1280x1024? - and a good graphics card with dual head and svhs out for a production monitor on-set. Now if we can just feed Rob enough Taco Bell cheesy-bean-and-rice we can get some software to run it all!!!!!
oh yes the days of minidv are so so so OVER!
Eliot Mack June 11th, 2004, 05:09 PM It's great to see the progress, Obin. Can 2 striped SATA drives handle 1280x720 10 bit uncompressed raw footage?
When do you think you'll have the full system pieces together to start integrating/bug fixing?
Thanks,
Eliot
Obin Olson June 11th, 2004, 05:12 PM 10bit 1280x720 on raid sata? easy.
maybe even 48fps or more
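A rough sanity check on that claim (assuming raw Bayer capture with one 10-bit sample per photosite, packed tightly; storing samples unpacked in 16-bit words would raise the figures by 60%):

```python
# Sustained write rate needed for raw Bayer capture.
# ASSUMPTION: 10-bit samples, bit-packed, one sample per photosite.
def raw_rate_mb_s(width, height, bits, fps):
    """Raw capture data rate in MB/s (decimal megabytes)."""
    return width * height * bits * fps / 8 / 1e6

rate_24 = raw_rate_mb_s(1280, 720, 10, 24)   # ~27.6 MB/s
rate_48 = raw_rate_mb_s(1280, 720, 10, 48)   # ~55.3 MB/s
```

Even the 48 fps figure is within reach of two striped SATA drives, which is consistent with Obin's "easy".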
Steve Nordhauser June 11th, 2004, 07:54 PM That microATX motherboard's chipset can handle either the ICH5 or ICH5R southbridge. It would be great with the ICH5R, since that would give you built-in RAID capability. Obin, you might want to ask. Otherwise, it is perfect.
Jason, on CCDs there are a number of different architectures that allow you to overlap exposure and readout: interline transfer - you trade fill factor for extra storage between pixels - and frame transfer - those are the ones you describe, with an entire extra sensor area kept masked - which is much more expensive. Micron Truesnap is the only CMOS solution I've seen. We are releasing a VGA 250fps camera with Truesnap, but it's not applicable here.
Hey, when it comes time to make some money, don't forget me. If a product can be developed, I'd like to be involved in the camera head, system whatever.
Just a thought anyway. At some point in my life I may have to justify all this to my boss.
To be fair, if this gets carried away from where I am, let's say into HD-SDI, that is OK.
Jason Rodriguez June 11th, 2004, 08:28 PM So an IT or FIT type of CCD does not use a rolling shutter, or doesn't have to? And frame-transfer CCDs don't have to use rolling shutters either?
And I'm sorry to beat this horse to death, but again, don't still cameras have rolling shutters (the curtain shutter)? I've never noticed any problems with those systems. And film cameras have a mechanical shutter that does not expose the film all at the same time; it sweeps across the front of the film, again in a "rolling" manner, so some areas of the film are exposed later than others - and I haven't seen any weird smearing from that effect either.
James Ball June 11th, 2004, 09:04 PM http://www.logisysus.com/
Specializes in small PCs
Many aren't as large as some dockable decks.
Also think of what you're replacing here. You're not replacing a video camera; you're replacing a motion picture camera. Small is nice, but how many features are shot run-and-gun? None.
I'd also like to add an "Amen" to someone else's comment about SDI. Indeed, the whole world is HD-SDI.
Check out Blackmagic-design.com. They have a 14bit 4:4:4 HD with variable frame rate capture card for $1295. Lesser resolution for less money.
Jason Rodriguez June 11th, 2004, 09:44 PM <<<-- Originally posted by James Ball : http://www.logisysus.com/
Specializes in small PCs -->>>
The big problem I see with this stuff is that there's no 64-bit PCI-X interface, which kind of leaves you high-and-dry for anything remotely complicated, especially if you want to get into HD-SDI. Personally, let's stay away from HD-SDI. Camera Link is fine, we have the software; HD-SDI will take you to a WHOLE 'NOTHER level of sophistication and price. There's a reason right now why I don't have a $1,000 Blackmagic DeckLink HD. Yes, the card is cheap, but I don't have the $5,000 to plunk down just for a suitable RAID array, let alone the tape deck and everything else that goes along with HD-SDI. And a simple piled-together SATA RAID won't do the 200MB/s+ that you need for an HD-SDI baseband signal, not without adding a lot of drives. So HD-SDI is going to get real expensive real quick. HD-SDI monitors are real expensive, everything in that category is real expensive, and I don't understand everyone's insistence on going that route, since it's going to add a good $15K onto the price of your camera.
<<<-- Check out Blackmagic-design.com. They have a 14bit 4:4:4 HD with variable frame rate capture card for $1295. Lesser resolution for less money. -->>>
The 4:4:4 card is $2495, and that WILL require a tremendous amount of throughput, at least 300MB/s+ for sustained image transfer. That's going to set you back a bundle ;-)
Eliot Mack June 11th, 2004, 11:07 PM Neat to see this coming together. A couple of suggestions on the software effort:
-Qt is a really nice C++ GUI framework, just right for wrapping around a raw camera-interfacing SDK. It has built-in support for file handling, image handling and display, timers (to drive frame rate updates), TCP/IP networking, etc. It works on Mac/Win/Linux. There's a great book for it, "C++ GUI Programming with Qt 3", available for about $35, that includes libraries for all 3 platforms. Using it is free for open source projects. I've used it to implement a live video display application, and it's amazingly quick to get things working. I can volunteer some time in this area, as the user experience will be very important.
-Setting up a CVS or Subversion server for the code would be great; that way people can download it, understand it, and start coming up with good ideas.
-One addition that would really make a difference for professional use would be to sync the frame grabs with incoming time code from an external time code generator. This is usually done on PCs through the RS-232 interface. An event-driven serial communications framework for Windows can be found here: http://www.tetraedre.com/advanced/serial2.php
Perhaps a Linux expert can add one for Linux. It's very easy to make external events trigger actions in Qt programs; no polling necessary.
The 1280x720 4:4:4 10 bit w/2 SATA drives looks like a great 1st stage design. This is exactly what low budget green screen shooters need.
Thanks,
Eliot
Obin Olson June 12th, 2004, 09:06 AM <<<-- Originally posted by Eliot Mack : The 1280x720 4:4:4 10 bit w/2 SATA drives looks like a great 1st stage design. This is exactly what low budget green screen shooters need.
Thanks,
Eliot -->>>
man, it's almost as easy as going to Burger King for a burger... in like 15 sec I had a PERFECT greenscreen key with this camera!!
Eliot, make sure Rob knows about what you're talking about with the Qt stuff... he is going to write code for this project.
Richard Mellor June 12th, 2004, 09:48 AM hi everyone
I'm having trouble with the link to the clip.
Obin Olson June 12th, 2004, 09:57 AM you may have to download it... I had issues, then I updated Windows Media Player and now it works great... try that
Richard Mellor June 12th, 2004, 10:10 AM I think it's something with the page. I see your URL frames, but it won't link from the page, and copy and paste won't work either.
David Newman June 12th, 2004, 11:17 AM This thread is moving fast; I haven't been able to read it frequently enough. Several pages ago there was a discussion of CineForm products for use with this workflow (Aspect/Prospect HD), which led to the issue of 8 bit vs 10 bit. I totally agree: if you are going through the trouble to design and build your own camera, you don't want to compromise on the quality of its output (i.e. you want 10bit or better.) CineForm Prospect HD is priced the way it is because it is designed for multi-stream real-time 1920x1080 compressed and uncompressed workflows over HD-SDI, i.e. expensive hardware and the software to manage it. Do you guys need that?
The primary question for you guys is: what is your intended post-production workflow? On the PC side, the low-cost solution that supports more than 8 bit is After Effects Pro (16bit RGB 4:4:4), and that is difficult to use as an NLE. The NLEs from Adobe, Ulead, Sony (Vegas), etc., are all 8 bit. CineForm has extended Adobe Premiere Pro to a 16bit-per-channel YUV 4:2:2 workflow (currently only sold with Prospect HD.) I'm wondering: is there a market for a high-bit-depth NLE package (with Premiere Pro) that doesn't require HD-SDI and a dual-proc system, i.e. a 10bit / high-end version of Aspect HD? CineForm is interested in designing products to meet the market's needs.
Obin Olson June 12th, 2004, 11:43 AM David, we do need a way to edit 10bit on PC. I guess Mac Final Cut Pro HD can do it, but like you said, Premiere can't, and that sucks. I was going to capture everything to TIFF files in 16bit, do the color work in Combustion, and output 8bit for editing AFTER color work was done, but it sure would be nice to edit 10bit... could your software have a 3rd "version" that would allow 10bit in Premiere Pro with a dual AMD or single P4 system? This thing is real and needs support from NLE systems, and it can't cost too much, as the whole point of this camera/system is low-cost high-quality HD for indie films, music videos and commercial production. So yes, support from CineForm would be great IF it's a high enough quality codec. FYI, that clip in WMV9 is 9 megs... it came from a 1 gig file! From 1 gig to 9 megs - that is amazing, because the quality looks almost the same!!
Oh, I think Sony Vegas Video does HD at 16bit?
so how does Cineform stack up for image quality?
David, why YUV 4:2:2??? Why not RGB 4:4:4 with 10/12/16bit?
The whole point of this camera being able to shoot images you have more control over in post is 4:4:4... maybe 4:2:2 would be OK if it was high bit depth...??
Eliot Mack June 12th, 2004, 12:32 PM Hi David,
There is definitely a need for some sort of workable 10 bit HD codec that doesn't require dual Athlon 64s and HD-SDI. It looks like the easiest way to capture is to RAW files. From there, the files can be color corrected and converted to a file format suitable for editing and compositing.
Avid is introducing a new HD codec designed specifically for multigenerational editing and compositing; it has about 5:1 compression but can apparently handle multiple generations of work without artifacting:
http://www.avid.com/DNxHD/
They will be making the source code to this codec available in another couple of months. I could see a fit between the capabilities of this codec and Cineform's knowledge of optimizing codecs to run on mainstream equipment. The download will be free, so it's easy to try out.
If a variation of the Prospect codec can handle compositing and editing without artifacts, that's great too. Whatever works!
Thanks,
Eliot
Les Dit June 12th, 2004, 12:59 PM David,
It would be nice to have a tool to allow higher bit depth images to be used in Premiere Pro. Perhaps a lowest common denominator of the tool would be a codec that allows the user to batch convert a sequence of images ( tiffs, cineons ) to an avi file for use in the NLE. I don't suppose that Premiere would allow the high bit depth image sequence to be imported directly, that's too bad.
As new image stream formats come about with the various projects, there will be a need for 'stream conversion' tools for those raw high bit depth files.
If Premiere can host the 10 bit files and do 1st stage color grading on them before Premiere chops to 8 bits, everything is there for a low budget digital intermediate system!
There is a great niche here for those filmmakers wanting a more film like experience than DV, but they don't want to originate from film.
Personally I don't even think it's a problem if the codec isn't real time capable, 1/2 res proxies let me color correct just as well.
Jason,
Often the inadequacies of the image don't show up on a still. Things like awkward motion artifacts and fixed pattern noise (or any noise) will only become distracting when the movie is run at speed.
-Les Dittert
Obin Olson June 12th, 2004, 01:21 PM Les, I think way more shows up on a still image than a moving one! Have you ever looked at feature film frames on the net? They look dirty and grain-filled... not when you watch the film in theaters.
Les Dit June 12th, 2004, 01:32 PM Obin,
Sharpness certainly shows up on stills better than moving. But fixed pattern noise, quantizing, and other non uniformity is much more obvious when the image and the tones are moving.
For example, how can you tell the difference between fixed pattern noise and the image, if you only have one image to look at? You can't! Fixed pattern noise means that the same noise pattern exists in consecutive frames. It can't be detected with one frame.
Another example: a movie of a sunset. The quantizing (banding), if subtle, won't be seen in a still. But when put into motion, the subtle bands can be seen slowly moving across the frame like an odd rainbow effect.
I was not referring to dirt, but even dirt shows up *much* better in motion. When we work on scanned effects shots, the dirt painters (dust busting) always flip between a few images to catch the dirt particles they need to paint out. Especially if the dirt or dust speck is actually on a shot of a dirt field!!!
-Les
David Newman June 12th, 2004, 02:29 PM >-- Originally posted by Obin Olson :
> Oh i think sony Vegas Video does HD at 16bit?
No Sony Vegas is only 8bit and RGB (no YUV support either.)
> So how does Cineform stack up for image quality?
It is designed for HD online. It is equivalent to (or better than) D5 (which is YUV 4:2:2 10bit.) D5 is the workhorse standard for HD masters.
> David why YUV 4:2:2??? whynot RGB 4:4:4 with 10/12/16bit?
History. Single-link HD-SDI is 10bit 4:2:2 YUV. YUV is a more natural compression format - optimized for the human visual system. RGB compression is less efficient: 150Mb/s compressed RGB would look worse than 150Mb/s YUV 4:2:2 (this is even true for 4:4:4 YUV, although to a lesser extent.) Basically, if you want the benefits of compression, YUV is the way to go.
> the whole point that this camera can shoot images that you have more control in post is 4:4:4...maybe 4:2:2 would be ok if it was high bit depth...??
I believe 4:2:2 is plenty, particularly if your source is extracted from a Bayer-pattern CCD/CMOS. 4:2:2 is more chroma resolution than the source, which is 4:2:0 equivalent. 10bit YUV has enough data to prevent the banding artifacts that can occur in color correction.
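For reference, a sketch of the luma/chroma separation being described here, using the BT.709 coefficients (full-range normalized form; broadcast systems add offset and scaling on top of this):

```python
# BT.709 R'G'B' -> Y'CbCr (normalized, full-range form). Luma Y' carries
# most of the perceptual weight; Cb/Cr carry color differences that the
# eye resolves less finely, which is what makes 4:2:2 chroma subsampling
# and harder chroma compression visually cheap.
def rgb_to_ycbcr_709(r, g, b):
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b   # BT.709 luma coefficients
    cb = (b - y) / 1.8556                       # scale = 2 * (1 - 0.0722)
    cr = (r - y) / 1.5748                       # scale = 2 * (1 - 0.2126)
    return y, cb, cr

# Neutral input stays neutral: white maps to Y' = 1 with zero chroma.
```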
David Newman June 12th, 2004, 03:01 PM >- Originally posted by Eliot Mack :
> It looks like the easiest way to capture is to RAW files. From there, the files can be color corrected and converted to a file format suitable for editing and compositing.
I see no reason not to capture directly into CineForm encoded file. We are doing this today from HD-SDI feeds.
>Avid is introducing a new HD codec designed specifically for multigenerational editing and compositing; it has about 5:1 compression but can apparently handle multiple generations of work without artifacting:
This is the same approach as CineForm's. The only difference is that the AVID codec is DCT based, whereas the CineForm HD codec is wavelet based. At the same quality the CineForm files are smaller (and faster.)
<<<-- Originally posted by Les Dit : David,
> It would be nice to have a tool to allow higher bit depth images to be used in Premiere Pro. ... I don't suppose that Premiere would allow the high bit depth image sequence to be imported directly, that's too bad.
This can be done; I know someone who already has a TIFF and DPX importer for Premiere Pro. This feature hasn't yet been integrated into Aspect/Prospect HD.
>Personally I don't even think it's a problem if the codec isn't real time capable, 1/2 res proxies let me color correct just as well.
If we were to do this, we would aim to keep the performance up.
Eliot Mack June 12th, 2004, 10:32 PM <<<-- Originally posted by David Newman : >- Originally posted by Eliot Mack :
> It looks like the easiest way to capture is to RAW files. From there, the files can be color corrected and converted to a file format suitable for editing and compositing.
I see no reason not to capture directly into a CineForm-encoded file. We are doing this today from HD-SDI feeds.
-->>>
My understanding was that capture to RAW format allowed the Bayer filtering to be performed offline, which enables the use of better, more computationally expensive algorithms in performing the white balance/sharpening/etc. conversions. I especially liked the automated color balance from the image of a Macbeth target that Steve N. mentioned; this would negate the need to preset your camera for indoor/outdoor lighting to get the color right. Just expose for proper lighting levels and shoot a chart. This seems so much simpler than the usual methods that I'm surprised it's not more widely used.
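The chart-based white balance Eliot describes can be sketched very simply: sample a known neutral (gray) patch on the chart, then scale the red and blue channels so that patch averages out neutral. The function names and the patch values in the test are illustrative, not from any shipping tool.

```python
# Sketch of chart-based white balance: measure the average RGB of a
# neutral gray patch, then derive per-channel gains (green as the
# reference channel) that make the patch come out neutral.

def wb_gains_from_gray_patch(patch_pixels):
    """patch_pixels: list of (r, g, b) samples from a neutral patch.
    Returns (r_gain, b_gain) that neutralize the color cast."""
    n = len(patch_pixels)
    r_avg = sum(p[0] for p in patch_pixels) / n
    g_avg = sum(p[1] for p in patch_pixels) / n
    b_avg = sum(p[2] for p in patch_pixels) / n
    return g_avg / r_avg, g_avg / b_avg

def apply_wb(pixel, r_gain, b_gain):
    """Apply the gains to one (r, g, b) pixel."""
    r, g, b = pixel
    return (r * r_gain, g, b * b_gain)
```

This is exactly why shooting a chart beats indoor/outdoor presets: the gains are derived from the actual light on set rather than an assumed color temperature.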
The 10 bit 4:2:2 Aspect codec sounds like just the fit for what I need, which is a 1280x720 or 1920x1080 HD codec that I can use all the way through the compositing, editing, and final color correction stages without terabytes of storage and giant RAID requirements. If there is a codec plugin for Premiere Pro/After Effects/Shake then it would be a slam dunk.
How long would it take to get something like this to market?
Thanks,
Eliot
Steve Nordhauser June 13th, 2004, 06:14 AM David,
In an ideal world you are correct, recording in the final format would be great. I don't know the amount of real-time processing that can be done on a 3GHz machine. That is clearly your area. This should be possible for 720p, 8 bit. I don't know about 10 bit. Would it scale to 1080p? In 12 bit mode @60fps, we are talking about a 300MB/sec (unpacked) data stream.
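For reference, the arithmetic behind that data-stream figure, assuming one Bayer sample per photosite and 12-bit values unpacked into 16-bit (2-byte) words; the result lands a bit under Steve's 300MB/sec, with the gap plausibly covered by packing, headers, and transfer overhead:

```python
# Raw Bayer data rate: one sample per photosite, 12-bit values
# unpacked into 16-bit (2-byte) words.
def raw_rate_mb_s(width, height, fps, bytes_per_sample=2):
    """Unpacked raw data rate in decimal MB/s."""
    return width * height * fps * bytes_per_sample / 1e6

rate_1080p60 = raw_rate_mb_s(1920, 1080, 60)   # ~249 MB/s unpacked
rate_720p60  = raw_rate_mb_s(1280, 720, 60)    # ~111 MB/s unpacked
```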
The word I hear in this group is that they don't want to compromise on what I've been calling pre-post processing -- the Bayer filter, RGB->YUV, and compression steps. If this can really be done in real time, all the way up the camera scale, you have an impressive product (or will have, once the cheap Camera Link cameras are supported), and I would suggest that it is not worth anyone here writing their own code. $1K to record fully processed and compressed data with a preview window, maybe with basic camera controls, would be great for the people here.
I think the biggest processing and recording issues are a complete understanding of any step that is a potential loss of quality - like the Bayer filter.
Valeriu Campan June 13th, 2004, 07:31 AM It looks that the camera controller from RedLake has options for white balance settings and selectable output for various color spaces.
from the brochure:
"....White Balance Manual, Presets, and Semi-Automatic
Color Balance Real-time Advanced Color Filter Array Interpolation
Color Space Output RGB, sRGB, CIELab,YUV, or HSV..."
David Newman June 13th, 2004, 10:52 AM Steve & others, I admit that we have yet to experiment with any direct signals from Camera Link, so much of my thinking is theoretical and based on our experience with HD-SDI encoding. Our encoder and decoder pair function quite differently from other compression algorithms on the market (patents aren't granted yet so I can't tell you how.) I believe it would be possible to encode a 1920x1080 camera feed in 10 bits (we haven't done 12 yet) at about 30 to 40 fps on a 3GHz P4, and a 10 bit 1280x720 feed at about 70-85fps. I know there will be other overheads so these numbers could be high, but I still believe 1920x1080 @ 24p would be possible on a P4. I know there are quality concerns, but I believe I can overcome many of them (and I would like to try.)
Here is what I would like to do (probably in my spare time): rather than getting a camera and link card (I'm still interested, Steve), I would like a range of raw Bayer sequences from different cameras and resolutions (data before any pre-processing, i.e. the type of data I would see across Camera Link.) Would anyone kindly send me a CD or DVD of raw material in any format? All I need to know is the resolution and the packing format. From there I can experiment on encoding performance and offer more accurate numbers.
If anyone is interested please send a disk to:
David Newman
CineForm, Inc.
5315 Avenida Encinas Suite 230
Carlsbad CA 92008
Thanks.
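The "packing format" David asks for matters because raw files rarely store one sample per byte. A common layout packs two 12-bit samples into 3 bytes; the sketch below unpacks that layout, assuming a specific bit order (high nibbles first) that varies by vendor, so a real importer must match the camera's documented format.

```python
# Unpack 12-bit packed raw data: two samples per 3 bytes, assuming the
# layout [A11..A4] [A3..A0 | B11..B8] [B7..B0]. Bit order is
# vendor-specific; this is only one plausible arrangement.

def unpack_12bit_pairs(data):
    """data: bytes, length a multiple of 3; returns a list of ints 0..4095."""
    out = []
    for i in range(0, len(data), 3):
        b0, b1, b2 = data[i], data[i + 1], data[i + 2]
        a = (b0 << 4) | (b1 >> 4)          # first 12-bit sample
        b = ((b1 & 0x0F) << 8) | b2        # second 12-bit sample
        out.extend([a, b])
    return out
```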
Les Dit June 13th, 2004, 12:21 PM David,
Would you be encoding the Bayer pattern directly?
The preferred Bayer demosaicking algorithms, like the variable-gradients method, are adaptive: they are image dependent and take a lot of cycles. I don't think current CPUs, even with the most parallel SSE coding, can run them in real time.
-Les
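For contrast with the adaptive methods Les mentions, the cheap baseline is bilinear demosaicking, which just averages neighbors regardless of image content. The sketch below shows only the green channel on an RGGB mosaic (red at the top-left); adaptive methods instead decide per pixel which neighbors to trust, typically by comparing gradients, and that decision logic is where the extra CPU cycles go.

```python
# Bilinear green-channel reconstruction on an RGGB Bayer mosaic
# (R at (0,0)). Green photosites are where x + y is odd; at red and
# blue sites, green is the average of the four green neighbors.

def green_bilinear(mosaic, x, y):
    """mosaic: 2-D list of raw values; (x, y) must be an interior pixel."""
    if (x + y) % 2 == 1:
        return mosaic[y][x]                 # native green sample
    return (mosaic[y - 1][x] + mosaic[y + 1][x] +
            mosaic[y][x - 1] + mosaic[y][x + 1]) / 4
```

Bilinear runs fast but smears edges and produces the zipper artifacts that the adaptive algorithms exist to avoid, which is the quality-versus-real-time tradeoff this thread keeps circling.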