View Full Version : 4:4:4 10bit single CMOS HD project
Jason Rodriguez November 9th, 2004, 09:23 AM Isn't the 32-bit PCI bus the problem here, Obin? That's why I'm also curious what the chipset is.
If he's using a good southbridge, there should be no problems, at least if the IDE is on-board. Hub Link 1.5 from Intel runs at 266MB/s full duplex, so there shouldn't be any problems going back and forth from main memory, if that's what he's using (which I'm assuming he is). Also, the IDE bus should go straight into the southbridge on a separate channel from the PCI bus to prevent more bottlenecking there.
If it's a good southbridge (more than ICH4, like Hance Rapids, ICH5 or ICH6), then there shouldn't be any problems, especially if it's a southbridge that supports PCI-X (which Obin's doesn't), because then it has to have a guaranteed bandwidth of at least 500MB/s+.
Steve Nordhauser November 9th, 2004, 09:26 AM I think everyone is hitting around the right area. You need to pay attention to the system architecture. It is even worth looking at the chip sets. Some of the more recent Intel chip sets have a southbridge (ICH5-R) with a two drive SATA RAID built in. Sure you save $100 on the controller, but more importantly, the disk data (assuming two drives is enough) never hits the PCI bus. You can ignore this in the 64 bit world but it is very important to a 32 bit system.
Rob L. is correct (at least from my experience) that a little bit of assembly can go a long way. In these cameras 90% of the CPU time is probably taken in <1% of the software. Assuming that you use DMA to move blocks of memory to drives, the tightest loops in your software need to be examined: the Bayer preview, the compression (if any), any real-time video algorithm (white balance). Since that is where the CPU spends its time, a 5% algorithm savings (time, not size) is almost equal to a 5% faster CPU. You might need to understand the CPU caching system also. You can prototype all in a high-level language, but the optimization should include a long stare at the loops in the code.
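To make Steve's "long stare at the loops" concrete, here is a minimal C++ timing sketch (nobody's actual project code) for measuring one candidate hot loop in isolation before deciding whether it is worth hand-optimizing. bayer_preview() is a hypothetical placeholder for whatever inner loop (preview, packing, compression) is being profiled.

    // Minimal sketch: time one candidate hot loop over many frames.
    // bayer_preview() is a hypothetical stand-in for the real inner loop.
    #include <chrono>
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    void bayer_preview(const uint16_t* in, uint8_t* out, size_t n)
    {
        for (size_t i = 0; i < n; ++i)
            out[i] = static_cast<uint8_t>(in[i] >> 2);   // placeholder work only
    }

    int main()
    {
        const size_t n = 1920 * 1080;                    // one 1080p Bayer frame
        std::vector<uint16_t> in(n, 515);
        std::vector<uint8_t> out(n);

        const int frames = 100;
        auto t0 = std::chrono::steady_clock::now();
        for (int f = 0; f < frames; ++f)
            bayer_preview(in.data(), out.data(), n);
        auto t1 = std::chrono::steady_clock::now();

        double ms = std::chrono::duration<double, std::milli>(t1 - t0).count() / frames;
        std::printf("%.3f ms/frame (%.1f fps)\n", ms, 1000.0 / ms);
        return 0;
    }

Run it before and after each change; a 5% drop in ms/frame here is the same 5% savings Steve is talking about.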
Rob Lohman November 9th, 2004, 09:41 AM Steve: exactly. It's funny you mention the points you do since
the packing, some parts of the preview are all in handwritten
assembly at this moment (I'm working on converting the
compression to assembly as well).

<<<-- Originally posted by Jason Rodriguez : Wouldn't the easiest method be to chop off the two LSB? A LUT would be ideal, but I think that would take too much time for each image, especially if it's 1920x1080. One other thing I'm wondering is if you could do a 1/4 preview, and then apply a LUT to just those pixels for display, so you're not trying to do a transform on 2 megapixels @ 25fps. I think that'd be too much for any system (my dual G5 can't do real-time HD effects like that either, so I definitely wouldn't expect that out of a small computer). So with the 1/4 preview, you can now see what's happening dynamic range-wise with the LUT, and that's only performing an operation on 960x540, which should easily be feasible, especially on a new Pentium M or Pentium 4. -->>>

No, you can't simply lop off 8 bits per color, here's why:
10 0000 0011 (10 bit color = 515)
this would end up being:
0000 0011 (8 bit color = 3)
I don't have to tell you this results in a massive color and/or brightness change for the pixel. I did it this way during some testing with Rob S.'s RAW files once and got all kinds of junk in the frame (which took me a while to figure out). This is one of those places where you can easily go wrong with an algorithm implementation.
It would be far better to chop off those 8 bits but set the highest bit of the resulting byte if there was any value in the 8 bits that were just chopped off (which is what I did to check the RAW images from Rob S.). However, you lose the 2 bits (or 4 for 12 bits) of extra latitude during preview, so things will look washed out, which you don't want. It would probably be better to shift the 16 bits either 2 or 4 bits to the right. Or better yet, use a LUT (lookup table) indeed.
I do believe Rob S. is using a LUT for preview in his current version of Obscuracam. This preview is indeed at half resolution (vertical & horizontal) to reduce processing time and avoid having to de-bayer. So that fits neatly into the points I made above. Although it might be interesting to have a full color/B&W preview before recording, or a full B&W preview.
On my 3.2 GHz Pentium 4 CPU I can view 1080i in realtime with a WMV HD codec, I think. So much must be possible; it all boils down to proper design, testing, monitoring and profiling to see what can and needs to be (hand) optimized.
Steve Nordhauser November 9th, 2004, 09:48 AM Rob,
I'm hoping that was "lop off the bottom two bits" as in a shift right by 2 (or an integer divide by 4):
10 0000 0011 (10 bit color = 515) or 515/1024
this would end up being:
1000 0000 (8 bit color = 128) or 128/256
Obin Olson November 9th, 2004, 09:52 AM thanks for the info guys..I am sending all this to my code writer for review..
We will look at things in detail once I get a board to him for testing
Rob Lohman November 9th, 2004, 10:05 AM Steve: that was my middle point. At first I just lopped off the complete high byte, which gave the problems. I now see Jason said LSB and not MSB, my bad (I confused it with my own mistake). However, a plain shift still gives you a bad representation of the dynamic range, which a LUT should be able to fix.
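As a concrete illustration of the shift-versus-LUT point above (a sketch only, not the project's code): converting a 10-bit sample held in a 16-bit word to an 8-bit preview value, first with the plain shift right by 2 from Steve's example (515 becomes 128) and then through a 1024-entry lookup table. The gamma value is an assumed number, picked only to show how a LUT can lift the shadows so the preview doesn't look washed out.

    #include <cmath>
    #include <cstdint>
    #include <cstdio>

    // Plain truncation: 10-bit (0..1023) -> 8-bit (0..255), i.e. 515 -> 128.
    inline uint8_t shift_preview(uint16_t s10) { return static_cast<uint8_t>(s10 >> 2); }

    // 1024-entry LUT: any curve (gamma, knee, log) for the cost of one table read per pixel.
    struct PreviewLut {
        uint8_t table[1024];
        explicit PreviewLut(double gamma)                 // gamma is an assumed parameter
        {
            for (int i = 0; i < 1024; ++i)
                table[i] = static_cast<uint8_t>(255.0 * std::pow(i / 1023.0, 1.0 / gamma) + 0.5);
        }
        uint8_t operator()(uint16_t s10) const { return table[s10 & 0x3FF]; }
    };

    int main()
    {
        PreviewLut lut(2.2);
        uint16_t sample = 515;                            // 10 0000 0011
        std::printf("shift: %d  lut: %d\n", (int)shift_preview(sample), (int)lut(sample));
        return 0;
    }

Once the table is built, the LUT and the shift cost about the same per pixel, which is why the half-resolution preview plus a LUT described above is attractive.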
Brad Abrahams November 9th, 2004, 11:06 AM I just saw this tiny quiet pc on gizmodo:
http://www.akibalive.com/archives/000576.html
Wayne Morellini November 9th, 2004, 11:13 AM Most of these issues are things I discussed 6 months ago. Rob, it is good to see that you are taking such care handcrafting MC; I didn't realise that you have that history with the 386. Something I can tell everybody is that most programmers don't have these skills; they might know enough about C to get it running reasonably, but in apps like this that is NOTHING. I finished well ahead of the rest of my class at uni and I would not trust most of them to do a job like this. I don't know the status with CPU load, but just recording is pretty low bandwidth, and even simple preview is; the problems you will get are in handling/programming the architecture (not just the architecture itself) efficiently, the same with the OS, and in stalling the CPU/memory systems. Doing any of these wrong will stall the hardware, OS and/or CPU and appear as massive CPU load. Anybody that depends on simple CPU stats will probably be scratching their heads trying to figure out what is wrong and where all the CPU is going. Do the calculation (in MC you can calculate reasonable cycle consumption), trace through and test-point the program, and you will find the stalling regions. There are lots of realtime-oriented drivers and BIOSes that are buggy (let alone OSes), which is why there are so many updates (for years); even those high-end programmers can't get it right. I could go to plenty of programming companies and ask them to write something like this; a lot would say fine, they can do it, like it isn't much of a worry, but few would be capable of getting the last 80% of performance. I am brutal, but I am sick of this stuff stuffing up the computer industry for everybody. One reason for my OS project is to permanently fix ALL these problems; to me many computer companies act like idiots drunk out of their minds. There is no reason that computers and apps couldn't eventually be made to operate as reliably as most microwaves, washing machines, TVs and DVD players (well, good ones). In the embedded/consumer-electronics realtime industry the reliability of PCs is held with a bit of bewilderment, I'm afraid.
ggrrrrrrrrrrrrr..................................
If you can, look up and download the Stella (Atari 2600/VCS) programming manual and the Lynx programming manual, and look at what emulator writers do. This strips away all the ... OS/BIOS and leaves you with the basic hardware, and you will find out why it is so difficult to program efficiently. A lot of it involves minute timing differences as circuits stabilise, and windows of opportunity; doing it wrong (or somebody else making a compatible circuit) can wreck performance or results. On a complex circuit, blindly programming can set up a chain reaction of performance failures through related circuits (it is not object-oriented programming). Running it on another chipset/mainboard/processor can do the same, as the underlying timing of the "compatible" hardware may be different. So it is best to pick good hardware, with good drivers (like that GigE driver Steve mentions that DOESN'T come with Windows) and abstraction layers (like DirectX plus drivers do for different video hardware) between you and all the different hardware versions, as much as is reasonable; then set up the OS/system properly, and program carefully against these known-good software layers. The good drivers and abstraction layers should take care of most of the timing (though you might do a bit better on certain particular hardware by bypassing the abstraction layer, this probably won't help a system with different versions of the hardware). Never assume that the abstraction driver is perfect, or even good; some may be, others need to be tested, etc. In this situation all you need is to follow this simple formula with expertise: as long as you code C very efficiently with reference to hardware issues as well, and maybe do the critical sections in MC, you should be able to get within 80% of the maximum performance on fast systems (probably much less on slow systems that would otherwise do the job).
Wayne Morellini November 9th, 2004, 11:50 AM Ronald, yes, the Filmstream is the SD indie camera, versus our HD projects.
What I find interesting about the 2K or 4K film scan: what happens when they scan 2.35:1? Is the 2K fitted to that frame, or is it still a 1.78:1 frame (meaning the actual frame is now only 1.5K or so of pixel resolution)? Do they go to 4K, or do they use 4K on 1.78:1 as well? I think it would be a matter of quality and film stocks as well. I see a lot of stuff at the cinema that has grain that even my poor vision can pick up, even without glasses (I even picked up some grain in Imax scenes). So I think 4K or even 8K would be better. But what about just shooting at 2K and upscaling the resolution to 8K ;) I think 2K is good for our purposes, but digital theaters may only accept 2K footage :( .
Now, Barco CRT projectors had purer blacks than DLP, but were also weaker in brightness (plus the maintenance costs more). With newer CRT phosphor technology I don't know. But one technology to watch (well, there are a few) is the laser CRT (L-CRT and many other names) invented in Russia. I researched this because I wanted to make a small laser display device (much too big at the moment); it seems to have the purity and performance to possibly beat DLP. There was a picture of a demo with a guy holding one of the tubes, standing in front of a 50-foot test pattern being projected; nice (a little washed out ;). What happens now is that film has to be transferred to digital, and digital back to film, to show it to everybody :( costly. I think in the future there might be a market for somebody to open up cinemas (or just convert existing independent theatres) to project digital only. All the independent film makers can get together a distribution chain and these cinemas show their work. Then there is no transferring to film, unless a film is popular and a film company wants to distribute it to film theatres. Most indies will not get much of a screening at a conventional chain theatre, because those want blockbusters (a small indie theatre maybe), so they have to go somewhere else. In my local city we had two theatres: one owned by the biggest chain in the country, the other an independent ex-vaudeville theatre (one of the nicest I have ever been in). The independent couldn't get the latest blockbusters until after the major theatre had finished with them, so it went to old films and small films, and closed. A rival theatre chain wanted to open theatres here a couple of times; the first chain protested (from vague memory) that there were too many theatres, and they also converted to 5 screens. But then they opened up another 8 screens very nearby, and then 6 or more screens at one of the locations the cheaper rival wanted to open at. Now their theatres are not that full, and the originally redeveloped 5-screen site is now used for small films and some independent festivals. In this situation there is small potential for people to show non-mainstream films, as more and more independent cinemas close down.
With an indie distribution company and website it would be much easier, as most of the marketing costs can be replaced by site comments and reviews, and local theatre promotion. Now we get to the interesting bits. People (say the indie crowd) can preview on the internet (a low-res version, pay per view for broadband); they then vote and comment. Theatre owners go to the indie distribution site, view the comments, find a good movie and show it. If people want the full version they go to a theatre listed at the site, or order the full copy-protected version on HD/DVD from the site. It will send the studios crazy, as they lose control and the indies gain control (and cost efficiency from a few indie-owned distro sites), and indies get a low-cost marketing replacement.
I find some of it hard to follow, but I hope this is what you were after.
<<<-- Originally posted by Ronald Biese : Dear Wayne,
hm that is great, a bit more resolution but no compression at all and 4:4:4 out that is just perfect for PAL or NTSC. With an Argus or so great, the Tv indie cam up to now...Hurra..voila.-->>>
Yes, that is the SD indie camera; in these threads we are concentrating on the HD indie camera.
Régine Weinberg November 9th, 2004, 12:00 PM Dear Wayne, Primo, I got this email:
Ronald; I spoke with JVC regarding the camera you inquired about. It has not become available yet due to a delay from the manufacturer of the CMOS chips, Rockwell. They are not expecting delivery until January 2005.
Targeted list price is around 20K U.S. dollars
Best regards
Tom
So: a Bluefish dual HD-SDI card (not cheap), an adapter for Nikon or PL mount lenses, a DIY shoulder pad and some connector for a Bauer or any other Ni battery pack. Bluefish has Linux support, so the NLE could be free, and Bluefish can save TIFF, AVI or so; plus a SCSI array controller and, I guess, 4 big SCSI-320 disks. Voila, a power station as a backpack.
Me, a bit outdated but still a running hardcore realtime nut: why not do something like Kreines and invent the real thing again? A tiny board with a Gigabit Ethernet controller, and piggyback a PC/104 board that has a small realtime OS kernel in ROM doing nothing but waiting for what comes from the GigE controller and writing it to a stack of Toshiba 2.5 inch disks. On the "host", where the GigE controller sits in the only PCI slot, you run whatever you like, and from there you can send a message to the GigE controller that starts or stops the piggyback array controller. On the host, something could run to control the camera, so the host is totally independent from the array controller. The array controller is nothing more than a controller in a dishwasher (start, stop) reading the GigE; it has memory so that it can hold about 2 to 4 seconds of images in a loop, so even if something happens before recording it's not lost, as the camera is up and running, like EditCam does.
The app running on the piggyback is nothing but a packet sniffer and does I/O operations to write the data to the disk array.
If it sounds totally stupid, send an email.
Joshua Starnes November 9th, 2004, 12:17 PM Looks very cool to me, Obin.
North Carolina, represent!
Rob Lohman November 9th, 2004, 12:29 PM Thanks Wayne: although I am a bit rusty (didn't do much asm programming in the last 5 years) and never got around to MMX/SSE (catching up on that now!), I have a pretty extensive history in low-level computer programming. I know exactly how the nuts and bolts of computers work, including BIOS, OS, I/O, Windows and all sorts of other stuff. I do fully agree that most programmers have no idea how all of it works "under the hood" at the low level.
If I remember correctly, this year at IBC we had a prototype 4K projector, but it didn't look better than the high-end 2K stuff to my eyes. They presented Shrek 2 on the "regular" 2K projector. The year before they had a 2K or 1K (I think it was this one) projector that showed Pirates of the Caribbean. So for now the 1920x1080 resolution should be enough for our needs, I'd say.
I think the 2K/4K resolution is the full size of the frame in vertical and horizontal, so that would probably mean their pixel aspect is off? Like, to get the correct size in square pixels it would be 3K x 2K or something?
Wayne Morellini November 9th, 2004, 12:35 PM Thanks. Yes, as we feared, the JVC is too expensive.
Your suggestion is very good, but also expensive (though if it's completely custom, then manufacturing in bulk can be cheap), and this is the reason we went for a small PC, to cut back on costs. It is late and I am finding it hard to follow your post, but I have a solution in mind that bypasses the cost; only two people here know it, and maybe you would be interested.
Now, for a camera to use to save money on the capture end: Sumix was talking of doing a compressed camera. Using that with PC decompression (for preview) should allow a very low-end PC to be used. But any compression should be at least lossless, and preferably also visually lossless down to a 50Mb/s codec. This will allow very good quality and space savings (50Mb/s is HDV2 territory, maybe just viable for cinema production on the large screen).
Richard Mellor November 9th, 2004, 12:45 PM The digital projector scene is moving faster than even the hi-def cameras. Just 4 years ago I had a $38,000 Barco CRT projector that could output 720p.
You can now get, for about $5,000, a BenQ DLP that does 720p and looks almost as good. This is a link to a theatre chain that went digital. When finished, we could shoot in 720p and display our work for small venues with a laptop and a BenQ projector on an 8-foot-wide screen: http://www.microsoft.com/presspass/press/2003/apr03/04-03LandmarkTheatresPR.asp
Wayne Morellini November 9th, 2004, 01:12 PM Thanks Richard.
I forgot to include another interesting thing about hardware. Even silicon chip designers have design rules to protect them from structural, process-based timing and electrical effects on the chip. One person in a group I was involved with gained at least a 10x+ processing speed increase by bypassing the design rules (I think I mentioned this before somewhere, sorry if I repeat). He might have been the only person in the commercial processor industry doing that (it's difficult).
Rob Scott November 9th, 2004, 01:46 PM Joshua Starnes wrote:
What about the 3300?
To be honest, the biggest issue is budget. To properly support the software, I need to have one of each camera it works with. Right now I can't afford to buy one of everything, so I'm having to carefully pick and choose. It seems to me that most people would choose either the 1300 (native 1280x720) or the 1920 (native 1920x1080), and the 3300 is the odd man out.
Of course, I could be completely wrong due to the higher resolution of the 3300 and the smear issues of the 1300 ... but the 64-bit frame grabber you'll need with the 3300 will also jack up the price.
Bottom line at this point -- Unless someone out there can finance a 3300 for me to use, I am not going to be able to support it.
Joshua Starnes November 9th, 2004, 02:51 PM <<<-- Originally posted by Rob Lohman : If I remember correctly, this year at IBC we had a prototype 4K projector, but it didn't look better than the high-end 2K stuff to my eyes. They presented Shrek 2 on the "regular" 2K projector. -->>>
That's why they don't bother to film out at 4K either. The human eye is basically incapable of discerning a difference in picture quality greater than 2K. Above 2K everything still looks like 2K to the naked eye. While that difference in resolution may be important to a computer in certain technological or scientific endeavours, it matters not in the least for film work. It's just an added, unnecessary expense.
Rob LaPoint November 9th, 2004, 04:00 PM Hey Obin, I wasn't exactly speaking of cramming two motherboards into a single box (I meant a dual-processor board), but that is actually a pretty good idea. The only problem would be two separate power supplies, but that really wouldn't be hard to solve. Looking at things from a film workflow, having a computer attached is no more cumbersome than a video tap to monitor setup, so if more realtime features require more computing power I say load it up ;)
Soeren Mueller November 9th, 2004, 04:37 PM @Rob (Lohman)
Sorry to disagree with you here, Rob - but with modern CPUs and the newer/newest highly optimizing C/C++ compilers (e.g. the Intel compiler) it is in fact better most of the time to stay in a more high-level language like C++ and let the compiler do the optimization targeted for a selected CPU. MMX/SSE/SSE2 etc., together with the (slightly) different CPU designs, aren't nearly as easy as i386 asm programming was.
And of course high level code is much easier to maintain than messing with low level assembler code - optimized for different CPUs...
Just my 2 cents
Of course you have to optimize (but do profiling first - to find out _where_ optimizations make sense at all) - but assembler code doesn't automatically mean it's the fastest code possible!
Rob Scott November 9th, 2004, 08:32 PM Soeren Mueller wrote: (It's) better most of the time to stay in a more high level language like C++ and let the compiler do the optimization targeted for a selected CPU - MMX/SSE/SSE2 etc.
Most of the time, certainly. But certain things -- like the screen preview for this camera project -- must be optimized by hand to get any kind of performance at all.
The first version of the preview code used standard for() loops and array access. Naturally, with array access (no pointer arithmetic) it was extremely slow -- somewhere on the order of 0.2 fps.
The second version used pointer arithmetic and was much better -- somewhere around 2-4 fps. I turned on all the optimizations I could, but IIRC this was the best the compiler could do.
I then hand-coded the loop with MMX and it currently runs at around 25 fps.
This just goes to show that your point about profiling -- and finding the bottlenecks -- is well taken. There is no way I'm going to code the entire application in assembly; there just isn't any point unless it really needs it.
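Rob's actual MMX loop isn't posted here, but to give a feel for the kind of hand-vectorization he is describing, here is a hedged sketch using SSE2 intrinsics (the same idea as MMX, just with 128-bit registers) that converts eight 10-bit samples stored in 16-bit words to 8-bit preview values per iteration. It illustrates the technique only; the real Obscuracam code may look nothing like this.

    #include <emmintrin.h>   // SSE2 intrinsics
    #include <cstddef>
    #include <cstdint>

    // Convert n 10-bit samples (one per 16-bit word) to 8-bit preview values,
    // 8 samples per iteration. Assumes n is a multiple of 8; a real loop would
    // also need a scalar tail and attention to alignment and cache behaviour.
    void preview_10to8_sse2(const uint16_t* src, uint8_t* dst, size_t n)
    {
        for (size_t i = 0; i < n; i += 8) {
            __m128i v = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src + i));
            v = _mm_srli_epi16(v, 2);                         // 10-bit -> 8-bit range
            __m128i packed = _mm_packus_epi16(v, v);          // 16-bit -> 8-bit with saturation
            _mm_storel_epi64(reinterpret_cast<__m128i*>(dst + i), packed);
        }
    }

The de-Bayer/decimation step and any tone curve are left out to keep the sketch short; those inner-loop pieces are where compiler-generated code tends to fall behind hand-written SIMD, which matches the 2-4 fps versus 25 fps numbers above.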
Wayne Morellini November 9th, 2004, 11:44 PM As I understand it, the human eye population has the potential for 2400 dots per inch discernment (from a close viewing distance), and this only in a small scanning region of the central vision; individual eyes in the population may have a lot less potential. There is a method to train your eye to see better (I think the book Better Eyesight Without Glasses might cover that one) by shifting the brain's region of central vision to the true centre (apparently a lot of people concentrate on an area of the eye that is not the true centre, and so concentrate on a lower-res side region) and by practicing recognition of details (improving the brain's recognition). 720p works out around 150dpi, and colour vision goes to around 1200dpi, so there is a big difference between what we (or Imax, which looks a lot better than 2K) use and the true top end. I think the problem is grain size: the smaller the grain, the slower the stock, so previously 2K images were probably only available in highly lit situations on regular (cheap) film stocks. If you look at it, the increased resolution of Imax over 2K has a lot to do with the fact that the film frame is ten times bigger and can fit ten times more grains into the projection (but it fills more of the screen, so the difference to us is probably 4 times, like a normal theatre at 8K and 16K to us, but extra wide and high). Even with my poor eyesight I can pick grain in normal cinema and in dark scenes in Imax. So what's the max we should worry about? Tricky; I would say between 1Mp 720p and 16Mp (the max colour resolution for a radiating display).
Newer technologies, like 4K projection, can have more optical quality issues (plus it depends on room lighting and the screen used). Resolution is not the complete picture, so I imagine that it would be quite easy to put up a worked-out 2K projector against a yet-to-be-optimised 4K one. If you're wondering about the limits of a 4:3 15 inch monitor on the eyesight, it's 64Mp (actually, why are we saying K when it's millions of pixels in a frame?) for monochrome. You will notice that this is one quarter of what it should be; that is because radiating displays affect vision and halve the resolution in each direction (16 million for colour). But you will notice diminishing returns for doubling resolution over 150dpi (where surrounding pixels are starting to integrate with general vision). This 4Mp one, was that Sony's new ribbon tech?
Wayne Morellini November 10th, 2004, 12:07 AM Soeren, yes there were big differences when the Intel compilers came out, because most compilers were using a very limited part of the instruction set. I don't know if the Intel one uses the whole instruction set properly, but I imagine that it can't compete with the careful instruction setup and manipulation of a good programmer, and that even coding in C in a certain way stops the compiler from being able to organise such alternative sequences of instructions.
But on to the question of project size: above a certain size it becomes rapidly more difficult to program in MC than in high-level languages, so optimising the 90%-of-performance regions is best, like you guys say. But programming in MC can eliminate many errors, and what happens when programming in MC makes code 10 times smaller? So project size is everything. Even though much that was said about doing capture completely in MC was figurative, in reality it is only for embedded custom systems, where capture can be much smaller due to the lack of a major OS and a simplified, standardised hardware model ;) but still a major project. For a Windows PC: optimise crucial sections in MC, optimise the rest in C, as Rob says, with obvious results. I would like to do my OS in MC because a complete Windows-competing OS could fit in 1-10MB (data structures etc.), but this will require a team of programmers and lots of money to do in a one-year time frame. I have a new programming team strategy worked out (computer science) to produce the best quality with top MC programmers (who don't normally work in team mode), exciting stuff; there is a specific benefit, I could even patent the strategy, but in reality I might have to do it in C and then transfer it to VOS code, which is a high-level form of low-level language, so easier than C.
Wayne Morellini November 10th, 2004, 12:46 AM David, you mentioned the CineForm visually lossless codec for Bayer with 4:1 compression. I read your normal codec has a range of 6:1-10:1 for visually lossless; what is the range for the Bayer codec, and can we get to 10:1 reliably with multi-generation work on Bayer? I have worked out that 4:1 Bayer 720p is close to a 50Mb/s stream, which is close to 10:1 4:4:4 3-chip, which is very useful.
Obviously for in-camera compression we need true lossless, visually lossless, and down to the quality of an HDV2 50Mb/s stream for outside pro work (maybe equivalent to your codec at 10:1), for different jobs. Have you thought of licensing your codec to camera manufacturers (like Sumix, Drake, SI, Micron, Rockwell, Sony, etc.) as an FPGA design (which can be converted to a high-speed, cheap custom silicon core reasonably easily, if anybody wants to mass-market chips based on it)?
Thanks
Wayne.
Rob Lohman November 10th, 2004, 04:51 AM I think we are all on the same page in regards to hand crafting
certain pieces of code in assembly. This is only done after some
profiling and after implementation with some good testing to
make sure it increases the throughput enough. But as Rob's example shows, it is clearly a win in such demanding applications.
Compilers have gotten far, FAR better at optimizing code themselves, but in my opinion they still have trouble using registers and memory optimally, which is one of the major places to speed code up.
But as I said, I think we are all on the same page in that regard!
Wayne Morellini November 10th, 2004, 05:12 AM Don't forget intuitive leaps to accurately using odd, little-used instructions in interesting sequences that somehow speed up realtime performance, instead of the more obvious compiler sequences ;)
Rai Orz November 10th, 2004, 06:34 AM What is the best resolution for movies and cinema?
I think ARRI, as one of the best (film) camera makers since the beginning of cinema, knows the answer. So let's look a little bit inside their first digital cinema camera, the D20:
http://www.arri.de/news/newsletter/articles/09211103/d20.htm
In this newsletter are some details about the chip, pixels and data outputs. You can also read some things between the lines.
That's the goal, why not?
Rob Lohman November 10th, 2004, 07:36 AM There is also the Panavision Genesis beside the ARRI. I've done
a bit of information gathering:
Arri D20:
+ sensor: single 35mm 12 bit CMOS max 150 fps
+ sampling: standard bayer GR/BG
+ resolution: 3018 x 2200
+ framerates: 1 - 60 fps including 23.976 and 29.97 fps
+ shutter: mirror + electronic
+ mount: 54mm PL
+ internal bus: 10 Gb/s (gbit?)
+ power consumption: 54 W @ 24 fps (without viewfinder)
+ video mode:
- 2880 x 1620 sampling (16:9)
- 1920 x 1080 output (16:9)
- YUV 4:2:2 10 bit (single HD-SDI)
- RGB 4:4:4 10 bit (dual HD-SDI)
- Super 35 HDTV aperture size
+ film mode:
- 3018 x 2200 sampling (4:3)
- raw bayer output 12 bit
- up to ANSI Super 35 aperture
http://www.arri.de/news/newsletter/articles/09211103/d20.htm
http://www.arri.de/prod/cam/d_20/articles.htm
http://www.arri.de/prod/cam/d_20/tech_spec.htm
Panavision Genesis:
+ sensor: 35mm (probably 3 or foveon?)
+ sampling: full RGB
+ resolution: 12.4 mega pixel
+ framerates: 1 - 50 fps
+ 10 bit log output (1920 x 1080?)
+ 4:2:2 single HD-SDI out
+ 4:4:4 dual HD-SDI out
http://www.panavision.com/product_detail.php?maincat=1&cat=36&id=338&node=c0,c202,c203
Unfortunately there is almost no information available on the
technical specs of the Pana Genesis. Too bad. At least we know
that in film mode with the ARRI you are supposed to crop to your
favorite resolution. So we get:
16:9 => 3018 x 1698 (22.82% loss)
1.85 => 3018 x 1630 (25.91% loss)
2.35 => 3018 x 1284 (41.64% loss)
However, if they were to attach an anamorphic lens, creating a pixel aspect ratio of 1.78, you would get an output resolution of 3910 x 2200. This is already 16:9, so no loss for that; for the others:
1.85 => 3910 x 2114 (03.91% loss)
2.35 => 3910 x 1664 (24.36% loss)
The ARRI article had an interesting discussion on de-bayering:
If you look at the pixels you will notice that each red pixel, for instance, is surrounded by four green and four blue pixels. Also, because there is an overlap in the color spectra of red, green and blue, the available red value is at least in part the result of light in another color. Based on the knowledge of what the colors and values of those neighbor-pixels are, and based on the knowledge of the overlap in the color spectra, it is now possible to work out (reconstruct) what the green and blue values for that red pixel should be.
This process is more accurate than the interpolation used to increase the size (i.e. pixel count) of an image. In interpolation, completely new pixels are "made up" based on what the neighboring pixels look like. In Bayer data reconstruction we already have pixels, we just don't know two of the three color values. Since we do know the colors and values of neighbor pixels and since there is a color spectrum overlap, we can reconstruct the missing information very accurately.
Please note that the actual color reconstruction is more complicated than the method described here. For instance, to determine a given color value for a given pixel we use more than just the eight neighboring pixels. Furthermore, it is also possible to improve the result by incorporating certain assumptions about real world images in the algorithms (e.g. colors coincide at edges, etc.). We have simplified the process in this description to aid in understanding.
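To put the ARRI description into code terms, here is a deliberately simple bilinear-style reconstruction of one interior pixel from a GR/BG mosaic (the pattern listed for the D20 above). It is a sketch of what "using the neighbors" means, nothing more; as the article says, real de-bayering (ARRI's included) uses larger neighborhoods, the spectral overlap and edge assumptions, none of which are modelled here.

    #include <cstdint>

    // One-pixel bilinear reconstruction from a GR/BG Bayer mosaic
    // (row 0 = G R G R ..., row 1 = B G B G ...), 16-bit samples.
    struct Rgb { uint16_t r, g, b; };

    static inline uint16_t avg2(uint32_t a, uint32_t b) { return static_cast<uint16_t>((a + b) / 2); }
    static inline uint16_t avg4(uint32_t a, uint32_t b, uint32_t c, uint32_t d)
    {
        return static_cast<uint16_t>((a + b + c + d) / 4);
    }

    Rgb demosaic_bilinear(const uint16_t* img, int width, int x, int y)
    {
        // x,y must be at least one pixel away from the image border.
        const uint16_t* p = img + y * width + x;
        uint16_t up = p[-width], dn = p[width], lf = p[-1], rt = p[1];
        uint16_t ul = p[-width - 1], ur = p[-width + 1], dl = p[width - 1], dr = p[width + 1];

        bool evenRow = (y % 2 == 0), evenCol = (x % 2 == 0);
        if (evenRow && !evenCol)        // red site: G from edge neighbors, B from diagonals
            return { *p, avg4(up, dn, lf, rt), avg4(ul, ur, dl, dr) };
        if (!evenRow && evenCol)        // blue site: G from edge neighbors, R from diagonals
            return { avg4(ul, ur, dl, dr), avg4(up, dn, lf, rt), *p };
        if (evenRow)                    // green site on a red/green row
            return { avg2(lf, rt), *p, avg2(up, dn) };
        return { avg2(up, dn), *p, avg2(lf, rt) };  // green site on a blue/green row
    }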
Wayne Morellini November 10th, 2004, 08:33 AM Your quote above won't turn up in the forum quote reply function; must be a bug.
Recreate them accurately, sure! Let's see: spectral overlap = impurity (I've thought about this type of technique before). Taking a punt at solving the impurity and selling it as an advantage, "restoring missing colour": OK, but impurity also = reduced accuracy for the original colour. So how much impurity and how much original colour do you need? If the impurity is low it only gives a few bits of accuracy, not 12 bits, but enough to tell of a major swing and interpolate a more accurate replacement. But how much of the accurate primary colour for that pixel is left: 10 bits, 8 bits? And if you look at what they say, they estimate from image principles in nature (like Bayer reconstruction does, assuming chroma tends to follow), so it is good guessing based on approximation.
Now I would also like to say: let's see them do that with an SD resolution frame. As I said before, the increased resolution over 1080 hides much (artifacts and mis-approximation). As I said before, 720p is territory where pixels start blurring in with each other, so unless they look for it, the casual viewer may not mentally notice as much, even if the picture appears to be of less quality than an accurate 3-chip SHD picture. So you can say the impressiveness of the picture then becomes subliminal: noticeable, but not enough for most of the audience to put a finger on. After upscaling, these malformations could be smoothed out, making the picture a little softer, but imperfections/details start to disappear. That is why I wanted to go to 3-chip 720p as a minimum instead of SD, or 2160p in Bayer.
The truth of the pixel resolutions of the sensors they are using is that they might be more about a technological limitation and a marketable advantage over film than about real audience limits. I think with three chips you can get more light and need less resolution, because it is probably better to upscale the resolution. I'm going to take a punt: get three SD PAL chips and offset them on a prism to obtain a calculable HD image ;). How much worse would this be than one-chip Bayer?
Maybe so :)
Obin Olson November 10th, 2004, 09:09 AM I hate to say it Rob, but if you're supporting the 1300 I really think it is a waste of your time...that chip just can't cut it for a professional camera...the smear is that bad I think. I would say keep going though, because the 3300 is not much to change code-wise to make it work...just WAY more data :)
Rob have you thought about sending the 1300 back and getting a 3300rgb?
Steve Nordhauser November 10th, 2004, 09:32 AM Obin on the 1300:
You could be right. I defer to people who know the application on this one. Maybe broadcast?
Wayne on color overlap:
Go to any color camera or sensor datasheet and you should see a spectral response curve. This is the cutoff for the bandpass color filter array (CFA) that is put on the sensor for single chip Bayer color. As in audio filtering, the rising edges are not vertical, they slope. This causes an overlap at the R-G and G-B borders of the wavelengths - the color impurity.
You probably need this, now that I think about it. Otherwise, a green object would look the same green (just different intensities as the filter response changes) as it changed in wavelength until it crossed over a filter boundary. Hmm, so it is this overlap that allows you to have a continuous color response. You learn something new every day.
Obin Olson November 10th, 2004, 10:04 AM I *should* have 8bit 1080p captures today...I will keep everyone posted
Wayne Morellini November 10th, 2004, 10:10 AM Sorry Steve, I thought they were talking about something else. About the 1300: is there a way to get rid of the smear, etc., by not adjusting the gain too high or something? Will there be a new sensor version?
David Newman November 10th, 2004, 10:31 AM <<<-- Originally posted by Wayne Morellini : David you mentioned the cineform visually lossless codec for Bayer with 4:1 compression. I read your normal codec has a range of 6:1-10:1 for visually lossless, what is the range for the bayer codec, can we get to 10:1 reliably with multi generation on bayer? I have worked out that 4:1 bayer 720p is close to a 50Mb/s stream which is close to 10:1 4:4:4 3 chip, which is very useful.-->>>
In our tests, visually lossless Bayer ranges between 4:1 and 6:1. For 1280x720@24p 10-bit that is between 36Mb/s and 56Mb/s (data rates double for 1920x1080p24), which allows you to record 2-3 hours on a 60GB laptop drive.
We recently put up a quality analysis for our codec here: http://www.cineform.com/technology/quality.htm
We are moving forward to productize the Bayer version of the CineForm codec. And yes we have thought about the various licensing opportunities. However, these developments have been delayed due to the boom in the HDV side of our business (the Sony FX1/Z1 is causing a lot of interest in our technology.) For HDV work our compression technology has been licensed by Adobe and by Sony (announced today). Those two companies have been keeping us very busy. :)
Rob Lohman November 10th, 2004, 10:48 AM David: is 4:1 - 6:1 compared to the original data stream (ie, 16
bits per color sample) or the packed sample (ie, just the 10 or
12 bits)?
David Newman November 10th, 2004, 11:03 AM Rob,
4:1 to 6:1 is compared to the raw signal that we compress, which is currently 10 bits per Bayer element. The compression ratio would seem higher if it were compared to unpacked 16-bit (350Mb/s), or slightly higher still for packed 12-bit (260Mb/s).
As for the 12-bit vs 10-bit issue, which I'm sure will come up: the compression does a 12-bit to 10-bit conversion with a user-controllable gamma curve (much like a good HD camera does from 12/14 bits to 8 bits). The 10-bit data provides an excellent signal for downstream color correction without banding.
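For anyone wanting to sanity-check these figures, a back-of-the-envelope sketch (my arithmetic, not CineForm's numbers): Bayer data is one sample per photosite, so the raw rate is simply width x height x bits per sample x frame rate, divided by the compression ratio.

    #include <cstdio>

    // Rough Bayer data-rate arithmetic in decimal megabits per second.
    double bayer_mbps(int w, int h, double fps, double bits_per_sample, double ratio = 1.0)
    {
        return w * h * bits_per_sample * fps / ratio / 1e6;
    }

    int main()
    {
        std::printf("720p24  16-bit unpacked : %6.1f Mb/s\n", bayer_mbps(1280, 720, 24, 16));
        std::printf("720p24  12-bit packed   : %6.1f Mb/s\n", bayer_mbps(1280, 720, 24, 12));
        std::printf("720p24  10-bit at 4:1   : %6.1f Mb/s\n", bayer_mbps(1280, 720, 24, 10, 4));
        std::printf("720p24  10-bit at 6:1   : %6.1f Mb/s\n", bayer_mbps(1280, 720, 24, 10, 6));
        std::printf("1080p24 10-bit at 4:1   : %6.1f Mb/s\n", bayer_mbps(1920, 1080, 24, 10, 4));
        return 0;
    }

That reproduces the roughly 350Mb/s unpacked and 260Mb/s packed figures for 720p24 and puts the 4:1-6:1 compressed rates in the 37-55Mb/s range quoted above.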
Régine Weinberg November 10th, 2004, 11:06 AM G'day. As everybody knows, Kreines is using 16 Toshiba disks like the ones found in the iPod. It is an IDE array; please go here:
http://www.acnc.com/pdf/JS_IDE_308_316S.pdf
Reading almost at the bottom of the page: they are using 16 IDE controllers, and the host interface is two Fibre Channel links. Well, it's bulky, but that's the way to get the bandwidth with IDE disks, as the 2.5" Toshiba is an IDE/Ultra ATA 133 drive. Voila.
Rob Scott November 10th, 2004, 11:06 AM Obin Olson wrote:
supporting the 1300 I really think it is a waste of your time
I realize the smear is ugly, but I thought it might be acceptable for a low-budget, not-quite-professional project. But then again, since the smearing affects image quality significantly, the 10-bit depth and lack of compression artifacts don't matter as much. Perhaps an HDV camera would be a better choice at around the same price range. So ... you're probably right.
Jason Rodriguez November 10th, 2004, 11:57 AM David,
What framegrabbers are you going to support?
Are there any plans for gigabit ethernet support (Pleora's framegrabber)?
David Newman November 10th, 2004, 12:05 PM Jason, these details are still being discussed.
Rob Lohman November 10th, 2004, 12:14 PM Thank you for your information David.
Rob: have you seen this smearing in person on your camera? I'm
still a bit surprised by it since it looked so great earlier on (with
Obin's footage).
I forgot, but does the new Altasens have a rolling shutter or not?
Jason Rodriguez November 10th, 2004, 12:29 PM yes, Altasens is rolling shutter also.
Rob Scott November 10th, 2004, 12:34 PM Rob Lohman wrote: have you seen this smearing in person on your camera?
Yes, it usually occurs when there is a "hotspot" in the image. You can see an example in my blog (http://www.obscuracam.com/wiki/static/DevBlog2004July.html) -- look to the left of the blue blob, and to the left and right of the bottom of the picture frame.
Aaron Shaw November 10th, 2004, 01:35 PM Altasens is rolling shutter? That's too bad really.
Obin Olson November 10th, 2004, 06:36 PM Ok so it looks to me that 67mhz is enough to keep the rolling shutter problem at bay:
www.dv3productions.com/pub/1080p.mov
Jason Rodriguez November 10th, 2004, 07:02 PM Just curious Obin,
How come your footage is so "jumpy"? Is this a result of the high-speed pixel clock, or are you dropping frames? It always seems as though everything is moving so fast and there's no motion blur, which contributes even more to the "jumpy" perception. So just curious what might be causing that, and is this the way it's supposed to look?
Obin Olson November 10th, 2004, 07:34 PM It's trying to record at 24fps and can't, so it's jumpy and weird...that is the problem we are having at the moment...CPU load sits at 100% all the time when we record. I think this is causing dropped frames. The weird thing is CPU load is 10% with black and white preview @ 24fps 1920x1080 1/4 quad pixel readout!!! Our frame size is 2 megs a frame @ 8-bit; that would be about 48MB/sec for the twin-disk save @ 24fps. It can't be the disks backing up; both disk drives run 30MB/sec transfers all day....arrgggg
The above test is to show the rolling shutter at 67MHz...not bad I say. I would shoot with that.
Eric Gorski November 10th, 2004, 07:37 PM are there any cameras that are global shutter? and, er.. isn't progressive scan ideal? or is progressive scan possible on a cmos?? and.. i guess you still have a shutter with progressive scan.. ?? ack.
Wayne Morellini November 11th, 2004, 12:21 AM Very good. As long as it is a good alternative to an MPEG2 50Mb/s stream (or was that 36Mb/s), I think this will give everybody some excitement.
From what you said, the compression for Bayer is 4-6:1 and for 3-chip 4:4:4 it is 6-10:1, is that right?
How does your codec compare to the Avid codec used in the Ikegami camera? It is something like 145-220Mb/s. Is it true lossless, or just MPEG2-like with about the same editable quality as your CineForm codec (but at much bigger data rates)?
Rob,
The hotspot: can you get rid of it by bringing it just below the max pixel value, by reducing the gain or putting it in reverse, or just using ND or the iris, or is it a contrast thing with the surrounding pixels? My reasoning is that if it is the top 5%, then you can adjust the camera to shoot within the acceptable range to avoid the problem, then stretch it out in the colourisation procedure.
Aaron,
The Altasens deals with the rolling shutter problem by speeding the readout up to around 1/480th of a second, or something like that, which handles it very well.
Eric,
What we get is progressive images, but the rolling shutter reads out at the same time as it is capturing, and so slowly that the top of the image is older than the bottom, producing a slant. The Altasens speeds this up (as Obin is also trying to do) to reduce the slant, and at 1/480th of a second, if something moves fast enough to produce a decent slant, people should have a hard time noticing/tracking it anyway. So they won't think the camera is drunk ;)
Obin,
I am taking it that adding together the CPU consumption of the separate programs, run separately, gives much less than when they are run together. To take a guess, it looks like what you are suffering from might be the two programs competing with each other, maybe or maybe not for the same resources, but they somehow interfere with each other enough to stall things (or cause the other program to wait) and drive up CPU load.
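One common pattern for exactly this kind of interference, sketched purely as an illustration (this is not Obin's code, and nothing in the thread confirms this is the actual cause): decouple the capture path from the disk writer with a fixed pool of pre-allocated frame buffers, so the grabber never waits on the disks and a slow write shows up as a counted dropped frame instead of a stalled capture loop. The frame size and pool depth are assumptions.

    #include <condition_variable>
    #include <cstddef>
    #include <cstdint>
    #include <mutex>
    #include <queue>
    #include <vector>

    // Fixed pool of frame buffers shared between a capture thread (producer)
    // and a disk-writer thread (consumer).
    struct FramePool {
        static const size_t kFrameBytes = 1920 * 1080;   // 8-bit Bayer frame, about 2 MB
        std::queue<std::vector<uint8_t> > filled;
        std::queue<std::vector<uint8_t> > free_;
        std::mutex m;
        std::condition_variable cv;

        explicit FramePool(size_t depth)
        {
            for (size_t i = 0; i < depth; ++i)
                free_.push(std::vector<uint8_t>(kFrameBytes));
        }

        // Capture thread: grab an empty buffer; drop the frame if none is free
        // rather than stalling the grabber.
        bool acquire(std::vector<uint8_t>& out)
        {
            std::lock_guard<std::mutex> lk(m);
            if (free_.empty()) return false;             // underrun = one dropped frame
            out.swap(free_.front());
            free_.pop();
            return true;
        }
        void submit(std::vector<uint8_t>& frame)         // hand a filled frame to the writer
        {
            { std::lock_guard<std::mutex> lk(m); filled.push(std::move(frame)); }
            cv.notify_one();
        }

        // Writer thread: block until a filled frame is ready, write it, recycle it.
        void take(std::vector<uint8_t>& out)
        {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [this] { return !filled.empty(); });
            out.swap(filled.front());
            filled.pop();
        }
        void recycle(std::vector<uint8_t>& frame)
        {
            std::lock_guard<std::mutex> lk(m);
            free_.push(std::move(frame));
        }
    };

At the numbers quoted above (about 2MB per frame at 24fps, roughly 48MB/s split over two 30MB/s drives) even a modest pool depth gives the writer a comfortable margin, and a dropped-frame counter makes it obvious whether the disks or the CPU are the real bottleneck.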
Marin Tchergarov November 11th, 2004, 05:44 AM <<<-- Originally posted by Wayne Morellini : Yes that is the SD Indie camera, in these threads we are concentrating on the HD indie camera -->>>
Hi Wayne!
Methinks the thread of Juan P. Pertierra is a "must read and copy ideas" thread for everyone here!
I can't call Juan's thread SD-only, since the next camera on the "surgeon's table" is the Canon XL2.
More: his method is applicable to any similar device (ADC pin-outs visible).
More2: "direct to disk" recording is a much better solution than any mini/micro computer, since you avoid dealing with the OS, drivers, bug**ry chipsets (SIS ;) - my own experience), slow CPUs, or fast but energy-inefficient CPUs... and so on.
The major problem with USB2 (his device is USB2) is bandwidth. However, I believe Juan will find a faster solution when needed (for the Canon XL2).
Steve Nordhauser,
I was very pleased with your announcement of GigaBit-out cameras in SI's future production.
I'm not sure GigaBit-equipped removable HDDs exist. If they do: food for thought (Yoda mode :))
If they don't, maybe you can create a small GigE->SATA (or PATA) adapter? Well... maybe I'm totally wrong about GigE and HDD compatibility; the main idea (from Juan's thread) is to use a very fast interface to write directly to removable HDDs...
Just my 2 cents...
There are two things more - camera control and a viewfinder... I don't know...
Marin
Rob Lohman November 11th, 2004, 06:14 AM Marin: most people here are following Juan's efforts. I've spoken to Juan about the way he does direct-to-disk recording and he has some FPGA appliance in there. The datarates of the cameras he is modifying do not come close to the datarates we are getting, and we also have a more complex interfacing system, I believe. Next to that, we don't have any people here thus far with good enough FPGA knowledge to guide us if we go that way, so for now we are going this way.
Let me make a list of problems with FPGA:
1. it would need cameralink or gigabit ethernet in (Juan gets other kind of signals)
2. it would need firewire 800 out (which Juan's FPGA solution seem to have)
3. we need a viewfinder/monitor out (Juan already has this since it is on the host camera), since there is no host camera
4. we need controls to choose settings (Juan already has this since it is on the host camera), since there is no host camera
5. we would need some form of RAID support, either in the FPGA (either make it ourselves or have it built in) or in the attached harddisks (ie, the Lacie RAID 0 drives??)
So unless someone has some good answers FPGA wise for these
kind of "issues" I don't see this happening for now.
Marin: thanks for your thoughts though and welcome aboard DVInfo.net!