thanks obin for posting some footage. keep it coming.
|
I was thinking about my idea of using one computer for preview purposes and one for capture. Obin, I'm sure you guys will get your program optimized to preview during capture no problem, but would it give us more leeway to design computers with dual processors, so that one box has a processor each for capture and preview (or is processing not the issue)?
|
hmm.. the pixel clock is (or was) 60 MHz, the shutter I think was 1/120 sec or maybe 1/60 sec, and the gain was about 3 dB
That is a really bad Bayer filter, just the basic linear Bayer type. One problem is that when you expose like I did and then BOOST the gamma in post, the range for color correction in Combustion is very small because of all the gamma boosting that has already happened.. this is why we need more than 8 bits for an image that can be pushed really hard for color "looks" |
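A small illustration of the point above (my own sketch, not from the thread): when you store only 8 bits and then boost the gamma in post, the shadows have far fewer distinct codes to spread out than a 10/12-bit source would give you, which is where the banding and the tiny correction range come from. The gamma value and the 10% shadow cutoff are arbitrary choices for the example.

```c
/* Minimal sketch: lift shadows with out = in^0.5 and count how many distinct
 * 8-bit output codes the bottom 10% of the range produces when the source is
 * 8-bit vs 12-bit. Values are illustrative only. */
#include <stdio.h>
#include <math.h>

static int distinct_codes(int src_bits)
{
    int levels = 1 << src_bits;
    int cutoff = levels / 10;             /* bottom 10% = deep shadows */
    int seen[256] = {0}, count = 0;
    for (int v = 0; v < cutoff; v++) {
        double x = (double)v / (levels - 1);
        int out = (int)(pow(x, 0.5) * 255.0 + 0.5);   /* gamma lift, 8-bit out */
        if (!seen[out]) { seen[out] = 1; count++; }
    }
    return count;
}

int main(void)
{
    printf("8-bit source : %d distinct shadow codes after the lift\n", distinct_codes(8));
    printf("12-bit source: %d distinct shadow codes after the lift\n", distinct_codes(12));
    return 0;
}
```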
Not a bad idea, Rob.... so have 2 main boards inside the case, one for capture and one for preview? Split the CameraLink signal that is coming into the systems?
|
Why do you need two computers, one for digitizing and another for preview?
That is totally cumbersome, and very impractical IMHO. Might as well just lug a big cart behind you with everything in it. Of course we have no way of knowing whether the problem is performance or whatnot, since I have absolutely no clue what Obin is running. If he's got a Pentium II 400MHz with 128MB of RAM and a couple of PCI slots, then yes, there are going to be performance problems. With a Pentium M 2.0GHz, 1GB of RAM, PCI-X, Intel Extreme Graphics 2, etc., I'm not really sure there are going to be problems, especially since all we're doing is a data dump to hard drive, not compression or anything else like that. Frankly, as of right now there's no way to tell what the problem might be, because nobody's willing to list their configurations here on the board. |
sob sob jason ;)
Hey, I have a 3.06GHz P4 with 2GB of RAM and two IDE disk drives, one 7200rpm and one 5400rpm, both set as capture disks, running the Epix 32-bit framegrabber card in a standard 32-bit PCI slot and an ATI AGP card with 128MB of RAM on it. I will be capturing 8-bit from the framegrabber later on today.. I will post results when that is done |
I think I feel the same way as Jason about it. It just should be possible to do it all in one system (even with some basic lossless compression), at least for 720p. Obin: do you know if your programmer is working in just C(++) or is he also doing assembly/MMX/SSE etc. (i.e., handcoding for the CPU)? We are for Obscuracam for some important routines. I did some demo coding back in the days when the 386 was a hot CPU, and a LOT of speed can be gained with careful handcrafting of processor instructions, algorithm optimization and whatnot. Even in the case where a full resolution preview might not be doable, you could think of the following:

1. Display the preview at the maximum framerate the spare CPU cycles will support (so it might be lower than the real frame rate, which is not too much of a problem for preview; Rob S. is doing this now, for example).
2. Do a full resolution de-bayer when the camera is not recording (so you can more easily set critical focus) and switch to half resolution (i.e., you don't need to de-bayer) as soon as you hit the record button.
3. Do the preview during recording in black and white (no need to de-bayer).
4. Check multiple ways to construct 8 bits from 10/12 bits (without losing the ability to see the dynamic range) and see which is fastest for preview.

You can combine all of this together of course to get some very significant speed increases for the viewfinder system without getting a camera that you can't work with, and hopefully gain a few cycles to do things like zebra striping / histograms (although these could perhaps be a pre-recording-check-only function as well). |
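As a rough sketch of points 2-4 in the post above (assumptions mine, not the actual Obscuracam code): a preview path that skips the de-bayer entirely by taking one green sample per 2x2 Bayer cell and shifting it to 8 bits, giving a half-resolution grayscale viewfinder image. The GR/BG layout and buffer names are assumptions.

```c
#include <stdint.h>
#include <stddef.h>

void preview_halfres_mono(const uint16_t *bayer,   /* 10-bit data, one sample per photosite */
                          uint8_t *preview,        /* (w/2) x (h/2) grayscale output        */
                          int w, int h)            /* full sensor width/height (even)       */
{
    for (int y = 0; y < h; y += 2) {
        const uint16_t *row = bayer + (size_t)y * w;
        uint8_t *out = preview + (size_t)(y / 2) * (w / 2);
        for (int x = 0; x < w; x += 2) {
            /* GR/BG: the green photosite of this 2x2 cell is at (x, y). */
            out[x / 2] = (uint8_t)(row[x] >> 2);   /* 10 bit -> 8 bit */
        }
    }
}
```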
Hmm . . .
Thanks for the info, Obin. Seems like a pretty good system to me. Maybe the 5400RPM drive isn't enough for capture, but processor-wise you should have plenty of juice for 12-bit 1920x1080, especially now with that ATI card. So you're only able to get 8-bit? Or is that a problem with the program? Edit: Never mind, 12-bit is too much for that framegrabber; in fact 10-bit is probably too much also, since there's no framebuffer on the card, so each 10-bit frame actually takes up two bytes per pixel rather than one. |
Isn't the 32-bit PCI bus the problem here, Obin? You are getting a massive amount of data in at 1920x1080 *and* need to shift this data to the harddisks.... Of course in the end it all boils down to programming efficiency (which is the major component here), which we of course cannot see for your program, Obin (I'm not saying your programmer isn't doing a good job, but basic programming has nothing in common with hand-crafted speed optimizing, for example). |
Quote:
One other thing I'm wondering is whether you could do a 1/4 preview and then apply a LUT to just those pixels for display, so you're not trying to do a transform on 2 megapixels @ 25fps. I think that'd be too much for any system (my dual G5 can't do real-time HD effects like that either, so I definitely wouldn't expect that out of a small computer). So with the 1/4 preview you can now see what's happening dynamic-range-wise with the LUT, and that's only performing an operation on 960x540, which should easily be feasible, especially on a new Pentium M or Pentium 4. |
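A minimal sketch of the LUT idea (my illustration, not code from the thread): build a 1024-entry table once, so the per-pixel work on the 960x540 preview is a single table lookup. The display gamma and buffer names are assumptions.

```c
#include <stdint.h>
#include <math.h>

static uint8_t lut[1024];

void build_preview_lut(double gamma)          /* e.g. gamma = 2.2 for display */
{
    for (int v = 0; v < 1024; v++)
        lut[v] = (uint8_t)(pow(v / 1023.0, 1.0 / gamma) * 255.0 + 0.5);
}

void apply_preview_lut(const uint16_t *quarter_res,  /* 960x540, 10-bit values */
                       uint8_t *display, int n)      /* n = 960 * 540          */
{
    for (int i = 0; i < n; i++)
        display[i] = lut[quarter_res[i] & 0x3FF];    /* one table lookup per pixel */
}
```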
Quote:
If he's using a good southbridge, there should be no problems, at least if the IDE is on-board. Hub Link 1.5 from Intel runs at 266MB/s full duplex, so there shouldn't be any problems going back and forth from main memory, if that's what he's using (which I'm assuming he is). Also, the IDE bus should go straight into the southbridge on a separate channel from the PCI bus to prevent more bottlenecking there. If it's a good southbridge (more than ICH4, like Hance Rapids, ICH5 or ICH6), then there shouldn't be any problems, especially if it's a southbridge that supports PCI-X (which Obin's doesn't), because then it has to have a guaranteed bandwidth of at least 500MB/s+. |
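For what it's worth, a rough back-of-envelope check of the bus numbers being discussed (my arithmetic, assuming 24 fps): even 10/12-bit capture stored as two bytes per pixel sits under the theoretical peak of a 32-bit/33MHz PCI bus, but not by a comfortable margin once disk traffic shares the same bus.

```c
#include <stdio.h>

int main(void)
{
    const double fps = 24.0;
    const double pixels = 1920.0 * 1080.0;

    double mb_8bit  = pixels * 1 * fps / (1024 * 1024);   /* ~47 MB/s */
    double mb_10bit = pixels * 2 * fps / (1024 * 1024);   /* ~95 MB/s */

    printf("8-bit capture            : %.1f MB/s\n", mb_8bit);
    printf("10/12-bit (2 bytes/pixel): %.1f MB/s\n", mb_10bit);
    printf("32-bit/33MHz PCI theoretical peak: ~133 MB/s (shared with disk traffic if IDE sits on PCI)\n");
    return 0;
}
```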
I think everyone is hitting around the right area. You need to pay attention to the system architecture. It is even worth looking at the chip sets. Some of the more recent Intel chip sets have a southbridge (ICH5-R) with a two drive SATA RAID built in. Sure you save $100 on the controller, but more importantly, the disk data (assuming two drives is enough) never hits the PCI bus. You can ignore this in the 64 bit world but it is very important to a 32 bit system.
Rob L. is correct (at least from my experience) that a little bit of assembly can go a long way. In these cameras 90% of the CPU time is probably taken in <1% of the software. Assuming that you use DMA to move blocks of memory to the drives, the tightest loops in your software need to be examined: the Bayer preview, the compression (if any), any real-time video algorithm (white balance). Since that is where the CPU spends its time, a 5% algorithm savings (time, not size) is almost equal to a 5% faster CPU. You might need to understand the CPU caching system also. You can prototype it all in a high-level language, but the optimization should include a long stare at the loops in the code. |
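In that spirit, a minimal profiling sketch (illustrative only, not anyone's actual code): time the suspect inner loop directly with the Win32 high-resolution counter instead of trusting overall CPU load. process_frame() is a hypothetical stand-in for the Bayer preview, packing or compression routine being examined.

```c
#include <windows.h>
#include <stdio.h>

extern void process_frame(void *frame);   /* hypothetical routine under test */

void time_loop(void *frame, int iterations)
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);

    for (int i = 0; i < iterations; i++)
        process_frame(frame);

    QueryPerformanceCounter(&t1);
    double ms = 1000.0 * (double)(t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart;
    printf("%d frames in %.2f ms -> %.2f ms/frame\n", iterations, ms, ms / iterations);
}
```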
Steve: exactly. It's funny you mention the points you do, since the packing and some parts of the preview are already in handwritten assembly at this moment (I'm working on converting the compression to assembly as well). Quote:
10 0000 0011 (10-bit color = 515) would end up being: 0000 0011 (8-bit color = 3). I don't have to tell you this results in a massive color and/or brightness change for the pixel. I did it this way once during some testing with Rob S.'s RAW files and got all kinds of junk in the frame (which took me a while to figure out). This is one of those places where you can easily go wrong with an algorithm implementation. It would be better to chop to 8 bits and set the highest bit of the result if there was a value in the just-chopped byte (which is what I did to check the RAW images from Rob S.). However, you lose the 2 bits (or 4 for 12-bit) of extra latitude during preview, so things will look washed out, which you don't want. It would probably be better to shift the 16 bits either 2 or 4 bits to the right. Or better yet, use a LUT (lookup table) indeed. I do believe Rob S. is using a LUT for preview in his current version of Obscuracam. That preview is indeed at half resolution (vertical & horizontal) to reduce processing time and avoid having to de-bayer, so that fits neatly into the points I made above. Although it might be interesting to have a full color/B&W preview before recording, or a full B&W preview. On my 3.2 GHz Pentium 4 CPU I can view 1080i in realtime with the WMV HD codec, I think. So I think much must be possible; it all boils down to proper design, testing, monitoring and profiling to see what can and needs to be (hand) optimized. |
Rob,
I'm hoping that was "lop off the bottom two bits" as in a shift right by 2 (or an integer divide by 4):
10 0000 0011 (10-bit color = 515, or 515/1024)
would end up being:
1000 0000 (8-bit color = 128, or 128/256) |
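A tiny sketch of the two conversions being compared (values taken from the example above): keeping only the low byte of the 10-bit sample mangles the pixel, while a shift right by 2 preserves the relative brightness.

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t sample = 0x203;                  /* 10 0000 0011 = 515 out of 1023 */

    uint8_t wrong = (uint8_t)(sample & 0xFF); /* lop off the HIGH bits -> 3     */
    uint8_t right = (uint8_t)(sample >> 2);   /* lop off the LOW bits  -> 128   */

    printf("10-bit sample %u: low-byte chop = %u, shift-right-2 = %u\n",
           sample, wrong, right);
    return 0;
}
```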
thanks for the info guys..I am sending all this to my code writer for review..
We will look at things in detail once I get a board to him for testing |
Steve: that was my middle point. At first I just lopped off the complete high byte, giving the problems. I now see Jason said LSB and not MSB, my bad (I confused it with my own mistake). However, it still gives you a bad representation of the dynamic range, which a LUT should be able to fix. |
I just saw this tiny quiet pc on gizmodo:
http://www.akibalive.com/archives/000576.html |
Most of these issues are things I discussed 6 months ago. Rob, it is good to see that you are taking such care handcrafting MC; I didn't realise that you have that history with the 386. Something I can tell everybody is that most programmers don't have these skills; they might know enough about C to get it running reasonably, but in apps like this that is NOTHING. I finished well ahead of the rest of my class at uni and I would not trust most of them to do a job like this. I don't know the status with CPU load, but just recording is pretty low bandwidth, and even simple preview is; the problems you will get are in handling the programming architecture (not just the architecture itself) efficiently, the same with the OS, and stalling the CPU/memory systems. Doing any of these wrong will stall the hardware, OS and/or CPU and appear as massive CPU load. Anybody that depends on simple CPU stats will probably be scratching their heads trying to figure out what is wrong and where all the CPU is going. Do the calculation (well, in MC you can calculate reasonable cycle consumption), trace through, and test-point the program, and you will find the stalling regions. There are lots of realtime-oriented drivers and BIOSes that are buggy (let alone OSes), which is why there are so many updates (for years), so even those high-end programmers can't get it right. I could go to plenty of programming companies and ask them to write something like this; a lot would say fine, they can do it, like it isn't much of a worry, but few would be capable of getting the last 80% of performance. I am brutal, but I am sick of this stuff stuffing up the computer industry for everybody. One reason for my OS project is to permanently fix ALL these problems; to me many computer companies act like idiots drunk out of their minds. There is no reason that computers and apps couldn't eventually be made to operate as reliably as most microwaves, washing machines, TVs, and DVD players (well, good ones). In the embedded/consumer electronics realtime industry, the reliability of PCs is held with a bit of bewilderment, I'm afraid.
ggrrrrrrrrrrrrr.................................. If you can, look up and download the Stella (Atari 2600/VCS) programming manual and the Lynx programming manual, and look at what emulator writers do. That strips away all the OS/BIOS and leaves you with bare hardware, and you will find out why it is so difficult to program efficiently. A lot of it involves minute timing differences as circuits stabilise, and windows of opportunity; doing it wrong (or if somebody else makes a compatible circuit) can wreck performance or results. On a complex circuit, blindly programming can set up a chain reaction of performance failures through related circuits (it is not object-oriented programming). Running it on another chipset/mainboard/processor can do the same, as the underlying timing of the "compatible" hardware may be different. So it is best to pick good hardware, with good drivers (like that GigE driver Steve mentions that DOESN'T come with Windows) and abstraction layers (like DirectX plus drivers does for different video hardware) between you and all the different hardware versions, as much as is reasonable, then set up the OS/system properly, and program carefully to these known-to-be-good software layers. The good drivers and abstraction layers should take care of most of the timing (though you might get a bit more out of particular hardware by bypassing the abstraction layer, that probably won't help a system with different versions of the hardware). Never assume that the abstraction driver is perfect, or even good; some may be, others need to be tested etc. In this situation all you need is to follow this simple formula with expertise: as long as you code C very efficiently with reference to hardware issues as well, and maybe do the critical sections in MC, you should be able to get within 80% of the maximum performance on fast systems (probably much less on slow systems that would otherwise do the job). |
Ronald, yes the Filmstream is the SD indie camera, versus our HD projects.
What I find interesting about the 2K or 4K film scan: what happens when they scan 2.35:1? Is the 2K fitted to the frame, or is it still a 1.78:1 frame (meaning the actual frame is now only about 1.5K pixels of resolution)? Do they go to 4K, or do they use 4K for 1.78:1 as well? I think it would be a matter of quality and film stocks as well. I see a lot of stuff at the cinema that has grain that even my poor vision can pick up, even without glasses (I even picked up some grain in IMAX scenes). So I think 4K or even 8K would be better. But what about just shooting at 2K and resolution-upscaling to 8K ;) I think it is good for our purposes, but digital theatres may only accept 2K footage :( . Now, Barco CRT projectors had purer blacks than DLP, but were also weaker in brightness (plus maintenance costs more). With newer CRT phosphor technology I don't know. But one technology to watch (well, there are a few) is the laser CRT (L-CRT and many other names) invented in Russia. I researched this because I wanted to make a small laser display device (much too big at the moment); it seems to have the purity and performance to possibly beat DLP. There was a picture of a demo with a guy holding one of the tubes standing in front of a 50-foot test pattern being projected; nice (a little washout ;). What happens now is that film has to be transferred to digital and digital back to film to show it to everybody :( costly. I think in the future there might be a market for somebody to open up cinemas (or just convert an existing independent theatre) to project digital only. All the independent film makers can get together a distribution chain and these cinemas show their work. Then there is no transferring to film, unless a film is popular and a film company wants to distribute it to film theatres. Most indies will not get much of a screening at conventional chain theatres, because they want blockbusters (a small indie theatre maybe), so they have to go somewhere else. In my local city we had two theatres, one owned by the biggest chain in the country, the other an independent ex-vaudeville theatre (one of the nicest I have ever been in). They couldn't get the latest blockbusters until after the major theatre had finished with them, so they went to old films and small films, and closed. A rival theatre chain wanted to open theatres a couple of times; the first theatre chain protested (from vague memory) that there were too many theatres, and they also converted to 5 screens. But then they opened up another 8 screens very nearby, and then 6 or more at one of the locations the cheaper rival wanted to open up. Now their theatres are not that full, and the originally redeveloped 5-screen site is now used for small films and some independent festivals. In this situation there is small potential for people to show non-mainstream films, as more and more independent cinemas close down. With an indie distribution company and website it would be much easier, as most of the marketing costs can be replaced by site comments and reviews, and local theatre promotion. Now we get to the interesting bits. People (say the indie crowd) can preview on the internet (low-res version, pay-per-view for broadband), then vote and comment. Theatre owners go to the indie distribution site, view the comments, find a good movie and show it. If people want the full version they go to a theatre listed at the site, or order the full copy-protected version on HD/DVD from the site.
It will send the studios crazy, as they lose control and the indies gain control (and cost efficiency from a few indie-owned distribution sites), and indies get a low-cost marketing replacement. I found some of it hard to follow, but I hope this is what you were after. <<<-- Originally posted by Ronald Biese : Dear Wayne, hm that is great, a bit more resolution but no compression at all and 4:4:4 out, that is just perfect for PAL or NTSC. With an Argus or so, great, the TV indie cam up to now... Hurrah.. voila. -->>> Yes, that is the SD indie camera; in these threads we are concentrating on the HD indie camera. |
Dear Wayne, Primo, I got this email:
Ronald; I spoke with JVC regarding the camera you inquired about. It has not become available yet due to a delay from the manufacturer of the CMOS chips, Rockwell. They are not expecting delivery until January 2005. Targeted list price is around 20K U.S. dollars. Best regards, Tom. So: a Bluefish dual HD-SDI card (not cheap), an adapter for Nikon or PL mount lenses, a DIY shoulder pad and some connector for a Bauer or any other Ni pack. Bluefish has Linux support, so the NLE could be free, and Bluefish can save TIFF, AVI or so, plus a SCSI array controller and I guess 4 big SCSI 320 disks; voila, a powerstation as a backpack. Me, as a bit outdated but still running hardcore realtime nut: why not do something like Kreines and invent the real thing again? A tiny board, a Gigabit Ethernet controller, and piggyback a PC/104 board that has some small realtime OS kernel in ROM doing nothing but waiting for what comes from the GigE controller and writing it to a stack of Toshiba 2.5-inch disks. On the "host", where the GigE controller sits in the only PCI slot, you run whatever you like, and from there you can send a message to the GigE controller that starts or stops the piggyback array controller. On the host something could also run to control the camera, so the host is totally independent from the array controller. The array controller is nothing more than a controller like in a dishwasher: start, stop, and reading the GigE. It has memory so that it can hold about 2 to 4 seconds of images in a loop, so even if something happens before recording it's not lost, as the camera is up and running, like EditCam does. The app running on the piggyback is nothing but a packet sniffer doing I/O operations to write the data to the disk array. If it sounds totally stupid, send me an email.
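A minimal sketch of the pre-record loop Ronald describes (my illustration; the sizes and the disk-writing helper are assumptions): keep the last few seconds of frames in a circular buffer and flush them to disk when recording starts, so nothing just before the trigger is lost.

```c
#include <stdint.h>
#include <string.h>
#include <stdlib.h>

#define PREROLL_FRAMES 96            /* ~4 seconds at 24 fps         */
#define FRAME_BYTES    (1920 * 1080) /* 8-bit frame for the example  */

extern void write_frame_to_disk(const uint8_t *frame);  /* hypothetical */

static uint8_t *ring[PREROLL_FRAMES];
static int head = 0, filled = 0;

void preroll_push(const uint8_t *frame)       /* called for every captured frame */
{
    if (!ring[head]) ring[head] = malloc(FRAME_BYTES);
    memcpy(ring[head], frame, FRAME_BYTES);
    head = (head + 1) % PREROLL_FRAMES;
    if (filled < PREROLL_FRAMES) filled++;
}

void preroll_flush(void)                      /* called when recording starts */
{
    int start = (head - filled + PREROLL_FRAMES) % PREROLL_FRAMES;
    for (int i = 0; i < filled; i++)
        write_frame_to_disk(ring[(start + i) % PREROLL_FRAMES]);
    filled = 0;
}
```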
Looks very cool to me, Obin.
North Carolina, represent! |
Thanks Wayne: although I am a bit rusty (didn't do much asm programming in the last 5 years) and never got around to MMX/SSE (catching up on that now!), I have a pretty extensive history in low-level computer programming. I know exactly how the nuts and bolts of computers work, including BIOS, OS, I/O, Windows and all sorts of other stuff. I fully agree that most programmers have no idea how all of it works "under the hood" at the low level. If I remember correctly, this year at IBC we had a prototype 4K projector, but it didn't look better than the high-end 2K stuff to my eyes. They presented Shrek 2 on the "regular" 2K projector. The year before, they had a 2K or 1K (I think it was the latter) projector that showed Pirates of the Caribbean. So for now the 1920x1080 resolution should be enough for our needs, I'd say. I think the 2K/4K resolution is the full size of the frame in vertical and horizontal, so that would probably mean their pixel aspect is off? Like to get the correct size in square pixels it would be 3K x 2K or something? |
Thanks, yes as we feared, JVC too expensive.
Your suggestion is very good, but also expensive (although if it's completely custom, then manufacturing in bulk can be cheap), and this is the reason we went for a small PC, to cut back on costs. It is late and I am finding it hard to follow your post, but I have a solution in mind that bypasses the cost; only two people here know it, and maybe you would be interested. Now, for a camera that saves money on the capture end, Sumix was talking about doing a compressed camera. Using that with PC decompression (for preview) should allow a very low-end PC to be used. But any compression should do at least lossless, and preferably also visually lossless down to a 50Mb/s codec. This will allow very good quality and space savings (50Mb/s is HDV2 territory, maybe just viable for cinema production to the large screen). |
indie theatres
the digital projector scene is moving faster than even the hi-def cameras. Just 4 years ago I had a $38,000 Barco CRT projector that could output 720p.
You can now get a BenQ DLP that does 720p and looks almost as good for about $5,000. This is a link to a theatre chain that went digital. When finished, we could shoot in 720p and display our work for small venues on a laptop and a BenQ projector on an 8-foot-wide screen: http://www.microsoft.com/presspass/p...TheatresPR.asp |
Thanks Richard
I forgot to include another interesting thing about hardware. Even silicon chip designers have design rules to protect them from the chip's structural, process-based timing and electrical effects. One person in the group I was involved with gained at least a 10x+ processing speed increase (I think I mentioned this before somewhere, sorry if I repeat myself) by bypassing the design rules. He might have been the only person in the commercial processor industry doing that (it's difficult). |
Quote:
Of course, I could be completely wrong due to the higher resolution of the 3300 and the smear issues of the 1300 ... but the 64-bit frame grabber you'll need with the 3300 will also jack up the price. Bottom line at this point -- Unless someone out there can finance a 3300 for me to use, I am not going to be able to support it. |
If I remember correctly this year at IBC we had a prototype 4K
projector, but it didn't look better than the high-end 2K stuff to my eyes. They presented Shrek 2 on the "regular" 2K projector. That's why they don't bother to film out at 4k either. The human eye is basically incapable if discerning a difference in picture quality greater than 2K. Above 2K everything still looks like 2K to the naked eye. While that difference in resolution maybe important to a computer in certain technological or scientific endeavours, it matters not in the least for film work. It's just an added, unnecessary, expense. |
Hey Obin, I wasn't exactly speaking of cramming 2 motherboards in a single box (I meant a dual-processor board), but that is actually a pretty good idea. The only problem would be 2 separate power supplies, but that really wouldn't be hard to solve. Looking at things from a film workflow, having a computer attached is no more cumbersome than a video-tap-to-monitor setup, so if more realtime features require more computing power, I say load it up ;)
|
@Rob (Lohman)
Sorry to disagree here with you Rob - but with modern CPUs and the newest highly optimizing C/C++ compilers (e.g. the Intel compiler) it is in fact better most of the time to stay in a higher-level language like C++ and let the compiler do the optimization targeted at a selected CPU - MMX/SSE/SSE2 etc., together with (slightly) different CPU designs, aren't nearly as easy as i386 asm programming was. And of course high-level code is much easier to maintain than messing with low-level assembler code optimized for different CPUs... Just my 2 cents. Of course you have to optimize (but do profiling first - to find out _where_ optimizations make sense at all) - but assembler code doesn't automatically mean it's the fastest code possible! |
Quote:
The first version of the preview code used standard for() loops and array access. Naturally, with array access (no pointer arithmetic) it was extremely slow -- somewhere on the order of 0.2 fps. The second version used pointer arithmetic and was much better -- somewhere around 2-4 fps. I turned on all the optimizations I could, but IIRC this was the best the compiler could do. I then hand-coded the loop with MMX and it currently runs at around 25 fps. This just goes to show that your point about profiling -- and finding the bottlenecks -- is well taken. There is no way I'm going to code the entire application in assembly; there just isn't any point unless it really needs it. |
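For readers following along, an illustrative sketch of the kind of inner-loop differences being described (not the actual preview code, which isn't posted): the same 10-bit to 8-bit conversion written with array indexing, with pointer arithmetic, and with SIMD intrinsics. SSE2 is used here as a stand-in for the hand-coded MMX mentioned; n is assumed to be a multiple of 8.

```c
#include <stdint.h>
#include <emmintrin.h>

/* 1. Plain array indexing: simplest, slowest with the compilers of the day. */
void convert_indexed(const uint16_t src[], uint8_t dst[], int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = (uint8_t)(src[i] >> 2);
}

/* 2. Pointer arithmetic: same work, fewer address calculations per iteration. */
void convert_pointers(const uint16_t *src, uint8_t *dst, int n)
{
    const uint16_t *end = src + n;
    while (src < end)
        *dst++ = (uint8_t)(*src++ >> 2);
}

/* 3. SSE2: shift and pack 8 pixels per iteration. */
void convert_sse2(const uint16_t *src, uint8_t *dst, int n)
{
    for (int i = 0; i < n; i += 8) {
        __m128i v = _mm_loadu_si128((const __m128i *)(src + i)); /* 8 x 16-bit   */
        v = _mm_srli_epi16(v, 2);                                /* 10 -> 8 bit  */
        v = _mm_packus_epi16(v, v);                              /* to 8 x 8-bit */
        _mm_storel_epi64((__m128i *)(dst + i), v);               /* store 8 bytes */
    }
}
```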
As I understand it, the human eye population has the potential for 2400 dots per inch of discernment (from a close viewing distance), and only in a small scanning region of the central vision; individual eyes in the population may have a lot less potential. There is a method to train your eye to see better (I think the book "Better Eyesight Without Glasses" might cover that one) by shifting the brain's region of central vision to the true centre (apparently a lot of people concentrate on an area of the eye that is not the true centre, and so concentrate on a lower-res side region) and by practicing recognition of details (improving the brain's recognition). 720p works out to around 150dpi, and colour vision goes to around 1200dpi, so there is a big difference between what we (or IMAX, which looks a lot better than 2K) use and the true top end. I think the problem is grain size: the smaller the grain the slower the stock, so previously 2K images were probably only available in highly lit situations on regular (cheap) film stocks. If you look at it, the increased resolution of IMAX over 2K has a lot to do with the fact that the film frame is ten times bigger and can fit ten times more grains into the projection (but it fills more of the screen, so the difference to us is probably 4 times, like a normal theatre at 8K and 16K to us, but extra wide and high). Now, even with my poor eyesight I can pick up grain in normal cinema and in dark scenes in IMAX. So what's the maximum we should worry about? Tricky; I would say between 1Mp 720p and 16Mp (the maximum colour resolution for a radiated display).
Newer technologies, like 4K projection, can have more optical quality issues (plus it depends on the room lighting and screen used). Resolution is not the complete picture, so I imagine it would be quite easy to put a well worked-out 2K projector up against a yet-to-be-optimised 4K one. If you're wondering about the limits of a 4:3 15-inch monitor on the eyesight, it's 64Mp (actually, why are we saying K? it's millions of pixels in a frame) for monochrome. You will notice that this is one quarter of what it should be; that is because gradiated displays affect vision and halve the resolution in each direction (16 million for colour). But you will notice diminishing returns for doubling resolution over 150dpi (where surrounding pixels start to integrate with general vision). This 4Mp one, was that Sony's new ribbon tech? |
Soeren, yes there were big differences when the Intel compilers came out, because most compilers were using a very limited part of the instruction set. I don't know if the Intel one uses the whole instruction set properly, but I imagine it can't compete with the careful instruction setup and manipulation of a good programmer, and that even the way code is written in C stops the compiler from being able to organise such alternative sequences of instructions.
But on to the question of project size: above a certain size it becomes rapidly more difficult to program in MC than in high-level languages, so optimising the 90%-performance regions is best, like you guys say. But programming in MC can eliminate many errors, and what happens when programming in MC makes code 10 times smaller? So project size is everything. Even though much that was said about doing capture completely in MC was figurative, in reality that is only for embedded custom systems, where capture can be much smaller due to the lack of a major OS and a simplified, standardised hardware model ;) but still a major project. For a Windows PC: optimise crucial sections in MC, optimise the rest in C, as Rob says, with obvious results. I would like to do my OS in MC because a complete Windows-competing OS could fit in 1-10MB (data structures etc.), but this will require a team of programmers and lots of money to do in a one-year time frame. I have a new programming team strategy worked out (computer science) to produce the best quality with top MC programmers (who don't normally work in team mode); exciting stuff, there is a specific benefit, I could even patent the strategy. But in reality I might have to do it in C and then transfer it to VOS code, which is a high-level form of low-level language, so easier than C. |
David Newman
David, you mentioned the CineForm visually lossless codec for Bayer with 4:1 compression. I read that your normal codec has a range of 6:1-10:1 for visually lossless; what is the range for the Bayer codec, and can we get to 10:1 reliably with multi-generation work on Bayer? I have worked out that 4:1 Bayer 720p is close to a 50Mb/s stream, which is close to 10:1 4:4:4 3-chip, which is very useful.
Obviously for in-camera compression we need true lossless, visually lossless, and down to the quality of an HDV2 50Mb/s stream for outside pro work (maybe equivalent to your codec at 10:1) for different jobs. Have you thought of licensing your codec to camera manufacturers (like Sumix, Drake, SI, Micron, Rockwell, Sony etc.) as an FPGA design (that can be converted to a high-speed, cheap custom silicon core reasonably easily, if anybody wants to mass market chips based on it)? Thanks, Wayne. |
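A quick back-of-envelope check of the 50Mb/s figure above (my assumptions, not from the post: 1280x720 Bayer data, 10 bits per photosite, 24 fps):

```c
#include <stdio.h>

int main(void)
{
    double bits_per_frame = 1280.0 * 720.0 * 10.0;   /* one 10-bit sample per photosite */
    double raw_mbps  = bits_per_frame * 24.0 / 1e6;  /* ~221 Mb/s uncompressed          */
    double comp_mbps = raw_mbps / 4.0;               /* ~55 Mb/s at 4:1                 */

    printf("raw 10-bit Bayer 720p @ 24fps: %.0f Mb/s\n", raw_mbps);
    printf("at 4:1 compression:            %.0f Mb/s\n", comp_mbps);
    return 0;
}
```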
I think we are all on the same page in regards to hand crafting
certain pieces of code in assembly. This is only done after some profiling, and after implementation with some good testing to make sure it increases the throughput enough. But as Rob's example shows, it is clearly a win in such demanding applications. Compilers have gotten far, FAR better at optimizing code themselves, but they still have trouble optimally using registers and memory in my opinion, which is one of the major places to speed code up. But as I said, I think we are all on the same page in that regard! |
Don't forget intuitive leaps to accurately using odd, little-used instructions in interesting sequences that somehow abstractly speed up realtime performance, instead of the more obvious compiler sequences ;)
|
What is the best resolution for movies and cinema?
I think ARRI, as one of the best (film) camera makers since the beginning of cinema, knows the answer. So let's look a little bit inside their first digital cinema camera, the D20: http://www.arri.de/news/newsletter/a...211103/d20.htm In this newsletter are some details about the chip, pixels and data outputs. You can also read some things between the lines. That's the goal, why not? |
There is also the Panavision Genesis beside the ARRI. I've done
a bit of information gathering:

Arri D20:
+ sensor: single 35mm 12-bit CMOS, max 150 fps
+ sampling: standard Bayer GR/BG
+ resolution: 3018 x 2200
+ framerates: 1 - 60 fps, including 23.976 and 29.97 fps
+ shutter: mirror + electronic
+ mount: 54mm PL
+ internal bus: 10 Gb/s (gbit?)
+ power consumption: 54 W @ 24 fps (without viewfinder)
+ video mode:
  - 2880 x 1620 sampling (16:9)
  - 1920 x 1080 output (16:9)
  - YUV 4:2:2 10-bit (single HD-SDI)
  - RGB 4:4:4 10-bit (dual HD-SDI)
  - Super 35 HDTV aperture size
+ film mode:
  - 3018 x 2200 sampling (4:3)
  - raw Bayer output 12-bit
  - up to ANSI Super 35 aperture

http://www.arri.de/news/newsletter/a...211103/d20.htm
http://www.arri.de/prod/cam/d_20/articles.htm
http://www.arri.de/prod/cam/d_20/tech_spec.htm

Panavision Genesis:
+ sensor: 35mm (probably 3-chip or Foveon?)
+ sampling: full RGB
+ resolution: 12.4 megapixel
+ framerates: 1 - 50 fps
+ 10-bit log output (1920 x 1080?)
+ 4:2:2 single HD-SDI out
+ 4:4:4 dual HD-SDI out

http://www.panavision.com/product_de...e=c0,c202,c203

Unfortunately there is almost no information available on the technical specs of the Pana Genesis. Too bad. At least we know that in film mode with the ARRI you are supposed to crop to your favorite resolution. So we get:

16:9 => 3018 x 1698 (22.82% loss)
1.85 => 3018 x 1630 (25.91% loss)
2.35 => 3018 x 1284 (41.64% loss)

However, if they were to attach an anamorphic lens creating a pixel aspect ratio of 1.78, you would get an effective resolution of 3910 x 2200. That is already 16:9, so no loss there; for the others:

1.85 => 3910 x 2114 (03.91% loss)
2.35 => 3910 x 1664 (24.36% loss)

The ARRI article had an interesting discussion on de-bayering: Quote:
|
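A quick check of the crop figures quoted above (my own arithmetic; the results match the post to within rounding):

```c
#include <stdio.h>

int main(void)
{
    const double w = 3018.0, h = 2200.0;            /* D20 film-mode sensor */
    const double ratios[] = { 16.0 / 9.0, 1.85, 2.35 };
    const char  *names[]  = { "16:9", "1.85", "2.35" };

    for (int i = 0; i < 3; i++) {
        double crop_h = w / ratios[i];              /* keep full width, crop height */
        double loss   = (1.0 - crop_h / h) * 100.0; /* % of the 2200 lines lost     */
        printf("%-4s -> 3018 x %.0f (%.2f%% loss)\n", names[i], crop_h, loss);
    }
    return 0;
}
```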
Your quote above won't turn up in the forum quote-reply function; must be a bug.
Recreate them accurately, sure! Let's see: some spectral overlap = impurity (I've thought about this type of technique before). Taking a punt at solving the impurity and selling it as an advantage, as restoring missing colour, OK; but impurity also = reduced accuracy on the original colour. So how much impurity and how much original colour do you need? If the impurity is low it only gives a few bits of accuracy, not 12 bits, but enough to tell of a major swing and interpolate a more accurate replacement. But how much of the accurate primary colour for that pixel is left, 10 bits, 8 bits? If you look at what they say, they estimate from image principles in nature (like Bayer de-mosaicing does, assuming chroma tends to follow), so good guessing based on approximation. Now I would also like to say: let's see them do that with an SD-resolution frame. As I said before, the increased resolution over 1080 hides much (artifacts and mis-approximation). As I said before, 720p is a territory where pixels start blurring into each other, so unless they look for it, the casual viewer may not mentally notice as much, even if the picture appears to be of less quality than an accurate 3-chip SHD picture. So you can say the impressiveness of the picture then becomes subliminal (??), noticeable but not enough for most of the audience to put a finger on. After upscaling, these malformations could be smoothed out, making the picture a little softer, but imperfections/details start to disappear. This is why I wanted to go to 3-chip 720p as a minimum instead of SD, or 2160p in Bayer. The truth of the pixel resolutions of the sensors they are using might be more a technological limitation and a marketable advantage over film than a real audience limit. I think with three chips you can get more light and need less resolution, because it is probably better to resolution-upscale. I'm going to take a punt: get three SD PAL chips and offset them on a prism to obtain a calculable HD image ;). How much worse would this be than one-chip Bayer? Maybe so :) |
I hate to say it Rob, but if you're supporting the 1300 I really think it is a waste of your time... that chip just can't cut it for a professional camera... the smear is that bad, I think.. I would say keep going though, because the 3300 is not much of a change code-wise to make it work.. just WAY more data :)
Rob, have you thought about sending the 1300 back and getting a 3300RGB? |
Obin on the 1300:
You could be right. I defer to people who know the application on this one. Maybe broadcast? Wayne, on color overlap: go to any color camera or sensor datasheet and you should see a spectral response curve. This is the cutoff of the bandpass color filter array (CFA) that is put on the sensor for single-chip Bayer color. As in audio filtering, the rising edges are not vertical, they slope. This causes an overlap at the R-G and G-B borders of the wavelengths - the color impurity. You probably need this, now that I think about it. Otherwise, a green object would look the same green (just different intensities as the filter response changes) as its wavelength changed, until it crossed over a filter boundary. Hmm, so it is this overlap that allows you to have a continuous color response. You learn something new every day. |
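A toy numeric illustration of Steve's point (not from the thread; the triangular filter responses and their centers/widths are arbitrary): with overlapping responses, the ratio of the green and red outputs changes continuously with wavelength, which is what lets a Bayer sensor distinguish wavelengths that two non-overlapping, flat-topped filters would render identically.

```c
#include <stdio.h>

/* Triangular stand-in for a color filter's spectral response. */
static double response(double wl, double center, double width)
{
    double d = wl - center;
    if (d < 0) d = -d;
    double r = 1.0 - d / width;
    return r > 0.0 ? r : 0.0;
}

int main(void)
{
    /* Assumed centers/widths, roughly green ~530nm and red ~600nm. */
    for (double wl = 520.0; wl <= 580.0; wl += 20.0) {
        double g = response(wl, 530.0, 100.0);
        double r = response(wl, 600.0, 100.0);
        printf("%3.0f nm: G=%.2f R=%.2f G/R=%.2f\n", wl, g, r, g / r);
    }
    return 0;
}
```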