November 9th, 2004, 10:05 AM | #2056 |
RED Code Chef
Join Date: Oct 2001
Location: Holland
Posts: 12,514
|
Steve: that was my middle point. At first I just lopped off the complete high byte, which caused the problems. I now see Jason said LSB and not HSB, my bad (I confused it with my own mistake). However, it still gives you a bad representation of the dynamic range, which a LUT should be able to fix.
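To illustrate the LUT idea, here is a minimal sketch (assuming 10-bit source data and a simple display gamma; the table size and curve are illustrative assumptions, not the project's actual transfer function):
[code]
#include <cmath>
#include <cstdint>

// Build a 1024-entry lookup table mapping 10-bit linear sensor values to
// 8-bit display values through a gamma curve, instead of simply dropping bits.
void build_gamma_lut(uint8_t lut[1024], double gamma /* e.g. 2.2 */)
{
    for (int v = 0; v < 1024; ++v) {
        double n = v / 1023.0;                  // normalise to 0..1
        double g = std::pow(n, 1.0 / gamma);    // apply display gamma
        lut[v] = static_cast<uint8_t>(g * 255.0 + 0.5);
    }
}

// Per-pixel use is then just a table lookup:
//   dst[i] = lut[src[i] & 0x03FF];
[/code]
The point is that the mapping from 10 bits to 8 bits is a free choice once you go through a table, so the available dynamic range can be spent where the eye needs it rather than thrown away with the truncated bits.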
__________________
Rob Lohman, visuar@iname.com DV Info Wrangler & RED Code Chef Join the DV Challenge | Lady X Search DVinfo.net for quick answers | Buy from the best: DVinfo.net sponsors |
November 9th, 2004, 11:06 AM | #2057 |
Regular Crew
Join Date: Mar 2004
Posts: 65
|
I just saw this tiny, quiet PC on Gizmodo:
http://www.akibalive.com/archives/000576.html |
November 9th, 2004, 11:13 AM | #2058 |
Inner Circle
Join Date: May 2003
Location: Australia
Posts: 2,762
|
Most of these issues are things I discussed 6 months ago. Rob, it is good to see that you are taking such care handcrafting MC; I didn't realise you have that history with the 386. Something I can tell everybody is that most programmers don't have these skills. They might know enough about C to get it running reasonably, but in apps like this that is NOTHING. I finished well ahead of the rest of my class at uni and I would not trust most of them to do a job like this.

I don't know the status with CPU load, but just recording is pretty low bandwidth, and even simple preview is. The problems you will get come from handling/programming the architecture (not just the architecture itself) efficiently, the same with the OS, and from stalling the CPU/memory systems. Doing any of these wrong will stall the hardware, OS and/or CPU and appear as massive CPU load. Anybody who depends on simple CPU stats will probably be scratching their heads trying to figure out what is wrong and where all the CPU is going. Do the calculation (in MC you can calculate reasonable cycle consumption), trace through and test-point the program, and you will find the stalling regions.

There are lots of realtime-oriented drivers and BIOSes that are buggy (let alone OSes); that is why there are so many updates (for years), so even those high-end programmers can't get it right. I could go to plenty of programming companies and ask them to write something like this; a lot would say fine, they can do it, like it isn't much of a worry, but few would be capable of getting the last 80% of performance.

I am brutal, but I am sick of this stuff stuffing up the computer industry for everybody. One reason for my OS project is to permanently fix ALL these problems; to me many computer companies act like idiots drunk out of their minds. There is no reason that computers and apps couldn't eventually be made to operate as reliably as most microwaves, washing machines, TVs and DVD players (well, good ones). In the embedded/consumer-electronics realtime industry the reliability of PCs is held with a bit of bewilderment, I'm afraid.
ggrrrrrrrrrrrrr..................................

If you can, look up and download the Stella (Atari 2600/VCS) programming manual and the Lynx programming manual, and look at what emulator writers do. This strips away all the OS/BIOS and leaves you with the basic hardware, and you will find out why it is so difficult to program efficiently. A lot of it involves minute timing differences as circuits stabilise, and windows of opportunity; doing it wrong (or somebody else making a compatible circuit) can wreck performance or results. On a complex circuit, blindly programming can set up a chain reaction of performance failures through related circuits (it is not object-oriented programming). Running it on another chipset/mainboard/processor can do the same, as the underlying timing of the "compatible" hardware may be different.

So it is best to pick good hardware with good drivers (like that GigE driver Steve mentions that DOESN'T come with Windows) and abstraction layers (like DirectX plus drivers do for different video hardware) between you and all the different hardware versions, as much as is reasonable; then set up the OS/system properly and program carefully to these known-to-be-good software layers. The good drivers and abstraction layers should take care of most of the timing (though you might do a bit better on certain/particular hardware by bypassing the abstraction layer, that probably won't help a system with different versions of the hardware). Never assume that the abstraction driver is perfect, or even good; some may be, others need to be tested, etc.

In this situation all you need is to follow this simple formula with expertise: as long as you code C very efficiently with reference to hardware issues as well, and maybe do the critical section in MC, you should be able to get within 80% of the maximum performance on fast systems (probably much less on slow systems that would otherwise do the job). |
November 9th, 2004, 11:50 AM | #2059 |
Inner Circle
Join Date: May 2003
Location: Australia
Posts: 2,762
|
Ronald, yes, the Filmstream is the SD indie camera, versus our HD projects.

What I find interesting about the 2K or 4K film scan is what happens when they scan 2.35:1: is the 2K fitted to the frame, or is it still a 1.78:1 frame (meaning the actual frame is now only 1.5K or so of pixel resolution)? Do they go to 4K, or do they use 4K on 1.78:1 as well? I think it would be a matter of quality and film stocks as well. I see a lot of stuff at the cinema that has grain even my poor vision can pick up, even without glasses (I even picked up some grain in IMAX scenes). So I think 4K or even 8K would be better. But what about just shooting at 2K and upscaling to 8K ;) I think 2K is good for our purposes, but digital theatres may only accept 2K footage :(

Now, Barco CRT projectors had purer blacks than DLP, but were also weaker in brightness (plus maintenance costs more). With the newer CRT phosphor technology I don't know. But one technology to watch (well, there are a few) is the laser CRT (L-CRT and many other names) invented in Russia. I researched this because I wanted to make a small laser display device (much too big at the moment); it seems to have the purity and performance to possibly beat DLP. There was a picture of a demo with a guy holding one of the tubes, standing in front of a 50-foot test pattern being projected. Nice (a little washout ;).

What happens now is that film has to be transferred to digital, and digital back to film, to show it to everybody :( which is costly. I think in the future there might be a market for somebody to open up cinemas (or just convert existing independent theatres) to project digital only. All the independent film makers can get together a distribution chain, and these cinemas show the work. Then there is no transferring to film, unless it is popular and a film company wants to distribute it to film theatres. Most indies will not get much of a screening at conventional company chain theatres, because those want blockbusters (a small indie theatre maybe), so they have to go somewhere else.

In my local city we had two theatres: one owned by the biggest chain in the country, the other an independent ex-vaudeville theatre (one of the nicest I have ever been in). The independent couldn't get the latest blockbusters until after the major theatre had finished with them, so they went to old films and small films, and closed. A rival theatre chain wanted to open theatres a couple of times; the first chain protested (from vague memory) that there were too many theatres, and also converted their site to 5 screens. But then they opened another 8 screens very nearby, and then 6 or more screens at one of the locations the cheaper rival wanted to open up. Now their theatres are not that full, and the originally redeveloped 5-screen site is used for small films and some independent festivals. In this situation there is little room for people to show non-mainstream films, as more and more independent cinemas close down.

With an indie distribution company and website it would be much easier, as most of the marketing costs can be replaced by site comments and reviews, and local theatre promotion. Now we get to the interesting bits. People (say the indie crowd) can preview on the internet (low-res version, pay-per-view for broadband), then vote and comment. Theatre owners go to the indie distribution site, view the comments, find a good movie and show it. If people want the full version they go to a theatre listed at the site, or order the full copy-protected version on HD/DVD from the site.

It will send the studios crazy, as they lose control and the indies gain control (and cost efficiency from a few indie-owned distribution sites), and indies get a low-cost marketing replacement. I find some of it hard to follow, but I hope this is what you were after.

<<<-- Originally posted by Ronald Biese : Dear Wayne, hm that is great, a bit more resolution but no compression at all and 4:4:4 out that is just perfect for PAL or NTSC. With an Argus or so great, the Tv indie cam up to now...Hurra..voila.-->>>

Yes, that is the SD indie camera; in these threads we are concentrating on the HD indie camera. |
November 9th, 2004, 12:00 PM | #2060 |
Major Player
Join Date: Jan 2004
Location: Bordeaux, going to Bangkok, 2011
Posts: 232
|
Dear Wayne, Primo, I got this email:
Ronald; I spoke with JVC regarding the camera you inquired about. It has not become available yet due to a delay from the manufacturer of the CMOS chips, Rockwell. They are not expecting delivery until January 2005. Targeted list price is around 20K U.S. dollars. Best regards, Tom

So: a Bluefish dual HD-SDI card (not cheap), an adapter for Nikon or PL mount lenses, a DIY shoulder pad and some connector for a Bauer or any other Ni pack. Bluefish has Linux support, so the NLE could be free, and Bluefish can save TIFF, AVI or so; plus a SCSI array controller and I guess 4 big SCSI 320 disks, and voila, a power station as a backpack.

Me being a bit outdated but still a hardcore realtime nut, why not do something like Kreines and invent it again: a tiny board, a Gigabit Ethernet controller, and piggyback a PC/104 board that has a small realtime OS kernel in ROM, doing nothing but waiting for whatever comes from the GigE controller and writing it to a stack of Toshiba 2.5-inch disks. On the "host", where the GigE controller sits in the only PCI slot, you run whatever you like, and from there you can send a message to the GigE controller that starts or stops the piggyback array controller. Something could also run on the host to control the camera, so the host is totally independent from the array controller.

The array controller is nothing more than the controller in a dishwasher: start, stop, and reading the GigE. It has memory so that it can hold about 2 to 4 seconds of images in a loop, so even if something happens before recording starts it's not lost, as the camera is up and running, like Editcam does. The app running on the piggyback is nothing but a packet sniffer that does I/O operations to write the data to the disk array.

If it sounds totally stupid, send me an email. |
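For the 2-4 second pre-record loop described above, here is a minimal sketch of the idea in plain C++ (class and function names, frame sizes and the write callback are illustrative assumptions, not the actual PC/104 firmware):
[code]
#include <cstdint>
#include <cstring>
#include <vector>

// Circular pre-record buffer: the capture loop always pushes incoming frames
// here, so the few seconds before the operator hits record are not lost.
class PreRecordBuffer {
public:
    PreRecordBuffer(size_t frameBytes, size_t frameCount)
        : frameBytes_(frameBytes), frames_(frameCount * frameBytes),
          head_(0), filled_(0) {}

    // Store one incoming frame, overwriting the oldest when full.
    void push(const uint8_t* frame) {
        std::memcpy(&frames_[head_ * frameBytes_], frame, frameBytes_);
        head_ = (head_ + 1) % capacity();
        if (filled_ < capacity()) ++filled_;
    }

    // On record start, hand the buffered frames (oldest first) to the disk writer.
    template <typename WriteFn>
    void flush(WriteFn write) const {
        size_t start = (head_ + capacity() - filled_) % capacity();
        for (size_t i = 0; i < filled_; ++i)
            write(&frames_[((start + i) % capacity()) * frameBytes_], frameBytes_);
    }

private:
    size_t capacity() const { return frames_.size() / frameBytes_; }
    size_t frameBytes_;
    std::vector<uint8_t> frames_;
    size_t head_, filled_;
};
[/code]
On real hardware the buffer would live in the piggyback board's RAM and the writer would stream straight to the disk stack, but the ring-buffer logic is the same.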
November 9th, 2004, 12:17 PM | #2061 |
Major Player
Join Date: Jun 2004
Location: Houston, TX
Posts: 356
|
Looks very cool to me, Obin.
North Carolina, represent! |
November 9th, 2004, 12:29 PM | #2062 |
RED Code Chef
Join Date: Oct 2001
Location: Holland
Posts: 12,514
|
Thanks Wayne: although I am a bit rusty (didn't do much asm programming in the last 5 years) and never got around to MMX/SSE (catching up on that now!), I have a pretty extensive history in low-level computer programming. I know exactly how the nuts and bolts of computers work, including BIOS, OS, I/O, Windows and all sorts of other stuff. I do fully agree that most programmers have no idea how all of it works "under the hood" at the low level.

If I remember correctly, this year at IBC there was a prototype 4K projector, but it didn't look better than the high-end 2K stuff to my eyes. They presented Shrek 2 on the "regular" 2K projector. The year before they had a 2K or 1K (I think it was the latter) projector that showed Pirates of the Caribbean. So for now the 1920x1080 resolution should be enough for our needs, I'd say. I think the 2K/4K resolution is the full size of the frame horizontally and vertically, so that would probably mean their pixel aspect is off? Like, to get the correct size in square pixels it would be 3K x 2K or something?
__________________
Rob Lohman, visuar@iname.com DV Info Wrangler & RED Code Chef Join the DV Challenge | Lady X Search DVinfo.net for quick answers | Buy from the best: DVinfo.net sponsors |
November 9th, 2004, 12:35 PM | #2063 |
Inner Circle
Join Date: May 2003
Location: Australia
Posts: 2,762
|
Thanks, yes, as we feared: the JVC is too expensive.
Your suggestion is very good, but also expensive (though if it is completely custom and manufactured in bulk it can be cheap), and this is the reason we went for a small PC to cut back on costs. It is late and I am finding it hard to follow your post, but I have a solution in mind that bypasses the cost; only two people here know it, and maybe you would be interested.

Now, for a camera that saves money on the capture end: Sumix was talking of doing a compressed camera. Using that with PC decompression (for preview) should allow a very low-end PC to be used. But any compression should be at least lossless, and preferably also visually lossless down to a 50 Mbit/s codec. This would allow very good quality and space savings (50 Mbit/s is HDV2 territory, maybe just viable for cinema production on a large screen). |
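As a rough sanity check on that 50 Mbit/s target (assuming a 1920x1080 Bayer sensor at 10 bits per photosite and 24 fps, which are illustrative figures rather than a confirmed sensor spec), the raw rate and the implied compression ratio work out to roughly:
[code]
#include <cstdio>

int main() {
    // Assumed figures: 1920x1080 Bayer, 10 bits per photosite, 24 fps.
    double rawMbps = 1920.0 * 1080.0 * 10.0 * 24.0 / 1e6;  // ~498 Mbit/s raw
    double target  = 50.0;                                  // Mbit/s target from the post
    std::printf("raw %.0f Mbit/s -> about %.1f:1 compression needed\n",
                rawMbps, rawMbps / target);
    return 0;
}
[/code]
So a visually lossless 50 Mbit/s codec implies something in the neighbourhood of 10:1 compression on raw Bayer data under those assumptions.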
November 9th, 2004, 12:45 PM | #2064 |
Major Player
Join Date: Nov 2003
Location: cambridge ma
Posts: 247
|
Indie theatres
The digital projector scene is moving even faster than the hi-def cameras. Just 4 years ago I had a $38,000 Barco CRT projector that could output 720p;
you can now get a BenQ DLP for about $5,000 that does 720p and looks almost as good. This is a link to a theatre chain that went digital. When finished, we could shoot in 720p and display our work for small venues with a laptop and a BenQ projector on an 8-foot-wide screen: http://www.microsoft.com/presspass/p...TheatresPR.asp |
November 9th, 2004, 01:12 PM | #2065 |
Inner Circle
Join Date: May 2003
Location: Australia
Posts: 2,762
|
Thanks Richard
I forgot to include another interesting thing about hardware. Even silicon chip designers have design rules to protect them from structural, process-based timing and electrical effects on the chip. One person in the group I was involved with gained at least a 10x processing speed increase (I think I mentioned this before somewhere, sorry if I repeat) by bypassing the design rules. He might have been the only person in the commercial processing industry doing that (it is difficult). |
November 9th, 2004, 01:46 PM | #2066 | |
Major Player
Join Date: May 2004
Location: Knoxville, TN (USA)
Posts: 358
|
Quote:
Of course, I could be completely wrong due to the higher resolution of the 3300 and the smear issues of the 1300 ... but the 64-bit frame grabber you'll need with the 3300 will also jack up the price. Bottom line at this point -- unless someone out there can finance a 3300 for me to use, I am not going to be able to support it. |
November 9th, 2004, 02:51 PM | #2067 |
Major Player
Join Date: Jun 2004
Location: Houston, TX
Posts: 356
|
<<<-- Originally posted by Rob Lohman : If I remember correctly this year at IBC we had a prototype 4K projector, but it didn't look better than the high-end 2K stuff to my eyes. They presented Shrek 2 on the "regular" 2K projector. -->>>

That's why they don't bother to film out at 4K either. The human eye is basically incapable of discerning a difference in picture quality greater than 2K; above 2K everything still looks like 2K to the naked eye. While that difference in resolution may be important to a computer in certain technological or scientific endeavours, it matters not in the least for film work. It's just an added, unnecessary expense. |
November 9th, 2004, 04:00 PM | #2068 |
Regular Crew
Join Date: Jun 2004
Location: Pavilion, USA
Posts: 64
|
Hey Obin, I wasn't exactly speaking of cramming 2 MBs into a single box (I meant a dual-processor board), but that is actually a pretty good idea. The only problem would be 2 separate power supplies, but that really wouldn't be hard to solve. Looking at things from a film workflow, having a computer attached is no more cumbersome than a video-tap-to-monitor setup, so if more realtime features require more computing power, I say load it up ;)
__________________
Whatever works |
November 9th, 2004, 04:37 PM | #2069 |
Regular Crew
Join Date: Mar 2004
Location: Düsseldorf, Germany / Denver, CO
Posts: 137
|
@Rob (Lohman)
Sorry to disagree with you here, Rob, but with modern CPUs and the newer/newest highly optimizing C/C++ compilers (e.g. the Intel compiler) it is in fact better most of the time to stay in a higher-level language like C++ and let the compiler do the optimization targeted for a selected CPU. MMX/SSE/SSE2 etc., together with (slightly) different CPU designs, aren't nearly as easy as i386 asm programming was. And of course high-level code is much easier to maintain than messing with low-level assembler code optimized for different CPUs...

Just my 2 cents. Of course you have to optimize (but do profiling first, to find out _where_ optimizations make sense at all), but assembler code doesn't automatically mean it's the fastest code possible! |
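On the "profile first" point, a minimal timing sketch on Windows using the high-resolution counter (the function-pointer interface and section names are placeholders, not code from anyone's capture app):
[code]
#include <windows.h>
#include <cstdio>

// Time one candidate hot spot before deciding whether it is worth
// hand-optimizing; returns elapsed seconds.
double time_section(void (*fn)())
{
    LARGE_INTEGER freq, start, stop;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);
    fn();                                   // the code under test
    QueryPerformanceCounter(&stop);
    return double(stop.QuadPart - start.QuadPart) / double(freq.QuadPart);
}

// Example: std::printf("preview pass: %.3f s\n", time_section(&preview_pass));
// where preview_pass is whatever routine you suspect is the bottleneck.
[/code]
Wrapping the suspected routines like this quickly shows which loop actually eats the frame time, which is where hand-tuning (or intrinsics) pays off.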
November 9th, 2004, 08:32 PM | #2070 | |
Major Player
Join Date: May 2004
Location: Knoxville, TN (USA)
Posts: 358
|
Quote:
The first version of the preview code used standard for() loops and array access. Naturally, with array access (no pointer arithmetic) it was extremely slow -- somewhere on the order of 0.2 fps. The second version used pointer arithmetic and was much better -- somewhere around 2-4 fps. I turned on all the optimizations I could, but IIRC this was the best the compiler could do. I then hand-coded the loop with MMX and it currently runs at around 25 fps.

This just goes to show that your point about profiling -- and finding the bottlenecks -- is well taken. There is no way I'm going to code the entire application in assembly; there just isn't any point unless it really needs it. |
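For readers wondering what that last jump looks like in practice, here is a hedged sketch of a SIMD inner loop for a 10-bit to 8-bit preview conversion using SSE2 intrinsics (an illustration only, not the actual MMX code from the post; the buffer layout, element count and lack of a scalar tail are assumptions):
[code]
#include <emmintrin.h>  // SSE2 intrinsics
#include <cstdint>
#include <cstddef>

// Convert 10-bit samples stored in 16-bit words to an 8-bit preview buffer.
// Processes 16 output pixels per iteration; count is assumed to be a multiple of 16.
void preview_10to8_sse2(const uint16_t* src, uint8_t* dst, size_t count)
{
    for (size_t i = 0; i < count; i += 16) {
        __m128i lo = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src + i));
        __m128i hi = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src + i + 8));
        lo = _mm_srli_epi16(lo, 2);                 // 10-bit -> 8-bit: drop the two LSBs
        hi = _mm_srli_epi16(hi, 2);
        __m128i packed = _mm_packus_epi16(lo, hi);  // pack to unsigned bytes with saturation
        _mm_storeu_si128(reinterpret_cast<__m128i*>(dst + i), packed);
    }
}
[/code]
Processing 16 pixels per iteration with no per-pixel branching is where the order-of-magnitude speedup over the scalar pointer loop typically comes from.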