Crap!
That new card is smaller than CompactFlash and stores up to 2TB at 120MB/s!!! Of course they're finalizing the spec next year, so I don't think we're going to see that until 2006, with our 6GHz dual-core Pentiums and PPCs. |
WOWWWWWWWWWWWWW!!!!
Is this the death of hard drives?? BTW, could someone point me to a raw capture sequence of at least 3 frames, to begin testing of a conversion and compression applet I'm working on? |
Quote:
|
Too long.
"Mass production for the new card will begin in early 2005." But it is still going to cost way too much, unless you back up to tape or something. That 2TB is probably going to take years to get here.
Also, reading New Scientist lately: somebody has made the 3D 1cm cube with 2000 gigabytes; the stated limitation was reading and writing speed, which has been the limitation stopping it going to market for many years. Somebody posted this on that news article: 10 to >100 Terabytes of Storage http://www.nanonewsnet.com/index.php...ub&tid=4&pid=5 It's a bit hard to glance through, but it still looks like 3 to 5 years away; then we can finally do surround Super Ultra HD cheap ;) Now, back to the final minutes of the final episode of The Apprentice. |
BTW Wayne, those small enclosures you're talking about have big AMD or P4 processors in them, so they are going to be very loud.
Try something with Pentium M, that should be quiet. |
Good stuff.. today we found our problem with the SDK and are now on a good path to a capture program... I'll keep you posted... automatic GAIN in the darks!?!?!?!? That scares me bad
|
Obin, could you give me a three-frame RAW capture to work on?
|
Quote:
I would only shoot Panasonic if I didn't have enough money for Sony post production. For off-speed shooting I would try to get an uncompressed recorder for the Panasonic, maybe the RaveHD; that would solve a lot of the problems of F-REC because then you're recording a 10-bit uncompressed output from the camera. |
BTW Obin,
Are you able to get locked frame rates with your capture app (this question also applies to Rob)? At least at 24fps, accurately, so that there's sound sync? |
Quote:
Also: update to my development blog. |
<<<-- Originally posted by Jason Rodriguez : BTW Wayne, those small enclosures you're talking about have big AMD or P4 processors in them, so they are going to be very loud.
Try something with Pentium M, that should be quiet. -->>> I didn't say I agreed with it, but some people want this sort of thing (now that the Pentium M is here maybe that will change, but even then the combination of a good Pentium M may be outweighed by processing-hungry Microsoft Windows or C). I can back that up: I'm running a fairly low-noise system for home theatre (and I find it a lot easier to work with low noise). If a 2.4GHz Pentium M were available, I think a transportable case would likely be around the same as mine, around 23-26dB: still too loud, but tolerable for a transportable case, and you really need 17dB or less for shoulder mount. Bingo: a transportable case with a built-in 720p/1080i-capable LCD. I forgot about them; BSI Computers was a top maker, but there are others (I think Dolch was one). With a modern LCD it could be very good. They can fit standard motherboards and cards, they are around the size of a small desktop (not as small as a cube), and you would sit a reference monitor on top for shooting (this is not a portable solution, but it suits those with big rigs). If anybody wants to do lots of res and lots of drives (or just flash disks), then you can go better with the YY (Yeong Yang) legendary Cube Case (something like a 43cm-square black cube server case). And before anybody says so, that's not most of us. I looked them up, www.bsicomputer.com; the styling looks worse, not 16:9, no stereo speakers ;(. But they do have single-board computers (around the size of a full-length add-in card), rugged portables, and panel PCs, if that interests anybody. The tablet model looks really nice. Some of the models almost look like pro video equipment in styling. Oh yes, an award from NASA; that sure blows hopes of it being as cheap as a laptop, but at least when you see an astronaut float by with one on TV you can tell your family it might be a BSI model ;):) You know, there is a local tender centre (Cairns) that has two Compaq SCSI-2 Wide RAID drive towers (I was offered one for something like $30).
Are these things any use for us, and is anybody near me interested? |
Quote:
The Aaton S-16 XTRProd is rated at 19db, while the A-minima is 29db, and those are both sync-sound cameras, and used for sound shooting all the time. |
I am not familiar with those products. I know if you can direct the noise away from the user (which I plan to do) then it shouldn't add too much to the ambient noise level and should also be suitable for shoulder mount. As a mounted camera, or a box on the floor, there shouldn't be any problems with 19dB; even 29dB will fade in a noisy environment or open space. But I would like to aim for good natural sound sampling in quiet areas for film and documentaries.
Wayne. |
I wrote a little debayer commandline program...
I wrote a little program the other day. It does debayering in the way described by Rob, or was it Ben?
It is written in C++ and it's probably very slow (this isn't because of C++, but because of quick-and-dirty coding). I don't really know, because my computer is 500MHz. It works on 8-bit TGA files, so no 16-bit at the moment, sorry about that. The interesting bit is that it can open Obin's .RAW files, debayer them and save them to .TGA. I'm not quite sure about Obin's RAW files, but I used chars to read the data, and to my knowledge chars are 8 bit, so maybe I'm missing something here... Also, the program works from the command line, and it has this totally useless SDL/OpenGL preview, which you can disable of course. The downside to all this is the fact that, since my computer broke down, I haven't had Windows. So I wrote it in Linux, but it should be totally portable and compile fine on any compiler, given a person who knows how to use one. The only external library you need to compile it for Windows is LibSDL <http://www.libsdl.org>. So if someone could get it to compile, you could use it. When you get it to compile you can use it on a sequence of files named picture0001.raw, picture0002.raw... When you run it you get some kind of help describing the syntax of the command-line instructions. You can get the GPL'd source code from <http://pupuedit.sourceforge.net/camera/pihlajadeb.zip>. And remember that GPL means that you can't use the source code in your proprietary programs, but you can use it for anything, as long as you make the result GPL. The code could be a lot faster, but the debayering looks quite good to me. It doesn't have any softening, so it has some jagged pixels (or whatever you call them). But I actually think the result is exactly the same as Ben's method. Another thing is the mounting of cameras. I found this interesting future thing: <http://www.four-thirds.org/en/index.html> It could be the future sensor size, and the mounting system of choice in a couple of years. Joonas Kiviharju |
SI-3300
I finally did some testing with the SI-3300. I put some images in this directory:
http://www.siliconimaging.com/Samples/SI3300/ This includes a 10-frame raw sequence at 1920x1080, 12 bit. This was done with Epix, so the camera is 10 bit, padded with two zero bits to the right and four to the left. Some warnings: I'm not too sure about the number of bits in the images. PaintShop Pro seemed confused by them but opened them as 8 bit; the file sizes seem to indicate larger files for the 12-bit files. The color images were colorized by the Epix software, no promises for what Bayer algorithm was used. The lens was a Canon zoom: OK but not great. There was no correction done on these at all, no white balance, black offset or gain. I purposely left some hot spots in the image so you could see the lack of smearing. The Epix software was smart enough to know this camera doesn't come in monochrome, so I switched to a different model after setting up the image. This gave me less control (OK, no control). The monochrome images have some trash around the 1920x1080 border. I also posted a couple of 3.2Mpix images. Remember all, this is a 1/2" format camera with small (3.2 micron) pixels. It will run at a *max* of 24fps at 1920x1080. |
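As a quick illustration of the packing Steve describes, here is a minimal sketch. The function names are mine, and the little-endian byte order is an assumption about how the Epix files store 16-bit words; the padding layout is taken from the post (two zero bits to the right, four to the left).

```cpp
#include <cassert>
#include <cstdint>

// Unpack one 16-bit word into a 10-bit sample, assuming the layout
// described in the post:  0000 dddddddddd 00
std::uint16_t unpack10(std::uint16_t word) {
    return (word >> 2) & 0x03FF;  // drop two right-pad bits, mask to 10 bits
}

// Assumed little-endian storage: a byte pair (lo, hi) forms one word.
std::uint16_t wordFromBytes(std::uint8_t lo, std::uint8_t hi) {
    return static_cast<std::uint16_t>(lo) |
           (static_cast<std::uint16_t>(hi) << 8);
}
```

If the files really use this layout, a full-scale sample reads back as 0x3FF; the hex dump discussed a few posts below suggests the actual files may not match this packing, so treat it as a starting point only.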
Re: I wrote a little debayer commandline program...
Quote:
So you're using linear interpolation? If I can get it to work with a REAL (floating point) data type I might try using it in the Convert app (which I am also releasing under the GPL). Interestingly, I was planning to use libSDL for that one too. Thanks! --- EDIT --- By the way, my comment about a real data type wasn't meant to be insulting. I just meant real numbers vs. integer (I can hear it now -- dang it, we use REAL numbers, not those toy "char" things! :-) |
Re: SI-3300
Steve, I took a quick look at the 3Mpix 16-bit uncorrected, and it looks like it may have less than 8 bits. Does this camera have 2 taps? The reason I ask is that the histograms for RGB have 2 distinct and separate shapes to them, like we are looking at 2 pictures interlaced or something!
I like the resolution. Maybe the image can be flattened with software, but leaving how many bits is the question. -Les <<<-- Originally posted by Steve Nordhauser : Some warnings: I'm not too sure about the number of bits of the images - PaintShop pro seemed confused by them but opened them as 8 bit. The file sizes seem to indicate larger files for the 12 bit files. ->>> |
I'm with you Les Dit.
I was expecting a dark image like the ones from an SLR (just debayered, not gamma corrected or anything else). Steve, do you remember the example Jason posted a long time ago of a digital SLR image? |
Re: I wrote a little debayer commandline program...
<<<-- Originally posted by Rob Scott :
So you're using linear interpolation? If I can get it to work with a REAL (floating point) data type I might try using it in the Convert app. ... By the way, my comment about a real data type wasn't meant to be insulting. I just meant real numbers vs. integer (I can hear it now -- dang it, we use REAL numbers, not those toy "char" things! :-) -->>> Yes, I understood you about the real data type. I think it is very easy to just make the internal data floats, and maybe get an external library to write 16-bit TIFFs. I might be doing that myself, if I get the time... Later on I plan to put it all in my pre-alpha non-linear editor called Pihlaja. It will probably be using GStreamer, so it will be a plugin for that. But this is probably a couple of years from now (because I don't do it professionally)... I don't really know about linear interpolation. I'm really not a programming professional so I don't know the terms. (I think I used linear interpolation when I was making a 3D engine that filled polygons.) As hinted in Ben's wiki entry about debayering, I just averaged the 2 nearest neighboring pixels to get the values of RGB for every pixel: first horizontally, then vertically. I'm not sure if there is some kind of better method than just pure averaging: ([i-1] + [i+1]) / 2. And I'm not sure if Ben's method in the plugin actually is this simple. So I meant that my method is the same as in the wiki. |
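For what it's worth, the averaging described above can be sketched in a few lines of C++. This is a minimal illustration of the horizontal pass only; `fillRow` is a made-up name, and a real debayer has to handle the 2D Bayer mosaic, both color phases, and image borders.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Horizontal averaging pass: values at odd indices are treated as the
// "missing" color samples and get the mean of their two neighbors,
// i.e. ([i-1] + [i+1]) / 2. A vertical pass would do the same by column.
std::vector<std::uint16_t> fillRow(const std::vector<std::uint16_t>& row) {
    std::vector<std::uint16_t> out(row);
    for (std::size_t i = 1; i + 1 < row.size(); i += 2) {
        out[i] = static_cast<std::uint16_t>((row[i - 1] + row[i + 1]) / 2);
    }
    return out;
}
```

This is exactly the bilinear (linear interpolation) scheme Rob asked about: cheap, but it produces the jagged edges mentioned above because it never looks at gradients.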
Quote:
The file is 62,914,560 bytes long, or 31,457,280 pixels. If you divide this by 10 (for frames) you get a 3,145,728-pixel image. That is much more than 1920x1080 (which is only 2,073,600 pixels). The math doesn't add up any way I try it. If you divide the file size by 1920 x 1080 x 2 (4,147,200) you get 15.17 frames, which is fractional?! Also, I looked in the file with a hex editor and I find numbers that have their last bit turned on! The first row is: A9 00 8B 00 A9 00 95 00 - 9F 00 95 00 A9 00 8B 00 That's: 0x00A9 = 169 (?) 0x008B = 139 (?) 0x0095 = 149 (?) 0x009F = 159 (?) How can this be two bits shifted to the left? It can't be 0xA900 either, because that would mean the high 4 bits are set as well, which can't be right either. So neither the encoding format nor the file size seems to match up! |
Just after posting the previous blurb I figured it out. This is not
1920x1080 but the chip's full resolution of 2048x1536! 2048 x 1536 = 3,145,728. So it is 2048 x 1536 for 10 frames. BUT, that still leaves the pixel packing/encoding "problem"... Steve: could you read this chip out at 2048 x 1152? |
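The working-backwards arithmetic above can be sanity-checked directly (a sketch; `pixelsPerFrame` is a made-up helper, and it assumes 16-bit words on disk as in the Epix capture):

```cpp
#include <cassert>
#include <cstdint>

// With 2 bytes per pixel on disk:
//   pixels per frame = file bytes / (frames * 2)
constexpr std::uint64_t pixelsPerFrame(std::uint64_t fileBytes,
                                       std::uint64_t frames) {
    return fileBytes / (frames * 2);
}
```

Plugging in the reported 62,914,560-byte file and 10 frames gives 3,145,728 pixels per frame, which matches 2048 x 1536 and rules out 1920 x 1080.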
BTW Rob,
What algorithms are you still planning on using? Last I remember it was Variable Number of Gradients, Pattern Recognition, and I guess Linear would be a nice addition if you need something quick for preview/offline. For anything being blown up or put on a big screen though, I think we need a good heavy-duty algorithm like Variable Number of Gradients (which, from the Stanford paper, will fully interpolate the Red, Green, and Blue channels to the best of its ability). |
Jason: what algorithms in what regard? To do what? Are you
talking about de-bayering? Basically it doesn't matter. A de-bayer algorithm is fairly quickly developed and integrated, and we will probably have quick and more complex ones. Ben has made one; Rob S. and myself have made a near-neighbour and half-resolution model (preview), etc. The main focus is working with the camera, getting that code to run as fast as possible, and getting everything stored to disk. Rob S. and myself are also working on the convert/processing application, but that will just be a basic version when it hits the GPL/open-source state and will be developed further. Anyone can join in on that to develop things like Bayer algorithms and whatnot. My personal spear point is working out the digital-negative format, seeing if we can incorporate at least some lossless on-the-fly compression to lower bandwidth usage, and getting the data to disk. After that I'll turn to the post-processing algorithms and other stuff if others haven't finished that yet. |
Re: I wrote a little debayer commandline program...
<<<-- Originally posted by Joonas Kiviharju :Another thing is the mounting of cameras. I found this interesting future thing: <http://www.four-thirds.org/en/index.html>
It could be the future sensor size, and the mounting system of choice in a couple of years. Joonas Kiviharju -->>> Joonas, thanks for posting this. It looked disappointing at first, but as I read further, about straightening oblique light rays (at what cost to light) and making the three primaries focus at the same distance instead of at three film layers (is that good for the Foveon X3?), it looked promising, and further on it looked disappointing again. This looks like a cheapened excuse not to pay for, and manufacture, 35mm sensors, and to get us to upgrade our lens systems. How bogus can they get? And we (the consumers) will probably have to be dragged along with it. They claim that the lens need only be half as long to get the same image size and brightness; that is an indication of a sensor that is half the size getting half the light (but over half the area it is the same). But what of DOF, or what of convergent lines, etc.? Their graphics depict it as neatly nestled between different abilities (which means flat-out compromise). Now to get more confusing, they mention a number of formats. 4/3 (four thirds) of what? 4/3rds of an inch (an inch and a quarter, or 34mm)? That would be good compared to 35mm film. Nope, it is about 18mm across, roughly 3/4 of an inch. No, it's the aspect ratio: 4 across and 3 down, 18 x 13.5mm. But if they said 18mm format, people would immediately twig that it is inferior to 35mm, and that they should have developed a Digital-35mm format instead. Here I was under the naive notion that a 2/3rd lens meant 2/3rds of an inch, but it is much smaller (anybody with a link to a good format guide? I need to brush up). One small step for us, one great leap for camera manufacturers' profits in getting us to upgrade our lenses. Why don't they do Digital 35mm and Digital 18 (16 would be better) instead, and make the 35mm conform to one of the existing SLR lens mounts, with lens refinements for digital use?
With an adaptor to condense and straighten the D35mm rays and feed them to the D18mm, or a straight 35mm format adaptor, and adapt it for pro video use and cinema camera use. In other words, a twin standard across film and video. But of course that would flatten profits from pro video and cinematic lenses. Anyway, meandering rant over (I just don't like slick marketing stuff that doesn't really help). |
This post is only for completeness, as it does not suit some people.
I have been chatting to VIA about the ITX products. From what I can gather there is nothing really suitable for us until next year. So the Pentium M ITX boards are still the best for now. But they have told me of a chipset that could be used as a development platform until then. It has twin 8-bit video inputs; the chipset also mentions a 12-bit capture, but my source only mentioned 8-bit and DVI input, and I don't know whether they are HD or not. If it is HD then it is good for acquiring third-party 8-bit footage to use in programs. It has 4-drive SATA RAID, 8 gigabyte memory support, gigabit ethernet support, and the processor maxes at 1.4GHz (though I expect more before the end of the year, but I don't know whether this board would take it; my comment, not theirs). It is a consumer HDTV type thing, so it has all that support and a DSP for MPEG-4/2 decoding and deblocking; not compression, but some sub-functions may help. http://www.via.com.tw/en/c-series/cn400.jsp This chipset has been mentioned before, and is not really up our alley for live lossless compression (unless the DSPs are reprogrammable and helpful). |
BTW guys, that 5.25" board that I was looking at from Axiomtek with Pentium M (Dothan), PCI-X, and SATA is $489.
I was also looking around at power sources, and it seems like the best bet for the size/weight is the Anton Bauer HyTRON 120, which can sustain 175W output for 120Wh at 9-16V (normal operation at 14.4V). So that should easily power this board with all the fix'ns :-) |
Good on you Jason, we need more of this. With the 5.25" computers, car computers, ISA-card-upgrade computers, and Mini/Nano ITX single boards we have a wide selection of choices. That VIA chipset mentioned something about 2.5W power consumption as well, but I've closed the window now.
Thanks Wayne. |
Also I was doing some number crunching, and you guys will have to tell me if I'm wrong or right on this.
Basically with 12 bits linear, you can cram in a maximum of around 10 stops without banding; 9 stops to play it safe and keep the noise levels down, depending on how noisy the chip is. Hopefully the Altasens will have hardly any noise, giving us around 10 stops of total dynamic range (1000:1). This is based on 10 stops being around a 1000:1 contrast ratio. When you split that up between 4096 levels, you get around 409 levels for the bottom 100:1 of the image, which is where the black and white cards lie (90% of the total image is the white card, with the extra 10 percent of perceptual brightness being superwhites, but those superwhite areas actually occupy 90% of a digital linear image). At 409 levels, you're playing it close with banding issues and noise, so if you increase that to 800 levels for the 100:1 contrast range, then you have removed a stop from the top, so what was at 500:1 is now at 1000:1. So you've dropped from covering 10 stops to covering 9 stops. If the bottom stop is too noisy, then you'll have to increase exposure some in the capture process, or set the black point higher, so you are now at 8 stops. With 10 bits, you only have around 100 levels for the 100:1 contrast range in a linear image. This will not do, so you have to double it twice to get to 400, and if you want 800 steps, you have to double that three times, so you've lost 3 stops. That now gives you 7 stops captured in a 10-bit linear file. Since the Micron 1300 has the streaking problem, you have to crush the blacks, so on the bottom you're losing another stop or two, which leaves us with the 5-6 (maximum) observed stops that we're currently seeing. If anyone is looking for a reference to what I'm talking about, read "Digital Compositing for Film and Video", the chapters concerning log-linear conversions and log versus linear files/encoding issues. |
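A small sketch of the halving at the heart of this argument: in a linear encoding, each stop below clipping gets half the code values of the stop above it, so the shadows are starved of levels. The function names here are mine, just to illustrate the arithmetic.

```cpp
#include <cassert>
#include <cstdint>

// Code values covering the n-th stop below clipping (n = 1 is the top
// stop). The n-th stop spans [2^(bits-n), 2^(bits-n+1)), which holds
// 2^(bits-n) codes.
constexpr std::uint32_t codesInStop(unsigned bits, unsigned n) {
    return (1u << bits) >> n;
}

// Code values available below a given scene ratio: e.g. a 1000:1 scene
// in 4096 linear codes leaves 4096/10 of them for the bottom 100:1.
constexpr std::uint32_t codesBelow(std::uint32_t totalCodes,
                                   std::uint32_t ratio) {
    return totalCodes / ratio;
}
```

So in 12-bit linear the top stop alone consumes 2048 of the 4096 codes, while the 10th stop down gets only 4; and the ~409 codes left for the bottom 100:1 range is where the banding concern above comes from.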
SI-3300 again....
OK, I went to the clue shop and got about half a clue. I've put new images at: http://www.siliconimaging.com/Samples/SI3300/
The 10 frame sequence (12 bit mono, 1920x1080) is now 40MB, which makes sense. The color images are 12MB (2Mpix * 2 bytes per color * 3 colors, I hope) and the monochrome (color camera in mono mode) is 4MB per frame. The only correction done on these (all done in the sensor, not postprocessed, so I think of these as RAW) was to add blue gain to balance the Bayer response and to adjust the black level offset. I used a Canon zoom and a bizarre mix of tungsten, halogen and fluorescent lighting. I'm shooting a calendar so there is some limitation there (I don't have my Macbeth right now). What is confusing me is that PSP7 still thinks that these are 8-bit images, but it also says the size is only 2MB for the 4MB file. |
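These sizes line up with simple width x height x bytes arithmetic (a sketch; `imageBytes` is a made-up helper, and the 1920x1080 dimensions are taken from the post):

```cpp
#include <cassert>
#include <cstdint>

// Bytes for one image: pixels * bytes per sample * channels.
constexpr std::uint64_t imageBytes(std::uint64_t w, std::uint64_t h,
                                   std::uint64_t bytesPerSample,
                                   std::uint64_t channels) {
    return w * h * bytesPerSample * channels;
}
```

One 16-bit mono frame is about 4.1MB, so ten of them make the ~40MB sequence, and the same frame expanded to three 16-bit color channels is about 12.4MB, matching the quoted figures.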
Feasibility of the whole Workflow
Hey everyone, this is my first time chiming in on this giant (and amazing) thread. For starters I guess I should explain that I consider myself quite "techy", but this stuff is way beyond me. So excuse any ignorance of the complexities involved, but has anyone figured out the amount of storage required to make these cameras an option in the field? I would assume that one would have a sort of cartridge system of hard drives... possibly in RAID pairs?... that they would use interchangeably just as they do with tapes. At night the raw footage could then be transferred to a storage RAID array and the recording HDDs reused the next day. Does that sound about right?
Also, I'm just starting to wrap my head around all the HD formats and the hardware/software needed for editing with them. Reading through the various trade magazines (Post, Film&Video, DV, etc.) about the workflow of the film (I should say video) Collateral has been quite informative. Shot on 2 Vipers and a couple of Sony F900s, Collateral used hard drive recording in some instances, but preferred using Sony's SRW-5000 decks, which introduced compression to the Viper's 4:4:4 RGB. The result was still perceptually lossless and therefore was an acceptable compromise for the flexibility it allowed the DP and the art of the project. Now I saw the film, video... whatever, in theaters last weekend projected on regular 35mm film (nowhere near here had a DLP projection of it) and it looked fantastic. So, I guess my question is: where is the point of diminishing returns when it comes to quality/ease of use/affordability? When do our dreams of perfect 4:4:4 uncompressed 12-bit become an "unreality" for people in our monetary/equipment situation? I read in these magazines about Smoke, Quantel, and Avid systems prepped for uncompromised HD, as well as the hardware necessary for such a thing, and it's all a tad overwhelming... especially in cost. How do we plan to do all this on our standard PCs and Macs, and will we have the same output capabilities as those huge, way-out-of-budget systems? The reason I'm bringing all of this up is that myself and a large crew are gearing up to pitch a feature film to investors, which we hope to shoot next summer. We figure the budget will allow video, but we hope to have the option of going to film if distribution is found, so miniDV is not our preferred format. So, for a group of independent filmmaking college students (we're actually video-production majors, not film guys), what is the feasibility of shooting, capturing, storing, editing, and outputting these massive formats?
The workflow is certainly a scary one, and I need to be able to pitch this as a viable and doable option. I am the would-be editor for the project and am trying to initiate as much learning/planning for the format choice early in pre-pro, so we are set up to go to whatever format our hearts desire (depending on our budget, of course. ;) ) Sorry for that huge post, I hope I made some sound arguments/questions. Oh yeah, and thanks goes to all of the DVinfo community... this is certainly a mind-blowing idea, and it seems to actually be working!! Keep it up guys. Spencer Houck EDIT: I should add that I'm a PC guy workin' with Adobe Premiere Pro currently, but am unhappy with its current stability. Do you think a Mac with FCP HD should be included in the budget to work with this media, or will I be OK with my PC and possibly a future version of Adobe's PPro? I don't have any experience working with QuickTime files in Adobe on my PC; is that all good to go? |
Also forgot,
We'll probably lose 1/2 to a full stop from the 12-bit image trying to white-balance, so it looks like we'll be around 9 2/3 stops, same as the Viper before white-balance. So theoretically we should get a little better than the Viper dynamic-range-wise (the Viper's S/N ratio is only 54dB whereas the Altasens is supposedly 68dB). |
Well this isn't going to be like editing video-tape, but I don't think it'll be quite the beast that it looks like initially.
If you store your files as RAW files with unpadded bits, you actually have fairly small file sizes. 1080/24p will only be 74.6MB/s at 12-bit, and around half that for 720/24p. So if you're going for 720p, then you're looking at around the same storage needs as uncompressed 10-bit 4:2:2 NTSC. There will be a lengthy "processing" time to convert the raw files to 16-bit TIFF, but that's not dissimilar to developing film neg, so while this isn't a "shoot and here's your tape" workflow, it's not that bad either. Once in 16-bit TIFF, go out to whatever QuickTime/AVI format you want. The only problem I see right now is some sort of housekeeping functionality you'll need to incorporate to keep track of your file sequences. But once you get to a QuickTime format (such as the 10-bit Uncompressed RGB codec from Blackmagic), you should be fine. Just have plenty of fast disks and lots of offline FireWire/USB drives. |
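The 74.6MB/s figure follows directly from the pixel count (a sketch; it assumes one packed 12-bit Bayer sample per pixel with no padding, and `rawBytesPerSec` is a made-up helper):

```cpp
#include <cassert>
#include <cstdint>

// Bytes per second for a packed RAW stream:
//   width * height * bits-per-sample * fps / 8
constexpr std::uint64_t rawBytesPerSec(std::uint64_t w, std::uint64_t h,
                                       std::uint64_t bitsPerSample,
                                       std::uint64_t fps) {
    return w * h * bitsPerSample * fps / 8;
}
```

1080/24p at 12 bits comes out to 74,649,600 bytes/s (the 74.6MB/s above), and 720/24p to 33,177,600 bytes/s, which is the "around half that" figure.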
Quote:
Soon I hope we can directly output to any QuickTime codec you like. There's no reason to use hard-drive-space-sucking 16-bit TIFF intermediates when you don't want TIFF as your final format. |
Just curious - I have never quite understood how this was supposed to work. Clearly we will need to assemble the images into a movie file of some sort once captured. How do we go about doing this, especially if we use a RAID system where these files may be written sequentially across various drives?
|
Quote:
|
The general idea is to have the camera as a separate system
from your processing computer, so you kind of download the things you shot from your camera (much like you capture now) and then clear the hard disks for your next recording. Let's consider this. If you were to record 1280x720 @ 24 fps in 10 bit, the camera is sending us 16 bits per pixel, which gives us a data rate of 42.19 MB/s. However, we will at least pack this before it goes to the hard disk, and this will yield 26.37 MB/s. Now you put a 200 GB hard disk in your system (or two 100 GB drives, for example) and you can then record for 7766 seconds or 129 minutes, which is just over 2 hours. I am working on a fast and simple compression algorithm to hopefully lower the bandwidth a bit, to perhaps allow recording without RAID and a bit longer recording times. Just to give you an idea of the massive amount of data: if we were to decode this to FULL RGB without ANY COMPRESSION, then you would be looking at a data rate of 126.56 MB/s, and 200 GB recorded will expand to 960 GB [2 hours]. So that basically rules out uncompressed for final edit use as well, and we will need to look at some form of compression there too. Some people have opted to use real-time Bayer in a codec, but I'm not sure about the quality vs. speed on that one. |
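These record-time numbers check out (a sketch; the constants mirror the post's figures, with "200 GB" taken as 200 x 2^30 bytes; it lands one second off the 7766 above, which is just rounding of the 26.37 MB/s figure):

```cpp
#include <cassert>
#include <cstdint>

// 1280x720 @ 24 fps, 10 bits per sample packed = 1.25 bytes per pixel.
constexpr std::uint64_t kBytesPerSec = 1280ull * 720 * 24 * 10 / 8;  // 27,648,000
constexpr std::uint64_t kDiskBytes   = 200ull << 30;                 // "200 GB"

// Seconds of recording that fit on the disk.
constexpr std::uint64_t recordSeconds() { return kDiskBytes / kBytesPerSec; }
```

27,648,000 bytes/s is the 26.37 MB/s quoted above, and the disk fills in roughly 129 minutes, just over two hours.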
Spencer: personally I would not pitch such a system at this point
in time. It isn't ready and it will not be for a while to come. It is all in the development stages and nobody has even shot a movie with any of this. It is not ready for primetime at this point in time. As you correctly identified, the workflow is still somewhat of a problem. Steve: regarding the SI-3300, can you read this chip out at a resolution of 2048 x 1152? |
Rob:
Native is 2048x1536 so anything less can be read out. Frame rate is the problem. You would have to run the chip at over 70MHz to get to 24fps. I'll try it later today if I can. |
Spencer,
For what it's worth, I was editing on a PC platform for 7 years and just now bought a G5 with FCP HD . . . and let me tell you . . . . It's like I died and went to heaven. Stability is rock solid. User-friendliness is awesome. Capabilities are awesome. Speed is awesome. Macs are awesome, only 2 months with it and I am a complete and total convert. If you can afford it, the Mac is WELL WORTH IT. Just IMHO. |
Re: Feasibility of the whole Workflow
Quote:
In fact, this weekend, a cousin of mine (actually a cousin-in-law, who oddly enough is also named Rob!) is going to be in town and we're going to try out some clips with FCP HD. I'll let you know how it goes. Unless I can get native QuickTime support in there in the next few days, we'll probably be transcoding from a 16-bit TIFF sequence into QuickTime. --- EDIT --- Minor update to the development blog |
DV Info Net -- Real Names, Real People, Real Info!
1998-2025 The Digital Video Information Network