Need for SPEED! (Plus single project point...)

CineForm Software Showcase
Cross-platform digital intermediates for independent filmmakers.

November 6th, 2009, 11:16 AM   #1
Major Player
 
Join Date: Dec 2005
Location: Natal, RN, Brasil
Posts: 900
Need for SPEED! (Plus single project point...)

To any here with experience: I posted this on the Premiere forum too, as some don't hang out here.

If anyone could give me their two bits on these questions, I'd sure appreciate it:

Three of us work as a team on all video productions, with three workstations now and probably a fourth later. All are quads, all have Production Premium CS3 or CS4, all use CineForm HD, all are on a Gb peer-to-peer network, and all will soon run a 64-bit Windows OS (two are on x64, one is still 32-bit). All productions are 1920x1080p. Since these are long-term productions with a number of target distribution channels, subtitled in 21 languages (to be dubbed later), regular backup and near-online storage of all material is necessary for corrections and updating. One person does all the AE comps and color grading, I am the main editor, and one is our second editor/Flash/website person, with another soon to join the team for FX.

The main problem is continuously having to duplicate all projects and material, and back up each workstation. We have been seriously looking for a unified workgroup solution that has the throughput and ease of a single project point for all file sharing and backup. Gigabit Ethernet is really way too slow, but we are a non-profit org, so funds can also be a problem.

Any suggestions? Throughput and stability between workstations are absolutely essential, as is compatibility between us. Program duplication is necessary, as we often use any idle system for rendering, and we use laptops for email, research and smaller stuff.
__________________
http://lightinaction.org
"All in the view of the LION"
Stephen Armour

November 7th, 2009, 05:42 PM   #2
Regular Crew
 
Join Date: Jan 2009
Location: Edmonton, Canada
Posts: 91
I have a GbE network as well, with a server and three systems.
Believe it or not, from one of my workstations to another (both RAID 0) I get transfer rates of 100+ MB/sec (megabytes, not bits), and a minimum of about 80 MB/sec.

When I reverse the flow, it drops off to 35 MB/sec. I haven't investigated much, but there are a lot of factors. Obviously, when transferring from computer A to B gets 100 MB/sec and B to A gets 35, there is some weirdness going on.
But the point is that a good GbE setup can give you 80+ MB/sec.

That being said, I think the ideal solution right now would be a server with a few big RAID sets (say two separate RAID 5 arrays) running 10Gb Ethernet.

10GbE is just as fast as fiber, but it can run on Cat6 or even Cat5e. The hardware is expensive, but you can find some cards on eBay for $300.

From one computer to another you could use a crossover cable, but in your case and mine we would need some kind of switch or router that is 10GbE as well, which probably is not cheap.

And on the server side, FreeNAS is OK, or Windows Server, as long as whatever you have supports iSCSI.

This month Asus is releasing the first USB 3 card, for a new motherboard of theirs. USB 3 has transfer rates of about 4.5 gigabit (about 500 MB/sec), and I could see setting up some kind of server network around that in the future. It would be cheap, too; the Asus card is something like $40, but it only works with that specific board.
Pretty soon there will be a few choices, but right now it's either 10GbE or fiber.
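For anyone doing the unit conversions in their head, here is that arithmetic as a quick Python sketch (the 90% efficiency factor is just a rough assumption for protocol overhead, and the 4.5 Gbit figure for USB 3 is the one quoted above, so treat the output as ballpark numbers):

```python
# Rough line-rate to usable-throughput conversion. The 90% efficiency figure
# is an assumption for protocol overhead, not a benchmark; real results also
# depend on tuning, drivers and the disks at each end.

def usable_mb_per_sec(gigabits_per_sec, efficiency=0.9):
    """Estimate usable MB/sec from a nominal link rate in Gbit/sec."""
    raw_mb_per_sec = gigabits_per_sec * 1000 / 8   # bits -> bytes
    return raw_mb_per_sec * efficiency

for name, gbps in [("Gigabit Ethernet", 1.0), ("USB 3 (effective)", 4.5), ("10GbE", 10.0)]:
    raw = gbps * 1000 / 8
    print(f"{name:18s} ~{raw:5.0f} MB/sec raw, ~{usable_mb_per_sec(gbps):5.0f} MB/sec usable")
```

By that math, the 100+ MB/sec transfers described above are already close to the practical ceiling of a single GbE link.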
Mike Harrington

November 7th, 2009, 09:19 PM   #3
Major Player
 
Join Date: Dec 2005
Location: Natal, RN, Brasil
Posts: 900
Mike, all our workstations have RAID 5 and we use Cat5e cabling with a Netgear GS108T switch. Even with teamed NICs, I have never seen anything close to 100 MB/sec; 70 MB/sec maybe, but never higher. And just like you, the speed depends on whether the data is being pushed or pulled.

But that is a LONG way from the speeds we need. We're talking about a single serving point, low latency, with a minimum of three simultaneous streams of HD video to feed, and most often more.

The only lower-cost hope I've seen is using teamed GbE NICs in all systems and feeding from something that can sustain 300 MB/sec. Like you observed, iSCSI over 10GbE would be just right, but that still seems VERY costly. Each NIC is $300 (we'd need six), and if we had to use a switch, there goes another $2600-$3000! Ouch!

Also, like you mentioned, my hope could be in something appearing based on cheap USB 3 cards.

For us, a good solution seems to be anything that looks to Premiere and CS3/4 like just an attached local HDD, but is actually a fast RAID with redundancy and some slots reserved for spares.

I don't think the RAID/NAS, or whatever, is really even the major problem. The bigger issue is getting consistent speeds across the network without having to pay FC prices. Maybe I'm dreaming, but a single file server with 8-12 TB and true high-speed connections to 3-4 workstations just shouldn't be that expensive. The software needs are actually very simple, and most "solutions" are super overkill. I don't give a rip what protocol or hardware is used for the main connections to a single file-serving point, as long as Adobe's software sees it as local to the workstation. Normal GbE NICs could be used for any other connections, as could the USB/eSATA drives we already have everywhere.

I counted 24 HDDs across our three systems (including a few external drives), with one machine holding eight drives and another attached via USB! That's ridiculous, and it's a stupid waste of resources and time to back them all up and shuttle data back and forth for the workflow. Good grief, there has to be a better way (that doesn't cost an arm and a leg) for small workgroups to get a decent workflow for unified projects.
__________________
http://lightinaction.org
"All in the view of the LION"
Stephen Armour

November 8th, 2009, 04:04 PM   #4
Major Player
 
Join Date: Feb 2008
Location: Voorheesville, NY
Posts: 433
@Stephen,

You don't say what OS you are presently using, but if you can put together the cash, I would suggest upgrading all your workstations to Windows 7 x64. My experience running a Gbit network of Vista and Win XP computers is that it is virtually impossible to routinely achieve 120 MB/sec transfer speeds (roughly a gigabit per second) with such a network. However, when I switched over to Win 7, I found that I was coming pretty close to gigabit speeds using network RAID devices. I can't speak to XP, but it is a known problem that Vista had some built-in throttling that limited bandwidth on Gbit networks. It was rare to obtain even 60-70 MB/sec transfer speeds with Vista, even if the network RAID drive could sustain >120 MB/sec read and write speeds.

But also make sure that you do the following:

1) Limit Ethernet cable lengths as practicable, and use good quality Cat5e (or Cat6) cabling.
2) Make sure that all your network interface cards are set to 1000 Mbit/sec and not "Auto". Enable the jumbo frames setting for each NIC (see the rough arithmetic sketch after this list).
3) Use a Gbit switch to connect all the workstations, and then connect the switch to your router and the Internet.
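Here is the rough arithmetic sketch referenced in item 2, showing the on-paper gain from jumbo frames (the header sizes are the standard TCP/IPv4-over-Ethernet ones; this ignores the bigger practical win, which is reduced per-packet CPU and interrupt load):

```python
# Theoretical payload efficiency of standard vs. jumbo Ethernet frames,
# assuming plain TCP/IPv4 with no header options (a sketch, not a measurement).

def wire_efficiency(mtu):
    payload = mtu - 40        # MTU minus 20-byte IP and 20-byte TCP headers
    on_wire = mtu + 38        # plus 18 bytes Ethernet framing and 20 bytes preamble/gap
    return payload / on_wire

for mtu in (1500, 9000):
    print(f"MTU {mtu}: ~{wire_efficiency(mtu) * 100:.1f}% of line rate carries payload")
```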
Jay Bloomfield

November 8th, 2009, 11:15 PM   #5
Major Player
 
Join Date: Oct 2007
Location: Northern California
Posts: 517
MetaLan Server is software that allows your network drives to appear as local disks, and it usually speeds up the connection compared to regular file transfers. I believe the basic iSCSI support included in Windows should be able to accomplish most of that if configured right.
I don't really understand why you need that much speed, since CineForm HD files are about 12-15 MB/sec, so 4 or 5 streams would be 60 MB/sec, which is sustainable over Gigabit.
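For what it's worth, here is that stream math written out (the per-stream rate is the ~12-15 MB/sec figure above, which varies with content and quality setting, and the GbE ceiling is a rough real-world number rather than a measurement):

```python
# Back-of-the-envelope check of CineForm HD streams against one GbE link.
CINEFORM_MB_PER_SEC = 15       # upper end of the ~12-15 MB/sec quoted above
GBE_USABLE_MB_PER_SEC = 110    # rough practical ceiling of a single GbE link

for streams in range(1, 9):
    need = streams * CINEFORM_MB_PER_SEC
    verdict = "fits" if need <= GBE_USABLE_MB_PER_SEC else "exceeds"
    print(f"{streams} stream(s): ~{need:3d} MB/sec, {verdict} a single GbE link")
```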

One idea would be to have a single workstation with all of the storage attached to it and a couple of multiport NICs, giving every other workstation a direct link (full Gigabit) to the storage. You may then be able to gradually upgrade the links to your top stations to 10GbE, using crossover connections and multiple NICs to save the cost of a switch. You should be able to edit or render directly on the machine you use as the "server" as well.
If 10GbE cards are $300, then you should be able to link two workstations to the server for $1200 using this method (four cards), and use Gigabit for the rest of the computers.

InfiniBand is another option that I have experimented with, but I never got it to perform as anticipated. You can get it cheap on eBay. I got a complete 12-node 10Gb IPoIB network (switch, cards and cables) for $3K, but I never got it running quite right, so we invested $40K in a true Fibre solution instead.
__________________
For more information on these topics, check out my tech website at www.hd4pc.com
Mike McCarthy

November 8th, 2009, 11:59 PM   #6
Regular Crew
 
Join Date: Jan 2009
Location: Edmonton, Canada
Posts: 91
I have a Vista system transferring to a Win 7 system that routinely gets 100 MB/sec.

I had trouble believing it myself, as I was usually ecstatic with anything above 50 MB/sec.

I think with some tuning a decent GbE server setup is practical. As Mr. McCarthy indicated, CineForm HD is well within the bandwidth, and even REDCODE RAW or CineForm RAW is usually under 50 MB/sec.

That is a good idea about having multiple NICs and crossover cables instead of piping everything through a router.
Mike Harrington

November 9th, 2009, 12:08 AM   #7
Regular Crew
 
Join Date: Jan 2009
Location: Edmonton, Canada
Posts: 91
Mike,

I've just been checking out some InfiniBand hardware on eBay.
What were your issues with it? It does seem quite cheap, and in theory it should do the job.

Just checking, because I wasn't aware of InfiniBand before this post, and it sure seems like a cheap high-bandwidth solution.
Mike Harrington

November 9th, 2009, 05:03 PM   #8
Major Player
 
Join Date: Oct 2007
Location: Northern California
Posts: 517
IB at 10Gb is internally four separate 2.5Gb serial links (think PCIe 4x). I could never get multiple links to work in parallel, so I was limited to 2.5Gb. The lack of Vista drivers for my Mellanox cards was also an issue. I intend to pursue it further when I have more time. Lots of potential, but I have yet to implement it successfully.
__________________
For more information on these topics, check out my tech website at www.hd4pc.com
Mike McCarthy

November 10th, 2009, 08:44 AM   #9
Major Player
 
Join Date: Dec 2005
Location: Natal, RN, Brasil
Posts: 900
Quote:
Originally Posted by Mike McCarthy
MetaLan Server is software that allows your network drives to appear as local disks, and it usually speeds up the connection compared to regular file transfers. I believe the basic iSCSI support included in Windows should be able to accomplish most of that if configured right.
I don't really understand why you need that much speed, since CineForm HD files are about 12-15 MB/sec, so 4 or 5 streams would be 60 MB/sec, which is sustainable over Gigabit.

One idea would be to have a single workstation with all of the storage attached to it and a couple of multiport NICs, giving every other workstation a direct link (full Gigabit) to the storage. You may then be able to gradually upgrade the links to your top stations to 10GbE, using crossover connections and multiple NICs to save the cost of a switch. You should be able to edit or render directly on the machine you use as the "server" as well.
If 10GbE cards are $300, then you should be able to link two workstations to the server for $1200 using this method (four cards), and use Gigabit for the rest of the computers.

InfiniBand is another option that I have experimented with, but I never got it to perform as anticipated. You can get it cheap on eBay. I got a complete 12-node 10Gb IPoIB network (switch, cards and cables) for $3K, but I never got it running quite right, so we invested $40K in a true Fibre solution instead.
If we had the cash, we'd do the fiber solution too...

As to the multi-streams of CineForm material, remember that your 60 MB/sec of streams is just for ONE system, not 3-4! Add multiple workstations to the mix and suddenly we're in true fiber territory.
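Written out, that multiplication looks roughly like this (the per-stream rate is the 12-15 MB/sec figure quoted above; the usable link throughputs are rough assumptions, not benchmarks):

```python
# Rough aggregate demand at the server when several workstations pull
# CineForm HD at once, compared against a few link options.
MB_PER_STREAM = 15
links = {"single GbE": 110, "teamed 2x GbE": 200, "10GbE": 1000}   # assumed usable MB/sec

for workstations in (1, 3, 4):
    for streams_each in (3, 4):
        total = workstations * streams_each * MB_PER_STREAM
        ok = [name for name, cap in links.items() if cap >= total]
        print(f"{workstations} ws x {streams_each} streams = ~{total:3d} MB/sec -> "
              f"{', '.join(ok) if ok else 'none of these'}")
```

The server-side total is what pushes things toward 10GbE or fiber; each individual workstation link still only has to carry its own 45-60 MB/sec.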

As to IB, I have a friend doing custom solutions in the industrial/military area, and he said to stay away from it and use FC where possible. That sounded like good advice to me. Maybe that's why it's cheap on eBay...

As to OSes, we're running XP x64 on two systems and 32-bit XP still on two, while we wait for the bleeding-edgers to help get the Win 7 bugs worked out. In 25 years I have never seen a new OS come out that didn't have plenty of bugs and workarounds to iron out. Win 7 looks pretty good already, but we'll give it another month or two for all the drivers to get out there.

As I stated earlier, everything is jumbo-framed and cabled with good Cat5e on short runs. A multi-10GbE-NIC solution is looking better and better though, cost-benefit-wise, especially if we can get a decent rack server up with them. That way, we'd eliminate the very expensive 10GbE switch in the middle and concentrate just on the server. We'd probably be fast enough that way to truly handle the three or four workstations all feeding on multi-stream CF video. Since latency wouldn't be much of a problem, and we'd rarely be doing 12-16 streams simultaneously, it'd sure help us "stream(line)" this outfit (sorry about that).

If you factor out the cost of the existing disks (assuming we could actually reuse them in the server), we'd get into it pretty cheaply compared to Mike's $40K fiber solution. When our storage needs grow beyond the server box's drive capacity, hopefully drive capacities will have grown to help us out.

Now if I can pull this off with a new 6 Gb/s chipset for the server, and find stable 10GbE NICs for cheap, we'll be a good way down the road.

One more thing: if SSD prices were lower, we could add an SSD as a "cache" for the current projects to gain speed, then just pull the rest of the data off the server HDDs as needed. Hmmm.

By the way, which $40K solution did you buy, Mike, and why?
__________________
http://lightinaction.org
"All in the view of the LION"
Stephen Armour

November 10th, 2009, 12:36 PM   #10
Major Player
 
Join Date: Oct 2007
Location: Northern California
Posts: 517
We have 16TB and 8TB Fibre arrays from Rorke Data, distributing data to ten systems through a QLogic 4Gb Fibre switch. We use MetaSAN to keep everything in sync.
__________________
For more information on these topics, check out my tech website at www.hd4pc.com
Mike McCarthy

November 10th, 2009, 07:39 PM   #11
Major Player
 
Join Date: Dec 2005
Location: Natal, RN, Brasil
Posts: 900
Mike, I know you and some others here sometimes work in multi-editor, single-system unified project (MESSUP) editing environments. We do too, but we have had to kludge things to accommodate not having any centralized storage solution (yet). Since I'm sure there are some good ideas floating around out there, I'll just ask questions for now.

Here are a few for you (and others here), to help us in our planning as we look for ways to centralize these projects more adequately.

I'll list them to make it easier:

1. How do you work around Premiere's lack of "Version Cue"-style file handling in a multi-editor, single-system unified project (MESSUP) environment?

2. If you break up your productions into separate reels, how does Premiere handle importing and nesting a project file that is open on another editor's system?

3. If you break up your productions (as above), are your previews still referenced OK?

4. What happens if two or more workstations try to use the same preview files simultaneously?

I guess I could try this out myself on a faster system, but figured I'd save some work and ask first.

I'm sure I could think of a hundred more questions if it weren't so late, but my brain is tired from too much multitasking. Editors are essentially mono-taskers by nature, and though I wear many hats, I'm really just an editor at heart. Plus, I admit it... I made up "multi-editor, single-system unified projects (MESSUP)" myself...
__________________
http://lightinaction.org
"All in the view of the LION"
Stephen Armour

November 10th, 2009, 08:23 PM   #12
Major Player
 
Join Date: Oct 2007
Location: Northern California
Posts: 517
We don't usually open the same project on multiple systems simultaneously. We usually capture and import the majority of our assets on a system in a non-client room. Then we open the project in one of our nice edit rooms and edit it there. Once we have a cut we want to export, we version up the project and open the previous version on a different system for the export to HD, H.264 or DVD. We tried the whole "trade sequences between two editors" thing a few years ago, but found it wasn't worth the trouble. Breaking your large project into reels is the only way to simulate our multiple-project workflow.

If we do break a project up, we try to avoid importing one into another. We just export flattened files to play back the segments together in a master project.

We rarely use preview files. Our systems play back most of our effects in real time, via CineForm, Matrox Axio, etc. We do share the media cache on the SAN, but I wouldn't do that over a regular network, even with iSCSI. It's probably better to have each system keep its temp files local on a data drive for low latency. It took forever for us to get Premiere to properly share the media cache files on the SAN, but it is great now that it (usually) works.
__________________
For more information on these topics, check out my tech website at www.hd4pc.com
Mike McCarthy

November 11th, 2009, 07:36 AM   #13
Major Player
 
Join Date: Dec 2005
Location: Natal, RN, Brasil
Posts: 900
OK Mike, I was hoping you had discovered something different, but I guess that's pretty much how things have to be done.

Our workflow routine is much the same, except that I've been trying some short nested projects again, after giving up on them early in the show. They're a bit improved now, but still risky sometimes. For final output we always do flattened video in a new project file, but always with separate audio for voice, music and audio FX. Since we're working with 21 different languages, and more to come, that way we can easily sweeten the audio in Audition and patch over any late video changes, yet still easily output new masters. To complicate matters more, we have to keep one master for English, then a multi-language master for the rest. All the titles and any text VFX (alpha layers) stay with the multi-language master project. Whew! That's why unifying things with storage is a priority now. And like you, we usually have a number of other (usually smaller) projects in the pipeline as well.

Well, I guess the only thing we really need is a fast, secure SAN. The rest is same-o same-o. I'd really like it to be a point-to-point (switchless) setup, as that will help latency and speed. It seems iSCSI with nearline SAS drives would be best for capacity, but true SAS drives would give us more speed and better MTBF. 10GbE NICs are still looking nice (but are $500 each), while teamed 1GbE NICs could get us there for thousands less, albeit much more slowly.
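To put that "thousands cheaper" trade-off into numbers, here is a rough sketch of the adapter costs for a server plus four workstations on direct links (the $500 10GbE NIC price is the one quoted above; the quad-port GbE card price and the teamed throughput are assumptions, so treat this as ballpark only):

```python
# Very rough adapter-cost comparison for a switchless (point-to-point) setup:
# one server feeding four workstations over direct links.
WORKSTATIONS = 4

# Teamed GbE: assume each workstation already has two onboard GbE ports and the
# server terminates the links with two used quad-port GbE cards (price assumed).
teamed_cost = 2 * 150
teamed_mb = 200                 # ~2x GbE per workstation; teaming rarely scales perfectly

# 10GbE: one NIC at each end of every link (workstation side plus server side).
ten_gbe_cost = 500 * WORKSTATIONS * 2
ten_gbe_mb = 1000

print(f"teamed GbE: ~${teamed_cost} in adapters, ~{teamed_mb} MB/sec per workstation")
print(f"10GbE     : ~${ten_gbe_cost} in adapters, ~{ten_gbe_mb} MB/sec per workstation")
```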

Any comments on the switchless SAN idea?
__________________
http://lightinaction.org
"All in the view of the LION"
Stephen Armour

November 11th, 2009, 05:05 PM   #14
Major Player
 
Join Date: Oct 2007
Location: Northern California
Posts: 517
I am not aware of Fibre switches adding any meaningful latency to the process. Fibre is the most bandwidth-efficient and consistent protocol I have ever used. The only issues we ever have with our SAN are related to MetaSAN, which synchronizes the writes to disk between machines. The Fibre itself is amazingly fast and responsive. I found a 20-port Fibre switch on eBay for $2K (MSRP $8K) which I keep as a spare. I also found PCI-X HBAs for $275 (MSRP $1200), so the hardware can be had relatively cheaply, but I see no way around paying for shared SAN software licenses. (Unfortunately I didn't find those deals until after our initial purchase, but the recent additions to our SAN have been quite cheap.)

2Gb Fibre gear is REALLY cheap on eBay, and fast enough for what you need (use dual channel to get to 400 MB/sec), but MetaSAN is around $1000 a seat, and I know of no real alternative for a true shared SAN that is any cheaper (I hear StorNext is $2500 a seat). And of course you would need a Fibre array.
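The arithmetic behind those two figures, for anyone pricing it out (200 MB/sec is the conventional payload rate of a single 2Gb FC link; the four-seat count assumes the three current workstations plus the planned fourth):

```python
# The numbers behind "dual channel to get to 400 MB/s" and the MetaSAN cost.
FC_2GB_MB_PER_SEC = 200          # conventional payload rate of one 2Gb FC link
print(f"dual-channel 2Gb FC: ~{2 * FC_2GB_MB_PER_SEC} MB/sec")

METASAN_PER_SEAT = 1000          # per-seat price quoted above
SEATS = 4                        # three current workstations plus a planned fourth
print(f"MetaSAN for {SEATS} seats: ~${SEATS * METASAN_PER_SEAT}")
```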
__________________
For more information on these topics, check out my tech website at www.hd4pc.com
Mike McCarthy

November 14th, 2009, 06:44 AM   #15
Major Player
 
Join Date: Dec 2005
Location: Natal, RN, Brasil
Posts: 900
Quote:
Originally Posted by Mike McCarthy
I am not aware of Fibre switches adding any meaningful latency to the process. Fibre is the most bandwidth-efficient and consistent protocol I have ever used. The only issues we ever have with our SAN are related to MetaSAN, which synchronizes the writes to disk between machines. The Fibre itself is amazingly fast and responsive. I found a 20-port Fibre switch on eBay for $2K (MSRP $8K) which I keep as a spare. I also found PCI-X HBAs for $275 (MSRP $1200), so the hardware can be had relatively cheaply, but I see no way around paying for shared SAN software licenses. (Unfortunately I didn't find those deals until after our initial purchase, but the recent additions to our SAN have been quite cheap.)

2Gb Fibre gear is REALLY cheap on eBay, and fast enough for what you need (use dual channel to get to 400 MB/sec), but MetaSAN is around $1000 a seat, and I know of no real alternative for a true shared SAN that is any cheaper (I hear StorNext is $2500 a seat). And of course you would need a Fibre array.
Mike, excuse my ignorance, but would it not be possible for us to simply use a four-port card in the server and access any shared storage, without the super overkill of MetaSAN?

I like the idea of the 2Gb FC array with low/no latency issues, and it would be enough speed that way, but is it necessary to use such spendy software for a 4-seat config? Surely there is a way to bypass that $1K/seat MetaSAN setup. $4K for software seems like a tremendous amount to spend for bi-directional feeds off a direct FC server link.

Why would a switch even be necessary, since the server would be juggling the feeds? And the RAID card would sync the reads/writes, wouldn't it, so why the need for MetaSAN?

Educate me.
__________________
http://lightinaction.org
"All in the view of the LION"
Stephen Armour