Network setup for workgroup editing
Stephen Armour November 6th, 2009, 11:05 AM I have 12 disks, 1 TB each, in a RAID30 on an Areca ARC-1680iX-12 controller with 2 GB cache and a BBM (battery backup module), with expansion capability for another 4 disks. Due to the nature of RAID30 (two RAID3 arrays of 6 disks each, striped together to form the RAID30), you of course lose 2 disks for parity, so the effective net space is only 10 TB.
Harm and Pete and any others, this is very off-post, but if you guys could give me your two bits on this question in a separate post or email, I'd sure appreciate it:
Three of us work as a team on all video productions, with 3 workstations now and probably a fourth later. All are quads, all have Production Premium CS3 or CS4, all use Cineform HD, all are on a Gigabit peer-to-peer network, and all run 64-bit Windows (still x64). All productions are 1920x1080p. Since these are long-term productions with a number of target distribution channels, subtitled in 21 languages, regular backup and near-online storage of all material is necessary for corrections and updating. One person does all the AE comps and color grading, I am the main editor, and one is our second editor/Flash/website/etc., with another soon to join the team.
The main problem is continuously having to duplicate projects and material and back up each workstation. We have been seriously looking for a unified workgroup solution with the throughput and ease of a single project point for all file sharing and backup. Gigabit Ethernet is too slow, but we are a non-profit organization, so funds can also be a problem.
Any suggestions? Throughput and stability between workstations are absolutely essential, as is compatibility between us. Program duplication is necessary, as we often use any idle system for rendering when available, then use laptops for email, research and smaller stuff.
Sorry again for the off-topic post, but I've got your attention here and need experienced suggestions to try to find a better solution.
Harm Millaard November 6th, 2009, 11:43 AM Stephen,
I dropped this question with my son, who is a network consultant. We'll both discuss it and I will get back to you ASAP.
Steve Kalle November 6th, 2009, 12:41 PM Stephen: if you had the money, a Fibre Channel SAN would be the fastest. However, something you can do easily and for little money is this: (assuming each PC has a PCI-Express x4 or greater slot or a PCI-X slot available) get an Intel dual-port Gigabit network card (NIC), and then you can "team" the 2 connections together to double the throughput. I also highly recommend the D-Link Gamer Gigabit router and a Gigabit switch. A few months ago, I set up the IT infrastructure for a small office; I built some PCs and bought the rest from Dell, and I used the D-Link router and switch. I was blown away by the speed from the Dells to the server - 70-80 MB/s. And this was over just one Gigabit port.
1) What are your current speeds if you were to copy a file from one pc to another?
2) Ethernet cable - Cat 5, 5e or 6? You need at least Cat 5e (Cat 6 is better) for full Gigabit throughput, so you might need to rerun the cables.
3) What is the current layout of your computers?
I should add that you can also get 2 single-port Intel NICs in either PCI-E x1 and/or PCI if you don't have a PCI-E x4/PCI-X slot available.
To have a centralized workflow, you need a server. I don't recommend a NAS or something like the Drobo because their speed is always limited. For the previously mentioned IT project, I used Acronis Workstation on all computers and had them backing up to the server. With Acronis, I could also back up to a 2nd destination, in that case on the host PC.
If this sounds good to you, let me know and I can answer more questions.
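To put a number on question 1, rather than eyeballing Explorer's copy dialog, you can time a copy of a large, known-size file to a share on the other machine. Here is a minimal Python 3 sketch; the local file and the UNC share are hypothetical placeholders to replace with your own:

# Time a large-file copy to another workstation's share and report MB/s.
# Both paths below are placeholders; use a multi-GB file for a stable number.
import os
import shutil
import time

SOURCE_FILE = r"D:\media\test_clip.avi"       # hypothetical local test file
DEST_DIR = r"\\WORKSTATION2\transfer"         # hypothetical share on the target PC

def timed_copy(src, dst_dir):
    size_mb = os.path.getsize(src) / (1024 * 1024)
    dst = os.path.join(dst_dir, os.path.basename(src))
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    print(f"Copied {size_mb:.0f} MB in {elapsed:.1f} s -> {size_mb / elapsed:.1f} MB/s")

if __name__ == "__main__":
    timed_copy(SOURCE_FILE, DEST_DIR)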
Harm Millaard November 6th, 2009, 01:11 PM Stephen,
Just as Steve figured, the (not easily affordable) solution would be to look at Fibre Channel SAN switches and a SAN solution like the HP StorageWorks MSA 1500. However, this is based on your remark about IP being too slow. In addition to Steve's remarks about teaming, you may also have a look at jumbo frames to ease the overhead on your network, but your NICs and switches need to support them. If that is achievable, you can also have a look at some iSCSI NAS solutions such as Thecus.
A Fibre Channel SAN would be optimal but costly. A very basic switch starts at around €2,500...
or look here: http://www.superwarehouse.com/Fiber_Channel_Switches/c3/2178
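To put rough numbers on the jumbo-frame suggestion: the Ethernet, IP and TCP headers are a fixed cost per frame, so raising the MTU increases the share of each frame that is actual payload. A small illustrative Python calculation using the usual textbook header sizes (in practice the bigger win is fewer frames and interrupts per second, not the few percent shown here):

# Payload efficiency of standard vs. jumbo frames (approximate figures).
ETH_OVERHEAD = 18 + 20    # Ethernet header + FCS, plus preamble and inter-frame gap (bytes)
IP_TCP_HEADERS = 20 + 20  # IPv4 + TCP headers without options (bytes)

def payload_efficiency(mtu):
    payload = mtu - IP_TCP_HEADERS       # bytes of real data per frame
    wire_bytes = mtu + ETH_OVERHEAD      # total bytes on the wire per frame
    return payload / wire_bytes

for mtu in (1500, 4000, 9000):
    print(f"MTU {mtu:>5}: ~{payload_efficiency(mtu) * 100:.1f}% of wire bytes are payload")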
Stephen Armour November 6th, 2009, 02:40 PM Thanks much for the comebacks! (If I could move this post to a separate thread, I would... sorry!)
I have experimented a bit with teaming single-port Intel NICs, as they are fairly cheap, but I really didn't see the gain I was expecting. What I did gain was better reliability.
As to true throughput, it varies quite a bit; right now we have a mix of two Gigabyte boards (X38 and X48) and one Intel server/workstation board. I shouldn't complain about some of our throughput, as sometimes it's pretty fast (500+ Mb/s sustained, i.e. 60-70 MB/s).
For sure, the bottleneck for a single-access point server would be the GbE, though. FC is definitely superior, but costs shoot straight up very fast with enterprise drives and all.
Sigh.
I guess the real bottom line is to sell a car and get something like the Winchester Systems gear? Their RAID 6 security sure looks good, but I know I could probably build something "almost" as good for less than half the cost.
It's hard to be in the "tween zone" of needing "enterprise class speed and security"...but being little guys...and working non-profit in Brazil to boot.
I really appreciate any more info you might have for this scenario though. If the conditions come soon for a good upgrade, I want to be ready to jump, and only guys like you with experience can see all the factors at stake.
Just one more comment. It seems that an SSD in the server/RAID mix somewhere could surely boost things significantly if it were serving as a "cache" for current projects? We could take the speed hit on older projects served from the HDDs, but get big speed gains on current material cached on the SSD.
Any light on this or other possibilities? (thanks much)
Oh, I forgot to answer Steve's questions: all cabling is Cat 5 and we are within 10-15 metres of our Netgear GS108T switch. Jumbo frames are ON (4K), though I saw no gain with frames larger than 4K. All systems have at least one RAID 5, and one has dual RAIDs (RAID 0 and RAID 5), with single mirrored boot drives and single-disk mirrors of the RAID 0 (cheaper) via MirrorFolder.
Steve Kalle November 6th, 2009, 04:05 PM Thanks, Harm, for reminding me about iSCSI. I just read a whitepaper comparing 1 Gb iSCSI vs 2 Gb Fibre Channel, and it appears that you should be able to combine Ethernet ports for iSCSI to increase throughput.
Harm's son should know more but I find this very interesting as I always like to make things go faster.
Steve Kalle November 6th, 2009, 05:02 PM Check this out:
EnhanceRAID RS8 IP (iSCSI) Benchmark (http://www.enhance-tech.com/products/ultrastor/RS8ip_benchmark.html)
The 2nd benchmark uses MPIO (Microsoft Multipath I/O), which doubles the throughput by using 2 ports and also increases reliability.
Stephen Armour November 6th, 2009, 05:02 PM I think the teamed ports could theoretically work well for small workgroups, but the latency under Windows doesn't seem very good. I'm trying to imagine 3 workstations all pulling multi-gigabyte HD files at once, and I have never seen GbE run fast enough or well enough to do that consistently.
I added up our HDDs here and we have 16 installed (not including the older 10K Cheetahs in an older dual-processor workstation), and often another couple attached via external USB or eSATA to retrieve older data. It's faster that way than our GbE SANs, which are painfully slow.
If we added it all up, we'd easily have plenty of drives for a fast NAS RAID. All are 1 TB or 500 GB and 7200 rpm. But, I'd rather start from scratch with any new system and use these current disks for duplicate offsite backup. Our internet link is too slow to do it that way.
Steve Kalle November 6th, 2009, 05:09 PM The iSCSI protocol would greatly help with the latency.
With a centralized server, why would you need to download the video to each workstation?
Stephen Armour November 6th, 2009, 05:20 PM Harm, for that price, it might be cheaper to go to 10 Gigabit Ethernet? This Netgear switch has 4 of its ports running at 10 Gb:
Newegg.com - NETGEAR GSM7328FS 10/100/1000Mbps + 10 Gigabit Switch 4 x RJ-45, 24 x SFP, 4 x 10 Gigabit Ethernet/24G Stacking Module Bays, 1 x RS-232 8K MAC Address Table 334 KB embedded memory per port Buffer Memory - Switches (http://www.newegg.com/Product/Product.aspx?Item=N82E16833122208)
Not sure how much the NICs are, though, or how much you'd actually gain. Maybe it'd be cheaper and faster to just use 10GbE NICs and run directly to the server, bypassing the switch? Not sure how that'd work, but it would be fast if it did. The server would be the switch and the NAS, maybe?
Steve Kalle November 6th, 2009, 05:27 PM I was looking at 10GbE on Newegg too, but I thought it was out of your budget. Do you have an idea what your total budget might be?
However, the 10GbE latency and reliability would be similar to regular 1GbE because they are the same protocol. Actually, 10GbE iSCSI would be better, and then all you need is the storage unit (i.e. no server).
Stephen Armour November 6th, 2009, 05:28 PM You wouldn't "download" it, for sure. But you'd need enough throughput to create previews locally or to run them straight from the server. I can't imagine how that would work with Premiere, though. If you can fool it into seeing the server as a "local drive", I guess it would work.
I've never tried to run multiple workstations with Premiere from the same location on remote network drives (I've never had enough speed to do that), but I can imagine Adobe somehow blocking it. Plus, each of those workstations would need 3-5 streams of video for any "realtime" output under Cineform...
Stephen Armour November 6th, 2009, 05:36 PM We're jumping each other's posts. I can surely see the advantages of iSCSI if it's direct-to-storage, as Premiere (and all apps) would then see it as a "local" drive.
Our "budget" is completely relative. If we really get against the wall, we'll let interested parties know of our major needs and sometimes the resources will come in. I'd need some true TCO figures to even know if it was something we could shoot for.
Any other good links to solutions like that are appreciated - especially any head-to-head comparisons.
Steve Kalle November 6th, 2009, 06:12 PM Free Data Storage Software V6 Lite (DSS V6 Lite) (http://www.open-e.com/products/open-e-dss-v6-lite/features/)
This is free to download and try out with a maximum of 2 TB. It allows you to test iSCSI on your current workstations as well as use two Gigabit ports in MPIO (I don't know if they must be identical). I remember seeing one benchmark with the Open-E server getting 100-110 MB/s on a single Intel NIC.
It seems this software/OS allows you to use regular hardware rather than buying a pre-made iSCSI storage system, which should save some money.
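If anyone tries the DSS Lite route, it helps to benchmark the mounted volume the same way before and after enabling MPIO so the comparison is fair. A minimal Python 3 sketch; the drive letter and test file are placeholders, and the file should be larger than the client's RAM so the Windows cache doesn't flatter the result:

# Stream a large file off the iSCSI-mounted drive letter and report sustained MB/s.
import time

TEST_FILE = r"X:\bench\big_test_file.bin"   # hypothetical file on the iSCSI volume
BLOCK = 8 * 1024 * 1024                     # 8 MB reads, roughly video-sized chunks

def sequential_read_mbs(path):
    total_bytes = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(BLOCK)
            if not chunk:
                break
            total_bytes += len(chunk)
    elapsed = time.perf_counter() - start
    return (total_bytes / (1024 * 1024)) / elapsed

if __name__ == "__main__":
    print(f"Sustained read: {sequential_read_mbs(TEST_FILE):.1f} MB/s")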
Stephen Armour November 7th, 2009, 06:36 AM For sure, something that robust would give us great flexibility. I think it also comes bundled by quite a variety of hardware SAN and NAS vendors, so it can be cheaper that way too.
It might be overkill for a 4-seat outfit, though. Then again, sometimes the main feature set is worth getting even if you pay for bells and whistles you don't need, so it's certainly a candidate to keep in mind.
At this point, the most important thing for us is the cost/benefit. If the gain is great enough, the pain of robbing Peter to pay Paul is sometimes worth it. It's getting a true picture of what that "gain" is that is hard. For us, it seems to be consolidated speed, accessibility, long-term data security, TCO and reliability. That way, no matter when a workstation goes down or we fry another HDD, we can keep on truckin'...
Harm Millaard November 8th, 2009, 02:11 PM Stephen,
I just heard from my son. His comments:
I can only give you good advice if I know the budget…
In an ideal situation they would go for a NetApp storage solution with snapshotting at the block level for backup, so older files can be restored instantly. Such equipment starts at around €20,000 and may run into the millions... Look here: http://www.netapp.com/us/products/storage-systems/fas2000/fas2000-tech-specs.html
I can't really give advice without knowing the budget available, but I do think that Fibre Channel is the best solution by far; again, though, the equipment to use is largely budget dependent...
Jiri Fiala November 8th, 2009, 05:11 PM How does Premiere fare with networked editing, anyway?
Harm Millaard November 8th, 2009, 05:29 PM Depends on who you are asking, but generally: lousy.
Adam Gold November 8th, 2009, 09:09 PM Adobe says not to even try it, but I'm not sure they were envisioning anything like this...
Steve Kalle November 8th, 2009, 09:45 PM Network editing = lousy. Well, sort of. If you use regular network drive mapping, then yes, it is lousy. But if you use iSCSI, it's far better. Network drive mapping shows the drive as \\Server\Video, whereas iSCSI shows the drive as a normal E:\ (or whatever letter is assigned to it). iSCSI is designed for large workgroups and offers better reliability.
Stephen: I think I'm stating the obvious here when I say that you need centralized storage; so it's just a matter of how much you want to spend and how fast you need it to be.
If your editors need maximum speed, they can always download the video files to their workstations during that project. The Premiere and AE project files would be saved to the server, so you could open them on your workstation to color-correct (CC) and whatever else rather easily.
I was just looking on B&H, and they have a good amount of Fibre Channel equipment. FC is not as expensive as I thought.
How much storage space do you think you need excluding backup?
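To make the \\Server\Video versus E:\ distinction concrete: Windows itself reports the two kinds of volume differently, which is what Premiere and other applications key off. A small Windows-only Python sketch using the Win32 GetDriveType call (the drive letters are hypothetical examples):

# An iSCSI-backed volume reports as a fixed (local) disk, while an SMB
# mapping reports as a remote drive. Windows-only; adjust the letters.
import ctypes

DRIVE_TYPES = {2: "removable", 3: "fixed (looks local)", 4: "remote (network mapping)"}

def drive_kind(letter):
    code = ctypes.windll.kernel32.GetDriveTypeW(f"{letter}:\\")
    return DRIVE_TYPES.get(code, f"other (code {code})")

for letter in ("E", "Z"):   # e.g. E: = iSCSI volume, Z: = mapped \\Server\Video share
    print(f"{letter}: -> {drive_kind(letter)}")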
John Mitchell November 11th, 2009, 08:22 PM I would seriously look at Avid if your business keeps growing. No one does network editing better than Avid with their Unity product. Sure, it is expensive, but you save in the long run through productivity.
Unity works with other solutions as well (including Adobe and Apple), but it clearly works best with Avid MC, as you get true project sharing, even down to the bin level (a bin that one editor has open gets locked for the others).
You could also look at the third-party Avid-compatible solutions mentioned in the thread (EditShare and Facilis TerraBlock), but I found them to be about the same cost for the actual network storage, if a little cheaper per seat, than Avid Unity.
The problem with other untried SAN solutions is dropped frames - only really important during capture and mastering, but most solutions built around a SAN will drop the occasional frame.
Stephen Armour November 18th, 2009, 11:44 PM I appreciate the comebacks!
As to editing in a networked environment, I never would have considered it with Premiere... ever... except that the market is changing and so is Premiere. It appears the integration in CS5 will be even greater, and their 64-bit changeover will certainly give them greater headroom.
Now, what remains to be seen is whether the 500 lb gorilla Adobe can actually do something to address this glaring weakness in the Premiere workflow. In the meantime, it seems we had better suffer a few more months with our current kludge of x64 systems, multiple RAIDs, large-file shuffling and too much disk redundancy before jumping ship to something else. Any major moves now could cost a pretty penny.
The idea of a fast iSCSI SAN is quite appealing, however, since it would then be our central file-serving point and would help avoid some of the file duplication/confusion of multiple workstations all working on the same projects. All previews would still remain on each local machine, and we'd still have to break things into reels for editing, but it seems workable. The secret is getting Adobe to see that SAN as a "local disk".
At least our AE man won't go crazy on who has what and when, and all updated material will be in the same location.
Any "budget" questions for non-profits are usually met with a grim stare, but what I'm trying to gather here is an idea of what to shoot for. If I know the low, mid and high options, we can present those to interested parties in this venture.
As to storage needs, these are long-term projects in multiple languages, each with a "master" and its associated project files. The episodes aren't that long (25-28 minutes), but they often have quite a lot of VFX, a few stills, quite a number of text/titles, and a language dubbing track along with AFX and music. The English masters are the most complex, and from them we generate the more flattened "masters" for the multilanguage projects. We use a mix of AE-generated alpha-layered text files and plain Premiere-generated text files.
So, to finally answer Steve's question: we have a moving target as we add languages. We need to keep these at least near-online as our output languages grow (21 now, and possibly another 26 in the near future). We need storage not only for the master editing projects, but also for generating additional synced dubs and "burned-in text" sub-masters... all in HD. I would say 12 TB is easily where we'll be this time next year. We could possibly offline some of that, but who knows by then.
Makes me tired just typing it! Big HDDs are pretty cheap, though, so aside from the hassle of having to plug in a drive to correct something, and having to duplicate everything for security, we just limp along with CS3/4, Cineform and lots of needs.
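One way to keep that moving storage target honest is a tiny projection that gets re-run whenever a language or episode is added. Every figure in this sketch is an assumed placeholder - in particular the average Cineform HD data rate is a guess to be replaced with a number measured from an existing master:

# Rough storage projection for the multi-language roll-out (all inputs are assumptions).
EPISODE_MINUTES = 26        # episodes run 25-28 minutes
AVG_RATE_MB_S = 12          # assumed average Cineform HD 1080p data rate, MB/s
LANGUAGES = 21 + 26         # current languages plus the possible additions
MASTERS_PER_LANGUAGE = 2    # dubbed master + burned-in-subtitle sub-master (assumption)
EPISODES = 10               # hypothetical episode count

per_master_gb = EPISODE_MINUTES * 60 * AVG_RATE_MB_S / 1024
total_tb = per_master_gb * MASTERS_PER_LANGUAGE * LANGUAGES * EPISODES / 1024

print(f"~{per_master_gb:.0f} GB per flattened master; "
      f"~{total_tb:.1f} TB for {LANGUAGES} languages x {EPISODES} episodes")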
Steve Kalle November 19th, 2009, 12:51 AM When you start to look for the hardware, look into a rack case so you can use rack-mounted servers and storage. I suggest this because adding rack-mounted storage down the road is very simple. As you may know, you can install Adobe on 2 computers, so here is another idea as part of the rack plan: 1U rack servers using the new Intel socket 1156 with an i7-class CPU are very cheap. You could install a couple of these and use them for rendering so your editors can continue to work. I recently looked into a rack case and servers to build a small render farm for Cinema 4D and found I could get a 1U server for $500 not including the hard drive; this was with the Intel i5 750, or add another $120 for the Xeon i7 equivalent plus ECC RAM (which is safer for long renders). If you want the links to the hardware (all on Newegg), let me know.
As to Premiere seeing the iSCSI drive, it would be just like seeing any drive installed in that PC. One of the main benefits of iSCSI is having the drive 'appear' as a local drive to the OS and programs.
Stephen Armour November 19th, 2009, 11:15 AM Sounds good, Steve. Maybe if you post them here, someone else can benefit as well. Thanks - all the info I can get at this point is good. Appreciate it.
Steve Kalle November 19th, 2009, 12:58 PM Here is what I found: you can save $110 by getting the case and motherboard separately.
$370 - Supermicro Barebones Chassis w/Motherboard & PSU
Newegg.com - SUPERMICRO SYS-5016I-MR 1U Rackmount Barebone Server Intel 3400 LGA 1156 Intel Xeon X3400/L3400 series - Server Barebones (http://www.newegg.com/Product/Product.aspx?Item=N82E16816101280)
OR
$90 - Supermicro Chassis & PSU (Identical to Barebones but without motherboard)
Newegg.com - SUPERMICRO CSE-512L-260B Black 14" Mini 1U Rackmount Server Case w/ 260W Power Supply - Server Chassis (http://www.newegg.com/Product/Product.aspx?Item=N82E16811152087)
$170 - Same Supermicro motherboard as above Barebones
Newegg.com - SUPERMICRO MBD-X8SIL-O LGA 1156 Intel 3400 Micro ATX Intel Xeon X3400/L3400 series Server Motherboard - Server Motherboards (http://www.newegg.com/Product/Product.aspx?Item=N82E16813182212)
+
$240 - Intel X3440 Xeon 2.53GHz
Newegg.com - Intel Xeon X3440 Lynnfield 2.53GHz 8MB L3 Cache LGA 1156 95W Quad-Core Server Processor - Processors - Servers (http://www.newegg.com/Product/Product.aspx?Item=N82E16819117225)
$110 - Kingston DDR3-1333 4GB (2x2GB)
Newegg.com - Kingston 4GB (2 x 2GB) 240-Pin DDR3 SDRAM ECC Unbuffered DDR3 1333 Server Memory Model KVR1333D3E9SK2/4GI - Server Memory (http://www.newegg.com/Product/Product.aspx?Item=N82E16820139041)
So, $90 + $170 + $240 + $110 = $610. All that's needed is a hard drive.
For a regular motherboard ($100), an Intel i7 CPU and non-ECC RAM ($70 for 4 GB as 2x2GB):
$90 + $100 + $290 + $70 = $550
Intel i7 860 2.8GHz - $290 @ Newegg or $220 @ Microcenter
You still need a rack; I found an open rack for under $200, plus another few hundred dollars for a KVM switch and other accessories. And probably $150 for Windows 7 Pro x64 for each node.
Stephen, for your main server, you can get a 4U case with 16 HDD bays in the front.
Oh yeah, your AE guy might like the added render nodes so he could continue working on his workstation.
I would go for the server parts as they are generally built better and designed for heavy use.
Stephen Armour November 19th, 2009, 02:24 PM And controller card, drive, and OS suggestions? Just fishin...
Steve Kalle November 19th, 2009, 11:18 PM Well, what I listed was just for render nodes. I would use something a bit bigger and more robust for the main file server, which could also be used as a render node depending on the OS.
For the file server, you could use that Open-E OS, which gives you iSCSI with regular NICs but no render node.
For an iSCSI hardware only solution, Newegg.com - Enhance Technology RS16 IP-4 Quad Intelligent iSCSI RAID Storage System - Server RAID Systems (http://www.newegg.com/Product/Product.aspx?Item=N82E16816201048&cm_re=iSCSI-_-16-201-048-_-Product)
It has 4 Gigabit ports for iSCSI and 16 drive bays. What you can do is take those 4 Gigabit ports and make them into pairs (i.e. team or bond them) and set it up so each workstation sees 2 'drives' (i.e. one for masters and one for source).
With the Enhance Technology iSCSI unit, you could then get a render node or two.
I will refine what I suggested earlier:
Newegg.com - SUPERMICRO CSE-822T-400LPB Black 2U Rackmount Server Case w/ 400W Power Supply 1 External 5.25" Drive Bays - Server Chassis (http://www.newegg.com/Product/Product.aspx?Item=N82E16811152109) or
Newegg.com - SUPERMICRO CSE-743i-650B Black 4U Rackmount Server Chassis w/ 650W Power Supply 2 External 5.25" Drive Bays - Server Chassis (http://www.newegg.com/Product/Product.aspx?Item=N82E16811152062)
Both have larger power supplies, can take server-size motherboards, have more drive bays and allow for more future upgradeability. Each Premiere/AE render node requires an OS - Windows 7 Pro x64, which is $140-150 per PC. Having multiple server nodes requires more hardware, such as KVM switches and larger network switches. I assume you could get by with one dedicated render node, so maybe spend a little extra and go with a dual-Xeon setup.