February 17th, 2010, 09:01 AM | #1 |
New Boot
Join Date: Nov 2008
Location: Sydney
Posts: 16
Best RAID for my needs. Please Help!
Hi all great and helpful members!
I've been searching for quiet a long time to try and get my head around this raid thing, but there is too much info that don't exactly help me decide what "I" need, so I apologize for posting yet another thread about "Best Raid" topic! We are a wedding production company in Sydney, We shoot mainly in SD but trying to move to HDV only, so far its been every machine for itself so we thought to improve work-flow to increase: 1 - Efficiency: all computers can access any wedding to for editing and encoding etc. 2 - Backup. I'm sure we've all been there when finished weddings get lost before finalizing. So my questions are: (Bare in mind we need about 4-5 TB of space in total for all workstations excluding any backup space, & budget wise we're trying to stay under 2k). Our estimated budget came from checking out ready built systems. 1 - Considering 4-5 machines editing and/or encoding HDV for DVD at the same time, what's the minimum throughput speed I need the raid to have and which raid system would you recommend i.e Lacie, Western Digital, Buffalo, Seagate, Iomega or G-Technology (8TB Raids) or any other you care to share? 2 - Would you recommend we buy a system or build our own. If build, what specs/hardware? 3 - What type of raid would best suit us? E.I raid 5,6,10? Finally, thank you so much in advance for helping me sort this dilemma! Edit: Guys, please don't hijack the thread? :) Last edited by Rani Korkise; February 17th, 2010 at 09:24 PM. |
February 17th, 2010, 11:55 AM | #2 |
Trustee
Join Date: Aug 2006
Location: Rotterdam, Netherlands
Posts: 1,832
Pretty generic, but maybe helpful: Adobe Forums: To RAID or not to RAID, that is the...
February 17th, 2010, 01:58 PM | #3 |
Regular Crew
Join Date: Apr 2006
Location: Forest Park, IL
Posts: 108
Very impressive article, Harm! Thanks for pointing to it. I was impressed not only by the substance of the main article, but also by the discussion and tips that followed.
My question derives from that discussion, using the basic configuration you recommended to KLFI as the starting point. Let's say you were going to expand parts of that setup into arrays:

C: would do well to be in a RAID 1.
D: what do you think of a pair of 150 GB Raptor drives in a RAID 0 for this?
E: is this where you would use the Areca-controlled RAID 3? With how many 1 TB drives?
F: how much capacity is needed here for maximum system utility?

I am not sure how you would make use of a 12-drive RAID 30 array in an editing setup. Can you help me expand my vision through understanding?
February 17th, 2010, 04:04 PM | #4 |
Trustee
Join Date: Aug 2006
Location: Rotterdam, Netherlands
Posts: 1,832
A couple of basic assumptions that you started out with: 4-5 computers, a $2K budget and 5 TB of required space (total, or for each workstation?). Combined, these factors make it challenging to say the least, if not impossible, because Adobe does not work nicely off network drives.
I don't know what switches are in use, what storage capacity you currently have that could be reused, or whether each workstation has dual NICs. To what degree does each workstation need to access the same files/clips, possibly at the same time? Do you already use a NAS, and if so, is it iSCSI? If you can give those details, maybe someone here can come up with a brilliant idea.
February 17th, 2010, 04:29 PM | #5 | |
Trustee
Join Date: Aug 2006
Location: Rotterdam, Netherlands
Posts: 1,832
Quote:
D: Good idea, but keep the costs in mind.

E: The Areca controller is not exactly cheap. The one I have is around $1,200, including BBM and cache. If you want around 5 TB of effective storage space, I would suggest 7 disks of 1 TB in a raid3, of which one as hot-spare, so you have 5 TB net space and are safeguarded against 2 disk failures in that array. The alternative is a 7-disk raid6, but it is not quite as fast as a raid3, even though with arrays of this size the difference will not mean much in performance.

F: I'm not quite sure what you mean by this. On my system, which is not exactly run-of-the-mill, I found that I get the best performance with all my temp files on my D: drive (2 x 1 TB raid0) and everything project-related on my main array (E: 12 x 1 TB raid30).

Last edited by Harm Millaard; February 17th, 2010 at 05:27 PM.
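For readers following the arithmetic behind the suggestion above, a minimal sketch of the net-capacity calculation: a parity array's usable space is (disks - parity disks - hot spares) x disk size. The helper function is illustrative, not from any particular RAID tool.

```python
# Net capacity of a parity RAID after subtracting parity disks and hot spares.
# raid3 dedicates 1 disk to parity; raid6 dedicates 2.

def usable_tb(disks: int, disk_tb: float, parity: int, spares: int = 0) -> float:
    """Usable space in TB for a parity array with optional hot spares."""
    return (disks - parity - spares) * disk_tb

# Harm's two options: 7 x 1TB in raid3 with one hot spare, or 7 x 1TB raid6.
raid3 = usable_tb(disks=7, disk_tb=1.0, parity=1, spares=1)  # 5.0 TB net
raid6 = usable_tb(disks=7, disk_tb=1.0, parity=2)            # 5.0 TB net

print(raid3, raid6)
```

Both layouts land on the same 5 TB of net space; the difference is in rebuild behavior (a hot spare rebuilds automatically after one failure, while raid6 tolerates two simultaneous failures).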
February 17th, 2010, 09:24 PM | #6 |
New Boot
Join Date: Nov 2008
Location: Sydney
Posts: 16
First of all, Harm, I want to congratulate you on an awesome article! I'll be sure to go through it thoroughly to get a better idea about RAID systems, especially since it is such a recent article.
Quote:
My budget estimates came from checking on ready-built RAIDs (8TB). Quote:
Quote:
Quote:
Hope this gives you a better idea. Thank you once again! Last edited by Rani Korkise; February 17th, 2010 at 10:53 PM. |
February 18th, 2010, 11:49 AM | #7 |
Trustee
Join Date: Aug 2006
Location: Rotterdam, Netherlands
Posts: 1,832
Rani,
Your answers cleared up a lot. The LaCie unit may serve you well, even though LaCie does not have a stellar reputation. LaCie | 8TB 4big Quadra External Hard Drive RAID Array | 301435U

It is one of the few units that fits your indicated budget, has the required eSATA connector, has hot-swappable bays and supports raid3: all of the key ingredients I would look at. And most importantly, it is easy for you to install; if switches and NICs are unfamiliar territory, we had better keep things in the KISS realm. I hope this helps you.
February 20th, 2010, 01:56 AM | #8 |
Trustee
Join Date: Aug 2009
Location: Chicago, IL
Posts: 1,554
I highly recommend using iSCSI instead of regular network shares because of:

1) much better connection reliability (so you don't have to reboot computers and/or routers/switches);
2) faster transfer speeds between workstations and the file server;
3) iSCSI 'network' drives showing up as an attached drive on each workstation, so the OS and programs treat them as one; this gets past Premiere's poor network-share performance;
4) no need to be connected to a workstation; it connects to a switch/router.
One of the best iSCSI RAID solutions is this one from Enhance. Supports Raid 3, 5, 6, 30... Newegg.com - Enhance Technology T8 IP iSCSI-to-SATA Desktop RAID Storage - Server RAID Systems

Here is a benchmark; the 1st run is with only 1 NIC and the 2nd with 2 NICs/teaming enabled (MPIO): EnhanceRAID RS8 IP (iSCSI) Benchmark

The first few paragraphs here explain iSCSI a bit better: Maximum PC

The cost will be a bit more than $2k but will provide far better performance and far better reliability.

The other parts to get are the NICs for each workstation. You can get 1 or 2 for each, depending on your budget and performance needs. Newegg.com - Intel PILA8460M 10/ 100Mbps PCI PRO/100 M Desktop Adapter 1 x RJ45 - Network Interface Cards Newegg.com - Intel PWLA8391GT 10/ 100/ 1000Mbps PCI PRO/1000 GT Desktop Adapter 1 x RJ45 - Network Interface Cards

And a 'Gigabit' switch: Newegg.com - D-Link DGS-2208 10/100/1000Mbps 8-Port Desktop Green Ethernet Switch 8 x RJ45 8K MAC Address Table 144KB per Device Packet Buffer Memory Buffer Memory - Switches This is the fastest switch I have used.

These Samsung 1TB drives, if you can find them in stock: Newegg.com - SAMSUNG Spinpoint F3 HD103SJ 1TB 7200 RPM 32MB Cache SATA 3.0Gb/s 3.5" Internal Hard Drive -Bare Drive

Hope I haven't confused you too much.

PS: If you need this data to be accessed by all computers, the LaCie does not work as well, because the connection to the other computers is totally reliant on the computer it's connected to. Furthermore, multiple computers accessing the LaCie will create a bottleneck and greatly reduce performance, because there is only one ethernet/network connection to handle all the data.
February 20th, 2010, 09:55 PM | #9 | |
New Boot
Join Date: Nov 2008
Location: Sydney
Posts: 16
Hi Guys. First of all I wanted to thank you for your great and helpful advice.
Things have changed: I have a friend who is a senior IT engineer, so don't worry about me purchasing/configuring switches or routers; he can do all that.

Harm: I just realized that the LaCie | 8TB 4big Quadra External Hard Drive RAID Array | 301435U has no NIC. We don't think it would be able to handle multiple workstations unless it's done over a different connection, e.g. FW/USB, which then creates a bottleneck, and we will be expanding to 6+ workstations in the near future. :) Hence our view is that this unit does not meet our requirements.

Steve: Quote:
2 - Even though the unit you recommended has only 2 GbE ports, does that mean I can connect a switch/router to one of those ports and attach more workstations, depending on the size of the switch/router? And if so, would it still be able to deliver the throughput needed for at least 5 workstations editing/encoding HDV?

We think it's best that we stick to a network solution, as it will support our future growth. Plus the performance is much better compared to other connections, unless you have a better idea... Thanks in advance!
February 21st, 2010, 01:58 AM | #10 |
Trustee
Join Date: Aug 2009
Location: Chicago, IL
Posts: 1,554
1) I have found that typical network sharing has issues with workstations losing their connection to the sharing computer, which can be caused by different operating systems and/or routers and switches, just to name a few causes. This leads to people rebooting computers and/or routers/switches. iSCSI is more reliable due to its architecture, mainly being SCSI over IP.
2) With the 2 GbE ports you have several choices in how they are used, e.g. connecting both ports to an 8-port switch so those 2 ports can distribute the bandwidth from the file server to the workstations.

For optimum performance I would either: 1) divide the 8 drives and separate the encoding onto its own set of drives, or 2) keep the encoding output local to each workstation and transfer the files once encoding is finished.

Another performance option is installing 2 of those Intel NICs in each workstation and 'teaming' or bonding them. This allows full bandwidth for each workstation, which is a bit over 200 MB/s to that Enhance iSCSI unit I linked to. However, having 2 ethernet connections from each workstation requires another switch to be added (so, +$45). Oh yeah, you want Cat 6 cable for optimum speed/bandwidth, although I have used quality Cat 5e and reached very good speeds.

And for future expansion, it will be as simple as connecting to the switch, setting up iSCSI on the new computer and adding 1-2 Intel NICs (the Intel NICs are far better than any onboard/built-in NICs).

If you choose this path, the next aspect to discuss is the best RAID setup to use, which depends on how you will use the SAN/file server and the number of drives you get. Just to begin, I really like the Samsung F3 1TB because 1) it's the fastest and 2) its reliability seems to be outstanding according to the Newegg ratings: if many people get bad drives, they will score accordingly, and this drive has gotten better scores than all others. I have been a Seagate fan (I have 8 of their 7200.12 1TB drives and 7 drives that are 1-2 years older) but I am looking to add 4-6 Samsung F3s.
March 11th, 2010, 07:37 AM | #11 |
New Boot
Join Date: Nov 2008
Location: Sydney
Posts: 16
Hi Steve.
First of all, I would like to apologize for taking so long to reply to your great post; I've been waiting to get together with my friend to type the following questions.

1 - What do you recommend: a switch, a smart switch or a router? And why?
2 - What level of RAID would you recommend? From what we gathered, wouldn't a RAID 10 be the most suitable? If you suggest another type of RAID, please explain why.
3 - Since the EnhanceRAID T8 IP Desktop RAID 6 Desktop Storage System is what you strongly recommend, we checked the manufacturer's website; they only seem to recommend/support Hitachi, Seagate Barracuda and Maxtor DiamondMax HDDs. The Samsung F3 1TB doesn't seem to be supported?
4 - In terms of expansion: the system has only 8 trays and supports HDDs up to 1.5TB, so after RAID it will give me a total of 6TB available for storage. What if we need to add more or larger HDDs? Buy another system?
5 - Most importantly, now that I'm interested in buying the Enhance T8, I can't seem to find it anywhere: not on Newegg, enhance-tech.com or their suppliers... Are there other systems that are similar in performance/reliability and hopefully around the same price range?

Once again, thank you for your kind and great help!
March 13th, 2010, 02:00 AM | #12 |
Trustee
Join Date: Aug 2009
Location: Chicago, IL
Posts: 1,554
1) For link aggregation (aka bonding/teaming) to work, the switch must support it, which requires a smart or managed switch, so: Newegg.com - NETGEAR GS108T 10/100/1000Mbps Gigabit Smart Switch 8 x RJ45 8,000 media access control (MAC) addresses per system MAC Address Table 128 KB embedded memory per unit Buffer Memory - Switches
Of the 8 ports: 2 to the iSCSI array and 1 to each workstation, OR add another switch and 2 gigabit NICs per workstation.

2) Personally, I like Raid 10 and have used it on my last 3 workstations. However, I don't know if it suits your needs, because you lose half your drives. One idea: using 1TB drives, take 6 and put them in Raid 5, giving you 5TB, then use the last 2 drives in Raid 1 to store the encodes.

3+4) I looked into it and found that the Hitachi and WD 2TB drives are supported; it seems their website isn't updated often enough with new info. Also, Enhance Tech is a technology partner with Seagate and Hitachi. For the money, the Hitachi 2TB drive is only $160-170 and is 7200rpm. Your network connection is the bottleneck, so it really doesn't matter how fast the drives are. The only downside to the Hitachis is that they use a bit more power and run a little warmer; with good airflow I don't think there would be a problem. How much space do you need now, and how much do you think will be added within a year?

5) There is this: Newegg.com - Enhance Technology R8 IP 2U 8-Drive iSCSI-to-SATA RAID Storage System - Server RAID Systems It's the rackmount version and is otherwise identical; you don't need a rack, just a flat surface to sit it on. However, I did find the T8 IP on pcmall.com: Enhance Technology ENHANCERAID T8 IP SYSTEMS FEATURES GIGB T8-IP

I have also researched the Thecus N8800Pro, another iSCSI/NAS, but it uses different file systems: ZFS, XFS & ext3. From what I have read, XFS would need to be used for various reasons. There are also the N8800 and N7700Pro. I like the N8800 because it has 2 PCIe x1 slots to add 2 more Intel GbE NICs, which should increase performance if needed.
March 13th, 2010, 09:17 AM | #13 |
Trustee
Join Date: Dec 2006
Location: Central Coast - NSW, Australia
Posts: 1,606
Too many options
I'm looking at using 3 x 1.5TB drives in a RAID 5 for project storage: each wedding gets dumped there and is secure until it's ready for editing.
I'm setting up 2 x 1TB drives in a RAID 0, and active projects get moved to this array while I'm working on them. Archived projects get moved to separate drives in another computer at the other end of the house. The operating system and programs are on their own 1TB drive.
March 20th, 2010, 09:46 AM | #14 |
Regular Crew
Join Date: Nov 2008
Location: Maryland
Posts: 68
I built an Athlon X2 system with 4GB RAM in a 4U rackmount case to house my 6 x 1.5TB Seagates in RAID 10 (aka RAID 1+0) last year. For connectivity I did as many have suggested and bonded 2 Intel gigabit cards, providing over 200MB/sec sustained, which is still the bottleneck from a read perspective, as the array gets over 400MB/sec sustained reads, and even more if the data is in the RAID controller's memory. I went with the 8-channel Highpoint RocketRAID 4320, as they are daisy-chainable to 128 drives, and it was 50% off last spring during a promo. For an OS drive I just bought an 80GB WD Scorpio Blue laptop drive, since the OS drive is rarely hit and 2.5" drives use less power.
I didn't see it mentioned in this thread, but be sure to enable jumbo frames on your NICs; larger file transfers benefit from them immensely. Also, if possible, use Windows 2008 R2 for the file server and Windows 7 for the workstations; the rewrite of the network stack in the Windows 7/2008 R2 kernel is much improved over XP/Vista.
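A rough sketch of why jumbo frames help with large transfers, as suggested above: every ethernet frame carries a fixed on-the-wire cost (14-byte header, 4-byte FCS, 8-byte preamble, 12-byte inter-frame gap), which gets amortized over a 9000-byte payload instead of 1500 bytes. This ignores IP/TCP headers inside the payload, so the numbers are illustrative.

```python
# Wire efficiency of standard vs. jumbo ethernet frames.

OVERHEAD = 14 + 4 + 8 + 12  # header + FCS + preamble + inter-frame gap, bytes

def efficiency(mtu: int) -> float:
    """Fraction of wire time spent carrying payload at a given MTU."""
    return mtu / (mtu + OVERHEAD)

std = efficiency(1500)    # standard frames
jumbo = efficiency(9000)  # jumbo frames

print(f"standard: {std:.3f}, jumbo: {jumbo:.3f}")
```

The efficiency gain looks small, but jumbo frames also cut the per-packet count (and thus CPU interrupt load) by a factor of six, which is where most of the practical speedup on sustained transfers comes from.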
__________________
Motion Blur Studios - Canon XL H1, XL2, nanoFlash, 2xFS-4 HDs, Custom 9TB RAID 10 Array HPT RocketRaid 4320, GeForce GTX 480, Adobe CS5 Production Suite |
March 23rd, 2010, 10:41 AM | #15 |
Major Player
Join Date: Mar 2005
Location: Neenah, WI
Posts: 547
I'll also add my compliments to Harm on a very solid article.
One thing he wrote at the bottom hasn't been addressed yet (or I missed it), and that's an uninterruptible power supply: a battery backup. I have dual power supplies on my array and a UPS for each side. One thing you need to look at is the power output FROM BATTERY. These units typically have one maximum load spec for when they're simply conveying/filtering shore power, and a separate, always LOWER, maximum load when running from the battery. Dedicated arrays use some juice, and a battery backup that kicks in and supplies inadequate power could damage your system worse than a clean power interruption would.

For this thread, networked storage may or may not be a good solution. It will certainly be more expensive and potentially a bigger headache for someone who is trying to edit and isn't stimulated by computer networking issues. Granted, once it's set up it should run, but without IT experience (and even pretty savvy IT guys are typically a little out of their element when they see what we need to pull through a network in sustained data rate), troubleshooting will bring your whole facility to its knees if all edit systems are using a central network storage system for media.

My recommendation would be to get a reasonable, inexpensive RAID system for each station, as was suggested (removable drives are always a plus). If one drive system has a problem, the other stations are still running; you don't have a single-system glitch that takes your whole office dark. You could still have a network and get an inexpensive, self-contained, network-attached storage drive for standard resources like graphics and music libraries and anything else each station needs centralized access to, but I think that media local to the workstations is a better way to manage risk. Keeping a given wedding on a given disk seems like a pretty simple way to keep files straight.
Encoding for DVD/Blu-ray will typically not tax disk throughput, as the processing takes time, and you could easily have an encoding station set up that you take the drive to, connect it and start the encode. It's not sexy, but with very little effort you could probably gain redundancy and project delineation by using self-contained portable disks as master backups for each project as you edit it on a small RAID system at the edit station.

Also, keep in mind that you're planning on editing HDV: it's the same data rate as DV. You'll still need about 12GB/hour to store it and about the same drive bandwidth to play it out. What you'll need is more processing power to encode/decode it; it's a bigger task than editing DV in that sense. Faster drives are always good, don't misunderstand me, but I think your potential for growth may not be as hobbled as you think with self-contained or local drives/RAID media storage, if you create a system for tracking files and backing up.

...just a devil's advocate angle on the discussion.
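The "about 12GB/hour" figure above checks out with simple arithmetic: DV and HDV both carry a 25 Mbit/s video stream, and the commonly cited total DV data rate including audio is roughly 3.6 MB/s.

```python
# Storage per hour of DV/HDV footage. 25 Mbit/s is the video essence rate;
# 3.6 MB/s is the commonly cited total DV rate including audio (approximate).

video_mb_s = 25 / 8   # video essence only, MB/s
total_mb_s = 3.6      # approx. total rate with audio

gb_per_hour_video = video_mb_s * 3600 / 1000  # video-only GB per hour
gb_per_hour_total = total_mb_s * 3600 / 1000  # with audio, GB per hour

print(f"{gb_per_hour_video:.2f} to {gb_per_hour_total:.2f} GB per hour")
```

So a single 1TB drive holds roughly 75-85 hours of footage, which supports the one-wedding-per-disk approach suggested in this post.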
__________________
TimK Kolb Productions |