March 22nd, 2010, 12:58 PM | #1
New Boot
Join Date: Feb 2010
Location: Oakhurst, CA
Posts: 19
Benefits of a RAID card for a small array
I am building a new system with RAID 0 on two 1TB drives. It has been recommended that I get a $478 PCIe RAID controller for these rather than use the on-board controller on the P6TD mobo (see system build here). I've been running RAID off my old mobo for almost three years without a problem.
Is a dedicated RAID controller card going to be that much faster than the on-board controller? If so, could I get something cheaper that achieves similar performance? A card like that adds a lot of money to the build. Thanks for the help!
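(As a baseline, one quick way to see what the current on-board array actually delivers is to time a big sequential read off it. Here is a minimal Python sketch along those lines; the file path is just a placeholder, and the test file should be bigger than your RAM so the OS cache doesn't inflate the number.)
[code]
import time

TEST_FILE = r"D:\capture\big_test_file.m2t"   # placeholder: any large file on the array
CHUNK = 8 * 1024 * 1024                       # read in 8 MB pieces

def sequential_read_mbps(path):
    """Read the whole file front to back and return throughput in MB/s."""
    total = 0
    start = time.time()
    with open(path, "rb", buffering=0) as f:  # unbuffered so Python isn't in the way
        while True:
            block = f.read(CHUNK)
            if not block:
                break
            total += len(block)
    elapsed = time.time() - start
    return (total / (1024 * 1024)) / elapsed

if __name__ == "__main__":
    print("Sequential read: %.0f MB/s" % sequential_read_mbps(TEST_FILE))
[/code]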
March 22nd, 2010, 11:02 PM | #2
Major Player
Join Date: Sep 2008
Location: East Bay Cali
Posts: 563
I was looking into this a little just last week. (I've always used separate RAID devices, going back to before SATA existed.)
Top-end SATA RAID cards with all the juice run $1500 and up; simple SATA RAID, like the cheap Promise PCIe cards, is around $200. Somebody at Overclockers mentioned that the Intel ICH10 chipset's on-board RAID is a good choice when buying a new motherboard. They showed wowser speeds, but that tells me nothing: speed tests are often one thing happening, one way, and nothing real-world. I run an ICH8 chipset RAID now, and with two disks in RAID 0 I basically do get about 2x the speed of my single drives, but it isn't "offloading the work" fast, it's just numbers fast.
I find two limitations. First, SATA can transfer really fast, but platter speed is always the limit, so it doesn't matter how much buffer cache you have or how quick the bus is, because what counts most is how fast the drives can peel data off the platters themselves. Unless you're reading the same stuff over and over again, in which case the fast train to snailsville stops at repetition city, so what. Second, everything is always trying to go through the same chipset, the same bus, the same hub.
When we had a simple PCI hardware RAID board, with full on-board buffering (about $400 back then), it seemed to offload a lot of work from the bus and CPU. Nowadays internal drives don't take much CPU, but they sure load down the system's data paths. Another thing we did with that setup was run two controllers (RAID and non-RAID), so I could move data from one controller to the other very fast, whereas a regular controller talking to itself, or a cheap RAID talking to itself, seemed to bottleneck. My current board actually has two RAID-capable controllers, the Intel one and the chip that handles the old PATA ports and happens to do RAID too, and I've always wondered what would happen if I split the drives across them.
So I think any separate card with some buffering and some processing capability of its own can get disk-to-OTHER-disk speeds going much better when it sits on a different controller/chipset. For video, if your rendering is fast enough, you want your source video and output video on different disks on different controllers. And if you've got $1500-2400 for real hardware RAID, you can get just about any disk to any disk really fast.
To me the big huge benchmark numbers meant nothing. Whoopee, I can get 300 MB/s from a program pushing RAM data into a hard drive, or reading the hard drive into nothing. What I would prefer is 200 MB/s (at least) out of one drive and into another at the same time, without the whole computer feeling like I just flushed the toilet inside it. Pipelines: more of them, and more offloading from the bus. I'm thinking of trying out the cheaper Promise pseudo-RAID SATA controller to see if I can't free things up a bit, a little laxative for the system :-)
Most of the cheap stuff is "software RAID," meaning much of the work is done by your main CPU instead of on the RAID board itself; chances are any sub-$500 card is software RAID. What I want again is 2-4 platters working simultaneously through a controller that handles the real work, with a path in and out of it that doesn't slow the rest of the computer down.
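To put a number on that "out of one and into another at the same time" idea, here is a rough Python sketch that times a straight disk-to-disk copy and reports sustained MB/s instead of a synthetic burst figure. The paths are just placeholders for a source drive and a destination drive, ideally on two different controllers.
[code]
import os
import time

SRC = r"E:\source_video\clip.avi"      # placeholder: big file on the drive you read from
DST = r"F:\render_out\clip_copy.avi"   # placeholder: path on the other drive/controller
CHUNK = 8 * 1024 * 1024                # move data in 8 MB pieces

def copy_mbps(src, dst):
    """Copy src to dst and return the sustained transfer rate in MB/s."""
    copied = 0
    start = time.time()
    with open(src, "rb", buffering=0) as fin, open(dst, "wb") as fout:
        while True:
            block = fin.read(CHUNK)
            if not block:
                break
            fout.write(block)
            copied += len(block)
        fout.flush()
        os.fsync(fout.fileno())        # make sure the data actually hit the destination platters
    elapsed = time.time() - start
    return (copied / (1024 * 1024)) / elapsed

if __name__ == "__main__":
    print("Disk-to-disk copy: %.0f MB/s" % copy_mbps(SRC, DST))
[/code]
Run it once with both files on the same controller and once with them split across controllers, and the difference (or lack of one) tells you more than any single-drive benchmark.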
__________________
Re-learning everything all over again, one more time.
Last edited by Marty Welk; March 22nd, 2010 at 11:34 PM.
March 24th, 2010, 08:14 AM | #3
Regular Crew
Join Date: Nov 2006
Location: Lincoln, NE
Posts: 162
Nice post, Marty. Packed with good info.
My two cents: is the extra cash equal to the added performance? Depends on your business model and workflow. I generally work on small projects, maybe switching between two or three at a time, with one big project every 18 months or so. $500+ for a hardware controller won't give me $500 worth of performance in my workflow; in my case that $500 is better spent on faster processors and more RAM and storage. Now, if I were managing a large number of projects and moving things around between multiple arrays, that might be different.
I'm running two 1TB drives in RAID 0 from the on-board controller on a now-old AMD AM2+ quad-core with 8GB of RAM, capturing and editing 720p and 1080p Cineform files. My bottleneck is the processor during AE renders and export encoding, not transfer speed.
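For anyone who wants to confirm where their own bottleneck is, a rough sketch like the one below samples CPU load and disk throughput once a second while a render or export runs. It assumes the third-party psutil package is installed (pip install psutil); if CPU sits pegged while the disk MB/s stays low, a faster controller isn't going to help.
[code]
import psutil  # third-party: pip install psutil

def monitor(seconds=60):
    """Print CPU load and disk read/write throughput once per second."""
    prev = psutil.disk_io_counters()
    for _ in range(seconds):
        cpu = psutil.cpu_percent(interval=1)   # blocks for one second
        cur = psutil.disk_io_counters()
        read_mb = (cur.read_bytes - prev.read_bytes) / (1024 * 1024)
        write_mb = (cur.write_bytes - prev.write_bytes) / (1024 * 1024)
        prev = cur
        print("CPU %5.1f%%   disk read %6.1f MB/s   write %6.1f MB/s"
              % (cpu, read_mb, write_mb))

if __name__ == "__main__":
    # Kick off an AE render or export, then run this alongside it.
    monitor(60)
[/code]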