View Full Version : RAID and hard drive set, couple of questions.
Jeff Troiano December 11th, 2010, 09:12 AM I'm trying to figure out the best way to go about setting up a RAID, for my new computer.
This is what I want to do.
Single main drive (C:), possible SSD
2 separate RAID 0 setups, 2x2 7200rpm drives (1 for project files, and 1 for cache)
Then another drive (something like a 1 or 2 tb drive, for storage)
First, would this be an ideal setup for use with CS5? Second, do I need two separate RAID controller cards to make the two separate RAID 0 configurations? Can anyone suggest what to look for in a RAID card, or recommend one?
Any detailed feedback would be very helpful. I have my future system almost blueprinted, with the exception of the drive subsystem.
Thank you,
Jeff
Mike McCarthy December 11th, 2010, 10:57 AM I am not sure why you would ever want two separate RAID0 arrays. If you are concerned about maximum performance, then a single array of all 4 drives would be much more effective. Use a large stripe size for video files and the cache.
I have always had solid performance with a single OS and apps drive, and then a media array for everything else: 2-10 drives in a RAID 0 or RAID 5, depending on the system and the budget. Spreading your files out across many different volumes makes file management and backups much harder.
Adam Gold December 11th, 2010, 12:02 PM I do the same as Mike. On my main editing rig, I have apps and OS on a fast C: drive, then everything else goes on a 7 x 1 TB RAID3, with nightly auto backup to a 4 x 2TB RAID5. Project files as well as both source and rendered video and audio. My secondary editing box is the same, but the workdrive is 4 x 2TB RAID5.
This keeps things simple for file management as well as being fairly fast and safe. Project Manager in Premiere isn't always perfect, so by having all disks set to "Same As Project" it's a simple matter to drag the entire project folder off to a hard drive for archiving when I'm done with the project.
Jeff Troiano December 11th, 2010, 04:04 PM Can anyone explain the difference between RAID 0 and RAID 5? Is it just that in a RAID 5, one of the drives can fail and the RAID will still function? I had planned on keeping an external drive to back up to on a nightly basis.
Truthfully, I wasn't sure about having two RAID setups. But someone else mentioned something like that in another thread, so I started a RAID-specific thread to get info. I like the idea of just one RAID setup for performance.
Thanks,
Jeff
Paul Mailath December 11th, 2010, 10:31 PM RAID - Wikipedia, the free encyclopedia (http://en.wikipedia.org/wiki/RAID)
That should tell you everything you want to know. RAID 0 is fast, but it really shouldn't be called RAID because there's no redundancy. If a drive fails on a RAID 0 and you don't have a backup, you're buggered.
Steve Kalle December 13th, 2010, 12:32 AM I am a RAID freak and love to use RAID, except with SSDs because they are inherently more reliable. With source video that can be reloaded from a backup, having redundancy is not crucial unless the time spent reloading footage is crucial to your work. However, project files (PPro, PSD, AE, etc.) can and do change throughout the work day. Imagine working 8+ hours on a Premiere project and the drive on which it resides decides to die... imagine having a very tight deadline...
With the Media Cache, I always allocate a separate drive.
Here is the setup in my studio PC:
- Intel X25 80GB for OS + Apps
- Velociraptor 150GB for Media Cache
connected to Areca 1680ix - 6x2TB Seagate Constellation ES in Raid 5 for all source & project files
connected to LSI 1068E SAS - 4x1TB Seagate 7200.12 in Raid 10 for encoded files
Several 2TB Seagate LP 5900rpm in external eSata cases and hot-swap cages for backup.
A bit of advice and warning: do not use Western Digital drives due to their firmware which makes the drives spin down constantly. This constant spinning down causes 3-5 second lags when editing and also causes major problems when used in Raid.
Randall Leong December 13th, 2010, 10:51 AM A bit of advice and warning: do not use Western Digital drives due to their firmware which makes the drives spin down constantly. This constant spinning down causes 3-5 second lags when editing and also causes major problems when used in Raid.
That may be true currently, but eventually all of the remaining brands of desktop hard drives will do the same thing.
My experience with Seagate has been thus far somewhat less than enthusiastic. My two current 1TB 7200.12 drives have significantly different access speeds: One with firmware version CC37 has a 16ms access time while the other with firmware CC38 has only 13.6 ms access time. This difference in access speeds makes the two drives less than optimal in a RAID 0 array, with overall disk performance tests (as per the PPBM5's Disk Test benchmark) somewhat slower than expected.
Steve Kalle December 13th, 2010, 01:25 PM That may be true currently, but eventually all of the remaining brands of desktop hard drives will do the same thing.
My experience with Seagate.....
It's amazing how you can predict the future of the entire hard drive industry.
Go buy two 7200.12 drives from Microcenter and I bet they will work just fine. I have bought almost 30 Seagate drives in the last year from Provantage, Newegg and Microcenter, and not a single one has been bad, caused any issues or spun down like WD drives.
Randall Leong December 13th, 2010, 03:48 PM It's amazing how you can predict the future of the entire hard drive industry.
Go buy two 7200.12 drives from Microcenter and I bet they will work just fine. I have bought almost 30 Seagate drives in the last year from Provantage, Newegg and Microcenter, and not a single one has been bad, caused any issues or spun down like WD drives.
One of the three drives developed S.M.A.R.T. failure after less than three months of use. It was RMA'd for a drive that turned out to be of a different (earlier) firmware revision than the original. Both of the Seagates that I currently have are working well, but the mismatch in access speeds resulted in a somewhat slower than expected performance.
Craig Coston December 13th, 2010, 04:39 PM Randall, were they 7200.11 or 7200.12 drives? I've had many issues with .11 drives in RAID 5 arrays, the .11 drives randomly dropping out of the array. I now only run them in RAID 0 mode or as individual drives for storage only.
Randall Leong December 13th, 2010, 05:26 PM Randall, were they 7200.11 or 7200.12 drives? I've had many issues with .11 drives in RAID 5 arrays, the .11 drives randomly dropping out of the array. I now only run them in RAID 0 mode or as individual drives for storage only.
Mine were the 7200.12 drives. When one of the drives in the RAID 0 started failing, at least the Intel software let me know that the array's performance had been degraded.
The problem with dropping out of RAID 5 arrays occurs with the 7200.11 series - and to a lesser extent the 7200.12s.
Taky Cheung December 14th, 2010, 01:15 AM My RAID 0 is 4 x 1TB 7200rpm drives. It works great for video editing purposes. However, I have a roughly 400% higher chance of drive failure in such a case. So I have another system running separately as an encoding station, TV recorder, web server and backup server. I run a backup utility automatically to back up my RAID drive every 4 hours. In case of drive failure, I won't lose everything.
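The arithmetic behind that "400% higher chance" figure is worth spelling out: a RAID 0 array is lost if any member drive fails, so risk grows with drive count. A minimal sketch, assuming a hypothetical 3% annual per-drive failure rate (the real figure varies by model and batch):

```python
# RAID 0 survives only if EVERY drive survives, so the array's
# failure probability is 1 - (per-drive survival)^n.
def raid0_failure_prob(p, n):
    """Probability an n-drive RAID 0 loses data, given per-drive probability p."""
    return 1 - (1 - p) ** n

p = 0.03  # assumed 3% annual failure rate per drive (hypothetical)
print(round(raid0_failure_prob(p, 1), 3))  # 0.03
print(round(raid0_failure_prob(p, 4), 3))  # 0.115
```

For small p this is close to n*p, which is where the "4 drives, roughly 4x the risk" rule of thumb comes from - and why frequent backups matter on striped arrays.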
Randall Leong December 28th, 2010, 06:08 PM One of the three drives developed S.M.A.R.T. failure after less than three months of use. It was RMA'd for a drive that turned out to be of a different (earlier) firmware revision than the original. Both of the Seagates that I currently have are working well, but the mismatch in access speeds resulted in a somewhat slower than expected performance.
I retested my two working 7200.12 drives, and discovered their read performance to be much closer to one another than I previously believed. It turned out that the drive benchmarking software I used (HD Tune 2.55) could not accurately determine the random access speeds of 1TB and larger drives - its access time test only works within the first 1024 GiB (1 TiB) of larger drives. Using a registered version of HD Tune Pro 4.60, I discovered that both drives have a random access time of just over 14 ms.
Randall Leong January 15th, 2011, 10:20 AM A bit of advice and warning: do not use Western Digital drives due to their firmware which makes the drives spin down constantly. This constant spinning down causes 3-5 second lags when editing and also causes major problems when used in Raid.
I discovered from further reading and further testing that this advice is valid if you are using a hardware RAID controller: The Western Digital drives that lack TLER support drop out of the array(s) controlled by the discrete hardware RAID controller (these include higher-end cards from the likes of Areca and 3Ware) even in RAID 0 or RAID 1 mode. These drives work fine, however, on an on-motherboard SATA controller with software RAID support (these on-mobo controllers, even the Intel ICH and PCH RAID, are not true hardware RAID controllers).
Steve Kalle January 15th, 2011, 01:42 PM I discovered from further reading and further testing that this advice is valid if you are using a hardware RAID controller: The Western Digital drives that lack TLER support drop out of the array(s) controlled by the discrete hardware RAID controller (these include higher-end cards from the likes of Areca and 3Ware) even in RAID 0 or RAID 1 mode. These drives work fine, however, on an on-motherboard SATA controller with software RAID support (these on-mobo controllers, even the Intel ICH and PCH RAID, are not true hardware RAID controllers).
Hi Randall,
I strongly disagree with this statement about compatibility with the Intel ICH RAID. I have lost data due to the Intel RAID dropping a WD drive from a RAID 0 array even though the drive was fine. The Intel software RAID cannot distinguish between a bad drive and a drive taking too long to respond: it drops the drive, and a rebuild is required - or data is lost if RAID 0. Also, a pro hardware RAID controller can automatically reallocate bad sectors when writing to an array, while the Intel software RAID cannot. I became aware of this when my 3ware's alarm went off, notifying me that a few bad sectors were being written to; it was able to hold the unwritten data in cache and find new sectors for it. This happened about 3 years ago, so I might not be remembering everything the 3ware software said. However, I recall investigating it, and I do remember that the Intel RAID could not do the same thing.
Randall Leong January 15th, 2011, 02:05 PM Hi Randall,
I strongly disagree with this statement about compatibility with the Intel ICH raid. I have lost data due to the Intel raid dropping a WD drive from a Raid 0 array even though the drive was fine. The Intel software Raid cannot distinguish between a bad drive and a drive taking too long to respond. It drops the drive and a rebuild is required or data is lost if Raid 0. Also, a pro hardware raid controller can automatically reallocate bad sectors when writing to an array while the Intel software raid cannot. I became aware of this when my 3ware's alarm went off notifying me that a few bad sectors were being written to, so, it was able to hold the unwritten data in cache and find new sectors for this data. This happened about 3 years ago, so, I might not be remembering everything the 3ware software said. However, I recall investigating it and do remember that the Intel raid could not do the same thing.
I only went by what was elsewhere on the Web. I have never run any WD drives in any RAID array.
By the way, I have moved the two Seagate drives to my auxiliary rig when I acquired two 1TB Samsung F3 hard drives for my main rig. The only Western Digital hard drive in that main rig is a single 2TB Black, which is used as my output drive. That drive is not in a RAID array. (Two of the three Samsungs are in a RAID 0 array in my main system.)
The 1TB Samsung F3 is also available in a RAID-ready version, the F3R, for $10 more than the plain F3. Only buy that F3R for RAID 3/5/6 or for any combination RAID level involving one of those three parity RAID levels.
My auxiliary rig does have a single 1TB Black - as the system drive. The two Seagate drives are in RAID 0.
Panagiotis Raris January 18th, 2011, 01:17 PM I had no problems with WD drives (identical 500GB units from 2007) in RAID 0 on a G33 ICH9R Intel motherboard. Same with a pair of Seagate 7200.11s (the drive notorious for randomly dying) in RAID 0. I didn't have any issues for about 6 months, until one of the WDs developed bad sectors and eventually kicked the bucket. However, everyone else reports problems with the WDs, so I avoid them for RAID, and use them for storage or backups only.
I have had a WD Caviar Black and a Caviar Green suddenly die, plus the aforementioned 'bad sector' WD that was part of the RAID 0 array.
All of my RAID array drives are identical Fujitsu notebook 7200RPM drives (and I keep spares around), all storage drives are 1-year-old Seagates, and boot drives are Intel X25-Ms. This has proven to be the most reliable and efficient setup, and also pretty cost effective.
Scott Chichelli January 25th, 2011, 08:45 AM I'm trying to figure out the best way to go about setting up a RAID, for my new computer.
This is what I want to do.
Single main drive (C:), possible SSD
2 separate RAID 0 setups, 2x2 7200rpm drives (1 for project files, and 1 for cache)
Then another drive (something like a 1 or 2 tb drive, for storage)
First, would this be an ideal setup for use with CS5? Second, do I need two separate RAID controller cards to make the two separate RAID 0 configurations? Can anyone suggest what to look for in a RAID card, or recommend one?
Any detailed feedback would be very helpful. Have my future system almost blue printed, with the exception of the drive systems.
Thank you,
Jeff
I am sure by now you have completed your build... but for future reference:
OS drive (never RAIDed; bad news and pointless)
2 drives in RAID 0 for capture/media etc.
2 drives in RAID 0 for export/render
(you never want to read and write from the same drive array, unless doing a large 8-or-more-drive RAID 5/6)
something for external backup.
As far as drive failure, RAID failure etc.:
WD, Seagate etc. can all fail; both are great drives. RAID arrays do not require enterprise drives when in RAID 0.
RAID 3/5/6 needs enterprise drives.
Yeah, you can get away with not using them, but the drop-out rate will increase without them in a parity RAID.
Scott
ADK
Steve Kalle January 25th, 2011, 11:36 AM I want to add a very important point: if you have clients sitting with you during editing, then you should NEVER use RAID 0 - it's just not worth the risk.
Randall Leong January 25th, 2011, 12:12 PM I want to add a very important point: if you have clients sitting with you during editing, then you should NEVER use RAID 0 - it's just not worth the risk.
RAID 0 does work well as a media cache/preview/pagefile drive. That much I'd give.
Otherwise, if a RAID array is required because a single drive is too slow or for whatever miscellaneous reason, you're better off spending the $500 or so for an 8-port PCI-e RAID card plus the cost of whichever additional drives that are required.
I stated the above because of my experience with Seagate and Samsung 7200 rpm hard drives. Although none of my currently active drives has failed, the 1TB 7200.12 is actually slower - sequential-speed-wise - than even a 5900 rpm 2TB Green (the newest SATA 6 Gbps revision with 64MB cache) of the same brand. And the Samsung F3 1TB worked no better than the 7200.12 in RAID 0, despite the F3's 15 MB/s higher sequential transfer speed. In this instance, both models of drives - particularly the Samsung F3 - are limited by cache performance.
Scott Chichelli January 25th, 2011, 01:45 PM I want to add a very important point: if you have clients sitting with you during editing, then you should NEVER use RAID 0 - it's just not worth the risk.
That's a rather silly statement, don't you think?
99% of the time you get a warning that a drive is failing.
The majority of systems I ship are set up with 2x 2-drive RAID 0.
Rarely does a client have a failure or data loss.
Scott
ADK
Scott Chichelli January 25th, 2011, 01:51 PM
Otherwise, if a RAID array is required because a single drive is too slow or for whatever miscellaneous reason, you're better off spending the $500 or so for an 8-port PCI-e RAID card plus the cost of whichever additional drives that are required.
Ideally a RAID 5/6 is nice, but overkill for most people's needs and budgets.
An easy statement to make when it's not your $.
I stated the above because of my experience with Seagate and Samsung 7200 rpm hard drives. Although none of the drives currently active failed, the 1TB 7200.12 is actually slower - sequential-speed-wise - than even a 5900 rpm 2TB Green (the newest SATA 6 Gbps revision with 64MB cache) of the same brand.
Something is definitely not right there...
The Seagate .12 with 32MB cache is faster than the Green drives with 64MB.
What are you using to test with?
Also, SATA 300 vs 600 bears little fruit until you get up into the higher-end RAIDs.
RAID 0 should give 80%-plus more throughput than a standard drive.
some recent #s
WD 1 TB 64 meg Cache Sata 600 Drives
Intel onboard Controller
Single drive - 112MB
2 Drive Raid 0 - 196MB
4 Drive raid 0 - 364MB
3 Drive Raid 5 - 108MB
4 Drive Raid 5 - 296MB
Marvell 6G controller
Single Drive - 111MB
2 Drive Raid 0 - 215MB
Intel SAS RS2BL080 8 Port controller with 512 DDR2 Ram.
Single Drive - 111MB
2 Drive Raid 0 - 215MB
4 Drive Raid 0 - 429MB
3 Drive Raid 5 - 219MB
4 Drive Raid 5 - 315MB
Seagate 1TB 32 Meg Cache Drives
Onboard Intel Controller
Single Drive - 103MB
2 Drive Raid 0 - 206MB
4 Drive Raid 0 - 395MB
3 Drive Raid 5 - 202MB
4 Drive Raid 5 - 299MB
Scott
ADK
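As a sanity check on numbers like these, it can help to compute striping efficiency - how close each array comes to ideal linear scaling. A quick sketch using the figures quoted above (the function is just a ratio, not part of any benchmark tool):

```python
# Efficiency of a RAID 0 array vs ideal linear scaling:
# measured throughput divided by (drive count x single-drive throughput).
def raid0_efficiency(single, striped, n):
    """Fraction of ideal n-drive RAID 0 throughput actually achieved."""
    return striped / (n * single)

# Intel onboard controller, WD 1TB SATA 600 figures from the post
print(round(raid0_efficiency(112, 196, 2), 2))  # 2-drive: ~0.88 of ideal
print(round(raid0_efficiency(112, 364, 4), 2))  # 4-drive: ~0.81 of ideal
# Intel SAS RS2BL080 card, same drives
print(round(raid0_efficiency(111, 429, 4), 2))  # 4-drive: ~0.97 of ideal
```

This matches the "80% plus" rule of thumb above: onboard controllers lose a little efficiency as drive count grows, while the dedicated card stays near-linear.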
Randall Leong January 25th, 2011, 01:58 PM Ideally a RAID 5/6 is nice, but overkill for most people's needs and budgets.
An easy statement to make when it's not your $.
Something is definitely not right there...
The Seagate .12 with 32MB cache is faster than the Green drives with 64MB.
What are you using to test with?
Also, SATA 300 vs 600 bears little fruit until you get up into the higher-end RAIDs.
RAID 0 should give 80%-plus more throughput than a standard drive.
some recent #s.....
Scott
ADK
Actually, I got an average transfer rate of 108 MB/s with the 5900 rpm drive (and that drive is an external USB 3.0 drive kit that's connected to the onboard USB 3.0 controller on my system's motherboard, to boot). But the average transfer rate of my particular 7200.12 barely touches 100 MB/s (and that is on the primary Intel ICH10R SATA controller on the motherboard). Trust me, I benchmarked each of the drives individually. Something is fishy with my particular drives, especially since my Samsung F3s average close to 115 MB/s on that same Intel ICH. The only thing that's slower on the 5900 rpm Green drive than on my 7200.12 is the random access speed: It took 19 ms on the Green, versus 14 ms on the 7200.12.
By the way, your results clearly showed that the 1TB WD Black SATA 6 Gbps drives do not work as well as the Seagate 7200.12 drives in a RAID array using the onboard Intel ICH10R controller.
UPDATE: I have tested two Samsung 1TB F3 drives in RAID 0 connected to my system's onboard Intel ICH10R controller. My result: 222 MB/s average. However, it provided no real-world performance increase over two Seagate 7200.12 drives, due to differences in the caching algorithms of the two models of drives.
Steve Kalle January 25th, 2011, 08:29 PM That's a rather silly statement, don't you think?
99% of the time you get a warning that a drive is failing.
The majority of systems I ship are set up with 2x 2-drive RAID 0.
Rarely does a client have a failure or data loss.
Scott
ADK
Out of 6 failed Raptors & Velociraptors, NONE of them gave a warning unless you consider your computer freezing and upon hard reboot, you see that a drive is designated dead. Furthermore, 99% of people here who use Raid 0, use the onboard Raid controller (ie Intel ICH) and that doesn't provide any sort of warning for anything until a drive is dead.
Randall Leong January 25th, 2011, 10:47 PM Out of 6 failed Raptors & Velociraptors, NONE of them gave a warning unless you consider your computer freezing and upon hard reboot, you see that a drive is designated dead. Furthermore, 99% of people here who use Raid 0, use the onboard Raid controller (ie Intel ICH) and that doesn't provide any sort of warning for anything until a drive is dead.
Actually, the ICH gives you no warning until the next reboot - and that's when I found out that one of the three Seagate 7200.12 drives was close to early failure. The reboot indicated the array containing the failing drive as "degraded".
Tim Polster January 26th, 2011, 10:00 AM I think the way to "see" these RAID 0 setups is as one drive and to view them as temporary storage.
I do not put anything on a RAID 0 drive (or any drive for that matter) that is not already backed up somewhere. Redundant systems help you sleep at night as well as save your butt at times. Hot swap trays are better than sliced bread.
I think the quality control for all of the drives on the market has fallen short of a lot of folks' expectations.
The idea of mini-RAIDs can really speed up workflow. Writing, encoding or building images from RAID to RAID is about the fastest way to work.
I just did some testing with Adobe Media Encoder CS5 as well as Encore CS5, and output from a source file on a RAID 0 is a lot faster than from a single drive.
Scott Chichelli January 27th, 2011, 10:42 AM +1
RAID 0 sets are the ideal way to work.
Even on a RAID 5/6, one is very foolish not to back up.
Every drive I have had fail has let me know in some way: errors/noise, or a pop-up from the Intel RAID Matrix software.
I did have one lose its MBR out of the blue, however, due to a bad external FireWire controller.
Which was my backup, oddly...
Now I use a RAID 5 NAS, and even it's backed up to a 2TB eSATA drive :-)
Scott
ADK
Steve Kalle January 27th, 2011, 04:16 PM +1
RAID 0 sets are the ideal way to work.
If you want to risk losing hours of work and then hours of reloading data or losing valuable face time with a client because you must stop working.
Another issue that can pop up and cause data loss is a SATA cable or SATA port going bad, which happened to a client of mine a few weeks ago. Because I set up his PC with RAID 10 (using the Intel ICH10R), he had no data loss and no downtime due to the bad SATA port.
For most people here editing XDCAM, HDV and H264, a single drive is good enough and a 2 drive Raid 1 is even better because you still get an increase in read speed and read access times.
Randall Leong January 27th, 2011, 04:41 PM For most people here editing XDCAM, HDV and H264, a single drive is good enough and a 2 drive Raid 1 is even better because you still get an increase in read speed and read access times.
Only if the drives are using a full-duplex interface such as SCSI and SAS and connected to the appropriate SCSI or SAS controller. SATA, on the other hand, is only a half-duplex interface - which means that data can travel in only one direction at a time (a single SATA device cannot read and write simultaneously because the interface and host controller would not allow it to happen). The half-duplex nature of (S)ATA can slow down disk performance so much that today's fastest hard drives would become effectively as slow as the maximum practical transfer speed of a five-year-old drive in sequential transfers. That is why I'd recommend a minimum of three hard drives even if one is not using a RAID.
Scott Chichelli January 28th, 2011, 08:26 AM If you want to risk losing hours of work and then hours of reloading data or losing valuable face time with a client because you must stop working.
Another issue that can pop up and cause data loss is a Sata cable or Sata port going bad which happened to a client of mine a few weeks ago. Because I setup his PC with Raid 10 (using Intel ICH10r), he had no data loss and no downtime due to a bad sata port.
For most people here editing XDCAM, HDV and H264, a single drive is good enough and a 2 drive Raid 1 is even better because you still get an increase in read speed and read access times.
Hey, if that's the way you feel, by all means...
But it's not what I recommend or sell, nor is your experience the norm. We rarely have a client with data loss or a RAID 0 dying.
While RAID 5/6 is the best way, not everyone can afford it or wants to.
I have a hard enough time convincing them they need a backup plan at times.
FYI, RAID 1/10 I refuse to sell or support. Talk about thinking your butt is covered, only to find out when you try to rebuild the array that it fails... And talk about time? A RAID 1 or 10 rebuild takes a good long time, and you can't work whilst doing it.
You can copy over your backup vastly faster than trying to rebuild that array.
But yes, DV/HDV has no need for RAID arrays (amazed people are still using this).
XDCAM/AVCHD/H264 can most certainly benefit from RAID 0, particularly with more layers/effects.
For many, time is money, and render time is what most people hate. This is the single biggest question I get: how can I decrease my render times?
RAID 0 helps with this regardless of codec (well, when having 2 sets of balanced RAID arrays).
Scott
ADK
Steve Kalle January 28th, 2011, 12:20 PM Hey, if that's the way you feel, by all means...
But it's not what I recommend or sell, nor is your experience the norm. We rarely have a client with data loss or a RAID 0 dying.
While RAID 5/6 is the best way, not everyone can afford it or wants to.
I have a hard enough time convincing them they need a backup plan at times.
FYI, RAID 1/10 I refuse to sell or support. Talk about thinking your butt is covered, only to find out when you try to rebuild the array that it fails... And talk about time? A RAID 1 or 10 rebuild takes a good long time, and you can't work whilst doing it.
You can copy over your backup vastly faster than trying to rebuild that array.
Scott
ADK
Your experience seems way off base! RAID 1 & 10 don't take much time at all to rebuild using the Intel ICH10R, and rebuilding is as simple as copying data. There is NO parity data to calculate, which is how R5 arrays go down and cause data loss. Also, you say that you have no drives in R0 going bad, but all of a sudden drives in R1 and R10 do go bad?
My client with the bad sata port - his PC took all of 15 mins to rebuild the array and he could still use it while it was rebuilding; so, I really don't know where you get long rebuild times and an inaccessible computer. I used to use 4 Raptors in Raid 10 for my OS prior to Intel SSDs, and I lost 2 within a month, but I could still use the PC during each rebuild.
Talking about long rebuild times, then Raid 5 & 6 come to mind as they can take days.
Scott Chichelli January 28th, 2011, 05:48 PM rebuild time vs copying your back up... i was refering to a comment you mad about how long it take to copy from back up...
:-)
Scott
ADK
Randall Leong January 29th, 2011, 10:39 AM Talking about long rebuild times, then Raid 5 & 6 come to mind as they can take days.
Yes, rebuilding a RAID 5 array can take days, especially on that onboard SATA controller. At least with a discrete PCI-e x8 RAID card, you can still work on the computer for other tasks that don't require the use of the RAID array while that RAID is rebuilding. The onboard SATA RAID controller, on the other hand, eats up a relatively high amount of CPU power during the rebuilding, which may prevent you from using that system at all during the rebuild.
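A back-of-envelope check on those rebuild times: every sector has to be read or written at least once, so capacity divided by sustained speed gives a hard lower bound (real parity rebuilds on a loaded software controller can be many times slower than this floor). A sketch with assumed round numbers:

```python
# Lower bound on rebuild time: capacity / sustained transfer speed.
def rebuild_hours(capacity_gb, mb_per_s):
    """Minimum hours to rebuild, assuming the drive runs flat out."""
    return capacity_gb * 1000 / mb_per_s / 3600

# e.g. a 2TB drive at an assumed 100 MB/s sustained
print(round(rebuild_hours(2000, 100), 1))  # 5.6
```

At 5.6 hours minimum per 2TB member, a multi-drive RAID 5 rebuild that also has to recompute parity and service user I/O stretching into days is entirely plausible.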
Randall Leong March 24th, 2011, 08:52 AM Hi Randall,
I strongly disagree with this statement about compatibility with the Intel ICH raid. I have lost data due to the Intel raid dropping a WD drive from a Raid 0 array even though the drive was fine. The Intel software Raid cannot distinguish between a bad drive and a drive taking too long to respond. It drops the drive and a rebuild is required or data is lost if Raid 0. Also, a pro hardware raid controller can automatically reallocate bad sectors when writing to an array while the Intel software raid cannot. I became aware of this when my 3ware's alarm went off notifying me that a few bad sectors were being written to, so, it was able to hold the unwritten data in cache and find new sectors for this data. This happened about 3 years ago, so, I might not be remembering everything the 3ware software said. However, I recall investigating it and do remember that the Intel raid could not do the same thing.
I have investigated further, and found that using an enterprise hard drive with TLER (or CCTL or ERC) permanently enabled in a single-disk or RAID 0 configuration can lead to data corruption or loss. All recently shipping WD Caviar RE-series drives have their TLER permanently enabled and fixed at 7 seconds - it cannot be disabled at all. That makes the drive suitable only for RAID configurations which involve data redundancy, such as RAID 1, RAID 3, RAID 5, RAID 6 or a multi-level RAID.
I also learned the hard way that I had been getting corrupted data and all sorts of weird issues when I ran a Samsung F3 1TB hard drive with firmware version 1AJ10002, which has CCTL enabled and set to 7.5 seconds by default, as my OS drive. (Oddly, the 1AJ10002 firmware (March 2010) is actually older than the 1AJ10001 version (September 2010) in my other two Samsung F3 drives currently in RAID 0; the 1AJ10001 firmware supports CCTL but has the feature disabled by default.) Switching to a drive that either lacks TLER/CCTL/ERC or has such a feature disabled by default alleviated the problem. So, if I had to use a drive with TLER/ERC/CCTL permanently enabled, or enabled by default, as an OS drive, I would have had to run two of those drives in a RAID 1 configuration as the boot/OS drive.
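For reference, on drives that expose the feature, the ERC/TLER timeouts can be inspected (and on some models adjusted, though often only until the next power cycle) with smartmontools. Whether a given drive accepts these commands depends entirely on its firmware; `/dev/sdX` below is a placeholder for your actual device:

```shell
# Read the current error-recovery-control timeouts (if the drive supports it)
smartctl -l scterc /dev/sdX

# Set read/write ERC to 7.0 seconds (values are in tenths of a second);
# many consumer drives reject this or forget it after a power cycle
smartctl -l scterc,70,70 /dev/sdX

# Disable ERC (let the drive retry as long as it likes - single-disk/desktop use)
smartctl -l scterc,0,0 /dev/sdX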
Blane Nelson May 31st, 2012, 02:10 PM Great thread everyone!
So after my system drive crashed last Monday, I have rebuilt the system, and am ready to reconfigure the rest of the machine.
I am running one 500GB VelociRaptor as the system drive, and two RAID 1 arrays, each with two HGST Deskstar 7K1000.C 0F10383 1TB 7200 RPM 32MB-cache SATA 3.0Gb/s drives.
I am also running a 4TB (4x1TB drives) Glyph Forte RAID box via eSATA that I use for backup and media storage.
I am looking for protection as much as I am for speed, so will this configuration work, or should I go back to R0 for the raid volumes?