eSATA RAID 0 question
Scott Auerbach May 16th, 2006, 11:59 AM Pardon my ignorance on RAID issues, but I'm still trying to figure some of this geeky stuff out...
If I were to get an external SATA box (say, for example, the mini-G 4-drive box) and stripe 4 drives as RAID 0 for fast throughput, can that box hang off a single eSATA port on a MacBook Pro, or does it require 4 eSATA channels?
I've seen RAID controller cards for the G5/desktop world that have multiple channels on one card, but I can't seem to determine if such a card is ~necessary~ for a RAID stripe. I wouldn't think so, given the high-speed eSATA (SATA II) protocol... but I can't figure out if the multi-lane cards are just a way to take the RAID processing overhead off the computer, or if they're essential.
And if you can hang a 4-way stripe on a single eSATA port, does the external box need its own RAID controller hardware, or can it be just a bunch o' drives?
All this is on hold, of course, until there's an express/34 eSATA card, but that appears to be on the horizon.
Rob McCardle May 16th, 2006, 02:43 PM Hi Scott - the rule of thumb with RAID 0 configs (and this only relates to raw data throughput) is to have the drives on separate channels - yes, despite what the enclosure manufacturers will tell you, you will get faster throughput when your drives are each on their own dedicated channel and then combined in a RAID set.
That is why the controller cards for the towers have all those channels: each channel is on its own bus - no head banging (I/O data bashing) in the bus pipelines.
I can't comment on eSATA performance with a MBP (later this year for me, when Merom comes out).
I hope that Apple will add an eSATA port - or that you'll be able to add a card - so you can stripe 2 drives in RAID 0, like we can now with FW800 on the PowerBooks.
eg. <link removed, sorry - check barefeats, search for eSATA and a review of the FirmTek 4-drive enclosure> - for this to fire optimally you would have a 4-channel card in a tower and run the leads. You would not run one lead and stripe the drives.
Also here - <link removed, sorry - check barefeats for an 8-port PCI-X card> - OK, he's yapping on about PCI-X but the principle is the same.
Hope that helps.
edit: oh yeah, should add (I'm sure you know but just in case you don't) - striping is utterly great and can give the sheer speed required as long as you realise it's a total crash and burn type of deal.
Lose one drive and you're toast - so have a good backup plan in place - regularly.
Scott Auerbach May 16th, 2006, 03:38 PM Hi Scott - the rule of thumb with RAID 0 configs (and this only relates to raw data throughput) is to have the drives on separate channels - yes, despite what the enclosure manufacturers will tell you, you will get faster throughput when your drives are each on their own dedicated channel and then combined in a RAID set.
That is why the controller cards for the towers have all those channels: each channel is on its own bus - no head banging (I/O data bashing) in the bus pipelines.
Thanks...I figured performance would be better with a multi-lane setup...but that's not an option on the MacBook Pro. The best we can hope for is the upcoming 2-lane express/34 card. That being the case, can a 4-way stripe be hung off a single eSATA channel, even with diminished performance? I just don't want to invest in something that won't work at all. I don't expect ideal performance, but if it'll work...and significantly speed up the process of offloading media from the P2 cards...then it's worth the investment.
As for the crash-n-burn... yeah, been there done that on Avid stripes. Argh.
Rob McCardle May 16th, 2006, 03:58 PM Scott - you're on the bleeding edge!
I'm going to send you over yonder -
<link removed, sorry check Firmteks site for their Seritek adapter>
I'll have a stab and say you'll be ok - but then again ...
Look at the data throughput per port AND, on the one eSATA bus, 3 freakin' Gb/sec - eeek.
Please note: I'm not affiliated with FirmTek in any way, shape or form - but they make some cool stuff, IMO.
Robert Lane May 16th, 2006, 04:47 PM Scott,
As you might know, there are 2 kinds of SATA enclosures: the direct-connect or JBOD type, where each drive gets its own data cable, and the port-multiplier (PM) type, which gangs several drives together onto one eSATA cable.
For the towers that means there are 2 kinds of cards: the type that uses a single connector for each drive, such as the HighPoint RocketRAID 2320, or the type that uses the PM connector, like the Sonnet Tempo E4P. The two types have completely different setup schemes and, right now, the PM type has many hidden bugs that haven't been worked out yet. As Rob mentioned, it's "bleeding edge" technology still in its infancy.
OWC makes a great enclosure that might work for the MBP: it's a single case (nice black color too) that holds two drives and has (2) eSATA connections, one for each drive. You could RAID those together in OS X as a software RAID, and in a RAID 0 configuration you should get about 137 MB/s READ speed and a little less on WRITE. This assumes that you're using SATA-II drives with lots of cache on each drive.
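A quick, hedged sketch of how that kind of scaling works - the per-drive rate, the overhead factor and the ~300 MB/s payload ceiling of a single 3 Gb/s link are assumed placeholder values here, not benchmarks:

```python
# Back-of-envelope read-throughput estimate for a RAID 0 stripe hung off one
# eSATA port. Every number here is an assumption, not a measured result.

def stripe_estimate(drives, per_drive_mb_s=68.0, overhead=0.9, link_cap_mb_s=300.0):
    """Rough aggregate MB/s for a software stripe behind a single 3 Gb/s link.

    The stripe scales with drive count only until it hits the link's assumed
    payload ceiling (roughly 300 MB/s for SATA II after encoding overhead).
    """
    raw = drives * per_drive_mb_s * overhead
    return min(raw, link_cap_mb_s)

for n in (1, 2, 4):
    print(f"{n} drive(s): ~{stripe_estimate(n):.0f} MB/s best case")
```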
SIIG and a few others are currently making a 2-channel ExpressCard/34 SATA adapter which would give you the ability to use the OWC enclosure.
The real caveat here is that you'd still need one more external drive, say a FW400 type, to use for your backups, so that if the array fails for any reason you've got a current copy of your project and media files.
Scott Auerbach May 16th, 2006, 05:56 PM Scott,
As you might know,
Hell no, Robert... that's why I ask Godzilla! <g>
Leave it to me to be bleeding edge and not even realize it...
In my techno-flailing, I'd kinda gotten the impression that there were, as you say, two ways to skin this cat. But all of the deeper articles I'd found dealt with multi-channel RAIDs...no reference at all to splitting it across a single cable.
So far I haven't found any express/34 eSATA II cards, though Vydeo has announced a two-lane card for release sometime soon.
Correct me if I'm misunderstanding, but it sounds like you're describing close to double the I/O of a single eSATA drive by doing a two-way software RAID 0 stripe. If that's the case, isn't it reasonable to assume that I could get close to 4x single-drive speeds on a 4-way stripe? That would sure help get the files off the P2 cards in PC mode without totally slowing down the shoot day. FW (400 at best possible performance) out of the camera, and eSATA at 250-ish speeds onto the RAID array.
I definitely understand the "lose-one-lose-all" issue with RAIDs (having experienced that twice in 10 years on Avids), and the need for some kind of backup...that's not a problem. At this point, I'd just like to get the footage offloaded while on location as fast as possible...I anticipate doing a lot of SELLING to get the budgets for a three-man crew (me as DP, camera assist/data manager/focus puller/grip, and audio tech). If the sales job on having the camera assist includes the increased production speed by not having to do my own media management, then I have to be able to keep the camera up and shooting as much as possible. Especially once there's some kind of P2 external reader...I have 3 8GB cards in hopes that I can get media off card A while B and C are shooting. If I can get 1GB off in significantly under a minute, 3 cards is all I'll ever need.
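For a sense of scale, here is a small sketch of offload times for an 8 GB card - the sustained rates used are illustrative assumptions, not tested figures:

```python
# How long an 8 GB P2 card takes to offload at a few assumed sustained rates.
# The MB/s figures below are placeholders for illustration, not measurements.

CARD_GB = 8

assumed_rates_mb_s = {
    "FW400 (realistic sustained, assumed)": 35,
    "single SATA drive (assumed)": 65,
    "2-drive eSATA stripe (assumed)": 120,
}

for label, mb_s in assumed_rates_mb_s.items():
    total_s = CARD_GB * 1024 / mb_s
    per_gb_s = 1024 / mb_s
    print(f"{label}: ~{total_s / 60:.1f} min per card (~{per_gb_s:.0f} s per GB)")
```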
Beyond the "will this work?" question: I can't find any OSX Tiger documentation/help text on doing a RAID 0 stripe. Anyone have a Kbase link?
Rob McCardle May 16th, 2006, 08:30 PM <link removed, sorry check barefeats front page, scroll a little to the review of the Seritek adapter>
Check the graphs - that is what you can expect.
Also note the raid stripe - 2 ports active, 2 drives in 1 enclosure.
The issue for transfer/copy will not be bandwidth for eSATA (there's bandwidth for miles there); it will be FW400 - that will be the slowest link in the chain. Not much you can do about that ...
Sorry Chris - I had links pasted in there - forgot about policy - rush of blood to the brain ...
Scott Auerbach May 16th, 2006, 09:17 PM Will do... thanks!!!
Scott Auerbach May 16th, 2006, 09:32 PM Beyond the "will this work?" question: I can't find any OSX Tiger documentation/help text on doing a RAID 0 stripe. Anyone have a Kbase link?
Gee...quoting myself here. I found the RAID stripe utility in Disk Utility. Shoulda thunk to look there.
David Tamés May 16th, 2006, 09:55 PM I found the RAID stripe utility in Disk Utility
My suggestion for any Macintosh user setting up a software RAID is to consider doing it with SoftRAID (http://www.softraid.com/) rather than Apple's Disk Utility. This is what I've been using since I discovered that the friendly Apple interface does not extend into error reporting and disk recovery. SoftRAID lets you set up a RAID 0 (striping) or RAID 1 (mirroring) with any disk connected to the Mac via Fibre Channel, SCSI, ATA, SATA, FireWire, or USB 2.0. I hope a future version will also support RAID 10 (a.k.a. RAID 1+0), which Apple added to Disk Utility in Tiger.
Disk Utility provides a no-frills interface, but you have to use the command line to rebuild in the event of a mirror disk failure, and a RAID configured with Disk Utility is less reliable because Apple does not offer the level of error checking and reporting that SoftRAID does. SoftRAID lets you see I/O transactions, error counts, and volume status through a very Mac-like interface that makes it a snap to create and maintain RAID volumes. A SoftRAID vs. Apple Disk Utility comparison (http://www.softraid.com/vsapple.html) is available on the SoftRAID web site.
Robert Lane May 19th, 2006, 08:31 AM Scott,
One last thing to consider about any SATA-type RAID setup:
Unless you use a RAID card that has on-board cache on the controller (I only know of one, and it's pricey), you'll find that the RAID starts to slow down as you work on video files.
This is because as you import, render, etc., the only cache available to help process the large video files lives on the HDDs themselves, which is why it's always best to use drives with "big" on-board cache - no less than 16 MB per drive.
So, as you work, the cache on each drive fills up immediately while it transfers the data to the disk, back to the system and so forth. This overloads the on-board drive cache to the point where it literally can't keep up.
Case in point:
Using the KONA System tester I put a 2-disk RAID 0 array to the test. Using the DVCPRO codec with the 128 MB file test, the first run showed about 140 MB/s. After the 5th consecutive run the best READ speed was only 109.7 MB/s. This accurately represents how the drives would perform in a video editing environment moving large video files - which would be greater than 128 MB, I might add - back and forth.
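A quick sanity check on those two numbers:

```python
# Putting a number on the drop-off in the KONA test described above:
# ~140 MB/s on the first run vs. 109.7 MB/s by the fifth consecutive run.

first_run_mb_s = 140.0
fifth_run_mb_s = 109.7

loss = (first_run_mb_s - fifth_run_mb_s) / first_run_mb_s
print(f"Read throughput lost after 5 consecutive runs: {loss:.0%}")  # roughly 22%
```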
Those results are a very important thing to note, because all the tests posted by AMUG, Bare Feats and others don't address the degradation-over-time issue; they simply show the maximum *possible* throughput for the various setups that have been tested.
So while SATA and eSATA enclosures are showing up on the market, they're not exactly optimized for a video editing platform. They will work (once the PM bugs have been worked out) and they will perform faster than a JBOD or any single drive, but they do have a built-in "performance degradation factor" that can't be overcome with 98% of the setups currently offered.
Scott Auerbach May 19th, 2006, 02:18 PM Godzilla squash my hopes! <g>
Thanks, once again (I feel like I need to do that daily) for yet another incredibly important nugget that has been completely overlooked in all the material I've read (here and elsewhere) on P2 workflow. I was always assuming a 4-way SATA stripe (minimum) would be the way to go, but clearly a 2-way would offer even more iffy performance than I realized.
So tell me... has anyone ventured an opinion about whether it's even physically possible to cram the electronics for a 4-drive eSATA raid controller on an express/34 card? I know they're all PCI cards at the moment, but it looks to me like it's a whole lot of circuit board with not much on it...
Robert Lane May 21st, 2006, 12:02 PM Keep in mind that while SATA and eSATA setups do lose speed as you progress in your editing, they still perform better than a single drive. It's just that a 2-drive stripe really isn't much faster than a single drive over time, which doesn't really make it a cost-effective solution.
The ExpressCard adapters that are coming so far only allow 2 drive connectors. I haven't done any research into what else is coming down the pike.
My suggestion is that if you intend to do most of your edits on a MacBook Pro, you use multiple external FW drives instead of the SATA striping method. It's more stable, less expensive, and you'll be working with proven technology rather than putting your work on the backbone of "bleeding edge" tech that really needs to be refined for video use.
I don't know how much storage you need or what your project requirements are, but you could use (3) FW drives to accomplish fast and stable edits on the MBP: one for your capture scratch; one for stock video, music, SFX, waveform and thumbnail cache files; and one to save your project files and to use Media Manager to create your backups.
At some point within the next 6 months to a year the SATA RAID setups will get better and there will be controller cards with on-board cache to eliminate the time degradation factor, but for now you're better off with FW externals for the MBP.
Scott Auerbach May 23rd, 2006, 11:38 AM My suggestion is that if you intend to do most of your edits on a MacBook Pro, you use multiple external FW drives instead of the SATA striping method.
I don't know how much storage you need or what your project requirements are, but you could use (3) FW drives to accomplish fast and stable edits on the MBP: one for your capture scratch; one for stock video, music, SFX, waveform and thumbnail cache files; and one to save your project files and to use Media Manager to create your backups.
Thanks, Robert. Realistically, it'll probably be 6 months before I'm doing a whole lot on FCP anyway, so the technology will be farther along, I hope... I'm an Avid editor and have regular access to Adrenaline, but bought FCP to be able to wean myself from that relationship...and to offer some degree of in-field editing.
That said...I'm a bit confused by your quote above. I see the advantage of splitting the files up like this, to avoid hardcore head-bashing running between project and media files... but wouldn't a single FW drive for media end up being too slow for HD100? I thought the 7200 rpm drives were still only delivering 60-70 MB/s sustained read rates...? I thought I'd need at least a 2-drive stripe no matter what the I/O protocol was. Am I (probably) wrong?
Drew Harty May 23rd, 2006, 12:25 PM One last thing to consider about any SATA-type RAID setup:
Unless you use a RAID card that has on-board cache on the controller (I only know of one, and it's pricey), you'll find that the RAID starts to slow down as you work on video files.
This is because as you import, render, etc., the only cache available to help process the large video files lives on the HDDs themselves, which is why it's always best to use drives with "big" on-board cache - no less than 16 MB per drive.
So, as you work, the cache on each drive fills up immediately while it transfers the data to the disk, back to the system and so forth. This overloads the on-board drive cache to the point where it literally can't keep up.
Case in point:
Using the KONA System tester I put a 2-disk RAID 0 array to the test. Using the DVCPRO codec with the 128 MB file test, the first run showed about 140 MB/s. After the 5th consecutive run the best READ speed was only 109.7 MB/s. This accurately represents how the drives would perform in a video editing environment moving large video files - which would be greater than 128 MB, I might add - back and forth.
Hello Robert,
Doesn't the throughput of all software-controlled RAID 0 arrays, whether SATA, FireWire, or SCSI, decrease as the drives fill up? (Unless, as you say, there is a hardware controller like Medea uses.) From tests I have seen, a decrease in read/write speeds of 30%-40% is typical. Is there something about the SATA or eSATA connectors that makes throughput decrease more than a FireWire 800 connector? Doesn't DVCPRO HD only require about 13 MB/s of throughput?
I have used a SATA RAID 0 on my desktop for a couple of years editing uncompressed 10-bit SD with no problems, and was going to use two SATA drives with my MBP (when FirmTek releases its new ExpressCard/34 card) in a RAID 1 for instant backup, using SoftRAID to record interviews directly from my HVX in 720 24p. Do you think this will be a problem? Are you suggesting FireWire drives would perform better in this use?
thanks,
Drew Harty
Kevin Shaw May 23rd, 2006, 12:41 PM wouldn't a single FW drive for media end up being too slow for HD100? I thought the 7200 rpm drives were still only delivering 60-70 MB/s sustained read rates...?
HD100 is 100 MegaBITS per second, which is only 12.5 MB/sec (because 1 MB = 8 Mb), hence easily sustainable on any modern hard drive. I like the idea of splitting source files over different (non-RAID) drives but haven't tested that yet; in theory it should help compared to having everything on one drive.
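The same arithmetic spelled out, with an assumed single-drive rate thrown in for comparison:

```python
# Kevin's conversion in code: HD100 is 100 megaBITS per second, and one byte
# is 8 bits, so each stream only needs 12.5 megaBYTES per second from the drive.

HD100_MBIT_S = 100
stream_mb_s = HD100_MBIT_S / 8            # 12.5 MB/s per stream

drive_mb_s = 65                           # assumed sustained rate of a 7200 rpm drive
print(f"One HD100 stream: ~{stream_mb_s} MB/s")
print(f"Headroom on a ~{drive_mb_s} MB/s drive: {drive_mb_s / stream_mb_s:.1f}x")
```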
Robert Lane May 23rd, 2006, 06:50 PM I have used a SATA RAID 0 on my desktop for a couple of years editing uncompressed 10-bit SD with no problems, and was going to use two SATA drives with my MBP (when FirmTek releases its new ExpressCard/34 card) in a RAID 1 for instant backup, using SoftRAID to record interviews directly from my HVX in 720 24p. Do you think this will be a problem? Are you suggesting FireWire drives would perform better in this use?
My suggestion is that there isn't any performance increase for the added cost of SATA in the proposed setup vs. using existing external FW drives.
Robert Lane May 23rd, 2006, 06:56 PM I thought the 7200 rpm drives were still only delivering 60-70 MB/s sustained read rates...? I thought I'd need at least a 2-drive stripe no matter what the I/O protocol was. Am I (probably) wrong?
The throughput only becomes an issue when you start multiplying how many streams of HD100 you've put into any timeline.
You're correct that the average single drive, regardless of whether it's IDE/FW- or SATA-based, is good for about 65 MB/s. That's plenty for about 3-4 streams of DVCPRO 720p.
With regard to drive optimization, I highly recommend this book for ANYONE who wants to maximize throughput on any Mac-based system:
"Optimizing your Final Cut Pro System", part of the Apple Pro Training series. It deals with literally everything you need to know about the hardware/software environment for editing.
It has chapters that specifically deal with drive types, configurations, SAN/RAID/XSAN setups, video codecs, bitrates - you name it, it's covered.
Scott Auerbach May 24th, 2006, 08:12 AM Fantastic suggestion, Robert...thanks. I hadn't run across that book.
At the risk of overextending my welcome with Godzilla, how do you manage multiple 720p streams from a single drive? If 1080i is 100 Mb/s, then I assume 720pN30 is roughly 50 Mb/s (extrapolating from the P2 capacities). At that bitrate, it seems like 1 stream is all a single drive could possibly handle. What part of the miracle of non-linear editing (and it IS a miracle...I go back to 1" CMX days..<shudder> ) am I overlooking? Even on Avids, I thought the minimum stripe for uncompressed SD is a 4-way of 7200 rpm drives. Is FCP buffering material off onto the scratch disks or something?
Robert Lane May 24th, 2006, 08:29 AM Fantastic suggestion, Robert...thanks. I hadn't run across that book.
At the risk of overextending my welcome with Godzilla, how do you manage multiple 720p streams from a single drive?
Taken from Page 410 of the aforementioned book:
DV100 720p60 - MB/second: 13.92; MB/minute: 835; GB/hour: 50.1
Multiply the MB/second rate by the number of streams, compare that against what a single drive can sustain, and you'll figure out the maximum number of streams you can handle.
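That arithmetic in a small sketch - the ~65 MB/s single-drive figure is an assumption carried over from earlier in the thread:

```python
# The stream math Robert describes, using the book's DV100 720p60 figure
# (13.92 MB/s per stream) and an assumed ~65 MB/s sustained single-drive rate.

per_stream_mb_s = 13.92      # DV100 720p60, from the table above
drive_mb_s = 65.0            # assumed single-drive sustained throughput

max_streams = int(drive_mb_s // per_stream_mb_s)
print(f"Raw (unrendered) streams one drive can feed: ~{max_streams}")  # ~4
```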
Now we're talking about RAW footage, not pre-renders or filters mixed in the timeline. Add effects of any kind and you'll decrease the single-drive ability to handle RT playback.
This is where, on a NON-RAID setup, multiple drives come into play - something the book goes into great detail about. As mentioned in my previous post, when you keep application, render, original clips, cache and project files physically separate from each other you dramatically increase the system's ability to handle multiple streams - of any compressed codec.
With a 4-external-drive setup you can very easily do a 4-stream mix of DV100 30p or even 60p; the RT performance then will be determined mainly by how fast the CPU is and how much RAM you have. That doesn't mean you won't get performance slowdowns during playback with a 4-stream mix, but the "older" PowerBooks (later models) can do it, especially with a 7200 rpm MAIN internal drive.
Get the book; I guarantee after 30 minutes of browsing chapters you'll have some eye-opening moments about how to handle FCP in the hardware world.
Robert Lane May 24th, 2006, 08:59 AM Hello Robert,
Doesn't the throughput of all software-controlled RAID 0 arrays, whether SATA, FireWire, or SCSI, decrease as the drives fill up? (Unless, as you say, there is a hardware controller like Medea uses.)
Actually, there are 2 different degradation issues for the SATA drives:
- One is as you mentioned, where performance drops off as the drives fill up. This is common among all drive configurations, even SCSI RAID. It's not specific to being in a RAID configuration at all - any drive slows down when it gets more than 70% full; it's just more noticeable on a video edit system.
- The second type is specific to the SATA RAID setups and occurs immediately as you edit. If you look at the results mentioned in the KONA System test, all 5 tests were done consecutively within a 2-minute period. So you can see that in less than 5 minutes a significant amount of the maximum throughput has been lost to the drive cache being "hammered" and full.
(NOTE: When doing this KONA System test, I noticed that the drives got really hot - hotter than normal. The supposition is that because the drive is working so incredibly hard at moving big chunks of data in and out - constantly - the on-board cache is being pushed to its limit, literally. This also made me very nervous about trusting an eSATA setup over a long period of time because of possible drive or drive-controller failure due to the excessive heat.)
There is one eSATA card that does have on-board cache to help prevent this very issue, but it's only in a PCI or PCI-X config and it's about $600. Hardly cost-effective compared to external FW when you add on the cost of enclosures, drives and eSATA cables.
Scott Auerbach May 24th, 2006, 08:52 PM Get the book; I guarantee after 30 minutes of browsing chapters you'll have some eye-opening moments about how to handle FCP in the hardware world.
Clearly! Thanks again!