View Full Version : i7 980x Now or Wait for Sandybridge?



Scott Chichelli
February 5th, 2011, 02:48 PM
WD warranty
http://www.wdc.com/en/products/internal/desktop/

Drive warranties with all manufacturers change over the years.
The only drives a few years ago that had a 5-year warranty were the enterprise drives.
Most were 3-year, with several being 1-year.

At one brief period nearly all were 1-year (it didn't last long). One of the reasons I sold Seagate (aside from their drives being quiet and performing well) was that they had a better warranty than WD.

It's very possible someone with a 3-year-old drive trying to RMA it today did indeed have a 3-year warranty when it was purchased. In fact, I can almost guarantee it.
This 5-year warranty is new, as I said, other than on enterprise drives and Raptors.
This would also be true for Seagate.

It's funny how worked up some people get based on personal experience.
Seagate was getting a bad rep due to the 1TB drives bricking (we had very few), yet reading around, it sounds like the end of the world.
Every forum I go to, there is always someone who is passionate about hating a brand.

If there were any issues with the WDs I would not be selling them.
Issues cost me employee time in support.
Issues cost me money in shipping, both to the client and then finally back to the manufacturer.

You can argue with me all you want; the fact is my clients do not have these issues. Nor do we.
I have a Drobo in RAID 5 with WDs, and I have two servers with 8-drive RAID 6 arrays with WDs in them.
Several of the tech computers act as storage as well and have RAIDs with both Seagate and WD drives.
Maybe it's the drives we use; maybe it's how the RAID is set up.
Maybe we and our clients (also in the thousands) are just absurdly lucky and you are actually right?

Link, please, for these thousands who have experienced spin-down issues.
FYI, it's common only in external drives.
Black drives do not have this.

Scott
ADK

Steve Kalle
February 5th, 2011, 07:12 PM
Hey Scott,

I hate to do this, but I must admit I was wrong about TLER and RAID 0. From the time I have spent reading forums today, I have come across many who say TLER only works properly when there is redundant data, i.e. RAID 1, 5, 10, etc.

I appreciate your debating me. I had to test your knowledge. Just ask Harm about me constantly debating him. There aren't many here well versed in computer hardware, so I like to challenge those who are, including Randall (who is as knowledgeable as or more so than me). I grew up around the Commodore 64 and Amiga, as my mother was the largest C64 seller in the Midwest. Many nights of mine as a child were spent in her computer classes.

However, I still have a problem with RAID 0 for business-related tasks. I am very business oriented; thus, I am very, very risk averse. With drive costs and speeds these days, I can't see wasting hours of time reloading the OS or assets when I could have spent a little more money to get redundancy and, thus, no downtime. I understand many don't have tight deadlines or clients, producers and/or directors in the edit bay, so they can afford some downtime. BUT isn't it Murphy's Law that states that if something can go wrong, it will, and at the worst possible time (or something close to that)? :)

Just want to state that I will recommend your business to anyone needing a custom PC for video editing.

Any chance you can run some AE CS5 tests with various hardware such as the 980X and the SR-2 with RAM maxed out? I would absolutely love that, as I use AE every day and would like to make my HP Z800 faster (dual 6-core, 24GB reg ECC). On a side note, I find it very funny that Premiere CS5 is now rock-solid stable and AE CS5 is not at all. I have replaced the motherboard, the fans, the thermal sensor and, recently, the FX3800, but AE still freezes constantly. I have also tested the same AE projects on my home PC (custom i7) and it still freezes.

Btw, I came across a WD tool that lets you adjust the spin-down time for internal drives. There has been a tool for external drives since at least 2004.

Scott Chichelli
February 7th, 2011, 09:00 AM
Hi Steve,
All good, man.
I enjoy a good debate :-) Giving Harm a hard time is always fun...

And thanks for recommending us.

I personally agree that a good RAID 5 or 6 is ideal. I have a hard enough time getting people to understand the need for backup and a good UPS,
much less getting them to buy a RAID 5 or a good NAS storage backup.

Of course, once they do lose data they get it. (I have been there myself, which is why I am double redundant.)
Everyone wants fast render times. RAID 0 does the job!

I think we have some heavy AE projects we got from a client; I will see what we can do. Benchmark/test time is hard to come by around here, as we are swamped more often than not.

FYI, in some tests we did, Quadro cards had no advantage over standard GTX cards for most animation programs.
I am sure you know this, but...
the Quadros are the same cards as the GTX, and usually based on the lower GTX models.
Only the Quadro 6000 is based on the 480; most are 460s.

Scott
ADK

Steven Davis
February 7th, 2011, 09:10 AM
This is what I have been building so far. I'm shooting for a Blu-ray machine. I am certainly open to criticism of it.


Newegg.com - Once You Know, You Newegg (http://secure.newegg.com/WishList/PublicWishDetail.aspx?WishListNumber=12321974)

Randall Leong
February 7th, 2011, 09:44 AM
This is what I have been building so far. I'm shooting for a Blu-ray machine. I am certainly open to criticism of it.


Newegg.com - Once You Know, You Newegg (http://secure.newegg.com/WishList/PublicWishDetail.aspx?WishListNumber=12321974)

Two problems (and one potential problem):

1) That list does not include a case or any mass storage drives whatsoever. Do you already have a case that you can use with this build? And, do you already have some hard drives that you can use in this system?

2) Are you going to overclock the CPU? If so, that Hyper N520 may not be well suited for much, if any, overclocking: that HSF uses only 92mm fans, the same diameter as the stock Intel boxed CPU cooler. You really want an HSF with 120mm or larger fans if you're going to overclock much.

Steven Davis
February 7th, 2011, 09:50 AM
Hey Randall,

Thanks for the feedback. I'm recycling my Raptor drive for my OS and my P180 case, so I'm saving a bit of money that way. I'm not planning on overclocking; I think the stock settings will be plenty for my needs. But thanks for the insight on the cooler. I'll look around a bit more for something with a larger fan.

Randall Leong
February 7th, 2011, 09:54 AM
Hey Randall,

Thanks for the feedback. I'm recycling my Raptor drive for my OS and my P180 case, so I'm saving a bit of money that way. I'm not planning on overclocking; I think the stock settings will be plenty for my needs. But thanks for the insight on the cooler. I'll look around a bit more for something with a larger fan.

While you're at it, I strongly recommend getting two to four additional hard drives for your media, project and output files. The 1TB 7200rpm hard drives are the sweet spot for value right now. I do not recommend editing video on a machine with only one hard drive, because such a task requires simultaneous reads and writes, and SATA does not allow both at once since it is effectively a half-duplex interface.

Steve Kalle
February 7th, 2011, 02:35 PM
FYI, in some tests we did, Quadro cards had no advantage over standard GTX cards for most animation programs.
I am sure you know this, but...
the Quadros are the same cards as the GTX, and usually based on the lower GTX models.
Only the Quadro 6000 is based on the 480; most are 460s.

Scott
ADK

Technically, there are a few differences between the GTX and Quadro. At least with many 3D apps excluding C4D, tests show Quadros destroying their faster GTX siblings.

If you do find time for AE tests, I'd like to see an i7 vs. a dual-CPU setup, and 2GB of RAM per core vs. 4GB per core.

I am trying to decide whether to add another 24GB of RAM to my HP. Aside from the PPBM5 results, will I really see a huge increase in MPEG-2 encoding speed, which is 80% of my daily rendering? Short of dropping $20k on Smoke on Mac, I need to make my system faster for client/producer sessions.

Scott Chichelli
February 7th, 2011, 02:56 PM
We have one of those crazy dual-Xeon @ 4GHz systems with 48GB of RAM on the bench now... maybe Eric will have time.


Scott
ADK

Andrew Smith
February 11th, 2011, 06:07 AM
Spotted an early report of Sandy Bridge not being too much to get excited about ...

The first Sandy Bridge results with the i7-2600K CPU have arrived. As expected, the i7-2600K is a nice performer, but it requires massive overclocking to perform about the same as the more affordable i7-920/930/950. Still, the platform is severely handicapped by the lack of PCIe lanes on the P67 motherboard, which prevents the installation of a RAID controller. A very serious drawback for video editing.

The initial conclusion is: Very nice, but not something to get excited about.

Read the rest at Premiere Pro Benchmark for CS5 (http://ppbm5.com/News.html)

Andrew

Randall Leong
February 11th, 2011, 10:03 AM
Spotted an early report of Sandy Bridge not being too much to get excited about ...



Read the rest at Premiere Pro Benchmark for CS5 (http://ppbm5.com/News.html)

Andrew

Actually, my particular 950 requires massive overclocking just to perform well at all. But even with a good air cooler, my 950 peters out at about 3.8GHz, well before the CPU gets near overheating.

Sorry, but with the i7-9xx series, you get what you pay for. You really need to spend $6,000 or more on a single i7-9xx system (that includes the cost of a hardware RAID card, 12 or more hard drives, 24GB of ultra-high-speed, ultra-low-latency RAM (DDR3-2000 or faster with CL6 latency timings at that elevated speed) and super-expensive liquid or liquid-nitrogen CPU cooling) just to outperform the fastest of the $1,500 i7-2600K systems. Most i7-9xx configurations costing less than $3,000 do not perform very well due to compromises in memory and/or storage and/or cooling. (I confirmed my findings by simply running the PPBM5 benchmark on my current system at its stock Turbo frequency of 3.2GHz and at an overclocked 3.83GHz. The total time in PPBM5 improved by only about 40 seconds, from 300-ish seconds at stock to 260-ish at 3.83GHz. This indicates that something is bottlenecking my system. I might have to replace the 6 x 2GB modules with 6 x 4GB modules, for 24GB, just to improve performance. Unfortunately, all I can afford are DDR3-1333 modules with 9-9-9 timings, since DDR3-1600 or faster modules with even 9-9-9 or 8-8-8 timings still cost more than I want to spend at this time.)

Randall Leong
February 15th, 2011, 07:50 PM
And when the Sandy Bridge motherboards do go back on the market, spend the little extra money for an i7-2600(K) rather than settling for an i5-2500(K). The quad-core i5 CPUs lack Hyper-Threading, so performance in video editing won't be as good as with the i7 (although the one low-ranking result of an i5-2500 system in the PPBM5 list was partly due to that system having only 4GB of RAM).

Andrew Clark
February 22nd, 2011, 11:20 PM
So the P67 MB's have no capability to add in a RAID card for an external array?

Randall Leong
February 23rd, 2011, 12:26 AM
So the P67 MB's have no capability to add in a RAID card for an external array?

As I mentioned, the P67 mobos have no more than four available PCI-e lanes (after accounting for a single graphics card and all of the onboard controllers on the mobo). And the Asus P67 mobos have their PCI-e x4 slot running in x1 mode by default. Forcing that slot to run at its full x4 bandwidth on the P8P67 Pro will disable both of the mobo's PCI-e x1 slots, the USB 3.0 front-panel header and the eSATA ports. Also, one of the PCI-e x1 slots is disabled by default. Enabling it will also disable the USB 3.0 front-panel header.

Scott Chichelli
February 23rd, 2011, 08:10 AM
So the P67 MB's have no capability to add in a RAID card for an external array?

If you can find an x1 card, yes; past that, no. Other boards have a slightly better layout than Asus.
If you need something like a Sonnet array, then you really should be on a dual Xeon anyway.

Scott
ADK

Andrew Clark
February 23rd, 2011, 01:12 PM
Hmmm... OK, so what about using the onboard eSATA or USB 3.0 ports with this:

TowerRAID TR4UTBPN - 4 Bay USB 3.0 / eSATA Hardware RAID 5 Tower (Black) (http://www.sansdigital.com/towerraid-plus/tr4utbpn.html)

Would this work for an external RAID array with the P67 MBs?

Andrew Smith
February 23rd, 2011, 09:19 PM
Thanks for that link for the TowerRAID product. It's beyond me how people could spend megabucks for a "proper" RAID card when you can get something like that for a few hundred dollars and then just add your own drives.

Andrew

Steve Kalle
February 24th, 2011, 12:37 AM
Thanks for that link for the TowerRAID product. It's beyond me how people could spend megabucks for a "proper" RAID card when you can get something like that for a few hundred dollars and then just add your own drives.

Andrew

There are many reasons why a 'proper' real hardware RAID controller is better. For one, this external box is a single point of failure: if one piece breaks, the entire unit must be replaced or possibly sent to Sans Digital for repair. Just because of this, I would never use it on any work computer.

Also, its performance is greatly limited, and its single cable can be saturated very easily. A single Samsung F3 can nearly saturate its entire bandwidth, two F3s can saturate it 100%, and a single SSD can saturate it 100%.

EDIT: Taken directly from this unit's manual,"The parity calculations for R5 MODE may result in write performance that is somewhat slower than the write performance to a single drive."

After some more reading, this unit does not support hot-swapping drives. If a drive dies, you must turn off the unit's power and then replace the drive. Likewise, if you just want to add a drive or remove a good drive, the unit's power must be turned off. There is little to no control over the settings aside from the RAID-level switch on the back. If you have used a real hardware RAID controller, then you will understand why having access to the multitude of settings is important. I can do and see everything from the 3ware or Areca web browser interface.

These units are OK if you only need storage expansion and have no room inside the computer case. As far as I can tell, most of these units have no email alerts if a drive dies, and few have alarms. With a 'proper' Areca RAID controller, I get email alerts and a loud beeping alarm. With my external SAS cases, I can service and replace any piece very easily with parts from local electronics stores (Micro Center, Fry's), or contact PC-Pitstop, where I bought them, and get parts the next day.

The biggest factor is reliability. There is a reason these units are cheap. Most come with only a 1-year warranty, whereas 'proper' RAID cards have 3 years. Also, not all backplanes are created equal. I have tried several cheaper 4-in-3 SATA backplanes and had problems with all of them; now I only use Raidage SAS backplanes at $140 each, with zero problems.

Back to performance: 'proper' RAID controllers have dedicated chips to calculate parity and run various throughput algorithms, which significantly increases performance when more than one stream of data is being written and/or read. These cheap external boxes usually use a combination of software and normal x86 CPUs, which results in serious performance degradation when more than one read/write operation is being performed. With my Areca 1680ix and six 2TB drives in RAID 5, I can download the new SxS-1A card via an ExpressCard adapter (60MB/s+) and copy a different project folder to two other backup eSATA drives at full speed, all while Premiere Pro is conforming from the same RAID 5 array.

Also, the dedicated RAID controllers are able to rebuild arrays 3-5 times faster; a rebuild can take a day or more with large arrays (so software RAID can take days). This unit is set up to give host access priority during the rebuild process, so if the array is being accessed during the rebuild, it will take far more time to finish. With my 3ware and Areca cards, I can adjust how much priority is given to the rebuild versus I/O operations.
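To put rough numbers on the rebuild-time point above (a back-of-the-envelope sketch; the 100 MB/s and 20 MB/s rates are illustrative assumptions of my own, not measured figures from these products):

```python
# Rebuilding one failed member of an array requires streaming roughly
# the full capacity of that drive back onto the replacement disk.
def rebuild_hours(drive_bytes, rate_bytes_per_s):
    return drive_bytes / rate_bytes_per_s / 3600

two_tb = 2 * 10**12  # one 2TB member drive

# Dedicated hardware controller on an idle array (assumed ~100 MB/s)
fast = rebuild_hours(two_tb, 100 * 10**6)

# Host-based/software RAID while the array is under load (assumed ~20 MB/s)
slow = rebuild_hours(two_tb, 20 * 10**6)

print(f"controller: {fast:.1f} h, software under load: {slow:.1f} h")
```

At the assumed slow rate the rebuild stretches past a full day, which matches the "can take days" experience described above.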

Andrew Smith
February 24th, 2011, 03:56 AM
That makes sense to me. Thanks for explaining that. I think it would be worth it to spend the $140 and be done with it.

Can you post a link to the Raidage SAS product, please? I'm having difficulty finding it via Google.

Andrew

Andrew Clark
February 24th, 2011, 04:58 PM
Hey Steve -

Thanks for that detailed info; I kinda had a hunch that it was a "too good to be true" type of thing!!

Anyways, what are the differences between the 3ware and Areca cards?

So in order to utilize those above RAID cards that you mentioned, one would have to purchase a MB for the 980x/990x CPU's (I believe it's the 1366 socket / x58 chipset MB's)?

SandyBridge MB's / CPU's vs. the x58 MB's / CPU's....one platform better / worse than the other?

Steve Kalle
February 24th, 2011, 05:10 PM
That makes sense to me. Thanks for explaining that. I think it would be worth it to spend the $140 and be done with it.

Can you post a link to the Raidage SAS product, please? I'm having difficulty finding it via Google.


Hi Andrew Smith (other Andrew, I'll get to you in a minute :)

This is just the 4-in-3 cage. The only issue with this cage is that it requires a reverse breakout cable from four SATA ports on the mobo to an SFF-8087 on the cage:
http://www.pc-pitstop.com/sas_cables_adapters/F7H87D.asp
http://www.pc-pitstop.com/sas_cables_enclosures/jage34r40ms.asp

This is the entire case I use, which includes the Raidage cage. I have two of these 8-bay cases and one 4-bay case. One 8-bay and the 4-bay are connected to my HP Z800, via the Areca 1680ix (8-bay) and onboard SATA ports (4-bay); the other 8-bay case is connected to my home PC (custom) via a 3ware 9750 RAID controller.
http://www.pc-pitstop.com/sas_cables_enclosures/sas8bay.asp

Steve Kalle
February 24th, 2011, 05:39 PM
Hey Steve -
Thanks for that detailed info.; kinda had a hunch that it was a "too good to be true" type of thing!!
Anyways, what are the differences between the 3ware and Areca cards?
So in order to utilize those above RAID cards that you mentioned, one would have to purchase a MB for the 980x/990x CPU's (I believe it's the 1366 socket / x58 chipset MB's)?
SandyBridge MB's / CPU's vs. the x58 MB's / CPU's....one platform better / worse than the other?

Areca vs. 3ware: both are great. I have had a 3ware 9650-8 running 24/7 since 2006 with zero problems. Our $18,000 broadcast server came with a 3ware 9690 RAID controller. The current generation is the 9750, with anywhere between 4 and 24 SAS/SATA ports; the prior generation is the 9690, and the one before that is the 9650.

Yes, you need an X58 mobo, preferably with a PCIe 2.0 x8 slot just for the RAID card (this is in addition to a PCIe x16 slot for the graphics card).

I looked into this Sans Digital unit about a year ago to use for backups and to replace the two external eSATA enclosures I had been using, because I go through two 2TB drives every 4-6 months. They are nice for the money, but you have no control whatsoever. Just being able to use different stripe sizes for RAID 0 or 5 can make a huge difference in both sequential read/write and random-access performance. A large stripe is beneficial to video editors due to very large file sizes, whereas a smaller stripe can be beneficial for random access and smaller files. I would imagine that this Sans unit was not designed with the video editor in mind.
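To make the stripe-size trade-off concrete, here is a toy model (my own illustration, not anything from the Sans Digital or Areca firmware) of how the RAID 0 stripe size determines which disk serves a given byte offset:

```python
def disk_for_offset(offset, stripe_bytes, n_disks):
    """Map a logical byte offset to (disk index, offset on that disk) in RAID 0."""
    stripe_index = offset // stripe_bytes
    disk = stripe_index % n_disks
    # Full rounds of stripes already laid on this disk, plus the position
    # inside the current stripe.
    disk_offset = (stripe_index // n_disks) * stripe_bytes + offset % stripe_bytes
    return disk, disk_offset

MB = 1024 * 1024

# With a large 1MB stripe, a 4MB video read touches each of the 4 disks
# exactly once -- one long sequential run per disk.
disks_hit_large = {disk_for_offset(i * MB, 1 * MB, 4)[0] for i in range(4)}

# With a small 64KB stripe, the same 4MB read is chopped into 64 chunks
# cycling across the disks -- better suited to many small random accesses.
chunks_small = 4 * MB // (64 * 1024)

print(sorted(disks_hit_large), chunks_small)
```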

One VERY important aspect many people miss: get a very good UPS. I spent over 30 hours recovering 950GB of data, which was not backed up, because of a simple power outage that corrupted a RAID array. Luckily, I was able to recover 95% of the data, and I learned my lesson. Ever since, I have been using a $600 UPS from APC with a second battery for extended run time.

With current high-efficiency power supplies, the need for a 'true/pure' sine wave UPS is more evident. The bad part is that these UPSes cost far more. I learned about this issue last year when a regular UPS would not work properly with my HP Z800's 89%-efficient 1100W PSU. HP released a document stating that many of their 85%-and-up PSUs require a true sine wave; otherwise a regular UPS could still cause the computer to shut off during a power outage.

I use this $450 Cyberpower 1500w for my HP Z800.
http://www.newegg.com/Product/Product.aspx?Item=N82E16842102068

And this $520 Cyberpower 1500w rackmount for our servers.
http://www.newegg.com/Product/Product.aspx?Item=N82E16842102019

However, if you can afford it, APC makes the best UPS.

Scott Chichelli
February 25th, 2011, 08:12 AM
Hey Steve,

thanks for that..
a big +1 on the UPS. to add to it make sure the UPS is larger than your power supply and accessories attached (LCDs etc)

bad /dirty power (low ) is worse than spikes. this is the bigest killer of drives and other components in a system.
more people than not have dirty power. (less than 120v)

"line iteractive" is what you want

warning people do not be misslead by "VA" VA is not watts.

if you have a 1000W power supply you need a 1000 W UPS. 1KA (1000VA) is NOT 1000W
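Scott's VA-vs-watts warning in numbers (a sketch; the 0.6 power factor is a typical figure for consumer UPS units and varies by model, so always check the actual watt rating on the spec sheet):

```python
# A UPS's usable watt rating is its VA rating times its power factor (PF).
# Many consumer units have a PF around 0.6, so "1000VA" buys roughly 600W.
def usable_watts(va_rating, power_factor=0.6):
    return va_rating * power_factor

def required_va(load_watts, power_factor=0.6):
    return load_watts / power_factor

print(usable_watts(1000))  # well short of a 1000W power supply
print(required_va(1000))   # the VA rating needed to back a true 1000W load
```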

I prefer Tripp Lite; every bit as good as APC and less money.


As far as RAID cards, I prefer Intel.
Scott
ADK

Scott Chichelli
February 25th, 2011, 08:15 AM
Hey Steve -

Thanks for that detailed info.; kinda had a hunch that it was a "too good to be true" type of thing!!

Anyways, what are the differences between the 3ware and Areca cards?

So in order to utilize those above RAID cards that you mentioned, one would have to purchase a MB for the 980x/990x CPU's (I believe it's the 1366 socket / x58 chipset MB's)?

SandyBridge MB's / CPU's vs. the x58 MB's / CPU's....one platform better / worse than the other?

What are you doing workflow-wise that makes you think you need an 8-drive or larger RAID array?
Are you doing RED 4K or uncompressed?
20 layers and masses of effects?

Scott
ADK

Randall Leong
February 25th, 2011, 10:20 AM
And when the Sandy Bridge motherboards do go back on the market, spend the little extra money for an i7-2600(K) rather than settling for an i5-2500(K): The quad-core i5 CPUs lack HyperThreading, so that performance in video editing won't be as good as with the i7 (although the one low-ranking result of an i5-2500 system in the PPBM5 list was partly due to that system having only 4GB of RAM).

Agree 100%.
The 2500 does not have HT; the 2600 does. This made a huge difference in benchmarks.

Agree with Scott. I've discovered another Sandy Bridge system with only 4GB of RAM on that PPBM5 list - this time, an i7-2600K overclocked to 4.2GHz - and that system performed only a tad below my i7-920 system overclocked to 3.7GHz with 6GB of RAM. More RAM would have helped both systems - the 2600K more so than the 920.

Steven Davis
March 2nd, 2011, 08:04 PM
Welp, it's up and running. An i7:

EVGA 01G-P3-1373-AR GeForce GTX 460 (Fermi) Superclocked EE 1GB 256-bit GDDR5 PCI Express 2.0 x16 HDCP Ready SLI Support ...

EVGA X58 FTW3 132-GT-E768-KR LGA 1366 SATA 6Gb/s USB 3.0 ATX Intel Motherboard

COOLER MASTER Silent Pro RSA00-AMBAJ3-US 1000W ATX12V v2.3 / EPS12V v2.92 SLI Ready 80 PLUS BRONZE Certified Modular Active ...

Patriot Viper II Sector 7 Edition 12GB (3 x 4GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Desktop Memory Model PV7312G1600ELK

Intel Core i7-950 Bloomfield 3.06GHz LGA 1366 130W Quad-Core Processor BX80601950


Western Digital Caviar Black WD1002FAEX 1TB 7200 RPM SATA 6.0Gb/s 3.5" Internal Hard Drive -Bare Drive

Pioneer Black Blu-ray Burner SATA BDR-206BKS - OEM

COOLER MASTER Hyper N 520 RR-920-N520-GP 92mm Sleeve CPU Cooler Intel Core i7 compatible


Other than not having any power when I first hit the button, lol, it is running pretty well. Windows 7 is interesting; it's much better than Vista.

Randall Leong
March 21st, 2011, 11:09 PM
Warning, people: do not be misled by "VA". VA is not watts.

If you have a 1000W power supply you need a 1000W UPS. 1kVA (1000VA) is NOT 1000W.

I've learned that by reading up on VA ratings for UPSes. 1000VA is actually only 576W (for 120V/60Hz markets); for a 1000W load, the UPS's VA rating has to be at least 1750VA.

By the way, I have decided to give Sandy a spin now that a limited number of revised B3-stepping P67/H67 motherboards are in stock (with more coming next week). The first part (an Asus P8P67 Pro motherboard) is now in my possession. I also ordered 16GB (4 x 4GB) of RAM, and will purchase a 2600K within the next few days. I will be holding on to my current i7-950/X58 setup a little while longer because I will be running tests on both systems with the new memory. (This idea came about after I made a claim that the X58 platforms really need astronomically expensive disk setups just to perform as well as a Sandy Bridge system with a more modest but still multi-disk RAID setup.) The loser between the two (in my own environment, with equal KISS disk setups) gets returned or sold.

Scott Chichelli
March 22nd, 2011, 08:08 AM
looking forward to your report.

Scott

Randall Leong
March 22nd, 2011, 10:14 PM
looking forward to your report.

Got the CPU today. Still awaiting the arrival of the 16GB of RAM in the mail...

I'm glad I got the RAM at a good price. Otherwise, I would have had to limp along with only 8GB of RAM on that Sandy Bridge system (via four 2GB modules).

That said, I will be testing both my current setup and my new setup (both stock and overclocked) with 16GB. I will also test my current system at stock with 12GB (both in its current configuration of six 2GB modules and with three of the four 4GB modules), and my new setup with 8GB (both with four 2GB modules and with two 4GB modules).

A report on all of the results will be posted both in a future post in this thread and on the PPBM5 site.

Randall Leong
March 26th, 2011, 08:13 PM
I just received the RAM this morning.

I spent the better part of the evening putting the new system together. Will be testing it at both stock speed and some overclocked speeds. I decided to forego the 8GB tests on the new system, and stick with 16GB.

Here are the stock speed results of my new 2600K versus my old i7-950, both with 16GB of DDR3-1600 RAM (all other components except for the motherboard are the same for both systems):

i7-2600K: 220 seconds overall (90 seconds AVI, 37 seconds MPEG-2, 82 seconds AVC, 11 seconds MPE); Performance Index: 184.9
i7-950: 247 seconds overall (100 seconds AVI, 43 seconds MPEG-2, 93 seconds AVC, 11 seconds MPE); Performance Index: 202.8

And yes, the Gigabyte motherboard that I used for the i7-950 claims to support "full" triple-channel with four DIMMs of equal size.

For a fairer comparison, I re-ran the same test on the stock-speed i7-2600K system with only 8GB of RAM, and compared it with the results of an otherwise identically equipped stock-speed i7-950 system with 12GB of RAM:

i7-2600K (8GB): 288 seconds overall (89 seconds AVI, 102 seconds MPEG-2, 87 seconds AVC, 10 seconds MPE); Performance Index: 263.5
i7-950 (12GB): 297 seconds overall (99 seconds AVI, 93 seconds MPEG-2, 94 seconds AVC, 11 seconds MPE); Performance Index: 265.4

The stock i7-2600K system, in my testing, does outperform the stock i7-950 system due in large part to the higher stock clock speed of the 2600K. The i7-950 system with 16GB of RAM, although it was noticeably faster than that same system with 12GB of RAM, was hobbled slightly by its memory controller actually operating in the hybrid Flex mode in which the extra 4GB ran in single-channel mode rather than true triple-channel.

One more note: I discovered that the highest-performing stock i7-2600(K) system currently on the PPBM5 results list (with a total time of 264 seconds) did not have a RAID array at all. Instead, it relied on a single 2TB 7200RPM hard drive for everything except the OS; as a result, that system was hobbled by a slow disk subsystem. For comparison, the stock i7-950 system in the PPBM5 list that Harm mentioned in the Adobe forums delivered a 243-second result, mainly due to its 24GB of RAM running in true triple-channel mode, even though its two 150GB 10,000RPM hard drives in RAID 0 underperformed several two-disk RAID 0 arrays of 7200RPM drives (it was still slower than my stock-speed i7-2600K results with 16GB of RAM, though).

Results have been submitted for posting on the PPBM site.

Scott Chichelli
March 28th, 2011, 03:44 PM
While PPBM is OK for getting an idea, a better benchmark is to take the same footage on both systems and render it out, e.g. AVCHD to H.264.

Scott

Randall Leong
March 28th, 2011, 04:06 PM
While PPBM is OK for getting an idea, a better benchmark is to take the same footage on both systems and render it out, e.g. AVCHD to H.264.

To make that comparison meaningful, I have to use the exact same video and disk components on both systems. (This means the only variables allowed are the CPU, motherboard and RAM amount.) If any of the disk or video components differ even slightly between the two systems, the results would definitely be skewed one way or the other.

The comparison that you gave between the i7-2600K and the i7-980X earlier is somewhat meaningless because there are too many variables between the two systems: disk subsystems, graphics card drivers and memory speeds, as well as the degree of overclocking. For all I know, the memory in the 2600K system was running at its official DDR3-1333 speed while the memory in the i7-980X system was held back to DDR3-1066. And in practice, dual-channel 1333-speed memory actually delivers greater bandwidth than triple-channel 1066-speed memory.

Scott Chichelli
March 29th, 2011, 07:24 AM
All RAM in my systems always runs at 1600, nothing less.
For my benchmarks we always use the same disks, etc.: two sets of RAID 0 plus a standard SATA OS drive (unless doing a drive benchmark); then everything is the same but the drives.
We have also tested SSDs as the media drive, temp-file drive, OS drive, etc. (pointless).

The only variables are the mobo and CPU.
We may have a different video card in there, but really, for the test we are doing, it matters not.
In fact, here the SB has the lower video card and still beats the 980X:

I7 2600K 3.4GHZ Turbo to 4.7GHz
16GB Blackline 1600 CL 9
470GTX
3 Layer - 31:35
4 Layer - 34:35

I7 980X 4GHZ
12GB Blackline 1600 CL 9
570GTX
3 Layer - 32:30
4 Layer - 35:25

As far as RAM quantity:
note the 8GB and slower video card vs. the 16GB and faster video card,
with the same CPU clock. Not a huge performance difference at all.
Break it down to seconds:
40:49 = 2449
40:05 = 2405
About 1.8% better performance with double the RAM and a better video card.

I7 2600 3.4GHZ Turbo to 3.9GHz
8GB Blackline 1600 CL 9
460GTX
4 WD 1Tb Sata 64 Meg Cache 600 Drives in 2 Raid 0 arrays
3 Layer - 37:35
4 Layer - 40:49

The same CPU with:
16GB Blackline 1600 CL 9
570GTX
3 Layer - 36:17
4 Layer - 40:05
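The seconds arithmetic above can be checked in a couple of lines:

```python
def to_seconds(mmss):
    """Convert a 'MM:SS' render time to total seconds."""
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

t_8gb = to_seconds("40:49")   # the 8GB / GTX 460 run
t_16gb = to_seconds("40:05")  # the 16GB / GTX 570 run
gain = (t_8gb - t_16gb) / t_8gb * 100

print(t_8gb, t_16gb, f"{gain:.1f}%")  # 2449 2405, about a 1.8% improvement
```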

Now, two things I need to comment on. The AVCHD to H.264 test is too light on a system to show any serious differences.
We picked it two years ago as it was (and still is) the most common workflow. (We also do a RED 4K test, which shows differences better.)

I also think the PPBM test is too light and too short as well, but for a downloadable test you have no choice.

So we started adding the lightning effect, and we haven't finished collecting numbers for everything yet (too stinking busy :-) ).

Dual Xeon X5680 CPU's at 4.0GHz
48 GB DDR3 1600 Blackline at 1600
4 WD RE4 2Tb Sata 64 Meg Cache Drives in Raid 5 array
580GTX
3 Layer - 31:00
4 Layer - 31:11

3 Layer w/Lightning - 1:11:52
4 Layer w/Lightning - 1:26:36

I7 2600K 3.4GHZ Turbo to 4.5GHz
16GB Blackline 1600 CL 9
470GTX
2x raid 0
3 Layer w/Lightning - 1:46:37
4 Layer w/Lightning - 2:05:44


So again, as I said: you need to test both systems with YOUR normal workflow, and make them as identical as possible, to see which better suits you.

Scott
ADK

Randall Leong
March 29th, 2011, 09:41 PM
Thanks for the proof that the LGA 1155 platform is ill-suited for anything more than a KISS drive configuration. Although there are technically four PCIe lanes open for PCIe expansion cards on the LGA 1155 platform (after accounting for onboard USB3, the PCIe-to-PCI bridge and additional SATA/IDE controllers), the few motherboards that have PCIe x4 slots actually share the bandwidth of that slot with the PCIe x1 slots on those same boards. This means that ANY card plugged into a PCIe x1 slot forces the x4 slot to run in x1 mode, which hurts the performance of most discrete hardware RAID controllers. The only other place to put a RAID card is the second PCIe "x16" graphics slot (which is bifurcated from the main PCIe x16 slot, so both slots are forced to run in x8 mode). That does hurt MPE GPU performance.

A few high-end LGA 1155 motherboards have onboard PCIe lane repeaters. Unfortunately, while they create additional PCIe lanes, they do not change the total bandwidth of the CPU's integrated PCIe controller. So instead of 16 PCIe lanes operating at full PCIe 2.0 bandwidth, you now have 32 PCIe lanes that are artificially restricted to PCIe 1.0 bandwidth -- 2.5 GT/s instead of 5.0 GT/s.
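
The "more lanes, same bandwidth" point can be checked with back-of-the-envelope arithmetic, using the published PCIe per-lane rates (2.5 GT/s for gen 1, 5.0 GT/s for gen 2, both with 8b/10b encoding). This is a sketch of the claim as stated, not a measurement:

```python
# Usable one-direction bandwidth for a given PCIe generation and lane count.
# 8b/10b encoding carries 8 payload bits per 10 transferred bits.

GT_PER_LANE = {"1.0": 2.5, "2.0": 5.0}  # gigatransfers/s per lane

def usable_mb_s(gen: str, lanes: int) -> float:
    payload_gbit = GT_PER_LANE[gen] * lanes * 8 / 10  # payload Gbit/s
    return payload_gbit * 1000 / 8                    # -> MB/s

print(usable_mb_s("2.0", 16))  # 16 native gen-2 lanes -> 8000.0 MB/s
print(usable_mb_s("1.0", 32))  # 32 repeater lanes at gen-1 rate -> 8000.0 MB/s
```

Either way the total comes out the same, which is exactly the poster's point: the repeater multiplies lane count, not aggregate bandwidth.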

The condensed version of my above post is:

Unless someone has only an Nvidia graphics card and a discrete hardware RAID card (and no other expansion cards whatsoever), the current Sandy Bridge (LGA 1155) platforms do not allow both the graphics card and the RAID card to be run at their full speeds at the same time. Either the disk performance or the GPU performance would suffer, especially if that configuration also includes a sound card that's of better quality than the onboard one.

I was partially wrong about the above quotes. Looking at other posts in other forums, I discovered that even the PCI-e x16 slot dropping to x8 speed does not reduce the performance of current Nvidia GPUs because even the fastest of such GPUs (including the GTX 580) do not take full advantage of x8, let alone x16. This means that if one has a P67 motherboard with two x16 physical PCI-e slots that both run electrically at x8 when both slots are filled (these include the Asus P8P67 Pro or higher, the Gigabyte GA-P67A-UD4 or higher, the MSI P67A-GD## series and the Intel DP67BG), one can run a hardware RAID card from the likes of Areca with virtually no MPE performance penalty. Forget about using a hardware RAID card on a P67 or any other motherboard with only one PCI-e x16 slot; the two just won't work together. And if the secondary physical x16 slot runs in x4 or x1 mode (as is the case with the Asus P8P67 or P8P67-LE, or the Gigabyte GA-P67A-UD3 series), the disk performance (corresponding to the AVI portion of the PPBM5 benchmark) will be less than optimal - and might not be any faster than a simple two-disk RAID 0 array on the onboard Intel software RAID controller.

I've also discovered that low MPE performance with any given Nvidia GPU is due to an improperly tuned system and/or improper graphics driver settings and/or an excessive number of processes running in the background, not the limitations of the PCI-e bus in current mainstream Intel platforms.

As such, the PCI-e limitations of the LGA 1155 platform can potentially reduce performance with future components. In practice, however, the performance reduction with current components is virtually nil.

As for the replacement for the current LGA 1366, there will be none (technically). Based on ever-changing plans, there will be no desktop CPU from Intel that uses the LGA 1356 socket. All of the new Intel CPUs higher than the current 2600K will be LGA 2011 only. However, the desktop i7 Extreme LGA 2011 CPUs will be gimped to only 24 PCI-e lanes (instead of the full 40 PCI-e lanes in the server Xeon versions of the same CPU). That's only four PCI-e lanes more than the 20 PCI-e lanes in the current LGA 1155 CPUs (plus any additional PCI-e lanes available from those on the PCH). Because of this gimping, the forthcoming LGA 2011 platform replacement for the current i7-9xx series will be much less attractive than one would expect since the only CPUs that are equipped to support a hardware RAID controller would cost more than $2,000 for each CPU. The current LGA 1155/P67 platform theoretically has 28 total PCI-e lanes, of which four of those from the CPU are disabled during manufacturing (leaving only 16 PCI-e lanes available from the CPU) and anywhere from four to six from the P67/H67 PCH being eaten up by the motherboards' onboard devices (e.g. USB 3.0, an extra SATA 6.0 Gbps controller, etc.).

That said, there will be LGA 1356 CPUs on the market - but they (according to current plans) will be sold only as Xeon processors for single-CPU servers. And even if the higher-end i7s were to be available in LGA 1356, that socket would not be much if any better of a choice than LGA 1155: The LGA 1356 CPUs will be designed with only 24 PCI-e 3.0 lanes in the CPU - still not enough lanes unless the chipset can access more than 16 on-CPU lanes.
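
The lane budgets quoted across the last few paragraphs can be tallied in a quick sketch. The counts below are the ones from the post (2011-era plans, not confirmed final specs), and the x16 GPU + x8 RAID requirement is the configuration the thread has been discussing:

```python
# CPU PCIe lane counts as quoted in the post, checked against the
# x16 GPU + x8 hardware RAID configuration discussed in the thread.

cpu_lanes = {
    "LGA 1155 (Sandy Bridge)": 16,       # 20 on die, 4 disabled per the post
    "LGA 2011 (desktop i7 Extreme)": 24,
    "LGA 2011 (Xeon)": 40,
    "LGA 1356 (Xeon)": 24,
}

GPU_X16 = 16  # full-speed graphics slot
RAID_X8 = 8   # typical discrete hardware RAID controller

for platform, lanes in cpu_lanes.items():
    spare = lanes - (GPU_X16 + RAID_X8)
    verdict = f"{spare} lanes to spare" if spare >= 0 else "does not fit"
    print(f"{platform}: {lanes} CPU lanes -> x16 GPU + x8 RAID: {verdict}")
```

On these figures only the 40-lane Xeon leaves any headroom; the 24-lane parts fit the pair exactly with nothing left over, and LGA 1155 cannot run both at full width at all.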