October 14th, 2007, 06:32 AM | #1
Regular Crew
Join Date: Apr 2007
Location: Hertfordshire
Posts: 118
Network Rendering generational loss
Right, I have a good few machines lying around the place now, and I've figured out how to network them by FireWire, which in theory should provide an excellent backbone for a distributed rendering system. I also have a huge project that takes some 36 hours to render on a single machine.
I haven't actually been able to get the network rendering working yet, but I have noticed that Vegas does not do a true distributed render to the target format. It appears to render in chunks to an intermediate and then stitch it all together on one machine, transcoding it to the target format. Am I right in thinking this? It seems a bit odd to me: strictly speaking, with most formats it should not matter if many machines are rendering one file, because if each machine starts rendering from a keyframe, the result should be the same as rendering on one machine. For example, with a two-pass WMV render it should not even matter which computer did the first pass on any given piece of the video; if the render is truly distributed, all machines should be able to access the lookup table (or whatever) and allocate the right number of bits to any part of the render.

I get the feeling that what is needed is targeted routines for specific core codecs (particularly WMV and MPEG-2) to allow true distribution to the target format. Otherwise you are liable to incur generational loss and artifacts, or to need a massive server to accommodate an uncompressed intermediate, and after the massively increased bandwidth and cost that requires, you might find it works better going straight to the target format on one machine! The catch-all solution should remain as a backup for the formats that are not directly supported. Or have I got this all back to front?
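The keyframe argument can be sketched in code. This is a hypothetical illustration of the two stitching strategies discussed above, not Vegas's actual internals: chunk boundaries are aligned to keyframes so each machine's piece decodes independently, and re-encoding an intermediate adds one extra lossy generation over direct bitstream concatenation.

```python
# Hypothetical sketch: splitting a render at keyframe boundaries and
# counting lossy generations for the two stitching strategies.
# None of these names correspond to Vegas's real API.

def split_at_keyframes(total_frames, keyframes, workers):
    """Split [0, total_frames) into per-worker ranges cut only at keyframes."""
    target = total_frames / workers          # ideal chunk length
    cuts = [0]
    for k in keyframes:
        if k - cuts[-1] >= target and len(cuts) < workers:
            cuts.append(k)
    cuts.append(total_frames)
    return list(zip(cuts[:-1], cuts[1:]))

def stitch(chunks, reencode):
    """Direct concatenation keeps the chunks' own generation count;
    transcoding an intermediate on the master adds one more."""
    generations = max(c["generations"] for c in chunks)
    return generations + (1 if reencode else 0)

ranges = split_at_keyframes(1000, keyframes=[0, 250, 500, 750], workers=4)
chunks = [{"range": r, "generations": 1} for r in ranges]
print(ranges)
print(stitch(chunks, reencode=False))  # concat to target format: 1 generation
print(stitch(chunks, reencode=True))   # intermediate + transcode: 2 generations
```

The point of the sketch is the last two lines: if every chunk is already in the target format and cut on a keyframe, stitching is a copy rather than a second encode.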
October 14th, 2007, 07:03 AM | #2
Inner Circle
Join Date: Dec 2002
Location: Augusta Georgia
Posts: 5,421
I am surprised that you networked your computers via FireWire. I am assuming that you are using 1394a (FireWire 400) and not the FireWire 800 variety.
Do you have Ethernet available on all computers? If so, what speed is it? If you have Gigabit Ethernet (1000 Mb/s), not 100 or 10, then I would use Ethernet; you could probably upgrade to Gigabit at a reasonable cost. Since you do not have your network actually working yet, I suggest you try your existing Ethernet, even if it is a slower Ethernet, and then upgrade as desired.

It is true that Vegas splits the file up into chunks for rendering. I assume that they are doing it correctly.
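As a rough back-of-envelope for the link-speed question (the ~13 GB/hour DV figure and the 60% usable-throughput figure are assumptions, not measurements):

```python
# Time to move an hour of DV source media to a render node at various
# link speeds, assuming ~60% of nominal bandwidth is usable in practice.

def transfer_minutes(size_gb, link_mbps, efficiency=0.6):
    """Minutes to move size_gb (decimal GB) over a link_mbps link."""
    bits = size_gb * 8 * 1000**3
    seconds = bits / (link_mbps * 1e6 * efficiency)
    return seconds / 60

dv_hour_gb = 13  # roughly 13 GB per hour of DV at 25 Mb/s video
for mbps in (100, 400, 1000):  # Fast Ethernet, FireWire 400, Gigabit
    print(f"{mbps:>4} Mb/s: {transfer_minutes(dv_hour_gb, mbps):.1f} min")
```

Even at the slowest link the copy takes minutes against a 36-hour render, which supports the suggestion to start with whatever Ethernet is already in place.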
__________________
Dan Keaton
Augusta Georgia

Last edited by Dan Keaton; October 14th, 2007 at 09:33 AM.
October 14th, 2007, 08:45 AM | #3
Regular Crew
Join Date: Apr 2007
Location: Hertfordshire
Posts: 118
I was originally thinking about trying it via Ethernet, but I actually think that in this case, at least, FireWire is the better option. For one thing, Ethernet has a terrible habit of failing after a period of time; most chipsets are fairly buggy. Ethernet also carries a lot of lag. In my scenario, with three machines altogether, taking a router out of the equation and having the master machine slave out directly to the two other units is very appealing.
I've got a feeling that FireWire at 400 Mb/s may actually outperform a Gigabit connection at most rendering tasks, partly because of the removal of router lag, and partly because FireWire itself is not a packet-based connection (although TCP/IP is). If you think about it, at no point during the render is any one machine going to be working fast enough to call for more than 400 Mb/s of data. Where the bottleneck is introduced is the lag through the router and two Ethernet adapters whenever a machine requests a few packets of data. I might test this further later on with a few pings to see what the result is, but I am quietly confident the FireWire links will outperform Gigabit Ethernet considerably.
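The proposed ping comparison can be scripted. A minimal sketch of a TCP round-trip timer follows; the in-process echo server is just a stand-in, and in practice `host` would be the FireWire-IP or Ethernet address of a slave machine (the address here is illustrative):

```python
# Minimal TCP round-trip timer for comparing link latency.
# The in-process echo server below stands in for a remote render node.
import socket
import threading
import time

def echo_server(sock):
    while True:                      # serve one connection at a time
        conn, _ = sock.accept()
        with conn:
            while data := conn.recv(1024):
                conn.sendall(data)

def median_rtt_ms(host, port, probes=20):
    """Median round-trip time of small messages over one TCP connection."""
    samples = []
    with socket.create_connection((host, port)) as s:
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        for _ in range(probes):
            t0 = time.perf_counter()
            s.sendall(b"ping")
            s.recv(1024)
            samples.append((time.perf_counter() - t0) * 1000)
    samples.sort()
    return samples[len(samples) // 2]

srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # loopback for the demo; use a slave's address on a real link
srv.listen(1)
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()
print(f"median RTT: {median_rtt_ms('127.0.0.1', srv.getsockname()[1]):.3f} ms")
```

Running this against each link in turn would give comparable numbers, without depending on the ping implementation of any one OS.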
October 14th, 2007, 09:38 AM | #4
Inner Circle
Join Date: Dec 2002
Location: Augusta Georgia
Posts: 5,421
If all the machines are local, why go through a router? Of course, you may need the router for other purposes.
A nice Ethernet switch will do the job nicely. Even a simple Ethernet hub would work in your application.

Do you have your FireWire network working yet? If so, and you have the equipment to do an Ethernet test, I would be interested in the results.
__________________
Dan Keaton
Augusta Georgia
October 14th, 2007, 09:49 AM | #5
Dunno where you get your info, but FireWire networking is excruciatingly S~L~O~W! I use a Gigabit network for distributed rendering. The real bottleneck is the overhead Vegas uses to re-stitch renders from different machines back together. FireWire networking is even slower and chokes; Gigabit works seamlessly. Even at that, there is very little to be gained by distributed rendering.
October 14th, 2007, 10:12 AM | #6
Regular Crew
Join Date: Apr 2007
Location: Hertfordshire
Posts: 118
This is interesting. I have experience with networking going back as far as Token Ring, and I have never really been a fan of Ethernet as a LAN solution. I can't see how FireWire could introduce a bottleneck in this scenario. Is it possible that there is a firewall issue?
The equipment needed to get FireWire networking going is just a plain FireWire cable; there is no switch, hub, or crossover to go through. What is more, you can access each machine directly from the master machine, rather than switching or routing, so it actually seems a much more integrated solution. I noticed that when I attached a Z1 to a spare FireWire port on one of the slave machines, it appeared as usable on both the slave and the master!

Unfortunately, I cannot test a Gigabit Ethernet LAN at present, as I do not have a capable hub or router about. Nor have I actually got distributed rendering working via Ethernet or FireWire! But this could all be academic if Vegas does indeed just transcode lots of little chunks of files into one large file in two generations, instead of piecing together a single file from chunks in one generation. I'll scrap it from my proposed workflow if that is the case.