Robert Wheeler
October 14th, 2007, 06:32 AM
Right, I have a good few machines lying around the place now, and I've figured out how to network them over FireWire, which in theory should provide an excellent backbone for a distributed rendering system. I also have a huge project that takes some 36 hours to render on a single machine.
As yet, I haven't actually been able to get network rendering working; however, I have noticed that Vegas does not seem to do a true distributed render to the target format. It appears to render in chunks to an intermediate, then stitch it all together on one machine and transcode to the target format. Am I right in thinking this?
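Just to check I'm describing the same thing, here's the pattern I mean, sketched in Python. Everything in it is illustrative: the host names, function names and chunk size are all made up, and the two placeholder functions stand in for whatever Vegas actually does internally.

    from concurrent.futures import ThreadPoolExecutor

    def render_chunk(host, start_frame, end_frame):
        # Placeholder: ask one render node for frames
        # [start_frame, end_frame) as an intermediate file.
        return f"//{host}/chunk_{start_frame}_{end_frame}.avi"

    def stitch_and_transcode(chunk_files, target):
        # Placeholder for the single-machine stage: join the
        # intermediates end to end, then transcode to the target format.
        pass

    def distributed_render(hosts, total_frames, chunk_size):
        # Farm chunks out round-robin across the render nodes.
        with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
            futures = []
            for i, start in enumerate(range(0, total_frames, chunk_size)):
                end = min(start + chunk_size, total_frames)
                futures.append(pool.submit(render_chunk, hosts[i % len(hosts)], start, end))
            chunk_files = [f.result() for f in futures]
        # The bottleneck: everything funnels back through one machine.
        stitch_and_transcode(chunk_files, "output.wmv")

    distributed_render(["node1", "node2", "node3"], total_frames=54000, chunk_size=3000)

Note the final stage is strictly serial, which is why the intermediate format matters so much.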
It seems a bit odd to me because, strictly speaking, with most formats it should not matter if many machines are rendering one file: as long as each chunk starts from a keyframe, it should work just the same as rendering on one machine. For example, with a two-pass WMV render it should not even matter which computer did the first pass on any one piece of the video. If the render were truly distributed, all machines should be able to access the first-pass lookup table (or whatever it is) and allocate the right number of bits to any part of the render.
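What I'm imagining is that the chunk boundaries just have to land on keyframes (GOP boundaries), so the encoded pieces can be joined without re-encoding. A rough sketch of picking such boundaries, with invented numbers (54,000 frames is 36 minutes at 25 fps, and the keyframe interval is a stand-in):

    def split_points(total_frames, keyframe_interval, chunks_wanted):
        # Choose chunk boundaries that land exactly on keyframes,
        # so each chunk can be encoded independently and then
        # concatenated in the target format.
        ideal = total_frames / chunks_wanted
        points = [0]
        for i in range(1, chunks_wanted):
            # Snap each ideal boundary to the nearest keyframe.
            snapped = round(i * ideal / keyframe_interval) * keyframe_interval
            points.append(int(snapped))
        points.append(total_frames)
        return points

    boundaries = split_points(total_frames=54000, keyframe_interval=250, chunks_wanted=4)
    print(boundaries)  # [0, 13500, 27000, 40500, 54000]

For the two-pass case, every node would read the same shared first-pass statistics file, so bit allocation stays globally consistent no matter which machine encodes which chunk.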
I get the feeling that what is needed is targeted routines for specific core codecs (particularly WMV and MPEG-2) to allow true distribution to the target format; otherwise you are liable to incur generation loss and artifacts, or to need a massive server to accommodate an uncompressed intermediate. And given the massively increased bandwidth and cost that would require, you might find it works better to go straight to the target format on one machine! The catch-all solution should remain as a backup for formats that are not directly supported.
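To put rough numbers on the bandwidth worry (assuming PAL SD, 8-bit 4:2:2 uncompressed, and the nominal FireWire 400 bus speed):

    width, height = 720, 576      # PAL SD frame
    bytes_per_pixel = 2           # 8-bit 4:2:2
    fps = 25

    stream_mbits = width * height * bytes_per_pixel * fps * 8 / 1e6
    print(f"One uncompressed stream: {stream_mbits:.0f} Mbit/s")   # ~166 Mbit/s

    firewire_mbits = 400          # nominal FireWire 400
    print(f"Streams per bus: {firewire_mbits / stream_mbits:.1f}") # ~2.4

So barely two nodes could push uncompressed intermediates over one FireWire 400 bus at real-time rates, and if the nodes render faster than real time the bus saturates even sooner.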
Or have I got this all back to front?