July 20th, 2006, 12:10 PM | #1 |
Inner Circle
Join Date: May 2003
Location: Australia
Posts: 2,762
Article on the development of the inside of your HD1.
I completely stumbled on this article while trying to research something else:
http://neasia.nikkeibp.com/neasia/004514

Analysis: the HD1 uses separate chips for still and video. The video core was developed in house and has 15 RISC processors, nice. Each chip uses "several million gates" and has its own 32MB of separate RAM. Unless a lot of those several million gates are static memory (though talking about gates leads me to believe it is a gate array, where such memory is either built in or inefficient to implement in gates), that leaves room for a lot more processors, or for better processing.

Amazingly, they found that noise was a bigger problem than expected on a large HD screen (I knew that already). But they also found that trying to bust the noise by expanding the motion prediction search choked the data transfer rate to the DRAM (see the rough bandwidth sketch at the end of this post). So they tweaked the MPEG-4 settings to get the best compromise, and it seems to show (in certain diagonals ;). With all that tweaking, it looks like the design is already maxed out. That doesn't necessarily mean the chip itself is maxed out, though that is possible, but that the transfer rate has no headroom left. We might not be able to get better noise reduction (which would improve compression in lower light) or a higher data rate out of the present camera (though that is not conclusive), but unless the diagonal problem is hardwired, most of the problems should hopefully be fixable by a firmware update. I personally doubt they can't do better noise removal; it depends on the level of the people you have working for you. An alternative design might have done the job, as would upgrades to the chip manufacturing process and RAM speed, so in time this should be possible on a new model. Interestingly, unless the cores are locked, Sanyo could upgrade them with a more processing- and quality-friendly codec, like the one I have in mind to do.

"Several million gates" might also indicate there is sufficient circuitry external to the cores. In a design like this, it pays to maximise performance by implementing application-specific support circuits for performance-intensive tasks that can be done economically in silicon; relying too heavily on cores that lack the performance can limit the whole design. Gate arrays come pre-configured with blank space, circuits, memory and so on, and you order the pattern of circuitry to go on the blank space. They are cheaper to do than full custom silicon, and circuit patterns may be available from libraries. Unless they implemented the on-chip memory in gates (which would consume a fair number of them), it is likely they used a chip with enough RAM blocks in the right places. That would narrow down the choice of chips, affecting the extent of the design that can be used.

What people don't realise is that there are much simpler ways to do things than MPEG-4, H.264, or polygons, but they haven't been discovered yet. People would be amazed at what a million-transistor chip could do if they knew.
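To put some rough numbers on why a wider motion search chokes the DRAM, here is a back-of-envelope sketch. All the figures in it are my own assumptions (720p at 30fps, 16x16 macroblocks, luma-only full search, no on-chip caching), not anything from the article; the point is just that reference reads grow quadratically with the search range:

```python
# Rough sketch (assumed figures, not Sanyo's): DRAM traffic generated by a
# naive full-search motion estimator as the search range grows.

FRAME_W, FRAME_H = 1280, 720   # assumed HD1 video frame size
FPS = 30                       # assumed frame rate
MB = 16                        # macroblock size in pixels
BYTES_PER_PIXEL = 1            # luma only, 8 bits per pixel

def search_traffic_gb_per_s(search_range):
    """Reference-frame bytes read per second for a naive full search.

    Each macroblock reads a (MB + 2*search_range)^2 window from the
    reference frame in DRAM, with no on-chip caching assumed.
    """
    window = MB + 2 * search_range
    macroblocks = (FRAME_W // MB) * (FRAME_H // MB)
    bytes_per_frame = macroblocks * window * window * BYTES_PER_PIXEL
    return bytes_per_frame * FPS / 1e9

for rng in (8, 16, 32, 64):
    print(f"search range +/-{rng:3d}px -> ~{search_traffic_gb_per_s(rng):5.2f} GB/s")
```

A real encoder caches reference data on chip and uses smarter search patterns, so the absolute numbers are pessimistic, but the trend is the same: widening the prediction range runs straight into the memory interface, which matches what the article describes.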
July 20th, 2006, 01:58 PM | #2 |
Major Player
Join Date: Jan 2004
Location: Europe
Posts: 489
"By using a processor instead of a hardwired circuit, engineers were able to perform image adjustment right up until shipment"
Yes, that sounds like our camera.
__________________
www.irishfilmmaker.com