View Full Version : Shooting in Raw


Dan Keaton
June 12th, 2012, 11:31 PM
Dear Friends,

We are looking for some opinions on why users choose to shoot Raw or not.

We produce a video recorder, the Gemini 4:4:4 that can record ARRIRAW from an ALEXA camera, but we are interested in RAW in general.

But specifically for the ARRI ALEXA, we find that many users record internally in ProRes, especially for episodic television.

Of course, this makes sense to us.

But for others that want higher image quality, one can use an ALEXA with an ARRIRAW recorder to shoot in ARRIRAW.

It is reasonable to assume that, in the past, one may have chosen not to shoot ARRIRAW due to the cost of owning or renting an ARRIRAW recorder.

Now, the cost of an ARRIRAW recorder with media has been greatly reduced.

So, we are looking for reasons why a production would, or would not, choose to shoot in RAW. And while I have mentioned the ARRI ALEXA in this post, I am looking for opinions about RAW in general, as there are many RAW cameras available today.

We would like to know if there are barriers, real or imagined, to shooting Raw.

Respectfully,


Dan Keaton
Convergent Design (Designers of the nanoFlash and Gemini Recorders)

Sareesh Sudhakaran
June 13th, 2012, 10:06 AM
I'd like to venture a few guesses by playing the devil's advocate:

1. The average professional photographer deals with situations where lighting conditions are terrible, if not impossible. To make matters worse, he/she has to deliver great looking photographs in print, either for fine art (300 to 720 ppi) or for magazines at 300 dpi. Even a wedding photographer has to consistently deliver great prints. 14 or 16-bit RAW is a life saver.

On the other hand, the professional videographer usually has lights at his/her disposal, and even if he/she were shooting wildlife for broadcast, the delivery specs are not that demanding. DCI for cinema is happy with JPEG 2000-compressed images. The BBC is okay with interframe or intraframe codecs at 50/100 Mbps in 8 bit, and the color space/dynamic range/resolution of standard display devices/projection systems is worse than what is possible with print. Average television screens push 100-150 ppi.

Only the most demanding productions require RAW, in theory. Let me repeat, in theory. Really good green screen work is possible with 4:2:2 and 4:4:4 sampled 8-bit video. And any advantage RAW has evaporates due to the poor display technology currently available.

If I'm shooting green screen, am I better off with RAW or with an uncompressed TIFF/DPX image sequence in 32-bit float in Wide gamut RGB? Is the dynamic range of ARRIRAW considerably higher than Prores on the Alexa? I don't know. How does footage being in RAW practically translate to an advantage in the traditional video workflow?

2. Software. All still camera manufacturers release dedicated RAW software with specialized algorithms to take advantage of the unique mathematical properties of RAW files.

Is the current lineup of ARRIRAW software really on par with still RAW algorithms? Even if the answer is yes, does the market know this? On the other hand, consider BM's Cinema camera and their choice of DNG - now that's a smart choice. Third-party developers have an open protocol that encourages development, and it has Adobe as backup. The last thing the world needs is another RAW wrapper.

3. Archival. Tape based formats are almost dead. How many more years will HDCAM SR be around? If only a few recorders can 'read' ARRIRAW, what can I expect 20 years down the line?

If I'm forced to choose a color space and image sequence format (or any other compression codec) for post and archival, why not shoot in a format that takes me there more directly? Aren't there too many steps in a video workflow anyway? Is the problem with ARRIRAW that it is too good?

What if a recorder could take ARRIRAW and out-ProRes or out-DNxHD the Alexa's internal software engine? Or better yet, what if it could convert ARRIRAW to a DNG image sequence?

4. Price. Simple question: a production that can afford ARRIRAW and the hardware required to process it in post is not going to complain about the cost of the recorder, is it? Does the recorder convert ARRIRAW into a cheaper format for post (see point 3 above) that is better than the ProRes or DNxHD that comes out of the Alexa internally?

Just off the top of my head.

To make things clear: If I had the money, the only camera I would want to shoot with would be the Alexa. And I would shoot in Arriraw.

Chris Medico
June 13th, 2012, 10:42 AM
I have to balance cost and complexity with the budget when working on projects. To put it simply, RAW costs more and doesn't always have a benefit equal to that cost for the project.

For me to shoot RAW on my average project, I would need either enough storage to shoot until I could offload, or at least one extra person to data-wrangle during the shoot. That increases cost.

When I get back to the shop, I have to transfer the footage to the editing system. RAW takes much longer to transfer, so I may not be able to get to work right away. I have to plan around what could take a few hours' time to copy footage. Possible increase in cost.

Storage space and bandwidth demands. RAW takes up a lot more space and requires much higher bandwidth capabilities in the storage system. You aren't going to get multiple streams of RAW HD from a single drive. You are going to need RAID for your RAW. Increase in cost.

Project archiving. The other issue with the large RAW footprint is long-term storage. It isn't trivial to archive a large project SAFELY. Do we trust old hard drives or stacks of burnt optical disks? How often do we need to go through and refresh our digital archives? The larger they are, the longer this will take and the more it will cost in media. For sure an increase in cost.

My point is this - the project needs must dictate that RAW is the best option for what it is going to cost you in money and time. How many projects need this level of quality in the video? Those that do should absolutely use it. Those of us that don't need it are better served going with reasonable compression and using that capital towards equipment and expenses that help us generate better returns for our business.

Dan Keaton
June 13th, 2012, 11:47 AM
Dear Sareesh,

Thank you for taking the time to post your opinions.

Here are some of my thoughts, then I will respond to your points.

In the case of the ARRI ALEXA, recording to ProRes internally in the camera is easy, and it has proven to be good enough for episodic television.

For shooting features or high-end commercials, we are told that production companies want a higher image quality.

When recording RAW, one gets the highest quality image out of the ALEXA and one also gets lots of metadata.

When one records internally in the ALEXA, the image is debayered in the camera, and certain settings are baked in. When shooting in ARRIRAW, the image is not debayered in the camera or in a recorder like the Gemini 4:4:4. This allows much more processing power (computing power and time) to be devoted to getting the absolute best image possible.

And post can pull out more detail in the shadow areas or in the highlight areas, since the image is not baked in.

An unexpected result is that if one records HD, 1920 x 1080 10-bit in full uncompressed 4:4:4, the file size is actually larger than recording ARRIRAW, 2880 x 1620 (photosites) in 12-bit log.

Thus ARRIRAW is good for archiving. And, since debayering gets better over time, one may be able to re-debayer the images in the future and get an even better image.
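
That file-size comparison is easy to sanity-check with back-of-the-envelope arithmetic. This is only a sketch (it ignores container overhead and padding) and assumes the Alexa's 2880 x 1620 photosite raster:

```python
# Back-of-the-envelope frame sizes: uncompressed HD 4:4:4 vs raw Bayer data.

def frame_bytes_rgb(width, height, bits_per_sample, channels=3):
    # Uncompressed RGB: three full-resolution samples per pixel.
    return width * height * channels * bits_per_sample / 8

def frame_bytes_bayer(width, height, bits_per_photosite):
    # Raw Bayer data: one sample per photosite, not yet debayered into RGB.
    return width * height * bits_per_photosite / 8

hd_444  = frame_bytes_rgb(1920, 1080, 10)     # 7,776,000 bytes per frame
arriraw = frame_bytes_bayer(2880, 1620, 12)   # 6,998,400 bytes per frame
```

Even with more than twice the photosites, the raw frame comes out smaller, because it stores one 12-bit sample per photosite rather than three 10-bit samples per pixel.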

Now, responding to your points.

2. The better the image, the better the key for greenscreen work.
We see this all of the time, even with our nanoFlash. Recording high-quality 4:2:2 makes for a better key.

And of course, we are speaking of much better image quality when we speak of RAW.

When shooting green screen, full uncompressed or RAW should both be great.

When recording in RAW, such as ARRIRAW, one then processes the ARRIRAW to create whatever level of quality one wants. This could be DPX (Full Uncompressed), or a high quality compressed codec, maybe ProRes 4:4:4 or Avid DNxHD 440, or even something else.

This level of flexibility is great, as one can use a lower quality now, and later obtain a higher quality, just by reprocessing the RAW file.

3. Archival. For ARRIRAW specifically, one can reasonably expect programs to convert ARRIRAW to another format for many years to come. Major Hollywood pictures have been recorded in ARRIRAW. And one just needs the data; it does not have to be on any specific medium, such as tape or disk. One just needs to be able to read and process the data.

4. Our research has indicated that the cost of the recorder and media, which in the past typically cost as much as the camera, was a problem. Now the cost of the recorder and media is a small fraction of what it used to be, eliminating this concern.

Sareesh, I greatly appreciate your post. Thank you for helping us in our research.

Dan Keaton
June 13th, 2012, 11:53 AM
Dear Chris,

I also want to thank you for your thoughts.

When using the Gemini 4:4:4 and our new Thunderbolt-capable Transfer Station, we can transfer footage at 325 to 375 megabytes per second (MBps).

To put it differently, 60 minutes of footage takes approximately 30 minutes to upload to a Promise Technologies Pegasus RAID.

iMac with dual Thunderbolt = around 375 MBps
Mac Mini with single Thunderbolt = around 325 MBps
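
Those numbers hang together; here is a rough sketch, assuming 24 fps ARRIRAW at the Alexa's 2880 x 1620, 12 bits per photosite, and an offload rate of about 350 MBps:

```python
frame_bytes  = 2880 * 1620 * 12 / 8    # one 12-bit ARRIRAW frame, ~7.0 MB
record_rate  = frame_bytes * 24 / 1e6  # ~168 MB/s while recording at 24 fps
footage_mb   = record_rate * 60 * 60   # one hour of footage, ~605 GB
transfer_min = footage_mb / 350 / 60   # offload at ~350 MBps: ~29 minutes
```

In other words, the offload runs roughly twice as fast as the camera records, which is where the "60 minutes of footage in about 30 minutes" figure comes from.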

We agree completely, the project should dictate the need for RAW or not.

Chris Medico
June 13th, 2012, 11:54 AM
I have to balance cost and complexity with the budget when working on projects. To put it simply, RAW costs more and doesn't always have a benefit equal to that cost for the project.



Just to clarify, this is in relation to the Sony 4K RAW, which will have similar storage requirements to 1080 uncompressed.

Alister Chapman
June 27th, 2012, 10:40 PM
One of the key benefits of a raw workflow is that normally you will be working at a higher bit depth, at least 12 bit if not 14 bit or 16 bit. This in turn allows the use of linear capture as opposed to the log capture normally associated with conventional video.

Don't get me wrong, log capture and recording (even things like hypergammas and cinegammas are closer to log than linear) is very good and works remarkably well. But it is limited, as it compresses highlight information. Each extra stop of overexposure with a linear recording will contain twice as much data as the previous one, while with log each stop is allocated roughly the same amount of data, so relative to linear each brighter stop retains a smaller and smaller fraction of the information the sensor captured. Log does allow us great scope when it comes to grading and post-production image manipulation, but the higher up the exposure range you go, the less data you have to work with, so how much you can manipulate the image decreases with brightness. As our own visual system is tuned for mid tones, this log behaviour goes largely unnoticed. But as modern sensors achieve greater and greater dynamic ranges, log starts to struggle while linear copes much better.
It's not until you try linear raw with a camera like the Alexa or F65 that you realise just how forgiving it is. In log mode the Alexa (and other log cameras like the F3) needs to be exposed accurately. Overexpose and you risk not only your highlights blowing out, but it also becomes harder to get good-looking skin tones, as these may be up in the more compressed parts of the curve. However, with linear it doesn't really matter if faces are higher up the range, just as long as they are not actually at sensor overload.
When you shoot with a log camera, it must be treated like any other conventional video camera. Exposure must be correct; you need to watch and protect your highlights, expose to the left, etc. A camera shooting raw behaves much more like a film camera: you can afford to push the exposure higher if you want less noise, just like film.
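
The data-per-stop argument can be made concrete with a toy model. This is deliberately simplified (real log curves such as those in cameras have a toe and shoulder and are not purely logarithmic), but it shows the allocation difference:

```python
def linear_codes_per_stop(bit_depth, stops):
    # Linear: each brighter stop doubles the signal, so the brightest
    # stop alone occupies half of all available code values.
    total = 2 ** bit_depth
    return [total // 2 ** (stops - s + 1) for s in range(1, stops + 1)]

def log_codes_per_stop(bit_depth, stops):
    # A pure log curve allocates code values evenly across the stops.
    return [2 ** bit_depth // stops] * stops

# 12-bit linear over 5 stops, dimmest stop first:
#   [128, 256, 512, 1024, 2048] - shadows starved, highlights data-rich.
# 12-bit log over the same 5 stops:
#   [819, 819, 819, 819, 819]  - even allocation, highlights compressed.
```

This is why log holds up so well in the mids and shadows, while higher-bit-depth linear raw keeps far more highlight data as sensor dynamic range grows.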

But linear raw comes at a cost, mainly a time and storage cost. We have become very used to the simplicity of working with video. Modern file-based workflows are fast and efficient. Raw needs more work, more processing, more storage (compared to compressed, at least). But computers are getting faster and storage is getting cheaper. Right now I believe that raw is only going to be used by those that really do need and want the very best flexibility in post production, while log will become more and more common for episodic and documentary production. But the time will come when we can handle raw quickly and easily, and then perhaps we will look back at legacy codecs and wonder how we managed. Although, saying that, while we still broadcast and distribute programmes using backwards-compatible legacy gamma with its restricted dynamic range, for display on devices with only 6 stops of display latitude, a general shift to raw with all its extra overheads may never happen.

Sareesh Sudhakaran
June 27th, 2012, 10:58 PM
One of the key benefits of a raw workflow is that normally you will be working at a higher bit depth, at least 12 bit if not 14 bit or 16 bit. This in turn allows the use of linear capture as opposed to the log capture normally associated with conventional video.


Please correct me if I'm wrong, but essentially 12-16 bit RAW is not the same as 12-16 bit RGB - the RAW converter still has to process the individual data for each pixel and create an RGB image, and assign it a color space, gamma, etc. - there's a lot of potential for voodoo here.

At least as far as Blackmagic is concerned, I'm sure Resolve will handle the native cDNG files well - SpeedGrade does too, supposedly. It's everything else I'm worried about - effects, titles and other stuff have to be natively added to RAW, but how - if one is editing native?

Alister Chapman
July 4th, 2012, 02:08 PM
You're quite right that raw is different to RGB; I'm not sure why you think I'm saying otherwise.

With a native raw editing workflow, the software will have to decode the raw image on the fly and convert it to RGB or YCbCr. Your titles etc. will be in RGB or YCbCr. This will be extremely processor/GPU intensive and may require reduced resolution or other shortcuts to make it happen in real time, especially if you're building up layers of clips or effects.
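
To illustrate what that decode step involves, here is the simplest possible raw-to-RGB conversion: a half-resolution debayer of an RGGB mosaic. This is a sketch only; real converters interpolate to full resolution and also apply color matrices, white balance, and gamma:

```python
import numpy as np

def half_res_debayer(mosaic):
    # Collapse each RGGB 2x2 block into one RGB pixel,
    # averaging the two green photosites.
    r  = mosaic[0::2, 0::2]
    g1 = mosaic[0::2, 1::2]
    g2 = mosaic[1::2, 0::2]
    b  = mosaic[1::2, 1::2]
    return np.dstack([r, (g1 + g2) / 2.0, b])
```

Even this trivial decode touches every photosite; full-resolution demosaicing with good interpolation is the part that eats CPU/GPU time in a native raw edit.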

David Heath
July 5th, 2012, 04:43 PM
In the case of the ARRI ALEXA, recording to ProRes internally in the camera is easy, and it has proven to be good enough for episodic television.

For shooting features or high-end commercials, we are told that production companies want a higher image quality.

When recording RAW, one gets the highest quality image out of the ALEXA and one also gets lots of metadata.
I think it's highly important to make it clear that RAW won't necessarily give any higher image quality in the end - what it WILL do is allow for far greater versatility in the post process.

The question then becomes whether that versatility is worth the disadvantages. And here it's worth being clear what the downsides of RAW are - fundamentally that all the footage needs work done to get a viewable product, and that can add extra cost.

So is RAW worth it? For something like news the answer must be no; for something like a feature film, the answer is far more likely to be yes.

If it's possible to get the "look" right in camera, RAW offers little extra, and adds disadvantages. If you want to keep options open - and the "look" of a scene in a feature film may not be finalised until late in production - RAW offers a great deal.

What it does NOT offer is better image quality per se.

Dan Keaton
July 5th, 2012, 04:51 PM
Dear David,

For practical, high end cameras that are available today, ones that can output raw and can record internally, the raw output can provide higher image quality.

Using the ARRI ALEXA as an example of such a camera, the internal ProRes recording is limited by the fact that the debayering of the image must occur in real time, and the processing power in the camera is limited by the amount of electrical power available.

The image can be debayered in real-time, but a better debayer, and thus a higher quality image, can be obtained when a faster, more powerful computer can be used, and additional processing time can be devoted to each frame of video.

David Heath
July 6th, 2012, 04:02 AM
Point taken, certainly in principle. I can't pretend to have any figures but I wonder just how big any such quality improvement will be? My suspicion is nowhere near big enough to normally compensate for the more complicated workflow.....?

Whereas the gain from potential extra flexibility from RAW is an order of magnitude higher.

Sareesh Sudhakaran
July 6th, 2012, 04:10 AM
Using the ARRI ALEXA as an example of such a camera, the internal ProRes recording is limited by the fact that the debayering of the image must occur in real time, and the processing power in the camera is limited by the amount of electrical power available.

The image can be debayered in real-time, but a better debayer, and thus a higher quality image, can be obtained when a faster, more powerful computer can be used, and additional processing time can be devoted to each frame of video.

Dan, in your opinion, what is the Lightroom/CaptureOne Raw processing engine equivalent for the ArriRaw?

Dan Keaton
July 6th, 2012, 04:27 AM
Dear David,

We have been working on images that show the actual difference.

As soon as possible, we will post them here.

Jim Arthurs
July 6th, 2012, 08:51 AM
A couple of weeks ago I decided to do a simple test of the Alexa recording in-camera ProRes vs ARRIRAW via the Gemini 4:4:4 with upgrade.

I had not seen ANY public tests anywhere and suspected the reason for this was that the difference was so slight that you'd need to do 2X magnification, image differencing, and all kinds of viewing enhancements in order to see something, anything.

After all, most TV series and even a few decently budgeted feature films have opted for ProRes recording over ARRIRAW. So, to my mind, if they're NOT shooting ARRIRAW, it's because the picture advantages aren't worth the significant extra expense of a recorder costing half or more the price of the camera, plus a complicated workflow and the associated hassle of extra data and file management.

I was wrong in my assumptions! ARRIRAW reproduces fine detail that is completely blurred over by the in-camera de-bayering, processing and recording to Prores.

http://ftp.datausa.com/imageshoppe/outgoing/ARRIRAW/BUNNY_PRORESvsARRIRAWexample.png

http://ftp.datausa.com/imageshoppe/outgoing/ARRIRAW/CHART_PRORESvsARRIRAW.png

Simply put, if you record Prores you're "blind" to a good 25% of what the camera is capable of giving you in terms of detail. Is that important? Depends, of course! If you're making a web video or some project that has limited shelf life, then maybe not. But even here, if you're doing green screen you're compromising by not having what the camera is capable of giving you.

But now, with an inexpensive recording option for ARRIRAW called the Gemini 4:4:4 - one that is lightweight, doubles as a monitor, and uses media that is, per minute, many times cheaper than my first P2 cards for the prosumer HVX200 I owned generations back - the case for NOT shooting ARRIRAW on any project that needs shelf life is greatly diminished. And I have to wonder how many DPs are getting to see true side-by-side comps, and how many producers are then making informed decisions?

And for the test I did, the workflow was simple and straightforward and no more complicated than shooting with any outboard recorder combo like an EX1 and a nanoFlash. An added bonus, as mentioned, was that the Gemini was the only on-camera monitor as well (no viewfinder on this particular Alexa). The SSD cards recorded by the Gemini played straight into Premiere CS6 from the eSata reader I was using, and I had good full frame rate playing the native ARRIRAW files. Obviously you need a RAID for serious work, but the ARRI "Shoot and Edit" mantra is still a valid one with ARRIRAW due to the acceptance of the file format in editing apps and the reduction in size, cost, media and power requirements for the actual recorder. Remember, ARRI "gives" you ARRIRAW for free... it's just not been all that practical to record it.

Alexas, for the most part, shoot TV series and movies. Almost without exception, this programming is an ASSET that will be mined for revenue for years (decades) to come. Recording ARRIRAW, while not true 4K, still goes a long way in future proofing these productions in a way that is simply impossible with on-board recording. Even now, ARRI has just released new de-bayering improvements that will glean additional quality from the same ARRIRAW footage shot when the cameras were first released. It's a pretty safe bet that when you re-approach your archival footage in a few years you'll get even BETTER results than today. This isn't a new concept, the RED folks and users understand this quite well, as footage shot with the first RED cameras years ago looks even better when processed anew with the latest de-bayer code.

Disclaimer, I know and like Convergent Design, and used their Alexa and a borrowed Gemini 444 for this test.

Regards,

Jim Arthurs

David Heath
July 6th, 2012, 02:39 PM
Can I enquire about the methodology, Jim?

A lot depends on what the end result is destined to be - straight 1920x1080, or something which may be viewed via 4k equipment etc.

I was previously assuming the former. Whatever the workflow, the aim being to end up with a 1920x1080 final product.

If so, the comparison should be between the 1080 ProRes and the RAW output deBayered, and then downconverted to 1080. The comparison of the charts seems to imply that's not how the comparison is being done - it reads as though the 1080 ProRes is being rescaled up to match the dimensions of the deBayered raster?

If so, it's hardly surprising there is a difference, but is it not one that would be largely eroded by subsequent downconversion to 1080? I'd be more interested to see the RAW footage downconverted to 1080 and put straight up against ProRes 1080.

I take your points about future asset mining in some future situation, but for those with the money it's the difference now with 1080 that most matters.

Jim Arthurs
July 6th, 2012, 04:33 PM
Can I enquire about the methodology, Jim?

Yes, I was scaling ProRes to match the frame raster of the native ARRIRAW. And rightly so... how else can we demonstrate the full extent of what is lost? The internet, as I found, is not flooded with these kinds of ProRes vs ARRIRAW tests. :)

As I mentioned, I doubt many people, and probably people who SHOULD know, have ever seen the difference in resolution side by side. This is real, usable resolution that is disposed of by the low-pass filtering of the codec and the in-camera down-sample... rightly or wrongly depending on the project, granted, but very much absolutely lost. This loss is, in a very real way, a loss of already paid for production value.

I'd be more interested to see the RAW footage downconverted to 1080 and put straight up against ProRes 1080. <snip> I take your points about future asset mining in some future situation, but for those with the money it's the difference now with 1080 that most matters.

Certainly a valid request. I had not read every post in the thread, and see your point. Easy fix... Here is a layered Photoshop file so you can pixel-peep the differences. This file is untouched 1080p ProRes on the top layer, and the bottom is ARRIRAW downsampled to match the 1080p raster size. No fancy downsample methods were used beyond simply using the "better for downsampling" option in Photoshop. Toggle back and forth between the layers in Photoshop to compare...

http://ftp.datausa.com/imageshoppe/outgoing/ARRIRAW/PRORES_VS_RAW_1080p.zip

What I see is a significant "pop" in overall sharpness in the ARRIRAW, specifically seen in the elimination of most chroma aliasing at about 1000 LPPH. Remember, the chart is exactly 2X more distant from the camera than normal, so all numbers on the chart have to be doubled. What this tells me is that I'm going from 800 usable line pairs up to 900 solid ones by shooting ARRIRAW compared to ProRes. That's real usable extra resolution in the 1080p domain.
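
For anyone repeating the test, the chart-distance correction is just a multiplication (a hypothetical helper, matching the 2X framing described above):

```python
def usable_lpph(chart_reading, distance_multiple=2):
    # A chart framed at N times its calibration distance represents
    # N times the spatial frequency, so scale the printed numbers.
    return chart_reading * distance_multiple

usable_lpph(400)  # a chart reading of 400 at 2X distance -> 800 LPPH
usable_lpph(450)  # a chart reading of 450 at 2X distance -> 900 LPPH
```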

I used a 50mm Sony F3 kit prime lens at about 5.6 for this... I would expect even more pop with better glass and a fancy down-sample method like the latest ARC ARRI software.

Again, it's a project by project basis and I can't possibly tell you what's right for your project. But ARRIRAW makes a difference for future-proofing AND squeezing that extra bit out of the 1080p final raster. If recording ARRIRAW is difficult and expensive, it's only going to be done on a few high-end projects. If it's no more complicated than slapping a nanoFlash on the back of an EX1, then that sort of re-sets the expectations and demands re-examination of where and when it can be used.

Even if future-proofing isn't on the table, certainly a little resolution wiggle room for upscaling the image and fixing framing is nice, or simply knowing you're giving the VFX guys all you can for that green-screen shoot is enough to keep ARRIRAW in mind...

Regards,

Jim Arthurs

David Heath
July 7th, 2012, 03:38 PM
Yes, I was scaling ProRes to match the frame raster of the native ARRIRAW. And rightly so... how else can we demonstrate the full extent of what is lost?
I don't disagree with that, but do believe the point is an important one. There will ALWAYS be a weakest link - and yes, here we see that the camera front end is capable of more than ProRes and 1080p is capable of handling.

That doesn't surprise me - the Alexa is designed first and foremost as a 1080p camera, so I'd expect the front end to be capable of somewhat more than the system. That's sensible design.

But to make use of the extra you demonstrate, we're talking about a recording etc system such as 4k - and then I can see comments like "the camera doesn't nearly do justice to 4k, does it!?" There'll nearly always be a weak link!
Easy fix... Here is a layered Photoshop file so you can pixel-peep the differences. This file is untouched 1080p Prores on the top layer, and the bottom is ARRIRAW downsampled to match the 1080p raster size. No fancy down-sample methods were used beyond simply using the "better for downsampling" option in Photoshop.

What I see is a significant "pop" in overall sharpness in the ARRIRAW, .........
As I'm sure I don't need to tell you, there are two ways an image can "look" sharper - better real resolution and/or using detail enhancement. The big question is whether what you very well demonstrate is due to the former or latter. (Or a bit of both! :-) )

I SUSPECT it's largely down to the latter - and as evidence would say that toggling to ARRIRAW has a marked effect on the appearance of the lower frequencies as well as the higher. In other words, it's possible that by increasing the in-camera detail settings you may find a value where the differences are far smaller.

This really comes back to my original point. With such as the ProRes option you have to largely "burn-in" camera settings - and that ranges from gain, to colour balance, to dynamic range handling...... and includes detail enhancement setting. Use RAW and all those may be set and adjusted after shooting - including detail/aperture levels. The real benefit of RAW (IMO!) is flexibility - certainly if the final product is destined to be 1080, period.
If recording ARRIRAW is difficult and expensive, it's only going to be done on a few high-end projects. If it's no more complicated than slapping a nanoFlash on the back of an EX1, then that sort of re-sets the expectations and demands re-examination of where and when it can be used.
I don't disagree with that at all - please don't think I'm knocking the idea of RAW recording, quite the opposite. It's just that the original core question posed by the thread was "we are looking for reasons why a production would, or would not choose to shoot in RAW".

My feeling is that the PRIMARY reason why a production would choose to use it would be flexibility. The PRIMARY reasons why they would choose not to would be more work (hence more expense/delay) in post, and a more complicated/expensive recording system. I fully agree that what Dan is proposing may largely negate the second of those disadvantages.

Both Sony and Canon have now brought out large-format single-sensor cameras with 8-megapixel chips, 3840x2160, with easy-to-derive 1080p as a here and now, but with 4k RAW as an obvious future. In those cases, RAW offers far greater potential than with sensors primarily designed to get 1080 by conventional deBayering. (Such as the 2880x1620 of the Alexa.)

(Jim - any chance of reposting the "bunny" test in the same way? Original ProRes, and downscaled RAW?)

Jim Arthurs
July 7th, 2012, 07:49 PM
But to make use of the extra you demonstrate, we're talking about a recording etc system such as 4k - and then I can see comments like "the camera doesn't nearly do justice to 4k, does it!?" There'll nearly always be a weak link!

There's plenty of argument that a 4K sensor such as the one the original RED sports doesn't saturate a projector or broadcast 4K raster either; you need something significantly greater to account for de-bayer issues. But refusing to record the significant extra detail the Alexa offers just because it isn't full 4K doesn't make sense, in the same way that the people who carp about the RED not technically filling that full 4K raster bother me. It's just a difference in degree.

For the markets the Alexa camera is aimed at - TV series and feature work - every little bit will help. We wouldn't be watching remastered Blu-ray Star Trek, Twilight Zone and others if those shows weren't "future-proofed", if entirely by accident, by shooting film. Now, with current productions, we have to work at it to ensure that happens and make these options known to those in charge.

Your "weak link" point is excellent, so let's apply it to standard 1080p production. What's the "weak link" in the HD production chain? Any camera/codec that JUST records exactly 1080p, because it's probably not really giving you the "every pixel different" situation you'd hope for. If it's a 3-chip camera, it's going to alias at frequencies you could sub-sample out with higher-res originals. And of course there's the extra resolution "wiggle room" and value for VFX that I keep bringing up, because it's a real benefit.

I agree completely with you regarding the flexibility of RAW being a key asset, but I assert the extra resolution is equally important for 1080p production as well. We're not far off in opinions, I just put a lot of stock in resolution and detail and now that I know what's missing when recording on-camera... well, it's grating. I have an exact metric of the difference, thanks to the test, and know when it would be handy to have on hand, and what to expect.

As I'm sure I don't need to tell you, there are two ways an image can "look" sharper - better real resolution and/or using detail enhancement. The big question is whether what you very well demonstrate is due to the former or latter. (Or a bit of both! :-) )

Back before the RED hit the streets, I was front and center on the Red forums proclaiming that the 4K Red would be a mediocre 4K camera, but an excellent 1080p camera due to the down sampling effect. Down sampling an image that doesn't have edge enhancement produces something which I referred to at the time as "naturally sharp images". What I have given you is exactly that, with full awareness of what I was doing, a "naturally sharp" image with no edge enhancement. It's higher resolving even in the 1080p raster than the Prores because the camera doesn't have the time or the hardware "smarts" to de-bayer and downsample as nicely and with as much computational horsepower as can be done in post.

Speaking of edge enhancements, for fun, throw the "sharp" filter in Photoshop onto each layer and compare. There's stuff that will be accented and brought forward in the ARRIRAW frame that simply isn't there to be enhanced in the Prores image.

The great irony is that it will be easier for independents to take advantage of the benefits of ARRIRAW now that both the recording (thanks to the Gemini) and the post (thanks to Premiere, After Effects, etc.) have been simplified and made vastly less expensive. The high-end shows and series have existing workflows, and they will take convincing to change or modify them.

It really is stupid easy to use ARRIRAW now. One man band stuff... record, transcode and edit.

Do I need it for everything? No, but who says you have to do everything in ARRIRAW on the same show? Use it for the tricky stuff, use it for when you see your crane operator giving you a so-so shot knowing you can de-shake in post and still wind up with a full raster of detail. Use it for the wide shots and the VFX plates.

(Jim - any chance of reposting the "bunny" test in the same way? Original ProRes, and downscaled RAW?)

Sure, I'll process some stuff up when I can get at it on Monday... glad to help.

Regards,

Jim Arthurs

Sareesh Sudhakaran
July 7th, 2012, 09:34 PM
Fine art printing outclasses IMAX any day, let alone 4K cinema.

What I've learnt from that process is that shooting in RAW allows the option of capture sharpening (or deconvolution sharpening) in the RAW program. Good algorithms (like those in Lightroom 4, Capture One, etc.) are fine-tuned to each camera's Bayer pattern (even before the image has been debayered) to find the best settings for such sharpening.

Fine art photographers sharpen a second time depending on the image.

And finally, when outputting, depending on the size of the print, paper quality, web, etc., they apply an output sharpening as well. Third time, here.
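The three passes Sareesh describes can be sketched in their simplest common form, the unsharp mask: sharpened = image + amount * (image - blurred). To be clear, this is my own reduced illustration; real capture sharpening in Lightroom or Capture One is deconvolution-based and camera-specific, and the amounts here are arbitrary.

```python
# Hedged sketch of a three-pass sharpening workflow (capture,
# creative, output), each pass reduced to a basic unsharp mask.
# NumPy-only, on a 1-D signal for brevity; zero padding at the
# edges is a simplification.
import numpy as np

def box_blur(x: np.ndarray, radius: int = 1) -> np.ndarray:
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(x, k, mode="same")

def unsharp(x: np.ndarray, amount: float) -> np.ndarray:
    return x + amount * (x - box_blur(x))

signal = np.array([0., 0., 0., 1., 1., 1.])    # a soft edge
capture = unsharp(signal, amount=0.5)   # pass 1: capture sharpen
creative = unsharp(capture, amount=0.3) # pass 2: per-image, creative
output = unsharp(creative, amount=0.2)  # pass 3: output sharpen
print(np.round(output, 3))
```

Note the characteristic overshoot and undershoot around the edge after the passes; that local contrast boost is what reads as "sharper", which is also why stacking too many passes produces visible halos.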

ARRIRAW, being RAW in the same tradition as digital still cameras, allows this excellent opportunity in video. But:

My question is: are there any algorithms produced specifically for ARRIRAW to replicate this advantage? At most, I see software "supporting ARRIRAW natively" - whatever that means.

I would pose the same question of REDCODE RAW, for which RED has released specific software based on their sensor and codec. Is there something like that exclusively for ARRIRAW? As far as I know, the originally authorized ARRI recorders just recorded .ari files, without any processing.

The scary part is, if there is none, then the sharpening applied to ARRIRAW by various plug-ins and algorithms is the kind used for full-raster RGB images - which might improve the image, no doubt, but isn't the best possible solution for RAW.

Just my random thoughts on the subject.

Dan Keaton
July 8th, 2012, 05:40 AM
Dear Sareesh,

I enjoy reading your posts.

I do not feel fully qualified to answer all of your questions or comments. So I will leave that to others more qualified.

I can state that there are three main ways to start using ARRIRAW.

1. The ARRIRAW Converter, available for free from ARRI. This is a Mac based program.

ARRI Group: ARRIRAW CONVERTER (http://www.arri.de/camera/digital_cameras/tools/arriraw_converter.html)

The ARRIRAW Converter was recently upgraded.

2. ARRI offers a SDK, a Software Development Kit, which is used by many professional software companies.

This was also recently upgraded. One significant upgrade is the ability to use the Graphics Processing Unit (GPU) to speed up the deBayer process.

3. Software companies can write their own code and algorithms.


Side Note:

Before I started researching RAW, I naively thought that there was one, proper and correct way to deBayer an image.

I have learned that this is not the case.

One can do a quick deBayer for confidence monitoring.

One can do a better deBayer, with more time and computational horsepower. And this may involve multiple passes.

And one can use state-of-the-art deBayering software.

This will obviously provide the best deBayer.

But, an important point is the "state-of-the-art" advances over time.
So if one archives the original Raw files, one can go back later and re-deBayer the original files and obtain better images.

ARRI's new SDK will, in my opinion, have a very positive impact:

1. New, more sophisticated deBayering algorithms are used.

2. The GPU, (Graphics Card) on your computer can be utilized to provide vastly increased computational horsepower. Thus, a very compute-intensive algorithm (computer code) can be used to provide a better image in a shorter period of time.
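To make the "quick deBayer" end of that quality spectrum concrete, here is a minimal bilinear demosaic sketch. This is my own illustrative code for a generic RGGB Bayer mosaic, not ARRI's ADA-3 or any shipping algorithm; it is roughly the "confidence monitoring" tier, while production engines use far more sophisticated, edge-aware methods.

```python
# Illustrative bilinear demosaic for a generic RGGB Bayer mosaic.
# This is the crude "quick deBayer" tier, NumPy-only.
import numpy as np

def conv3(x, k):
    """3x3 correlation with zero padding (avoids a SciPy dependency)."""
    p = np.pad(x, 1)
    out = np.zeros_like(x, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out

def bilinear_demosaic(mosaic):
    """mosaic: HxW single-channel RGGB Bayer data, values in [0, 1]."""
    h, w = mosaic.shape
    out = np.zeros((h, w, 3))
    # Masks marking where each color was actually sampled (RGGB layout).
    r = np.zeros((h, w)); r[0::2, 0::2] = 1
    g = np.zeros((h, w)); g[0::2, 1::2] = 1; g[1::2, 0::2] = 1
    b = np.zeros((h, w)); b[1::2, 1::2] = 1
    k = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]])
    for i, mask in enumerate((r, g, b)):
        # Weighted average of the nearby real samples of this color.
        out[..., i] = conv3(mosaic * mask, k) / conv3(mask, k)
    return out

# Sanity check: a flat gray mosaic should come back flat gray.
rgb = bilinear_demosaic(np.full((4, 4), 0.5))
```

Dan's archiving point follows directly from this: the mosaic data is untouched by whichever demosaic you pick today, so archived .ari files can be re-deBayered with tomorrow's better algorithms.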

Sareesh Sudhakaran
July 8th, 2012, 07:15 AM
Hi Dan, thank you for the kind words - I'm usually guilty of having strong opinions and I'm grateful to anyone who puts up with it!

I can guess how tough it must have been for your engineers to make sense of the voodoo data stream shoved through a dual HD-SDI cable. My line of thought isn't a reflection on the performance of the Gemini 444.

Here's an example of something that probably supports my point of view:

The ARRI Converter/SDK has two debayer algorithms - ADA-3 HW (Hardware) and SW (Software) - and neither provides much control:
Sharpness runs from 0 to 300 (default 100), and it does not correlate to any traditional, professionally used sharpening algorithm.
The maximum color bit depth rendered to DPX/TIFF is 10/16-bit - no 32-bit float option.
It has excellent support for color spaces, but very few 'dials' for color control. If I shoot green chroma and need to tweak the green color, I'll have to use a 3rd-party grading application. So why bother with the converter?

The ARRI converter basically mirrors the built-in camera controls. It's like Henry Ford's famous "any color as long as it is black" policy on the Model T.

Lazy Arri? I don't know and I don't want to judge. How can I? I love the Alexa as it is, warts and all.

But real-time raw processing seems like the next big challenge in video! In the coming months, it will be interesting to see how other manufacturers deal with these issues.

David Heath
July 8th, 2012, 05:35 PM
Down sampling an image that doesn't have edge enhancement produces something which I referred to at the time as "naturally sharp images". What I have given you is exactly that, with full awareness of what I was doing, a "naturally sharp" image with no edge enhancement. It's higher resolving even in the 1080p raster than the Prores because the camera doesn't have the time or the hardware "smarts" to de-bayer and downsample as nicely and with as much computational horsepower as can be done in post.

"It's higher resolving even in the 1080p raster than the Prores because the camera doesn't have the time or the hardware "smarts" to de-bayer and downsample as nicely ..." - maybe... but it could be because the in-camera detail setting is set low. Maybe to a negative value, to deliberately knock detail off for a "look"? That's what I meant by "it's possible that by increasing the in-camera detail settings you may find a value where the differences are far smaller".

OK, that's maybe a bit academic. The real point is that with RAW the detail setting is changeable in post, while with ProRes it's burnt in, so no question - the RAW gives you flexibility.

John Brawley
January 18th, 2013, 08:37 PM
What I've learnt from that process is shooting in RAW allows the option of capture sharpening (or deconvolution sharpening) in the RAW program. The algorithm (good ones like Lightroom 4, Capture One, etc) is fine tuned to understand each camera's bayer process (even though it has not been debayered yet) to find the best settings for such sharpening.

You might want to try Davinci Resolve. You can download the free version and it will work with ARRI RAW as well as RED RAW and DNG natively.

The free version is exactly the same as the full version, except it has no noise reduction or 3D support, and is limited to a single GPU and 1920-wide resolution on export.

It has a very good debayer engine, and Resolve is probably the "go to" motion RAW grading tool for Hollywood/studio-level productions.

It's kind of amazing you can get the same software for free.

jb

Sareesh Sudhakaran
January 24th, 2013, 03:49 AM
I agree, John. 100%. It is really cool of them to offer it for free.

I'd love to try ARRIRAW on it, but everyone here shoots ProRes. Maybe I'll get around to testing it some day. Thanks for the tip!