View Full Version : Sony NEX-VG10 AVCHD E-Mount Lens Camcorder
Brian Luce October 6th, 2010, 08:59 PM Hi Steve,
Why are you interested in avoiding using conventional BDD disks?
And also, have you posted a review of this camera yet? I'd love to hear your evaluation.
Steve Mullen October 6th, 2010, 09:41 PM I'm not sure why you use CineForm, but there is not only no QUALITY advantage to transcoding to an intermediate codec, there is a quality disadvantage.
When native editing is used, the AVCHD is decompressed in the NLE to RGB/YUV 422 just as each frame is needed.
When you use an intermediate codec, first AVCHD is decompressed to RGB/YUV 422. Then it is recompressed. No matter the claims, this second recompression can NOT improve quality. It can NOT even preserve quality because it is a second cycle of compression. You are editing second-generation video. Moreover, the file has now greatly expanded. So more storage and a decrease in quality! In the NLE, the intermediate codec is decompressed to RGB/YUV 422 just as each frame is needed.
All NLEs work with RGB/YUV 422. In no way is quality lost by keeping the source files as AVCHD until the instant each frame is needed.
Likewise, it makes NO difference when 4:2:0 is converted to 4:2:2. In fact, it is better to wait and do a direct conversion from 4:2:0 to 4:4:4 rather than do it in two steps.
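For anyone following the subsampling talk here, a rough sketch of what 4:2:0, 4:2:2, and 4:4:4 mean in terms of stored chroma resolution. Nearest-neighbour upsampling is used purely for illustration (the 8x8 frame size is made up); real decoders use interpolation filters, which is why doing the conversion in one direct pass avoids a second round of filtering and rounding:

```python
import numpy as np

# A hypothetical 8x8 luma plane; in 4:2:0 each chroma plane is stored at
# half resolution in BOTH dimensions, in 4:2:2 at half resolution only
# horizontally, and in 4:4:4 at full resolution.
h, w = 8, 8
chroma_420 = np.arange((h // 2) * (w // 2)).reshape(h // 2, w // 2)  # 4x4

# Step 1: 4:2:0 -> 4:2:2 (restore vertical chroma resolution)
chroma_422 = np.repeat(chroma_420, 2, axis=0)                        # 8x4

# Step 2: 4:2:2 -> 4:4:4 (restore horizontal chroma resolution)
chroma_444_two_step = np.repeat(chroma_422, 2, axis=1)               # 8x8

# Direct 4:2:0 -> 4:4:4 in a single pass
chroma_444_direct = np.repeat(np.repeat(chroma_420, 2, axis=0), 2, axis=1)

print(chroma_420.shape, chroma_422.shape, chroma_444_two_step.shape)
```

With nearest-neighbour the two routes happen to be identical; the point in the post is that with the interpolation filters real decoders use, each extra resampling step is another chance for rounding error, so one direct 4:2:0 to 4:4:4 conversion is preferable to two.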
If rendering is needed, no good NLE ever renders to AVCHD, HDV, XDCAM EX, etc! This is one of the big myths. Folks worry their graphics will be compressed to AVCHD, HDV, XDCAM EX, etc. Nope.
FCP and MC force a render to ProRes 422 or DNxHD or uncompressed.
Moreover, FCP NEVER EVER uses rendered files during export. Every frame starts with the AVCHD source. MC can use renders, but not if you delete them before export.
This is why I'm checking quality with Vegas. Each AVCHD frame is decompressed to RGB/YUV 422 and recompressed to AVCHD.
You use Premiere and its code base is old, so it MIGHT render to AVCHD -- but I can't believe it does. Adobe lacks its own intermediate codec, so perhaps it uses DVCPRO-HD, which cuts horizontal resolution and is only 100Mbps. Buying CineForm's codec to use within Premiere makes sense.
Having said all this, Premiere can NOT play AVCHD on a 2.53GHz i5 without audio stuttering and has never been able to do real-time transitions -- unless you have an Nvidia board.
Vegas does a better job, though it slows down on transitions.
Bottom line, there is VERY GOOD reason to use an intermediate codec -- performance!
Robert Young October 7th, 2010, 01:01 AM I'm not sure why you use CineForm, but there is not only no QUALITY advantage to transcoding to an intermediate codec, there is a quality disadvantage.
I use Cineform for HD editing because in my personal experience with CF (beginning with Premiere Pro 1.5) it does the best job of preserving the original image quality throughout the editing process, including (and particularly) all the way to any and all of the final delivery formats required.
I have never experienced a quality decrease as a result of using the CF DI.
It also often provides better quality previewing during editing, and does increase the NLE performance/speed compared to editing highly compressed acquisition formats.
For me, there is no real downside.
Conversion of the raw files is quick and automated, and that's really the only extra step.
The rest of it is simply the ordinary workflow of editing, and rendering out to delivery format.
An awful lot of commercial productions, including many Hollywood movies, utilize the CF DI.
I am using CS5 on a Win7 64, 12GB Tri RAM, Intel i7, Nvidia CUDA GPU system. There's no question that it's dealing with native formats impressively these days. But I have a bulletproof workflow with CF that produces terrific, consistent results at the delivery end.
So far, I remain unconvinced that native format editing would be an improvement in any way.
Mike McKay October 7th, 2010, 01:58 PM I am using CS5 on a Win7 64, 12GB Tri RAM, Intel i7, Nvidia CUDA GPU system. There's no question that it's dealing with native formats impressively these days. But I have a bulletproof workflow with CF that produces terrific, consistent results at the delivery end.
So far, I remain unconvinced that native format editing would be an improvement in any way.
You're using CS5 on an i7 system with CUDA.....and you don't edit native?? I'm sure CF is great, but surely you don't need it anymore? Have you tried native edit, I'm looking to go to CS5/CUDA because it seems like it can chew through AVCHD with multiple tracks and all kinds of color correction like butter......is that not the case?
Steve Mullen October 7th, 2010, 02:42 PM I was a big fan of CineForm and know the folks well. They provided a real-time engine to Premiere that was NECESSARY. The fact it used a great codec was fine with me.
But then they started pushing that MPEG and AVC had to be transcoded to keep quality, which simply is not true. So performance was the key. But with CUDA there's no need since -- as you point out -- it screams thru AVCHD!
Robert Young October 7th, 2010, 04:52 PM You're using CS5 on an i7 system with CUDA.....and you don't edit native?? I'm sure CF is great, but surely you don't need it anymore? Have you tried native edit, I'm looking to go to CS5/CUDA because it seems like it can chew through AVCHD with multiple tracks and all kinds of color correction like butter......is that not the case?
Yes...
That is indeed the case, but all of that simply has to do with the editing experience being more smooth, fast, and capable.
It has no bearing on the inherent problem with AVCHD of maintaining final image quality as you apply effects, complex graphics, color correction, maybe apply a Magic Bullet Looks "look" to an entire sequence, etc., and then finish it off by transcoding to a variety of delivery formats.
I have used native AVCHD edit on short, simple "trim & stitch" pieces, then out to web format. Looks fine.
But even Adobe "World Wide Evangelist" Jason Levine quickly mumbles some caveats about native editing when discussing the "no need for DI anymore" topic.
Certainly, the bottom line is that if whatever you are doing looks good enough to you, then it's as good as it has to be.
But, my view is that AVCHD is a lossy codec, and Cineform, ProRes, etc. are substantially less so.
Steve Mullen is the first person I have ever heard make the claim that editing in AVCHD actually provides BETTER final image quality than Cineform. Even Adobe has not gone quite that far with their enthusiasm.
I should add that a lot of what I do ends up being delivered on DVD. The Cineform workflow to get from interlaced HD to DVD is excellent, and has consistently provided me with the best looking DVDs I have ever made.
Mike McKay October 7th, 2010, 06:46 PM It has no bearing on the inherent problem with AVCHD of maintaining final image quality as you apply effects, complex graphics, color correction, maybe apply a Magic Bullet Looks "look" to an entire sequence, etc., and then finish it off by transcoding to a variety of delivery formats.
But, my view is that AVCHD is a lossy codec, and Cineform, ProRes, etc. are substantially less so.
Steve Mullen is the first person I have ever heard make the claim that editing in AVCHD actually provides BETTER final image quality than Cineform. Even Adobe has not gone quite that far with their enthusiasm.
I should add that a lot of what I do ends up being delivered on DVD. The Cineform workflow to get from interlaced HD to DVD is excellent, and has consistently provided me with the best looking DVDs I have ever made.
Well this is very interesting, I'm no engineer, but I'd sure like to know what the best workflow is. It's one area that has always caused confusion. What Steve is saying makes sense if the AVC is decompressed into RGB/YUV 422 anyway....not sure that recompressing again makes any sense? Especially with the massive transcode files that get created and eat up tons of disc space. Guess I need to experiment more.
Robert Young October 7th, 2010, 07:14 PM Well this is very interesting, I'm no engineer, but I'd sure like to know what the best workflow is. It's one area that has always caused confusion. What Steve is saying makes sense if the AVC is decompressed into RGB/YUV 422 anyway....not sure that recompressing again makes any sense? Especially with the massive transcode files that get created and eat up tons of disc space. Guess I need to experiment more.
I'm not an engineer either, but
AVCHD is a highly compressed acquisition codec, never intended to be an editing format.
It is very lossy and if you beat up on it in post with effects, color correction, transcodes, etc. it will show it.
Storage is dirt cheap. The larger file size for CF is not a big deal at all.
If you are doing simple editing (trims with an occasional crossfade, etc), or if it's all going out to the Web, it's not a big deal to edit AVCHD in CS5.
It's all in the eye of the beholder. If you are happy with the results of your AVCHD edit, how does it get any better than that?
It's not rocket science, you can use your own judgement :)
IMO, the "ultimate truth" of these things is elusive. People have different opinions and experiences. A lot of different approaches work well for different things. Which is REALLY better- 30p or 60i, P.C. or Mac?- answer: all of the above.
At the end of the day, it's about finding out what you like and what works well for the things you are doing.
Graham Hickling October 7th, 2010, 08:44 PM I avoid doing anything with raw AVCHD footage in Premiere, and use CFHD instead, because of this: http://www.dvinfo.net/forum/adobe-creative-suite/480465-cs5-avchd-chroma-bug.html
Robert Young October 8th, 2010, 12:38 AM I avoid doing anything with raw AVCHD footage in Premiere, and use CFHD instead, because of this: http://www.dvinfo.net/forum/adobe-creative-suite/480465-cs5-avchd-chroma-bug.html
Interesting.
I haven't followed that particular issue, but I have certainly had the impression that CS5 AVCHD previewing does not look quite as good as the Cineform.
It's not that the AVCHD looks bad, it's more like: Hmmm... looks pretty good, vs. Wow... Oh, yeah!
Not a very scientific analysis, but enough to convince me for the time being anyway.
Steve Mullen October 8th, 2010, 03:06 PM "AVCHD is a highly compressed acquisition codec, never intended to be an editing format.
It is very lossy and if you beat up on it in post with effects, color correction, transcodes, etc"
Don't mean to beat on you, but you can't "beat-up" on AVCHD because that's not how NLEs work.
ANY/ALL source codecs are decompressed to a frame ONCE. CF also decompresses AVCHD once.
After a frame is decompressed it is NO LONGER AVCHD/HDV, etc. It is now uncompressed RGB or YUV.
From this instant onward, ALL FX are done on this uncompressed RGB or YUV frame. You can stack as many FX as you want, they are all done on uncompressed RGB or YUV frames. (You can add as many uncompressed RGB or YUV frames from other streams of AVCHD -- it makes no difference.) With CUDA, you are doing these FX very fast.
-------
When you use CF, each CF frame during editing is uncompressed to RGB or YUV. No NLE can work on a compressed frame.
The only difference with using CF is that you originally uncompressed an AVCHD frame to YUV and then recompressed it to CF. Recompression MUST degrade the uncompressed YUV frame because the very definition of compression is DISCARDING information! It doesn't matter if you don't notice it. All compression is designed to toss out what the designer hopes you won't notice. But something MUST get removed from each uncompressed YUV frame -- or each CF frame would be the size of an uncompressed frame!
Only if you convert AVCHD to uncompressed and stored it in a file, would the uncompressed YUV frame be PRESERVED. And, that huge frame could never have a better image than that which was in the AVCHD frame.
And, when you edit, that uncompressed frame -- from a huge file -- will be identical to an AVCHD frame that is decompressed on-the-fly. So NO intermediate codec edit can ever be as good as a native edit. That's simply a math fact.
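Steve's generation-loss argument can be illustrated with a toy model: treat each lossy codec as a quantizer that snaps samples to a grid (this is only a crude stand-in for AVCHD or CineForm, and the step sizes are made up). Decoding the camera file once, as in native editing, leaves you with the first-generation values; transcoding re-quantizes those values onto a second, different grid and, in aggregate, pushes them further from the original scene:

```python
import numpy as np

def lossy_codec(frame, step):
    """Toy lossy codec: snap every sample to a grid of the given step size."""
    return np.round(frame / step) * step

scene = np.arange(100, dtype=float)    # the "original light" hitting the sensor

camera = lossy_codec(scene, 7)         # generation 1: in-camera codec (step 7, made up)
transcoded = lossy_codec(camera, 5)    # generation 2: transcode to a DI (step 5, made up)

err_native = np.abs(scene - camera).mean()    # what a native edit starts from
err_di = np.abs(scene - transcoded).mean()    # what a DI edit starts from

print(err_native, err_di)              # the second generation sits further from the scene
```

The second quantization cannot recover anything the first one discarded; it can only move the already-degraded values again, which is the "math fact" being argued above.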
-------
When all the FX have been applied to the uncompressed RGB or YUV frame, it is sent to your monitor. Then it is discarded. It is NOT compressed as an AVCHD frame. It is not compressed as a CF frame. It is not even stored as an uncompressed frame. It is gone.
Every time you view your timeline, your NLE starts over with the untouched source frames which are once again uncompressed to RGB or YUV. That is what NATIVE real-time editing is all about.
When you export, the only difference is each uncompressed RGB or YUV frame is not discarded. It is compressed using your chosen export codec. Each exported frame will have been decompressed ONCE and recompressed ONCE. But, by using CF, each exported frame will have been decompressed TWICE and recompressed TWICE.
=========
What happens if YOU choose to render some or all of a timeline?
The one thing I can tell you is you would never ever render to AVCHD! FCP only renders to ProRes 422 (you get to choose its parameters) and MC only renders to DNxHD (you get to choose its parameters) or DVCPRO-HD. Each of these NLEs has a menu where you specify the render codec.
I have no idea where you tell Premiere what codec should be used. Since you own the CF codec, that's the one you should choose.
When you export, you decide whether or not to use rendered files. FCP will not let you. With FCP you know each exported frame will have been decompressed ONCE and recompressed ONCE.
With MC you delete the render files prior to export to force it not to use the files. I suspect the same is true of Premiere. Of course, if you are in a hurry you can use the render files even though some quality will be lost.
=======
The bottom line is that back in the days of DV someone coined the term "acquisition" codec because of the way NLEs worked in the olden days -- and it keeps being used. An acquisition codec is a SOURCE codec. It is never an EDIT codec. And, it may be an export codec -- that is incorrectly called a "distribution" codec.
Mike Burgess October 8th, 2010, 05:07 PM Hi Steve.
You said in your last post that you would never, ever render to AVCHD. Can you expand on that and explain what you mean? Are you saying not to edit AVCHD and burn an AVCHD DVD?
Thanks,
Mike
Robert Young October 8th, 2010, 05:11 PM Steve
I appreciate your exposition on what is a complex and somewhat controversial subject.
You make a good case for the durability of AVCHD.
I certainly would not rule out the possibility that I might someday switch over to editing HD in native formats.
However, presently I have a work flow with CF that is quick, and an absolute no brainer.
I can easily mix material that was originally in different formats from different cameras.
I get splendid, predictable, consistent results no matter how involved the project is, or what sort of delivery is required. It's like- if it ain't broke, why fix it.
Maybe I'm sort of like the kid who is asking his mother about the stars and planets. She says "Why don't you ask your father- after all, he's an astronomer"
Kid replies "I don't want to know THAT much about it" :)
Steve Mullen October 8th, 2010, 05:41 PM Hi Steve.
You said in your last post that you would never, ever render to AVCHD. Can you expand on that and explain what you mean? Are you saying not to edit AVCHD and burn an AVCHD DVD?
Thanks,
Mike
Render has at least two meanings:
1) Render FX, means perform the FX math on the RGB/YUV frames. No compression is used -- the frames are only displayed.
2) Render FX, means perform the FX math on the RGB/YUV frames AND compress the resulting frames to a file. One would never compress using any long-GOP codec.
Thus, during editing, one would not "render" to AVCHD.
During EXPORT, however, one can certainly compress to AVCHD. Many call exporting "rendering," which it is. But rendering during export can be to ANY codec.
PS: "It's like- if it ain't broke, why fix it." That's true, but most modern NLEs already let you mix anything on a Timeline. That's what's called an "open timeline." All the sources are native. They can be SD and HD. They can be progressive and interlaced. They can have different frame sizes. They can even have different frame rates.
Now I'm not claiming you can do these things with Premiere. The code base for CS5 -- with the exception of CUDA -- is still the old old Premiere Pro with bug fixes. So YOU may need to use CF. I'm only saying the reason you gave for using CF is not valid. And, the CineForm marketing materials have not been valid for years -- and are really not valid with CUDA.
And, while nothing may be broken, transcoding to CF is a huge waste of time and space because you have CUDA.
=======
I am curious about the report of gamma issues. Is this all AVCHD? Just Sony? Just Pana? Just Canon? Just 24Mbps? If it's so major, why hasn't Adobe fixed it by now?
Robert Young October 8th, 2010, 06:34 PM And, while nothing may be broken, transcoding to CF is a huge waste of time and space because you have CUDA.
Transcoding to CF on today's systems is way faster than RT and can be done directly off of the camera card if you want to cut it to the minimum. Time is not a real issue.
As for space- space is the cheapest part of this whole deal anymore. That's hardly a problem either.
Mike Burgess October 8th, 2010, 07:08 PM Render has at least two meanings:
1) Render FX, means perform the FX math on the RGB/YUV frames. No compression is used -- the frames are only displayed.
2) Render FX, means perform the FX math on the RGB/YUV frames AND compress the resulting frames to a file. One would never compress using any long-GOP codec.
Thus, during editing, one would not "render" to AVCHD.
During EXPORT, however, one can certainly compress to AVCHD. Many call exporting "rendering," which it is. But rendering during export can be to ANY codec.
PS: "It's like- if it ain't broke, why fix it." That's true, but most modern NLEs already let you mix anything on a Timeline. That's what's called an "open timeline." All the sources are native. They can be SD and HD. They can be progressive and interlaced. They can have different frame sizes. They can even have different frame rates.
Now I'm not claiming you can do these things with Premiere. The code base for CS5 -- with the exception of CUDA -- is still the old old Premiere Pro with bug fixes. So YOU may need to use CF. I'm only saying the reason you gave for using CF is not valid. And, the CineForm marketing materials have not been valid for years -- and are really not valid with CUDA.
And, while nothing may be broken, transcoding to CF is a huge waste of time and space because you have CUDA.
=======
I am curious about the report of gamma issues. Is this all AVCHD? Just Sony? Just Pana? Just Canon? Just 24Mbps? If it's so major, why hasn't Adobe fixed it by now?
Thanks Steve.
Little by little I am learning, although I have a long ways to go. It will take me at least 5 more times reading your post before it begins to become somewhat clearer. I am, after all, a very slow learner. I do appreciate your patience and explanations. Thanks again.
Mike
Robert Young October 9th, 2010, 12:31 AM When all the FX have been applied to the uncompressed RGB or YUV frame, it is sent to your monitor. Then it is discarded. It is NOT compressed as an AVCHD frame. It is not compressed as a CF frame. It is not even stored as an uncompressed frame. It is gone.
Every time you view your timeline, your NLE starts over with the untouched source frames which are once again uncompressed to RGB or YUV. That is what NATIVE real-time editing is all about.
When you export, the only difference is each uncompressed RGB or YUV frame is not discarded. It is compressed using your chosen export codec. Each exported frame will have been decompressed ONCE and recompressed ONCE.
Good call on the technology of native format editing.
I have been quite underestimating the process and its potential.
This little discussion has been an eye opener for me- I appreciate your patience.
Steve Mullen October 9th, 2010, 12:40 AM Transcoding to CF on today's systems is way faster than RT and can be done directly off of the camera card if you want to cut it to the minimum. Time is not a real issue.
As for space- space is the cheapest part of this whole deal anymore. That's hardly a problem either.
By the way, I am surprised at how long it takes to read AVCHD files WITHOUT conversion. And that's because as each file is read, it is processed to yield two additional files. One has non-video data and the other I suspect is a "hint" or "index" file that NLEs can use for editing.
Even Sony Vegas processes the data during import and creates a waveform file.
Bottom-line, native editing may well not be faster than conversion to an intermediate file.
And, in favor of intermediate editing, those without CUDA will find it difficult to edit more than one stream, even with transitions no longer than 1 second. EDIUS does very well, but you need four real cores -- maybe 8 cores.
So now that I've probably confused folks, let me sort it out. Native IMHO is better, but only if you have a monster computer -- which means not a laptop. Intermediate editing can use a far less powerful computer, but you need tons of storage -- which once again is not a laptop.
For those of us with laptops, 720p30 is a better match because each frame is half as big. Laptops simply aren't ready for 720p60 or 1080p30. It will take several more years to get 6 to 8 real cores in a laptop. And, it will likely burn your lap. :)
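The "half as big" figure is roughly right on pixel count (assuming square-pixel 1280x720 vs 1920x1080 rasters):

```python
# Per-frame pixel counts for the two HD rasters
pixels_720p = 1280 * 720      # 921,600 pixels per frame
pixels_1080p = 1920 * 1080    # 2,073,600 pixels per frame

# A 720p frame carries a bit under half the pixels of a 1080p frame,
# so at the same frame rate the decoder has under half the work to do.
print(pixels_720p / pixels_1080p)
```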
So far, with my MBP, I've found the free copy of Vegas running under XP under Bootcamp on a 2.53GHz i5 to be "reasonable" -- assuming you like editing with Vegas. My other favorite is far from free. Avid Media Composer v5 is REALLY sweet. It is now far more like FCP -- without FCP's negatives. For the classically trained editor, MC is very intuitive. For someone without decades of editing, Vegas is likely to be EZ to learn.
Noa Put October 9th, 2010, 03:38 AM By the way, I am surprised at how long it takes to read AVCHD files WITHOUT conversion. And that's because as each file is read, it is processed to yield two additional files. One has non-video data and the other I suspect is a "hint" or "index" file that NLEs can use for editing.
I am a premiere pro cs3 user and bought canopus edius pro specifically to handle my dslr footage on my 3 year old pc, importing those native 1080p files and dragging and dropping it onto the timeline is just a matter of seconds. Right after that I can view my footage, not in realtime though but sufficient to do some very rough cutting.
Ron Evans October 9th, 2010, 07:15 AM I mainly edit with Edius 5.5 but also have Vegas 8 and Vegas 9 as well as CS3 on the PC at the moment. The philosophy of Edius is to always edit at the chosen project properties. If that is 1920x1080 then the output is always 1920x1080. This made earlier versions of Edius not work with AVCHD as there was too much processor power needed with the then software algorithms. Vegas from Vegas 8 would edit AVCHD native by reducing the preview resolution to maintain the frame rate of the project, or one can select the preview resolution and the frame rate will drop based on PC power/resources.
My sources are from SR11, XR500 and NX5U; both Edius 5.5 and Vegas will edit these native if there is only one track, on my Q9450 quad core with 8G RAM running Win Vista 64. For multicam I still need to encode to the Canopus HQ intermediate, but I am aware that those with faster processors can do multicam native now in Edius 5.5.
I agree with Steve that native is preferable and went this way with HDV as soon as the editors were able which for Vegas and Edius was some time ago. I will also go full native for AVCHD as soon as I get a PC upgrade.
What one must look at though is the quality of the conversions through to the output. For editing the conversion is to RGB/YUV for preview but for export the conversion is from source through to selected output format. It may well be that choosing intermediates at some point leads to a better outcome depending on the NLE. As an example I prefer TMPGenc encode so I export a HQ file from Edius and do my encoding for discs in TMPGenc.
Ron Evans
Steve Mullen October 9th, 2010, 08:05 PM I was reading about the firmware update to the nex 5 today. It will finally get aperture priority mode when shooting video. The comment was that in bright light an nd filter will be needed to keep shutter speed low.
This is true, of course. But, with the vg10 even when you use shutter priority, you still need an nd filter, for two reasons.
At the ideal 1/60, the aperture may be f/22 which is too small. Your video will be soft.
To get a shallow dof, you need to keep the aperture near, but not at, the largest f-stop at the current focal length.
Because you need to do this under varying light levels, it seems a variable density nd filter is a must. That being said, I've found an nd8 (3 stops) to work most of the time outdoors.
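For anyone following the ND arithmetic in this post: a filter's "stop" value is the log base 2 of its light-reduction factor, and every stop of light you cut lets you open the aperture one stop, i.e. divide the f-number by the square root of 2. A quick sanity check (the f/22 starting point is just the example from above):

```python
import math

def nd_stops(factor):
    """Stops of light cut by an ND filter; e.g. ND8 cuts 2**3 = 8x the light."""
    return math.log2(factor)

def open_up(f_number, stops):
    """f-number after opening the aperture by the given number of stops."""
    return f_number / (math.sqrt(2) ** stops)

print(nd_stops(8))                 # ND8 = 3 stops
print(open_up(22, nd_stops(8)))    # f/22 opens up to roughly f/8
```

So an ND8 in front of a lens pinned at f/22 by bright daylight gets you back to around f/8, away from the diffraction softness Steve describes.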
It is amazing that, with the exception of the new Nikon D7000, none of these cameras have a built-in ND filter.
Bottom line, getting manual control isn't enough.
Robert Young October 10th, 2010, 12:56 AM Because you need to do this under varying light levels, it seems a variable density nd filter is a must. That being said, I've found an nd8 (3 stops) to work most of the time outdoors.
I shot a big, long event gig today completely with the VG10.
Almost all of the shooting was ENG style, a lot of AF and AE. All of the daylight shots were @ 1/60, and using the VariND. I could easily & quickly stay b/t f 3.5 and f 8 depending on what I needed.
It all seemed to work very well. The cam was easy to shoot with under pressure, and the VariND provided the missing link for outdoor shooting.
I'm just now downloading the footage- we'll see what I've got.
David Bankston October 12th, 2010, 06:33 AM Yes. But I would like to know first:
1. If the camera outputs uncompressed HDMI when shooting.
2. Whether the camera shoots interlaced or progressive, even if the latter is most likely PsF.
30P or 60i – Which Is It? - Quoted from Luminous-Landscape Site
Most people interested in video know that interlaced is old tech, that all modern TVs are progressive devices, and that therefore progressive video recording is to be preferred, because it potentially offers twice the vertical resolution of interlaced. But Sony AVCHD video cameras (at least until one gets to their higher-end prosumer and pro gear) are spec'd as recording 60i. But, when you read the fine print you discover that at least some of these cameras (such as the VG10) really aren't recording 60i; they're capturing 30P and placing it in a 60i "wrapper".
Confused? Well, you're not the only one. I'm unable to write a knowledgeable treatise on the subject, but in brief, what's going on is that the AVCHD standard doesn't include 30P, but it does include 60i. This is relevant for those that want to burn a Blu-Ray disk because Blu-Ray uses the AVCHD standard. (Ok – everyone that burns Blu-Ray disks please hold up your hands. Humm. I thought so.)
So what we have is confusion. Most of Sony's AVCHD cameras (including the VG10) capture 30P, but make it appear to other devices as 60i. Most non-linear editors, such as Final Cut, figure this out by themselves, and when you check the Info screen it will confirm that the footage is 30P.
Other manufacturers who adhere to the AVCHD standard, for reasons best known only to themselves, try and avoid this confusing state of affairs, capture 30P, and also record as 30P. No interlace confusion.
So if you are looking at a Sony AVCHD camcorder that is spec'd as 60i, be assured that where the rubber meets the road (in your NLE) you will likely be working with true 30P footage. I can't imagine how many camcorder sales Sony loses each year because of this confusion.
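The "30P in a 60i wrapper" (PsF) idea from the quote above is easy to demonstrate: each progressive frame is split into its even and odd lines as two "fields". A weave deinterlace reconstructs the original frame exactly, while a bob deinterlace (taking one field and line-doubling it) throws away half the vertical resolution. A toy sketch with a made-up 8x4 "frame":

```python
import numpy as np

frame = np.arange(32).reshape(8, 4)    # tiny stand-in for a progressive frame

# PsF: record the progressive frame as two interlaced "fields"
top_field = frame[0::2]                # even lines
bottom_field = frame[1::2]             # odd lines

# Weave deinterlace: interleave the two fields back together
woven = np.empty_like(frame)
woven[0::2] = top_field
woven[1::2] = bottom_field

# Bob deinterlace: take one field and line-double it
bobbed = np.repeat(top_field, 2, axis=0)

print(np.array_equal(woven, frame))    # True -- weave recovers the 30P frame losslessly
print(np.array_equal(bobbed, frame))   # False -- bob has only half the vertical detail
```

This is why NLEs that recognize 60i-wrapped footage as 30P can recover it with no loss, and why a naive bob deinterlace of the same file looks softer.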
Ron Evans October 12th, 2010, 07:08 AM David,
Most Sony AVCHD camcorders do not capture at 30P; they are in fact true interlaced 60i, with the temporal motion of 60 exposures a second (the field rate). Other than the VG10 and a few others like the Bloggies (which record at 30P and 60P but not 60i), the Sonys are all 60i.
I for one like this as I really do not like the temporal motion of slow frame rates. I too would love progressive, but at 60p or faster. Some of the latest Sony AVCHD camcorders like the CX550 will output 60P from the HDMI, essentially deinterlacing in camera to match most displays, which are 60P or faster in NTSC lands. A VG10 with selectable progressive frame rates would be nice for those who want a film look and those who want a real look. But neither 30P nor 60P at full resolution (1920x1080) is part of the spec!
Ron Evans
Steve Mullen October 12th, 2010, 03:52 PM "30P or 60i – Which Is It? - Quoted from Luminous-Landscape Site"
When "reviewers" ask a question like this it indicates that still photographers don't understand video. And, in most cases when they review the video capabilities of these new cameras they start to babble nonsense. But, it is understandable.
When Sony USA PR doesn't "understand," it shows how Sony/Pana/JVC Japan works. The engineers in Japan understand that their chip can't run faster than 30Hz. They know BD and AVCHD don't record at 30p. And, they know if you put the 30p in a file tagged as 60i, the file will be compatible with EVERYTHING. So they send Sony USA the specs that say "60i." Since all the consumer marketing groups are staffed with low-paid boys and girls (mostly the latter) they simply print what Japan sends them. (They don't know to ask questions or are too afraid to ask questions.) And reviewers print what the PR sheet says.
The same confusion came with AVCHD Lite which claimed to be 60p. But, the chip runs at 30Hz.
In both cases, the goal was to make files that are compatible with equipment and NLE. There is no 720p24, 720p25, or 720p30 in the BD/AVCHD world. There are only 720p50 and 720p60. Which by the way are what are broadcast. Likewise 50i and 60i.
All the NLEs I've tested see the clips as 50i or 60i. Make a 50i or 60i Sequence and all will be well. You can go right to AVCHD DVDs or BDs.
Unfortunately, for the non-consumer buyer, all will not be well. We'll notice the odd motion from 25fps and 30fps. We'll wonder how much quality will be lost when our NLEs needlessly perform interlace scaling for FX. Alternately, if we drop the clips in a 25p or 30p Sequence, we'll be pained that each clip will be erroneously deinterlaced.
We'll worry that if we upload an interlaced file to Vimeo, it will be automatically deinterlaced, causing a loss in vertical resolution. And we'll ask, "should we deinterlace when making 720p25/720p30 for the Net?"
Lastly, we'll wonder whether the deinterlacers in our HDTVs are adaptive or not. Will they weave deinterlace or only bob deinterlace?
So for the intended consumer market, calling it 50i/60i makes a good deal of sense. This group will buy because it is very sexy looking and has the magic buzzword "BIG CHIP." And, it does shoot beautiful video!
When we try to buy a $2000 camcorder rather than a $6000 product, we're doing exactly what Sony does not want us to do. This likely explains why there's no 24p. We are supposed to buy products from the PRO division!
Adam Gold October 12th, 2010, 05:11 PM I can't imagine how many camcorder sales Sony loses each year because of this confusion.
Um, my guess is "none." Pros understand the difference and know how to work with it, and consumers don't care -- they just know it works.
Graham Hickling October 12th, 2010, 07:10 PM That's a harsh call to say the LL review is babbling nonsense.
Plus, to add to the confusion, what on earth is this about?: "There is no 720p24 .... in the BD/AVCHD world"
BD spec:
1920×1080 59.94-i 16:9 2D encodes only
1920×1080 50-i 16:9 2D encodes only
1920×1080 24-p 16:9
1920×1080 23.976-p 16:9
1440×1080 59.94-i 16:9 (anamorphic) MPEG-4 AVC / SMPTE VC-1 only
1440×1080 50-i 16:9 (anamorphic) MPEG-4 AVC / SMPTE VC-1 only
1440×1080 24-p 16:9 (anamorphic) MPEG-4 AVC / SMPTE VC-1 only
1440×1080 23.976-p 16:9 (anamorphic) MPEG-4 AVC / SMPTE VC-1 only
1280×720 59.94-p 16:9
1280×720 50-p 16:9
1280×720 24-p 16:9
1280×720 23.976-p 16:9
720×480 59.94-i 4:3/16:9 (anamorphic)
720×576 50-i 4:3/16:9 (anamorphic)
Steve Mullen October 12th, 2010, 08:37 PM ".. what on earth is this about?: "There is no 720p24 .... in the BD/AVCHD world" "
My god, you caught me!
Bill Koehler October 14th, 2010, 06:24 AM Pretty good observations Dave.
Unfortunately for Sony, Panasonic hasn't been standing still either, and judging by the phenomenal interest in the AF100, I think Sony has REALLY missed the boat on this one. Only time will tell, but by the end of the year, I think Sony will definitely be on the back foot for quite a while.
Cheers,
Vaughan
I disagree. Reason? They are targeted at two very different demographics. The NEX-VG10 at $2K USD is targeted at the keen amateur. But even the keen amateur is rarely running XLR mics much less HD-SDI + external recorders and/or monitors, ND filters, etc. etc. etc. The Panasonic AG-AF100 on the other hand, at $5K USD, is targeted strictly at the professional videographer and has the feature set to prove it. And this is where you live so this is what appeals to you.
Andy Wilkinson October 14th, 2010, 06:45 AM I agree with you Bill. I had a hands-on with the Sony yesterday at the ProVideo 2010 event here in the UK. It is definitely a "rich kid's toy", no way a pro camera (but I'm sure there are/will be some lovely films made with it).
Now the Panasonic sure looks much more of a professional tool in every way (...although the codec still worries me a bit).
Steve Mullen October 16th, 2010, 01:43 AM Actually, the shallower the DOF, the fewer background details there are to compress, so the compression is more efficient and the load on the codec is lighter.
David Heath October 17th, 2010, 07:37 AM So for the intended consumer market, calling it 50i/60i makes a good deal of sense.
The straight truth is that it is 25/30psf, and the lamentable fact is that the term psf is so little used, and even less widely understood. At the same time it is both of i25/30 and p25/30 - and at the same time it's neither of them. It's psf25/30, end of story.
What that means is that the pictures look like p25 in terms of motion rendition, but behave like i25 in terms of equipment compatibility. Steve, I agree with pretty well all you say, but if I bought a camera marked i/25 (50i) I'd expect 50Hz motion rendition. In this case, I'd get 25Hz.
David quoted the following sentence earlier: "......some of these cameras (such as the VG10) really aren't recording 60i, they're capturing 30P and placing it in a 60i "wrapper" ". "Psf" is the simple way of saying "capturing 30P and placing it in a 60i "wrapper" ".
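For anyone who finds "psf" abstract, here is a minimal numpy sketch of my own (an illustration of the idea, not anything Sony or an NLE actually runs) of what "capturing 30P and placing it in a 60i wrapper" amounts to: the progressive frame is split into two fields for transport, and weaving them back together recovers the original frame exactly.

```python
import numpy as np

def segment(frame):
    """PsF 'wrapping': split a progressive frame into its two fields."""
    return frame[0::2], frame[1::2]   # top field, bottom field

def weave(top, bottom):
    """Reassemble the original progressive frame from its two fields."""
    frame = np.empty((top.shape[0] * 2, top.shape[1]), dtype=top.dtype)
    frame[0::2] = top
    frame[1::2] = bottom
    return frame

frame = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
top, bottom = segment(frame)
assert np.array_equal(weave(top, bottom), frame)  # weaving is lossless
```

Because both fields come from the same instant, motion looks like 25/30p even though the stream is flagged interlaced.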
Robert Young October 17th, 2010, 02:29 PM Unfortunately, for the non-consumer buyer, all will not be well. We'll notice the odd motion from 25fps and 30fps. We'll wonder how much quality will be lost when our NLEs needlessly perform interlace scaling for FX. Alternately, if we drop the clips in a 25p or 30p Sequence, we'll be pained that each clip will be erroneously deinterlaced.
We'll worry that if we upload an interlace file to Vimeo, will they automatically deinterlace it causing a loss in vertical resolution. And, we'll ask "should we deinterlace when making 720p25/720p30 for the Net."
For me, these are the operative questions regarding 30psf.
I can use a native 60i sequence, mix VG10 and CX550 (60i) and if going out to BR, I'm not concerned. But what happens to a VG10 60i sequence when it gets "deinterlaced" for web?
One workaround for me: if I use the Cineform DI, the CF converter can be instructed to "interpret" the raw footage as 30p. It will simply reassemble the fields into the original 30p frames without applying any deinterlacing algorithm. There may also be a way to have CS5 "interpret" the raw footage as 30p without deinterlacing. I haven't tried that yet. Anyway, as 30p, I feel more confident that whether going to web, BR, or DVD, there'll be no unanticipated tampering with the images.
But, I feel like there are still many unanswered questions at this point.
Simon Wyndham October 17th, 2010, 02:37 PM All you have to do is tell your NLE that the footage is progressive. Easy. No de-interlacing required. Have people forgotten that any SD camera that records 25p or 30p is also recording to PsF? It was never a problem then, and it isn't a problem now.
Robert Young October 17th, 2010, 03:02 PM ...Have people forgotten that any SD camera that records 25p or 30p is also recording to PsF? It was never a problem then, and it isn't a problem now.
Hmmm...
I hadn't forgotten, I just never knew that to begin with.
That is what I suspected re CS5- just open a 30p sequence and drop the psf onto it.
However, if you allow CS5 to open a default sequence based on the footage, it will be 60i.
A 30p sequence seems like the smartest way to handle VG10 footage, rather than an interlaced one.
Simon Wyndham October 17th, 2010, 03:09 PM There's slightly more to it than dropping the footage onto a 30p or 25p timeline. If you do that the NLE might assume that the footage is interlaced and perform a deinterlacing algorithm on it. I say *might* because depending on the system it may or may not understand that the footage is already progressive.
For example I always had to tell both Vegas and FCP that my XDCAM SD footage was progressive. So you will need to set up a 25p or 30p timeline, but make sure that the individual clip properties are set to progressive or no field order.
As David said;
At the same time it is both of i25/30 and p25/30 - and at the same time it's neither of them.
So it actually doesn't matter if you use an i or a p timeline. If you use an interlaced timeline you will still see progressive, and if you use a progressive timeline, as long as you tell the NLE that the footage has no field order, you will also see progressive.
The key is the encoding at the end of the chain (to BluRay or DVD), to make sure that the compressor knows what it is receiving, and setting the output accordingly.
David Heath October 17th, 2010, 05:51 PM Have people forgotten that any SD camera that records 25p or 30p is also recording to PsF? It was never a problem then, and it isn't a problem now.
Whilst true, there are caveats to that, Simon.
It's quite true that an SD camera is unlikely to record progressive directly, and extremely likely to record it as psf. If the editing software then reconstructs the original progressive, fine, but if you were to simply display it on an interlace display with no further processing, you'd get horrible 25Hz "twittering" on horizontal or near horizontal lines.
For that reason, if the psf pictures are likely to be viewed directly via an interlace system, it's very desirable to line average the original progressive image. That obviously softens the vertical resolution, but it's a small price to pay for getting rid of the line flicker. If psf is simply being used as a carrier, the destination being an NLE which will reconstruct true progressive, you obviously don't want line averaging.
Robert Young October 17th, 2010, 09:11 PM There's slightly more to it than dropping the footage onto a 30p or 25p timeline. If you do that the NLE might assume that the footage is interlaced and perform a deinterlacing algorithm on it. I say *might* because depending on the system it may or may not understand that the footage is already progressive.
In CS5, if I drop the 60i psf on a 30p timeline, it previews normally on a big HDTV monitor- no loss of horizontal rez, no artifact, twitter, etc.
However, I can highlight multiple clips in the project window, activate the pull down menu, select Modify>Interpret Footage>Conform to Progressive (no fields). On preview, it looks exactly the same as the unmodified psf.
So, I'm not sure exactly what PPro is doing- Is it really conforming to 30p automatically when the psf goes on a 30p sequence, or is it just providing great previewing capabilities?
The clip properties window for individual clips gives framerate as 29.97 for both i and p footage, and makes no reference to field order. No help there.
Steve Mullen October 17th, 2010, 11:04 PM There's slightly more to it then dropping the footage onto a 30p or 25p timeline. If you do that the NLE might assume that the footage is interlaced and perform a deinterlacing algorithm on it. I say *might* because depending on the system it may or may not understand that the footage is already progressive.
For the non-consumer there is certainly more to it. Vegas Pro and FCP do need you to re-tag all clips as progressive or they will deinterlace each clip when used in a progressive sequence.
FCE and iMovie do not offer that function. The clips will be deinterlaced -- although I sell software that re-tags interlaced AIC as progressive.
Premiere Elements can only make a progressive Project if you choose the H.264 DSLR setting. (But AVCHD may not play back -- this needs to be confirmed.) If you make an AVCHD Project you can only have an interlaced sequence, but it appears dropping a clip into a sequence MAY change the sequence to progressive (Premiere may look within the data stream) -- BUT during export the sequence is assumed to be interlaced and is deinterlaced if you make 720p30.
EDIUS and Vegas Movie Studio I need to re-check.
Sony, by marketing as 50i/60i saves the consumer from all this complexity. They can just use it. Of course quality is VERY likely to be lost.
Were Sony to market it as 1080i60/30fps or 1080i60/30p -- they would confuse every kid at Best Buy, most every consumer buyer, and a number of "pros." The fact that I still need to re-check NLEs says it's not obvious what exact workflow each one needs.
PS: Media Composer claims the clips are progressive when they are imported to DNxHD for use in a progressive project.
Simon Wyndham October 18th, 2010, 02:04 AM For that reason, if the psf pictures are likely to be viewed directly via an interlace system, it's very desirable to line average the original progressive image.
Yes, the twitter could be an issue with direct viewing. Although when I used to edit SD I also had a very nice consumer CRT (a JVC) that had a progressive mode on it that would combine the two fields from PsF footage. Shame none of the pro monitors I came across at the time could do this! There were probably other consumer displays that could do this too, since many DVD manufacturers failed to correctly set the progressive scan flag on their discs. So TV and DVD player manufacturers had to find a way to properly combine the fields automatically. My TV required me to manually tell it to do so though.
For the non-consumer there is certainly more to it. Vegas Pro and FCP do need you to re-tag all clips as progressive or they will deinterlace each clip when used in a progressive sequence.
FCE and iMovie do not offer that function. The clips will be deinterlaced -- although I sell software that re-tags interlaced AIC as progressive.
In the case of the consumer software the PsF footage can be happily edited on an interlaced timeline. There would be no loss of info. The issue then really comes when it gets to final encoding. But in actual fact it shouldn't make much difference.
Let's say that you have 1920x1080 50i PsF footage and you edit it in your NLE on a 50i timeline. No deinterlacing or loss of line info takes place in that case. Now you want to output a 1920x1080 sequence for Vimeo or YouTube. Conventional wisdom says that you need to deinterlace interlaced footage because of the weird combing effects when shown on a progressive display. However in this case it doesn't matter, because the footage really is still progressive. No combing will be seen even if you don't perform any deinterlacing. You'll see both fields at the same time, as you should.
As David said, there could be an issue on interlaced displays, but to be honest I never really had any issues, certainly no more so than with interlacing itself! Although camera setup did help. But if you are going out to DVD or BluRay then the encoder does need to know that it is receiving progressive scan footage (even if you output from the NLE timeline initially in a 50i file). Although having said that, I have heard that a lot of players these days ignore the field flag and do their own interpretation of whether footage is progressive or not, due to the unreliability of the manufacturers in the past.
One of those things that is complicated yet simple at the same time!
Steve Mullen October 18th, 2010, 08:32 PM "In the case of the consumer software the PsF footage can be happily edited on an interlaced timeline. There would be no loss of info. The issue then really comes when it gets to final encoding. But in actual fact it shouldn't make much difference.
Let's say that you have 1920x1080 50i PsF footage and you edit it in your NLE on a 50i timeline. No deinterlacing or loss of line info takes place in that case. Now you want to output a 1920x1080 sequence for Vimeo or YouTube."
I agree, BUT if any scaling is done there will be a vertical resolution loss. Why? Because each field will be scaled individually and then recombined. During the scale, all the lines are NOT together, so they cannot be used for the scaling math. PIP and 3D FX will suffer.
Moreover, when you export to H.264, you need to be VERY sure your NLE does not assume H.264 is progressive (which is what Apple does) and then auto-deinterlace the 1080i Sequence. (One needs all 1080 lines for the WDTV.)
When you export a 1080i file and upload it to a website, the website may auto-deinterlace. With 720p internet video that loss was serious; even with 1080p now supported, the loss will still be visible. If you can export an H.264 file without auto-deinterlacing, that is the way to go.
===========
FCP forces a deinterlace of every I clip when it is used in a P Sequence.
Since Apple seems not to document what kind of deinterlacing is used, for speed it might well be bob, which is a 40% drop in vertical resolution. About the same loss would occur if blend were used.
I do know that iMovie forces deinterlacing.
So if you use a 1080p Sequence, you must change every clip to P. Therefore, you must be sure your NLE offers this capability!
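To see why bob (and blend) cost so much, here is the same kind of toy sketch; these two-line implementations illustrate the techniques in general, not what FCP actually does internally.

```python
import numpy as np

def bob(frame):
    """Bob: discard one field, line-double the other back to full height."""
    return np.repeat(frame[0::2], 2, axis=0)

def blend(frame):
    """Blend: average the two fields, then line-double the result."""
    return np.repeat((frame[0::2] + frame[1::2]) / 2, 2, axis=0)

# One-pixel horizontal stripes again: detail a single field cannot represent.
frame = np.tile(np.array([0.0, 255.0])[:, None], (4, 4))   # shape (8, 4)
print(bob(frame)[:, 0])     # all 0     -- the white lines vanished entirely
print(blend(frame)[:, 0])   # all 127.5 -- the stripes smeared to grey
```

Either way the finest vertical detail is gone, which is why re-tagging genuinely progressive (PsF) clips, rather than letting the NLE deinterlace them, matters so much.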
Bill Kerrigan October 19th, 2010, 02:39 PM Please, I am dying to know if the HDMI connection puts out uncompressed HD that can be used with a Nanoflash. Has anyone tested it?
I was planning to test the VG10 and nanoFlash today...
But just before I left, I connected an HDMI cable to a monitor... and discovered the camera stopped feeding a video signal to both the viewfinder and LCD panel.
Unless I've done something wrong… it appears that if you want to use a nanoFlash, KiPro Mini or Ninja, it means adding an HD monitor to the camera.
Does anyone know if it's technically possible to change this in firmware?
Robert Young October 19th, 2010, 04:38 PM I've read this before, that HDMI out disables all of the camera monitors.
There is a firmware update for VG10 scheduled for Nov, but I haven't seen exactly what issues it will address.
Bill Koehler October 19th, 2010, 05:30 PM I was planning to test the VG10 and nanoFlash today...
But just before I left, I connected an HDMI cable to a monitor... and discovered the camera stopped feeding a video signal to both the viewfinder and LCD panel.
Unless I've done something wrong… it appears that if you want to use a nanoFlash, KiPro Mini or Ninja, it means adding an HD monitor to the camera.
Does anyone know if it's technically possible to change this in firmware?
Does pressing the Finder / LCD button do anything to toggle the display state?
Just throwing out ideas...
<Edit>
Whoops, my bad. On page 34 of the Owner's Manual, under 'Notes' it states: "No image is displayed on the viewfinder and LCD monitor when signals are output via the HDMI terminal."
Charlie Webster October 24th, 2010, 03:03 PM Well....bit the bullet big time.
I shoot about 5 weddings a year and misc projects, up till now with my PD170 and VX2000. I nearly went HD earlier this summer, but the downscaling issues for DVD output made me ask myself: why? My PD/VX are just as sharp on DVD as any HD, and better in low light than any. Workflow is simple. Clients are happy.
Another hobby of mine has been backcountry photography where I've been using just a super-zoom pani. Horrible noise in the long shots.
So I started to look at alternatives last week and came across the NEX cameras. I settled on the NEX-5, but I needed the big lens, which is 900.00 if you can find it.
Then I noticed a bunch of my searches also pulled up the VG10, which comes with the 18-200. Found this thread and read some more reviews.
End result: just ordered the NEX-5 two-lens bundle (16 + 18-55) at 799.00, plus the VG10, for a total of 2798.00 from Adorama. Ouch!
Of course I'm terrified of the manual zoom and DOF focus issues for weddings, but the season is over and I'll have time to figure it out.
I could have held out for a red scarlet and REALLY spent some dough, but I'm excited to try this setup.
TY to all for the great posts and reflections!
Would love to get some advice on which memory cards to try...
Robert Young October 24th, 2010, 04:06 PM I could have held out for a red scarlet and REALLY spent some dough, but I'm excited to try this setup.
You'd be waiting a long time- RED has finally announced that they are abandoning the Scarlet prosumer product.
The VG 10 is a fun camera that is capable of beautiful photography.
There are a couple of issues:
1) Focus is a big issue- even at f-5, 6, 7, DOF is shallow enough to be noticeable. The auto focus works pretty well, but like all of them, it's not bulletproof. You can press the "Photo" button and the camera will refocus and light up the sensors it is using- at least this tells you what the cam is looking at.
2) Low light is not terrific due to the slow stock lens (f-3.5). You may need to get a faster prime lens to get best low light. Sony is adding several new lenses to their E-Lens series in 2011.
3) Exposure control- to maintain control of both iris and shutter in daylight, you will need to add ND filters, or IMO the best solution, a variable ND filter like the "VariND". Your PD-170 has them built in, the VG10 does not.
4) Image stability- the OIS and IS features on the VG10 are pretty good, but not as effective as I've seen on some of the small chip cams (CX550, for example). Camera movement, particularly "roll" can really spoil an HD shot. You may find a monopod gives more reliable stability than handheld.
5) Manual Zoom- The zoom is rather stiff when new, but loosens and smooths out a bit with use. Nonetheless, handheld zooming is quite difficult. It is definitely more doable with a monopod. You will very likely be cutting back on your zoom shots.
6) Memory cards- either Sony Duo, or SD cards. I've used both, it makes no difference- just be sure the card is rated to handle the max data rate of the camera.
You will enjoy this camera.
There is a learning curve- it is quite different from using the PD-170, and the images will have a very different look than you are accustomed to.
For wedding photography, you should be able to create a really beautiful, more cinematic looking product once you learn the camera.
Have fun!!
Simon Wyndham October 25th, 2010, 12:32 AM You will very likely be cutting back on your zoom shots.
Good. Anything that stops people zooming during a shot is a good thing!
Robert Young October 25th, 2010, 12:34 AM Good. Anything that stops people zooming during a shot is a good thing!
Agreed!!
I'm not missing it a bit.
Charlie Webster October 25th, 2010, 02:27 AM You'd be waiting a long time- RED has finally announced that they are abandoning the Scarlet prosumer product.
The VG 10 is a fun camera that is capable of beautiful photography.
There are a couple of issues:
1) Focus is a big issue- even at f-5, 6, 7, DOF is shallow enough to be noticeable. The auto focus works pretty well, but like all of them, it's not bulletproof. You can press the "Photo" button and the camera will refocus and light up the sensors it is using- at least this tells you what the cam is looking at.
2) Low light is not terrific due to the slow stock lens (f-3.5). You may need to get a faster prime lens to get best low light. Sony is adding several new lenses to their E-Lens series in 2011.
3) Exposure control- to maintain control of both iris and shutter in daylight, you will need to add ND filters, or IMO the best solution, a variable ND filter like the "VariND". Your PD-170 has them built in, the VG10 does not.
4) Image stability- the OIS and IS features on the VG10 are pretty good, but not as effective as I've seen on some of the small chip cams (CX550, for example). Camera movement, particularly "roll" can really spoil an HD shot. You may find a monopod gives more reliable stability than handheld.
5) Manual Zoom- The zoom is rather stiff when new, but loosens and smooths out a bit with use. Nonetheless, handheld zooming is quite difficult. It is definitely more doable with a monopod. You will very likely be cutting back on your zoom shots.
6) Memory cards- either Sony Duo, or SD cards. I've used both, it makes no difference- just be sure the card is rated to handle the max data rate of the camera.
You will enjoy this camera.
There is a learning curve- it is quite different from using the PD-170, and the images will have a very different look than you are accustomed to.
For wedding photography, you should be able to create a really beautiful, more cinematic looking product once you learn the camera.
Have fun!!
Bob TY for drilling down on the basic issues.
I always use a monopod (though of course sometimes it's retracted)---so that won't be a change---but I rarely touch the lens--I will learn.
Simon--amen! I have cursed myself many times while editing for too much zooming---that friggen button is dying to be played with. Never looks great---best is a fast zoom to a new steady frame size.
I'm very interested to see the low light performance- hoping it will be at least as good as the PD with wide adapter in place.
Don't tell me I need one of these:
SAL-70200G | 70-200mm f/2.8 Telephoto Zoom Lens | Sony | Sony Style USA (http://www.sonystyle.com/webapp/wcs/stores/servlet/ProductDisplay?storeId=10151&catalogId=10551&langId=-1&productId=11033557)
or maybe more practical: SAL-2875 28-75mm f/2.8 SAM Constant Aperture
November firmware update will be interesting to see if these lenses are viable for video.
PS: the low light sure looks good here:
http://vimeo.com/groups/nexvg10/videos/15206564
however check the last few seconds of this clip
Nex-VG10 moire reduction and 50/1.4 ND tests on Vimeo
wow. that's a fast lens. I see what you are talking about.
maybe a Sony SAL-50F14
Henry Olonga October 25th, 2010, 08:09 AM You'd be waiting a long time- RED has finally announced that they are abandoning the Scarlet prosumer product.
Are you sure about that, Robert? Do you have a link to this announcement? I thought the price had changed a tad, up by a grand or so, plus it is having the HDR added, but I do think that it's not abandoned. Delays, yes...