DV Info Net

DV Info Net (https://www.dvinfo.net/forum/)
-   Convergent Design Odyssey (https://www.dvinfo.net/forum/convergent-design-odyssey/)
-   -   nanoFlash Public Beta 1.6.226 Firmware Comments (https://www.dvinfo.net/forum/convergent-design-odyssey/487423-nanoflash-public-beta-1-6-226-firmware-comments.html)

Peter Moretti November 21st, 2010 06:28 PM

Quote:

Originally Posted by Rafael Amador (Post 1590255)
Peter,
The mismatch is HUGE.
You simply cannot cut between a picture with one aperture and the same picture with a different aperture.
The picture jumps as if you were cutting between signals out of sync.
They have different sizes on screen.
This may be something originally designed for DV, but it affects every single standard, format, and size.
Just open any QT file and switch between the different aperture options.
rafael

"Classic" and "Encoded Pixels" mode usually look the same. Many times there will be no difference between "Clean" and "Production" modes. One crops for frame size and one doesn't.

So are you saying that the nano is using the wrong pixel aspect ratio for the codec, which would cause a difference between "Classic" nano and "Production" native recording? If so, how come no one, AFAICT, has noticed this? (It's hugely obvious but no one has complained about it?)

Finally, are you saying that the six dots around the center are caused by an Aperture Mode difference? You can see clear as day in the images by Piotr that the difference is caused by chroma bleeding in along the horizontal grey band of the color bars. That has to do with 4:2:2 vs 4:2:0 chroma sub-sampling. It has nothing to do w/ Aperture size. If you can show otherwise, then I will readily admit I'm wrong. I don't want to mislead anyone. But I don't see how you can say that difference is caused by Aperture mode. But, of course, I'll leave my mind open to being convinced otherwise.

Adam Stanislav November 21st, 2010 11:37 PM

Quote:

Originally Posted by Dan Keaton (Post 1590021)
While I understand a Vectorscope and Histogram, I do not consider myself an expert in using these tools.

It is really very simple. The vectorscope plots the chroma on Cartesian coordinates, with Cb being the x-axis and Cr being the y-axis. So, it calculates the Cb and the Cr of every pixel in the image and places a dot in the 2D plane based on those coordinates. It does not tell you how many pixels each dot represents, only that there is at least one pixel in the bitmap with those Cb and Cr values.

The placement of each dot around the circle (i.e., the angle of the imaginary line connecting the dot to the center of the coordinate system) represents the hue of the pixel(s) the dot represents. The distance of the dot from the center (i.e., the length of said imaginary line) represents the saturation of those pixels.

So, any pixel with no saturation (black, white, or any gray value in-between) will be represented by a dot in the center (Cb = 0, Cr = 0). In the case Piotr has presented, the EX image had a lot of pixels with very little saturation (so the dots were scattered near the center), while those same pixels had no saturation in the nF image (one dot in the center). Neither is necessarily “better,” they are just different.
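Adam's geometric description (hue = angle around the circle, saturation = distance from the center) can be sketched in a few lines of Python. This is an illustrative toy using chroma values already centered on zero, not anything from the scope software discussed in this thread:

```python
import math

def vectorscope_point(cb, cr):
    """Map a chroma sample (Cb, Cr) to the hue angle and saturation
    shown on a vectorscope. Cb/Cr are assumed centered on 0
    (i.e. already offset from the 8-bit midpoint of 128)."""
    hue_deg = math.degrees(math.atan2(cr, cb)) % 360  # angle around the circle
    saturation = math.hypot(cb, cr)                   # distance from the center
    return hue_deg, saturation

# A neutral gray (no chroma) lands exactly on the center dot:
print(vectorscope_point(0, 0))  # (0.0, 0.0)
```

This is why every achromatic pixel, regardless of its luminosity, collapses onto the single center dot.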

Final Cut Pro 7 - Scopes - Vectorscope describes how to interpret the vectorscope in Final Cut Pro, but it really applies to any software displaying a vectorscope.

The histogram calculates the Y (luminosity) of each pixel and shows you how many pixels in the bitmap have that Y value. In this case, the x-axis represents the luminosity, the y-axis the number of the pixels with that particular luminosity. The x-axis also shows you the darkest (left extreme) and the lightest (right extreme) colors in the bitmap.
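As a minimal sketch of what such a histogram tallies (in Python, with made-up 8-bit Y values):

```python
from collections import Counter

def luma_histogram(y_values):
    """Count how many pixels share each Y (luma) value.
    The scope's x-axis is the Y value; its y-axis is this count."""
    return Counter(y_values)

hist = luma_histogram([16, 16, 128, 235])
print(hist[16])              # 2 pixels share the darkest value present
print(min(hist), max(hist))  # darkest and lightest Y values: 16 235
```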

At any rate, when color grading, the vectorscope helps you adjust different clips to about the same hue and saturation, the histogram helps you adjust them to about the same range of luminosities.

Dan Keaton November 22nd, 2010 01:36 AM

Dear Adam,

Thank you very much.

I was thinking of the single dot in the center of the nanoFlash vectorscope as being Black.

Is this correct?

Adam Stanislav November 22nd, 2010 02:51 AM

No, the single dot in the center means any type of gray, from black to white. In other words, anything with no chroma regardless of its luminosity (or luminance or brightness).

I have just taken a second look at those graphs and realized they were of the SMPTE bars. In that case I take back what I said about neither being necessarily better than the other. The nF image is clearly superior to the EX image, because the EX has some chroma in the grays, which clearly should be achromatic, as they are in the nF.

There are very few dots in those vectorscopes because there are only a few colors in the SMPTE bars. And as I pointed out, the vectorscope simply shows how many colors are in an image, not how many pixels are of each color.

Neither is perfect due to the MPEG compression, but the nano is considerably better: It shows four different yellows, one green, three cyans, two blues, two magentas, and two reds. There really should be just one or two of each. The white, black, and the grays are all without any chroma (as they should be), so that is excellent.

The EX shows many more colors because it compresses more, so it produces more intermediate colors at the edges where two colors meet. And, unfortunately, it adds some chroma to the grays, which is rather bad.

The nano is a clear winner here.

Piotr Wozniacki November 22nd, 2010 03:05 AM

Quote:

Originally Posted by Adam Stanislav (Post 1590577)
The EX shows many more colors because it compresses more, so it produces more intermediate colors at the edges where two colors meet. And, unfortunately, it adds some chroma to the grays, which is rather bad.

Agreed - this is exactly what I meant when I supposed earlier:

Quote:

Originally Posted by Piotr Wozniacki (Post 1589978)
[...] that the "extra information" would probably be just the EX's lower color resolution garbage.

Thanks Adam!

John Richard November 22nd, 2010 06:38 AM

Maybe Adam, who is more versed than I in scopes, could chime in on the potential effect of software scopes vs. hardware scopes in this discussion.

Most software scopes use only part of the image to display results. Some will only look at and display the results of every 6th or 7th scan line - NOT the whole image, as is done with hardware scopes. The only software scope I know of that attempts to display the vectorscope results of the WHOLE image is the one in Apple's Color (and even that one uses some manipulations to achieve a measurement representative of the whole image).

It would seem to me that we are making some very fine quality-detail judgements with a software vectorscope that is only measuring a very small percentage of the scan lines in the image. I have no idea what ratio of the total scan lines the scopes in Sony Vegas use, but I would bet they measure only a small percentage, as most software scopes do. I think we could be using the wrong type of vectorscope to make such critical technical quality decisions.

Rafael Amador November 22nd, 2010 07:49 AM

Quote:

Originally Posted by Peter Moretti (Post 1590475)
"Classic" and "Encoded Pixels" mode usually look the same. Many times there will be no difference between "Clean" and "Production" modes. One crops for frame size and one doesn't.

So are you saying that the nano is using the wrong pixel aspect ratio for the codec, which would cause a difference between "Classic" nano and "Production" native recording? If so, how come no one, AFAICT, has noticed this? (It's hugely obvious but no one has complained about it?)

Finally, are you saying that the six dots around the center are caused by an Aperture Mode difference? You can see clear as day in the images by Piotr that the difference is caused by chroma bleeding in along the horizontal grey band of the color bars. That has to do with 4:2:2 vs 4:2:0 chroma sub-sampling. It has nothing to do w/ Aperture size. If you can show otherwise, then I will readily admit I'm wrong. I don't want to mislead anyone. But I don't see how you can say that difference is caused by Aperture mode. But, of course, I'll leave my mind open to being convinced otherwise.

Hi Peter,
No.
There is no relation between the "6 dots" and the aperture.
The "6 dots" are clearly generated when the 422 material is down-sampled to 420: the line at the base of the color bars gets merged with the top line of the gray below it, and the 420 blocks generate those pixels with the same tone, but desaturated.
No mystery.
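Rafael's explanation can be illustrated with a toy calculation (hypothetical Cb values, centered on 0): when 4:2:0 conversion merges the chroma of two vertically adjacent lines, a saturated bar sitting on neutral gray yields a new value with the same hue direction but reduced saturation - exactly the kind of intermediate, desaturated dot seen on the vectorscope:

```python
def chroma_420_vertical_average(c_top, c_bottom):
    """Toy model of 4:2:2 -> 4:2:0 chroma down-sampling at a horizontal
    boundary: one chroma sample now covers two lines, so adjacent values
    get averaged (real converters may filter differently)."""
    return (c_top + c_bottom) / 2

# Saturated bar (Cb = 112) directly above neutral gray (Cb = 0):
print(chroma_420_vertical_average(112, 0))  # 56.0 -- same tone, desaturated
```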

About "Aperture": as you can read in the following post, I've asked CD to address this issue.

http://www.dvinfo.net/forum/converge...-aperture.html

I saw long ago that NANO pictures do not match the same pictures on the SxS cards.

I have to apologize, because I pointed Dan to "Aperture" as the reason for the NANO/SxS file mismatch in Piotr's test.
I'm trying to find out if there is something similar to "Aperture" in the MXF format.
I understand that the need for different "presentations" of digital video is not exclusive to QT files.
However, I'm very limited in how far I can dig into the MXF format.
Cheers,
rafael

Piotr Wozniacki November 22nd, 2010 08:27 AM

Quote:

Originally Posted by Rafael Amador (Post 1590633)
About "Aperture": as you can read in the following post, I've asked CD to address this issue.

http://www.dvinfo.net/forum/converge...-aperture.html

I saw long ago that NANO pictures do not match the same pictures on the SxS cards.

Rafael,

Yep - I remembered your post immediately after the "Aperture" term came into play in this thread. I've been wondering the same - is there a similar setting somewhere, for the MXF format?

So far, I've found nothing.

Cheers

Piotr

Piotr Wozniacki November 22nd, 2010 08:39 AM

Quote:

Originally Posted by John Richard (Post 1590621)
Maybe Adam, who is more versed than I in scopes, could chime in on the potential effect of software scopes vs. hardware scopes in this discussion.

Most software scopes use only part of the image to display results. Some will only look at and display the results of every 6th or 7th scan line - NOT the whole image, as is done with hardware scopes. The only software scope I know of that attempts to display the vectorscope results of the WHOLE image is the one in Apple's Color (and even that one uses some manipulations to achieve a measurement representative of the whole image).

It would seem to me that we are making some very fine quality-detail judgements with a software vectorscope that is only measuring a very small percentage of the scan lines in the image. I have no idea what ratio of the total scan lines the scopes in Sony Vegas use, but I would bet they measure only a small percentage, as most software scopes do. I think we could be using the wrong type of vectorscope to make such critical technical quality decisions.

John,

I think you're right that software scopes are not as accurate (and unbiased) as good, properly calibrated and operated hardware instruments - but I believe that for the sake of this simple comparison, they're enough.

What they show is (at least for me) clearly the same as what I'd expect - and Adam's post explains the details very well.

Nevertheless, being knowledge-hungry - I'm still hoping someone will join the party here with even more details, especially on the Aperture setting (whether it has its counterpart in the PC-based NLE world, etc.).

Cheers

Piotr

Adam Stanislav November 22nd, 2010 11:07 AM

Quote:

Originally Posted by John Richard (Post 1590621)
Maybe Adam, who is more versed than I in scopes

Oh, I am no expert. I just understand the mathematics behind it. As for the difference between hardware and software, it is hard to tell without knowing about the specific software used. Some software will only look at portions of the image, some at the entire image. And some will look at the entire image of the current frame but only at portions of the image while playing the video. Add to it that software manuals do not always disclose this information.

Nevertheless, in this case we were comparing the vectorscope of the same image as compressed by two different devices, presumably using the same software for the scope, and the differences were quite clear.

Rafael Amador November 22nd, 2010 11:27 AM

Quote:

Originally Posted by Piotr Wozniacki (Post 1590649)
John,

I think you're right that the software scopes are not as accurate (and unbiased) as good, properly calibrated and used, hardware instruments - but I believe that for the sake of this simple comparison, they're enough.
Piotr

There is no reason for software scopes to be less accurate.
I mean, if there is any kind of limitation, the lack of accuracy is due to poor design of the application.
If well designed, a system may have trouble making a perfect reading in real time on PLAY, but that is just a matter of processing power.
On stand-by the scope should make a perfect reading.
I don't know about other NLEs, but I think that the VideoScope in FC is OK (full scan).
rafael

Peter Moretti November 22nd, 2010 12:07 PM

Quote:

Originally Posted by Rafael Amador (Post 1590633)
...

About "Aperture": as you can read in the following post, I've asked CD to address this issue.

http://www.dvinfo.net/forum/converge...-aperture.html

I saw long ago that NANO pictures do not match the same pictures on the SxS cards.

...

Thanks Rafael. I read the post and understand better what you're dealing with. I don't use FCP, so I can't comment on how Aperture Mode works with it. Be aware that sometimes a program will ignore or default to a particular Aperture Mode setting regardless of what's designated in the QT file. This can cause some very annoying problems.

But I don't see an Aperture Mode problem or mismatch in the colorbars that Piotr posted. The frame sizes are identical, there is no border around one but not the other, and the bars are exactly the same width. Actually, in all the posts I've seen by Piotr comparing EX and nano images, I haven't seen a size disparity. Since Aperture Mode changes can only affect image size (by cropping or by changing the pixel aspect ratio), I never considered Aperture Mode an issue with the nano.

But Piotr is not using FCP, so maybe that's why you are seeing issues that we seem not to be. BTW, have you tried opening the file in QT Pro and manually changing the Aperture Mode?

Perhaps CD can add a firmware choice to change the Aperture Mode used by the nano? IIUC, it's just a flag in the QT wrapper that doesn't affect how the image is recorded; it's strictly for playback.


Thanks and good luck,

Peter

Rafael Amador November 22nd, 2010 02:51 PM

Hi Peter,
Yes, I spoke too soon, without realizing that Piotr was working with MXF files.

However, I think that behind the issue Piotr brought up there could be some similar display-related cause.

- The issue that Piotr's pics show (Post 33) is exactly what FC shows when comparing the same clip with different Presentation/Aperture settings.

- I don't think that Presentation/Aperture is exclusive to QT.
That feature was created to address certain needs when displaying file-based digital video on a computer.
Somehow, MXF files have to address the same needs.

All this said from my absolute lack of knowledge of MXF files, and the fact that I can't dig any deeper into that technology with my Mac.
We will keep trying to put together the pieces of our puzzles :-)
Cheers,
rafael

Peter Moretti November 23rd, 2010 12:32 PM

I can completely understand why you would think, from just looking at the scopes, that the problem is an Aperture Mode issue; I'm sure that's very similar to the way an Aperture Mode discrepancy looks.

If I were CD, IMHO, I'd be hesitant to make an all out change to the Aperture Mode from Classic to Production, but including an option to change it would be nice.

BTW, have you tried changing the Aperture Mode manually in QT Pro? It's under: Window... Show Movie Properties... Conform aperture to.

Good luck, and I really hope this gets resolved for you. B/c the differences can be relatively minor, they can pass before many sets of eyes before someone spots them. But when that happens, all hell can break loose. (I imagine there hasn't been much of an uproar over this b/c MXF is being widely used.)

Take care.

Billy Steinberg November 23rd, 2010 01:56 PM

A comment and a request.

If you want to go to the horse's mouth, click here for the Apple tech note on the functionality of, and how Macintoshes react to, the aperture setting. Also note that Google is your friend; entering "macintosh aperture mode quicktime" brought it up as the first choice. It does not go into detail about whether the aperture mode is a flag in the QuickTime header, or an atom in the data, or even whether it's Apple-specific. I suspect it's in the QuickTime header, but I haven't done the detective work to verify my suspicion. I also don't know whether CD is embedding pixel aspect ratio or not, but that's one of the functions of the aperture mode fields. The edge-cropping flag is another. If CD decides it's worthwhile to implement the aperture mode info in one or more of their file formats (mov, mpg, mxf), then the discussion becomes "what should it be set to, or should it be a user menu selection".

That was the comment; the request is for a moderator to separate out all these extraneous discussions from this thread, the nanoFlash Public Beta 1.6.226 Firmware Comments thread, and for everyone to PLEASE try to stop hijacking threads (of which I guess I'm now also guilty), particularly ones as important as this one.

Billy

Dan Keaton November 23rd, 2010 02:24 PM

Dear Billy,

Thank you for posting the link.

I have read it.

Based on what I read, I have a few questions:

1. It only mentions SD video. Does this make any difference for HD video?
It does not indicate that HD video will be changed at all.

2. We may have chosen "Classic" so that our files work with older versions of Quicktime Player as well as newer versions.

I do not know the answer, but I do not know what will happen if we choose Production and this prevents older versions of Quicktime Player from working. This would have to be something for us to consider.

Billy Steinberg November 23rd, 2010 03:48 PM

Note that the aperture setting does NOT change the way the video is recorded at all, it's just a little information that the playback or editing system can choose to implement or ignore.

The cropping aspects were mostly due to the crap on the right and left of the frame that DV cameras invariably put out. The pixel aspect ratio was there for cameras that didn't use square pixels (DV again, mostly), so the aspect ratio didn't get changed on playback. (Actual pixels versus displayed pixels). As far as I know, this is only relevant in SD.

Whether it was meant to apply to HiDef or not I don't know, but at least on Macintoshes it does seem to have an effect on playback of HiDef images today (even if it's not supposed to). In the early days, before "aperture" and prosumer HD camcorders, there was a setting called "high quality". DV format video suffered when it was turned off, though I always attributed that to coarse de-interlacing rather than alteration of the pixel aspect ratio (it may have been both). Back then almost nothing was shot progressive, and this was an easy way for Apple to get rid of interlace artifacts when video was played back on the (progressive-scan) computer: they de-interlaced when "high quality" was turned off. Of course, they threw away half the vertical resolution that way...

Billy

ps From my reading of the tech note, it looks like older versions of Quicktime Player will ignore the aperture setting, so that probably shouldn't affect your decision. Newer versions of Quicktime can be told to ignore the setting embedded in the video and use whatever you tell them. In any case, more research is needed, particularly with regard to how non-Apple systems react to the "aperture" info. (What an inappropriate name for what this setting does.)

Rafael Amador November 23rd, 2010 05:18 PM

Hi Dan, Billy and Peter,
At the end of the day I don't think the "Aperture" is any big issue.
I think it is enough that people are aware of this fact, in the improbable case that somebody wants to mix the same picture coming from SxS and NANO.
So I don't see any need to change anything in the NANO files.

Billy, you are right that, according to the Apple notes, the "Presentation" was set up to address some issues with DV, but the fact is that the setting is available for every QT movie and can change the properties of the movie.

I guess the NANO-QT files just keep the same "Presentation" option as the original standard SONY XDCAM HD 422.
Cheers,
Rafael
PS: Just to add that I found that difference when I was trying to reproduce Piotr's tests on SxS vs NANO noise.
I saw that I could never fully match the two pictures. It was always like a one-frame offset, but one impossible to correct by trimming in either direction.

Peter Moretti November 23rd, 2010 07:05 PM

I just want to add, with Billy's permission ;), that there are HD formats that don't use square pixels. Some are HDV, XDCAM (not 4:2:2) and XDCAM EX SP. These formats should all benefit from using "Production" over "Classic" Aperture Mode settings.

I am surprised that there is a Classic vs Production difference w/ nano files, but of course it's possible and seeing some frame grabs would be nice.

BTW, Aperture Mode has been part of QT since 7.1, so I think the vast majority of nano users will be using a version of QT that takes advantage of the setting. And like Billy said, I'd be very surprised if the older versions of QT don't just ignore any Aperture Mode setting.

HTH.

Adam Stanislav November 23rd, 2010 09:16 PM

Quote:

Originally Posted by Dan Keaton (Post 1591202)
Does this make any difference for HD video?

Does Programmer's Guide to Video Systems - Lurker's Guide - lurkertech.com answer your question? In brief, the clean aperture takes 16 pixels off the left edge, 16 pixels off the right, 9 pixels off the top and 9 pixels off the bottom in HD, both 1080 and 720. That preserves the 16:9 ratio.

As Billy pointed out, the full frame (production aperture) is in the file; it is just that video players are supposed to cut those edges off. Editors are supposed to use the full production aperture. This gives them some slack at the edges for things like sharpening and softening filters, which need the values of the surrounding pixels to work. That way the contents of the clean aperture can be filtered properly, and the final viewer never sees the edges that could not be filtered properly.

Theoretically, computer players will probably show the full frame and video (non-computer) hardware will only display the clean aperture. I'm no Martha Stewart, but it's a good thing.
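The figures Adam quotes are easy to verify; a quick sketch (crop amounts taken from the Lurker's Guide numbers above):

```python
def clean_aperture(width, height, crop_x=16, crop_y=9):
    """Clean-aperture size for HD: 16 pixels trimmed from each side,
    9 from top and bottom, per the figures quoted above."""
    return width - 2 * crop_x, height - 2 * crop_y

for w, h in [(1920, 1080), (1280, 720)]:
    cw, ch = clean_aperture(w, h)
    print(f"{w}x{h} -> {cw}x{ch}")  # 1888x1062 and 1248x702, both exactly 16:9
```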

Dan Keaton November 23rd, 2010 09:28 PM

Dear Friends,

Billy, Peter and Adam, I appreciate your assistance.

If the consensus is that this applies to HD and "Production" would be a worthwhile change, then we will, of course, consider the change.

I hope everyone understands that we have to be conservative. We can cause a lot of problems for ourselves and others if we make a change and it causes unexpected problems.

I will bring this up with our engineers.

I also want everyone to know that this, at this time, seems like a safe change.
After all, this is used by the Sony EX cameras, and I have not heard of any problem with it.

While in an ideal world we could have an option for everything, having lots of options adds to the complexity of the system, and it intimidates new users. For every new option, there are many that do not understand the option and it creates confusion and uncertainty.

Thus, it would be my choice to investigate this change and then make it. After all, this "Aperture" function is not widely known, nor intuitive even to experts.

Dan Keaton November 23rd, 2010 09:44 PM

Dear Adam,

Others have suggested that we use "Production".

Is this your recommendation as well?

Adam Stanislav November 23rd, 2010 10:06 PM

Yes, I would go with Production since that is what you are actually saving into the files.

Peter Moretti November 23rd, 2010 11:11 PM

Dan,

If you want to play it most conservatively, you might want to test if changing to Production makes any difference to those users who are having problems. That can pretty easily be done by changing the Aperture setting in Quicktime Pro and saving the file. Here's a step-by-step:

Open the file in QT Pro, choose Window, Show Movie Properties, and click on the Presentation tab. There is a check box called "Conform aperture to:" and next to it a pulldown menu with the four different Aperture Mode choices. Choose "Production Aperture" and save the file. That should do the same thing as changing the Aperture Mode that the nano writes.

This way those users can test to see if the Aperture setting change makes any difference. But of course if you think it really isn't necessary to test this way, then I would imagine that a change to Production would not cause any problems.

What I do find odd is that Aperture Mode is causing a mismatch between the EX and nano files. B/c square pixel files (which XDCAM 422 uses) should display the same using Classic or Production, AFAIK. (That's b/c Production does not crop the edges, it only adjusts for pixel aspect ratios that are not 1:1.)

Best of luck with all this :)

Dan Keaton November 24th, 2010 04:48 AM

Dear Adam and Peter,

We will investigate this.

Right now, based on the document that I read, for HD, I am guessing that changing to Production will not make any difference.

But, we need to run some tests.

We welcome others interested in this to run their own tests, using the procedure provided above, and report their results. Our employee who would normally run this test is out sick, so there will be a delay before we can test it.

Piotr Wozniacki November 24th, 2010 07:35 AM

Dear Dan, Peter, Rafael and Billy,

I'm of course following your posts on the Aperture setting in QT files with great interest, but - having no access to Mac/full QT player / FCP, and only working with MXF - I cannot help much. Has anyone found any information about using a similar setting in MXF files?

That said, I still believe the differences in scopes I posted are mainly due to the 420 vs. 422 color subsampling.

Cheers

Piotr

Piotr Wozniacki November 24th, 2010 01:41 PM

As one of those who changed the subject of this thread, I'd like to repeat my kind request to Chris to move the posts related to the Aperture subject into a new thread of their own.

Coming back to the current Beta discussion, I'd like to say that - to my very positive surprise - I have discovered that my Transcend 400X, 64GB cards now work at a 280 Mbps bitrate!

Dan, if CD has improved the nanoFlash performance yet again - then all I can say is WOW, thank you :)

Piotr

PS. Please answer my question about the slight audio lag, though....

Dan Keaton November 24th, 2010 02:15 PM

Dear Piotr,

Please be careful in testing for 280 Mbps.

Please record a test until the card completely fills up.

We have a huge buffer in the nanoFlash. A slower card will appear to work at a higher speed due to this buffer, but, over time, our buffer will fill up, and we will have to down-shift to a slower bit-rate.

(And most any card will work at 280 Mbps if one is doing a time-lapse sequence.)


For the audio delay, could you run a test for us?


Record internally, and record to the nanoFlash.

Have a clapper or other similar device, very close to the lens and microphone.

Test, in post, whether the audio and video are aligned.

Test using Sony Vegas Pro 10.

Then check the footage in Sony Clip View 2.30.

Then test the nanoFlash clips.

I will say that it would be possible for the nanoFlash to be 0.004 seconds off.

Before we run the audio alignment tests, I will need to know the details of when you noticed they were 0.004 seconds off: what frame rate, embedded or analog audio input, the specific camera, how far the sound source was from the mic, etc.

Rafael Amador November 24th, 2010 02:21 PM

Dear Dan and all,

APERTURE:
I've been playing with the "Aperture" on different NANO and SxS clips (1920x1080 and 1280x720).
I can't say more than: the more I play, the more confused I get.
I don't know if I'm doing something wrong or if QT Player is behaving in an erratic and unexpected way.
In short: applying the same aperture to the same clip does not always show the same picture. Changing modes, sometimes the picture shows four different "aspects". Sometimes I shift modes and get only two "aspects". Sometimes there is no change at all:
it seems that the 4 modes show the same thing. Then I have to close the picture and open it again to make the "Aperture" control functional.
So, sorry, but nothing is clear.
So, two options:
- Leave things as they are.
- Change the mode. In that case I think the best option would be "Encoded Pixels".
As Apple says:
"Encoded Pixels: Neither crops nor scales the video. A DV NTSC (4:3 or 16:9) track appears as 720 x 480."
With HDTV formats, we have nothing to correct. We know that we have 1920x1080 or 1280x720 square pixels.
That's exactly what we have to display: FULL/PLAIN HD.

PIOTR's Test
SxS and NANO pictures compared on the Vegas VideoScope:
The same difference that Vegas shows is also shown by FC.
But I don't worry about the differences on the Waveform or on the Vectorscope. I think those differences are understandable, due to the 422 vs 420 compression.
The difference to consider is on the Histogram.
For me, here is the problem.
What should a Histogram show?
The Waveform shows luma, and the unit of measure is IRE.
The Vectorscope shows the phase of the color components, and the resulting chroma vector.
But what should the Histogram show?
What does it measure?
Which units does it use?
Well, the Histogram doesn't really measure anything.
It is a spatial representation of the luma values of the whole picture (some systems/software also have an RGB Histogram).
The Histogram tries to give a visual idea of the main "luma bands" (?) in the picture.
You need accuracy on a Waveform and on a Vectorscope because otherwise you are at risk of going "illegal", but YOU REALLY DON'T NEED MUCH ACCURACY ON A HISTOGRAM.
Those little differences don't change anything for the video editor/colorist.
So I think the differences may be due to a not very exacting design of the filter.

So take the Histogram for what it is intended to be: a rough visual representation.

Make a test:
Transcode your SxS and your NANO clips to any 8-bit and 10-bit uncompressed format and compare their Histograms.
You will find that the 6 clips show 6 different Histograms.
Not much to be concerned about anyway.
Cheers,
rafael

Piotr Wozniacki November 24th, 2010 02:48 PM

1 Attachment(s)
Quote:

Originally Posted by Dan Keaton (Post 1591650)
We have a huge buffer in the nanoFlash. A slower card will appear to work at a higher speed due to this buffer, but, over time, our buffer will fill up, and we will have to down-shift to a slower bit-rate.

Dear Dan,

I'm aware of the buffer, but with the previous firmware, it filled up and the nano down-shifted pretty quickly when recording 1080/25p I-Fo at 280 Mbps.

Now, of course, I didn't have the time or patience to fill up an entire card, but I recorded more than 5 minutes of video in several files. It never down-shifted!

As to the audio lag, the picture below shows the waveforms of the nano (upper) and native (bottom) clips, recorded simultaneously on the EX1, with the nano fed from SDI. They have been aligned on the Vegas timeline with single-video-frame accuracy. The ruler is scaled in seconds. I really don't know what else I can say.

Thanks,

Piotr

Dan Keaton November 24th, 2010 03:17 PM

Dear Piotr,

To prove that the 280 Mbps will actually work for you, for long takes, you will need to fill up the card.

You can also use one of our new features to "see inside" the nanoFlash.

We now have a FIFO level display. FIFO is the First In, First Out buffer.

The lower the percent of the FIFO that we are using the better.

If the FIFO creeps up and then goes back down, then this is normal.

If it jumps up and then stays up, then the card is too slow for the Bit-Rate in use.

The following is from our latest manual 1.6.226.

Fifo Meter Display:

Displays the CF card's ability to keep up with the data rate of the video.

From the main menu, press and hold the left arrow key, and then press the record button to initiate
a record session.

If the Fifo meter rises over time towards 100%, the CF card is too slow to
handle the data rate. (Small spikes in the meter will appear during file transitions.)
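The FIFO behavior Dan describes, where the meter creeping up and draining back is normal while a sustained climb means the card cannot keep up, can be modelled as a simple producer/consumer balance. The buffer size and card-speed trace below are purely illustrative (the real buffer size is, as Dan says later, a trade secret):

```python
def simulate_fifo(bitrate_mbps, card_mbps_trace, fifo_mb):
    """Track FIFO fill level: the video fills it, the card drains it.

    bitrate_mbps: recording bit-rate in megabits/s (e.g. 280).
    card_mbps_trace: per-second sustained card write speed, megabits/s.
    fifo_mb: buffer capacity in megabits (hypothetical value).
    Returns per-second fill percentage; reaching 100% means the
    recorder would have to down-shift to a lower bit-rate.
    """
    level = 0.0
    history = []
    for card_mbps in card_mbps_trace:
        level += bitrate_mbps - card_mbps   # net megabits added this second
        level = max(level, 0.0)             # buffer can't go below empty
        history.append(min(100.0, 100.0 * level / fifo_mb))
    return history

# Card sustains 320 Mbps but stalls to 100 Mbps for two seconds:
trace = [320] * 5 + [100, 100] + [320] * 5
fill = simulate_fifo(280, trace, fifo_mb=2000)
# The fill creeps up during the stall, then drains back down: normal.
```

A card whose sustained speed stays above the bit-rate keeps the meter near zero; only a stall longer than the buffer can absorb forces a down-shift.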


As far as the audio, I see that the audio is different by 4 ms.

Which one is closest to being in-sync with the video?

How far away from the mic is the sound source?

In dry air at 20 °C (68 °F), the speed of sound is 343.2 metres per second (1126 ft/s). (Wikipedia)

Thus 0.004 seconds is about 4.5 feet away or 1.372 meters. (Dan)
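Dan's arithmetic checks out, and it is also useful to express the 4 ms offset in audio samples. The 48 kHz sample rate below is an assumption (it is the usual rate for HD-SDI embedded audio), not something stated in the thread:

```python
# Distance sound travels in 4 ms, and the same offset in audio samples.
speed_of_sound = 343.2           # m/s in dry air at 20 degrees C
delay = 0.004                    # seconds (the observed offset)

distance_m = speed_of_sound * delay      # about 1.3728 m
distance_ft = distance_m / 0.3048        # about 4.5 ft
samples_48k = delay * 48000              # about 192 samples at 48 kHz
```

So a mic moved about 4.5 feet produces the same offset as slipping the audio by roughly 192 samples in the NLE.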

Rafael Amador November 24th, 2010 03:47 PM

Quote:

Originally Posted by Dan Keaton (Post 1591650)
Dear Piotr,

We have a huge buffer in the nanoFlash. A slower card will appear to work at a higher speed due to this buffer, but, over-time, our buffer will fill up, and we will have to down-shift to a slower bit-rate.

Dear Dan,
Just for curiosity, how big is the buffer?
rafael

Peter Moretti November 25th, 2010 01:08 AM

Quote:

Originally Posted by Dan Keaton (Post 1591676)
...
As far as the audio, I see that the audio is different by 4 ms.

Which one is closest to being in-sync with the video?

How far away from the mic is the sound source?

In dry air at 20 °C (68 °F), the speed of sound is 343.2 metres per second (1126 ft/s). (Wikipedia)

Thus 0.004 seconds is about 4.5 feet away or 1.372 meters. (Dan)

Sorry to jump into this, but I had mentioned mic distance to Piotr, and he responded that:

1) The EX and nano were recording sound simultaneously from the same mic, so any delay caused by distance should be the same for both.

2) He never experienced this delay until this new firmware upgrade.

(I hope I characterized his responses correctly.)

Okay, I'll leave y'all alone now :).


-Peter

Dan Keaton November 25th, 2010 02:48 AM

Dear Peter,

Thank you.

1. Yes, that makes perfect sense, same distance for both.

I did the calculations to show how critical the distance is, even a short distance, for getting the audio and video synced.

2. That may be true also, but did Piotr run these tests earlier?

Of course we want to get the audio/video sync dead on.

But a lot goes into this, and a lot also goes into the camera to get the audio and video synced up.

We do not know, for a fact, if the audio and video are perfectly lined up in the internal recording, and if this exactly matches the HD-SDI out. It may be, and I would expect it to be, but we have to consider this possibility.


Also, I am attempting to envision a reason why, if one is recording to an EX camera and to the nanoFlash simultaneously, one would need to intercut footage from the EX with the nanoFlash footage.

But, as I said, I want to get this audio / video sync as close as possible.

Right now, 0.004 seconds, or 1/250th of a second, is pretty close, but not perfect.

The Vegas Timeline Piotr has posted has been zoomed in greatly to show this offset.

Piotr Wozniacki November 25th, 2010 04:20 AM

Quote:

Originally Posted by Dan Keaton (Post 1591912)
Also, I am attempting to envison a reason, if one is recording to an EX camera, and to the nanoFlash simultaneously, why one needs to intercut footage from the EX with the nanoFlash footage.

Dear Dan,

In the type of recording I mentioned earlier in this thread, I usually shoot the live classical music performances with 3 cameras (EX1's). In the ideal world, they all should be recording to nanoFlashes - but unfortunately, I only have one. So I'm using it with the main camera, and the nanoFlash files get intercut with the native XDCAM EX from the other two (I know it's not a good scenario, but at least I have some material of the highest quality possible).

So, as you can realize, it's of paramount importance that for multi-camera edits, I can be sure all images are in the same position in time in relation to the sound (which is BTW recorded separately, the on-camera sound being only used for reference during editing).

You could say now that the offset can be greater due to the cameras being located at different distances from where the microphones are, and you'd be right - but the human brain is very smart, and when watching a musician from some distance, it allows for a slight delay in sound. On the other hand, no delay whatsoever is tolerated for close-ups - and when I show them from 2 or 3 different angles, I must do this tedious sound slipping by milliseconds in order to get it right....

Piotr

Piotr Wozniacki November 25th, 2010 07:10 AM

Quote:

Originally Posted by Rafael Amador (Post 1591689)
Dear Dan,
Just for curiosity, how big is the buffer?
rafael

I second this question, Dan - right now I'm recording at 280 Mbps on my Transcend 400X card, and the buffer only filled up to some 30% at the beginning, but is now at no more than 10%...

So, could it be true that with the new firmware, the Transcend cards are fully capable of 280 Mbps?

Piotr

PS. Still recording at 280, and the buffer bar is barely above 0%!

Russell Heaton November 25th, 2010 07:35 AM

Call me stupid, but looking at Piotr's Vegas timeline, the larger increments on his scale are 0.002 seconds and the smaller divisions are 0.0002 seconds. So it looks to me that any delay might be in the order of 0.0001 seconds because, to me, that's all the shift between the waveforms appears to be.

Put another way, that's about 100 microseconds. I can't believe it is even being considered a problem, unless it is cumulative. If it is constant then who cares? I defy anyone to hear that delay in a real-world application.

Cheers

Russ

Piotr Wozniacki November 25th, 2010 07:52 AM

Quote:

Originally Posted by Russell Heaton (Post 1591959)
Call me stupid, but looking at Piotr's Vegas timeline, the larger increments on his scale are 0.002 seconds and the smaller divisions are 0.0002 seconds. So it looks to me that any delay might be in the order of 0.0001 seconds because, to me, that's all the shift between the waveforms appears to be.

Put another way, that's about 100 microseconds. I can't believe it is even being considered a problem, unless it is cumulative. If it is constant then who cares? I defy anyone to hear that delay in a real-world application.

Cheers

Russ

Russ,

It's exactly 0.004 sec - please compare the length of the selection bar with the distance on the ruler between the 22,370 and 22,374 marks...

Dan Keaton November 25th, 2010 08:11 AM

Quote:

Originally Posted by Rafael Amador (Post 1591689)
Dear Dan,
Just for curiosity, how big is the buffer?
rafael

Dear Rafael and Piotr,

It is very large, but I consider the size to be a trade secret.


Dear Rafael,

Our recommended bit-rates have to be conservative.

Not all cards of a certain brand/type perform exactly the same.

And cards get busy, at times.

If 280 Mbps works with your Transcend 400x 64 GB cards, then great.

And I have no problem with others testing their cards.

Just remember, I recommend taking the time to record until the card is full during the test.

As everyone should be aware, we have a program to constantly refine the firmware in the nanoFlash.

Piotr Wozniacki November 25th, 2010 08:47 AM

Quote:

Originally Posted by Dan Keaton (Post 1591965)
Our recommended bit-rates have to be conservative.

Dear Dan,

While I understand the above statement, just a technical question: if the buffer shows less than 10% after some 5 mins of recording, and stays this way - is there still a chance it could overflow? If so, what could ever cause it?

Of course, if I ever use my cards at 280 Mbps in production, it is my own responsibility.

Piotr
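Piotr's question has a back-of-envelope answer: with the buffer steady near 10%, an overflow would require the card's sustained write speed to drop below the video data rate for long enough to fill the remaining headroom. The buffer size and slowdown speed below are hypothetical, chosen only to show the arithmetic:

```python
# How long would a card slowdown have to last to overflow the buffer?
bitrate_mbps = 280
required_mb_per_s = bitrate_mbps / 8           # 35 MB/s sustained write needed
buffer_mb = 256                                # hypothetical buffer size
headroom_mb = buffer_mb * 0.9                  # 90% of the buffer still free
card_mb_per_s = 20                             # hypothetical sustained slowdown
fill_rate = required_mb_per_s - card_mb_per_s  # 15 MB/s net inflow
seconds_to_overflow = headroom_mb / fill_rate  # about 15 s of continuous slowdown
```

In other words, brief card "busy" moments get absorbed, and only a sustained drop below about 35 MB/s for many seconds would cause a down-shift, which is why Dan recommends testing until the card is full.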



DV Info Net -- Real Names, Real People, Real Info!
1998-2025 The Digital Video Information Network