View Full Version : Edge ghosting with XL2
Frank Aalbers March 6th, 2005, 01:52 AM Hi !
When I have contrasty edges in some of my XL2 footage, I get edge ghosting on the left side of the edge.
Here is an example:
http://home.comcast.net/~chalbers/ghosting.jpg
1. Is that supposed to happen in video ?
2. If it is, how can I clean it up ?
Thanks !
Frank
Richard Hunter March 6th, 2005, 02:46 AM Frank, I suggest you sit further from the screen. :)
Seriously, I think this is quite common with DV images, I see it a lot with many different cameras. It's probably a "feature" of the compression process. You could try playing around with the Sharpness setting on your XL2, but I don't know how much difference that would make.
Richard
Lauri Kettunen March 6th, 2005, 06:31 AM Frank, this is a somewhat unavoidable feature of the DV compression which occurs at the boundary of dark and light objects. For example, when shooting the sun setting behind a mountain or whatever dark object, a bright edge will appear in the footage between the sky and the object. Still, in my experience the XL2 seems to manage much better than the XL1 in this sense. As for your example, there is nothing wrong with it: that's what the DV compression generates.
Frank Aalbers March 6th, 2005, 12:43 PM Thanks for the feedback !
I'll just have to learn to live with it I suppose ...
I tried changing several settings, including sharpness. But that didn't work.
I suppose adding more detail to my framing is the way to keep the edges from attracting attention. :-)
Frank
Jim Sofranko March 6th, 2005, 07:19 PM That may be compression artifacts as suggested but it does seem odd to me.
I must admit I never noticed it on my XL1 DV footage even when projected. So I don't think I totally agree that it may be normal. But now I will have to examine my footage closer to see. Maybe I just wasn't looking closely enough.
Is it noticeable when screened directly off the camera playback onto a monitor?
Greg Boston March 6th, 2005, 07:58 PM Frank,
I won't swear to it, but looking at that image makes me think you might have been working with a wide open iris. An indoor shot with what I guess is available room lighting. Since the reflection on the face of the vertical column seems fairly sharp, your DOF can actually be shallow enough in this condition to have the back edge of that column slightly out of focus. If you were zooming in, the DOF will get shallow indeed. Just a thought because I have done this same thing in my house trying to create a shallow DOF. Also, the position of the light reflection suggests that the light source might have been a little to the right which would create a bit of a shadow on the back left side of the column.
Or, maybe I missed your point and none of what I just typed is relevant. :-)
regards,
-gb-
Frank Aalbers March 6th, 2005, 10:59 PM Hi Greg ! It's all relevant indeed !
But it wasn't a controlled lighting setup. Just a little test I was doing. And that's when I noticed the ghosting. I'm sure with careful lighting conditions you can take care of a lot of bad artifacting. I'm still in my children's shoes on real controlled lighting setups though. That's next on my "to learn" list.
Thanks for the info though !
Frank
Lauri Kettunen March 7th, 2005, 01:27 AM Jim, Greg, Have you ever sorted out what the DV compression does?
The point is that the compression is about losing a significant amount of information (the ratio is 5:1), and obviously there is a price to pay for that. As the saying goes, there are no free lunches. The problem Frank demonstrates is a well-known consequence of the compression (technically, of the so-called "discrete cosine transform"). In some conditions it becomes more visible, so, the other way around, the artifact depends a bit on the settings of the camera. Still, there is no way to avoid the problem completely.
An easy way to convince oneself of the problem is to shoot footage with a DV camera and a still photo of the same target with a digital SLR that yields raw images. Then, if one compares the images, the edge ghosting effect Frank talks about is apparent. To sum up, Jim, just look carefully at your XL1 footage again, and you will definitely discover the effect.
A. J. deLange March 7th, 2005, 07:17 AM I suspect that this is not compression (DCT) related but rather luminance leakage. DV compression artifacts usually manifest themselves as jaggy edges because the DCT works on "tiles", i.e. square groups of pixels, as exemplified in the current thread "XL2 vertical streaking". Luminance leakage isn't easy to explain but is a consequence of matrixing after gamma correction and band limiting (sub-sampling) of the color difference signals. See Poynton for details. If I'm right then this is not a fault of the camera but of the NTSC system. Given that the camera uses SD NTSC encoding (and PAL will show it too), it's going to exhibit this phenomenon under certain conditions.
Scott Aston March 7th, 2005, 08:52 AM A.J.... you are WAY TOO smart! So, with that said about the limits of DV's 25 Mb/s: remember the "Clive" thread? If you have moire, i.e. like shooting a building or house with shingles, is there any way to still shoot the building, minimize the moire damage and have acceptable footage? Or is this just something a 1/3" 3CCD miniDV camera can't do?
Thanks
Scott
A. J. deLange March 7th, 2005, 08:38 PM It is something no digital camera can do unless that camera is equipped with a blurring filter which, for example, the Nikon D still cameras are but the Canon XL video cameras are not. Given that, you must obtain a blurring filter from another source and use it (at the cost of a less sharp overall image) or try to avoid scenes that are known moire producers, i.e. anything with regular patterns with a pitch comparable to the pixel spacing on the CCD. You can learn a lot about what causes it, and when it becomes noticeable, by looking for it in broadcast SD and HD, and I guarantee you will find it! Moving farther away from the pattern to the point where it looks uniform, or closer to it to the point where the individual shingles (or whatever) are clearly resolved, is an approach that you may be able to use. Some people have reported success in blurring the images in post.
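A rough 1-D sketch of the aliasing A.J. describes (the pattern frequency, decimation factor and Gaussian blur width below are arbitrary illustration values, not XL2 numbers): a regular pattern with a pitch close to the pixel spacing turns into a slow moire beat when point-sampled, while pre-blurring the scene, which is what an optical low-pass filter does, removes the beat.

```python
# Sketch only: point-sampling a fine pattern vs. sampling a pre-blurred version of it.
import numpy as np
from scipy.ndimage import gaussian_filter1d

scene = np.cos(2 * np.pi * 0.098 * np.arange(4000))   # pattern pitch ~ one sensor pixel
decimation = 10                                        # the "sensor" keeps 1 sample in 10

aliased  = scene[::decimation]                                   # no anti-alias filter
filtered = gaussian_filter1d(scene, sigma=8)[::decimation]       # blurred before sampling

# The unfiltered samples contain a large low-frequency beat (the moire);
# the pre-blurred samples are nearly flat because the fine detail was removed first.
print("beat amplitude without pre-blur: %.2f" % (aliased.max() - aliased.min()))
print("beat amplitude with pre-blur:    %.2f" % (filtered.max() - filtered.min()))
```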
Jim Sofranko March 7th, 2005, 09:04 PM In film I know some diffusion filters can help with this problem and a good film-to-tape transfer on a Spirit will resolve it as well.
My understanding is that it has to do with the display medium more than the acquisition medium. AJ, does HD display have the same moire problems?
Richard Hunter March 7th, 2005, 09:18 PM As A.J. says, moving the cam will affect the moire pattern. If you can't do that, try changing the zoom setting instead. In fact, if you zoom in/out and watch the viewfinder, you will see the moire patterns come and go. With any luck you can find a setting that works for your framing and also does not show the moire. (Of course if your shot involves a zoom you can't really avoid the moire!)
Richard
Lauri Kettunen March 8th, 2005, 03:33 AM <<<-- Originally posted by A. J. deLange : I suspect that this is not compression (DCT) related but rather luminance leakage. -->>>
A.J., I remember seeing examples of applying just the DCT and then getting edge ghosting in the output. Intuitively, as the very idea is to map an image from the spatial domain to the frequency domain, and then neglect higher frequencies, it is likely that the loss of information should be visible at the interfaces between bright and dark objects. ("Rapid" spatial changes correspond to high frequencies.) Still --unless one has good friends who are experts in signal processing-- the best way to find out is to write some Matlab code and test the whole thing oneself.
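Lauri's suggested experiment is easy to approximate. Here is a minimal sketch in Python/NumPy rather than Matlab; the 64-sample line and the number of retained coefficients are arbitrary choices, not DV's real 8x8 block structure or quantization tables, so it only illustrates the mechanism Lauri describes, not what the XL2 actually did to Frank's frame.

```python
# Keep only the low-frequency DCT coefficients of a sharp edge and invert the transform.
import numpy as np
from scipy.fft import dct, idct

edge = np.concatenate([np.full(32, 16.0), np.full(32, 235.0)])  # dark half, bright half

coeffs = dct(edge, norm='ortho')
coeffs[16:] = 0.0                         # discard the upper 3/4 of the spectrum
reconstructed = idct(coeffs, norm='ortho')

# The reconstruction overshoots above 235 and undershoots below 16 on either side of
# the boundary: the cosine basis cannot represent the jump exactly, so the missing
# high frequencies show up as ringing (fringes) along the edge.
print("original range:      %.1f to %.1f" % (edge.min(), edge.max()))
print("reconstructed range: %.1f to %.1f" % (reconstructed.min(), reconstructed.max()))
```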
A. J. deLange March 8th, 2005, 09:28 PM Jim,
Yes, any rastered or digitized image collected with a system that presents higher spatial frequencies to the sensor than the sensor's pixel spacing (i.e. no antialiasing filter) will show moire. Go to a store that sells HDTV gear and watch the screens for a few minutes. You'll see it. The monitor does have a great effect on when it occurs. A properly antialiased image presented on a screen with pixel spacing too coarse to support the resolution of the image will show it. So with respect to Richard's comment - the relatively low resolution of the stock color viewfinder may deceive you in trying to eliminate moire by zoom/angle/composition. The higher resolution B&W viewfinder would be better in this regard and a studio monitor better still.
Lauri - yes, the DCT will upset edges and I'm not saying that the ghost in Frank's frame definitely isn't caused by the DCT but rather that I suspect it isn't because DCT artifacts are usually blocky resulting in a jagged appearance like the artifacts in this frame grab: http://www.wetnewf.org/DCTEdges.jpg (note that I have used USM to emphasize the effect in this frame - it's not that visible in the original). As you note the DCT throws out high frequency information so that the tiles tend to be more uniform than they would be without compression and it is this that makes the tiles stand out from one another. This happens more in a picture with lots of detail (like the one I've posted here) because it has more high frequency content. Frank's frame doesn't have nearly as much detail. In fact it has so little that I doubt the DCT algorithm would have to throw out any high frequency info.
Barry Goyette March 8th, 2005, 09:55 PM It looks like a plain old edge sharpening artifact to me...(and one that I have seen in virtually every frame of DV from every Canon camera I own.) Used for its intended purpose (NTSC video), this type of artifact is not a bad thing...it makes the footage look better (sharper) on the lower resolution output of a typical television. However, when one stands too close to the computer, as I often do, these things do get annoying. These are the same type of artifacts that are seen in Photoshop when you use USM with a high radius (which looks bad on screen, but often can make an image printed on newsprint with a coarse screen look amazingly good).
Try lowering the sharpness setting on the camera. If that doesn't help, then it's what AJ said (which went right over my head!!!)
Barry
Frank Aalbers March 9th, 2005, 01:18 AM I already tried sharpness to the lowest setting on the camera. Still the same thing.
Frank
Frank Aalbers March 9th, 2005, 01:20 AM Hi AJ ! The example you posted can be easily fixed using horizontal chroma blur. It did NOT work on the edge ghosting I had though.
Frank
Lauri Kettunen March 9th, 2005, 01:27 AM <<<-- Originally posted by A. J. deLange : because DCT artifacts are usually blocky resulting in a jagged appearance like the artifacts in this frame grab: -->>>
A.J., for some reason I can't reach the site you gave, but yes, I understand your point, and will check the frame as soon as the connection is available. In the meantime --as it is a pleasure to talk with somebody who is up to the technical details-- if you look at the figure of the DCT basis functions given in e.g. http://www.cs.cf.ac.uk/Dave/Multimedia/node231.html, do you refer to those functions, which are indeed like tiles? Basically, what I had in mind was that in addition to those tiles there are also other kinds of basis functions.
Frank Aalbers March 9th, 2005, 01:34 AM Hi AJ !
This is your USM grab run through a chroma blur. As you can see, it fixed 80% of the problem. I'm sure if I ran this effect on the original footage it would be almost totally gone.
http://home.comcast.net/~chalbers/chblur.jpg
Frank
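For reference, this is the general shape of the horizontal chroma blur Frank describes; the exact filter in his software is unknown, and the Rec.601 weights and 5-pixel box blur here are assumptions. It leaves luma alone, smears the two color-difference channels along each row, and rebuilds RGB.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def horizontal_chroma_blur(rgb, width=5):
    """rgb: float array (H, W, 3) in 0..1. Returns RGB with chroma blurred along rows."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b          # Rec.601 luma
    cb = (b - y) / 1.772
    cr = (r - y) / 1.402
    cb = uniform_filter1d(cb, size=width, axis=1)   # blur only along the horizontal axis
    cr = uniform_filter1d(cr, size=width, axis=1)
    r2 = y + 1.402 * cr                             # back to R'G'B'
    b2 = y + 1.772 * cb
    g2 = (y - 0.299 * r2 - 0.114 * b2) / 0.587
    return np.clip(np.stack([r2, g2, b2], axis=-1), 0.0, 1.0)
```

Whether this helps depends on where the ringing lives: fringes carried by the chroma channels soften, while a halo baked into the luma channel by in-camera sharpening survives it, which is consistent with what Frank reports for his own footage.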
A. J. deLange March 9th, 2005, 07:46 AM This last round of comments strengthens my suspicions (but does not prove!) that it's luminance leakage.
Lauri - yes, the "basis functions" illustrated in the site you referenced are exactly what I have in mind. The images our cameras capture are broken into square groups of pixels and then each of the 64 basis functions is overlaid and the "dot product" computed (actually that's the way JPEG works - there is another twist in video having to do with the two fields). The dot products are then quantized, with the dot products from the upper left hand corner being assigned the most bits and the ones from the lower right being assigned the least. If the picture is busy (like my pinwheels) then the lower right corner dot products are significant, but as there is a limited bit rate available they must be quantized coarsely or discarded altogether. When the image is reconstructed, fine detail is lost as a consequence of this, so that the tile involved is more regular in tone than the original. In the extreme case it would be of uniform tone, with that tone being the average over the original (this would be the case if only the upper left corner dot product were retained). In these cases the edges of the tiles become perceptible (especially if you are looking for them) and that's why the DCT artifacting looks "edgy".
As to the image not being available: the "server" here is an old laptop kluged up to get around my ISP's schemes to make serving impossible from home accounts. Please be patient and try again (my router hung during the night.)
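A toy version of the per-tile process described two paragraphs up (JPEG-style, with a made-up quantization rule in place of DV's real tables and without the field handling A.J. mentions):

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)   # a "busy" tile of pixels

coeffs = dctn(block, norm='ortho')                         # 64 basis-function dot products

# Hypothetical quantization: the step size doubles with distance from the top-left (DC)
# corner, so the lower-right (high-frequency) coefficients keep the fewest bits.
u, v = np.meshgrid(np.arange(8), np.arange(8), indexing='ij')
step = 2.0 ** (u + v)
quantized = np.round(coeffs / step) * step

tile = idctn(quantized, norm='ortho')

# Much of the fine variation inside the tile is gone; do this to every tile of a busy
# image and the tile boundaries start to show, which is the "blocky" DCT artifact.
print("std within original tile: %.1f" % block.std())
print("std within decoded tile:  %.1f" % tile.std())
```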
Lauri Kettunen March 10th, 2005, 08:33 AM <<<-- Originally posted by A. J. deLange : This last round of comments strengthens my suspicions (but does not prove!) that it's luminance leakage. -->>>
A.J. --Say one creates a frame with Photoshop which has only two colors, say the left side is light yellow and the right side is its opposite/complementary color in RGB space. Then, one exports this kind of stationary footage to DV-tape and captures it back. Now, the question is, will the border between the two colors be precise/sharp, and if not, what will that prove in your view? (You see my point, I'm thinking of some simple test which would reveal the origin of Frank's problem.)
The preliminary test idea explained above basically converts the compression issue to a 1D problem, simplifying things. What I still can't quite get in your explanation is, how are you going to express the abrupt jump at the border between the colors (in 1D) once you have suppressed the basis functions related to the higher frequencies? Of course, if there were a basis function with a jump, then everything would be easy, but aren't all the basis functions cosine terms? See http://dynamo.ecn.purdue.edu/~ace/jpeg-tut/jpgdct1.html.
A. J. deLange March 10th, 2005, 12:10 PM Lauri,
If they are yellow and blue I'm not sure what the border would look like, but if they are magenta and green there will be a luminance error (dark space) between them. To see this, turn on the color bars on your camera and record a few cm of tape. Now play back and capture, or just play back to a monitor. You will notice a dark region between the magenta and green color bars. This is caused by the luminance leakage which I suspect is what's behind the phenomenon we are trying to understand. Neither the color bar pattern nor Frank's frame has much high frequency content, so there is no need for the DCT algorithms to compress very much; high frequency basis function dot products will only be large in blocks (tiles) that span the boundary between bars, the algorithm will not have to throw out bits, and so no DCT artifacts should be seen. The only bar pair that should show the dark band is the magenta/green pair. As the other bar pairs have equally sharp transitions (and equally strong high frequency components) but do not show any error, I call that proof that the magenta/green error is not from the DCT. Further, Poynton clearly states that it is from luminance leakage. This seems to me to be similar to the situation in Frank's frame but it could be something else. I just don't think it's DCT related. Cheers, A.J.
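A.J.'s color-bar test can also be approximated numerically. The sketch below assumes Rec.601 luma weights, a simple 2.2 gamma, and a 9-pixel box filter standing in for the chroma band-limiting; none of those are the camera's exact numbers. Luma crosses the magenta/green boundary sharply, the blurred chroma does not, and the rebuilt pixels at the boundary come out darker than either bar, which is the dark band he describes.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

line = np.zeros((64, 3))
line[:32] = [0.75, 0.00, 0.75]        # 75% magenta bar (R', G', B')
line[32:] = [0.00, 0.75, 0.00]        # 75% green bar

y  = 0.299 * line[:, 0] + 0.587 * line[:, 1] + 0.114 * line[:, 2]   # luma after gamma
cb = (line[:, 2] - y) / 1.772
cr = (line[:, 0] - y) / 1.402

cb = uniform_filter1d(cb, size=9)      # chroma is band-limited; luma is not
cr = uniform_filter1d(cr, size=9)

r = y + 1.402 * cr                     # rebuild R'G'B' from sharp luma + soft chroma
b = y + 1.772 * cb
g = (y - 0.299 * r - 0.114 * b) / 0.587
rgb = np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

# Approximate true (linear-light) luminance: undo gamma, then weight the components.
luminance = (rgb ** 2.2) @ np.array([0.299, 0.587, 0.114])
print("magenta bar luminance:  %.3f" % luminance[0])
print("green bar luminance:    %.3f" % luminance[-1])
print("darkest boundary pixel: %.3f" % luminance.min())   # dips below both bars
```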
Lauri Kettunen March 10th, 2005, 02:55 PM A.J., Ok, I see now what you mean.
Adam Bowman March 11th, 2005, 08:56 PM I'm surprised someone has brought up the haloing effects of the sharpening processing in the XL2, because they're so very fine!
Compared to the XL1 I've been used to they're almost non-existent.
I have a technique which may help reduce the halo artifacts you've highlighted; you can read about it here: http://www.bargus.org/articles.html
But like I said before, the halos are very fine on footage from the XL2 and it's probably not worth the effort (it may also introduce more noise than is acceptable).
Adam Bowman
Frank Aalbers March 12th, 2005, 01:04 AM <<<-- Originally posted by Adam Bowman : I'm surprised someone has brought up the haloing effects of the sharpening processing in the XL2, because they're so very fine!
Adam Bowman -->>>
But if it's a sharpening effect in the XL2, how come it's still there when I change the XL2 sharpness to minimum? It doesn't even change a bit !
Frank
Adam Bowman March 12th, 2005, 07:53 AM That's strange, maybe something odd about the sharpening method in-camera.
It would maybe be helpful if you posted, say, some shots of a resolution chart or something that highlights the halos well, with varying degrees of in-camera sharpening. Just so we can see exactly what the sharpening setting seems to change.
Adam Bowman
Andre De Clercq March 12th, 2005, 04:34 PM What we see in Frank's pic is just the good(?) old edge enhancement, not at all a DCT-related issue. Some cams only reduce vertical edge enhancement (vertical filtering) when lowering the "sharpness" setting and keep horizontal enhancement constant.
A. J. deLange March 12th, 2005, 05:59 PM If it were sharpening it would look like this: http://www.wetnewf.org/ghostshp.jpg i.e. you'd see ghosting on either side of any edge, not just the left side of edges with particular color changes.
Andre De Clercq March 13th, 2005, 05:46 AM Luma sharpening filters (if well designed) are indeed meant to enhance transients in a symmetrical way. However, if an object has luminance steps (amount and steepness... the second derivative of the luma change is often used in luma sharpening algorithms) containing different derivative values in the horizontal direction, one will get different edge sharpening levels (pre- and overshoot). The left side in Frank's pic is the shaded part and gets the most contouring. F.Y.I. color transient enhancers (CTI...) are another story, and based on totally different processing principles.
A. J. deLange March 13th, 2005, 09:53 AM Andre,
Sorry, I'm not following your explanation. Do you mean that some sharpener implementations have frequency responses proportional to higher powers of frequency? Sharpeners are usually simply differentiators (first order) with the impulse responses shaped (frequency response approximately proportional to frequency) for the magnitude of the enhancement we want to see and the pixel width over which we want to see them.
I also don't understand the part about the shading. The transition from the doorway to the wall is of about the same magnitude as the transition from the wall to the lamp post. Shouldn't they be subject to the same amount of sharpening?
The answer is that they should, and in fact they are subject to the same amount, though this is not easily perceived from the image itself. I've put a plot of a horizontal line of luminance (through the screw hole in the light switchplate) at http://www.wetnewf.org/sharpening.jpg. This plot clearly shows the classic step response for a differentiating sharpener: a droop before a transition from dark to light followed by an overshoot after the transition, and conversely going from light to dark. I've marked some of these on the plot. Approximately equal transitions are approximately equally sharpened. Of course once you have looked at this plot and go back to the frame you'll see the sharpening in the frame as well.
So at this point I think the "ghost" IS sharpening (in fact I'd say the luminance plot is a pretty strong proof). What I don't understand, however, is why changing the camera settings doesn't change the result.
Now back for a moment to http://www.wetnewf.org/ghostshp.jpg (in which I took Frank's frame and sharpened it still further to the point where the over/undershoots were plainly visible). If you blow up that frame in the region to the right of the switch plate you will see a lovely illustration of what we are talking about with respect to the way the DCT works and the kind of artifacting it leads to. You can see the tiles I was talking about and you can see some of the basis functions that were illustrated in the article Lauri mentioned. This is for educational value only and is not to be construed as a statement on my part that I think the ghosting was from DCTs. I don't now and never did.
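For readers without access to the frame grabs, the step response A.J. describes is easy to reproduce with a generic unsharp mask (original plus a fraction of original minus blurred, which is not necessarily the XL2's actual circuit):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.arange(64.0)
edge = 20.0 + 200.0 / (1.0 + np.exp(-(x - 32.0)))   # a soft edge from ~20 up to ~220

amount, radius = 0.8, 2.0                           # arbitrary USM strength and radius
sharpened = edge + amount * (edge - gaussian_filter1d(edge, sigma=radius))

# The output drops below the dark plateau just before the edge and rises above the
# bright plateau just after it; the dark fringe on one side is the kind of "ghost"
# line visible in Frank's grab.
print("dark-side minimum:   %.1f (vs. a plateau of 20)" % sharpened[:32].min())
print("bright-side maximum: %.1f (vs. a plateau of 220)" % sharpened[32:].max())
```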
Andre De Clercq March 13th, 2005, 11:30 AM Unfortunately I cannot access your pic. But on the other points:
1. Simple differentiation doesn't produce preshoots in the time domain, only overshoots (linear-phase differentiator). The frequency response (magnitude) is indeed "proportional" to the frequency, but this doesn't define the transient (time domain) properties as long as the phase characteristic of a filter isn't known.
2. On the "shade" issue: indeed the doorway/wall transient has (about) the same magnitude, but like I mentioned already, it's the value of the second derivative that counts, and this relates to magnitude AND steepness (focus).
3. DCT and other transform-based compressors introduce all kinds of artifacts; tiling is one of them if a rough quantization is used for the DCT components, but this doesn't relate to Frank's "ghost" pics. And b.t.w. DCT compression doesn't reduce bandwidth as such, it only skips low-level DCT components (again depending on the Q table), and these are not always the higher frequency components.
A. J. deLange March 13th, 2005, 02:03 PM Sorry about the images. Try http://www.pbase.com/image/40759798 for the plot and http://www.pbase.com/agamid/image/40759802 if you want to look for the DCT tiles in my sharpened version of Frank's image. With respect to
1. Simple differentiators do produce some predroop but not in the way we'd like to see it. I think the best way to design a sharpener is to specify the desired step response (and I'm sure that's what the two controls on the typical USM implementation do) and then calculate the impulse response from that. Some tweaking may still need to be done to limit the size of the mask and keep the computational burden under control. A desirable response, like the one implemented in the XL2, in Photoshop, etc., has a frequency response that looks like a differentiator at lower frequencies, i.e. there is a portion of the response that rolls up with omega, but overall it looks like a differentiator followed by a low pass filter (and there is gain at dc). Given that the impulse response is symmetric (which it would be if the step response is symmetric disregarding the dc component), phase is dead linear.
2. The second derivatives are the same: approximately 0. The transitions are from a low luma level to a high one in three points. The slopes are essentially constant.
3. I've said it before and I'll say it again: I don't believe the ghosting in Frank's frame has anything to do with tiling! The comment about tiling was for people who might be interested in what the DCT does. My doctored image shows it very well. As to whether DCT compression reduces bandwidth or not, that would depend on your definition of bandwidth. It does not reduce bandwidth in the spatial domain, as the resolution of the images is not affected, though the amount of information conveyed about the high frequency part of the image is reduced. It does reduce the capacity of the channel required to convey the image because fewer bits are required to describe each frame.
When I bought this camera I never dreamed I'd be so far down into the bowels of it.
Cheers, A.J.
Andre De Clercq March 13th, 2005, 03:47 PM Thx, I got the pics.
The luma drawing shows that both edges are enhanced. Because of the gamma correction applied before enhancement, white-to-black transients always show somewhat more overshoot than the black-to-white transients. That's why the pole has the black left border line.
For the other points:
1. Just draw a squarewave signal and add some amount of its first derivative (differentiated squarewave) and try to find preshoot in the resulting signal...
3. The final amount of data conveyed about the high-frequency part is, apart from the dynamic Q-table for the DV codec, only the result of the applied entropy (Huffman) coding.
And further...I suspect that Frank's image got most of its artifacts (tiling and mosquito noise around the arrow) because of a strong second (re)compression.
A. J. deLange March 13th, 2005, 05:52 PM Andre,
Can't do 1, because the first derivative of a square wave contains Dirac delta functions. But if I run a sampled square wave through a FIR differentiator (which is, of course, an approximation of a perfect differentiator) I do see some pre-droop. Not much, I grant you, but it is there.
The Huffman encoder deals with whatever bitstream it is handed. If the resultant rate is more than the channel can handle, the process is throttled by tweaking the Q matrix. As its entries are powers of 2, each increment (e.g. from 1 to 2 or 2 to 4) throws away another bit. Not only does this present fewer bits to the encoder, but as the number of bits per coefficient is reduced, the probability of longer runs (and more efficiency for the encoder) is increased.
Your point about the second compression is well taken. After all this analysis of Frank's frame I'll have to try some of my own!
Cheers, A.J.
Andre De Clercq March 14th, 2005, 04:30 AM Just try a "real" squarewave in your simulation (non-zero rise time) in combination with a non-zero time constant for the differentiator.
A. J. deLange March 14th, 2005, 10:04 AM That's what I'm doing - designing differentiators with varying numbers of taps and transition bandwidths using the Parks-McClellan (Remez exchange) algorithm. The software computes the step response for each design. Each step response shows pre-ringing.
Andre De Clercq March 14th, 2005, 10:32 AM If you talk about "varying numbers of taps" you are no longer talking about differentiators but recursive and/or non-recursive filters. This has nothing to do with differentiation of the input function. And remember, first-order differentiation is about determining dv/dt, and this is only positive or negative for rising or falling edges respectively, so no preshoot in a first-order differentiator.
A. J. deLange March 14th, 2005, 11:07 AM Messrs. Parks and McClellan think I am talking about differentiators. I suppose the question is how you would compute the first derivative of a stream of samples. The first difference? That is, in fact, a 2-tap (zero) FIR filter which only approximates a differentiator, and not as well as a FIR filter with more taps.
Mathematically, a differentiator is a filter whose response is proportional to frequency and thus 0 at dc. A mathematical differentiator thus has infinite bandwidth and infinite gain (at infinite frequency). Such a differentiator has a step response with no predroop, but it is a mathematical abstraction (as is its step response - a pulse of 0 width, infinite height and unit area). In the DSP arena we can only approximate differentiators but, within the Nyquist bandwidth, do a damn good job if we use enough taps.
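Here is a sketch of the experiment being argued about, using a windowed ideal differentiator rather than A.J.'s Parks-McClellan designs (so the tap values are only illustrative); any linear-phase FIR approximation behaves the same way, and its delay-compensated output starts rippling a few samples before the edge arrives, which is the pre-ringing in question.

```python
import numpy as np

ntaps = 31
n = np.arange(ntaps) - (ntaps - 1) // 2                  # ... -2 -1 0 1 2 ...
h = np.where(n == 0, 0.0, np.cos(np.pi * n) / np.where(n == 0, 1, n))  # ideal: (-1)^n / n
h *= np.hamming(ntaps)                                   # truncate the ideal response gently

x = np.zeros(128)
x[64:] = 1.0                                             # a step at sample 64...
x[64] = 0.5                                              # ...with a non-zero rise time

delay = (ntaps - 1) // 2
y = np.convolve(x, h)[delay:delay + len(x)]              # derivative estimate, delay removed

# The input is perfectly flat at samples 55..63, yet the delay-compensated output is
# already oscillating there: the pre-ringing of a linear-phase FIR differentiator,
# which an ideal (infinite-bandwidth) differentiator would not show.
print(np.round(y[55:70], 3))
```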
I'm sure 99.99% of the readership has lost interest at this point and I'm sure that you can find someone on your side of the pond to discuss this with as opposed to turning this into an EE forum. I can recommend some DSP texts if you like.
A.J.
Andre De Clercq March 14th, 2005, 01:17 PM Indeed, maybe nobody is interested any further, and certainly I am no longer interested in such discussions. After being involved for over 30 years in the design and implementation of real (not "mathematical") digital and analog signal processing, that's sufficient for me. And no need for DSP literature. I wrote enough articles myself on the issue.
Frank Aalbers March 15th, 2005, 12:39 AM Thanks for all the info everyone.
And greetings from an ex-Antwerpian, Andre! :)
Frank
Anthony Marotti March 15th, 2005, 02:31 AM I love it when you guys talk dirty :-)
Andre De Clercq March 17th, 2005, 04:19 PM Hello Frank, Belgium sends out its sons... Now back to the "rules".
I used to know the Oakland area pretty well and often visited Berkeley when I was there for a post-doc (1974-75... a long time ago) at the "nearby" Stanford University. I was involved in medical ultrasound imaging and processing at that time. Until 2003 I was with Barco (Belgium) as VP of R&D. Now I am a "retired" engineer...
Frank Aalbers March 18th, 2005, 12:35 AM Heh ! I haven't been in the US that long. Only 8 years, since January 1997.
I'm now working at Pixar. Came to the US to do CG work.
I have no plans at all to go back to Belgium. Too much fun here ! :-)
Frank
Michael Hamilton April 7th, 2005, 10:15 AM It might be luma undershoot as described at this site:
http://www.dvcentral.org/DV-Beta.html
I've noticed this artifact on some of my stuff, mostly along vertical edges though. It's really a frustrating thing because you don't see it in the viewfinder and you think you've got a pristine image, when you really don't. Everything else is so great about this camera. It's really a shame Canon didn't do something about this problem.
Michael Hamilton
Anthony Marotti April 7th, 2005, 03:26 PM <<<-- Originally posted by Michael Hamilton : It might be luma undershoot as described at this site:
http://www.dvcentral.org/DV-Beta.html
I've noticed this artifact on some of my stuff, mostly along vertical edges though. It's really a frustrating thing because you don't see it in the viewfinder and you think you've got a pristine image, when you really don't. Everything else is so great about this camera. It's really a shame Canon didn't do something about this problem.
Michael Hamilton -->>>
Hello !
I agree, this is potentially a very costly problem, especially if a client files a lawsuit against you because you ruined one-of-a-kind footage.
If we could discuss this topic in English, that would be very helpful!
Questions:
Must you rewind and review every take of every scene on an already tight schedule?
What can be done to detect the problem other than that?
What can be done to avoid the problem?
Can Canon do a firmware update??
What else can be done to fix this problem, which is plaguing many of us?
Thanks :-)
A. J. deLange April 8th, 2005, 06:48 AM Gentlemen,
It isn't luma undershoot and it isn't luma leakage (though at first I thought it might be - see earlier posts). It's sharpening. Earlier in this thread I posted a line scan of the image in question - the image is still accessible at http://ajdel.wetnewf.org:81/sharpening.jpg. Every vertical edge shows clear evidence that "unsharp mask" has been applied. This isn't a problem: it's a feature, by which I mean that the designers put it into the camera in order to render the pictures sharper in appearance. They also gave you control over it (the sharpness control in custom settings). If you don't want sharpening you can take it out or at least reduce it. As an alternative you can use a blurring filter in post, but be warned that either of these steps will result in a picture that does not appear as sharp. This is one of those situations in life where you can't have it both ways (a sharp picture without edge enhancement).
There it is. No Flemish, no geek-speak.
Andre De Clercq April 9th, 2005, 05:14 AM Edge correction in video (also called sharpening, aperture correction...) is indeed frequently used to get a (subjectively) sharper image, in cameras but in displays as well, and this double sharpening potential is the problem. Although there are calibrated settings in pro equipment (peaking frequency and level), nothing is standardized. So if the output device strongly sharpens the incoming signal and the source does the same, the picture gets oversharpened and shows strong contours. As long as the contour thickness remains within a couple of arc seconds for the viewer it will still be tolerated. If the picture is "too big" or seen from too close, it will be seen as annoying contouring. Too much sharpening also creates higher noise visibility. Advanced sharpening techniques are being used nowadays which sharpen (almost) without contouring.
Michael Hamilton April 21st, 2005, 10:42 AM Last week I went outside and tried using a Tiffen SFX #2 to see if I could tone down the contouring, but forgot to turn down the sharpening feature. No luck.
Camera: 16x9/24p and 30p
Subject: wide and mcu shots of electrical poles against a semi cloudy sky.
There was no discernible change in the contouring problem.
I just went out again today to test our XL2. But this time I turned the sharpening all the way down.
Camera: 16x9/24p and 30p
Subject: wide and mcu shots of electrical poles against an overcast sky.
There was no discernible change in the contouring problem.
There is also a one-pixel black horizontal line going across the image in all my shots. This is more noticeable in 16x9 mode.
A couple of weeks after we got the camera I noticed this and went to our dealer and the canon rep there switched out the cameras.
The new one has the same problem.
I'm playing this out through the camera on different monitors.
So the problem isn't a bad deck or some other outside source.
Michael
Andre De Clercq April 22nd, 2005, 03:32 AM Michael, what you see in the viewfinder is not necessarily what you will get on your monitor. Apart from the small dimensions and reduced resolution (which can hide contouring from the human eye), the viewfinder just shows decompressed DV data. When your monitor is connected through composite or Y/C, there is one more step involved: NTSC (PAL) encoding. Though the luma part in this process doesn't need further manipulation, those encoder circuits often apply extra sharpening in the Y channel. I would suggest, if you really want to see what's in your footage, playing back on a pro deck and interconnecting through component. This excludes the extra encoding process.