View Full Version : How come no realtime audio fx?


Josh Bass
May 29th, 2007, 01:16 PM
So you have your video clips in the timeline to which you can add an effect, and then watch it play with the effect, but not have to render until you output.

I KNOW you can do audio fx the same way at the track level, so I'm not asking how; I'm just asking WHY they don't have the same type of fx for individual audio events/clips in the timeline. Why do you have to render a clip to put an effect on it? Just curious. Layman's terms, please.

Brian Standing
May 29th, 2007, 02:11 PM
Audio FX work in real time if applied at the track, bus or project level, but not at the event level. Click the little green box in the upper right-hand corner of the track header.

I find this works quite well, since I can designate a track for "Dialog 1" and make sure all the events on that track get exactly the same treatment. You can also apply FX at the "Media" level, although I don't recall if those are real-time or not.

Josh Bass
May 29th, 2007, 02:27 PM
Right, but what I wanted to know is why the program was designed this way.

Glenn Chan
May 29th, 2007, 02:58 PM
There was an explanation of this in the official Sony forum; I'm not sure which thread it's in.

Part of it is that some effects, like reverb, can extend past a clip/event.

Part of it is that implementing effects that way isn't compatible with the TC PowerCore cards.

2- You can right-click clips and apply non-real-time FX.

Jarrod Whaley
May 29th, 2007, 10:35 PM
Glenn, I'm glad to hear that there's at least some rationale for this, because it has always seemed overwhelmingly silly to me. However, I just don't see that this rationale makes any sense (I know you're just reporting what you've heard and that your memory is a little fuzzy--I'm not shooting the messenger).

I can't figure out why, if we continue to use reverb as an example, the reverb couldn't simply be played back on the fly regardless of the length of the clip. That's how the track-level effects work.

As for TC PowerCore cards not being compatible with it... I guess that makes some degree of sense, but why rule out a very helpful bit of functionality simply because one type of hardware doesn't like it? At the very least, Sony could allow users of this hardware to opt into the non-real-time model and eliminate what I consider to be a huge pain in the rump for the rest of us.

This particular aspect of Vegas has always annoyed the pants off of me. It's completely absurd that complex effects can be applied non-destructively to video--obviously a much more complex type of file--while audio has to be rendered on an event-by-event basis. An analogy: imagine if you had a lawnmower that could easily cut tall grass in one pass, but that required three passes for each patch of clover on the same lawn. It's completely nuts.

What happens is that you end up with a folder full of hundreds or thousands of little .wav files that you're afraid to ever get rid of on the off-chance that you'll accidentally delete something that's used in a project you'll need to open again someday. Apart from that, it just slows down the entire editing process quite a bit in certain cases. If you want to add differing levels of reverb (thereby making track-level reverb a non-option, since all events on that track would have the same level of reverb applied) to 47 clips, and then you change your mind later and want to remove the reverb, you're screwed--you have to manually go back through and dig up the original .wavs. If you try to use track-level reverb for each of these 47 events, you end up with 47 unwanted extra tracks. Granted, this is a somewhat exceptional scenario here, but I personally run into these kinds of issues far more often than I'd like to.

I wish with all my being that Sony would do something about this. That might be putting it a bit strongly, but my annoyance with this is very strong. If there's a good reason for it, I guess I'd begrudgingly accept it. I'm just not so sure that there is a good reason--or even a mediocre reason. As things are, I find this situation to be an extremely bone-headed bug in an otherwise amazingly well-designed app, all things considered.

Please excuse the rant. I'm just frustrated with this.

Marcus Marchesseault
May 29th, 2007, 11:52 PM
I'm new to Vegas, so ignore me if I sound crazy.

Is there a way to set markers and then have sort of "keyframes" for the level of the reverb on the audio track? Event-level effects would be better, but at least some sort of tweaking on-the-fly would make things tolerable.

Douglas Spotted Eagle
May 30th, 2007, 01:11 AM
Actually event-level isn't preferable, because if the event ends, the reverb can't continue.
You can insert a reverb as an FX bus, then right-click the track header of the track to which you'd like to apply reverb. An envelope line will appear; double-click that envelope to set nodes/envelope points.
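For anyone wondering what an automation envelope like this is actually doing, it boils down to linear interpolation between (time, level) nodes, sampled during playback. The sketch below is purely illustrative -- the function name and numbers are made up, and this is not how Vegas is implemented internally:

```python
# Illustrative sketch of a track/bus automation envelope: a sorted
# list of (time, level) nodes, linearly interpolated at any time t.
# All names and numbers here are hypothetical, not Vegas internals.

from bisect import bisect_right

def envelope_level(nodes, t):
    """Linearly interpolate an automation envelope at time t.
    nodes is a sorted list of (time, level) pairs."""
    times = [n[0] for n in nodes]
    i = bisect_right(times, t)
    if i == 0:
        return nodes[0][1]    # before the first node: hold first level
    if i == len(nodes):
        return nodes[-1][1]   # after the last node: hold last level
    (t0, v0), (t1, v1) = nodes[i - 1], nodes[i]
    frac = (t - t0) / (t1 - t0)
    return v0 + frac * (v1 - v0)

# A reverb send that ramps from dry (0.0) to fully wet (1.0) over 2 seconds.
nodes = [(0.0, 0.0), (2.0, 1.0)]
print(envelope_level(nodes, 1.0))  # 0.5 halfway through the ramp
```

Moving an event without moving its envelope nodes is exactly the bookkeeping hazard Jarrod complains about below: the nodes are anchored to the timeline, not to the event.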

Kris Bird
May 30th, 2007, 04:56 AM
I agree, track-level effects make perfect sense; this is very much in keeping with audio workflow. If you want specific treatment for an event, create a new track and name it accordingly -- you can easily slip any subsequent events into this track and they will receive the same effects, which is very useful. Tracks might be "char a dialogue", "char b dialogue", "char a reverb matching", "char a lf cut", "ambience", "distant / lf cut", etc. Your tracks become your presets, and events get slipped accordingly. I can't see in what circumstance you'd end up with "a folder full of hundreds or thousands of little .wav files" -- if you're doing anything dozens (let alone hundreds) of times, then surely they should be on a track (or obviously several tracks).

Glenn Chan
May 30th, 2007, 03:23 PM
As other people point out, just use trackFX and use extra tracks if need be. Vegas will support a lot of tracks. Some people will use like 40+ tracks (at least from what I've seen in other audio apps; I don't do such detailed audio mixing ever, though I believe Vegas shouldn't have a problem there).

Not having audioFX on an event level is kind of unintuitive (since it's not analogous to video)... but not having it is not a huge deal esp. since you shouldn't need to go into non-real-time FX often (which itself is kind of a misnomer to me, since the effects in it are usually real-time).

Jarrod Whaley
June 1st, 2007, 11:16 AM
Actually event-level isn't preferable, because if the event ends, the reverb can't continue.

Why not? It continues past the end of an event when you apply it at the track level. Why couldn't a sound continue reverberating after the event ends in this case as well? Alternatively, why couldn't Vegas simply extend the length of the event by the requisite amount? If either of these scenarios is truly impossible, then that's that, but why would they be impossible? Look at velocity envelopes for video: when you use them to slow a clip down, the end of it gets chopped off when the end of the event comes along. Another possibility would be to make reverb work in that same way (which is how it used to work in Sound Forge, by the way--you had to add silence at the end of your clip to allow for the reverb).

By the way, Spot, your suggestion to use envelopes (or automation) makes sense. However, it still strikes me as being much more of a pain to manage than real-time event effects would be. Then you end up having to be careful that your envelope nodes are staying in the right places when you move clips around and so on.

There are lots of workarounds and/or alternatives for this lack of functionality, but look at how much easier and simpler everything would be if audio effects could work like video effects work.

I can't see in what circumstance you'd end up with "a folder full of hundreds or thousands of little .wav files" .. if you're doing anything dozens (let alone hundreds) of times, then surely they should be on a track (or obviously several tracks).

Well, I do have a folder full of hundreds of little .wav files. Like I was saying before, using track-level is fine if everything is getting the same effects AND the same settings on those effects. I run into lots and lots of situations, though, where I use a particular effects configuration only once in an entire project. If I use track-level effects, on certain projects I end up with tons of single-event tracks and waste a lot of time trying to find everything when I go through again later making fine adjustments to my edit.

but not having it is not a huge deal esp. since you shouldn't need to go into non-real-time FX often

But I do need to go into non-real-time effects often, because if I'm only using a particular effects configuration once, and I don't want an entire track added to my timeline for one event, I have no other choice. Many people may not have that same need, but I suspect that there are plenty who do. If all of this is not an issue for you, congratulations. For me, though, it can sometimes lead to a very sub-optimal workflow.

(which itself is kind of a misnomer to me, since the effects in it are usually real-time).

But they aren't real-time; you have to render the event out to a new .wav file. Then if you change your mind about the effect or want to make a fine adjustment to it later, you have to go dig up the original file, re-cut it to sync, and apply the effects again. Yes, you could avoid all of this by using track-level effects, but then you're back at the point where you have an entire track with only one event on it again.

Don't get me wrong--the ability to apply effects to an entire track is very useful... but it is far from ideal for use with a single event.

Peter J Alessandria
June 1st, 2007, 11:22 AM
I agree with Jarrod on this. Always been a pet peeve of mine that in Vegas you can apply real-time video fx but not audio. Other packages let you do this with great effect (pun intended). I suspect it's more of legacy mindset ("always been done this way") than a real implementation issue.

Jarrod Whaley
June 1st, 2007, 12:02 PM
I suspect it's more of legacy mindset ("always been done this way") than a real implementation issue.

I believe we have arrived at "Bingo!"

Kris Bird
June 2nd, 2007, 02:35 PM
Well, one point-- if an event-level effect can continue past the end of the event, what happens if you butt two of them together? (For example, event B starts before event A has finished its reverb.) Does the second event kill the first, or..? I would suspect that you'd want them to calculate separately and then mix, in which case internally Vegas would need to put each event on a separate audio track. I guess that this is the complexity: it would have to internally create and delete additional audio tracks for each event that had this type of effect, and mix that back into the track. Currently, each event just 'writes' directly into the track, with that summed result going into the track FX.
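The calculate-separately-then-mix idea can be sketched in a toy model (all names hypothetical, and nothing like Vegas's real mixing engine): each event renders into its own buffer, tail included, and the buffers are summed into the track at their start offsets, so event B can begin while event A's tail is still sounding:

```python
# Toy model of event-level FX with tails: each event is rendered
# (dry samples plus a reverb tail) into its own buffer, then all
# buffers are summed into a single track at their start offsets.

def render_with_tail(dry, tail):
    """Pretend 'reverb': the dry samples followed by a decaying tail."""
    return dry + tail

def mix_events(events, track_len):
    """events: list of (start_sample, samples). Sum into one track buffer."""
    track = [0.0] * track_len
    for start, samples in events:
        for i, s in enumerate(samples):
            if start + i < track_len:
                track[start + i] += s
    return track

a = render_with_tail([1.0, 1.0], [0.5, 0.25, 0.125])  # 2 dry + 3 tail samples
b = render_with_tail([1.0], [0.5])
# Event B starts at sample 3, while A's tail is still decaying.
track = mix_events([(0, a), (3, b)], track_len=6)
# Sample 3 carries both A's tail (0.25) and B's dry hit (1.0): 1.25.
```

The cost Kris points at is visible even here: every event needs its own private buffer and its own sum into the track, which is effectively a hidden extra track per event.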

Steve Mullen
June 2nd, 2007, 06:17 PM
I agree with Jarrod on this. Always been a pet peeve of mine that in Vegas you can apply real-time video fx but not audio. Other packages let you do this with great effect (pun intended). I suspect it's more of legacy mindset ("always been done this way") than a real implementation issue.
It's an "audio" mindset that views tracks as not simply parallel data streams but as streams of effects. It would be like setting up a blue, red, green video track and tossing video in the track based upon how you want it tinted.
The problem is that "video" folks not only don't think this way -- although they could learn -- in many, if not most cases, the audio work that must be done involves cleaning-up sync audio which is event based. Not to have RT event based audio is not simply a mindset issue, it is a loss of productivity issue.
Adding reverb is red herring as in most cases the problem is too much reverb. It is probably the only FX that needs to continue past the event.

Josh Bass
June 2nd, 2007, 06:26 PM
So is there any chance a future Vegas would change this?

Douglas Spotted Eagle
June 2nd, 2007, 08:26 PM
Potentially, but it also causes many more headaches. It's got nothing to do with loss of productivity; this has been discussed since the early days of Vegas. Vegas is intended to:
a-operate like an analog suite operates in the real world
b-provide a logical interface for workflows.
Audio can have temporal effects applied to the media. What happens, say, when you have a 5-second event to which you want to apply a 10-second reverb? (A very common application.)
The 5-second file has ended. A new file must be written.
This is just one example.
Yes, the engineers at Sony *could* (and probably easily) write code to make this work. But it comes at a cost that likely isn't worthwhile in terms of various aspects of the way the app works.
In the "real" world, an FX bus is an FX bus. Vegas works this way, just like an analog system works. Interfaced with a HUI, Vegas functions identically to an analog console setup in a live or studio environment.

Jarrod Whaley
June 3rd, 2007, 11:22 AM
Potentially, but it also causes many more headaches. It's got nothing to do with loss of productivity; this has been discussed since the early days of Vegas. Vegas is intended to:
a-operate like an analog suite operates in the real world
b-provide logical interface for workflows.
Audio can have temporal effects applied to the media. What happens say...when you have a 5 second event to which you want to apply a 10 second reverb? (Very common application)
The 5 second file has ended. A new file must be written.
This is just one example.
Yes, the engineers at Sony *could* (and probably easily) write code to make this work. But it's at a cost that likely isn't cost-effective in terms of various aspects of the way the app works.
In the "real" world, an FX bus is an FX bus. Vegas works this way, just like an analog system works. Interfaced with a HUI, Vegas functions identically to an analog console setup in a live or studio environment.What headaches? And what about the headaches I (and surely many others) currently have by virtue of the fact that this functionality is not available? If you needed this functionality, I'd bet you'd be arguing in favor of it.

Video can have temporal effects too, by the way.

In the "real world" you don't have to do the equivalent of rendering out to a new file. If you were mixing on a console and had to record your sound to a tape every time you made minor tweaks to the effects configuration on it, I bet you wouldn't be happy.

I suspect you'll rebut by saying that if I were to use automation or envelopes, then I'll be operating the software like I would operate a console in the real world. But we're talking about software, not the real world. When I use a word processor, I don't expect to have to listen for a "ding" at the end of each line of text. When I am listening to an MP3, I don't expect to have to rewind at the end.

Software doesn't have to emulate the "real" world when certain aspects of the real world are a huge pain in the hindquarters; the entire point of using computers is that they are supposed to make things easier, cheaper, and less time-consuming (*everyone laughs uneasily here*). This particular aspect of Vegas makes things harder, more expensive (due to lost productivity), and much more time-consuming.

So is there any chance a future Vegas would change this?

My guess is that the Sun will turn purple and start whistling showtunes before that ever happens.

Douglas Spotted Eagle
June 3rd, 2007, 11:29 AM
What DAW packages out there don't emulate the real world?
Could you identify temporal FX for video that go beyond the end of a file without writing a new file?
I'm not defending Vegas, I'm curious as to how you'd rather see this realistically work.
'nother example...
You have a door slam/Foley. File size/length is 1.5 seconds. You want it to reverberate for, say, 15 seconds. Since it's a file-level door slam (which cannot be accomplished in the real world), where do the additional 13.5 seconds of reverberation come from without writing a new file on the fly?

Jarrod Whaley
June 3rd, 2007, 12:03 PM
What DAW packages out there don't emulate the real world?

I don't know which ones, because Vegas is what I use. My point, though, is that there's no reason why a piece of software should have to be exactly like the real world. I realize that by emulating a console workflow, Vegas retains a familiar workflow that has worked for a long time. Believe me, I do see the value in that. However, if something is a pain to do on an analog console, why should that pain be translated directly into software, where anything goes?

Could you identify temporal FX for video that go beyond the end of a file without writing a new file?

I realize it's not quite the same thing, but it's close in many ways: slow-motion requires a change in the length of the event. When you use an event velocity envelope, or change the playback rate in the event preferences, Vegas truncates the effect by retaining the original length of the event--you have to stretch it out manually if you want to get the entire clip as you originally cut it. Worst case, things like reverb could function like this. It would be a little bit of a bother to have to stretch out the event, but it would be better than juggling massive numbers of tracks containing single (or two or three) events and/or messing with complicated envelopes. Again, this would be the inelegant, worst-case solution; a better way of doing this is described below...

I'm not defending Vegas, I'm curious as to how you'd rather see this realistically work.
'nother example...
You have a door slam/Foley. File size/length is 1.5 seconds. You want it to reverberate for, say, 15 seconds. Since it's a file-level door slam (which cannot be accomplished in the real world), where do the additional 13.5 seconds of reverberation come from without writing a new file on the fly?

Since all we're talking about is real-time preview here, all Vegas would have to do is continue playing the reverberating sound after the event ends, saving the actual rendering for the point at which the entire project is rendered. Again, if Vegas can play back reverb on the fly at the track level, why not also at the event level? How hard could that be to implement? And what issues (technical or otherwise) could that possibly cause? It would require more processing power, but the difference would be pretty marginal when you think about all of the processing that already goes into working with video in a real-time environment.

Douglas Spotted Eagle
June 3rd, 2007, 12:13 PM
Since all we're talking about is real-time preview here, all Vegas would have to do is to continue playing the reverberating sound after the event ends--just as Vegas doesn't write a new video file when you add effects to a video event. How hard could that be to implement? And what issues (technical or otherwise) could that possibly cause?

I surely wish I could have a dime for every time someone says "It can't be that hard." If it's not that hard, why isn't everyone doing it?

Event is 1.5 seconds. 15 second reverb. The next event on the timeline is only 6 seconds downstream from the event that has a 15 second reverb on it, on the same track. How would you see Vegas managing this at the event level?

Jarrod Whaley
June 3rd, 2007, 12:28 PM
I surely wish I could have a dime for every time someone says "It can't be that hard." If it's not that hard, why isn't everyone doing it?

We seem to be asking the same question, but with different rhetorical intentions. :) Why indeed.

If I can apply color correction, masks, keyframed motion, and keys all to a single event without having to render it before I can see those effects depicted on the timeline, I should be able to do the same amount of tweaking to an audio event and be able to see the changes reflected on the timeline without having to render out first. It's a no-brainer.

Event is 1.5 seconds. 15 second reverb. The next event on the timeline is only 6 seconds downstream from the event that has a 15 second reverb on it, on the same track. How would you see Vegas managing this at the event level?

It seems that I can't properly express what I'm suggesting. We're just talking about real-time preview--Vegas could simply play both events back with reverb applied to them, with no need to render them yet. Again, if you put a "dry" event on a track that has reverb applied to it, Vegas will apply reverb to the event on the fly with no need to render it out, right? Why in the world can't the same thing happen with an event?

Theoretically speaking: when the playback reaches the event, it plays the event as if it has reverb on it, and the reverberation continues after the event ends. Simple. Or, alternatively, Vegas could truncate the file like I was saying before, making a manual adjustment to the length of the event necessary.

Could you address each of these scenarios directly and tell me why, in your opinion, they would not work?

Douglas Spotted Eagle
June 3rd, 2007, 12:33 PM
In your color correction example, the color corrected clip isn't playing over top of the next clip in line. It requires a new track should you wish to play both events at the same time, and have them both be visible. Temporal-based video events do not spill beyond the temporal boundaries of the file/event.
Again...two events, 3 seconds apart. First event has 15 seconds of reverb on it. What happens to the next event? What happens to the reverb during the next event? The reverb should be carrying over it. On a track level, this is possible. But on an event level, it's not. I guess what seems fairly simple to grasp from my perspective may not be so easy to understand, or rather, I'm not explaining it clearly enough.

Jarrod Whaley
June 3rd, 2007, 12:48 PM
In your color correction example, the color corrected clip isn't playing over top of the next clip in line. It requires a new track should you wish to play both events at the same time, and have them both be visible. Temporal-based video events do not spill beyond the temporal boundaries of the file/event.

I realize this, and I see the difference. The point here was just to illustrate how frustrating it is that audio effects are harder to deal with than video effects, and that this is completely baffling to me.

Again...two events, 3 seconds apart. First event has 15 seconds of reverb on it. What happens to the next event? What happens to the reverb during the next event? The reverb should be carrying over it.

Again... Vegas plays (because playback in real-time is all that's necessary) the first event as if it has reverb on it, and plays the second the way you have instructed Vegas to play it. The reverb from the first event goes on after the event ends and is "mixed" on the fly with the sound in the second event. This is all just playback for preview purposes, mind you.

On a track level, this is possible. But on an event level, it's not. I guess what seems fairly simple to grasp from my perspective may not be so easy to understand, or rather, I'm not explaining it clearly enough.

I feel the same way. What I'm suggesting here seems to me like a very simple thing, but I've ended up saying it four or five times. I'm clearly not explaining this very well.

Jarrod Whaley
June 3rd, 2007, 12:54 PM
Temporal-based video events do not spill beyond the temporal boundaries of the file/event.

Sorry for the double post, but I thought I'd point out that this is, in fact, one of the two models for real-time audio effects that I have suggested.

I'd also like to add that even if, in your example, I had to put the event with reverb on it onto a second track so that it could overlap an immediately following event, I'd still prefer this method to the current situation, in which you have to create an entirely new track for the clip with the reverb on it. Instead, I could put the event with reverb applied to it on an already existing track.

Part of my problem with the way Vegas handles audio effects is that you either:
A) Have to render out tons of tiny wav files for each event to which you want to apply effects;
B) Have to wade your way through a project that contains an absurd number of tracks in many situations; or
C) Have to contend with extremely complicated and easy-to-screw-up envelope and/or automation schemes.

All I'm saying is that none of these scenarios lends itself to an efficient use of one's time and/or resources, and that a more efficient model than any of the above should by no means be too much to ask.

Glenn Chan
June 3rd, 2007, 01:46 PM
It seems that I can't properly express what I'm suggesting. We're just talking about real-time preview--Vegas could simply play both events back with reverb applied to them, with no need to render them yet. Again, if you put a "dry" event on a track that has reverb applied to it, Vegas will apply reverb to the event on the fly with no need to render it out, right? Why in the world can't the same thing happen with an event?
The post on the Sony forum somewhere explains why this is difficult:


If it were only as easy as you believe...it isn't.

- Not all FX tell us when they are done.
- FX can't be asked consistently for information like 'tail length' produced. Not being able to know exactly when an "event" is no longer producing data is a problem.
- Output can be variable if an FX has parameter automation. (If we let you put FX on Events, I can't imagine that you wouldn't want to automate them over time...)
- Some plugins - UAD - can be very problematic when used in any way out of how they define them. (They define their own variance and rules on the VST SDK/Spec.)

For example.

You cannot just feed data into a UAD plugin when you want. You must process ALL UAD plugins at the exact same time relative to the current sample frame, all the time, otherwise the hardware yells - loudly. So let's say you have 10 events, each with a different UAD plugin. You must feed silence into all of the UADs so that the hardware limitation of the UADs is met. Sure, you could disable and enable the plugins on the fly, but you can't do this arbitrarily. It would still have to happen on sample frame boundaries. Disabling and enabling the UAD plugins has an overhead associated to it as well.

While it seems trivial on the surface, it is not. Sure we could guess and try to determine the output of a plugin in realtime, but now we are spinning CPU cycles on listening for "silence" or detecting tails.

WRT limiting which plugins can be used:

I could just hear those users that have Brand X or "Type X" plugins that we choose not to support as Event level plugins. Ugh! If we can't make it work for every plugin, then we are just digging a different hole for ourselves.

I am not saying this can't be done or that we won't do it, but understand it is never as trivial as it looks on the surface. To do it right and make as many users as possible happy is no small task.

FWIW: CDArch is not really giving you Event level FX. The GUI just makes it appear this way.

Peter
http://www.sonycreativesoftware.com/forums/ShowMessage.asp?ForumID=4&MessageID=482604

2- Feature film mixes can easily have 40+ tracks.

Vegas has shortcuts for collapsing tracks so they don't take as much real estate... check the shortcuts sticky at the top of this forum, I think it has them in there.

3- To me, a somewhat more sensible solution is if the non-real-time FX engine is tweaked so that it remembers your audio plug-ins... that way it sort of behaves like there is eventFX. Though reverbs won't work properly, and that might be confusing if you don't know what's going on.
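The "spinning CPU cycles on listening for silence" problem from the quoted post can be sketched like this: after the event's samples run out, the host keeps feeding the effect silence and watches the output level until it falls below a threshold (with a hard cap, since some FX never report when they're done). The effect here is a stand-in one-sample feedback delay, and all names are hypothetical -- this is not how any real host or plugin API works, just the shape of the problem:

```python
# Naive "listen for silence" tail detection: flush an effect with
# zeros after the event ends and stop once the output dies away.

def feedback_delay(block, state, decay=0.5):
    """A trivial one-sample feedback effect with internal state."""
    out = []
    for s in block:
        state = s + decay * state
        out.append(state)
    return out, state

def render_until_silent(event, block_size=4, threshold=1e-3, max_blocks=100):
    """Render the event, then flush silence until the tail falls
    below the threshold (or the hard cap is hit)."""
    out, state = feedback_delay(event, 0.0)
    for _ in range(max_blocks):  # cap: some FX never report "done"
        block, state = feedback_delay([0.0] * block_size, state)
        out.extend(block)
        if max(abs(s) for s in block) < threshold:
            break
    return out

tail = render_until_silent([1.0, 0.0])
# The rendered audio is longer than the 2-sample event: the tail was flushed.
```

Every silence block burns real processing per event, which is the overhead the Sony post is describing -- and a plugin that reported its own tail length would make the whole guessing game unnecessary.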

Jarrod Whaley
June 3rd, 2007, 02:16 PM
Thanks for digging up that post, Glenn. At least now I have a better idea of *why* audio effects are incapable of being handled in real-time.

The poster from the Vegas forum mentioned that CD Architect's GUI makes it seem as if effects are being applied in real-time... I've never used CDA, but it sounds like a similar tweak to the Vegas GUI would satisfy me, because it sounds like at least in some ways, I'd never know the difference on any practical level.

You say that feature films routinely use 40+ tracks... this is in line with my experience with Vegas on long-form projects. Unfortunately, in an NLE, you can't see all of those tracks at once like you can on a console, for example. The problem with minimizing tracks is that it doesn't really go too far toward solving the problem of keeping everything in view if you do have 40+ tracks (which I often do), and if (make that when) you ever need to move a video clip and all of its corresponding audio clips, you're still likely to miss some little event down on, say, track 43. Yes, grouping events can help avoid this problem, but it doesn't apply to every situation, and you can still miss an event when you go to group them.

Also, even though the bit about not knowing when an audio effect will end almost kinda-sorta makes sense (especially since Sony can't know what 3rd-party plug-ins are going to do), I still don't understand why Vegas couldn't, as a slightly inelegant alternative, truncate the end of a reverb application at the end of an event, so that you have to think ahead and allow for silence before applying the reverb--or at the very least, allow us to use a real-time preview on some effects. I have no doubt that quite a few hours of my life that I have wasted wading through tons of tracks would be returned to me if Sony had adapted either of these models from the beginning.

I guess I just have to learn to live with this, as painful and time-wasting and counterintuitive and absurd and ridiculous as it is. Surely there has got to be a better way, but it sounds like we're never going to actually get a better way to handle audio, because Sony has their stock excuse and their minds are closed on the matter.

Anyway, thanks again for digging up the info, Glenn.

Brian Standing
June 4th, 2007, 02:47 PM
This may not help with your central concern, since you will still need to render out a .wav for an event-based Audio FX. However, you can help manage all the resulting "tiny .wav" files, at least on the timeline, by using Vegas' "take" function.

If you add the rendered .wav file over the original as a take (try right-mouse-clicking and dragging the file to overlap the original), you can easily return to the original, or cycle through multiple takes, by pressing the "T" key. You can also set Vegas to automatically display the take name, if you need to remember what you did to each one.

If they're truly tiny events, the rendering time isn't much, and once rendered, they don't need to be rendered again.

Just a thought.

Jarrod Whaley
June 4th, 2007, 03:12 PM
That is a helpful suggestion Brian, and it does address at least one part of the issue I personally have with non-real-time effects.

Thanks.

Josh Bass
June 4th, 2007, 04:08 PM
Oh yeah, I wanted to ask you guys about that... can you make Vegas 6 tell you the name of the clip/event in the timeline? Vegas 4 did it, and I can't find an option anywhere in 6. What I'm talking about is actual text written on the event in the timeline ("John CU take 6").


Also, doesn't FCP have realtime audio FX?

Douglas Spotted Eagle
June 4th, 2007, 06:20 PM
Vegas does have realtime audio FX, at two different levels, whereas FCP only offers it at one level. Problem is, Vegas offers event-based FX as well, but they're not real time. Therefore, it's a problem for some folks.
Coming from the audio world, it's not an issue for many of us.
To view take names/event/audio file names, select VIEW/ACTIVE TAKE INFORMATION or CTRL+SHIFT+I.