
Master effects #6359

Merged
merged 4 commits into audacity:master from master-channel on May 27, 2024
Conversation


@vsverchinsky vsverchinsky commented Apr 29, 2024

Resolves: #6346

Known issues:

  • UI: Master effects section does not automatically resize to fit its contents

  • No micro-fades applied on solo/gain changes

  • Latency increased for track controls in real time (the delay between the moment you press a TC button and the moment you hear the change is audible)

  • I signed CLA

  • The title of the pull request describes an issue it addresses

  • If changes are extensive, then there is a sequence of easily reviewable commits

  • Each commit's message describes its purpose and effects

  • There are no behavior changes unnecessary for the stated purpose of the PR

Recommended:

  • Each commit compiles and runs on my machine without known undesirable changes of behavior

QA:

  • Several projects can be open at once, each with master channel effects, and behavior is as expected.
  • Real-time effects can be added and removed during playback; this works for both track and master channel.
  • Real-time effect UI indicates the track name or "Master", as appropriate.
  • Playback of a MIDI file in an otherwise empty project works (i.e., MIDI messages are sent, and only silence is played back).
  • Mute / Solo during playback still works (an audible click may occur, but that shouldn't be a regression).
  • Playback of several tracks within a loop, some with latency-adding effects and others without, yields the expected result.
  • Master track effects are applied when exporting, but not when rendering (in place or to a new track) or when converting stereo to mono.
  • Export applies delay compensation.
  • Export works for files of 10 seconds or more.
  • Master effects persist across closing/re-opening a project.
  • Master effect stacks in several projects open at once do not interfere with each other (e.g. Reverb in project A isn't applied to tracks in project B).
  • Backward compatibility: a project with master effects can be opened in a previous Audacity version, just without the master effects.
  • Scrubbing works.

@LWinterberg
Member

No micro fades

As in, it doesn't honor the micro-fades preference?

@dozzzzer
Contributor

@vsverchinsky when playing a mono clip, only the left channel is playing audio:

(screenshot: only the left channel is playing audio)

@vsverchinsky vsverchinsky marked this pull request as ready for review May 5, 2024 18:44
@saintmatthieu saintmatthieu self-assigned this May 6, 2024
@saintmatthieu saintmatthieu left a comment

Haven't finished the review yet, but stumbled on a potential crash.

@@ -68,6 +72,14 @@ void RealtimeEffectManager::Initialize(

// Leave suspended state
SetSuspended(false);

//Register the master channel group
Contributor

I don't know if this comment is relevant only to the following two commented-out lines or to what follows, too. Why keep the dead code?

@@ -50,6 +50,8 @@ class REALTIME_EFFECTS_API RealtimeEffectManager final :
public Observer::Publisher<RealtimeEffectManagerMessage>
{
public:
static const ChannelGroup* MasterGroup;
Contributor

Looks like even the pointer should be const, i.e., static const ChannelGroup* const MasterGroup;
Also, I'd prefer the initialization to take place here as well for clarity, rather than in the source file.

@@ -151,9 +153,10 @@ class REALTIME_EFFECTS_API RealtimeEffectManager final :
};

void ProcessStart(bool suspended);

Contributor

trailing spaces

@@ -202,7 +200,7 @@ class REALTIME_EFFECTS_API RealtimeEffectManager final :
}

AudacityProject &mProject;
Latency mLatency{ 0 };
//Latency mLatency{ 0 };
Contributor

Why not just remove it?

//mRates.insert({MasterGroup, sampleRate});

VisitGroup(MasterGroup, [&](RealtimeEffectState& state, bool) {
scope.mInstances.push_back(state.AddGroup(nullptr, numPlaybackChannels, sampleRate));
Contributor

nullptr, aka MasterGroup?

Contributor

If so, I'd prefer MasterGroup over nullptr, in case one looks for all the places where it's used.

@@ -148,6 +148,7 @@ struct AudioIoCallback::TransportState {
assert(false);
continue;
}
//TODO: initialize with sequence own channels count and sampleRate
Contributor

Why? Isn't this used by AllocateChannelsToProcessors to "plug" an effect's output channels into the number of playback channels? Why should this change with the master channel?

Collaborator Author

The comment isn't specifically about the master channel.
When we allocate two mono effects to one stereo track there is at least one problem with the UI: you can't see the output of the second channel.
I think configuring effects to process samples in the native track format and rate could have the benefit of reducing CPU load when the playback sample rate is higher than the track rate. But that may be conceptually wrong.

Contributor

Regarding the CPU, I don't know; it looks like there is no negotiation at the moment: the framework dictates the sampling rate to the effect instances, which I guess is much simpler. Also, I think resampling is inevitable, so I'm not sure we want to try and minimize it.
My point is that this TODO should probably be tempered with some doubt, e.g., "consider initializing (...) because (...)".

@@ -81,5 +86,9 @@ class RealtimeEffectPanel : public wxPanel
void SetFocus() override;

private:

void MakeTrackEffectsPane();
Contributor

I think correct English would be MakeTrackEffectPane (even if there's more than one effect).

}
}

auto samplesAvailable = std::min_element(
Contributor

This crashes when playing a MIDI file on an empty project, because mPlaybackSequences and, consequently, mProcessingBuffers, are then empty.

Contributor

Maybe a solution: the size of mMasterBuffers equals the number of playback channels, maybe it could be used instead.
Also, AFAICS these buffers always have the same size, so a simple mMasterBuffers[0].size() would be more expressive and sufficient.

Contributor

Or mPlaybackBuffers.

@saintmatthieu saintmatthieu left a comment

Finished reviewing the first commit (the audio part); nothing major besides that crash, but hopefully this is an easy fix.
Please remove the dead code; I don't see why it should be kept, and it's noisy and sometimes confusing.

mOldChannelGains.resize(mPlaybackSequences.size());
size_t iBuffer = 0;
// Bug 1763 - We must fade in from zero to avoid a click on starting.
//mOldPlaybackGain = 0.0f;
Contributor

Noise

mPlaybackBuffers[iBuffer++] = std::make_unique<RingBuffer>(
floatSample, playbackBufferSize);


Contributor

trailing spaces

// Avoiding std::vector
const auto pointers = stackAllocate(float*, mNumPlaybackChannels);
// Do any realtime effect processing, after all the little
// slices have been written.
Contributor

I don't see any side effect if pScope == nullptr, so shouldn't the if (pScope) be on this line?

@@ -166,26 +171,21 @@ class REALTIME_EFFECTS_API RealtimeEffectManager final :
// using StateVisitor =
// std::function<void(RealtimeEffectState &state, bool listIsActive)> ;

//! Visit the per-project states first, then states for group
//! Visit states for group
Collaborator

" or for the master when group is null"

@@ -2015,104 +2023,208 @@ bool AudioIO::ProcessPlaybackSlices(
// warping
if (frames > 0) {
size_t produced = 0;
// Check for asynchronous user changes in mute, solo status
Collaborator

Comment "mPlaybackBuffers correspond many-to-one with mPlaybackSequences" above this line is no longer correct

auto& buffer = mProcessingBuffers[iBuffer++];
const auto offset = buffer.size();
//there could be leftovers from the previous pass, don't discard them
buffer.resize(buffer.size() + frames, 0);
Collaborator

I would add a comment that this resize is expected not to allocate memory in the worker thread, because sufficient size should have been reserved in AllocateBuffers. Perhaps, test capacity before and after and assert no change.

.0f);
}
else
//TODO: mixer->Process() may be wasting CPU cycles
Collaborator

I don't think the mixer can be properly eliminated in the case that it resamples a track whose rate does not match the device output rate, or when there is a time track. There will be some state in the resampler dependent on previous input.

However, note that the mixer stands as a producer in relation to this function which is its consumer. Perhaps, if it really matters, another thread could mix in advance.

But all that effort at overlapping computation may be unnecessary because this thread really spends most of its time idle anyway, going faster than the ultimate real-time consumption of its output.

Or then again maybe it might matter in future because we want to have ever more capacity for complicated calculations like more effects, more stretching, more pitch shifting.

Or again, maybe you were thinking of power consumption, not performance.

Collaborator Author

I thought the mixer could be "advanced" without doing any actual work.

else
//TODO: mixer->Process() may be wasting CPU cycles
//TODO: fade out smoothly
std::fill_n(buffer.data() + offset, frames, 0);
Collaborator

I notice that as part of this PR you have moved the logic to silence tracks (which might be driven by unpredictable presses of the mute buttons by the user) from the low-latency thread to this high-throughput thread.

Was that for simplification of the former, so that it does less work? Or some other reason?

Does this comment mean you have not yet restored the microfading feature? (I have not yet read enough to answer that for myself)

I think I like this change because, as I recollect, the microfading had a dependency on the buffer size given to audacityAudioCallback, which can depend on the operating system or latency settings. That should not be: the duration of the fade in or out should be independent of that.

Collaborator Author

Was that for simplification of the former, so that it does less work? Or some other reason?

The reason is that we can't do master effects processing on the low-latency thread, and that processing should happen after all the tracks have been mixed into the playback channels.

Collaborator Author

Does this comment mean you have not yet restored the microfading feature? (I have not yet read enough to answer that for myself)

Micro-fading isn't completely restored; it doesn't happen when you move the gain or pan sliders or press the solo/mute buttons.

@saintmatthieu saintmatthieu left a comment

Not requesting changes because I'm not certain of my analysis, but I think there is another bug, already present in the first commit, which I missed before.

// Choose a common size to take from all ring buffers
const auto toGet =
std::min<size_t>(framesPerBuffer, GetCommonlyReadyPlayback());

// Poke: If there are no playback sequences, then the earlier check
Contributor

I don't know which earlier check is meant here. Could it be that this comment is also outdated?

Collaborator

The comment refers to what now happens inside CallbackCheckCompletion. Amend the comment to make that more explicit. And it's now "below", not "above".

@@ -2682,19 +2719,21 @@ bool AudioIoCallback::FillOutputBuffers(
return true;
Contributor

A little bit higher, line 2712 on this commit, is the line float *outputFloats = outputBuffer;, which only seems to be an alias to the same thing, maybe a remnant of older code. Would you remove it if you get the chance?

for(auto& buffer : mMasterBuffers)
std::fill_n(buffer.begin(), samplesAvailable, 0);
//introduced latency may be too high...
//but we don't discard what already was written
Contributor

There might be a mini bug here: if adding a real-time effect during playback increased latency such that samplesAvailable becomes 0, what would be played for that frame would be the samples of the previous mPlayoutBuffers, possibly leading to a repeated playout of the last few ms.
A solution might be to return false, so that the AudioThread doesn't read the content of mPlayoutBuffers?
It would be worse if a real-time effect had variable latency, but I don't think there's a use case for that.

Contributor

That would apply to the next if(samplesAvailable == 0) too.

Collaborator Author

I think the answer to your question is in FillPlayBuffers. If for some reason samplesAvailable is zeroed, then there will be more processing passes until GetNeeded() == 0. Otherwise, if I return false as you suggest, an extra buffer-exchange step will be initiated for the same frame, which I think is less preferable.
Maybe we should test how it works now with latency values that are much larger than our regular buffer sizes.

//Prepare master buffers.
//These buffers may also contain leftovers from the previous pass
//Remember initial offset for each of them
const auto masterOffsets = stackAllocate(int, mMasterBuffers.size());
Contributor

I have doubts about the correctness of the handling of latency for the master channel.
The discardable value returned by a real-time effect is the number of leading samples that can be discarded.
For the track effects, this is indeed what is done:

         for(int i = 0; i < seq->NChannels(); ++i)
         {
            auto& buffer = mProcessingBuffers[bufferIndex + i];
            buffer.erase(buffer.begin(), buffer.begin() + discardable);
         }

For the master track, however, no leading samples are discarded, and then

      unsigned bufferIndex = 0;
      for(auto& buffer : mMasterBuffers)
      {
         mPlaybackBuffers[bufferIndex++]->Put(
            reinterpret_cast<constSamplePtr>(buffer.data()),
            floatSample,
            samplesAvailable,
            0
         );
      }

which means trailing samples are discarded instead. It was already that way in the previous commit, but I missed it.

If I'm correct, the consequence of this mistake would be that, if the master channel has an effect that adds delay, an artefact due to zeroed audio would be heard at the beginning of playback.

Contributor

I also don't see the necessity of masterOffsets. I don't think there can be leftovers from one ProcessPlaybackSlices to the next, as that would mean this method produced more audio samples than were asked for, and I don't see that to be the case.

Contributor

The problem does indeed exist. In the attached video, you can hear an audible crackle at the beginning of playback with a non-zero-latency effect (TDR Limiter) added to the stack:

Screen.Recording.2024-05-10.at.16.15.45.video-converter.com.mp4

Contributor

Thanks @dozzzzer.

Collaborator

Doing latencies really right is a known bit of technical debt. I have had ideas, but I haven't found time to elaborate and share them yet.

@saintmatthieu saintmatthieu self-requested a review May 10, 2024 08:09
@saintmatthieu saintmatthieu dismissed their stale review May 10, 2024 08:10

Removing change request because the particular bug was fixed.

// Temporary buffers, each as large as the playback buffers
std::vector<SampleBuffer> mScratchBuffers;
std::vector<float *> mScratchPointers; //!< pointing into mScratchBuffers

std::vector<std::unique_ptr<Mixer>> mPlaybackMixers;
std::unique_ptr<Mixer> mMasterMixer;
Collaborator

Unused

mLastPlaybackTimeMillis = ::wxGetUTCTimeMillis();
return false;
}

// The drop and dropQuickly booleans are so named for historical reasons.
Collaborator

Comment refers to variables that no longer exist

}
}

//samples at the beginning of could have been discarded
Collaborator

delete "of"

//introduced latency may be too high...
//but we don't discard what already was written
if(samplesAvailable == 0)
return progress;
Collaborator

I am reminded that the handling of latency is still not satisfactory. I had ideas for fixing it that I have not found time to implement.

unsigned bufferIndex = 0;
for(const auto& seq : mPlaybackSequences)
{
//TODO: apply micro-fades
Collaborator

Is that really a TODO? I think you have preserved the micro-fade logic on the consumer side in FillOutputBuffers.

Contributor

IIUC, that one handles changes in the volume slider, which are global. This is for the track-specific mute/unmute.

@saintmatthieu saintmatthieu left a comment

@Paul-Licameli
Collaborator

Paul-Licameli commented May 10, 2024

I have completed a review of changes in AudioIO.cpp.

If my contribution of per-track meters is also wanted, a lot of work is needed to make these compatible.

I may volunteer my own very different rewrite of AudioIO.cpp to support a master channel in the right way.

I understand that:

  • The sequence of transformations on individual tracks includes time warping (for a time track), muting (of individual tracks), effects (stacked on single tracks), and panning, in that order.
  • Then you mix the tracks and apply the master stack, on the PRODUCER side of the RingBuffer. (As with the individual stacks, we must avoid unpredictable computations in opaque code on the low-latency consumer side.)
  • Micro-fading is still applied on the consumer side.
  • There are only one or two RingBuffers corresponding to the device output channels.

The old version, without a master stack, differed in these ways:

  • There was a RingBuffer for each WaveChannel.
  • Applying mute and pan, and mixing of the RingBuffer outputs, was done on the consumer side.

Questions might arise about subtle differences of results when moving the muting before the effect processing. (Is it better with your change? Have you tried muting a track with a reverb during play? Before, it would just mute at once, but after, it might instead produce the decaying tail of what was unmuted.)

Also, whether pan should apply before a per-track effect or after: It might make a difference for certain effects like reverbs that are not simple LTIs but might work in terms of mid and side channels. (@saintmatthieu, opinion?)

More later...

@Paul-Licameli
Collaborator

More:

The existing ring buffers should not be removed. Ring buffers for the master mix should be added. The consumer thread should take equal samples out of all of them. Only the samples from the master mix are stored in the output buffer of the audio io callback.

This allows future development to send the buffers of pre-mixed per-track data inter-thread, back to the master thread for drawing.

See SendVuInputMeterData and SendVuOutputMeterData which now do such inter-thread communication: but I would need to send other buffers too.

My thinking is this: the consumer thread needs to have access to intermediate states of calculation of the mix samples, corresponding in time to the ultimate output. Those intermediates should be sent to other code that performs graphical display, keeping as closely synchronized as possible with the real-time playback. That synchronization will not be perfect, given the reality of frame refresh rates.

Still, the latency of the RingBuffer itself should be accounted for in that drawing update. Sending the per-track information to the main thread more directly from the producer thread would make the synchronization of the display less perfect.

Therefore, more data should be travelling through ring buffers from the high-throughput but variable-latency producer thread, to the more regularly updating consumer thread.

@@ -2036,8 +2037,15 @@ bool AudioIO::ProcessPlaybackSlices(

auto& buffer = mProcessingBuffers[iBuffer++];
const auto offset = buffer.size();
const auto toConsume = std::min(
Contributor

What are these new lines fixing? It looks like we are now ensuring that no new memory is allocated in that function. This comes at the risk, though, that we provide fewer than frames samples if for some reason the capacity of the buffer is insufficient. Wouldn't that be worse?

I wonder if this change is motivated by the initial playback click that we hear in @dozzzzer's video above. I suspect it is not due to buffer allocation, but simply because he is playing a sine wave without any sort of smoothing envelope, and it's then completely normal to hear this click.

Collaborator Author

It looks like we are now ensuring that no new memory is allocated in that function.

Yes

This comes at the risk, though, that we provide less than frames samples

Memory allocations are not better. The disadvantage of this code is that it's not capable of handling very large latencies (usually > 250 ms).

I wonder if this change is motivated by the initial playback click that we hear in @dozzzzer 's video above

No, I did that in response to Paul's comment above

@saintmatthieu
Contributor

@Paul-Licameli good point about reverb.
I confirm that muting a track may not instantly mute the sound; any effect with a tail will continue ringing.
It may sound better, but I don't think this is the wanted behaviour: muting is muting. What if the effect is a long echo?
@LWinterberg ?

@saintmatthieu
Contributor

About panning before or after: yes, if the effect is non-LTI, this would make a difference. If there is one place for panning and gain, I think it should be where it is at the moment, after the effect stack. I guess other DAWs do it that way too (or may also have a "pre-gain"), and if a non-LTI effect such as a compressor comes in, it will come with its own pre-processing gain parameter.
A mid/side encoding/decoding operation is a linear operation, though, so in that particular case it should not make a difference.

@saintmatthieu saintmatthieu dismissed their stale review May 16, 2024 09:18

Will approve now, pending Vitaly's rework on the mute behaviour.

@Paul-Licameli
Collaborator

I disapprove.

Please wait to see my alternative, which will be compatible with per-track meters.

Expect it today.

@saintmatthieu
Contributor

Approving for now. My purpose is not to override @Paul-Licameli's disapproval, because maybe his alternative makes this very complex logic easier? I'll leave it up to @vsverchinsky to decide, but I don't want to block QA anymore.

@Paul-Licameli
Collaborator

Let it merge then: but my PR will begin with reversion of all the changes in AudioIO.cpp (not of other UI changes) and then a more careful stepwise rewrite.

QA will likely need to re-test everything involving sound quality, but not the UI changes.

@chinakov
Contributor

chinakov commented May 17, 2024

Found issues:

  1. When a user adds the MT-A plugin by Mercurial to the Master Effects and uses the built-in bypass feature, the audio played back is panned 100% to the left instead of playing in stereo.
  2. When a user adds the Kilohearts Distortion plugin to the Master Effects and plays back some audio, full-scale noise can be heard for a second after playback has been initialized.
  3. If a user has added any Master Effects and enabled them, the effects are not applied to files on export (I am not sure what to expect here; should Master Effects render when they are enabled?)

@LWinterberg
Member

(I am not sure what to expect here, should Master Effects render when they are enabled?)

Yes!

@Paul-Licameli
Collaborator

Let it merge then: but my PR will begin with reversion of all the changes in AudioIO.cpp (not of other UI changes) and then a more careful stepwise rewrite.

QA will likely need to re-test everything involving sound quality, but not the UI changes.

My changes are more complicated than expected.

However, I might be fixing long-standing problems with effect latencies. Search for issues with “latency”.

@saintmatthieu
Contributor

Looking into the export/render problem.

@DavidBailes
Collaborator

DavidBailes commented May 20, 2024

@vsverchinsky . Serious accessibility issue, logged here: #6456

@DavidBailes
Collaborator

@vsverchinsky . Another accessibility issue, logged here: #6457

@vsverchinsky vsverchinsky force-pushed the master-channel branch 3 times, most recently from 6c76721 to a4856a2 Compare May 22, 2024 09:30
@@ -1243,7 +1243,7 @@ bool AudioIO::AllocateBuffers(
));
for(auto& buffer : mProcessingBuffers)
buffer.reserve(playbackBufferSize);

Collaborator Author

@saintmatthieu Please avoid unnecessary formatting changes if you don't change the line itself; it complicates searching through the commit history and the review process.

@@ -2241,7 +2241,10 @@ bool AudioIO::ProcessPlaybackSlices(
);
}
}


for(auto& buffer : mMasterBuffers)
Collaborator Author

There is already a finally block that does the cleanup job.

@vsverchinsky
Collaborator Author

Completed review of @saintmatthieu's commits 👍

@vsverchinsky vsverchinsky merged commit 972b06e into audacity:master May 27, 2024
10 checks passed
@vsverchinsky vsverchinsky deleted the master-channel branch May 27, 2024 09:01
Successfully merging this pull request may close these issues: Master channel implementation.

7 participants