
ADC render sizes vs AudioContext #9

Open
rtoy opened this issue Jun 29, 2019 · 9 comments
Labels: AudioDeviceClient (Project label for AudioDeviceClient)

Comments

@rtoy (Member) commented Jun 29, 2019

As currently proposed, the ADC includes an AudioContext and provides a callback that runs the audio graph and returns the data the graph produces.

The ADC can also support different render sizes. Say 64 is the suggested HW size: how does that work with an AudioContext that must render 128 frames? Especially if there's an input to the AudioContext that is supposed to be generated by the ADC and fed into the AudioContext?

Perhaps the solution is to allow the AudioContext to work at different block sizes? Then it can match the optimum (or selected) value for the ADC.

@padenot (Member) commented Jun 29, 2019

I think it's best to do something like http://www.grame.fr/ressources/publications/CallbackAdaptation.pdf. This works both ways (an ADC buffer size bigger than 128, and the opposite).
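The adaptation there boils down to putting a small FIFO between the two callback rates. Here is a minimal TypeScript sketch of the idea for the output direction, with hypothetical names, and not necessarily the paper's exact algorithm:

```ts
// Adapts a render callback producing `renderSize` frames (e.g. a 128-frame
// AudioContext quantum) to a device callback consuming any other size
// (e.g. 64 frames), mono, via a small FIFO. Sketch only.
class CallbackAdapter {
  private fifo: Float32Array;
  private readPos = 0;
  private available = 0;

  constructor(
    private renderSize: number,
    private render: (out: Float32Array) => void, // runs the audio graph
  ) {
    this.fifo = new Float32Array(renderSize);
  }

  // Called by the device: fills `out` from the FIFO, invoking the
  // renderSize-frame render whenever the FIFO runs dry. Note that each
  // render must still complete within the *device* callback's deadline.
  deviceCallback(out: Float32Array): void {
    let written = 0;
    while (written < out.length) {
      if (this.available === 0) {
        this.render(this.fifo);
        this.readPos = 0;
        this.available = this.renderSize;
      }
      const n = Math.min(out.length - written, this.available);
      out.set(this.fifo.subarray(this.readPos, this.readPos + n), written);
      this.readPos += n;
      this.available -= n;
      written += n;
    }
  }
}
```

With a 64-frame device this calls the 128-frame render on every other callback; with a 192-frame device it alternates between two renders and one. That uneven schedule is exactly the drawback raised below.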

@hoch added the AudioDeviceClient label on Jul 16, 2019
@hoch (Member) commented Jul 16, 2019

Yes! Then we can assign this issue to @sletz. :)

@sletz commented Jul 17, 2019

Well, this old work was done in the context of adding ASIO support to the PortAudio API. On Windows some drivers were using quite exotic buffer size values, so we had to adapt those values to more standard ones (like powers of two) while trying to minimize latency.
The main drawback of this kind of approach is that DSP CPU usage is no longer "homogeneously distributed in time".
So I guess the only proper way is to avoid all adaptation as much as possible, and use the real HW buffer size throughout the audio callback chain.

@padenot (Member) commented Jul 17, 2019

Right, I think we generally agree here. Authors will be able to pick the best buffer size for the platform (and I expect most of them to do that), but they are allowed to pick something else, in which case we'll have to do something like the technique described in your paper.

@sletz commented Jul 17, 2019

"The main drawback of this kind of approach is that DSP CPU usage in not "homogeneously distributed in time" anymore."

Take something like ADC running at 64 frames and AudioContext at 128 frames: AudioContext callback is called every 2 buffers, but still has to complete in 64 frames duration right? So you only have 50% of the DSP CPU bandwidth. If AudioContext callback takes more than 50%, it will not end in time.

Or do you think of even more complex buffering scheme?
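A quick sanity check of that 50% figure, as a sketch assuming a 48 kHz sample rate:

```ts
const sampleRate = 48000;

const deviceDeadlineMs = (64 / sampleRate) * 1000;  // ~1.33 ms per device callback
const renderedAudioMs = (128 / sampleRate) * 1000;  // ~2.67 ms of audio per render

// The 128-frame render fires inside a single 64-frame device callback, so it
// must finish within ~1.33 ms even though it produces ~2.67 ms of audio:
// half the per-frame compute budget of a native 64-frame render.
console.log(deviceDeadlineMs / renderedAudioMs); // 0.5
```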

@hoch (Member) commented Jul 19, 2019

This issue came up in the past. I think the proper buffer size can be calculated, but the non-homogeneous processing load is a problem. The most intuitive way to handle it is to allow the AudioContext to render fewer than 128 frames. The alternative would be to put a ring buffer between the context and the device client and run them independently, each with its own clock. But that will have a drift problem.
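For reference, the ring-buffer alternative would look roughly like this: a minimal single-producer/single-consumer sketch with hypothetical names (a real lock-free implementation would need atomics for the indices):

```ts
// Minimal SPSC ring buffer: the AudioContext thread writes render quanta,
// the device-client thread reads device-sized chunks, each on its own clock.
class RingBuffer {
  private buf: Float32Array;
  private writeIdx = 0; // owned by the producer (AudioContext)
  private readIdx = 0;  // owned by the consumer (device client)

  constructor(capacity: number) {
    this.buf = new Float32Array(capacity);
  }

  private used(): number {
    return (this.writeIdx - this.readIdx + this.buf.length) % this.buf.length;
  }

  // Producer side: returns false on overrun (consumer clock running slow).
  push(data: Float32Array): boolean {
    if (data.length >= this.buf.length - this.used()) return false;
    for (const s of data) {
      this.buf[this.writeIdx] = s;
      this.writeIdx = (this.writeIdx + 1) % this.buf.length;
    }
    return true;
  }

  // Consumer side: returns false on underrun (producer clock running slow).
  pop(out: Float32Array): boolean {
    if (out.length > this.used()) return false;
    for (let i = 0; i < out.length; i++) {
      out[i] = this.buf[this.readIdx];
      this.readIdx = (this.readIdx + 1) % this.buf.length;
    }
    return true;
  }
}
```

The drift problem shows up here as the gap between writeIdx and readIdx slowly growing or shrinking when the two clocks disagree, until push or pop starts failing, unless something resamples or drops/duplicates frames to compensate.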

@rtoy (Member, Author) commented Jul 19, 2019

I agree. We should update the Web Audio AudioContext (and OfflineAudioContext) to accept a new option that specifies the render size. I'll file an issue for that.
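Something along these lines, say (the option name below is purely illustrative, not a proposed spelling):

```ts
// Hypothetical shape only; the real name and semantics belong to the
// Web Audio API issue to be filed.
const ctx = new AudioContext({
  sampleRate: 48000,
  renderSizeHint: 64, // ask the context to render 64-frame quanta
});
```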

@padenot (Member) commented Nov 14, 2019

If the Web Audio API gains the ability to change its internal processing block size, then the ADC proposal has no advantage over using an AudioWorklet.
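For context, an AudioWorkletProcessor already hands authors a render callback on the audio thread; the fixed 128-frame block is the one remaining gap. A minimal sketch (this code runs in the AudioWorkletGlobalScope):

```ts
// Today `process` is always called with 128-frame blocks, which is the
// one thing a render-size option would change.
class PassthroughProcessor extends AudioWorkletProcessor {
  process(inputs: Float32Array[][], outputs: Float32Array[][]): boolean {
    const input = inputs[0];
    const output = outputs[0];
    for (let ch = 0; ch < output.length; ch++) {
      // Copy input to output, substituting silence if no input is connected.
      output[ch].set(input[ch] ?? new Float32Array(output[ch].length));
    }
    return true; // keep the processor alive
  }
}
registerProcessor("passthrough-processor", PassthroughProcessor);
```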

@jas-ableton commented Dec 19, 2019

It seems like there's a lot to be said for adding a configurable buffer size to the current Web Audio API, regardless of where ADC eventually ends up. Existing apps running on devices with eccentric buffer sizes like 192 stand to benefit from a configurable render quantum as long as developers adopt the new option, which is a significantly easier change than switching to an ADC-based implementation.

See related: https://bugs.chromium.org/p/chromium/issues/detail?id=924426
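To illustrate the 192-frame case with a sketch (the alternating schedule follows from 2 × 192 = 3 × 128):

```ts
// With a 192-frame device callback fed from a fixed 128-frame quantum,
// the number of renders per callback alternates 2, 1, 2, 1, ...
let buffered = 0; // leftover frames carried between device callbacks
for (let k = 0; k < 6; k++) {
  const renders = Math.ceil((192 - buffered) / 128);
  buffered += renders * 128 - 192; // alternates 64, 0, 64, 0, ...
  console.log(`callback ${k}: ${renders} render(s), ${buffered} frames left over`);
}
```

A context rendering 192-frame quanta directly would make every callback cost the same.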
