
Relation of Audio Device Client and AudioContext #5

chrisguttandin opened this issue Apr 2, 2019 · 6 comments

@chrisguttandin

Am I correct in assuming that there is a 1:1 relation between an Audio Device Client and an AudioContext? Each Audio Device Client can only have one AudioContext, and each AudioContext can only belong to one Audio Device Client. But an AudioContext is optional, and an Audio Device Client can exist without an AudioContext associated with it.

If my assumption is correct, I think it would make sense to flip around the process of creating an AudioContext associated with an Audio Device Client. If there is a getContext() method on the Audio Device Client, it implies that the context is already there and just has to be returned. If the AudioContext were instead created as usual, but with an additional constructor argument, it might better reflect what is actually going on behind the scenes.

const ac = new AudioContext({ deviceClient: myPreviouslyCreatedAudioDeviceClient });

// This will then throw an error, because there is already
// a context associated to the same Audio Device Client
const ac2 = new AudioContext({ deviceClient: myPreviouslyCreatedAudioDeviceClient });
@rtoy
Member

rtoy commented Apr 2, 2019

This is one area where I disagree with the current API. The API shouldn't know about AudioContext; it makes AudioContext a first-class citizen. I think I'd prefer AudioDeviceClient not know anything about AudioContexts. Instead, you create an ADC instance and do something like new AudioContext({destination: adc_instance}). It's up to the browser to figure out how to connect the two.

I think for situations like this the ADC callback is still called, and all the audio from the ADC and the AudioContext is merged together to produce the final output.
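For reference, a minimal sketch of the constructor-argument shape described above. How the ADC instance itself is obtained was never settled, so getAudioDeviceClient and its options are placeholders, not a real API.

// Sketch only: getAudioDeviceClient and its options are placeholders for
// however an AudioDeviceClient instance would actually be obtained.
const adcInstance = await navigator.mediaDevices.getAudioDeviceClient({
  sinkId: 'default',
  callbackBufferSize: 512
});

// The context only names the ADC as its destination; the browser decides
// how to route the context's rendered audio into the device callback.
const context = new AudioContext({ destination: adcInstance });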

@hoch added the AudioDeviceClient label on Apr 2, 2019
@hoch
Member

hoch commented Apr 2, 2019

The reason I proposed getContext() for ADC is the similarity with the Canvas API (getContext() for a 2D or 3D rendering context). However, I am also leaning toward the suggestion above.

The spec change on the Web Audio API side will be really simple as well. This might be better than a getter pattern because everything settles down at construction time.

partial dictionary AudioContextOptions {
  AudioDeviceClient deviceClient;
}

One constraint: if AudioContextOptions has a valid deviceClient, the other properties (latencyHint and sampleRate) will be ignored. Also, I am not sure whether sharing the same client between multiple contexts is a bad idea. If you can drive multiple contexts with the same clock, I think it might open up new possibilities.
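A minimal sketch of that constraint, assuming the partial dictionary above; the variable names are illustrative only.

// Hypothetical: `client` is a previously created AudioDeviceClient.
// With a valid deviceClient, the other options would be ignored because
// the client already fixes both the latency and the sample rate.
const context = new AudioContext({
  deviceClient: client,
  latencyHint: 'playback',  // ignored
  sampleRate: 44100         // ignored; the client's rate wins
});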

Probably there are more corner cases, so we'll have to keep looking.

@hoch
Member

hoch commented Apr 4, 2019

Notes from telecon on 4/4/2019:

The group has two sets of thoughts on this issue:

  • AudioDeviceClient.getContext() enforces a 1:1 relationship between ADC and AC. However, it comes with tight coupling to AudioContext, which may become an issue in the future.
  • Being able to pass an ADC as a sink/destination for any audio-related task looks useful.
  • Exposing a contextCallback function in the ADC's device callback offers more control, but the pattern is error-prone.
  • Getting a rendered buffer from the AC instead of a contextCallback is very simple, but then you can't pass the ADC's input buffer to the AC.
  • Should we allow passing one ADC to multiple ACs? If so, will the streams from multiple ACs be mixed down and end up in the ADC's device callback?

@hoch
Member

hoch commented Apr 4, 2019

To capture the original proposal, here's an example of pass-through between ADC and AC.

/* AudioDeviceClientGlobalScope */

const deviceCallback = (input, output, contextCallback) => {
  // This callback will automagically handle the buffer size difference
  // between the ADC (user-defined) and the AC (128 frames).
  contextCallback(input, output);
};

Then the |input| above will be delivered to AudioContext.source.

const context = myDeviceClient.getContext();
context.source.connect(context.destination);

@hoch
Member

hoch commented Jun 5, 2019

Some thoughts:

One of the most important functionalities we want is to be able to invoke the AudioContextCallback inside of the AudioDeviceCallback.

Allowing multiple AudioContexts to be mapped to a single AudioDeviceClient makes this significantly more complex. Note that the whole point of passing an ADC instance to AudioContext's constructor is to support multiple AudioContexts with a single ADC.

setDeviceCallback((input, output, contextCallbacks) => {
  contextCallbacks[0](input, output);
  contextCallbacks[1](input, output);
  // and so on...
});

Theoretically this is possible, but I'm not sure what can be accomplished that would justify the added complexity. Here are some difficult questions:

  1. How do you associate each callback with its context? Also, what is the right order, and why?
  2. What if each context has a different baseLatency or outputChannelCount?
  3. More importantly, this will change the AudioContext spec. The Web Audio API is already bloated with numerous corner cases.

All in all, this is why I prefer a 1:1 relationship between AudioContext and AudioDeviceClient. The getContext() getter can easily force a context to follow whatever the associated ADC offers. Also, this way we can keep the Web Audio API intact and keep the ADC spec work small. We only need to specify:

  1. The AudioDeviceClient.getContext() getter.
  2. How the AudioContextCallback will behave to accommodate the callback buffer size difference (see the sketch below).
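For illustration, here is one way point 2 could look inside the device callback, assuming the context callback always renders 128-frame quanta, the device buffer size is a multiple of 128, and input/output are arrays of per-channel Float32Arrays. All of these are assumptions, not spec text.

/* AudioDeviceClientGlobalScope (sketch, not spec text) */

const RENDER_QUANTUM = 128;

setDeviceCallback((input, output, contextCallback) => {
  // Pull the AudioContext once per 128-frame render quantum until the
  // user-defined device buffer is filled.
  const frames = output[0].length;
  for (let offset = 0; offset < frames; offset += RENDER_QUANTUM) {
    const inputSlice = input.map(
        (channel) => channel.subarray(offset, offset + RENDER_QUANTUM));
    const outputSlice = output.map(
        (channel) => channel.subarray(offset, offset + RENDER_QUANTUM));
    contextCallback(inputSlice, outputSlice);
  }
});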

@hoch added this to "Needs CG discussion" in the Audio Device Client project on Jun 6, 2019
@rtoy
Member

rtoy commented Jun 28, 2019

A quick summary of the F2F meeting. I understand the use case now, and having the context as part of the ADC makes total sense. There was some question about how important that use case is, but unless there's a good reason not to allow this, I'm fine with having the AudioContext in the ADC.

The use case is that having the context in the ADC lets the ADC code decide how to mix its own audio with the AudioContext's output, allowing fine-grained control.
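As an illustration of that fine-grained control, here is a sketch of a device callback that renders the context into a scratch buffer and then mixes it with the raw device input. The buffer shapes, matching channel counts, and gain values are illustrative assumptions.

/* AudioDeviceClientGlobalScope (illustrative sketch only) */

const deviceCallback = (input, output, contextCallback) => {
  // Render the AudioContext graph into a scratch buffer with the same
  // shape as the device output.
  const contextOut = output.map((channel) => new Float32Array(channel.length));
  contextCallback(input, contextOut);

  // Mix the context output with audio handled directly in the device
  // callback, e.g. 70% context output + 30% dry input monitoring.
  // Assumes input and output have matching channel counts and lengths.
  for (let c = 0; c < output.length; ++c) {
    for (let i = 0; i < output[c].length; ++i) {
      output[c][i] = 0.7 * contextOut[c][i] + 0.3 * input[c][i];
    }
  }
};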
