
Higher-level MIDI message access #179

Open · svgeesus opened this issue Jun 22, 2017 · 15 comments
@svgeesus
Contributor

Currently, MIDI messages are delivered as events containing short arrays of bytes. This means the script author needs to write a function to parse and handle them. For example, the simple monophonic synth example in the spec has a function MIDIMessageEventHandler(event) that masks off the channel info and then handles note-on and note-off.
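For illustration, here is a minimal sketch of the kind of handler every script currently has to write, mirroring the spec's example (noteOn/noteOff are hypothetical application callbacks, not part of the API):

```js
// Minimal sketch of the parsing burden described above.
function MIDIMessageEventHandler(event) {
  const [status, data1, data2] = event.data;
  switch (status & 0xf0) {      // mask off the channel (low nibble)
    case 0x90:                  // note-on...
      if (data2 !== 0) {        // ...unless velocity is 0, which means note-off
        noteOn(data1, data2);   // hypothetical callback
        break;
      }
      // velocity 0: fall through to note-off
    case 0x80:
      noteOff(data1);           // hypothetical callback
      break;
  }
}
```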

Parsing is not as simple as it might appear. For example, the LSB MIDI CCs need to have their values stored; those stored values must be read and combined with the MSB, and reset to zero when the corresponding MSB CCs are encountered. System Realtime messages can arrive interleaved with other messages.
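To make the MSB/LSB bookkeeping concrete, a sketch (a single channel is shown for brevity, and emit14bit is a hypothetical callback; the reset-on-MSB convention is the one described above):

```js
// 14-bit CC pairing: CCs 0-31 carry the MSB, CCs 32-63 the LSB of the same
// controller. A real parser needs this state per MIDI channel.
const msb = new Uint8Array(32);
const lsb = new Uint8Array(32);

function controlChange(cc, value) {
  if (cc < 32) {                // MSB arrived: store it, reset the paired LSB
    msb[cc] = value;
    lsb[cc] = 0;
    emit14bit(cc, value << 7);  // hypothetical callback
  } else if (cc < 64) {         // LSB arrived: combine with the stored MSB
    lsb[cc - 32] = value;
    emit14bit(cc - 32, (msb[cc - 32] << 7) | value);
  }
}
```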

Thus, it seems likely that a succession of incomplete, poorly-conformant parsers will proliferate by copy-paste in each WebMIDI-using script.

This situation could of course be alleviated by script libraries which provide a better parser. However, it could also be useful to save developers the trouble (and reduce fragmentation and the need to load third-party libraries) by offering higher-level MIDI message access. This would be in addition to the existing low-level access. For example, a developer could register a callback for NoteOn without having to remember the message byte value, and without having to handle zero velocity or the HRVelocity prefix.
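Purely as a sketch of the idea (none of these names exist in the spec; input and synth are assumed objects), such an API might look like:

```js
// Hypothetical higher-level registration: the platform would mask channels,
// fold zero-velocity note-ons into note-offs, and apply the HRVelocity prefix.
input.addEventListener("noteon", (e) => {
  synth.play(e.note, e.velocity);  // e.velocity already normalized, never 0
});
```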

@toyoshim
Contributor

IMHO, there is no reason to have such higher-level functionality as part of the web platform API set. Web MIDI is actually already a somewhat high-level API compared with other device APIs such as Bluetooth and USB, but that is only because it runs over multiple physical transports, including legacy DIN, USB, BLE, and so on.

The downsides of having it in the platform layer are:

  • A higher-level API is more likely to introduce incompatibilities among browser implementations (focusing on as low-level an API set as possible is, IIUC, a consensus the web platform community has reached in recent years).
  • A higher-level API would need to be aligned with the style of the other higher-level JS frameworks it is used alongside (and would soon become stale, since API design fashions change very quickly in the JS world).

@cwilso
Contributor

cwilso commented Jun 23, 2017

@toyoshim I took on the action of sketching out such an abstraction to see how it could be layered on top, and whether it makes sense. Step one of evaluating it would be creating a polyfill to see how well it works, so I think we can go down this path a bit. I'm not convinced either way at this point.

@svgeesus The only inaccuracy in your comments is that System Realtime messages are already handled: they are not delivered as part of the surrounding message, but before it (see the last paragraph of http://webaudio.github.io/web-midi-api/#MIDIInput).
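In other words, a handler can rely on realtime bytes arriving as their own events, along these lines (handleRealtime and parseChannelMessage are hypothetical helpers):

```js
input.onmidimessage = (event) => {
  const status = event.data[0];
  if (status >= 0xf8) {             // System Realtime (Clock, Start, Stop...)
    handleRealtime(status);         // delivered as its own event, per spec
    return;
  }
  parseChannelMessage(event.data);  // never sees interleaved realtime bytes
};
```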

The semantics of decoding controllers (MSB & LSB), message types, etc. is a fair point though, as I said at the FTF.

@djipco

djipco commented Jun 25, 2017

For newcomers, I believe the hardest parts are decoding the messages and easily attaching listeners to a variety of events. It would be great if the Web MIDI API took care of all that, but I'm afraid it would slow down development and adoption because of the depth of the work involved.

As a demonstration of that, you can take a look at the API of my WebMidi.js library; it will give you hints of the kind of work involved. The library is far from perfect, but it does showcase the events that would need to be triggered and the methods users would expect.
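For a flavor of that higher-level surface, roughly following the v2-era WebMidi.js documentation (exact event fields may differ between versions):

```js
WebMidi.enable(function (err) {
  if (err) return console.error("WebMidi could not be enabled.", err);
  // Listen for note-on on all channels of the first input; no byte math needed.
  WebMidi.inputs[0].addListener("noteon", "all", function (e) {
    console.log(e.note.name + e.note.octave, e.velocity);
  });
});
```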

Don't get me wrong, I would love for the Web MIDI API to do all that, but what I would love even more is for Firefox, Safari, and IE to natively support the API.

@notator

notator commented Jun 25, 2017

@cotejp
+1 to all your points!

I also agree with @toyoshim that the WebMIDIAPI should not be extended with such higher-level functionality. This issue is related to #124, and hence to my WebMIDISynthHost (an initial proposal for an API for software MIDI output devices).

As far as I know, many MIDI synthesizers simply ignore the FINE setting for pitch wheel deviation (REGISTERED_PARAMETER_FINE combined with DATA_ENTRY_FINE), so I decided to do the same in my Resident Sf2 Synth (one of the synths hosted at the above repository).
I think it would be a very bad idea to try to impose a single solution for all such cases inside the WebMIDIAPI. It would be much better to have custom software synths available that are of minimal size, tailored to a web site's particular requirements. Such software synths just need documentation saying how they react, if at all, to particular MIDI messages.
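For context, the RPN sequence in question looks like this on the wire (channel 0 shown; output is an assumed MIDIOutput; many synths, as noted, act on the coarse CC 6 and ignore the fine CC 38):

```js
// Set pitch bend sensitivity to 2 semitones, 50 cents via RPN 0,0.
output.send([0xb0, 101, 0]);  // CC 101: RPN MSB = 0 \ select pitch bend
output.send([0xb0, 100, 0]);  // CC 100: RPN LSB = 0 / sensitivity
output.send([0xb0, 6, 2]);    // CC 6:  DATA_ENTRY coarse, 2 semitones
output.send([0xb0, 38, 50]);  // CC 38: DATA_ENTRY fine, 50 cents (often ignored)
```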

@svgeesus

> Thus, it seems likely that a succession of incomplete, poorly-conformant parsers will proliferate by copy-paste in each WebMIDI-using script.

A software MIDI output device would be a "parser" in the sense you are using, but it would be a closed piece of reusable code, not something added through cut-and-paste. I think natural selection will ensure that such code is unlikely to stay poorly-conformant for long! :-)

@ryoyakawai
Contributor

When you say "MIDI", there are several meanings: the physical interface, the message format, the electrical circuit, and so on. So it might be good to decide which of these are in scope for the Web MIDI API. I think the spec currently focuses on the interface, and a MIDI parser would fall under the message format.
I guess other platforms (Android, Core MIDI on Apple's platforms, and so on) also focus only on the interface; I think this is why they do not include MIDI parsing inside their APIs.

@toyoshim
Contributor

To be fair, I should mention that there are some cases where an OS does provide MIDI message parsing.

But my preferred policy when designing a system is to implement things at the right API layer. The only reasons to implement something at a lower layer are 1) it cannot be implemented at a higher layer, or 2) doing so improves performance drastically. In terms of this policy, what we should focus on is a back-pressure mechanism for sysex message handling; this is something we cannot solve correctly in the JavaScript layer today.
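To illustrate the gap (the buffer here is a hypothetical bulk dump, and output an assumed MIDIOutput):

```js
// send() copies the data, queues it, and returns immediately; nothing tells
// the script when a slow transport (e.g. 31.25 kbaud DIN) has actually
// drained the queue. That missing signal is the back pressure problem.
const bulkDump = new Uint8Array(64 * 1024);  // hypothetical large sysex
bulkDump[0] = 0xf0;
bulkDump[bulkDump.length - 1] = 0xf7;
output.send(bulkDump);  // returns at once, regardless of transport speed
```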

@svgeesus
Contributor Author

svgeesus commented Jul 4, 2017

> IMHO, there is no reason to have such higher-level functionality as part of the web platform API set.

The same argument could be made for many things:

  • no need for WebAudio API, just let people work on sound buffers directly with whatever algorithm they want
  • no need for the DOM, if people want an object tree they can make their own
  • no need for HTML, people can provide their own parsers and implement their own semantics

I don't find such arguments convincing. There is benefit to having a well-implemented, consistent thing upon which to build.

@svgeesus
Contributor Author

svgeesus commented Jul 4, 2017

> @svgeesus The only inaccuracy in your comments is that System Realtime messages are already handled: they are not delivered as part of the surrounding message, but before it (see the last paragraph of http://webaudio.github.io/web-midi-api/#MIDIInput).

Thanks, noted, and good to hear.

> The semantics of decoding controllers (MSB & LSB), message types, etc. is a fair point though, as I said at the FTF.

Right. I see a lot of poorly-compliant MIDI code (and the compliance requirements in MIDI are very lax), which in practice means people can't risk depending on anything more than lowest-common-denominator features. I would like Web MIDI not to go down that path.

@svgeesus
Contributor Author

svgeesus commented Jul 4, 2017

@toyoshim

> In terms of this policy, what we should focus on is a back-pressure mechanism for sysex message handling; this is something we cannot solve correctly in the JavaScript layer today.

Agreed that back pressure needs to be addressed.

@svgeesus
Contributor Author

svgeesus commented Jul 4, 2017

@cotejp Agree with your points. I agree even more with the intro to your library:

> While great, most developers will find the Web MIDI API to be a bit too low-level for their needs. For example, sending and receiving MIDI messages involves performing binary arithmetic to encode or decode MIDI byte streams. Having to read the MIDI spec in order to properly do that is not fun. Also the native Web MIDI API makes it hard to react upon receiving MIDI messages from external devices.

But yes, good libraries mitigate that. I would just have preferred that Web MIDI be usable directly by developers, rather than providing the minimum functionality and effectively requiring a library on top. You know, like "the DOM is fine because people can always use jQuery on top. In fact they pretty much have to".
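The "binary arithmetic" the quoted intro refers to is this kind of byte-packing (output is an assumed MIDIOutput):

```js
// Note-on for middle C, velocity 100, on channel 3 (channels are 0-based on
// the wire, so "channel 3" in most UIs is 2 here).
const channel = 2;
output.send([0x90 | channel, 60, 100]);  // status|channel, note, velocity
```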

@toyoshim
Contributor

toyoshim commented Jul 5, 2017

There is enough reason for the Web Audio API, DOM, HTML, and so on: if you implemented the same functionality in JavaScript, it would be several times slower than a native implementation. That is case 2) of the design principles. (Though actually, I sometimes see opinions saying Web Audio is too high-level a thing.)

But I may change my mind on this proposal. I'm actually negative on having a high-level API set, but adding more readable annotations to the MIDIMessageEvent might be simple enough to be a reasonable extension.

But the parameters for each MIDI message type vary, and that may make the spec unnecessarily fat. If there is a good common ground that is simple (= concrete) enough, that would be fine.
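One hypothetical shape for such annotations (nothing here is specced; the raw bytes would stay authoritative):

```js
// Sketch of an annotated event: decoded, read-only views next to the bytes.
const annotatedEvent = {
  data: Uint8Array.of(0x91, 60, 100),  // raw bytes, exactly as today
  messageType: "noteon",               // human-readable message type
  channel: 1,                          // 0-15; null for system messages
  note: 60,
  velocity: 100,
};
```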

@jussi-kalliokoski
Member

While I'm pretty sure most people participating have read it, I encourage everyone to read The Extensible Web Manifesto. Designing APIs is hard. The Web MIDI API as it currently stands tries to hit the sweet spot of abstraction: as low-level as possible, while still abstracting over painful and unnecessary details (such as how transports like USB work) without being leaky. In fact, if we at some point add streams support, we have the possibility of treating all MIDI streams the same way, regardless of whether they come from a serial port, USB, or even WebSockets. Personally I believe that goal has been achieved quite well (except for not having streams integration yet); aside from back pressure, there's very little you can't do with it compared to native APIs, and there are no severe performance limitations either.

All that said, I'd personally never use it directly in a production application: it's quite clear (at least to me, though I admit I'm biased) that it's not even pretending to be the right abstraction level. It's more an invitation for library authors to explore and implement different designs. This is intentional, because while the basics of MIDI are simple, the range of applications is vast. I personally find it hard to believe that we could sit down in a committee, nail down all those use cases, and find a silver-bullet design that fits them all ergonomically. We don't even know yet (knowledge requires data) which use cases are most direly needed. With that in mind, I think that if we had initially gone with a higher-level design, the scope (and therefore the potential for bugs in both spec and implementations) would have been huge, and it would never have shipped. (I'm actually astonished that Web Audio ever shipped outside Chrome; kudos to all the people involved for the hard work!) Worse yet, users would probably have had to resort to abstraction inversion in many cases.

But like I said, there is definitely a need for a higher-level API, and I'm not opposed to such a thing being specced as a standard once a design has been proven in user space to be A) good, B) stable, C) broad, and D) popular enough to warrant implementing in browsers. The argument about the possibility of bad parsers out there is not very compelling; the same argument could be used for providing a built-in DOM element for chat applications (I exaggerate; replace with a more apples-to-apples example of your preference), because people get those wrong all the time, to much worse effect. The return on investment in both cases is not promising enough at this stage, but the time may yet come.

@notator

notator commented Jul 5, 2017

Remember that the Web MIDI API is not yet complete. #124 has a "future" label.

I think this issue will be solved when we have an API that is a standard extension to the MIDIOutput interface.
JavaScript programmers will then be able to write libraries having any API they like on top of the MIDIOutput device.

Edit: More precisely, JavaScript programmers can already write libraries having any API they like on top of the MIDIOutput device. But it would help a lot if the higher-level MIDIOutput interface were standardized. That would promote the programming of libraries representing specialized output devices (e.g. virtual synths) in JavaScript, as sketched below.
It might also be a good idea to extend the MIDIInput interface as well (see below).
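A minimal sketch of that virtual-synth idea, assuming nothing beyond today's spec: a JavaScript object exposing the same send() surface as a MIDIOutput, so calling code cannot tell it from a hardware port.

```js
// Hypothetical virtual-synth "output device"; startVoice/stopVoice would
// drive Web Audio nodes in a real implementation.
class VirtualSynthOutput {
  send(data /*, timestamp */) {
    const [status, d1, d2] = data;
    const type = status & 0xf0;
    if (type === 0x90 && d2 > 0) this.startVoice(d1, d2);       // note-on
    else if (type === 0x80 || type === 0x90) this.stopVoice(d1); // note-off
  }
  startVoice(note, velocity) { /* attack a voice */ }
  stopVoice(note) { /* release the voice */ }
}
```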

@toyoshim said

> But my preferred policy when designing a system is to implement things at the right API layer. The only reasons to implement something at a lower layer are 1) it cannot be implemented at a higher layer, or 2) doing so improves performance drastically. In terms of this policy, what we should focus on is a back-pressure mechanism for sysex message handling; this is something we cannot solve correctly in the JavaScript layer today.

  1. I've already implemented a version of the higher-level API, so it's certainly possible.
  2. In my experience, performance is not a problem.

I haven't tried receiving sysex messages. If these are currently being sent too fast for JavaScript, they will have to be throttled somehow. Maybe the MIDIInput (and MIDIOutput?) interfaces could expose a function that allows JavaScript programmers to throttle sysex messages?
As a programmer using such a function, I could imagine throttling long sysex messages that arrive before a performance starts (e.g. loading patches), but receiving/sending them at the same speed as ordinary MIDI messages at performance time.
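On the output side, pacing can be approximated in user space today with send()'s timestamp argument; a sketch only, since the messages are still buffered by the UA, so this is not true back pressure:

```js
// Schedule complete sysex messages with gaps, instead of firing them at once.
function sendPaced(output, sysexMessages, gapMs) {
  let t = performance.now();
  for (const msg of sysexMessages) {
    output.send(msg, t);  // each msg must be a complete 0xF0...0xF7 message
    t += gapMs;           // leave time for the transport to drain
  }
}
```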

@cwilso added this to the V2 milestone Oct 16, 2018
@cwilso added the status: needs WG review label Oct 16, 2018
@rektide

rektide commented Jan 30, 2020

This work might look a little different now that MIDI 2.0 has been released and defines a standard MIDI packet, maybe? I'll try to report back once I've found some better info on 2.0, but I wanted to throw this out there.

@toyoshim
Contributor

Discussed at TPAC; we will keep this thread open for further discussion for V2.

@mjwilson-google added the class: substantive (https://www.w3.org/2023/Process-20230612/#correction-classes) label Sep 13, 2023
@mjwilson-google removed the status: needs WG review label Sep 30, 2023