Starting a new mediasoup SFU adapter #372
That's great! I prefer new adapters to be in a separate repository, similar to naf-janus-adapter. If you have a standalone repository and you want it to be in the networked-aframe organization, we can move it there later. That way you are free to continue developing the adapter further and maintain it.
Glad to hear that! I'll refine some parts of the code and post the repository URL here soon.
Here is the repository of mediasoupAdapter.
Thanks for sharing it. If you are willing to continue this work, here are some notes you should address before moving it to the networked-aframe organization:

- You shouldn't have all the networked-aframe code in your repository; I see that the copy of the code is already outdated. I see the interesting parts are in … and I see you got inspiration from server/socketio-server.js to write …
- If you're starting from scratch, you should try to use the latest socket.io version instead of keeping the 2.5.0 version.
- You should avoid writing comments in languages other than English if you want contributions to this adapter.
- I see you configured the server to take vp8 and opus stereo server side; the adapter is creating audio and video tracks, and signaling and data are on the websocket. What about a heartbeat so the socket connection doesn't get closed? Looking at the code quickly I didn't see anything related to that. See #243 about this issue.
- If you didn't see it, I wrote about the hubs dialog/mediasoup adapter here: …

What I look forward to in a mediasoup adapter, and that we don't currently have in the janus sfu adapter, is video simulcast, so that the producer sends different video qualities (low/medium/high) and consumers pick the quality they want based on their bandwidth, plus adding several video tracks to the RTC connection so you can share both your webcam and a screen share.
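For reference, the simulcast idea described above could look roughly like this on the client with mediasoup-client. This is only a sketch: the encoding values and the `sendTransport` name are illustrative, not taken from the adapter.

```javascript
// Hypothetical simulcast encodings for mediasoup-client's transport.produce().
// Three spatial layers; the exact bitrates and scale factors are illustrative.
const simulcastEncodings = [
  { scaleResolutionDownBy: 4, maxBitrate: 100000 }, // low
  { scaleResolutionDownBy: 2, maxBitrate: 300000 }, // medium
  { scaleResolutionDownBy: 1, maxBitrate: 900000 }, // high
];

// Usage sketch (requires a connected mediasoup-client send transport):
// await sendTransport.produce({ track: webcamTrack, encodings: simulcastEncodings });
```

The server-side consumer can then switch between layers per client based on bandwidth, which is exactly what the janus sfu adapter cannot do today.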
Thanks for telling me that! I hadn't noticed these problems before. I'll fix them later.
Hi, I have fixed some problems for now. Here are some updates: …
That's great. I don't have much time to look at it right now, but I added a message to the networked-aframe channel on slack to see if there are people interested in trying it and giving you feedback.

For the heartbeat, you should probably send a simple websocket message yourself, like every 30s (configurable), instead of using the periodic-full-sync component though. The janus adapter does this via the minijanus package, sending a keepalive message every 30s but only if no other message was sent; don't send the message if it's not needed. Relevant code here: https://github.com/mozilla/minijanus.js/blob/cf259aee5d87b81b9ffd21dcc4b87efb4c2d6647/minijanus.js#L233-L248

Some things to test: …
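The keepalive behavior described above (ping only when the connection has been idle) can be sketched like this. The class and method names are illustrative, not from the adapter or from minijanus:

```javascript
// Hypothetical keepalive helper: sends a ping only when no other message
// went out during the interval, mirroring what minijanus does for janus.
class Keepalive {
  constructor(send, intervalMs = 30000) {
    this.send = send;             // function that writes to the websocket
    this.intervalMs = intervalMs; // configurable, 30s by default
    this.lastSent = Date.now();
    this.timer = null;
  }

  // Ping only if the connection has been idle for a full interval.
  tick() {
    if (Date.now() - this.lastSent >= this.intervalMs) {
      this.send(JSON.stringify({ type: 'keepalive' }));
      this.lastSent = Date.now();
    }
  }

  start() { this.timer = setInterval(() => this.tick(), this.intervalMs); }
  stop() { clearInterval(this.timer); }

  // Call this from the adapter's normal send path so real traffic
  // suppresses the next keepalive.
  noteSent() { this.lastSent = Date.now(); }
}
```

Wiring `noteSent()` into the regular message path is the important part; without it the keepalive fires unconditionally, which is the "don't send the message if it's not needed" point above.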
Excellent work! Very exciting to have this.
If I get some time tomorrow I'll try to take a stab at seeing how it runs on glitch.
Help wanted!
What you described seems to be the same issue as #320. We don't have this issue with the janus adapter, where you can see your own video with …
@vincentfretin is there any barrier or reason not to update the easyrtc API to exactly reflect the janus API? It would be a breaking change, but it would be nice to unify the API and make the adapters seamless, and I also like the idea of making the stream acquisition explicit on the app side in principle. It sounds like it wouldn't be a very difficult update in theory, no? I have mentioned I'm considering working with these features and was expecting to perhaps need to update the easyrtc adapter for the streams stuff anyway, so it might make sense to do that work at the same time.
There is no barrier, it's just developer time. We definitely should use the …
This repo just doesn't want to run on glitch; glitch is choking on setup. It keeps giving me an empty node_modules folder. I'm still troubleshooting, but I'm thinking it's a glitch.com glitch. I'll probably give up and ping support here soon.
Ok, I seem to have it running, though I have not tested it at all: https://mediasoup-adapter-test.glitch.me/examples/

I had to make several small changes to the server file; it's configured to run locally, not configured to run deployed. Also, the install process seems to take more memory than a free glitch instance makes available (512 MB), so npm install hangs awkwardly. I had to boost the app to get 2 GB of memory, then open the terminal and manually …

After doing that, and tweaking index.js a bit to let glitch handle https and to use the port that glitch provides instead of the hardcoded config port, it finally started to run.
The example doesn't even seem to try to display/play received audio and video; not sure what I'm missing. The adapter doesn't seem to implement enableCamera or enableMicrophone. The logs seem to imply that I am indeed getting streams successfully, though.
Tried adding and adapting basic-audio.html for mediasoup, … but then nothing happens. Not sure if there's an implied step I'm missing.
So, in conclusion for now:

- The provided example has "video: true; audio: false" set, so it does not show how to get audio working.
- Adding networked-audio-source alone does not simply work as one would expect.
- The a-plane that is a networked-video-source is commented out in the example, and uncommenting it does not work.

An improved example is needed to show how to implement these features with this adapter, I guess. Either something else is needed to get those streams, or theoretically something is wrong with the glitch install, though I'm not seeing any errors.
(Also, the keepalive functionality doesn't seem to work properly.) |
Hi,
I haven't fixed it yet; I will figure it out when I get some time. I'd appreciate any advice you can offer. And as Vincent said, it is a good idea to add the stream in the app instead of the adapter; I will give it a try. Thanks!
I see, I was hoping that was the case. Cool, I'm attempting to debug that now. I had forgotten that context between when I first read your post and now! You mentioned having this issue with networked-video-source; did you have the same issue with networked-audio-source? It's weird, it looks like everything should work fine when debugging, just not hearing things.
Yes, it is the same issue. I added some log info in networked-audio-source:

```js
createSound() {
  NAF.utils.getNetworkedEntity(this.el).then((networkedEl) => {
    const ownerId = networkedEl.components.networked.data.owner;
    // HERE: ownerId is an empty string
    console.log('audio-src', {ownerId});
    if (ownerId) {
      NAF.connection.adapter.getMediaStream(ownerId, this.data.streamName, 'audio')
        .then(this._setupSound)
        .catch((e) => naf.log.error(`Error getting media stream for ${ownerId}`, e));
    } else {
      // Correctly configured local entity, perhaps do something here for enabling debug audio loopback
    }
  });
},
```
Hi Vincent,
For …
About the adapter API, …
Thanks Vincent, I had been trying to make users see themselves, treating this as a bug while it is not. And I have fixed the bug that was mentioned by Kyle; by now, you can see each other successfully. @kylebakerio
Thank you, @vincentfretin and @kylebakerio!
P.S. These changes have been updated in naf-mediasoup-adapter
I looked at the repo again; the repo structure is good now, and you made the changes as suggested, that's great. FYI I had to install …
You may want to add instructions in the README. It seems you changed the port to 8185, but the README still points to https://127.0.0.1:8181/examples. Replace the room name with something other than shooter, and modify the title of the page as well.
Are the changes required to run it on glitch addressed in the repo? Is it possible to document them in the README?
Hi Vincent,
I haven't run it on glitch, and I got video/audio streams successfully at localhost.
And for the copyright in LICENSE, is there anything I can refer to?
@vincent I have a hunch this is related to the bug fixed in aframe master, and that having assets causes a script to run which changes the order of execution in aframe... I would need to test, but I've run into obscure behavior before around having assets and how it interacts with NAF's initial load.

Also, yes, python 3 is required to install mediasoup, but aside from needing a boosted app on glitch for memory reasons, python 3 is available and it isn't a problem there, fwiw.

@Vesper0704 this is good to know. I had removed that as it just looked like a mistake along with some other html mistakes, but I did start to run into that color bug and hadn't tried to fix it. Also, you say you got video/audio streams, but your current index.html doesn't look like it's even trying to show video and audio streams; do you have a different client? I have still not gotten the audio/video to stream on glitch, but I had tried manually merging the changes I saw in your recent commits with my changes to make it run in a server environment that wasn't localhost, so it's possible I did something wrong and just need to start fresh again. Basically, my question is just to confirm: did you upload the correct client html to get the streams received by users, or are those local changes you haven't shared?

Edit: just noticed you updated it in the last 9 hours; I last checked a few days ago. Will try again.
Ok. Maybe the latest version will work in your test.
Hi
For the copyright, I mean the year and your name here: https://github.com/Vesper0704/naf-mediasoup-adapter/blob/master/LICENSE#L3
Thanks @vincentfretin, I have modified the LICENSE.
Not sure what the deal is with this repo, but running npm install for the first time took almost an hour to complete. I've never seen anything like that. (This was of course on an anemic glitch.com server, though it was boosted, and a boosted glitch server gets more resources than the basic paid servers at, say, render.com.)

Looking at the processes in the beginning, it looked related to the SSL/HTTPS stuff, which I disabled anyway since glitch handles that built-in and it can be expected to be handled in the deployed server environment. Looking at package.json, though, it's not immediately clear which dependency leads to which sub-dependency that's causing this issue; I didn't dig too deep there yet. Also, are you removing console.warn?

Anyway: while everything seems to be running and the socket part of the server is clearly up, it's frustrating; in the console, I seem to get the stream, but nothing actually plays. Must be close, but I'm just not sure what the issue is yet. I'm thinking it's likely client side and autoplay related, something small, but that's a guess for now while I poke around; maybe the streams are empty? https://mediasoup-test-3.glitch.me/

Looking around in the console, the video tracks are … This results in the browser error message, …
The audio tracks are not muted, though, and I don't hear them either. @Vesper0704 do you see any reason why this wouldn't be working here? Do you have any working examples up?
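A quick way to check whether received tracks are actually live rather than muted or ended is to dump their state in the console. This is a generic WebRTC diagnostic, not adapter code; the helper name is illustrative:

```javascript
// Small diagnostic helper: summarize the state of every track in a stream
// so muted or ended tracks show up immediately.
function describeStream(stream) {
  return stream.getTracks().map((t) => ({
    kind: t.kind,
    muted: t.muted,           // true => the track is producing no media right now
    enabled: t.enabled,       // false => locally disabled
    readyState: t.readyState, // 'ended' => the track is dead
  }));
}

// In the browser console, against a received stream:
// console.table(describeStream(someMediaStream));
```

A track that is `muted: true` but `enabled: true` and `live` usually means the sender side is not actually feeding media into it, which matches the symptom above.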
To be clear, you can use this demo to compare against a working easyrtc example, and you can find the tool shown here at chrome://webrtc-internals/
Digging around in the mediasoup documentation and the config file in the repo, I made some changes around the ssl stuff; it looks like the connection doesn't immediately error out, and I do see what I think is my local media shown live. Still haven't heard anything or seen anything in the app itself, and it looks like there are no peer connections... I am seeing …
Maybe there's a problem with the stun server setting? It's not immediately clear what the issue is. Let me know if either of you knows where to go from here, otherwise I'll try to pick this up again later.
(If you want to remix my glitch to try: remember, the install takes an hour, and I think the app has to be boosted or it won't work.)
Hi @kylebakerio,
I have not tested locally; my focus is on getting this to run in a deployable server environment. Your image link is broken, but I assume it's an image of it working. :) You have only gotten this running locally, I take it?
I'm suspecting that the problem may be related to ports being open, which could be the config and/or the glitch server environment; still researching though.
Looks like glitch does not currently allow using udp: https://support.glitch.com/t/listen-for-udp-messages/1410/3. That is unfortunate, but we should be able to run mediasoup over tcp, right? In the meantime, I may also try to get this running on a render.com server instead, which I suspect will be a bit more full-featured on the server side; both options will be ideal to have anyway.
I think we may need to use …
Yes, I have only gotten this running locally.
It's probably like any other SFU: you indeed need an outgoing port range that can be opened for rtp media.
And WebRTC uses DTLS, so it's udp, not tcp.
mediasoup supports both udp and tcp, though. Obviously udp is ideal, but it would need to manage over tcp for a glitch deployment (docs). I think we'd need to set up a local reverse proxy within the glitch deploy to get it running in that context, which is starting to become pretty sub-optimal as a useful demo.

I've started playing with render.com, but I need to migrate some old heroku projects before they shut down their free tier anyway, so I began with that before trying this... Their free tier might be too anemic for the install process, though, and their lowest paid tier is more expensive than glitch. Mediasoup might just not be able to run on any free servers (to be fair, even glitch needs to be boosted for the install), but I'm open to other ideas. Maybe just accepting it would have to be paid, and putting together a guide for digital ocean or google cloud or aws, would make more sense?
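If the udp restriction is the blocker, forcing tcp on the server side might look something like this. This is a sketch using option names from mediasoup's `createWebRtcTransport` API; whether glitch's proxying actually passes the traffic through is a separate question:

```javascript
// Hypothetical server-side transport options for a UDP-blocked host
// such as glitch.com; option names follow mediasoup's WebRtcTransportOptions.
const transportOptions = {
  listenIps: [{ ip: '0.0.0.0' }], // add announcedIp: <public IP> when behind NAT
  enableUdp: false,               // glitch blocks UDP entirely
  enableTcp: true,
  preferTcp: true,                // make ICE candidates favor TCP
};

// Usage sketch (requires a mediasoup Router instance):
// const transport = await router.createWebRtcTransport(transportOptions);
```

Even with tcp enabled, the transport still needs the allocated port range reachable from outside, which is likely the harder constraint on a heroku-like platform.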
Hi @kylebakerio.

```js
// in server/config.js
webRtcTransport: {
  listenIps: [
    {
      ip: '0.0.0.0',
      announcedIp: '162.14.105.224', // server ip
    }
  ],
  maxIncomingBitrate: 1500000,
  initialAvailableOutgoingBitrate: 1000000,
  // reasonable because the outgoing bandwidth is usually less than incoming bandwidth
}
```

Did you reconfigure the …
Hi @vincentfretin,
I asked some people on the mediasoup forum, and they mentioned that running it in a heroku-like environment (which glitch is) is full of headaches and very difficult, possibly not doable. I got started trying on google cloud, but other stuff came up and I wasn't able to get around to it. Very good to hear you were able to get it working. Is your hosted version publicly available to test? I still plan to test it soon.
I tested it and then shut it down because the resources on my server are limited. I'll reopen it if you want to have a look.
Hi, I see you implemented a generic API …
It seems you implemented only enabling/disabling a track; you should also implement stopping, really removing the track from the RTCPeerConnection (the equivalent through the mediasoup api), with …
About your implementation: … Did you take the changes from Vesper0704/naf-mediasoup-adapter#2 into consideration?
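The enable/disable versus stop distinction described above maps onto the mediasoup-client Producer API roughly as follows. `pause`, `resume`, and `close` are Producer methods; the wrapper function names here are illustrative, not from the adapter:

```javascript
// Toggling keeps the track in the connection; cheap to flip back and forth.
function setTrackEnabled(producer, enabled) {
  if (enabled) {
    producer.resume(); // resumes sending RTP for this track
  } else {
    producer.pause();  // stops sending RTP but keeps the Producer alive
  }
}

// Stopping actually removes the track: the local Producer is closed, and the
// server side must also be signaled to close its corresponding Producer.
function stopTrack(producer) {
  producer.close();
}
```

The practical difference is that a paused producer can resume instantly, while a closed one requires a full re-produce (and re-consume on every peer) to bring the track back.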
Hi @vincentfretin,
And as for the changes from Vesper0704/naf-mediasoup-adapter#2, Kyle said it was a wrong click. I took a look at it and I think it is better to keep …
I have implemented the mediasoup adapter and achieved audio/video transmission just as the webrtc/easyrtc adapters have done. It is based on an SFU architecture and can greatly improve the efficiency and capacity of audio/video chat. Would it be possible for me to submit a new PR?