Describe the bug
This is a very minor thing I guess, but to my understanding the effect of buffering is not shown correctly in the Jamulus main window / not calculated correctly in
CClient::EstimatedOverallDelay. It uses a factor of 0.7 for the local and remote buffers, with the comment that the buffers are usually a bit larger than required. That may be true, and the achievable delay would be lower if the buffers were set correctly.
But I assume that the display of delay should show the delay currently experienced, not the delay potentially achievable with a better buffer setting.
Or am I missing something?
To Reproduce
No other measurement is available for the real experienced delay, so the value shown may just be off / too low compared to the real experienced delay.
Expected behavior
My understanding of the buffer implementation is that it evens out network delay and jitter and adds a fixed delay, so the delay of e.g. a buffer size of 4 is 4 times the block duration.
So the code should read
const float fTotalJitterBufferDelayMs = fSystemBlockDurationMs * ( GetSockBufNumFrames() + GetServerSockBufNumFrames() );
instead of
const float fTotalJitterBufferDelayMs = fSystemBlockDurationMs * ( GetSockBufNumFrames() + GetServerSockBufNumFrames() ) * 0.7f;
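To make the difference concrete, here is a minimal standalone sketch, not Jamulus code: it assumes the 64-sample system frame at 48 kHz (about 1.33 ms per block) and example jitter buffers of 4 frames on each side.

// delay_sketch.cpp - standalone illustration, not Jamulus code.
// Assumes a 64-sample system frame at 48 kHz and 4-frame jitter
// buffers on both the local and the server side (example values).
#include <cstdio>

int main()
{
    const float fSystemBlockDurationMs = 64.0f / 48000.0f * 1000.0f; // ~1.33 ms

    const int iSockBufNumFrames       = 4; // local jitter buffer (example value)
    const int iServerSockBufNumFrames = 4; // remote jitter buffer (example value)

    // delay actually introduced by the buffers (proposed formula)
    const float fFullMs = fSystemBlockDurationMs * ( iSockBufNumFrames + iServerSockBufNumFrames );

    // delay as currently estimated with the 0.7 factor
    const float fScaledMs = fFullMs * 0.7f;

    std::printf ( "full jitter buffer delay: %5.2f ms\n", fFullMs );             // 10.67 ms
    std::printf ( "with 0.7 factor:          %5.2f ms\n", fScaledMs );           // 7.47 ms
    std::printf ( "difference:               %5.2f ms\n", fFullMs - fScaledMs ); // 3.20 ms
    return 0;
}

With these example values the current formula reports about 7.5 ms for the buffers although they actually add about 10.7 ms, i.e. the display would be roughly 3.2 ms too low.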
Operating system
any
Version of Jamulus
3.9.1
Additional context
I am prototyping a statistics console on connection quality that should help to monitor the long-term quality of connections to the server. So I have been reading a lot of Jamulus source code, trying to figure out the statistics calculations currently used. This is when I encountered this calculation, which I do not understand.
Thanks for reporting.
Although I can't say much about the calculation of the delay, I also think that the delay is not fully accurate (and it can never be, as the hardware delay needs to be added too). Volker or @softins might know more about this.
It's not an area of code I have ever studied, but the OP's point sounds valid. I guess some experimentation might be worthwhile, while running Wireshark on the client machine to capture traffic in both directions. Connected to a remote server with no other traffic, one could try sending, say, a sound with a hard attack, e.g. a piano note, while observing the displayed overall delay. Then analyse the Wireshark capture to determine the actual delay between the outgoing attack edge and the one in the returned audio, perhaps repeating the exercise at various manual jitter buffer settings. That should indicate whether the 0.7f factor is required or not, or maybe needs a more accurate formula.
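If anyone tries that, a rough sketch of the edge-detection step might look like the following. It is only an illustration and assumes the outgoing and returned audio have already been decoded from the capture to mono 16-bit PCM at 48 kHz.

#include <cstdint>
#include <cstdio>
#include <vector>

// Returns the index of the first sample whose magnitude exceeds iThreshold,
// or -1 if no sample does. The difference between the edge index in the
// outgoing stream and in the returned stream, divided by the sample rate,
// gives the real round-trip audio delay.
long FindAttackEdge ( const std::vector<int16_t>& vecSamples, int16_t iThreshold )
{
    for ( size_t i = 0; i < vecSamples.size(); i++ )
    {
        if ( vecSamples[i] > iThreshold || vecSamples[i] < -iThreshold )
        {
            return static_cast<long> ( i );
        }
    }
    return -1;
}

int main()
{
    // toy data standing in for the decoded outgoing / returned streams
    std::vector<int16_t> vecSent ( 48000, 0 ), vecReturned ( 48000, 0 );
    vecSent[4800]     = 20000; // attack edge at 100 ms
    vecReturned[7200] = 20000; // returns 50 ms later

    const long iSendEdge   = FindAttackEdge ( vecSent, 10000 );
    const long iReturnEdge = FindAttackEdge ( vecReturned, 10000 );

    if ( iSendEdge >= 0 && iReturnEdge >= 0 )
    {
        std::printf ( "round-trip delay: %.1f ms\n", ( iReturnEdge - iSendEdge ) / 48.0 ); // 50.0 ms
    }
    return 0;
}

The toy main() just demonstrates the arithmetic; in practice the threshold would need tuning to sit above the noise floor of the recording.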
I might have the time to try the above later this week, but if anyone else is able to as well, that would be great.