
Implement #214 JUICE_CONCURRENCY_MODE_USER #226

Open · wants to merge 4 commits into master
Conversation

@N7Alpha N7Alpha commented Nov 6, 2023

Thanks for helping me figure this out @paullouisageneau. I think I've addressed most of the things discussed in the original issue, and I added a test as well. There might be some things you want to change, though, so this PR is open to maintainer edits. Some guesswork and copying-and-pasting was involved, so there may be dead code or things that don't make sense. And I think I have my tabs and spaces straight... although my sanity has been permanently lost.

Some Things You Might Want to Review

  1. I made DNS resolution's defined behavior blocking in JUICE_CONCURRENCY_MODE_USER. A few things about asynchronous DNS handling confused me, namely that conn_impl can apparently be NULL until DNS resolution is finished? I also like that this means no threads are required when using this concurrency mode. I think it's reasonable to make DNS resolution the user's problem, but feel free to rewrite this part if you want non-blocking DNS resolution.

  2. Generally, the smallest way to use juice_user_poll is a little verbose. It is basically like recv(3), even to the point where juice_user_poll returns for all packets, including those only used for ICE. This may have some practical use, but it's more of a side-effect of the implementation. It's also a bit disjoint, since everything should still be handled in on_recv. The minimal usage looks something like this:

    int ret;
    do {
        char buffer[1500];
        ret = juice_user_poll(agent, buffer, sizeof(buffer));
    } while (ret > 0);

    if (ret < 0) {
        // Handle error
    }

    The benefit of pushing the loop out of juice_user_poll here, rather than calling on_recv over and over inside the function, is to facilitate zero-copy receives, which I give an example of in user.c.

  3. Each agent has its own lock, like we discussed in Add single-threaded mode #214, so agent use across threads should be thread-safe for everything except juice_destroy, if I did everything correctly.

  4. The last condition in this expression seems like it shouldn't be needed, but I needed it to make the connection establish in a reasonable amount of time. I'm thinking there is something going wrong with how the timer is set, but it's a bit beyond my understanding to fix this.

@N7Alpha (Author) commented Nov 9, 2023

Now that I think about it, juice_user_poll should probably be named juice_user_recv or juice_user_recvfrom... Maybe even give the option to pass platform flags? I'm not sure how practical that would be, though.

@paullouisageneau (Owner) left a comment


Nice job, thank you for the PR! Sorry for the late review, it got buried in my todo list.

  • I made DNS resolution's defined behavior blocking in JUICE_CONCURRENCY_MODE_USER. A few things about asynchronous DNS handling confused me, namely that conn_impl can apparently be NULL until DNS resolution is finished? I also like that this means no threads are required when using this concurrency mode. I think it's reasonable to make DNS resolution the user's problem, but feel free to rewrite this part if you want non-blocking DNS resolution.

This is fine by me. To answer your question, conn_impl is NULL before the call to agent_gather_candidates(), where it is initialized; however, it is initialized before the DNS resolution.

  • Generally, the smallest way to use juice_user_poll is a little verbose. It is basically like recv(3), even to the point where juice_user_poll returns for all packets, including those only used for ICE. This may have some practical use, but it's more of a side-effect of the implementation. It's also a bit disjoint, since everything should still be handled in on_recv. The minimal usage looks something like this:
int ret;
do {
    char buffer[1500];
    ret = juice_user_poll(agent, buffer, sizeof(buffer));
} while (ret > 0);

if (ret < 0) {
    // Handle error
}

The benefit of pushing the loop out of juice_user_poll here, rather than calling on_recv over and over inside the function, is to facilitate zero-copy receives, which I give an example of in user.c.

I'm not a fan of this design. It's quite awkward, makes the usage convoluted, and impacts performance for no clear benefit. Why not simply loop to receive all available packets in an internal buffer and update once if necessary on each poll?

  • Each agent has its own lock, like we discussed in Add single-threaded mode #214, so agent use across threads should be thread-safe for everything except juice_destroy, if I did everything correctly.

I've seen no obvious issue, it looks good to me.

@@ -41,4 +41,6 @@ int conn_send(juice_agent_t *agent, const addr_record_t *dst, const char *data,
int ds);
int conn_get_addrs(juice_agent_t *agent, addr_record_t *records, size_t size);

#define CONN_MODE_IS_CONCURRENT(mode) mode != JUICE_CONCURRENCY_MODE_USER

Suggested change
#define CONN_MODE_IS_CONCURRENT(mode) mode != JUICE_CONCURRENCY_MODE_USER
#define CONN_MODE_IS_CONCURRENT(mode) (mode != JUICE_CONCURRENCY_MODE_USER)

Comment on lines +33 to +35
if (milliseconds >= 1000)
	sleep(milliseconds / 1000);
usleep((milliseconds % 1000) * 1000);

Suggested change
if (milliseconds >= 1000)
	sleep(milliseconds / 1000);
usleep((milliseconds % 1000) * 1000);
usleep(milliseconds * 1000);

Comment on lines +118 to +127
juice_state_t state = juice_get_state(agent);
if (state == JUICE_STATE_CONNECTED) {
	// Send three messages
	while (agent_data[i].sent < 3) {
		char message[50];
		snprintf(message, sizeof(message), "Message %d from Agent %d", agent_data[i].sent + 1, agent_data[i].id);
		juice_send(agent, message, strlen(message));
		agent_data[i].sent++;
	}
}

You should send the messages in the state callback instead, so you wouldn't have to keep a record of the sent count.

char buffer[BUFFER_SIZE];
if (size > BUFFER_SIZE - 1)
	size = BUFFER_SIZE - 1;
memcpy(buffer, data, size);

I understand that you keep a history of packets as an example to achieve zero-copy, but it seems to me that copying here defeats the purpose. I think that in practice keeping a packet history is not that useful, as there is typically a depacketization step that involves parsing and copying the data one way or another.

if (ret > 0 // We just received a datagram
    || conn_impl->next_timestamp <= current_timestamp()
    || agent->state != JUICE_STATE_COMPLETED) {
	if (agent_conn_update(agent, &conn_impl->next_timestamp) != 0) {

Moving the loop out of juice_user_poll() is actually problematic performance-wise: you have to call the agent_conn_update() process after each received packet, which comes at a performance cost when receiving a lot of application traffic. You should instead receive all available datagrams, then update the agent once.

src/conn_user.c (resolved comment thread)
@paullouisageneau (Owner) left a comment

The last condition in this expression seems like it shouldn't be needed, but I needed it to make the connection establish in a reasonable amount of time. I'm thinking there is something going wrong with how the timer is set, but it's a bit beyond my understanding to fix this.

I think the issue is that an agent calling conn_*_interrupt() expects it to schedule a call to agent_conn_update(), but that's not done by conn_user_interrupt().

}

int conn_user_interrupt(juice_agent_t *agent) {
	// juice_user_poll does not block when polling, so there's nothing to interrupt

You still need to schedule a call to agent_conn_update() on the next poll; it could be achieved by resetting next_timestamp to 0. This is why you currently need the state != JUICE_STATE_COMPLETED condition in juice_user_poll().
