
Add message streaming mode #1012

Closed
wants to merge 26 commits into from
Conversation


@vankoven vankoven commented May 7, 2018

fix #498

Streaming of HTTP messages

The proxy_buffering directive defines the maximum buffered message size.
Messages shorter than the proxy_buffering limit are fully buffered; longer
messages consist of two parts: a buffered one and a streamed one.

Headers are always buffered: the full set of headers is required for
processing inside TempestaFW during the whole message lifetime.
The body may be split into two parts: buffered and streamed. The body is
processed only right after parsing, in the cache or the health monitor, and
is not required in further processing. Thus the body may be safely streamed
to the target connection and freed at any time.
The trailer must also be buffered. It extends the headers, so it may be
required for message processing. Besides, trailer headers can be modified
before forwarding (e.g. by the resp_hdr_set directive), and this can be done
only on a fully buffered trailer.

In the text below I say 'headers', 'body' and 'trailer', but 'headers'
actually stands for the buffered message part: the HTTP headers plus the
buffered part of the body.
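
To make the split rule concrete, here is a minimal userspace sketch (all names are illustrative, not the PR's actual code): headers are buffered unconditionally, and the body stays buffered only up to the proxy_buffering limit.

```c
#include <stddef.h>

/* Illustrative model of one message's sizes; hypothetical names. */
typedef struct {
    size_t hdr_len;     /* headers: always buffered */
    size_t body_len;    /* body bytes received so far */
} msg_sizes;

/*
 * How many body bytes stay buffered under the @proxy_buffering limit;
 * the remaining body bytes are streamed.
 */
static size_t
buffered_body_len(const msg_sizes *m, size_t proxy_buffering)
{
    size_t total = m->hdr_len + m->body_len;

    if (total <= proxy_buffering)
        return m->body_len;     /* the whole message fits: fully buffered */
    if (m->hdr_len >= proxy_buffering)
        return 0;               /* headers alone exceed the limit */
    return proxy_buffering - m->hdr_len;
}
```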

Performance impacts

In the general case the incoming message can't be streamed to the target
connection right away. When a client message is received, it's placed into a
server connection's forwarding queue (fwd_queue). Under load the queue is
never empty, so immediate streaming is not possible and the message is
scheduled for forwarding. When a server message is received, it's forwarded
to the client with respect to the client's seq_queue. If the client
pipelines requests, seq_queue may be non-empty and there can be a number of
unanswered requests, so immediate streaming is not possible. Both situations
get worse if the target connection is occupied by another streamed
transmission.

But at the same time servers are much faster than clients, so the request
may climb to the beginning of the fwd_queue before the client finishes the
transmission. The same holds for the backend-to-client direction: seq_queue
shouldn't be big, and all previous requests from the queue may be responded
to faster than the current response is transmitted.

This means two things:

  • sometimes messages can be fully assembled before streaming.
  • the same message can be processed by two threads, since it's placed into
    seq_queue and fwd_queue in an incomplete state.

There are a couple of solutions to deal with the first problem.

  • Streaming from backend to clients (response streaming): configuration level.
    Use sticky sessions. In this case all requests from one client are always
    forwarded to the same backend connection, so any response can be effectively
    streamed to the client.
  • Application-aware response streaming. When a new request is received,
    user-defined rules are used to check whether the expected response is to be
    streamed. If so, don't forward the request (and the following ones) to the
    backends until all previous requests are responded to.
  • Request streaming. The request is likely to be buffered if fwd_queue is
    not empty. Keep a separate pool of backend connections for streaming only,
    or dynamically allocate a new server connection.

Long-polling requests from multiple clients may fully block backend connections.
The application-aware approach or dynamically allocated backend connections are
required.

This topic requires dedicated discussion.

Processing of streamed messages

The main difference between buffered and streamed messages is the set of
processing stages. Processing of a buffered message is straightforward:

( receive -> parse ) -> process -> forward
repeat until message
 is fully received

Processing of streamed messages is a bit more complicated:

( receive -> parse ) -> [ process -> forward ] ->
 repeat until headers     forward headers to
 are fully received       target connection


 -> ( receive -> parse -> process -> stream ) ->
      repeat until all body fragments are
                    streamed


-> ( receive -> parse ) -> process -> stream
     repeat until trailer
     headers are fully
          received
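
The three loops above can be modeled as a small state machine over parsing stages. This is only a schematic of the description, not the actual code in tfw_http_{req,resp}_process():

```c
/* Parsing stages of a streamed message, matching the pipeline above. */
enum parse_stage { ST_HEADERS, ST_BODY, ST_TRAILER, ST_DONE };

/*
 * Advance the stage once the parser reports the current unit complete.
 * @has_trailer: a chunked message may carry trailer headers after the body.
 */
static enum parse_stage
next_stage(enum parse_stage st, int has_trailer)
{
    switch (st) {
    case ST_HEADERS:
        /* Headers fully buffered: forward them, then stream the body. */
        return ST_BODY;
    case ST_BODY:
        /* Body streamed out: either a trailer follows or we're done. */
        return has_trailer ? ST_TRAILER : ST_DONE;
    default:
        /* The trailer is buffered, processed and streamed last. */
        return ST_DONE;
    }
}
```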

Forwarding may happen in a different thread, so concurrent access to the same
message is possible. The first idea was to use a spinlock to protect the
message from simultaneous parsing and forwarding. But a single spinlock is
not enough, and a lot of additional checks and flags are required. Imagine
the situation: part of a request was received and scheduled to a backend
connection; a new part is being parsed and processed, so the lock is
acquired. At the same time another thread starts forwarding and tries to
acquire the lock. A processing error happens in the first thread and the
message must be destroyed. In this approach reference counting and an error
flag are required. The same situation arises in other cases, so the code
becomes very complicated.

Here is the chosen solution: stream a message part only from the same thread
where the message is received. Forwarding of headers may (and will) happen in
another thread, but that thread must not send the streamed message part, only
the buffered one.

So processing looks like:

Thread 1                                    Thread 2
--------                                    --------
    |                                           |
(receive headers)                               |
    |                                           |
(schedule to forward)                           |
    |                                           |
(receive, process body part)                    |
    * N                                         |
    |                                           |
    |                             (forward headers only to target connection)
    |                                           |
(receive, process body parts)                   |
    |                                           |
(stream all pending)                            |
    |                                           |
(receive, process, stream body part)            |
    * N                                         |
    |                                           |
(destroy the message)                           |

Forwarding in Thread 2 may happen only after the whole streamed message has
been received. In that case Thread 1 will never forward the streamed part, so
Thread 1 is responsible for streaming and releasing the message.
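
The division of responsibilities can be condensed into one predicate (a sketch with made-up names): any thread may transmit buffered data, but only the receiving thread may transmit the streamed part.

```c
#include <stdbool.h>

enum part_kind { PART_BUFFERED, PART_STREAMED };

/*
 * May the calling thread transmit this message part?
 * @is_rx_thread is true only in the thread that receives and parses
 * the message (Thread 1 in the diagram above).
 */
static bool
may_transmit(enum part_kind part, bool is_rx_thread)
{
    return part == PART_BUFFERED || is_rx_thread;
}
```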

When a streamed message is forwarded to some connection, no other messages
are allowed to be forwarded through that connection until the forwarded
message is fully transmitted. Thus the connection's seq_qlock or fwd_qlock is
acquired while streaming a message part. In addition to the thread role
management described above, this gives thread safety.

Note that a response to a streamed request may appear earlier than the
request is fully received, e.g. on a sticky cookie violation or when serving
the message from cache. In this case, the request must still be fully
received before proceeding to the next request.

All functions that process a message body must be capable of processing the
body in parts, since the full body is never available for streamed messages.
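
In practice "processing the body in parts" means folding a callback over fragments as they arrive, in the spirit of the actor passed to ss_skb_process(). A hypothetical userspace sketch:

```c
#include <stddef.h>

typedef int (*frag_actor_t)(void *obj, const char *data, size_t len);

/*
 * Feed each body fragment to @actor as soon as it is parsed; no caller
 * may assume the full body is ever assembled in one buffer.
 * A non-zero return (e.g. a blocking verdict) stops processing.
 */
static int
process_body_parts(const char **frags, const size_t *lens, int n,
                   frag_actor_t actor, void *obj)
{
    int i, r;

    for (i = 0; i < n; i++)
        if ((r = actor(obj, frags[i], lens[i])))
            return r;
    return 0;
}

/*
 * Example actor: a running body-length counter, the way cache or frang
 * would keep incremental state instead of the body itself.
 */
static int
count_len(void *obj, const char *data, size_t len)
{
    (void)data;
    *(size_t *)obj += len;
    return 0;
}
```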

Close connection on streaming

Sometimes it's necessary to close the egress connection after the message is
fully transmitted. In this case the CONN_CLOSE flag must be added to ss_flags
only when the last message part is streamed.
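
Sketched below (SS_F_CONN_CLOSE stands in for the real ss_flags bit; the helper name is made up): the close flag is attached to the very last part only.

```c
#define SS_F_CONN_CLOSE 0x1U /* stand-in for the real ss_flags bit */

/*
 * Compute ss_flags for one streamed part: request the connection close
 * only together with the last part, never earlier.
 */
static unsigned int
part_ss_flags(int close_after_msg, int is_last_part)
{
    return (close_after_msg && is_last_part) ? SS_F_CONN_CLOSE : 0;
}
```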

Error handling

A streamed message is partially received, but not forwarded to the target
connection. -> Close the ingress connection.

A streamed message is partially received and is being streamed to the target
connection. -> Close both connections.

A streamed message is fully received and is being streamed to the target
connection. -> Close the egress connection.

A request was streamed to a backend, but the backend connection was closed
and reopened. -> Evict the request: it's not stored in Tempesta, so it's not
possible to forward it one more time.

An early response is available for the streamed request (from the sticky or
cache modules), but the request failed security limits (frang) or other
processing on receiving a new body fragment. -> If the response is not sent
yet (mostly a false condition), drop the response and forward a new one via
tfw_http_error_resp_and_log() if applicable. Close the client connection.
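
The cases above collapse into one decision function. This is a model of the description only (the eviction and early-response cases need extra inputs, so only the three close variants are shown):

```c
enum err_action {
    ACT_CLOSE_INGRESS,  /* nothing was forwarded yet */
    ACT_CLOSE_BOTH,     /* the message is in flight on both sides */
    ACT_CLOSE_EGRESS,   /* we already received everything there was */
};

/*
 * @fully_received: the streamed message was received in full;
 * @streaming_started: some part was already sent to the target connection.
 */
static enum err_action
stream_error_action(int fully_received, int streaming_started)
{
    if (!streaming_started)
        return ACT_CLOSE_INGRESS;
    return fully_received ? ACT_CLOSE_EGRESS : ACT_CLOSE_BOTH;
}
```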

Caching streamed responses

Message streaming is used to avoid assembling full messages and spending too
much memory on just a couple of messages, e.g. DVD images. The same applies
to the cache. A new directive is required to limit cache entry size: caching
1M small responses may be more effective than caching just a couple of big
ones.

The cache entry size required to store a response may be unknown for
streamed responses. A response in chunked encoding may be huge, and the
Content-Length header is not provided in this case. It's required to
efficiently enlarge a cache entry to save a new response fragment. A new
response fragment may contain not only body, but trailer headers too.
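
Since Content-Length is unknown for chunked responses, the cache entry has to grow as fragments arrive. A common approach is geometric growth, sketched here with plain realloc() rather than TDB's actual allocator:

```c
#include <stdlib.h>
#include <string.h>

typedef struct {
    char   *data;
    size_t  len;        /* bytes stored */
    size_t  cap;        /* bytes allocated */
} cache_entry;

/*
 * Append one response fragment (body or trailer headers), doubling the
 * allocation when needed so N appends cost O(N) amortized copying.
 */
static int
cache_entry_append(cache_entry *ce, const char *frag, size_t len)
{
    if (ce->len + len > ce->cap) {
        size_t ncap = ce->cap ? ce->cap : 64;
        char *p;

        while (ncap < ce->len + len)
            ncap *= 2;
        if (!(p = realloc(ce->data, ncap)))
            return -1;
        ce->data = p;
        ce->cap = ncap;
    }
    memcpy(ce->data + ce->len, frag, len);
    ce->len += len;
    return 0;
}
```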

@vankoven vankoven force-pushed the ik-streaming branch 4 times, most recently from a0c9501 to ecb5972 on May 9, 2018 09:17
@krizhanovsky krizhanovsky left a comment

The main goal of #498 wasn't implemented. @aleksostapenko please review the patch, especially the new HTTP state machine in tfw_http_{req,resp}_process() - I ran out of time and didn't review it carefully enough. Also, comments and thoughts on the comment for tfw_http_parse_skb() are very appreciated - this is a very complex and interesting question.

@@ -383,6 +375,27 @@
# Default:
# listen 80;

# TAG: proxy_buffering
#
Contributor:

Please add the doc to the Wiki

Contributor:

The comment is still not addressed

Contributor:

Not about this particular place, but the PR has some merge conflicts - tls/dummy_headers went away somehow, so it's not buildable now. Please rebase the branch.

if (tfw_cfg_parse_int(ce->vals[1], &limit)) {
TFW_ERR_NL("Unable to parse http limit value: '%s'\n",
ce->vals[1]);
if (tfw_cfg_parse_int(ce->vals[1], &limit))
Contributor:

Unfortunately, the PR is still unfinished and I can't run Tempesta with an erroneous configuration to see what it prints, but it seems that now it just prints that it can't parse some invalid integer, without localizing which exact configuration option causes the problem. If a user specifies a string for several configuration options and only one of them actually is an integer, then the user must check the documentation for all the options, and that's not user friendly. I think the unification of error messages in 71043eb is wrong.

Contributor Author:

I don't really like how Tempesta complains about parsing errors, e.g. for line server 127.0.0.1:8080 conns_n=1a;:

On master:

[tempesta] ERROR: Invalid value: '1a'
[tempesta] ERROR: configuration parsing error:
   5: server 127.0.0.1:8080 conns_n=1a;
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

On my branch:

[tempesta] ERROR: Can't parse '1a' as 'int'
[tempesta] ERROR: configuration parsing error:
   5: server 127.0.0.1:8080 conns_n=1a;
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

As for me, the last variant is more verbose. Yes, that commit made the output less verbose for some options. But before the change it looked ugly: every directive had its own unique error message, and error processing was also unique for every directive. Some errors were described very well, while descriptions for others were completely missing. My variant is much easier to support.

The thing I completely dislike in parsing errors is the reversed error message. The ideal message for me is below:

[tempesta] ERROR: configuration parsing error:
   5: server 127.0.0.1:8080 conns_n=1a;
                                    ^^
[tempesta] ERROR: Can't parse '1a' as 'int', allowed values are [1, 65535]

Contributor:

Agree, the change is actually good since we still have

   5: server 127.0.0.1:8080 conns_n=1a;
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

And yes,

   5: server 127.0.0.1:8080 conns_n=1a;
                                    ^^

would look much better.

@@ -176,6 +176,7 @@ __str_grow_tree(TfwPool *pool, TfwStr *str, unsigned int flag, int n)
TfwStr *
tfw_str_add_compound(TfwPool *pool, TfwStr *str)
{
BUILD_BUG_ON(_TFW_STR_B_NUM > TFW_STR_FBITS + 1);
Contributor:

Well, I don't see much sense in the change: you check the flags in str.c while the flags are defined in str.h, just next to the TFW_STR_FBITS definition (so one can quickly find what the limitation is). Why do we need TFW_STR_B_*? They're used only once. Why do you check the limit here instead of using #if + #error in str.h? The patch is doubtful, but it doesn't hurt anything and you've already done the same with the http flags, so we can live with it.

Contributor:

The comment isn't addressed.

bytes_read = kernel_read(fp, out_buf + offset, read_size,
&offset);
bytes_read = vfs_read(fp, out_buf + offset, read_size, \
&offset);
Contributor:

We have no vfs_read call now - seems like a dirty merge with master. This is the reason for the broken build.

/*
* Need to split skb into parts before parsing the rest of the message.
*/
TFW_SPLIT = SS_SPLIT,
Contributor:

It seems the return code is just equal to TFW_POSTPONE since the both tfw_http_req_process() and tfw_http_resp_process() (the only functions checking for the value) do the same as for TFW_POSTPONE.

Contributor Author:

The code is required for ss_skb_process https://github.com/tempesta-tech/tempesta/pull/1012/files#diff-6892ca70f781b6c0fb03c6b20fc4cd0eR908
I need to intentionally split the skb between the buffered and streamed parts. But if the split takes place on skb fragment borders, it's impossible to identify whether skb splitting is required or not.

Contributor:

One of the points of #1012 (comment) is that likely you should not change ss_skb_process(): all skbs just must be immediately processed as previously and you don't need to modify ss_skb_process().

@@ -279,6 +280,7 @@ typedef struct {
TfwStr _tmp_chunk;
TfwStr hdr;
TfwHttpHbhHdrs hbh_parser;
struct sk_buff *skb;
Contributor:

Ok, now we have 3 skb lists, but this is the only one last skb which we should parse at particular time - why not to just more carefully peek it from the lists?

TfwHttpParser definitely should be in http_parser.h.

Contributor Author:

It's easier to save the currently parsed skb in the parser structure and use it in the routines from http_msg.c than to check for every header which list should be picked: head or trailer.

Anyway, I think that only a fully parsed skb should be added to the message. That's completely issue-free, unlike the current master. E.g. a request stealing attack is possible on the current master:

tempesta/tempesta_fw/http.c

Lines 2807 to 2813 in ad1ab0f

if (!req_conn_close && (data_off < skb_len)) {
/*
* Pipelined requests: create a new sibling message.
* @skb is replaced with pointer to a new SKB.
*/
hmsib = tfw_http_msg_create_sibling((TfwHttpMsg *)req,
&skb, data_off);

If a client sets a Connection: close header and pipelines more requests after it, skb splitting won't take place and all the remaining requests in the skb will be glued to the current one and sent to the backend server.

Contributor:

Ok, makes sense. My concern is that you're growing the parser data structure by 16 bytes, and the structure is inlined into each HTTP message descriptor. In general, growing data structures which are aggressively allocated is a bad thing. The community makes an effort to keep struct sk_buff as small as possible, and this was also the point of our #166. So if you find a way to efficiently peek the right skb and keep the overhead smaller, that would be great.

Contributor Author:

As discussed in the chat, TfwHttpParser is to be moved to TfwConn. Messages from a connection are received one-by-one, so there is no need to allocate parser data for every received message.

The change is OK for the upcoming HTTP/2 implementation: although it defines multiple streams inside a connection, messages in each stream are received one-by-one.

stage = tfw_http_parse_stage(hm);
parser->skb = *skb;
data_off = *off;
r = ss_skb_process(*skb, off, to_read, actor, hm);
Contributor:

The buffering significantly affects TCP behaviour, so please develop a functional TCP test which analyzes the TCP flow (ACKs and receive window announcements) as well as changes in the TCP receive and send buffers for both the server and client sides. Also check drop counters for the sockets (add them if necessary, but I reckon the stack already accounts for them).

Contributor:

It seems the test isn't done yet...

p += 1; \
if (unlikely(msg->flags & TFW_HTTP_F_STREAM)) { \
__fsm_const_state = to; /* start from state @to next time */\
__FSM_EXIT(TFW_SPLIT); \
Contributor:

It seems there is no difference in handling TFW_SPLIT and TFW_POSTPONE, so it turns out that there is no need for the macro as well.

*off += to_read;
max_read -= to_read;
r = actor(objdata, frag_addr + offset, to_read);
if ((r != SS_POSTPONE) || !max_read)
return r;
offset = 0;
} else {
Contributor:

As in comment for tfw_http_parse_skb(), it seems the changes are wrong - we should handle the buffering at lower layer.

Http_Msg_Stream,
/* Send response and drop connection if applicable. */
Http_Msg_Conn_Drop,
};
Contributor:

Why is the state machine separate from our current HTTP state machine with the states in http.h (the HTTP Generic FSM states)?

}
else {
TFW_DBG2("Add skb %p to message %p\n", *skb, hm);
tfw_http_skb_queue(hm, *skb, stage);
Contributor:

Looks like skbs with message body might end up not only in msg.body_skb but also in msg.head_skb. If this is the expected behavior, the names of the lists are somewhat discouraging.

@@ -1150,6 +1305,9 @@ tfw_http_conn_fwd_unsent(TfwSrvConn *srv_conn, struct list_head *eq)
list_for_each_entry_safe_from(req, tmp, fwd_queue, fwd_list) {
if (!tfw_http_req_fwd_single(srv_conn, srv, req, eq))
continue;
/* Stop forwarding if the request is in streaming. */
Contributor:

It seems that the next attempt to forward the next request will be successful, even if the previous request is still in an unfinished streamed state - as we unconditionally do srv_conn->msg_sent = (TfwMsg *)req; in tfw_http_req_fwd_single().

int p_stage = tfw_http_parse_stage(hmreq);

if (unlikely(p_stage == TFW_HTTP_PARSE_HEADERS))
__HTTP_FSM_EXIT();
Contributor:

The stage is called TFW_HTTP_PARSE_HEADERS, but since hm->crlf.flags & TFW_STR_F_COMPLETE the headers have already been parsed, so: 1. why do we need to stay in the Http_Msg_Buffer state; 2. the stage name is somewhat confusing.
(The same question for tfw_http_resp_process().)

if (unlikely(p_stage == TFW_HTTP_PARSE_HEADERS))
__HTTP_FSM_EXIT();
if (likely((p_stage == TFW_HTTP_PARSE_DONE) || stream_mode))
__HTTP_FSM_JUMP(Http_Msg_Headers);
Contributor:

We move to the Http_Msg_Headers state when we have parsed the whole message or when we are in streamed mode (at any place of the message after the headers) - maybe the Http_Msg_Headers state should be renamed to something like Http_Msg_Parsed_Or_Stream, or one more intermediate state is needed here.

}

/* Headers are fully parsed, message processing can be started. */
__HTTP_FSM_STATE(Http_Msg_Headers) {
Contributor:

As we have already parsed the headers - maybe the state name should be something like Http_Msg_Headers_{End, Done}.

/*
* The time the request was received is used for age
* calculations in cache, and for eviction purposes.
*/
req->cache_ctl.timestamp = tfw_current_timestamp();
req->jrxtstamp = jiffies;

/* Fall though to Http_Msg_Buffer */
Contributor:

though -> through (and in several similar places below).

req->httperr.reason);
else
r = tfw_srv_client_drop(req, req->httperr.status,
req->httperr.reason);
Contributor:

Initially, I mistakenly believed that tfw_srv_client_drop() would be used in server connection context only and gave it that awkward name. If we use this function in client connection context as well, its name (in comparison with tfw_client_drop()) reflects neither its essence nor its purpose, is rather confusing, and should be changed (maybe tfw_client_drop_light or something like that would be more appropriate).

@@ -383,6 +375,27 @@
# Default:
# listen 80;

# TAG: proxy_buffering
#
Contributor:

The comment is still not addressed

@@ -383,6 +375,27 @@
# Default:
# listen 80;

# TAG: proxy_buffering
#
Contributor:

Not about this particular place, but the PR has some merge conflicts - tls/dummy_headers went away somehow, so it's not buildable now. Please rebase the branch.

#define TFW_STR_F_VALUE (1U << TFW_STR_B_VALUE)
#define TFW_STR_F_HBH_HDR (1U << TFW_STR_B_HBH_HDR)
#define TFW_STR_F_TRAILER_HDR (1U << TFW_STR_B_TRAILER_HDR)
#define TFW_STR_F_ETAG_WEAK (1U << TFW_STR_B_ETAG_WEAK)
Contributor:

I still don't like the change: we have to write code in two places if we want to add a new flag. Please use usual flags and test_bit() as the kernel does.
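
For illustration, the "usual flags" pattern the review asks for looks like this (a userspace sketch; kernel code would use set_bit()/test_bit() from <linux/bitops.h>):

```c
/*
 * One definition per flag, used directly as a mask - no parallel
 * TFW_STR_B_* bit-number list to keep in sync.
 */
#define TFW_STR_VALUE           0x01U
#define TFW_STR_HBH_HDR         0x02U
#define TFW_STR_TRAILER_HDR     0x04U

static int
str_is_trailer(unsigned int flags)
{
    return !!(flags & TFW_STR_TRAILER_HDR);
}
```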

@@ -176,6 +176,7 @@ __str_grow_tree(TfwPool *pool, TfwStr *str, unsigned int flag, int n)
TfwStr *
tfw_str_add_compound(TfwPool *pool, TfwStr *str)
{
BUILD_BUG_ON(_TFW_STR_B_NUM > TFW_STR_FBITS + 1);
Contributor:

The comment isn't addressed.

@@ -568,14 +570,14 @@ __mark_hbh_hdr(TfwHttpMsg *hm, TfwStr *hdr)
if ((hid >= ht->off) || (TFW_STR_EMPTY(&ht->tbl[hid])))
return false;

ht->tbl[hid].flags |= TFW_STR_HBH_HDR;
ht->tbl[hid].flags |= TFW_STR_F_HBH_HDR;
Contributor:

Please revert the change

* Current parsing stage: headers, body, trailer headers or parsing is finished.
*/
tfw_http_parse_stage_t
tfw_http_parse_stage(TfwHttpMsg *hm)
Contributor:

The comment wasn't addressed.

&& !tfw_http_msg_is_processed((TfwHttpMsg *)req))
{
break;
}
Contributor:

If a backend server has already responded to us, why should we wait for the end of the request?

stage = tfw_http_parse_stage(hm);
parser->skb = *skb;
data_off = *off;
r = ss_skb_process(*skb, off, to_read, actor, hm);
Contributor:

It seems the test isn't done yet...

@krizhanovsky krizhanovsky left a comment

A lot of comments weren't addressed since the last review, please go through all of them and fix them.

.range = { 1, LONG_MAX },
},
.allow_none = true,
},
Contributor:

Please write appropriate documentation in etc/tempesta_fw.conf and the Wiki

.range = { -1, LONG_MAX },
},
.allow_none = true,
},
Contributor:

Please update etc/tempesta_fw.conf and the Wiki, explain how the Tempesta FW limits work with standard TCP receive buffers.

if (unlikely(r == TFW_BLOCK))
return TFW_BLOCK;
if (hm->msg.len >= proxy_buff_sz)
hm->flags |= TFW_HTTP_F_STREAM;
Contributor:

I still don't see how the comment #1012 (comment) is addressed. If a message is in streaming mode and we have the buffer size configured as e.g. 10KB, then how do you guarantee that the socket receive queue won't exceed that size? Well, it is guaranteed only by the fact that we immediately process each ingress skb, so a client can just flood us and we'll be forwarding an unlimited amount of data.

* Maximum size of memory used to store requests. Used for receive window size
* steering
*/
static size_t recv_window_sz;
Contributor:

The variable isn't used, it seems you just forgot to do the most important change for #1012 (comment)

*/
typedef struct {
TFW_CONN_COMMON;
struct list_head seq_queue;
spinlock_t seq_qlock;
spinlock_t ret_qlock;
TfwMsg *msg_stream;
TfwConn *conn_stream;
size_t recv_window_sz;
Contributor:

The variable isn't used, it seems you just forgot to do the most important change for #1012 (comment)


krizhanovsky commented Jun 20, 2018

Please also consider a fix for #995, probably as a separate pull request like for #639. #995 and #498 seem related. Point 2 in issue #391 requires implementation of a function(s) to calculate and set our receive window for a client, so this also must be considered in the upcoming changes.

@vankoven (Contributor Author)

Obsoleted by #1067

Successfully merging this pull request may close these issues.

HTTP message buffering and streaming
3 participants