When we are at flush_max_threshold and the next bucket is a metadata bucket (i.e. next->length == 0), we still need to re-check flush_max_threshold and the associated optimisation (is_in_memory_bucket()) when we process this metadata bucket in the next iteration of the loop.

Follow-up to r1892450.

git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1909966 13f79535-47bb-0310-9956-ffa450edef68
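To illustrate the intent, here is a minimal, self-contained sketch in plain C. It is not the actual httpd code: struct bucket, FLUSH_MAX_THRESHOLD, flush() and this is_in_memory_bucket() stand-in are made up for the example. The point it demonstrates is that the flush-above-threshold check must run again on the iteration that handles the zero-length metadata bucket, because it is that bucket's own successor that decides whether the deferred flush can finally happen.

#include <stdio.h>
#include <stddef.h>

#define FLUSH_MAX_THRESHOLD 16384   /* hypothetical threshold for the sketch */

struct bucket {
    size_t length;          /* 0 means a metadata bucket in this sketch */
    int in_memory;          /* non-zero if the data already sits in memory */
    struct bucket *next;
};

/* Stand-in for httpd's is_in_memory_bucket(): only non-empty data buckets
 * whose content is already in memory qualify. */
static int is_in_memory_bucket(const struct bucket *b)
{
    return b->length != 0 && b->in_memory;
}

static void flush(size_t *sz)
{
    printf("flush %zu bytes\n", *sz);
    *sz = 0;
}

static void send_loop(struct bucket *head)
{
    size_t sz = 0;

    for (struct bucket *b = head; b != NULL; b = b->next) {
        /* A metadata bucket adds no data, but it must NOT skip the check
         * below: that is the point of this follow-up. */
        sz += b->length;

        /* Flush above the max threshold, unless the next bucket is one we
         * would rather try to write in the same pass. */
        if (sz > FLUSH_MAX_THRESHOLD
            && b->next != NULL
            && b->next->length != 0
            && !is_in_memory_bucket(b->next)) {
            flush(&sz);
        }
    }
    if (sz) {
        flush(&sz);             /* whatever is left is written after the loop */
    }
}

int main(void)
{
    /* in-memory data (20000) -> metadata (0) -> not-in-memory data (5000) */
    struct bucket file = { 5000,  0, NULL  };
    struct bucket meta = { 0,     0, &file };
    struct bucket big  = { 20000, 1, &meta };

    send_loop(&big);
    return 0;
}

With this sample brigade, the flush deferred at the first bucket (its successor is metadata) is performed when the metadata bucket re-runs the threshold check and sees a non-in-memory successor.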
Author: Christophe Jaillet
Date:   2023-05-21 17:46:22 +00:00
Parent: afad6e2a78
Commit: bbe60a5b3d


@@ -584,9 +584,8 @@ static apr_status_t send_brigade_nonblocking(apr_socket_t *s,
if (!nvec) {
delete_meta_bucket(bucket);
}
continue;
}
else {
/* Make sure that these new data fit in our iovec. */
if (nvec == ctx->nvec) {
if (nvec == NVEC_MAX) {
@@ -619,6 +618,7 @@ static apr_status_t send_brigade_nonblocking(apr_socket_t *s,
ctx->vec[nvec].iov_base = (void *)data;
ctx->vec[nvec].iov_len = length;
nvec++;
}
/* Flush above max threshold, unless the brigade still contains in
* memory buckets which we want to try writing in the same pass (if