Conversation
weissi
left a comment
I don't think this achieves quite what we want
  return body.stream(HTTPClient.Body.StreamWriter { part in
      context.eventLoop.assertInEventLoop()
-     return context.writeAndFlush(self.wrapOutboundOut(.body(part))).map {
+     context.write(self.wrapOutboundOut(.body(part))).whenSuccess {
hmm, this would now never flush any body which means we'll load the whole body into memory before sending.
I think the right thing to do is (see the sketch below):
- if you know the whole body straight away (without streaming), then we should do `write(.head)`, `write(.body)`, `write(.end)`, `flush`
- if we actually want to stream the body, then `writeAndFlush(.head)`, `writeAndFlush(.body)`, `writeAndFlush(.end)`
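Purely as an illustration of those two strategies (this is not code from this PR or from async-http-client), a minimal Swift/NIO sketch could look like the following; `RequestWriterSketch`, `sendKnownBody` and `streamChunk` are hypothetical names:

```swift
import NIO
import NIOHTTP1

// Hypothetical handler sketching the two write strategies described above.
final class RequestWriterSketch: ChannelOutboundHandler {
    typealias OutboundIn = Never
    typealias OutboundOut = HTTPClientRequestPart

    /// Whole body known up front: plain writes, one flush at the very end.
    func sendKnownBody(context: ChannelHandlerContext,
                       head: HTTPRequestHead,
                       body: ByteBuffer) {
        context.write(self.wrapOutboundOut(.head(head)), promise: nil)
        context.write(self.wrapOutboundOut(.body(.byteBuffer(body))), promise: nil)
        context.write(self.wrapOutboundOut(.end(nil)), promise: nil)
        context.flush() // a single flush pushes head, body and end out together
    }

    /// Streaming body: flush every part so it actually leaves the process
    /// instead of accumulating in NIO's outbound buffer.
    func streamChunk(context: ChannelHandlerContext,
                     chunk: ByteBuffer) -> EventLoopFuture<Void> {
        context.eventLoop.assertInEventLoop()
        return context.writeAndFlush(self.wrapOutboundOut(.body(.byteBuffer(chunk))))
    }
}
```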
We also actually need a test that shows that this PR, as written, doesn't work. We should have a test which checks that body chunks do arrive at the other end before the next bit is sent out.
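One way such a test could be sketched (again, not code from this PR) is with NIO's `EmbeddedChannel`, since its `readOutbound()` only hands back writes that have actually been flushed. The StreamWriter/handler wiring is omitted here and the writes come straight from the test body, so this only illustrates the assertion strategy:

```swift
import NIO
import NIOHTTP1
import XCTest

final class BodyChunkFlushTests: XCTestCase {
    func testEachChunkIsVisibleBeforeTheNextIsWritten() throws {
        let channel = EmbeddedChannel()
        defer { _ = try? channel.finish() }

        for text in ["first", "second", "third"] {
            var chunk = channel.allocator.buffer(capacity: text.utf8.count)
            chunk.writeString(text)

            // The behaviour under test: a flushing write per body part.
            channel.write(HTTPClientRequestPart.body(.byteBuffer(chunk)), promise: nil)
            channel.flush()

            // With the non-flushing variant from this PR the chunk would still be
            // sitting in the outbound buffer here and readOutbound() would return nil.
            guard case .some(.body(.byteBuffer(let received))) =
                    try channel.readOutbound(as: HTTPClientRequestPart.self) else {
                XCTFail("body chunk was not flushed before the next write")
                return
            }
            XCTAssertEqual(received, chunk)
        }
    }
}
```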
Just as a heads up, the main development branch has been changed to `main`. This PR has been re-targeted to main and should just work. However, when performing rebases etc. please keep this in mind -- you may want to fetch the main branch and rebase onto it.
Right now we call `flush` on every body part write; this can be suboptimal. Closes #203.

Motivation:
Library users can write data in small chunks; in that case we are not buffering enough data, and a better solution would be to buffer at the NIO level.

Modifications:
Body part writes no longer flush, they just `write`.

Result:
Body part writes are not flushing anymore.