On Tue Sep 12 16:50:56 2023 +0000, Paul Gofman wrote:
We also had a chat with Zeb to discuss this. To summarize:
- saying that Windows (or Linux) is dropping packets in case of TCP is
not quite correct; it depends on what the server is doing, and what actually happens is that the sender keeps retransmitting the packet, essentially blocking further sends;
- the other question raised was whether _RCVBUF does anything at all on
Windows (e.g., shouldn't we just always set it to 64k).
I designed an ad-hoc test that tries to measure the effect of the Windows rcvbuf in the following way:
- the receiving part (Windows) sets the SO_RCVBUF value being tested and waits for
2 sec after accept() before calling recv();
- the sending part (Linux) tries to send as much data as possible during
~300 ms using async sends;
- then the receiving part wakes up, receives everything, and prints how much
data was actually received.

Testing like that confirms that setting values below 64k (the up-to-date Windows default reported RCVBUF) doesn't seem to change anything, while increasing the value above 64k does increase the amount of data received this way. 64k is probably not random: it is about the maximum UDP packet size and TCP segment size. So I think the bottom line of this is:
- avoiding setting a short _RCVBUF still makes sense;
- minding the Unix system default probably doesn't; we can just use a
64k minimum instead;
- we still need to set _RCVBUF for bigger values, as it affects things (in a
somewhat similar way on Windows and Unix). I've sent an updated patch.
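For reference, the shape of the measurement can be sketched roughly like this (a Python sketch of the same idea, not the actual Winsock test; `measure_buffered` and the specific sizes/timings are my own choices):

```python
import socket
import threading
import time

def measure_buffered(rcvbuf_size, send_duration=0.3, wait=0.5):
    # Receiver side: set SO_RCVBUF before listen() so the accepted
    # socket inherits it, then sleep after accept() before reading.
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, rcvbuf_size)
    listener.bind(("127.0.0.1", 0))
    listener.listen(1)
    port = listener.getsockname()[1]

    def sender():
        # Sender side: push as much data as possible for ~send_duration
        # using non-blocking sends, standing in for the async send.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect(("127.0.0.1", port))
        s.setblocking(False)
        chunk = b"x" * 4096
        deadline = time.monotonic() + send_duration
        while time.monotonic() < deadline:
            try:
                s.send(chunk)
            except BlockingIOError:
                time.sleep(0.01)  # buffers are full, sender is stalled
        s.close()

    t = threading.Thread(target=sender)
    t.start()
    conn, _ = listener.accept()
    time.sleep(wait)  # receiver "sleeps" while the sender fills buffers
    t.join()

    # Now drain everything and count how much actually got through.
    total = 0
    while True:
        data = conn.recv(65536)
        if not data:
            break
        total += len(data)
    conn.close()
    listener.close()
    return total
```

On Linux this shows the same general effect: a larger requested buffer lets noticeably more data accumulate while the receiver sleeps, e.g. comparing `measure_buffered(8192)` against `measure_buffered(4 * 1024 * 1024)`.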
As a separate note (not quite related here), spotted while testing this: SO_SNDBUF with size 0 also behaves quite differently on Windows compared to what we have now. For one, on Windows setting SO_SNDBUF to 0 makes send() always block, even if the socket is created with WSA_FLAG_OVERLAPPED and FIONBIO is set. I am not currently aware of any app depending on that, but I guess it is possible.
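For contrast, a minimal sketch of what the Unix side does with a 0-size send buffer (Linux-oriented; the socketpair is just a convenient way to get two connected sockets):

```python
import socket

# On Linux, requesting a 0-byte send buffer is clamped to a small
# kernel minimum rather than being honored literally.
a, b = socket.socketpair()
a.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 0)
effective = a.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)

# And in FIONBIO-style non-blocking mode, send() keeps returning
# EWOULDBLOCK once the (clamped) buffer fills, instead of blocking
# the way Windows does with SO_SNDBUF = 0.
a.setblocking(False)
would_block = False
try:
    for _ in range(100000):
        a.send(b"x" * 4096)  # peer never reads, so the buffers fill
except BlockingIOError:
    would_block = True

a.close()
b.close()
```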