On Mon, May 3, 2021 at 23:43 Tim Newsome <t...@sifive.com> wrote:

> Change
> https://sourceforge.net/p/openocd/code/ci/7dd323b26d93e49e409e02053e30f53ac8138cd5/
> cut remote bitbang performance (when talking to the spike RISC-V simulator)
> approximately in half. This change "removes the file write stream, replaces
> it with socket write calls." Previously performance was better because the
> individual byte-size writes were buffered, and didn't result in a system
> call until fflush() was called.
>
> I assume the right way to fix this is to implement a buffer/flush
> mechanism inside OpenOCD, on top of the socket calls, so that the
> optimization also works on Windows. Is that right? Is anybody motivated to
> take that on?
>
> Tim
>

Hi!

It feels entirely unnecessary to implement a buffering mechanism if the C
library already has a perfectly good implementation.

I think we should either live with the performance degradation*, or rework the
mentioned change so that raw socket calls are used only on Windows, via a
common wrapper and some autotools magic.

* the TCP socket should be batching small writes as well, via Nagle's
algorithm. Is TCP_NODELAY perhaps set on the socket?

/Andreas
