On 12. 12. 25 20:02, Daniel Sahlberg wrote:
On Thu, 11 Dec 2025 at 19:43, Branko Čibej <[email protected]> wrote:
On 11. 12. 25 19:59, Daniel Sahlberg wrote:
Hi,
Prompted by Evgeny's suggestion to move forward with releasing Subversion
1.15.0[1], I'd like to suggest getting a new minor release of Serf out the
door as well - preferably first!
There are a few features in Subversion that depend on recent development in
Serf (for example the error reporting) and it would be nice to have them
released together.
Brane has made a summary in Jira, see SERF-208[2], open points copied
below:
* Generalized error callbacks, discussed in [3]. From what I can see we
have an error callback mechanism that works for SSL errors. There is also
code in Subversion to support this. We need to decide whether [a] we are
happy with the current situation, or [b] someone can step up to improve it.
Personally, I'm leaning towards [a].
I'm not happy with the current custom callback just for SSL errors. I'd
be -0.9 to release it as a public API. I sort of dropped the ball.
In the previous thread, I proposed a patch that replaced the callback with
a more generic serf_error_cb_t, with an additional error_type parameter.
Basically:
[[[
-typedef apr_status_t (*serf_ssl_error_cb_t)(
+typedef apr_status_t (*serf_error_cb_t)(
+    serf_error_type_t error_type,
     void *baton,
     apr_status_t status,
     const char *message);
]]]
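For illustration, here is a minimal sketch of what the accompanying
serf_error_type_t and an application-side callback could look like. The
enum, its values and my_error_cb are my assumptions, not part of the posted
patch; only the serf_error_cb_t signature above comes from it.
[[[
#include <stdio.h>
#include <apr_errno.h>   /* apr_status_t */

/* Hypothetical error classification -- names are illustrative only. */
typedef enum serf_error_type_t {
    SERF_ERROR_TYPE_GENERAL = 0,
    SERF_ERROR_TYPE_SSL,
    SERF_ERROR_TYPE_AUTHN,
    SERF_ERROR_TYPE_NETWORK
} serf_error_type_t;

/* Example application callback matching the proposed serf_error_cb_t. */
static apr_status_t my_error_cb(serf_error_type_t error_type,
                                void *baton,
                                apr_status_t status,
                                const char *message)
{
    const char *kind = (error_type == SERF_ERROR_TYPE_SSL) ? "SSL" : "serf";

    (void)baton;   /* unused in this sketch */

    fprintf(stderr, "%s error (%d): %s\n", kind, (int)status,
            message ? message : "(no message)");

    /* Returning the status unchanged leaves serf's error handling as-is. */
    return status;
}
]]]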
I did NOT implement this anywhere else apart from the SSL code, but it
should be easy enough to add to other parts of Serf as well.
WDYT?
The problem is that the error callback is global, not connection- or at
least context-specific. That simply doesn't scale. Ideally, we'd have
global, context-, connection- and request-specific callbacks, all
optional, all deferring to the next up the chain if not defined.
But of course the SSL context doesn't have a reference to any of those,
so we'd have to first somehow hint to the SSL functions where they're
being used. That would require a pretty serious refactoring of the API,
which is why I've not even started.
Having only a global callback with no context is unhelpful in anything
but completely trivial use cases. And, yes, Subversion counts as a
trivial use case.
However, starting with a global callback is better than nothing,
assuming we design the API so that other, more specific callbacks can be
added later.
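To make the "defer to the next up the chain" idea concrete, here is a rough
sketch of how the lookup could work. Everything in it is hypothetical:
Serf's context, connection and request structs are opaque and have no such
slots today, which is exactly the refactoring described above. It reuses
the serf_error_cb_t and serf_error_type_t shapes from the patch.
[[[
#include <stddef.h>
#include <apr_errno.h>   /* apr_status_t */

/* Hypothetical per-object callback slot; a NULL cb means "not set, defer". */
typedef struct error_cb_slot_t {
    serf_error_cb_t cb;
    void *baton;
} error_cb_slot_t;

/* Walk from the most specific slot to the least specific one and invoke
   the first callback that is actually set. */
static apr_status_t notify_error(const error_cb_slot_t *request_slot,
                                 const error_cb_slot_t *connection_slot,
                                 const error_cb_slot_t *context_slot,
                                 const error_cb_slot_t *global_slot,
                                 serf_error_type_t error_type,
                                 apr_status_t status,
                                 const char *message)
{
    const error_cb_slot_t *chain[] = {
        request_slot, connection_slot, context_slot, global_slot
    };
    size_t i;

    for (i = 0; i < sizeof(chain) / sizeof(chain[0]); ++i) {
        if (chain[i] && chain[i]->cb)
            return chain[i]->cb(error_type, chain[i]->baton, status, message);
    }

    /* Nothing registered anywhere: report the status unchanged. */
    return status;
}
]]]
The point is only that each level needs a "not set" state to fall through
to the next one, so a global-only callback released now could grow the more
specific slots later without breaking existing callers.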
* Issue SERF-195, which is a substantial rewrite of the request queues. The
code is in a separate branch, SERF-195. The code looks good to me but I
haven't analyzed it in detail. It should be possible to merge it to trunk.
The patch works, in the sense that it causes no regressions, but I
haven't been able to reproduce the reported issue. It also makes the
code slightly more readable, but I couldn't find the mentioned race
condition during review; it looks like just a refactoring of the
original code.
I think we need another pair of eyes to do a careful review of the
branch changes.
I'll try to reproduce the initial issue and see if I can figure out what
goes wrong. Do you have any ideas where I should start? I assume a
multithreaded client (with multiple connections) would be needed?
Multi-threading on the client has no effect here. Request queues are
per-context and contexts require single-threaded access.
The original issue submission just adds a sleep() in
serf__setup_request_basic_auth(), so I guess you don't need a
multi-threaded client, you just need a server that always requires
authentication. That's why, e.g., linking Subversion with that and doing a
read from the Apache repos won't work. You could try to set up a local
multi-threaded HTTP server that requires basic auth and serves some
data -- easy to do in Python -- then write a Serf-based client that
sends requests in a loop. Serf-get would be a good starting point.
I think the important points here are:
* all requests to the server must require authentication;
* the server must be able to process requests in parallel and out of
order: that is the only scenario I can think of where processed requests
could be removed from the middle of the queue rather than the beginning
or the end;
* the server must be "slow" enough that the client request queue fills
up, so it needs both an upper limit on the number of parallel
requests and a fake latency for the responses.
I'll give the server script a go.
-- Brane