On 25.09.25 at 14:25, Dmitry Olshansky wrote:
On Thursday, 25 September 2025 at 07:24:00 UTC, Sönke Ludwig wrote:
On 23.09.25 at 17:35, Dmitry Olshansky wrote:
On Monday, 22 September 2025 at 11:14:17 UTC, Sönke Ludwig wrote:
On 22.09.25 at 09:49, Dmitry Olshansky wrote:
On Friday, 19 September 2025 at 17:37:36 UTC, Sönke Ludwig wrote:
So you don't support timeouts when waiting for an event at all? Otherwise I don't see why a separate API would be required; this should be implementable with plain POSIX APIs within vibe-core-lite itself.

Photon's API is the syscall interface. So to wait on an event you just call poll.
Behind the scenes it will just wait on the right fd to change state.

Now vibe-core-light wants something like read(buffer, timeout), which is not a syscall API but could be added. But since I'm going to add new API, I'd rather have something consistent and sane, not just a bunch of ad-hoc functions to satisfy the vibe.d interface.

Why can't you then use poll() to, for example, implement `ManualEvent` with timeout and interrupt support? And shouldn't recv() with a timeout be implementable the same way: poll with a timeout and only read when ready?

Yes, recv with a timeout is basically poll+recv. The problem is that I then need to support interrupts in poll, so nothing has really changed. As far as `ManualEvent` goes, I've implemented that with a custom condition variable and mutex. That mutex is not interruptible, as it's backed by a semaphore on the slow path in the form of an eventfd. I could create a custom mutex that is interruptible, I guess, but the notion of interrupts would have to be introduced to Photon, and I do not really like that.
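The poll+recv combination described above can be sketched in plain C against the POSIX API. This is an illustrative sketch, not Photon's or vibe-core's actual implementation; the name `recv_timeout` and the return-code convention are made up for the example:

```c
/* Sketch: recv() with a timeout, implemented as poll() + recv().
 * recv_timeout and its -2 "timed out" code are illustrative choices. */
#include <poll.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <stddef.h>

/* Returns bytes read, 0 on EOF, -1 on error, -2 on timeout. */
ssize_t recv_timeout(int fd, void *buf, size_t len, int timeout_ms)
{
    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    int rc = poll(&pfd, 1, timeout_ms);
    if (rc < 0)
        return -1;  /* poll failed (possibly EINTR) */
    if (rc == 0)
        return -2;  /* timeout expired, nothing readable */
    return recv(fd, buf, len, 0);  /* readiness reported, read now */
}
```

Interrupt support would then hang off the `rc < 0`/EINTR path, which is exactly where the interface question discussed here arises.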

I'd probably create an additional event FD per thread used to signal interruption and also pass that to any poll() that is used for interruptible wait.
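The per-thread event FD idea can be sketched on Linux with eventfd: poll the I/O fd and the wakeup fd together, and let the interrupting side write to the wakeup fd. The names `wait_interruptible` and `wake_fd` are illustrative, not part of either library:

```c
/* Sketch: interruptible wait via a per-thread eventfd (Linux-specific),
 * polled alongside the I/O fd. All names here are illustrative. */
#include <poll.h>
#include <sys/eventfd.h>
#include <unistd.h>
#include <stdint.h>

enum wait_result { WAIT_READY, WAIT_TIMEOUT, WAIT_INTERRUPTED, WAIT_ERROR };

/* Block until io_fd is readable, the timeout expires, or someone
 * writes to wake_fd to interrupt the wait. */
enum wait_result wait_interruptible(int io_fd, int wake_fd, int timeout_ms)
{
    struct pollfd pfds[2] = {
        { .fd = io_fd,   .events = POLLIN },
        { .fd = wake_fd, .events = POLLIN },
    };
    int rc = poll(pfds, 2, timeout_ms);
    if (rc < 0)  return WAIT_ERROR;
    if (rc == 0) return WAIT_TIMEOUT;
    if (pfds[1].revents & POLLIN) {
        uint64_t v;
        if (read(wake_fd, &v, sizeof v) < 0)  /* drain the wakeup counter */
            return WAIT_ERROR;
        return WAIT_INTERRUPTED;
    }
    return WAIT_READY;
}

/* The interrupting side just does:
 *   uint64_t one = 1;
 *   write(wake_fd, &one, sizeof one);
 */
```

A WAIT_INTERRUPTED result is then the point where a fiber scheduler could translate the wakeup into an exception or a special return code.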

poll could be made interruptible without any additions; it's really just a yield plus waiting for events. I could implement interruptible things by simply waking the fiber up with a special flag passed (it has such a flag to determine which event woke us up anyway).

The problem is, I do not see how to provide an adequate interface for this functionality. I have an idea, though: use EINTR, or maybe devise my own error code for interrupts; then I could interrupt at my syscall interface level. Next is the question of how to throw from a nothrow context. This is why I told you that I would need separate APIs for interruptible things, i.e. the ones that allow interrupts. *This* is something I do not look forward to.

I think we have a misunderstanding of what vibe.d is supposed to be. It seems like you are only focused on the web/server role, while to me vibe-core is a general-purpose I/O and concurrency system with no particular specialization in server tasks. With that view, your statement sounds to me like "Clearly D is not meant to do multi-threading, since main() only runs in a single thread".

The defaults are what is important. Go defaults to multi-threading, for instance. D defaults to multi-threading, because TLS by default is certainly a mark of a multi-threaded environment. std.concurrency defaults to a new thread per spawn; again, this tells me it's about multi-threading. I intend to support multi-threading by default. I understand that we view this issue differently.

But you are comparing different defaults here. With plain D, you also have to import either `core.thread` or `std.concurrency`/`std.parallelism` to do any multi-threaded work. The same is true for vibe-core. What you propose would be more comparable to having foreach() operate like parallelForeach(), with far-reaching consequences.

If we are just talking about naming - runTask/runWorkerTask vs. go/goOnSameThread - that is of course debatable, but in that case I think it's blown very much out of proportion to take that as the basis for claiming "it's meant to be used single-threaded".

So runTask is assumed to run on the same core, while runWorkerTask runs on any available core? That didn't occur to me. I thought the worker pool was for blocking tasks, as there is such a pool in Photon. I can just switch runTask to goOnSameThread to maximize compatibility with vibe.d.

Yes, I think that should be enough to make the semantics compatible. runWorkerTask is kind of dual-use in that regard and is mostly meant for CPU workloads. There is a separate I/O worker pool for blocking I/O operations to avoid computationally expensive worker tasks getting blocked by I/O. This is definitely the area where Photon can shine, working fine for all kinds of workloads with just a single pool.

Channels tame the complexity. Yes, channels could get more expensive in a multi-threaded scenario, but we already agreed that it's not CPU-bound.

If you have code that does a lot of these things, this just degrades code readability for absolutely no practical gain, though.

I humbly disagree. I'd take explicit channels over global TLS variables any day.

It wouldn't usually be TLS, but just a delegate that gets passed from the UI task to the I/O task for example, implicitly operating on stack data, or on some UI structures referenced from there.

This doesn't make sense; in the original vibe-core, you can simply choose between spawning in the same thread or in "any" thread. `shared`/`immutable` is correctly enforced in the latter case to avoid unintended data sharing.

I have go and goOnSameThread. Guess which is the encouraged option.

Does go() enforce proper use of shared/immutable when passing data to the scheduled "go routine"?

It goes with the same API as we have for threads - a delegate - so sharing becomes the user's responsibility. I may add a function + args variant for better handling of resources passed to the lambda.

That means that this is completely un`@safe` - C++ level memory safety. IMO this is an unacceptable default for web applications.

Yeah, I'm mostly not in the @safe world. But as I said, to make it more upstreamable I will switch the defaults, so that vibe-core-light provides the same guarantees as regular vibe-core does.

malloc() will also always be a bottleneck under the right load. Just the n-times-larger amount of virtual address space required may start to become an issue for memory-heavy applications. But even if we ignore that, ruling out using the existing GC doesn't sound like a good idea to me.

The existing GC is basically 20+ years old; of course we need a better GC, and thread-cached allocation solves contention in multi-threaded environments. An alternative memory allocator is doing great on 320-core machines. I cannot tell you which allocator that is or what exactly these servers are. Even jemalloc does okay-ish, though.

And the fact is that, even with relatively mild GC use, a web application will not scale properly with many cores.

I only partially agree: Java's GC handles load just fine and runs faster than vibe.d(-light), and it does allocations on its serving code path.

I was just talking about the current D GC here. Once we have a better implementation, this can very well become a much weaker argument!

However, speaking more generally, the other arguments for preferring to scale using processes still stand, and even with a better GC I would still argue that leading library users to do multi-threaded request handling is not necessarily the best default (of course it still *can* be for some applications).

I'm betting more on the threaded approach, but we are just different. See also my reply on the numbers - processes are only about 1-2% faster (and the noise is easily in the 0.5% range), once the GC bottleneck is handled, that is.

Anyway, the main point from my side is just that the semantics of what *is* in vibe-core-light should really match the corresponding functions in vibe-core. Apart from that, I was just telling you that your impression of it being intended to be used single-threaded is not right - which doesn't mean the presentation shouldn't emphasize the multi-threaded functionality and multi-threaded request processing more.

Given the number of potential expectations from the user side, it seems I need to update vibe-core-light to use goOnSameThread for runTask. I do not like how I need to do extra work to launch a multi-threaded server, though, which is what got me started on the whole "defaults" argument.

Maybe we can at least think about a possible reintroduction of a direct `listenHTTPDist`/`ListenHTTPMultiThreaded`/... API that provides a `@safe` interface - there used to be a `HTTPServerOption.distribute` that did that, but it didn't enforce `shared` properly and led to race conditions in practical applications, because people were not aware of the implicitly shared data or of its implications.