If they do something like deferring the blocking operation onto a real
thread and completing it with an async callback, then the wasted CPU cycles
are going to shoot through the roof, but that might be a trade-off
acceptable to some.
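
For illustration, here's a minimal sketch of that pattern (the names are
hypothetical and it assumes a plain fixed-size worker pool, nothing MINA
specific): the blocking call runs on a dedicated thread and the result comes
back through an async callback, so the caller never blocks, but every
in-flight operation still ties up a real OS thread.

    // Hypothetical sketch: offload a blocking read onto a dedicated worker pool
    // and hand the result back through an async callback. The caller stays free,
    // but each pending operation still occupies a real OS thread for its duration.
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.UncheckedIOException;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class BlockingOffload {

        // One worker thread per concurrent blocking operation we want to allow.
        private static final ExecutorService BLOCKING_POOL = Executors.newFixedThreadPool(64);

        static CompletableFuture<byte[]> readAsync(InputStream in, int len) {
            return CompletableFuture.supplyAsync(() -> {
                try {
                    return in.readNBytes(len);   // blocks the worker thread, not the caller
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            }, BLOCKING_POOL);
        }
    }

readAsync(...).thenAccept(handler) gives the async-callback feel, but under
load the pool either has to grow (more threads, more context switching) or
becomes the bottleneck, which is where the wasted cycles come from.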

io_uring was doing something similar with its worker-thread model for AIO,
and the CPU usage skyrockets because the design is so horribly inefficient
that a newer version has a way to bypass it.

On Tue, May 19, 2020 at 12:06 PM Jonathan Valliere <[email protected]>
wrote:

> I expected they would have to change the IO API in some way to hack that
> together.  It will be interesting to see what they actually had to change.
> The IO API is already hugely inefficient as-is compared to calling the
> native functions directly via JNI.
>
> On Tue, May 19, 2020 at 11:49 AM Emmanuel Lécharny <[email protected]>
> wrote:
>
>>
>> On 19/05/2020 17:25, Jonathan Valliere wrote:
>> > Right, I’m not sure how loom is going to make any difference other than
>> > being able to resource limit certain groups of threads.  The problem with
>> > virtual threads is pausing the thread during io; I’m not sure it is even
>> > possible to do.
>>
>>
>> "The following blocking operations are /virtual thread friendly/ in the
>> current prototype; these methods do not pin the carrier thread when the
>> operation blocks."
>>
>> (https://wiki.openjdk.java.net/display/loom/Networking+IO)
>>
>>
>>
>>
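
For contrast with the offload sketch above, here's a rough illustration of
the model the Loom wiki quote is describing (using the virtual-thread
executor API as it eventually shipped; the class name and the echo handler
are made up for the example, not MINA code): each connection gets a virtual
thread that makes ordinary blocking calls, and when a read blocks only the
virtual thread parks while the carrier thread goes on to run other virtual
threads.

    // Illustrative sketch, not MINA code: one virtual thread per connection,
    // plain blocking socket IO. When the read blocks, only the virtual thread
    // parks; the carrier thread is released to run other virtual threads.
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class VirtualThreadEcho {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(8080);
                 ExecutorService perTask = Executors.newVirtualThreadPerTaskExecutor()) {
                while (true) {
                    Socket client = server.accept();
                    perTask.submit(() -> {
                        try (client) {
                            // Blocking echo loop: copies input straight back to output.
                            client.getInputStream().transferTo(client.getOutputStream());
                        } catch (Exception e) {
                            // Ignore per-connection failures in this sketch.
                        }
                    });
                }
            }
        }
    }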
