re stacks.

Here is Go's story and some numbers:

https://docs.google.com/document/d/1wAaf1rYoM4S4gtnPh0zOlGzWtrZFQ5suE8qr2sD8uWQ/pub

Here is Rust's:

https://mail.mozilla.org/pipermail/rust-dev/2013-November/006314.html


>> If you're going to go with a stack guard page you can just pause and
>> expand the stack; it should not happen that often. Yes, you can just
>> attach a new segment, but swapping in a whole new stack is also
>> attractive, as copies are pretty damn fast and you don't need to
>> calculate segments, and locality can be better (though it does require
>> precise collection). That is what Go is going with now, though their
>> previous implementation had checks in every function (as per the LLVM
>> default). In the case of the paper they were using C.
>
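
To make the guard-page idea concrete, here is a minimal sketch (my own
illustration, assuming POSIX mmap/mprotect, not any runtime's actual
code); the SIGSEGV handler that would grow or copy the stack on a fault
is elided:

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    long page = sysconf(_SC_PAGESIZE);
    size_t size = 64 * 1024;

    /* Reserve the whole stack region up front. */
    char *stk = mmap(NULL, size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (stk == MAP_FAILED) { perror("mmap"); return 1; }

    /* Revoke access to the lowest page: this is the guard. An
       overflow now faults instead of trampling whatever sits below. */
    if (mprotect(stk, (size_t)page, PROT_NONE) != 0) {
        perror("mprotect"); return 1;
    }

    printf("usable: [%p, %p), guard below %p\n",
           (void *)(stk + page), (void *)(stk + size),
           (void *)(stk + page));
    munmap(stk, size);
    return 0;
}

The alternative in the quote, a check in every function prologue, avoids
the fault handler entirely but pays a few instructions per call, which
is the per-function cost Go moved away from.
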
> You may not be able to relocate the stack if there is a C call frame on
> it, because people do all kinds of stupid stuff in C. When you *can* copy
> the stack, it is fortunate that stacks are allocated as large objects (so
> you don't copy them - you remap them).
>

Good reason to use as little C as possible, and at least to insist that
it's well behaved; it also needs big stack segments since, like you
said, C could do all sorts of stupid stuff.
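
On the "you remap them" point, here is roughly what that looks like
with Linux's mremap(2) (again my own sketch, not the paper's code).
Note that nothing here fixes up pointers into the old range, which is
exactly why a C frame pins the stack:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    size_t old_size = 1 << 20;  /* a 1 MiB "stack" */
    char *old = mmap(NULL, old_size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (old == MAP_FAILED) { perror("mmap"); return 1; }
    strcpy(old, "some frame data");

    /* Double it; MREMAP_MAYMOVE lets the kernel relocate the mapping
       by rewriting page tables rather than copying the bytes. */
    char *grown = mremap(old, old_size, 2 * old_size, MREMAP_MAYMOVE);
    if (grown == MAP_FAILED) { perror("mremap"); return 1; }

    printf("%p -> %p, still reads: %s\n", (void *)old, (void *)grown,
           grown);
    munmap(grown, 2 * old_size);
    return 0;
}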

>> Singularity, which used a similar technique to this paper, also
>> records that segmented stacks have some cost, since they mention
>> possible HW assistance.

> Concerns about scientific integrity aside, it's just plain unfortunate,
> because they claimed a lot of things that it would be useful to actually
> *know*, but we have no idea which parts of their claims are
> scientifically valid.
>

Agreed; their concurrency number was on some poor C# web server, and the
case was highly concurrent (though they did say that). I do note that on
uClinux (basically a Linux kernel about six months behind the mainline
kernel, for small/embedded devices) with the MMU off, the results are
surprisingly good: besides TLB hit rates, context switches are at least
10x faster, but there is no copy-on-write fork, so some apps are fast
and others slow.

Ben
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
