On Sun, Jul 7, 2013 at 11:06 PM, Bennie Kloosteman wrote:
> "You think Linux is not well-engineered?"
>
> Nope .. it's the same piece of 1970s crap that all the other popular OS use,
> with trivial differences people make a big deal about.. You really think the
> difference between Vista and Linux
From what I gather, you only really use Windows... yet you're trying to
argue about Unix-like systems. They are not even similar to Windows at all,
so your attempt to argue that in fact they're all the same is amusing...
and saddening. I didn't switch away from Windows because I grew a neckbeard
o
On Sun, Jul 7, 2013 at 11:06 PM, Bennie Kloosteman wrote:
> "You think Linux is not well-engineered?"
>
> Nope .. it's the same piece of 1970s crap that all the other popular OS use,
> with trivial differences people make a big deal about.. You really think the
> difference between Vista and Linux
I wasn't aware that Linus Torvalds possessed time travel technology. Either
way, to say that Linux, OSX and the Windows kernel are the same but with
minuscule differences is a pretty broad statement.
On 8 Jul 2013 13:06, "Bennie Kloosteman" wrote:
> "You think Linux is not well-engineered?"
>
> N
"You think Linux is not well-engineered?"
Nope .. it's the same piece of 1970s crap that all the other popular OS
use, with trivial differences people make a big deal about.. You really think
the difference between Vista and Linux is the kernel when you complain
about X.org? XP, Vista, Windows
I should have asked earlier, but better late than never: this
conversation's gone off the rails and well into the "non-courteous,
non-productive" territory we ask people to keep off our lists. See
"Conduct" here:
https://github.com/mozilla/rust/wiki/Note-development-policy
We're going to supp
You think Linux is not well-engineered? That statement just took the wind
out of your sails. There are components that run *on top of* Linux (and
similar Unix-like systems) that are poorly engineered, X.org chief among
them, but that doesn't make the Linux kernel poorly engineered. Making
intangibl
On Sun, Jul 7, 2013 at 5:01 PM, james wrote:
> On 05/07/2013 23:05, Daniel Micay wrote:
>
> On Fri, Jul 5, 2013 at 5:43 PM, james wrote:
>
>> On 05/07/2013 08:37, Graydon Hoare wrote:
>
>>>
>>> I agree that it's higher than it seems it "needs to be". But it will
>>> always be unnecessary overhead
On 05/07/2013 23:05, Daniel Micay wrote:
On Fri, Jul 5, 2013 at 5:43 PM, james wrote:
>On 05/07/2013 08:37, Graydon Hoare wrote:
>>
>>I agree that it's higher than it seems it "needs to be". But it will
>>always be unnecessary overhead on x64; it really makes no sense there. The
>>address spac
On Fri, Jul 5, 2013 at 11:07 PM, Daniel Micay wrote:
> On Fri, Jul 5, 2013 at 4:58 PM, Bill Myers wrote:
> I believe that instead of segmented stacks, the runtime should determine a
> tight upper bound for stack space for a task's function, and only
> allocate a fixed stack of that s
On Fri, Jul 5, 2013 at 6:43 PM, Thomas Daede wrote:
> On Fri, Jul 5, 2013 at 5:05 PM, Daniel Micay wrote:
>> You can rely on it, it's the standard behaviour on Linux. The actual
>> consumed memory will be equal to the size of the pages that have been
>> touched.
>
> Has anyone actually tested the
On Fri, Jul 5, 2013 at 5:05 PM, Daniel Micay wrote:
> You can rely on it, it's the standard behaviour on Linux. The actual
> consumed memory will be equal to the size of the pages that have been
> touched.
Has anyone actually tested the performance of a highly fragmented
page table resulting fr
On Fri, Jul 5, 2013 at 5:43 PM, james wrote:
> On 05/07/2013 08:37, Graydon Hoare wrote:
>>
>> I agree that it's higher than it seems it "needs to be". But it will
>> always be unnecessary overhead on x64; it really makes no sense there. The
>> address space is enormous and it's all lazily committ
On 05/07/2013 08:37, Graydon Hoare wrote:
I agree that it's higher than it seems it "needs to be". But it will
always be unnecessary overhead on x64; it really makes no sense there.
The address space is enormous and it's all lazily committed.
I don't think you can rely on 'lazily committed'.
On Fri, Jul 5, 2013 at 4:58 PM, Bill Myers wrote:
> I believe that instead of segmented stacks, the runtime should determine a
> tight upper bound for stack space for a task's function, and only
> allocate a fixed stack of that size, falling back to a large "C-sized" stack
> if a bound cannot
I believe that instead of segmented stacks, the runtime should determine a
tight upper bound for stack space for a task's function, and only allocate
a fixed stack of that size, falling back to a large "C-sized" stack if a bound
cannot be determined.
Such a bound can always be computed if t
On 5 July 2013 09:37, Graydon Hoare wrote:
> It's all done by comparing the stack pointer to a reserved segment register.
> The growth-prologue on x64 looks like this:
I am curious, has moving stack checks to the caller been considered so
a function code can assume that it always has enough space
On 13-07-04 10:54 PM, james wrote:
If the segments are allocated from a selection of standard sizes, can
you not have a (hardware) thread-local pool for each size and dump to a
shared pool?
No. We should be, but we've not moved to a single page-size pool yet. I
hope to move to this design as
On 13-07-04 07:30 PM, Thad Guidry wrote:
On Windows there's a better way than DWARF or SJLJ to catch exceptions I
learned today. (I have no idea what I am talking about, but here goes
from what I read...)
Windows uses its own exception handling mechanism known as SEH. GCC
supports SEH on Win64
On 04/07/2013 23:33, Patrick Walton wrote:
stack segment allocation is the slow path in C malloc, because it's in
a high storage class.
If the segments are allocated from a selection of standard sizes, can
you not have a (hardware) thread-local pool for each size and dump to a
shared pool?
On Windows there's a better way than DWARF or SJLJ to catch exceptions I
learned today. (I have no idea what I am talking about, but here goes from
what I read...)
Windows uses its own exception handling mechanism known as SEH. GCC
supports SEH on Win64 (and has plans I think later for Win32?)
On Thu, Jul 4, 2013 at 10:16 PM, Benjamin Striegel
wrote:
> Would moving away from segmented stacks also allow us to bring jemalloc
> back?
Not if we still have them as an optional feature. We'll still hit the
same problem as before.
Would moving away from segmented stacks also allow us to bring jemalloc
back?
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev
On Thu, Jul 4, 2013 at 7:49 PM, Daniel Micay wrote:
> On Thu, Jul 4, 2013 at 6:33 PM, Patrick Walton wrote:
>> On 7/4/13 12:58 PM, Daniel Micay wrote:
>>>
>>> You can create many threads with fixed stacks, they just start off
>>> using 4K instead of however much smaller our segmented stacks will
On Thu, Jul 4, 2013 at 6:33 PM, Patrick Walton wrote:
> On 7/4/13 12:58 PM, Daniel Micay wrote:
>>
>> You can create many threads with fixed stacks, they just start off
>> using 4K instead of however much smaller our segmented stacks will be.
>> A scheduler will just be more expensive than a regul
On Thu, Jul 4, 2013 at 5:33 PM, Patrick Walton wrote:
> 1. There is no way for the compiler or runtime to know ahead of time how
> much stack any given task will need, because this is based on dynamic
> control flow.
There is, unfortunately, no easy way for the user to know ahead of
time either.
On 7/4/13 12:58 PM, Daniel Micay wrote:
You can create many threads with fixed stacks, they just start off
using 4K instead of however much smaller our segmented stacks will be.
A scheduler will just be more expensive than a regular lightweight
task.
The 15-100% performance hit from segmented st