brucee wrote:
> coding at home and getting something significant working
> is what i meant. but by all means write to the list instead.
i agree totally that it's a Good Thing to go and actually code something
up, but i also think that it's a positive thing to have technical discussions
out in the open.
> this thread is now twice as big as the swap and proc code.
> is it really that much fun to do research this way?
> yes, it has been done. no, my current work has nothing
> to do with this issue.
>
> someone written a line of relevant code during this "discussion"?
would you care to give refere
On 2007-Sep-6, at 21:37 , Bruce Ellis wrote:
coding at home and getting something significant working
is what i meant. but by all means write to the list instead.
So how do you see the way out of a brk() implementation while still
coding in a pointer based environment? Code only follows e
this thread is now twice as big as the swap and proc code.
is it really that much fun to do research this way?
yes, it has been done. no, my current work has nothing
to do with this issue.
someone written a line of relevant code during this "discussion"?
brucee
On 9/7/07, Roman Shaposhnik <[EMA
coding at home and getting something significant working
is what i meant. but by all means write to the list instead.
brucee
On 9/7/07, Lyndon Nerenberg <[EMAIL PROTECTED]> wrote:
>
> On 2007-Sep-6, at 21:09 , Bruce Ellis wrote:
>
> > someone written a line of relevant code during this "discussi
On 2007-Sep-6, at 21:09 , Bruce Ellis wrote:
someone written a line of relevant code during this "discussion"?
Be careful. Coding to mailing list discussions results in Linux.
FWIW, the discussion here has made more sense than any and all
arguments/conversations I've had with UNIX vendors
On Thu, 2007-09-06 at 12:38 -0700, ron minnich wrote:
> On 9/6/07, Joel C. Salomon <[EMAIL PROTECTED]> wrote:
>
> > I can't imagine that either of these uses is nearly compelling enough
> > to open this can of worms. Has anyone truly felt confined by Plan
> > 9's fork+exec model?
>
> yes, be
I wonder if this is interesting, but the user-mode process creation device is,
in effect, what I and brucee have been working on (separately) for plan9
binaries
on windows and what russ did for linux binaries on plan9.
BTW the latter has seen some development recently - someone managed to get
dy
> yes, because exec takes a pathname. that's a pull model. That is
9p can remove your pain
On 9/6/07, Joel C. Salomon <[EMAIL PROTECTED]> wrote:
> I can't imagine that either of these uses is nearly compelling enough
> to open this can of worms. Has anyone truly felt confined by Plan
> 9's fork+exec model?
yes, because exec takes a pathname. that's a pull model. That is
pretty awf
> > > That's why many OSes have a "spawn" primitive that combines fork-and-exec.
> >
> > the problem with spawn is it requires a mechanism to replace that little
> > block of code between the fork and exec. that code is hardly ever the
> > same so spawn keeps growing arguments.
>
> Yes, on the oth
erik quanstrom wrote:
> > That's why many OSes have a "spawn" primitive that combines fork-and-exec.
> the problem with spawn is it requires a mechanism to replace that little
> block of code between the fork and exec. that code is hardly ever the
> same so spawn keeps growing arguments.
Yes, on
> That's why many OSes have a "spawn" primitive that combines fork-and-exec.
the problem with spawn is it requires a mechanism to replace that little
block of code between the fork and exec. that code is hardly ever the
same so spawn keeps growing arguments.
- erik
Charles Forsyth wrote:
> you'll need to ensure that each fork reserves as many physical pages as are
> currently
> shared in the data space, for the life of the shared data,
> so that every subsequent copy-on-write is guaranteed to succeed.
> this will prevent some large processes from forking to
On 9/4/07, erik quanstrom <[EMAIL PROTECTED]> wrote:
> > Also, [swap is] broken, broken, broken on Plan 9
>
> but could you describe what antisocial behavior it exhibits and how one
> could reproduce this behavior?
My cpu/auth/file server is a poor little headless P100 with 24MB RAM
(there's 32 i
> > if one wishes to be remotely standards-compliant, sending a note on
> > allocation
> > failure is not an option. k&r 2nd ed. p. 252.
>
> i was discussing something about it in practice, and not in a 1970's
> environment,
> where the approach didn't really work well even then. the `recover
On 9/5/07, ron minnich <[EMAIL PROTECTED]> wrote:
> On 9/4/07, sqweek <[EMAIL PROTECTED]> wrote:
> > Well, I guess the next question is: Is malloc's interface
> > sufficient/practical?
>
> sure. I use it all the time, I'm using it now.
>
> I use it several hundred times a second without knowing i
On 9/4/07, sqweek <[EMAIL PROTECTED]> wrote:
> Well, I guess the next question is: Is malloc's interface
> sufficient/practical?
sure. I use it all the time, I'm using it now.
I use it several hundred times a second without knowing it, I bet.
ron
On 9/3/07, erik quanstrom <[EMAIL PROTECTED]> wrote:
> if((p = malloc(Size)) == 0)
> /* malloc recovery code */
> /* why bother? the kernel could be lying to us anyway. */
Having run into issues writing code with large memory footprints on
linux (which al
On 9/4/07, erik quanstrom <[EMAIL PROTECTED]> wrote:
>
> On Tue Sep 4 09:39:37 EDT 2007, [EMAIL PROTECTED] wrote:
>
> > Yep, I've seen code with totally erroneous use of realloc work perfectly on
> > Linux for example, due to its behavior. Then I built it on FreeBSD and it
> > failed appropr
Two cases so far: running out of stack and allocating overcommited memory.
You can easily catch stack growth failure in the OS. The proc gets a
note. The proc has the
option of the equivalent of 'echo growstack xyz > /proc/me/ctl'.
For overcommits, 'echo faultall > /proc/me/ctl'.
can we catch ev
i can't remember whether anyone pointed this out as well (probably):
you'll need to ensure that each fork reserves as many physical pages as are
currently
shared in the data space, for the life of the shared data,
so that every subsequent copy-on-write is guaranteed to succeed.
this will prevent s
> if one wishes to be remotely standards-compliant, sending a note on allocation
> failure is not an option. k&r 2nd ed. p. 252.
i was discussing something about it in practice, and not in a 1970's
environment,
where the approach didn't really work well even then. the `recovery' that
resulted
On Tue Sep 4 09:39:37 EDT 2007, [EMAIL PROTECTED] wrote:
> Yep, I've seen code with totally erroneous use of realloc work perfectly on
> Linux for example, due to its behavior. Then I built it on FreeBSD and it
> failed appropriately :-).
what does this have to do with memory overcommitment?
> that's a slightly different aspect. the note should not be "page fault" but
> "out of memory" (or some such thing). that's much better than a nil return.
> most errors on shared resources are better expressed as exceptions (notes),
> because that's what they are: they are a failure of the under
On 9/4/07, Douglas A. Gwyn <[EMAIL PROTECTED]> wrote:
>
> >> malloc just moves the brk, but the backing pages don't get
> >> allocated until the pages are accessed (during memset).
>
> "erik quanstrom" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]
>
> > i'm just surprised that plan 9
>> malloc just moves the brk, but the backing pages don't get
>> allocated until the pages are accessed (during memset).
"erik quanstrom" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> i'm just surprised that plan 9 overcommits. this makes
> this code nonsensical from user space
>
"erik quanstrom" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> but why introduce unpredictability? who really programs as if
> memory is not overcommited?
Practically everybody.
> i would bet that acme and most
> residents of /sys/src/cmd could do quite bad things to you in thes
<[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> ... The big exception is stack growth. ...
That has indeed been a longstanding problem, and if the OSes
want to grow up they need to solve that problem. It is obvious
how to solve it if speed isn't an issue; just test upon each funct
<[EMAIL PROTECTED]> wrote...
> there's no conceivable reason anyone would want swap, and operating
> systems with working swap suck ;)
Actually there have been many successful OSes with swapping/demand paging.
A way to make it work is for process initiation to include resource
allocation, especi
> On the pc, Plan 9 currently limits user-mode stacks to 16MB.
> On a CPU server with 200 processes (fairly typical), that's
> 3.2GB of VM one would have to commit just for stacks. With
> 2,000 processes, that would rise to 32GB just for stacks.
There's probably no simple answer which is correct
> No, and it would be hard to do it because you'd need ways to compact
> fragmented memory after a lot of mallocs and frees. And then, you'd
> need a way to fix the pointers after compacting.
Is it all localised, or is the code scattered across multiple kernel
modules? Many years ago I put a lot
> perhaps i've been unclear. i don't have any problem dealing with failed
> alloc. malloc has always been able to return 0.
>
> dealing with a page fault due to overcommit is a different story.
that's a slightly different aspect. the note should not be "page fault" but
"out of memory" (or some
>> to any process. suppose i start a program that allocates 8k but between
>> the malloc and the memset, another program uses the last available
>> page in memory, then my original program faults.
>
> yes, and you'll always have to deal with that in some form or another.
> i've started a program,
> to any process. suppose i start a program that allocates 8k but between
> the malloc and the memset, another program uses the last available
> page in memory, then my original program faults.
yes, and you'll always have to deal with that in some form or another.
i've started a program, it alloc
On Mon Sep 3 16:48:59 EDT 2007, [EMAIL PROTECTED] wrote:
> If your machines are regularly running out of VM, something is wrong
> in your environment. I would argue that we'd be better off fixing
> upas/fs to be less greedy with memory than contorting the system to
> try to avoid overcommitting m
> One option for Erik: try changing the segment allocator so that it
> faults in all segment pages on creation. Would this do what you want?
> I will try this if I get time later today. Assuming it is as simple as
> my simple-minded description makes it sound.
grudgingly, i admit it would -- assum
I was thinking that this was probably what we wanted to do for HPC
also, having the option of turning off zero-filling pages
-eric
On 9/3/07, ron minnich <[EMAIL PROTECTED]> wrote:
> One option for Erik: try changing the segment allocator so that it
> faults in all segment pages on
One option for Erik: try changing the segment allocator so that it
faults in all segment pages on creation. Would this do what you want?
I will try this if I get time later today. Assuming it is as simple as
my simple-minded description makes it sound.
If it would, maybe a simple
echo faultall > /
> if we allow overcommitted memory, *any* access of brk'd memory might page
> fault. this seems like a real step backwards in error recovery as most
> programs
> assume that malloc either returns n bytes of valid memory or fails. since
> this assumption is false, either we need to make it true or
venti/copy is just an example; programs may legitimately have large
stacks.
If your machines are regularly running out of VM, something is wrong
in your environment. I would argue that we'd be better off fixing
upas/fs to be less greedy with memory than contorting the system to
try to avoid overc
If you don't want to swap, rename or delete any swap partitions you
have and don't run swap in termrc nor cpurc.
> One might allocate at least 3.2GB of swap for a 4GB machine, but many
> of our machines run with no swap, and we're probably not alone. And
> 200 processes are not a lot. Would you really have over 32GB of swap
> allocated for a 4GB machine with 2,000 processes?
>
> Programs can use a surprisi
>> The current swap just frustrates people who expect it to work, and
>> then have their systems freeze randomly. Maybe by disabling/removing
>> swap support, then if someone really needs swap he will fix it first
>> and then we can add it back.
>
> i'm not sure all the random freezes are caused by
Can't find out for sure until someone fixes it :))
One more reason IMHO why we are best off having it disabled, so when
things freeze we know it is something else.
Best wishes
uriel
On 9/3/07, Charles Forsyth <[EMAIL PROTECTED]> wrote:
> > The current swap just frustrates people who expect it t
> The current swap just frustrates people who expect it to work, and
> then have their systems freeze randomly. Maybe by disabling/removing
> swap support, then if someone really needs swap he will fix it first
> and then we can add it back.
i'm not sure all the random freezes are caused by swap.
in
>> Well, when I used it on an old 32 MB laptop (terminal) and a 64 MB
>> desktop (cpu server), swap would seem to work all right until you
>> hit about 30-40% usage. This was the case with both systems; when
>> I asked about it, a couple other people mentioned the same behavior.
>> The thing is, it
This has been discussed before, but given how hard it is to debug
this, and that nobody seems to have enough interest/motivation to do
so, wouldn't it make more sense to totally remove swap support? I
really can't understand why things like il are removed when they
actually work, and swap is left a
> I don't actually need the swap partition, it's just there... ummm... not
> sure why; I installed on this machine before I found out that swap is
> broken. And it's not that I *think* swap is broken; it's been confirmed
it worked adequately to cover minor shortfalls in memory, which could happe
One might allocate at least 3.2GB of swap for a 4GB machine, but many
of our machines run with no swap, and we're probably not alone. And
200 processes are not a lot. Would you really have over 32GB of swap
allocated for a 4GB machine with 2,000 processes?
Programs can use a surprising amount of
>>> Also, it's broken, broken, broken on Plan 9
>>
>> but could you describe what antisocial behavior it exhibits and how one
>> could reproduce this behavior? i have never used to-disk paging on plan 9,
>> so i don't know.
>>
>
> Well, when I used it on an old 32 MB laptop (terminal) and a 64
>> Also, it's broken, broken, broken on Plan 9
>
> but could you describe what antisocial behavior it exhibits and how one
> could reproduce this behavior? i have never used to-disk paging on plan 9,
> so i don't know.
>
Well, when I used it on an old 32 MB laptop (terminal) and a 64 MB
desktop
> but could you describe what antisocial behavior it exhibits and how one
> could reproduce this behavior? i have never used to-disk paging on plan 9,
> so i don't know.
Last time I tried the machine did freeze, like rock solid. It happened at some
point after the swap partition was being used (sa
> Also, it's broken, broken, broken on Plan 9
but could you describe what antisocial behavior it exhibits and how one
could reproduce this behavior? i have never used to-disk paging on plan 9,
so i don't know.
> and nobody wants to fix it.
this has been a good discussion so far. let's not go o
> On 9/3/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>>
>> Also, it's broken, broken, broken on Plan 9 and nobody wants to fix it.
>> The upside to this is that we can just say how we don't want it anyway,
>> there's no conceivable reason anyone would want swap, and operating
>> systems with
Fixing something nobody uses might be fun, but it's probably low on most
people's stacks because we don't consume too much memory.
On 9/3/07, Gorka Guardiola <[EMAIL PROTECTED]> wrote:
> On 9/3/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> >
> > Also, it's broken, broken, broken on Plan
On 9/3/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> Also, it's broken, broken, broken on Plan 9 and nobody wants to fix it.
> The upside to this is that we can just say how we don't want it anyway,
> there's no conceivable reason anyone would want swap, and operating
> systems with working
[EMAIL PROTECTED] wrote:
> also, why should i have to have swap? i really don't want it. it
> introduces new failure modes and could introduce wide latency
> variations. linux called, it wants its choppy, laggy ui back.
>
Also, it's broken, broken, broken on Plan 9 and nobody wants to fix it.
>> It would be cool to be able to get a handle on being able to shrink
>> the memory occupied by an application dynamically. Malloc (through
>> brk()) grows the memory footprint, but free does not shrink it.
>> The same is true for the stack. Once allocated, it doesn't get freed
>> until the proc
> Most applications probably use much less than 1 MB, but a lot depends
> on who wrote the program. Our threaded programs typically have a 4K
> or 8K (K, not M) fixed-size stack per thread and that works fine,
> although you have to remember not to declare big arrays/structs as
> local variables.
>> If system calls were the only way to change memory allocation, one
>> could probably keep a strict accounting of pages allocated and fail
>> system calls that require more VM than is available. But neither Plan
>> 9 nor Unix works that way. The big exception is stack growth. The
>> kernel aut
>> would have to commit just for stacks. With 2,000 processes, that
>> would rise to 32GB just for stacks.
>
> With 4GB RAM, wouldn't you allocate at least that much swap
> no matter what?
that's pretty expensive if you're booting from flash and not using a remote
fileserver. 8GB flash is expen
> If system calls were the only way to change memory allocation, one
> could probably keep a strict accounting of pages allocated and fail
> system calls that require more VM than is available. But neither Plan
> 9 nor Unix works that way. The big exception is stack growth. The
> kernel automati
Except that swap is, as far as I have been able to figure out, broken.
uriel
On 3 Sep 2007 01:35:14 -0400, Scott Schwartz <[EMAIL PROTECTED]> wrote:
> On Sun, Sep 02, 2007 at 11:38:44PM -0400, [EMAIL PROTECTED] wrote:
> > would have to commit just for stacks. With 2,000 processes, that
> > woul
On Sun, Sep 02, 2007 at 11:38:44PM -0400, [EMAIL PROTECTED] wrote:
> would have to commit just for stacks. With 2,000 processes, that
> would rise to 32GB just for stacks.
With 4GB RAM, wouldn't you allocate at least that much swap
no matter what?
On Sun, Sep 02, 2007 at 06:47:17PM -0700, ron minnich wrote:
> If you can't live with overcommit, maybe you need a wrapper that:
> sets up to catch the note (I am assuming here that you get one; do you?)
That's still a race. Getting all the memory at once is different from
probing for one page at
If system calls were the only way to change memory allocation, one
could probably keep a strict accounting of pages allocated and fail
system calls that require more VM than is available. But neither Plan
9 nor Unix works that way. The big exception is stack growth. The
kernel automatically exte
> but most people can live with the overcommit, as witness the fact that
> most of us do and never know it.
>
> If you can't live with overcommit, maybe you need a wrapper that:
> sets up to catch the note (I am assuming here that you get one; do you?)
> malloc
> zero memory you malloc'ed (which w
> but most people can live with the overcommit, as witness the fact that
> most of us do and never know it.
>
> If you can't live with overcommit, maybe you need a wrapper that:
> sets up to catch the note (I am assuming here that you get one; do you?)
> malloc
> zero memory you malloc'ed (which w
but most people can live with the overcommit, as witness the fact that
most of us do and never know it.
If you can't live with overcommit, maybe you need a wrapper that:
sets up to catch the note (I am assuming here that you get one; do you?)
malloc
zero memory you malloc'ed (which will get the pa
Russ:
| you could argue for some kind of accounting that would
| ensure pages were available, but this could only be
| terribly pessimistic, especially in the case of stacks
| and fork.
Still, that's the way unix worked. You can deal with the pessimism by
allocating lots of backing store, whereas
> the problem is not really as easy as it might seem at first.
> malloc just moves the brk, but the backing pages don't get
> allocated until the pages are accessed (during memset).
>
i'm just surprised that plan 9 overcommits. this makes
this code nonsensical from user space
if((p = mal
> this means that the malloc *succeeded*; it wasn't until i forced
> the pagefault with the memset that i ran out of memory. what's
> going on here?
you know what's going on here. read the subject you wrote.
the problem is not really as easy as it might seem at first.
malloc just moves the brk,
i was trying to tickle a kernel panic, but instead
i think i found a bug. this program was run on
a machine with 1800 MB user space available.
(3552/464510 user)
#include <u.h>
#include <libc.h>

enum{
	Big = 1024*1024*1790,
};

void
main(void)
{
	char *p;

	if((p = malloc(Big)) == nil)
		sysfatal("malloc failed");
	memset(p, 0, Big);	/* the memset, not the malloc, faults */
	exits(nil);
}