On Wed, Sep 28, 2005 at 01:46:15PM +0200, Blaisorblade wrote:
> And I am wondering about whether the recent "eactivate_all_fds failed, errno = 9"

I found this one, although I hadn't noticed the missing 'd'.  It turns out
that close_chan closes file descriptors but never frees the associated IRQs.
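
Roughly the shape of the fix (from memory, so treat the chan fields and the
free_irq_by_fd() helper as approximate):

    /* Release the IRQ registered against the descriptor before
     * closing the descriptor itself - closing the fd alone leaves
     * the IRQ allocated.
     */
    static void close_one_chan(struct chan *chan)
    {
            if (!chan->opened)
                    return;

            free_irq_by_fd(chan->fd);

            if (chan->ops->close != NULL)
                    (*chan->ops->close)(chan->fd, chan->data);

            chan->opened = 0;
            chan->fd = -1;
    }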

> But still, Jeff, how can we expect that malloc won't stomp over all our data 
> which we preallocated with kmalloc and such?

Because I leave a decent amount of room between the brk and the start of
kmalloc-able memory (which is free_pages()-d during boot).  It's a couple of
megabytes, I think.  No guarantees, of course, but there isn't a lot of
mallocing happening.
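
Something like this is the idea - the names here are made up, but the layout
calculation is the point:

    /* Start kmalloc-able memory a safe distance past the current
     * brk, so libc's malloc has a couple of megabytes of room to
     * grow before it can reach anything handed to free_pages().
     */
    #include <unistd.h>

    #define BRK_GAP (2 * 1024 * 1024)

    static unsigned long kmem_start(void)
    {
            unsigned long brk_end = (unsigned long) sbrk(0);

            /* Page-align the start of kernel memory. */
            return (brk_end + BRK_GAP + 4095) & ~4095UL;
    }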

> There's no mention of this at all in your original changelog, and it's 
> non-trivial, so I assume you didn't realize the issue.
> 
> The git commit is 026549d28469f7d4ca7e5a4707f0d2dc4f2c164c.
> 
> On the other side, could you explain why you don't like kmalloc in first 
> place? It surely works.

I'm not sure - let me think about it.  But I fixed that for a reason - it
was causing a crash somehow; I just don't remember the details.

> The real solution for this warning is to replace um_kmalloc with malloc(), 
> and set, during shutdown, kmalloc_only_atomic - which would switch 
> __wrap_malloc() from um_kmalloc to um_kmalloc_atomic.

Yeah, that's userspace code, so it should probably just use malloc.
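
For concreteness, the switch you describe would look something like this
(a sketch only - kmalloc_only_atomic is your proposed flag, and
__wrap_malloc() is where the --wrap,malloc link trick routes libc's malloc
calls):

    extern int kmalloc_only_atomic;  /* set once shutdown starts */

    void *__wrap_malloc(int size)
    {
            /* Late in shutdown, sleeping allocations aren't safe,
             * so fall back to the atomic variant.
             */
            if (kmalloc_only_atomic)
                    return um_kmalloc_atomic(size);

            return um_kmalloc(size);
    }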

> Or better yet, simply test in_atomic() and irqs_disabled() to choose between 
> the atomic and normal versions.

I don't really like testing in_atomic() because you should generally know
what context you're running in.
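
For the record, that test would look something like this (a sketch -
um_kmalloc_auto is a made-up name, and I'd still rather callers knew their
own context):

    void *um_kmalloc_auto(int size)
    {
            /* Pick GFP_ATOMIC whenever sleeping isn't safe. */
            gfp_t flags = (in_atomic() || irqs_disabled()) ?
                    GFP_ATOMIC : GFP_KERNEL;

            return kmalloc(size, flags);
    }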

                                Jeff

