When we run two NFS clients and an NFS server in the following way, we
encounter a livelock/starvation condition.
    Machine A        Machine B
    Client1          Client2
    Server
As shown in the figure, we run one client and the server on one machine, and
run another client on a second machine. When Client1 and
2006/12/27, yunfeng zhang <[EMAIL PROTECTED]>:
For a multiple-address-space, multiple-memory-inode architecture, we can introduce
a new core object -- a section -- which has several features
Do you mean "in-memory inode" or "memory node (pglist_data)" by "memory inode"?
The idea I raised is whethe
2006/12/26, yunfeng zhang <[EMAIL PROTECTED]>:
In the patch, I introduce a new page system -- pps -- which can improve
Linux swap subsystem performance; you can find a new document in
Documentation/vm_pps.txt. In brief, the swap subsystem should scan/reclaim
pages per VMA instead of via the zone::active list ...
On 9/2/05, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Hi, everyone.
>
>I know a kernel oops can be seen by running 'dmesg', but if
> the kernel crashed, we cannot run it, so I reconfigured syslogd
> to support remote forwarding; the content of syslogd.conf on the
> debug machine is:
When the panic is calle
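The quoted message breaks off here, but for context, remote forwarding in
classic sysklogd is configured with an `@host` action in syslog.conf. A
minimal sketch follows; `debughost` is a placeholder hostname, not taken
from the original mail:

```
# syslog.conf on the crashing machine: forward kernel messages
# (and, optionally, everything else) to the remote log host.
kern.*    @debughost
*.*       @debughost
```

The receiving machine's syslogd must be started with the -r option so that
it accepts remote messages on UDP port 514.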
On 9/2/05, Richard Hayden <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> It appears there is no protection in badness() (called by
> out_of_memory() for each process) when it reads p->mm->total_vm. Another
> processor (or a kernel preemption) could presumably run do_exit and then
> exit_mm, freeing the
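The quoted report is truncated, but one conventional way to close a race of
this kind is to pin the mm before dereferencing it. The fragment below is a
kernel-style C sketch of that pattern, not the actual fix applied to any
particular tree:

```c
/*
 * Sketch only: get_task_mm() takes a reference on the task's mm
 * (or returns NULL if the task has already passed exit_mm()), so
 * total_vm can be read without racing against process exit.
 */
struct mm_struct *mm = get_task_mm(p);
if (mm) {
        points = mm->total_vm;
        mmput(mm);      /* drop the reference taken above */
}
```

Reading `p->mm->total_vm` directly, as the quoted badness() code does, is
safe only if something else guarantees the mm cannot be freed underneath it.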
2005/9/1, jmerkey <[EMAIL PROTECTED]>:
> Bernd,
>
> It might be helpful for someone to look at these sections of code I had
> to patch in 2.6.9.
> I discovered a case where the kernel scheduler will pass NULL for the
> array argument when I started hitting the extreme upper range > 200 MB/s combin