* men...@google.com [2009-07-13 15:15:45]:
> On Tue, Jul 7, 2009 at 5:56 PM, KAMEZAWA
> Hiroyuki wrote:
> >
> > I know people like to wait on a file descriptor for notifications
> > these days. Can't we have an "event" file descriptor in the cgroup
> > layer and make it reusable for other
Hi,
On Tue, Jul 14, 2009 at 09:47:44 -0700, Sukadev Bhattiprolu wrote:
> I don't have any beyond what is in the lxc-source examples. Maybe
> Daniel Lezcano has some.
Just asking :)
> |BTW, where's the canonical source for ns_exec?
>
> It is here: git://git.sr71.net/~hallyn/cr_tests.git
Thanks
Grzegorz Nosek [r...@localdomain.pl] wrote:
| On Mon, Jul 13, 2009 at 11:49:05 -0700, Sukadev Bhattiprolu wrote:
| > Grzegorz Nosek [r...@localdomain.pl] wrote:
| > | Simply run it as container init. Sometimes it oopses immediately,
| >
| > I am trying to reproduce this too and just trying to make
Dave Hansen wrote:
> On Tue, 2009-07-14 at 14:26 -0700, Benjamin Blum wrote:
>> This method looks to be a compromise between Andrew's proposed
>> generalized solution ( http://lkml.org/lkml/2009/7/2/518 ) and the
>> current quick-fix. The problem with it is that it'll require a layer
>> between who
Add checkpoint/restart support for epoll files. This is the minimum
support necessary to recreate the epoll item sets without any pending
events.
This is an RFC to show where I'm going with the patch and give an idea
of how much code I expect it will take. Compiles and boots on x86 but
I hav
On Tue, Jul 14, 2009 at 01:38:30PM -0700, Paul Menage wrote:
> On Tue, Jul 14, 2009 at 10:43 AM, Paul Menage wrote:
> >
> > I've been trying to think of a way to do that. AFAICS the only way to
> > do that reliably would be to move the call to cgroup_fork() that hooks
> > into the parent's cgroup i
On Tue, Jul 14, 2009 at 2:49 PM, Dave Hansen wrote:
> On Tue, 2009-07-14 at 14:26 -0700, Benjamin Blum wrote:
>> This method looks to be a compromise between Andrew's proposed
>> generalized solution ( http://lkml.org/lkml/2009/7/2/518 ) and the
>> current quick-fix. The problem with it is that it'
In file included from ipc/util.h:15,
from ipc/compat.c:35:
include/linux/checkpoint_hdr.h:42:32: error: asm/checkpoint_hdr.h: No such file
or directory
In file included from ipc/util.h:15,
from ipc/compat.c:35:
include/linux/checkpoint_hdr.h:419: error: ‘CKPT_ARCH
On Tue, 2009-07-14 at 14:26 -0700, Benjamin Blum wrote:
> This method looks to be a compromise between Andrew's proposed
> generalized solution ( http://lkml.org/lkml/2009/7/2/518 ) and the
> current quick-fix. The problem with it is that it'll require a layer
> between whoever's using the array an
Indeed.
Alternatively, I could make it case on KMALLOC_MAX_SIZE as follows:
if (size > KMALLOC_MAX_SIZE) {
	/* too large for one contiguous kmalloc chunk: use vmalloc directly */
	ptr = vmalloc(size);
} else {
	/* try kmalloc first; if fragmentation makes it fail, fall back to vmalloc */
	ptr = kmalloc(size, GFP_KERNEL);
	if (!ptr)
		ptr = vmalloc(size);
}
As the free wrapper uses is_vmalloc_addr, it'd work fine and be abl
On Tue, Jul 14, 2009 at 11:34 AM, Dave Hansen wrote:
> On Fri, 2009-07-10 at 16:01 -0700, Ben Blum wrote:
>> +struct cgroup_pidlist {
>> + /* protects the other fields */
>> + struct rw_semaphore mutex;
>> + /* array of xids */
>> + pid_t *list;
>> + /* how many elemen
On Tue, Jul 14, 2009 at 10:43 AM, Paul Menage wrote:
>
> I've been trying to think of a way to do that. AFAICS the only way to
> do that reliably would be to move the call to cgroup_fork() that hooks
> into the parent's cgroup inside the lock on the group leader's thread
> list, and move the fork c
Since syscalls return a long, do_checkpoint() and do_restart() need to also
return a long. On a 64-bit platform that uses a general-purpose register
for the return value, this is needed to avoid corrupting the value of that
saved register if checkpointed while in userspace.
Signed-off-by: Dan Smi
On Jul 13, 2009, at 6:43 PM, KOSAKI Motohiro wrote:
> I like multiple thresholds and a per-threshold file descriptor.
> It solves the multiple-waiters issue.
>
> but How about this?
>
> /cgroup
>   /group1
>     /notifications
>       /threshold-A
>       /threshold-B
Why are you making thi
On Fri, 2009-07-10 at 16:01 -0700, Ben Blum wrote:
> +struct cgroup_pidlist {
> + /* protects the other fields */
> + struct rw_semaphore mutex;
> + /* array of xids */
> + pid_t *list;
> + /* how many elements the above list has */
> + int length;
> + /* h
On Tue, Jul 14, 2009 at 10:47 AM, Dave Hansen wrote:
>
> How big were those allocations that were failing? The code made it
> appear that order-2 (PAGE_SIZE*4) allocations were failing. That's a
> bit lower than I'd expect the page allocator to start failing.
I think it depends on how much fragm
On Tue, 2009-07-14 at 10:28 -0700, Paul Menage wrote:
> On Mon, Jul 13, 2009 at 9:25 PM, KAMEZAWA
> Hiroyuki wrote:
> > My point is
> > - More PIDs, More time necessary to read procs file.
>
> Right now, many pids => impossible to read procs file due to kmalloc
> failure. (This was always the cas
On Tue, Jul 14, 2009 at 10:34 AM, Benjamin Blum wrote:
> procs file). While that's preferable to a global lock, if we can add a
> field to task_struct, a (lockless) flag-based approach might be
> possible.
>
I've been trying to think of a way to do that. AFAICS the only way to
do that reliably wou
On Tue, Jul 14, 2009 at 3:16 AM, Balbir Singh wrote:
> * men...@google.com [2009-07-13 23:49:16]:
>> As a first cut, we were planning to add an rwsem that gets taken for
>> read in cgroup_fork(), released in cgroup_post_fork(), and taken for
>> write when moving an entire process to a new cgroup;
On Mon, Jul 13, 2009 at 9:25 PM, KAMEZAWA
Hiroyuki wrote:
> My point is
> - More PIDs, More time necessary to read procs file.
Right now, many pids => impossible to read procs file due to kmalloc
failure. (This was always the case with cpusets too). So using kmalloc
in those cases is a strict imp
On Tue, Jul 14, 2009 at 12:25 AM, KAMEZAWA
Hiroyuki wrote:
> My point is
> - More PIDs, More time necessary to read procs file.
> This patch boosts it ;) Seems like a "visit this later again" or FIXME patch.
>
> Thanks,
> -Kame
Indeed. You'll notice the TODOs in the code here referring to the
discu
On Mon, Jul 13, 2009 at 11:49:05 -0700, Sukadev Bhattiprolu wrote:
> Grzegorz Nosek [r...@localdomain.pl] wrote:
> | Simply run it as container init. Sometimes it oopses immediately,
>
> I am trying to reproduce this too and just trying to make sure I get
> your environment correctly. I have just
These "MIA" messages are false alarms, but once given they
break the restart. I'm working on a fix.
Oren.
Sukadev Bhattiprolu wrote:
>
> Sorry. I was on the wrong machine which has a different kernel. Will
> re-run the test tomorrow.
>
>
> Sukadev Bhattiprolu [suka...@linux.vnet.ibm.com] wrot
Munehiro Ikeda wrote:
> Vivek Goyal wrote, on 07/13/2009 12:03 PM:
>> On Fri, Jul 10, 2009 at 09:56:21AM +0800, Gui Jianfeng wrote:
>>> Hi Vivek,
>>>
>>> This patch exports a cgroup-based per-group request limits interface
>>> and removes the global one. Now we can use this interface to perform
>>
Vivek Goyal wrote:
> On Fri, Jul 10, 2009 at 09:56:21AM +0800, Gui Jianfeng wrote:
>> Hi Vivek,
>>
>> This patch exports a cgroup-based per-group request limits interface
>> and removes the global one. Now we can use this interface to perform
>> different request allocation limitation for differen
Sorry. I was on the wrong machine which has a different kernel. Will
re-run the test tomorrow.
Sukadev Bhattiprolu [suka...@linux.vnet.ibm.com] wrote:
|
| When I run the 'process-tree/run-pthread1.sh' test on ckpt-v17-rc1,
| after restart, I see following three processes running in 'ps'
|
|
* men...@google.com [2009-07-13 23:49:16]:
> On Mon, Jul 13, 2009 at 10:56 PM, Balbir Singh
> wrote:
> >>
> >> Waiting for the next scheduling point might be too long, since a
> >> thread can block for arbitrary amounts of time and keeping the marker
> >> around for arbitrary time (unless we add
When I run the 'process-tree/run-pthread1.sh' test on ckpt-v17-rc1,
after restart, I see following three processes running in 'ps'
root 21165 21114 0 00:11 ttyp0 00:00:00 ../ns_exec -cpuimP pid.pthread1
-- /home/suka/ckpt/user-cr/mktree -vd
root 21167 21165 0 00:11 ttyp0 00:0