[Lxc-users] Hiding container processes from Host/HN's 'ps'
Hi all -

In OpenVZ, setting the sysctl parameter kernel.pid_ns_hide_child = 1 at HN
system startup hides any processes running inside containers from the output
of 'ps' on the hardware node. This makes for a cleaner 'ps' output on the HN,
and prevents inadvertent container malfunctions when something like
'killall -9 httpd' is executed on the HN's command line.

How can I do the same with LXC? My Google searches come up blank.

- Ian

--
WhatsUp Gold - Download Free Network Management Software
The most intuitive, comprehensive, and cost-effective network management
toolset available today. Delivers lowest initial acquisition cost and
overall TCO of any competing solution.
http://p.sf.net/sfu/whatsupgold-sd
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users
Re: [Lxc-users] Hiding container processes from Host/HN's 'ps'
On Tue, 2011-05-03 at 18:53 +0800, ian sison (mailing list) wrote:
> Hi all - In OpenVZ, setting the sysctl parameter kernel.pid_ns_hide_child = 1
> at HN system startup hides any processes running inside containers from the
> output of 'ps'. This makes for a cleaner 'ps' output on the hardware node,
> and prevents inadvertent container malfunctions when something like
> 'killall -9 httpd' is executed on the HN's command line. How can I do the
> same with LXC? My Google searches come up blank.
> - Ian

AFAIK, there's no such thing in the mainline kernel for the moment. This
could be valuable though in the scenario you're exposing.

--
Gregory Kurz                                gk...@fr.ibm.com
Software Engineer @ IBM/Meiosys             http://www.ibm.com
Tel +33 (0)534 638 479                      Fax +33 (0)561 400 420

"Anarchy is about taking complete responsibility for yourself."
        Alan Moore.
Re: [Lxc-users] Hiding container processes from Host/HN's 'ps'
On Tue, 2011-05-03 at 09:47 -0500, Serge Hallyn wrote:
> Quoting ian sison (mailing list) (ian.si...@gmail.com):
> > Hi all - In OpenVZ, setting the sysctl parameter
> > kernel.pid_ns_hide_child = 1 at HN system startup hides any processes
> > running inside containers from the output of 'ps'. This makes for a
> > cleaner 'ps' output on the hardware node, and prevents inadvertent
> > container malfunctions when something like 'killall -9 httpd' is
> > executed on the HN's command line. How can I do the same with LXC?
> > My Google searches come up blank.
>
> It's not currently implemented anywhere that I know of, but you should be
> able to pretty easily hack lxc-ps (take a look at the script) to show you
> all tasks which are not in a container. I think that would be a nice patch
> to push to upstream lxc-ps. 'lxc-ps --host' or something.
>
> thanks,
> -serge

That would be a nice-to-have _best effort_ solution indeed. But it wouldn't
solve the general use case, like killing a task with killall, or top, for
example.

Cheers.

--
Gregory Kurz                                gk...@fr.ibm.com
Software Engineer @ IBM/Meiosys             http://www.ibm.com
Tel +33 (0)534 638 479                      Fax +33 (0)561 400 420

"Anarchy is about taking complete responsibility for yourself."
        Alan Moore.
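Serge's 'lxc-ps --host' idea can be approximated from userspace. The sketch below is a hypothetical helper, not part of lxc-ps, and the "lxc" cgroup-path heuristic is an assumption about how the distribution names container cgroups; it is best-effort, exactly as discussed above, and does nothing for killall or top:

```python
#!/usr/bin/env python
# Rough userspace approximation of the proposed 'lxc-ps --host': list only
# tasks that do not appear to run inside an LXC container, judged by
# whether any of their cgroup paths contain an "lxc" component.
import os
import re

def host_only_pids(proc="/proc"):
    """Return pids whose cgroup paths contain no 'lxc' component."""
    pids = []
    for entry in os.listdir(proc):
        if not entry.isdigit():
            continue
        try:
            with open(os.path.join(proc, entry, "cgroup")) as f:
                cgroups = f.read()
        except (IOError, OSError):
            continue  # task exited between listdir() and open(): skip it
        # cgroup lines look like "hierarchy-id:controllers:/path"
        if not re.search(r":/.*\blxc\b", cgroups):
            pids.append(int(entry))
    return sorted(pids)

if __name__ == "__main__":
    for pid in host_only_pids():
        print(pid)
```

A host admin could pipe this through to kill host daemons only, but as Greg notes, it cannot protect a plain killall or top on the HN.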
Re: [Lxc-users] Hiding container processes from Host/HN's 'ps'
Thanks all for your answers. At least I won't need to scrape any more Google
results for answers to this. As mentioned, it would certainly be a useful
patch if it ever gets implemented in mainline. I hope someone from the LXC
kernel developers is listening to this thread... :)

- Ian

On Tue, May 3, 2011 at 10:59 PM, Greg Kurz <gk...@fr.ibm.com> wrote:
> On Tue, 2011-05-03 at 09:47 -0500, Serge Hallyn wrote:
> > [quoted thread snipped]
> >
> > It's not currently implemented anywhere that I know of, but you should
> > be able to pretty easily hack lxc-ps (take a look at the script) to
> > show you all tasks which are not in a container. I think that would be
> > a nice patch to push to upstream lxc-ps. 'lxc-ps --host' or something.
>
> That would be a nice-to-have _best effort_ solution indeed. But it
> wouldn't solve the general use case, like killing a task with killall,
> or top, for example.
>
> Cheers.
Re: [Lxc-users] mapping host PID - container PID
On Thu, 2011-04-28 at 09:41 -0500, Serge Hallyn wrote:
> Quoting Ulli Horlacher (frams...@rus.uni-stuttgart.de):
> > Is there a way to get the corresponding host PID for a container PID?
> > For example: inside the container, the process init always has PID 1.
> > But what PID has this process in the host process table?
> > "ps aux | grep ..." is not what I am looking for, I want a more robust
> > solution.
>
> There is nothing that gives you a 100% guaranteed correct race-free
> correspondence right now. You can look under /proc/<pid>/root/proc/ to
> see the pids valid in the container, and you can relate the output of
> "lxc-ps --forest" to "ps --forest" output. But nothing under /proc that I
> know of tells you "this task is the same as that task". You can't even
> look at /proc/<pid> inode numbers, since they are different filesystems
> for each proc mount.
>
> It's tempting to say that we should put a per-task unique id under
> /proc/<pid> for each task. However, that would likely be nacked because
> it introduces a new namespace of its own.

An alternative could be to expose the container pid in /proc/<pid>/status.
Could such a patch make it to mainline?

--- a/fs/proc/array.c
+++ b/fs/proc/array.c
@@ -337,6 +337,12 @@ static void task_cpus_allowed(struct seq_file *m, struct task_struct *task)
 	seq_putc(m, '\n');
 }
 
+static void task_vpid(struct seq_file *m, struct task_struct *task)
+{
+	struct pid_namespace *ns = task_active_pid_ns(task);
+	seq_printf(m, "Vpid:\t%d\n", ns ? task_pid_nr_ns(task, ns) : 0);
+}
+
 int proc_pid_status(struct seq_file *m, struct pid_namespace *ns,
 		    struct pid *pid, struct task_struct *task)
 {
@@ -354,6 +360,7 @@ int proc_pid_status(struct seq_file *m, struct pid_namespace *ns,
 	task_cpus_allowed(m, task);
 	cpuset_task_status_allowed(m, task);
 	task_context_switch_counts(m, task);
+	task_vpid(m, task);
 	return 0;
 }

Signed-off-by: Greg Kurz <gk...@fr.ibm.com>
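For illustration, here is how a host-side tool might consume the proposed field. The "Vpid:" name comes from the patch above and does not exist on unpatched kernels, so the parser is exercised against a captured sample; parse_status() and vpid_of() are illustrative names, not an existing API:

```python
# Host-side helper for the proposed "Vpid:" field in /proc/<pid>/status.
# "Vpid" only exists on a kernel carrying the patch above, so parse_status()
# is demonstrated against a captured sample rather than a live read.

def parse_status(text):
    """Parse /proc/<pid>/status content into a {field: value-string} dict."""
    fields = {}
    for line in text.splitlines():
        key, sep, value = line.partition(":")
        if sep:
            fields[key.strip()] = value.strip()
    return fields

def vpid_of(pid):
    """Return pid as seen in the task's own pid namespace, or None if the
    kernel does not expose the (proposed) Vpid field."""
    with open("/proc/%d/status" % pid) as f:
        fields = parse_status(f.read())
    return int(fields["Vpid"]) if "Vpid" in fields else None

# Status excerpt as a patched kernel would emit it:
SAMPLE = "Name:\thttpd\nPid:\t4242\nVpid:\t17\n"
```

With the patch applied, a monitor on the host could read Vpid for each task and join it against what the container's own ps reports.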
Re: [Lxc-users] mapping host PID - container PID
Quoting Greg Kurz (gk...@fr.ibm.com):
> [quoted thread snipped]
>
> An alternative could be to expose the container pid in /proc/<pid>/status.
> Could such a patch make it to mainline?
>
> [patch snipped]
>
> Signed-off-by: Greg Kurz <gk...@fr.ibm.com>

Potentially. With the seccomp+ftrace patchset there was some pushback
against adding its info to /proc/<pid>/status, but that tossed potentially
much more info in (a list of filters). Anyway, if there is a complaint
about that with this patch, then we can just find somewhere else to put it.

The nice thing about this is that it avoids introducing a new namespace.
Since we should only see this value for our own or child pid namespaces,
and those will be preserved across c/r, this is actually a safe thing to
export. So let's try to push this.

Acked-by: Serge Hallyn <serge.hal...@ubuntu.com>

Thanks, Greg.

-serge
Re: [Lxc-users] mapping host PID - container PID
On 05/03/2011 05:36 PM, Greg Kurz wrote:
> [quoted thread snipped]
>
> An alternative could be to expose the container pid in /proc/<pid>/status.
> Could such a patch make it to mainline?
>
> [patch snipped]
>
> Signed-off-by: Greg Kurz <gk...@fr.ibm.com>

I think we should propose this patch for mainline inclusion.

The vpid does not give, on its own, enough information about the pid
namespace. How can we rebuild a pid ns tree? I guess we can look for
vpid 1 as the root node of the process tree, no?

Otherwise:

Acked-by: Daniel Lezcano <daniel.lezc...@free.fr>
Re: [Lxc-users] mapping host PID - container PID
Quoting Daniel Lezcano (daniel.lezc...@free.fr):
> On 05/03/2011 05:36 PM, Greg Kurz wrote:
> > [quoted thread and patch snipped]
>
> I think we should propose this patch for mainline inclusion.
>
> The vpid does not give, on its own, enough information about the pid
> namespace. How can we rebuild a pid ns tree? I guess we can look for
> vpid 1 as the root node of the process tree, no?

You mean find pid 1 for the task's container, and print out its pid in
current's pid_ns, i.e.

	Container_init:	<pid>

That'd be very useful, and, again, does not AFAICS risk introducing a new
namespace.

> Otherwise:
>
> Acked-by: Daniel Lezcano <daniel.lezc...@free.fr>
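Sketching Daniel's and Serge's idea: with both a Vpid and a Container_init field, a host tool could group tasks under the init of their pid namespace and rebuild the host-pid/container-pid mapping. Both field names are from this thread's proposal (neither exists in an unpatched kernel), and the snapshot below is mocked-up data rather than a live /proc scan:

```python
# Rebuild a host-pid -> container-pid mapping from the proposed Vpid and
# Container_init fields, grouping tasks under their pid namespace's init.
# Field names follow the discussion above and are hypothetical.

def group_by_container(status_by_pid):
    """Map each container's init (host pid) to its {host_pid: vpid} tasks.

    status_by_pid: {host_pid: {"Vpid": ..., "Container_init": ...}}
    A Container_init of 0 would mean the task is in a non-descendent pid
    namespace (nothing visible to print), so such tasks are left out.
    """
    containers = {}
    for host_pid, fields in status_by_pid.items():
        init = fields["Container_init"]
        if init == 0:
            continue
        containers.setdefault(init, {})[host_pid] = fields["Vpid"]
    return containers

# Mocked snapshot: host pids 3000/3001 live in a container whose init has
# host pid 3000 (vpid 1); host pid 70 is in a non-descendent namespace.
snapshot = {
    3000: {"Vpid": 1, "Container_init": 3000},
    3001: {"Vpid": 17, "Container_init": 3000},
    70:   {"Vpid": 0, "Container_init": 0},
}
```

Following Daniel's suggestion, the entry whose Vpid is 1 inside each group is the root of that container's process tree.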
Re: [Lxc-users] mapping host PID - container PID
Quoting Daniel Lezcano (daniel.lezc...@free.fr):
> Yes. And I think the positive side effect is that we can determine whether
> the pid belongs to the same pid namespace as the current one when the
> container_init is 1, no?

Yup. (Presumably, if one happens to access a /proc for a non-descendant
pid namespace, we'll print 0 for both the vpid and the container_init pid.)

Sounds great, thanks guys.

-serge