Re: [Lxc-users] GUI container
Hi,

On Tue, Feb 15, 2011 at 01:20:15AM -0800, Nirmal Guhan wrote:

> >> I have set up an short howto on setting up an xserver in an lxc linux
> >> container, see
> >> http://box.matto.nl/lxcxserver.html

[ .. ]

> > Am trying these steps and installed X, xdm, xterm and blackbox in the
> > lxc container (which is fedora 12). Restarted my container and I see
> > that xdm service is running.

Ah, great :)

> > However a "Xnest :1 -query <container ip>" from my workstation shows
> > up just a black window. On the container log file, I see
> > (WW) xf86OpenConsole: setpgid failed: Operation not permitted
> > (WW) xf86OpenConsole: setsid failed: Operation not permitted
> > Fatal server error:
> > xf86OpenConsole: Cannot open virtual console 8 (No such file or directory)
> >
> > Do you have any clues? selinux is disabled in my system. Also though I
> > installed blackbox in my container, not sure how that will be used
> > since xdm does not have references to it. Can you clarify please?

I have no recent experience with fedora, I will try to set up a fedora
container this weekend and look into it.

Cheers,

Matto

--
The ultimate all-in-one performance toolkit: Intel(R) Parallel Studio XE:
Pinpoint memory and threading errors before they happen.
Find and fix more than 250 security defects in the development cycle.
Locate bottlenecks in serial and parallel code that limit performance.
http://p.sf.net/sfu/intel-dev2devfeb
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users
Re: [Lxc-users] Zombie container
On 2/14/2011 6:50 PM, Trent W. Buck wrote:
> Daniel Lezcano writes:
>
>> As a quick fix, I suggest you look what application created the new
>> namespace. Launch your container and then look at
>> /cgroup/blackbird/1234/tasks and look for the command line associated
>> with the pid in this file. I suspect vsftpd could be the culprit. If
>> this is the case, there is an option to disable the namespace
>> creation.
>
> Or, of course, pick a different application :-)
>
> If it is vsftpd, I *strongly* recommend switching to SFTP (part of SSH)
> for writes, and HTTP for reads. http://mywiki.wooledge.org/FtpMustDie

Well, of course, but what's that got to do with LXC or the namespace
trick that vsftpd happens to use? Your observations, which everyone
already knows, show that the ftp protocol is problematic. Granted, but
so what? The discussion here is how to get all the commonly used tools
that currently run outside of containers working within containers,
using lxc -- not which tools to use.

3 things:

1) The vsftpd problem is not a problem with the ftp protocol. Apache or
any other service or app that meets your religious or aesthetic approval
might have the same or a similar problem at any time. Here we are only
interested in containerizing anything that is currently done on
traditional servers. For better or for worse, FTP is widely used on
traditional servers, and specifically vsftpd is. And so the discussion
is about how to use vsftpd within a container, not whether to use ftp.

2) As if anyone had any choice in the matter anyway, since most uses of
any communication protocol, such as ftp, involve two different parties,
not yourself at both ends. Even if you were so gauche as to try to
dictate internal IT policies, procedures and technologies to your own
customers and vendors, you still don't get to dictate to the customers
and vendors two or more steps removed from your own customers and
vendors.
So when _big honking global bank/manufacturer/retailer/shipper/etc_ says
they will ftp to you or you to them, you just *&^*7 do it. Oh, you can
offer the alternatives, and occasionally you get lucky, but that doesn't
remove the need to make ftp work. The same goes for every other commonly
used technology that you don't happen to personally like.

3) What makes http so special only for reading and sftp so special only
for writing? Depending on my security needs and other factors I
routinely use http for writing and/or sftp for reading. I also use rsync
(native, not via ssh or rsh) for both reading and writing in many
situations where most people use ftp or sftp or http. Conversely, I
never use nfs and only use samba extremely rarely, but I'm sure these
technologies are perfectly justifiable and required for other people in
other situations. The choice of tool depends entirely on the job at
hand, and it's utterly silly to say what should and should not be used
except within the context of a specific job; even then the answer only
applies to that one specific job in that one specific context.

--
bkw
Re: [Lxc-users] cgroup.net_cls and tc
On 14/02/2011, Andre Nathan wrote:
> The container is configured with the following line:
>
> lxc.cgroup.net_cls.classid = 0x10002
>
> And I have the following tc rules:
>
> tc qdisc add dev eth0 root handle 1: htb default 30
> tc class add dev eth0 parent 1: classid 1:2 htb rate 1mbit
> tc filter add dev eth0 protocol ip parent 1:0 prio 1 handle 1: cgroup

As an update, setting the default to "2" made it work. So that confirms
tc is working, but for some reason it can't see the net_cls.classid in
the container's traffic.

Andre
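For anyone puzzling over the classid value here, a short sketch may help: the 32-bit net_cls classid carries the tc major handle in its upper 16 bits and the minor handle in its lower 16 bits, so 0x10002 names class 1:2, which is exactly the htb class in the rules above.

```shell
# Decode a net_cls classid into the tc major:minor handle it selects.
# 0x10002 -> major 0x1, minor 0x2 -> tc class 1:2
classid=0x10002
major=$(( classid >> 16 ))
minor=$(( classid & 0xffff ))
printf 'classid 0x%x -> tc class %x:%x\n' "$classid" "$major" "$minor"
```

If the filter can't see the classid on the container's traffic, the class itself is still reachable by making it the qdisc default, which matches Andre's observation.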
Re: [Lxc-users] Zombie container
> "DL" == Daniel Lezcano writes:

DL> * simply do rm -rf /cgroup/blackbird (don't care about the
DL> errors).
>>
>> This fails with "Operation not permitted" and the problem
>> persists.

DL> Do you try to remove the directories as root when the container
DL> exited ?

Yes.

DL> It is not a kernel problem, it's the expected behavior but
DL> unfortunately the cgroup automatic creation does not really fit
DL> with the namespace concept. This is why the ns_cgroup will be
DL> removed in the next kernel version in order to manage the cgroup
DL> consistently.

OK, I'll simply have to live with the problem (it's not fatal) until
then.
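One note on why plain rm -rf fails here: cgroup directories cannot be unlinked like regular files; the kernel only honours rmdir(2), and only on a cgroup that has no tasks and no child cgroups left. A minimal sketch (container name and mount point are assumptions) that removes leftover nested cgroups deepest-first:

```shell
# Remove leftover nested cgroup directories, deepest-first.
# cgroupfs only honours rmdir(2) on an empty cgroup (no tasks, no
# children), which is why an ordinary rm -rf reports EPERM.
remove_nested_cgroups() {
    # $1 = the container's cgroup directory, e.g. /cgroup/blackbird
    find "$1" -mindepth 1 -depth -type d -exec rmdir {} \; 2>/dev/null || true
}

# Example (path assumed): remove_nested_cgroups /cgroup/blackbird
```

This still fails, of course, while some process is pinned inside one of the nested cgroups; those have to exit (or be killed) first.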
Re: [Lxc-users] FUSE and capabilities
> "TWB" == Trent W Buck writes:

TWB> I suppose if I had to support desktop wank, I would set up a
TWB> udev rule on the host to mount removable devices in
TWB> /media/, and then rbind-mount /media into the
TWB> container(s).

This might be a good idea for some systems, but it wouldn't work well
for things like formatting, burning or using FUSE. Perhaps the proper
solution would be to add a new capability for secure mounts to the
kernel.

The question is how much damage can be done, in theory, to the host and
other containers when a container is given the CAP_SYS_ADMIN capability,
assuming lxc.cgroup.devices are set properly? I don't care much about
DoS problems, as those can happen with almost any non-paranoid setup.
But can CAP_SYS_ADMIN significantly increase the risk of compromising
the host or other containers?
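For what it's worth, the exposure from CAP_SYS_ADMIN is bounded somewhat by keeping the device whitelist tight. A sketch of the kind of lxc.cgroup.devices setup meant above (the device numbers are the usual Linux ones; which entries you actually need is container-specific):

```
# Deny all devices by default, then whitelist the few the container needs.
lxc.cgroup.devices.deny  = a
lxc.cgroup.devices.allow = c 1:3 rwm    # /dev/null
lxc.cgroup.devices.allow = c 1:5 rwm    # /dev/zero
lxc.cgroup.devices.allow = c 1:9 rwm    # /dev/urandom
lxc.cgroup.devices.allow = c 5:2 rwm    # /dev/ptmx
lxc.cgroup.devices.allow = c 10:229 rwm # /dev/fuse
```

Even with this in place, CAP_SYS_ADMIN still covers mount(2) and a long list of other operations, so the whitelist limits but does not eliminate the risk the question asks about.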
Re: [Lxc-users] Zombie container
On 02/15/2011 10:17 AM, Milan Zamazal wrote:
>> "DL" == Daniel Lezcano writes:
>
> DL> It is probable you have an application creating new namespaces
> DL> in the container. That's triggering a new cgroup creation which
> DL> is nested with the container's one. This is a kernel feature
> DL> (removed for the next kernel version).
>
> Thank you for explanation.
>
> By watching when these subdirectories get created I discovered the
> problem appears when I run `fusermount -u'.
>
> DL> * simply do rm -rf /cgroup/blackbird (don't care about the
> DL> errors).
>
> This fails with "Operation not permitted" and the problem persists.

Do you try to remove the directories as root when the container exited?

> DL> Launch your container and then look at
> DL> /cgroup/blackbird/1234/tasks and look for the command line
> DL> associated with the pid in this file.
>
> The `tasks' file is empty. But it must be fusermount or something
> related to its invocation.

Ok. Interesting.

> DL> Hope that helps.
>
> Thank you for help. Now I know what creates the problem, but I still
> don't know how to safely prevent it or remedy it. Maybe it's a kernel
> problem (I use standard kernel 2.6.32 from Debian)?

It is not a kernel problem, it's the expected behavior, but
unfortunately the cgroup automatic creation does not really fit with
the namespace concept. This is why the ns_cgroup will be removed in the
next kernel version in order to manage the cgroup consistently.

http://git.kernel.org/?p=linux/kernel/git/sfr/linux-next.git;a=blob;f=Documentation/feature-removal-schedule.txt;h=ada3db8fc9f6307b0b9b51b503353a96b995b62d;hb=b7bbcc2b04070ebd77c827e8ebbd08a5b7493004
Re: [Lxc-users] Batch invocation of apt-get?
> "TWB" == Trent W Buck writes:

TWB> Sounds like you want a change management system, like puppet or
TWB> cfengine.

I run only about 20 virtual servers / containers, each of them running
a different kind of service, so these tools are apparently not for me.

TWB> By blowing away and rebuilding containers that aren't in
TWB> production (which I have fully scripted), and by ssh'ing into
TWB> containers that are in production.

OK, so it seems there is no special tool to do it. In that case I can
use something similar to what you do. Another way might be to initiate
automated upgrades from the host, e.g. by putting something into
.../CONTAINER/etc/cron.d/ or so when upgrades should be performed; this
would also make stopped containers perform the upgrade as soon as they
are started.

Thank you for help.
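The cron.d idea above can be sketched roughly as follows; the rootfs path and the schedule are assumptions, so adjust them to your own layout:

```shell
#!/bin/sh
# From the host, drop an upgrade cron job into a container's rootfs.
# The job then runs inside the container on its own schedule, and a
# container that was stopped picks it up as soon as it is started again.
install_upgrade_cron() {
    # $1 = the container's rootfs, e.g. /var/lib/lxc/web1/rootfs (assumed layout)
    mkdir -p "$1/etc/cron.d"
    cat > "$1/etc/cron.d/auto-upgrade" <<'EOF'
# m h dom mon dow user command
30 4 * * * root apt-get -qq update && apt-get -y -qq dist-upgrade
EOF
    chmod 644 "$1/etc/cron.d/auto-upgrade"
}

# Example (hypothetical container name):
# install_upgrade_cron /var/lib/lxc/web1/rootfs
```

One design note: because the file lives in the container's own /etc/cron.d, the upgrade runs with the container's cron and package database, not the host's, which is exactly what you want for per-container upgrades.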
Re: [Lxc-users] GUI container
On Mon, Feb 14, 2011 at 5:06 PM, Nirmal Guhan wrote:
> On Fri, Dec 17, 2010 at 10:46 AM, matto fransen wrote:
>> Hi,
>>
>> On 17 December 2010 11:28, Matto Fransen wrote:
>>
>>> Do I need to start container with X (level 5?). I tried these steps :
>>
>> I have set up an short howto on setting up an xserver in an lxc linux
>> container, see
>> http://box.matto.nl/lxcxserver.html
>>
>> Cheers,
>>
>> Matto
>
> Hi,
>
> Am trying these steps and installed X, xdm, xterm and blackbox in the
> lxc container (which is fedora 12). Restarted my container and I see
> that xdm service is running. However a "Xnest :1 -query <container ip>"
> from my workstation shows up just a black window. On the container log
> file, I see
> (WW) xf86OpenConsole: setpgid failed: Operation not permitted
> (WW) xf86OpenConsole: setsid failed: Operation not permitted
> Fatal server error:
> xf86OpenConsole: Cannot open virtual console 8 (No such file or directory)
>
> Do you have any clues? selinux is disabled in my system. Also though I
> installed blackbox in my container, not sure how that will be used
> since xdm does not have references to it. Can you clarify please?
>
> Thanks,
> ~nirmal
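On the blackbox question: xdm itself never references a window manager; after a successful login it runs the user's ~/.xsession (via its Xsession script), and that is where blackbox comes in. A minimal sketch (the file contents are just an illustration):

```shell
# Write a minimal ~/.xsession that starts an xterm and blackbox.
# xdm executes this file after a successful login.
write_xsession() {
    # $1 = the home directory to write into
    cat > "$1/.xsession" <<'EOF'
#!/bin/sh
xterm &
exec blackbox
EOF
    chmod +x "$1/.xsession"
}

# Example: write_xsession "$HOME"
```

The `exec blackbox` line matters: when the window manager exits, the session ends and xdm shows the login screen again.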
Re: [Lxc-users] Zombie container
> "DL" == Daniel Lezcano writes:

DL> It is probable you have an application creating new namespaces
DL> in the container. That's triggering a new cgroup creation which
DL> is nested with the container's one. This is a kernel feature
DL> (removed for the next kernel version).

Thank you for explanation.

By watching when these subdirectories get created I discovered the
problem appears when I run `fusermount -u'.

DL> * simply do rm -rf /cgroup/blackbird (don't care about the
DL> errors).

This fails with "Operation not permitted" and the problem persists.

DL> Launch your container and then look at
DL> /cgroup/blackbird/1234/tasks and look for the command line
DL> associated with the pid in this file.

The `tasks' file is empty. But it must be fusermount or something
related to its invocation.

DL> Hope that helps.

Thank you for help. Now I know what creates the problem, but I still
don't know how to safely prevent it or remedy it. Maybe it's a kernel
problem (I use standard kernel 2.6.32 from Debian)?
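Daniel's suggestion to inspect the tasks file can be scripted; a small sketch (container name and mount point are assumptions) that walks any nested sub-cgroups and prints the command line of each task still pinned inside:

```shell
# Show which processes sit in nested sub-cgroups of a container's cgroup.
list_stray_cgroups() {
    # $1 = the container's cgroup directory, e.g. /cgroup/blackbird
    for d in "$1"/*/; do
        [ -f "${d}tasks" ] || continue
        echo "stray cgroup: $d"
        while read -r pid; do
            # /proc/<pid>/cmdline is NUL-separated; make it readable
            printf '  %s: %s\n' "$pid" \
                "$(tr '\0' ' ' < "/proc/$pid/cmdline" 2>/dev/null)"
        done < "${d}tasks"
    done
}

# Example (path assumed): list_stray_cgroups /cgroup/blackbird
```

An empty tasks file, as reported above, means the namespace-creating process (here apparently fusermount) already exited but left the nested cgroup directory behind, which is why nothing shows up.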