Re: [Users] Debian-style init scripts considered harmful?
Kir Kolyshkin wrote:
> Steve Wray wrote:
>> Hi there,
>>
>> Debian uses start-stop-daemon in its init scripts to, among other things, stop services. From the man page:
>>
>>> Note: unless --pidfile is specified, start-stop-daemon behaves similar to killall(1). start-stop-daemon will scan the process table looking for any processes which match the process name, uid, and/or gid (if specified). Any matching process will prevent --start from starting the daemon. All matching processes will be sent the KILL signal if --stop is specified. For daemons which have long-lived children which need to live through a --stop you must specify a pidfile.
>>
>> For example, nfs-kernel-server does not use --pidfile; it looks for nfsd processes to kill. Suppose that the OpenVZ host and one of its guests were both running NFS and, on the host, one were to run /etc/init.d/nfs-kernel-server stop. As I understand it, this would have the side effect of killing off the nfsd processes on the guest.
>
> That is right, and this is just one of the reasons why we don't recommend running anything (beyond the needed bare minimum, like sshd) on the host system.

In my case this isn't practical; I use cfengine to manage and maintain virtually all of our servers, and we have a lot of servers. In fact, it was cfengine that brought this to my attention: I restarted it on the OpenVZ host and then started to get Nagios alerts about cfengine not running on any of the guests. It was at this point that I realised that OpenVZ isn't a virtualisation environment; it's a very *very* sophisticated chroot.

> There is a solution and a workaround for the problem. The solution is, of course, to fix the bad init scripts. This isn't OpenVZ-specific -- relying on process names is wrong; any user can run a process named nfsd, and it should not be killed. The workaround is a feature that hides guests' processes from the host system. This is implemented in OpenVZ kernels >= 2.6.24 as per bug #511 (http://bugzilla.openvz.org/511).

Well, I look forward to trying this out some time!

___
Users mailing list
Users@openvz.org
https://openvz.org/mailman/listinfo/users
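The safe pattern Kir is describing -- stopping by recorded PID rather than by scanning for a process name -- can be sketched in plain shell. This is only an illustration, not the actual nfs-kernel-server script: the daemon and PIDFILE path are made up, and a real Debian init script would pass --pidfile to start-stop-daemon rather than calling kill directly.

```shell
# Sketch: stop a daemon via its recorded PID rather than by name, so that
# identically named processes (e.g. inside a container) are left alone.
# PIDFILE and the daemon are hypothetical placeholders.
PIDFILE="${PIDFILE:-/var/run/mydaemon.pid}"

stop_daemon() {
    if [ ! -r "$PIDFILE" ]; then
        # No pidfile: refuse to fall back to killing by name, which is
        # exactly the behaviour that reaches into containers.
        echo "no pidfile at $PIDFILE; refusing to kill by name" >&2
        return 1
    fi
    pid=$(cat "$PIDFILE")
    kill "$pid" 2>/dev/null
    rm -f "$PIDFILE"
}
```

The point is purely that the PID comes from a file the daemon itself wrote, never from a process-table scan.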
[Users] Debian-style init scripts considered harmful?
Hi there,

Debian uses start-stop-daemon in its init scripts to, among other things, stop services. From the man page:

> Note: unless --pidfile is specified, start-stop-daemon behaves similar to killall(1). start-stop-daemon will scan the process table looking for any processes which match the process name, uid, and/or gid (if specified). Any matching process will prevent --start from starting the daemon. All matching processes will be sent the KILL signal if --stop is specified. For daemons which have long-lived children which need to live through a --stop you must specify a pidfile.

For example, nfs-kernel-server does not use --pidfile; it looks for nfsd processes to kill. Suppose that the OpenVZ host and one of its guests were both running NFS and, on the host, one were to run:

    /etc/init.d/nfs-kernel-server stop

As I understand it, this would have the side effect of killing off the nfsd processes on the guest. If true, this would seem somewhat... harsh?
Re: [Users] running with no limits?
Benoit Branciard wrote:
> Steve Wray wrote:
>> No answers? It's been a while... We have a bunch of OpenVZ VMs, nothing 'in production'. The host has 4G of RAM. I want all the VMs to have access to 4G of RAM and all the sockets and other resources they may need at any time; I don't have time to carefully tune the parameters of all of them to just what they need and no more. I don't mind that they are over-committed; I just want them to have maximum resources.
>
> vzsplit -n 1 -f max-limits
>
> then for each container NNN:
>
> vzctl set NNN --applyconfig max-limits --save

Fantastic, thanks guys. The thing is that almost all of the time, almost all of these VMs will be unused -- but when they are used, they can use a lot of resources. Stopping and starting them as they are needed isn't an option, though; I need to have them all running at all times. Unfortunately, the applications running on them can behave quite mysteriously when something like tcpsndbuf hits a limit, and I've seen some very, very strange behaviour out of applications when these limits get hit... it's been truly baffling sometimes!
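For the archives, Benoit's recipe as a script. vzsplit and vzctl only exist on an OpenVZ host, so this sketch defaults to a dry run: RUN=echo prints each command instead of executing it, and the container IDs are placeholders -- substitute your own and clear RUN on a real host.

```shell
# Dry-run sketch of the "give every container the whole box" recipe.
# RUN=echo prints commands; clear it (RUN=) on a real OpenVZ host.
RUN="${RUN:-echo}"

# Generate a config that allots (nearly) all host resources to a single
# container -- that is what splitting the host into n=1 parts means:
$RUN vzsplit -n 1 -f max-limits

# Apply that config to each container; the IDs below are placeholders:
for ve in 101 102 103; do
    $RUN vzctl set "$ve" --applyconfig max-limits --save
done
```

Note this deliberately over-commits the host, exactly as discussed in the thread: every container believes it may use everything.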
Re: [Users] running with no limits?
No answers? It's been a while...

We have a bunch of OpenVZ VMs, nothing 'in production'. The host has 4G of RAM. I want all the VMs to have access to 4G of RAM and all the sockets and other resources they may need at any time; I don't have time to carefully tune the parameters of all of them to just what they need and no more. I don't mind that they are over-committed; I just want them to have maximum resources.

Thanks

Steve Wray wrote:
> Hi there,
>
> I have a server running OpenVZ with several VMs on it. At the moment I have to specify various limits in each VM's configuration and, when they hit those limits, strange things can happen. Ideally I'd let them all have full access to all the resources available on the physical server; they can fight it out among themselves if they want to compete. Most of these VMs are quiescent and not actually doing much most of the time anyway. I've not been able to figure out how to configure OpenVZ like this, though. Is it something I have to set in each VM's config file, or is it a server-wide thing? Any ideas? Thanks!
[Users] running with no limits?
Hi there,

I have a server running OpenVZ with several VMs on it. At the moment I have to specify various limits in each VM's configuration and, when they hit those limits, strange things can happen. Ideally I'd let them all have full access to all the resources available on the physical server; they can fight it out among themselves if they want to compete. Most of these VMs are quiescent and not actually doing much most of the time anyway.

I've not been able to figure out how to configure OpenVZ like this, though. Is it something I have to set in each VM's config file, or is it a server-wide thing? Any ideas? Thanks!
Re: [Users] kernel exploit in the wild
John Maclean wrote:
> There's a kernel exploit in the wild [0]. I've run it on a couple of nodes and it __does__ allow a non-root user root access. Has anyone tried it on a hardware node or within a VE? Within a VE all I got was a kernel oops, and it was too low-level for me to decipher...
>
> [0] https://bugzilla.redhat.com/show_bug.cgi?id=432229

I tried it. In a VE it gives a segfault:

    ./a.out
    --- Linux vmsplice Local Root Exploit By qaaz ---
    [+] mmap: 0x0 .. 0x1000
    [+] page: 0x0
    [+] page: 0x20
    [+] mmap: 0x4000 .. 0x5000
    [+] page: 0x4000
    [+] page: 0x4020
    [+] mmap: 0x1000 .. 0x2000
    [+] page: 0x1000
    [+] mmap: 0xb7e37000 .. 0xb7e69000
    Segmentation fault

If you then go and look at the host, there's an oops:

    BUG: unable to handle kernel NULL pointer dereference at virtual address

The system becomes unstable after this.
Re: [Users] Cloning and permissions
Gregor Mosheh wrote:
> Peter Machell wrote:
>> vzctl stop xx
>> cp -R /vz/private/xx /vz/private/xxx
>> cp -R /etc/vz/conf/xx.conf /etc/vz/conf/xxx.conf
>> vzctl start xxx
>
> To have cp preserve permissions, use the -p flag. I use tar instead of cp for situations like this. Tar, unlike cp, is smart enough to handle symbolic links, permissions, ownerships, and sparse files. Use this age-old Unix Jedi trick:
>
> mkdir /home/vz/private/2
> cd /home/vz/private/1
> tar cf - * | ( cd ../2 ; tar xvf - )

Doesn't that need a -p option on extract, i.e. 'tar xpvf'? I tend to use 'tar --numeric-owner -cf' as well, just in case.
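Putting the thread's suggestions together, the pipeline with all the follow-up flags folded in looks like this (wrapped in a function so the paths are explicit; assumes GNU tar, as on Debian):

```shell
# Clone a directory tree preserving permissions, ownership, symlinks and
# sparse files -- the "Unix Jedi trick" plus the flags from the replies.
clone_tree() {
    src=$1
    dst=$2
    mkdir -p "$dst"
    # -S handles sparse files; --numeric-owner stores raw uids/gids rather
    # than remapping them through the *host's* passwd/group, which is what
    # you want for a container's tree; -p on extract restores permissions.
    # Archiving '.' instead of '*' also picks up dot-files at the top level.
    (cd "$src" && tar -S --numeric-owner -cf - .) | (cd "$dst" && tar xpf -)
}
# e.g. clone_tree /home/vz/private/1 /home/vz/private/2
```

Run the extract side as root if the tree contains files owned by other uids; as a non-root user tar silently skips the chown step.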
Re: [Users] strange problem with nagios nrpe server
Listaccount wrote:
> Quoting Gregor Mosheh <[EMAIL PROTECTED]>:
>> Kirill Korotaev wrote:
>>> Just for the history/other users, the resolution of the problem Steve had: OpenVZ was installed on XFS.
>>
>> WOW, good work Kirill. That must have been a gnarly one to figure out; I never even thought of the filesystem type combined with a bug in NRPE.
>
> Sorry, I haven't followed this closely enough. Why is it a problem with XFS? The filesystem should not matter to running applications, so would it be an XFS bug?

I'd say it was an NRPE bug, as NRPE (as shipped in Debian Sarge) wasn't handling the value returned by the filesystem. Hence the NRPE option to read conf files from a directory didn't work: the directory listing returned no entries, so as far as NRPE was concerned the directory was empty. Since later versions of NRPE appear to handle it correctly, I'd say it was a bug in NRPE.

When I was diagnosing the issue and comparing the Xen instances with VZ, I forgot that the important part of the VZ system (where the 'private' and 'root' directories are) was under an XFS mount point rather than ext3. Hence all my testing was, unwittingly, comparing OpenVZ VMs residing on XFS against Xen VMs residing on ext3. This made my diagnosis somewhat less than useful, until I created a whole new OpenVZ Debian Sarge VM in a fresh partition formatted with ext3. When that one worked fine, I started to look more closely and realised my blunder.

I have to say, I've used both Nagios and XFS for many, many years and nothing like this has ever occurred. Had I realised that the important directories were on XFS, I might have found this:

http://osdir.com/ml/network.nagios.devel/2004-05/msg00044.html

From 2004! Amazing this wasn't fixed in Debian Sarge, really. :( Sorry to have wasted anyone's time...
Re: [Users] strange problem with nagios nrpe server
Just one other possible data point. I might have dismissed these problems as some kind of creeping senility, but I've seen other bizarre issues with VMs migrated into OpenVZ. One of them involves Samba file sharing: when a VM is migrated into OpenVZ from Xen, Samba file shares on the VM can be accessed from Windows *only* by FQDN, not by bare hostname. Note that this broke *existing* mapped network drives for Windows users, and that it did *not* affect Linux or OS X clients -- only Windows.

Since I've verified that this weirdness is *only* apparent when the VM runs under OpenVZ and not under Xen, I'm not inclined to believe I'm going insane when I find that NRPE under Debian Sarge has a problem under OpenVZ and not under Xen. It starts to seem that OpenVZ can produce all *kinds* of unpredictable behaviour... either that or I really am going mad, complete with hallucinations :-/ Not discounting that possibility out of hand...

Steve Wray wrote:
> Gregor Mosheh wrote:
>>>> The good news is that I use Nagios with our VPSs, and it works brilliantly.
>>>
>>> include_dir=/etc/nagios/nrpe.d
>>>
>>> I have found that while this directive works under Xen, it does not work under OpenVZ.
>>
>> I find that surprising. Are you sure the permissions didn't get mangled when you copied it over to OpenVZ? That's the first thing I'd check: making sure that /etc/nagios/nrpe.d is in fact a directory, and that it's readable by the user who runs nrpe (user nagios?).
>
> Believe me, that's the first thing I checked. I've run nrpe under strace and see nothing out of the ordinary; it finds the correct number of files in the nrpe.d directory. I'm not much of a whizz with strace, though, so I don't know where to go from here.
Re: [Users] strange problem with nagios nrpe server
Gregor Mosheh wrote:
>>> The good news is that I use Nagios with our VPSs, and it works brilliantly.
>>
>> include_dir=/etc/nagios/nrpe.d
>>
>> I have found that while this directive works under Xen, it does not work under OpenVZ.
>
> I find that surprising. Are you sure the permissions didn't get mangled when you copied it over to OpenVZ? That's the first thing I'd check: making sure that /etc/nagios/nrpe.d is in fact a directory, and that it's readable by the user who runs nrpe (user nagios?).

Believe me, that's the first thing I checked. I've run nrpe under strace and see nothing out of the ordinary; it finds the correct number of files in the nrpe.d directory. I'm not much of a whizz with strace, though, so I don't know where to go from here.
[Users] strange problem with nagios nrpe server
Hi there,

I just took some filesystems from servers which had been running under Xen for some time and converted them to OpenVZ, and I've found a very strange issue. We monitor our servers with Nagios, and each server runs the Nagios NRPE server. Our config system for Nagios involves several NRPE config files in /etc/nagios/nrpe.d/ and a config directive in nrpe.cfg pointing to this directory:

    include_dir=/etc/nagios/nrpe.d

I have found that while this directive works under Xen, it does not work under OpenVZ. I've tested this by taking the filesystem back and forth between Xen and OpenVZ, and it is definitely only a problem under OpenVZ. Also, our VMs are running Debian: Debian Etch does not exhibit this problem, only Debian Sarge. I'm at a loss to explain this... it seems really weird. /proc/user_beancounters shows no failcnt for anything. Surely I am missing something here? Any advice on debugging this would be appreciated. Thanks
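When an include_dir seems to be silently ignored like this, a couple of cheap checks are worth running before reaching for strace. A minimal sketch (the nrpe.d path in the usage comment is the one from this thread; check_dir itself is just a hypothetical helper):

```shell
# Quick triage for an include_dir that a daemon treats as empty.
# Prints diagnostics to stderr and the count of visible .cfg files to stdout.
check_dir() {
    d=$1
    [ -d "$d" ] || { echo "missing or not a directory: $d" >&2; return 1; }
    # Ownership and permissions on the directory itself -- the usual culprit:
    ls -ld "$d" >&2
    # How many .cfg files the filesystem actually reports:
    find "$d" -maxdepth 1 -type f -name '*.cfg' | wc -l
}
# e.g. check_dir /etc/nagios/nrpe.d
```

If the count here matches what you expect but the daemon still sees nothing, the problem is in how the daemon reads the directory, not in the directory itself -- which is roughly where this thread ended up.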
[Users] reset user_beancounters?
Hi there, excuse me if this is a really obvious FAQ... How do I reset /proc/user_beancounters (notably the fail counts)? I've tried stopping and restarting the VZ instance but, surprisingly (to me), the numbers -- specifically the fail counts -- stay the same... :( Thanks!
[Users] swap space?
Hi there,

I'm noticing that 'free' shows no swap space in a VE. I had a good dig through the wiki and man pages and can't find any reference to configuring a VE with swap of its own. Is this abstracted away (i.e. the VE gets to use the host's swap as needed), or is there a way to configure swap availability for each VE? Thanks!
Re: [Users] openvz naming conventions; numeric vs symbolic
Kir Kolyshkin wrote:
> Gregor Mosheh wrote:
>> Steve Wray wrote:
>>> There seems to be a slight inconsistency across the tool set here: vzctl does respect the given 'name', but vzquota does not appear to, and seems to require the numeric ID.
>>
>> Quite true. Did you check the bugtracker for the project, or log that as a bug? I'd love to see that fixed!
>
> Can you tell me the use case for that? I mean, I never use vzquota directly; it's vzctl that calls it whenever needed. I guess you use vzquota show or vzquota stat and want to use the VE name instead of the ID, is that right?

Well, on the basis that consistency is a Good Thing, yes. I'm only just getting started with OpenVZ, so I'm unsure of the real use case for this. But I am busily finding the things that confuse, confound or seem inconsistent :) Thanks!
Re: [Users] openvz naming conventions; numeric vs symbolic
Steve Wray wrote:
> Kir Kolyshkin wrote:
>> See vzctl set --name
>
> Well, that's a nice start. Now, to follow on from that great progress, how do I get the directory where the root filesystem lives to correspond to the name I set, instead of the numeric VEID? Thanks!

There seems to be a slight inconsistency across the tool set here: vzctl does respect the given 'name', but vzquota does not appear to, and seems to require the numeric ID.
Re: [Users] openvz naming conventions; numeric vs symbolic
Kir Kolyshkin wrote:
> Steve Wray wrote:
>> Kir Kolyshkin wrote:
>>> See vzctl set --name
>>
>> Well, that's a nice start. Now, to follow on from that great progress, how do I get the directory where the root filesystem lives to correspond to the name I set, instead of the numeric VEID?
>
> There's no standard way. I guess you can create a symlink; something like this:
>
> vzctl set $VEID --name $VENAME --save
> (cd /vz/root && ln -s $VEID $VENAME)
>
> Same for /vz/private if you need it.

I did find that, after one has created a virtual machine configuration, one can edit its config file and add, for example:

    VE_ROOT="/var/lib/vz/root/vz1"
    VE_PRIVATE="/var/lib/vz/private/vz1"

I have yet to figure out the 'vzctl create' commands, though; they appear to require an OS template tarball. While I dropped a root filesystem tarball into the required place, vzctl create didn't like it. I'll keep plugging away.

OpenVZ looks pretty good for performance scalability, but what I'd love to see is better management scalability. If there are any tools which abstract away some of the detail of managing multiple virtual machines, I'd like to know. I did try easyvz (http://sourceforge.net/projects/easyvz), but there were problems with the Python dependencies. I run Debian Etch; when I tried to run the GUI there were issues with strange characters in the Python script. Thanks!

Steve Wray wrote:
> Hi there,
>
> I'm a long-time user of Xen virtualisation and have been evaluating OpenVZ as a replacement for certain applications. OpenVZ appears to be technically superior under certain conditions, and I hope to iron out the issues I've come across. The main issue confronting me at this time is scalability of management: OpenVZ may scale well with respect to performance and resource usage, but I don't yet see it scaling well when it comes to managing virtual machines. I'm sure I must be missing something obvious, since it's a pretty basic issue; I've searched extensively for information on this but found nothing.
>
> The problem? Numeric rather than symbolic identification of virtual machines. When I start a domU (a Xen virtual machine), I point 'xm create' at a config file whose name corresponds to the name of that domU. When I list currently running machines in Xen, I see the names of the domUs and their corresponding numeric IDs. When I create a logical volume for a Xen domU, I name that volume after the corresponding Xen instance. In each case I keep things consistent by making the names of the domUs match the hostnames of the servers running in them: host foo runs in the domU named foo, in a logical volume named foo, and to start it I run 'xm create /etc/xen/domains/foo.conf'. This scales well and makes things very nice and obvious.
>
> OpenVZ seems to do away with symbolic names, referring in all instances to numeric IDs -- a bit like skipping DNS and putting an IP address into a URL. I have an awful feeling that when the pager goes off at 2am, the person on call, bleary-eyed and tired, will make some horrible mistake while trying to mentally map numeric identifiers to server hostnames. This is what I mean by 'not scaling well': numeric identifiers may be fine with one or two containers, but with a dozen, things will get out of hand. I'm sure there must be a way to use symbolic names instead of numbers in OpenVZ, but I can't for the life of me find out how. Thanks!
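Combining Kir's two suggestions -- name the container with vzctl and create matching symlinks -- gives a small recipe. Since vzctl only exists on an OpenVZ host, this sketch defaults to a dry run (RUN=echo prints each command); the VEID and VENAME values are placeholders.

```shell
# Dry-run sketch: give container $VEID the human-readable name $VENAME,
# plus filesystem paths to match. Clear RUN (RUN=) on a real host.
RUN="${RUN:-echo}"
VEID="${VEID:-101}"
VENAME="${VENAME:-foo}"

# Register the name with vzctl so 'vzctl start foo' etc. work:
$RUN vzctl set "$VEID" --name "$VENAME" --save

# Symlinks so /vz/root/foo and /vz/private/foo also work:
$RUN ln -s "/vz/root/$VEID" "/vz/root/$VENAME"
$RUN ln -s "/vz/private/$VEID" "/vz/private/$VENAME"
```

This keeps the numeric VEID as the real identity (which tools like vzquota expect) while letting humans use the hostname everywhere else.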
Re: [Users] openvz naming conventions; numeric vs symbolic
Kir Kolyshkin wrote:
> See vzctl set --name

Well, that's a nice start. Now, to follow on from that great progress, how do I get the directory where the root filesystem lives to correspond to the name I set, instead of the numeric VEID? Thanks!

Steve Wray wrote:
> Hi there,
>
> I'm a long-time user of Xen virtualisation and have been evaluating OpenVZ as a replacement for certain applications. OpenVZ appears to be technically superior under certain conditions, and I hope to iron out the issues I've come across. The main issue confronting me at this time is scalability of management: OpenVZ may scale well with respect to performance and resource usage, but I don't yet see it scaling well when it comes to managing virtual machines. I'm sure I must be missing something obvious, since it's a pretty basic issue; I've searched extensively for information on this but found nothing.
>
> The problem? Numeric rather than symbolic identification of virtual machines. When I start a domU (a Xen virtual machine), I point 'xm create' at a config file whose name corresponds to the name of that domU. When I list currently running machines in Xen, I see the names of the domUs and their corresponding numeric IDs. When I create a logical volume for a Xen domU, I name that volume after the corresponding Xen instance. In each case I keep things consistent by making the names of the domUs match the hostnames of the servers running in them: host foo runs in the domU named foo, in a logical volume named foo, and to start it I run 'xm create /etc/xen/domains/foo.conf'. This scales well and makes things very nice and obvious.
>
> OpenVZ seems to do away with symbolic names, referring in all instances to numeric IDs -- a bit like skipping DNS and putting an IP address into a URL. I have an awful feeling that when the pager goes off at 2am, the person on call, bleary-eyed and tired, will make some horrible mistake while trying to mentally map numeric identifiers to server hostnames. This is what I mean by 'not scaling well': numeric identifiers may be fine with one or two containers, but with a dozen, things will get out of hand. I'm sure there must be a way to use symbolic names instead of numbers in OpenVZ, but I can't for the life of me find out how. Thanks!
[Users] openvz naming conventions; numeric vs symbolic
Hi there,

I'm a long-time user of Xen virtualisation and have been evaluating OpenVZ as a replacement for certain applications. OpenVZ appears to be technically superior under certain conditions, and I hope to iron out the issues I've come across. The main issue confronting me at this time is scalability of management: OpenVZ may scale well with respect to performance and resource usage, but I don't yet see it scaling well when it comes to managing virtual machines. I'm sure I must be missing something obvious, since it's a pretty basic issue; I've searched extensively for information on this but found nothing.

The problem? Numeric rather than symbolic identification of virtual machines. When I start a domU (a Xen virtual machine), I point 'xm create' at a config file whose name corresponds to the name of that domU. When I list currently running machines in Xen, I see the names of the domUs and their corresponding numeric IDs. When I create a logical volume for a Xen domU, I name that volume after the corresponding Xen instance. In each case I keep things consistent by making the names of the domUs match the hostnames of the servers running in them: host foo runs in the domU named foo, in a logical volume named foo, and to start it I run 'xm create /etc/xen/domains/foo.conf'. This scales well and makes things very nice and obvious.

OpenVZ seems to do away with symbolic names, referring in all instances to numeric IDs -- a bit like skipping DNS and putting an IP address into a URL. I have an awful feeling that when the pager goes off at 2am, the person on call, bleary-eyed and tired, will make some horrible mistake while trying to mentally map numeric identifiers to server hostnames. This is what I mean by 'not scaling well': numeric identifiers may be fine with one or two containers, but with a dozen, things will get out of hand. I'm sure there must be a way to use symbolic names instead of numbers in OpenVZ, but I can't for the life of me find out how. Thanks!