Re: [Vserver] CIFS-mounts in vserver guests: solved
On Tuesday, 3 April 2007, Roderick A. Anderson wrote:
> Wilhelm Meier wrote:
> > On Monday, 2 April 2007, Wilhelm Meier wrote:
> > > After our conversation I got the quick cifs hack running (using a
> > > special CLONE flag for the cifs thread). Then I got this patch,
> > > which changes the API to kthread_run. But the problem remains. I
> > > still get this error in dmesg:
> > I have to correct myself! I had a configuration flaw ... with the
> > patch in place, it works as expected. CIFS shares can be mounted
> > inside the guests.
> Wilhelm,
> Would you be willing to put some instructions together on what it
> takes to do this?

OK:

1. Get the patch from the list and apply it to
   /usr/src/linux-vserver/fs/cifs/connect.c (or whatever your kernel
   source path is).
2. Recompile the kernel and/or the modules (if cifs is a module). Be
   sure to load the newly compiled module ;-)
3. Boot the host into the new kernel, or just unload/reload the cifs
   module.
4. Set the ccaps of the guest to binary_mount and secure_mount.
5. Restart the guest.
6. Inside the guest, do a

   mount.cifs '\\windowsserver\share' /mnt/test -o user=windowsusername,pass=password

That's all. Now you can use CIFS mounts inside a guest (just like NFS
mounts).
--
Wilhelm Meier
email: [EMAIL PROTECTED]
___
Vserver mailing list
Vserver@list.linux-vserver.org
http://list.linux-vserver.org/mailman/listinfo/vserver
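For step 4, util-vserver reads a guest's context capabilities from a one-flag-per-line `ccapabilities` file in the guest's config directory. A minimal sketch of that layout, using a temporary directory instead of the real /etc/vservers tree and a hypothetical guest name `vs01`:

```shell
# Sketch only: "vs01" is a placeholder guest name, and a temp dir
# stands in for /etc/vservers so this can run without root.
CONF="$(mktemp -d)/vservers/vs01"
mkdir -p "$CONF"

# One context capability per line, as step 4 above requires:
printf '%s\n' binary_mount secure_mount > "$CONF/ccapabilities"

cat "$CONF/ccapabilities"
```

On a real host the file would live at /etc/vservers/&lt;guest&gt;/ccapabilities and take effect at the next guest restart (step 5).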
Re: [Vserver] CIFS-mounts in vserver guests: solved
On Tuesday, 3 April 2007, Daniel Hokka Zakrisson wrote:
> Has the patch been submitted to (and reviewed by)
> linux-kernel@vger.kernel.org, [EMAIL PROTECTED] and
> [EMAIL PROTECTED]? Note that it doesn't follow the typical coding
> style used in the kernel (regarding the if/while( x ) spacing).

I posted it to [EMAIL PROTECTED]. Steve French told me that it is
actually in the cifs-2.6 git tree.
--
Wilhelm Meier
email: [EMAIL PROTECTED]
Re: [Vserver] CIFS-mounts in vserver guests
Hi Herbert,

On Friday, 30 March 2007, Herbert Poetzl wrote:
> cya there then ...

After our conversation I got the quick cifs hack running (using a
special CLONE flag for the cifs thread). Then I got this patch, which
changes the API to kthread_run. But the problem remains. I still get
this error in dmesg:

  vxW: xid=115 tried to spawn a kernel thread.
  CIFS VFS: cifs_mount failed w/return code = -12

The patch works if I try to mount on the host. Any suggestions?

- Wilhelm

Index: connect.c
===================================================================
--- connect.c (.../2.6.19.1) (revision 20)
+++ connect.c (.../kthread_support) (revision 20)
@@ -30,6 +30,7 @@
 #include <linux/mempool.h>
 #include <linux/delay.h>
 #include <linux/completion.h>
+#include <linux/kthread.h>
 #include <linux/pagevec.h>
 #include <asm/uaccess.h>
 #include <asm/processor.h>
@@ -119,7 +120,7 @@
 	struct mid_q_entry *mid_entry;

 	spin_lock(&GlobalMid_Lock);
-	if(server->tcpStatus == CifsExiting) {
+	if( kthread_should_stop() ) {
 		/* the demux thread will exit normally
 		next time through the loop */
 		spin_unlock(&GlobalMid_Lock);
@@ -181,7 +182,7 @@
 	spin_unlock(&GlobalMid_Lock);
 	up(&server->tcpSem);

-	while ((server->tcpStatus != CifsExiting) && (server->tcpStatus != CifsGood))
+	while ( (!kthread_should_stop()) && (server->tcpStatus != CifsGood))
 	{
 		try_to_freeze();
 		if(server->protocolType == IPV6) {
@@ -198,7 +199,7 @@
 		} else {
 			atomic_inc(&tcpSesReconnectCount);
 			spin_lock(&GlobalMid_Lock);
-			if(server->tcpStatus != CifsExiting)
+			if( !kthread_should_stop() )
 				server->tcpStatus = CifsGood;
 			server->sequence_number = 0;
 			spin_unlock(&GlobalMid_Lock);
@@ -344,7 +345,6 @@
 	int isMultiRsp;
 	int reconnect;

-	daemonize("cifsd");
 	allow_signal(SIGKILL);
 	current->flags |= PF_MEMALLOC;
 	server->tsk = current;	/* save process info to wake at shutdown */
@@ -360,7 +360,7 @@
 			GFP_KERNEL);
 	}

-	while (server->tcpStatus != CifsExiting) {
+	while (!kthread_should_stop()) {
 		if (try_to_freeze())
 			continue;
 		if (bigbuf == NULL) {
@@ -399,7 +399,7 @@
 		    kernel_recvmsg(csocket, &smb_msg,
 			&iov, 1, 4, 0 /* BB see socket.h flags */);
-		if (server->tcpStatus == CifsExiting) {
+		if ( kthread_should_stop() ) {
 			break;
 		} else if (server->tcpStatus == CifsNeedReconnect) {
 			cFYI(1, ("Reconnect after server stopped responding"));
@@ -523,7 +523,7 @@
 		     total_read += length) {
 			length = kernel_recvmsg(csocket, &smb_msg, &iov, 1,
 				pdu_length - total_read, 0);
-			if((server->tcpStatus == CifsExiting) ||
+			if( kthread_should_stop() ||
 			    (length == -EINTR)) {
 				/* then will exit */
 				reconnect = 2;
@@ -756,7 +756,6 @@
 			GFP_KERNEL);
 	}

-	complete_and_exit(&cifsd_complete, 0);
 	return 0;
 }

@@ -1779,10 +1778,11 @@
 		so no need to spinlock this init of tcpStatus */
 		srvTcp->tcpStatus = CifsNew;
 		init_MUTEX(&srvTcp->tcpSem);
-		rc = (int)kernel_thread((void *)(void *)cifs_demultiplex_thread, srvTcp,
-			CLONE_FS | CLONE_FILES | CLONE_VM);
-		if(rc < 0) {
-			rc = -ENOMEM;
+		srvTcp->tsk = kthread_run((void *)(void *)cifs_demultiplex_thread, srvTcp, "cifsd");
+		if( IS_ERR(srvTcp->tsk) ) {
+			rc = PTR_ERR(srvTcp->tsk);
+			cERROR(1, ("error %d create cifsd thread", rc));
+			srvTcp->tsk = NULL;
 			sock_release(csocket);
 			kfree(volume_info.UNC);
 			kfree(volume_info.password);
@@ -1973,7 +1973,7 @@
 			spin_unlock(&GlobalMid_Lock);
 			if(srvTcp->tsk) {
 				send_sig(SIGKILL, srvTcp->tsk, 1);
-				wait_for_completion(&cifsd_complete);
+				kthread_stop(srvTcp->tsk);
 			}
 		}
 		/* If find_unc succeeded then rc == 0 so we can not end */
@@ -1987,9 +1987,9 @@
 			temp_rc = CIFSSMBLogoff(xid, pSesInfo);
 			/* if the socketUseCount is now zero */
 			if((temp_rc == -ESHUTDOWN) &&
-			   (pSesInfo->server->tsk)) {
+			   (pSesInfo->server) && (pSesInfo->server->tsk)) {
 				send_sig(SIGKILL, pSesInfo->server->tsk, 1);
-				wait_for_completion(&cifsd_complete);
+				kthread_stop(pSesInfo->server->tsk);
 			}
 		} else
 			cFYI(1, ("No session or bad tcon"));
@@ -3273,7 +3273,7 @@
 		cFYI(1, ("Waking up socket by sending it signal"));
 		if(cifsd_task) {
 			send_sig(SIGKILL, cifsd_task, 1);
-			wait_for_completion(&cifsd_complete);
+			kthread_stop(cifsd_task);
 		}
 		rc = 0;
 	} /* else - we have an smb session
Re: [Vserver] CIFS-mounts in vserver guests: solved
On Monday, 2 April 2007, Wilhelm Meier wrote:
> After our conversation I got the quick cifs hack running (using a
> special CLONE flag for the cifs thread). Then I got this patch, which
> changes the API to kthread_run. But the problem remains. I still get
> this error in dmesg:

I have to correct myself! I had a configuration flaw ... with the patch
in place, it works as expected. CIFS shares can be mounted inside the
guests.

- Wilhelm
[Vserver] CIFS-mounts in vserver guests
Hi all,

I would like to reactivate an old topic: mounting cifs shares inside a
vserver guest. I tried this some time ago with no luck:

http://www.paul.sladen.org/vserver/archives/200610/0032.html

Was there any activity on this topic in the meantime? If there is
interest in this, I would like to offer some time to do the testing ;-)
--
Wilhelm Meier
Re: [Vserver] CIFS in guests [was: NFS mounts in guests [was: how to set capabilities in Debian]]
On Sunday, 1 October 2006, 13:56, Herbert Poetzl wrote:
> On Sun, Oct 01, 2006 at 10:37:15AM +0200, Wilhelm Meier wrote:
> > On Saturday, 30 September 2006, 13:23, Daniel Hokka Zakrisson wrote:
> > > Wilhelm Meier wrote:
> > > [snip]
> > > Could you try applying
> > > http://people.linux-vserver.org/~dhozac/p/k/delta-nfs-fix01.diff
> > > to your kernel and see if that changes anything? This seems to
> > > have fixed NFS mounting from guests with binary_mount and
> > > secure_mount for me.
> > Thank you, Daniel, very much! It works with
> > 2.6.17-vs2.1.1-rc31-gentoo too. Now it is possible with all
> > combinations of nfs over udp, tcp, and nfsvers=[23]. Small patch,
> > big difference! Is this going to be part of the dev-sources now?
> yep, was already included when you tried, I guess :) just no new
> release since ... (i.e. will be in the next one)

Is there any effort to make CIFS mounting inside guests possible
(without CAP_SYS_ADMIN)? Even with CAP_SYS_ADMIN I get (the cifs module
on the host is loaded):

vs01 / # strace mount.cifs //192.168.39.1/home/lmeier /home -o user=lmeier
ioctl(3, SNDCTL_TMR_CONTINUE or TCSETSF, {B38400 opost isig icanon echo ...}) = 0
close(3) = 0
munmap(0xb7fb5000, 4096) = 0
mount("//192.168.39.1/home/lmeier", "/home", "cifs", MS_MANDLOCK, "unc=//192.168.39.1/home\\lmeier,i"...) = -1 ENOMEM (Cannot allocate memory)
fstat64(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 1), ...}) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7fb5000
write(1, "mount error 12 = Cannot allocate"..., 40) = 40
write(1, "Refer to the mount.cifs(8) manua"..., 60) = 60
munmap(0xb7fb5000, 4096) = 0
exit_group(-1) = ?
Process 8701 detached

- Wilhelm

> best,
> Herbert

Thanks,
Wilhelm
--
Wilhelm Meier
email: [EMAIL PROTECTED]
Re: [Vserver] NFS mounts in guests [was: how to set capabilities in Debian]
On Saturday, 30 September 2006, 13:23, Daniel Hokka Zakrisson wrote:
> Wilhelm Meier wrote:
> [snip]
> Could you try applying
> http://people.linux-vserver.org/~dhozac/p/k/delta-nfs-fix01.diff to
> your kernel and see if that changes anything? This seems to have fixed
> NFS mounting from guests with binary_mount and secure_mount for me.

Thank you, Daniel, very much! It works with 2.6.17-vs2.1.1-rc31-gentoo
too. Now it is possible with all combinations of nfs over udp, tcp, and
nfsvers=[23]. Small patch, big difference! Is this going to be part of
the dev-sources now?

Thanks,
Wilhelm
--
Wilhelm Meier
email: [EMAIL PROTECTED]
Re: [Vserver] how to set capabilities in Debian
On Wednesday, 27 September 2006, 16:40, Herbert Poetzl wrote:
> > H242-meier vserver.nfs # sysctl -a | grep sun
> > error: Operation not permitted reading key net.ipv4.route.flush
> > sunrpc.tcp_slot_table_entries = 16
> > sunrpc.udp_slot_table_entries = 16
> > sunrpc.nlm_debug = 0
> > sunrpc.nfsd_debug = 1
> > sunrpc.nfs_debug = 0
> > sunrpc.rpc_debug = 1
> different values here will enable different debug output, I would
> prefer something like 65535 there (which will enable full output)

The setting on the nfs-server:

H242-meier ~ # sysctl -a | grep sun
error: Operation not permitted reading key net.ipv4.route.flush
sunrpc.tcp_slot_table_entries = 16
sunrpc.udp_slot_table_entries = 16
sunrpc.nlm_debug = 0
sunrpc.nfsd_debug = 65535
sunrpc.nfs_debug = 0
sunrpc.rpc_debug = 65535
H242-meier ~ #

The log on the nfs-server:

Sep 28 07:55:31 H242-meier device vmnet1 entered promiscuous mode
Sep 28 07:55:49 H242-meier rpc.mountd: MNT3(/home) called
Sep 28 07:55:49 H242-meier rpc.mountd: authenticated mount request from vs01:1009 for /home (/home)
Sep 28 07:55:50 H242-meier nfsd: exp_rootfh(/home [f235c628] *:hda2/2277377)
Sep 28 07:55:50 H242-meier nfsd: fh_compose(exp 03:02/2277377 //home, ino=2277377)
Sep 28 07:56:09 H242-meier device vmnet1 left promiscuous mode

The settings on the vserver-host:

gs ~ # sysctl -a | grep sun
error: Success reading key dev.parport.parport0.autoprobe3
error: Success reading key dev.parport.parport0.autoprobe2
error: Success reading key dev.parport.parport0.autoprobe1
error: Success reading key dev.parport.parport0.autoprobe0
error: Success reading key dev.parport.parport0.autoprobe
error: Operation not permitted reading key net.ipv4.route.flush
sunrpc.max_resvport = 1023
sunrpc.min_resvport = 650
sunrpc.tcp_slot_table_entries = 16
sunrpc.udp_slot_table_entries = 16
sunrpc.nlm_debug = 0
sunrpc.nfsd_debug = 0
sunrpc.nfs_debug = 65535
sunrpc.rpc_debug = 65535
gs ~ #

The log on the vserver-host:

Sep 27 22:13:18 gs rpciod_up: users 0
Sep 27 22:13:18 gs RPC: setting up tcp-ipv4 transport...
Sep 27 22:13:18 gs RPC: created transport cf91b400 with 16 slots
Sep 27 22:13:18 gs RPC: xprt_create_proto created xprt cf91b400
Sep 27 22:13:18 gs RPC: creating nfs client for 192.168.39.1 (xprt cf91b400)
Sep 27 22:13:18 gs RPC: destroying transport cf91b400
Sep 27 22:13:18 gs RPC: xs_destroy xprt cf91b400
Sep 27 22:13:18 gs RPC: disconnected transport cf91b400
Sep 27 22:13:18 gs nfs_create_client: cannot create RPC client. Error = -812534784
Sep 27 22:13:18 gs rpciod_down sema 1
Sep 27 22:13:18 gs nfs_get_sb: bad mount version ( )

This doesn't seem to look good? Attached is the tcpdump.

> could you try with a v3,tcp mount too?

The trace of the mount inside the vs:

vs01 / # strace mount 192.168.39.1:/home /home -o nfsvers=3,nolock,tcp
execve("/bin/mount", ["mount", "192.168.39.1:/home", "/home", "-o", "nfsvers=3,nolock,tcp"], [/* 26 vars */]) = 0
uname({sys="Linux", node="vs01", ...}) = 0
brk(0) = 0x8063000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY) = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=12117, ...}) = 0
mmap2(NULL, 12117, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb7f4e000
close(3) = 0
open("/lib/libblkid.so.1", O_RDONLY) = 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\0\35\0"..., 512) = 512
fstat64(3, {st_mode=S_IFREG|0755, st_size=28764, ...}) = 0
mmap2(NULL, 30740, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xb7f46000
mmap2(0xb7f4d000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x6) = 0xb7f4d000
close(3) = 0
open("/lib/libuuid.so.1", O_RDONLY) = 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\320\n\0"..., 512) = 512
fstat64(3, {st_mode=S_IFREG|0755, st_size=9600, ...}) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7f45000
mmap2(NULL, 11544, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xb7f42000
mmap2(0xb7f44000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1) = 0xb7f44000
close(3) = 0
open("/lib/libc.so.6", O_RDONLY) = 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\240T\1"..., 512) = 512
fstat64(3, {st_mode=S_IFREG|0755, st_size=122, ...}) = 0
mmap2(NULL, 1158452, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xb7e27000
mmap2(0xb7f3c000, 16384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x115) = 0xb7f3c000
mmap2(0xb7f4, 7476, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0xb7f4
close(3) = 0
mprotect(0xb7f3c000, 4096, PROT_READ) = 0
mprotect(0xb7f64000, 4096, PROT_READ) = 0
munmap(0xb7f4e000, 12117) = 0
open(/dev/urandom,
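Herbert's debugging suggestion above amounts to setting every SUNRPC debug flag: 65535 is 0xffff, i.e. all 16 bits on. A sketch of the commands one would run as root on each machine; they are only echoed here, since they need a live kernel and root privileges:

```shell
# Enable full SUNRPC debug output (all flag bits set).  These are the
# sysctl keys shown in the transcripts above; on a real machine drop
# the "echo" and run as root.
DEBUG=65535   # 0xffff: every debug flag enabled
for key in sunrpc.rpc_debug sunrpc.nfs_debug sunrpc.nfsd_debug; do
    echo "sysctl -w $key=$DEBUG"
done
```

Set nfsd_debug on the filer, nfs_debug on the client, and rpc_debug on both, as listed in the thread.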
Re: [Vserver] how to set capabilities in Debian
On Thursday, 28 September 2006, 16:42, Herbert Poetzl wrote:
> On Thu, Sep 28, 2006 at 08:03:29AM +0200, Wilhelm Meier wrote:
> > [full quote of the previous message: the sunrpc sysctl settings and
> > logs on the nfs-server and the vserver-host, and the strace of the
> > v3,tcp mount, repeated verbatim; the message is truncated here in
> > the archive]
Re: [Vserver] how to set capabilities in Debian
On Tuesday, 26 September 2006, 18:05, Herbert Poetzl wrote:
> On Tue, Sep 26, 2006 at 11:50:57AM +0200, Wilhelm Meier wrote:
> > On Tuesday, 26 September 2006, 11:10, Jim Wight wrote:
> > > On Sat, 2006-09-23 at 18:40 +0200, Herbert Poetzl wrote:
> > > > c) why would you want to add CAP_SYS_ADMIN to a guest?
> > > Taking 'you' in the sense of 'anyone', I would say for NFS. I
> > > don't want to hijack this thread, so can I refer you to one
> > > started by Wilhelm Meier on 13th Sep entitled 'How do I nfs-mount
> > > inside a vserver?', which has gone quiet without being resolved.
> > Thank you for reactivating!
> it was not forgotten, it is on my todo list ... unfortunately I have
> no test systems available ATM to test an nfs setup, but I will try to
> recreate the setup with a QEMU network shortly
> > > I have never been able to get NFS to work without using
> > > CAP_SYS_ADMIN, even after upgrading to 2.6.17.11-vs2.0.2/0.30.210,
> > Seems to be still impossible in the dev-branch vs2.1.1
> > (BINARY_MOUNT should do the job but doesn't)
> in general, the answers to the following questions could be very
> helpful:
> - what NFS version and tcp or udp?
> - what is the actual error you get?
> - tcpdump of the ongoing negotiation?
> - logs on both, client and filer, with the appropriate sysctl debug
>   options enabled:
>   sunrpc.nfsd_debug (filer)
>   sunrpc.nfs_debug (client)
>   sunrpc.rpc_debug (both)

O.k., here comes the information. On the NFS-server (h242-meier):

H242-meier vserver.nfs # rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  33321  status
    100024    1   tcp  32804  status
    100011    1   udp   4003  rquotad
    100011    2   udp   4003  rquotad
    100011    1   tcp   4003  rquotad
    100011    2   tcp   4003  rquotad
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100021    1   udp  33322  nlockmgr
    100021    3   udp  33322  nlockmgr
    100021    4   udp  33322  nlockmgr
    100021    1   tcp  32805  nlockmgr
    100021    3   tcp  32805  nlockmgr
    100021    4   tcp  32805  nlockmgr
    100005    1   udp    772  mountd
    100005    1   tcp    775  mountd
    100005    2   udp    772  mountd
    100005    2   tcp    775  mountd
    100005    3   udp    772  mountd
    100005    3   tcp    775  mountd

H242-meier vserver.nfs # sysctl -a | grep sun
error: Operation not permitted reading key net.ipv4.route.flush
sunrpc.tcp_slot_table_entries = 16
sunrpc.udp_slot_table_entries = 16
sunrpc.nlm_debug = 0
sunrpc.nfsd_debug = 1
sunrpc.nfs_debug = 0
sunrpc.rpc_debug = 1
H242-meier vserver.nfs #

Extracted from the log on the nfs-server when the vs tries to mount:

Sep 27 11:46:42 H242-meier device vmnet1 entered promiscuous mode
Sep 27 11:46:58 H242-meier rpc.mountd: MNT3(/home) called
Sep 27 11:46:58 H242-meier rpc.mountd: authenticated mount request from vs01:637 for /home (/home)
Sep 27 11:46:58 H242-meier rpc.mountd: MNT1(/home) called
Sep 27 11:46:58 H242-meier rpc.mountd: authenticated mount request from vs01:641 for /home (/home)
Sep 27 11:47:07 H242-meier device vmnet1 left promiscuous mode

The tcpdump of the conversation is in the attached file.

The error inside the vs (vs01) is the following:

vs01 / # mount 192.168.39.1:/home /home -o nolock,tcp
mount: permission denied
vs01 / #

The trace of this command:

vs01 / # strace mount 192.168.39.1:/home /home -o nolock,tcp
execve("/bin/mount", ["mount", "192.168.39.1:/home", "/home", "-o", "nolock,tcp"], [/* 26 vars */]) = 0
uname({sys="Linux", node="vs01", ...}) = 0
brk(0) = 0x8063000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY) = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=12117, ...}) = 0
mmap2(NULL, 12117, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb7f76000
close(3) = 0
open("/lib/libblkid.so.1", O_RDONLY) = 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\0\35\0"..., 512) = 512
fstat64(3, {st_mode=S_IFREG|0755, st_size=28764, ...}) = 0
mmap2(NULL, 30740, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xb7f6e000
mmap2(0xb7f75000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x6) = 0xb7f75000
close(3) = 0
open("/lib/libuuid.so.1", O_RDONLY) = 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\320\n\0"..., 512) = 512
fstat64(3, {st_mode=S_IFREG|0755, st_size=9600, ...}) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7f6d000
mmap2(NULL, 11544, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xb7f6a000
mmap2(0xb7f6c000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1) = 0xb7f6c000
close(3) = 0
open
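Before straces like the one above, a quick first check from inside the guest is whether the filer's portmapper and mountd are reachable at all (the precondition Herbert mentions elsewhere in the thread). The commands are only echoed in this sketch, since they need the rpc client tools and the live network:

```shell
# Probe the filer's RPC services from the guest; the server address is
# the one used throughout this thread.  Echoed only: running them for
# real needs rpcinfo/showmount installed and network access.
SERVER=192.168.39.1
echo "rpcinfo -p $SERVER"     # list registered RPC programs (portmapper, mountd, nfs, ...)
echo "showmount -e $SERVER"   # list the directories the filer exports
```

If either command hangs or errors out from inside the guest, the mount failure is a network/context problem rather than a capability problem.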
Re: [Vserver] how to set capabilities in Debian
On Tuesday, 26 September 2006, 11:10, Jim Wight wrote:
> On Sat, 2006-09-23 at 18:40 +0200, Herbert Poetzl wrote:
> > c) why would you want to add CAP_SYS_ADMIN to a guest?
> Taking 'you' in the sense of 'anyone', I would say for NFS. I don't
> want to hijack this thread, so can I refer you to one started by
> Wilhelm Meier on 13th Sep entitled 'How do I nfs-mount inside a
> vserver?', which has gone quiet without being resolved.

Thank you for reactivating!

> I have never been able to get NFS to work without using CAP_SYS_ADMIN,
> even after upgrading to 2.6.17.11-vs2.0.2/0.30.210,

Seems to be still impossible in the dev-branch vs2.1.1 (BINARY_MOUNT
should do the job but doesn't).

> and was on the point of raising the matter when that thread appeared.
> I too would like to know the circumstances under which NFS mounting
> can be achieved without resorting to CAP_SYS_ADMIN.
>
> Jim

--
Wilhelm
[Vserver] How do I nfs-mount inside a vserver?
Hi,

I googled for a while, but I didn't find a solution for nfs-mounting
inside the guest from a remote nfs-server.

I had to export the dirs on the nfs-server to the guest AND to the host
(why?). After that, the host answers the mount request.

I gave the guest the ccaps secure_mount AND binary_mount. But a

  mount 192.168.39.1:/home /home -o nolock,tcp

gives a 'permission denied'. If I add CAP_SYS_ADMIN to the bcaps, it
works fine. But that's not what I want.

If I set up fstab.remote, it works (well, I don't know why!). What is
the difference?

I'm using 2.6.17-vs2.1.1-rc26-gentoo. Any ideas?
--
Wilhelm
Re: [Vserver] How do I nfs-mount inside a vserver?
On Wednesday, 13 September 2006, 20:33, Herbert Poetzl wrote:
> On Wed, Sep 13, 2006 at 10:22:21AM +0200, Wilhelm Meier wrote:
> > Hi, I googled for a while but I didn't find a solution for
> > nfs-mounting inside the guest from a remote nfs-server. I had to
> > export the dirs on the nfs-server to the guest AND to the host
> > (why?). After that the host answers the mount request.
> as usual, what tools, what host/guest distro?

Host: Gentoo Linux, gs 2.6.17-vs2.1.1-rc31-gentoo
Guest: Gentoo
Host tools: sys-cluster/util-vserver-0.30.210-r18, which includes the
following patches (according to 000_README):

Numbering scheme
----------------
FIXES:    000_all - 195_all
FEATURES: 200_all - 395_all

Patch descriptions:
-------------------
Patch: 000_all_nice.patch
From:  Daniel Hokka Zakrisson
Desc:  Fix obsolete usage of gnu tools (-1 vs -n 1)

Patch: 005_all_remove-traditional-syscall.patch
From:  Herbert Poetzl
Desc:  Fix util-vserver breakage with gcc-3.4.* and -pie

Patch: 010_all_bmask.patch
From:  Daniel Hokka Zakrisson
Desc:  vattribute resets bcaps when setting ccaps (upstream patch #4968)

Patch: 015_all_chcontext-secure.patch
From:  Daniel Hokka Zakrisson
Desc:  Fix the --secure switch to work as expected

Patch: 020_all_chcontext.8.patch
From:  Micah Anderson
Desc:  Change the section for the chcontext man-page (upstream #16083)

Patch: 025_all_clone-arch.patch
From:  Daniel Hokka Zakrisson
Desc:  Various arch-specific clone updates (for sparc/sparc64/s390)

Patch: 030_all_condrestart.patch
From:  Daniel Hokka Zakrisson
Desc:  Fix the condrestart (upstream #15678)

Patch: 040_all_debootstrap-script.patch
From:  Micah Anderson
Desc:  Let the vserver-debootstrap wrapper accept options for custom scripts

Patch: 045_all_fc5.patch
From:  Daniel Hokka Zakrisson
Desc:  Adding repos for Fedora Core 5 based/like distributions

Patch: 050_all_fstab.patch
From:  Daniel Hokka Zakrisson
Desc:  Implement the opposite of mounting

Patch: 055_all_remove-init-style-gentoo.patch
From:  Christian Heim
Desc:  Deprecate init-style gentoo in favour of plain

Patch: 060_all_start-vservers.patch
From:  Daniel Hokka Zakrisson
Desc:  Fix the vserver-start all script

Patch: 065_all_syscall-update.patch
From:  Herbert Poetzl
Desc:  Updating util-vserver's syscalls

Patch: 070_all_testsuite-fix.patch
From:  Daniel Hokka Zakrisson
Desc:  Fix some issues within the testsuite

Patch: 075_all_usage.patch
From:  Andreas John
Desc:  Fix the usage hint for the vserver command (upstream #15551)

Patch: 080_all_vcontext-uid.patch
From:  Daniel Hokka Zakrisson
Desc:  Better handling of vcontext's --uid option (upstream patch #4966)

Patch: 200_all_sharedportage.patch
From:  Benedikt Boehm
Desc:  Adding an example on how to set up a shared portage dir

Patch: 205_all_clone.patch
From:  Daniel Hokka Zakrisson
Desc:  Adding support for guest cloning

Patch: 215_all_cpuset.patch
From:  Jan Rekorajski
Desc:  Support for cpusets

Patch: 220_all_delete.patch
From:  Thomas Champagne and Daniel Hokka Zakrisson
Desc:  Adding support for the delete command (upstream patch #4899)

Patch: 225_all_gentoo-tools.patch
From:  Benedikt Boehm and Christian Heim
Desc:  Adding various Gentoo related scripts (vemerge, vdispatch-conf, ...)

Patch: 235_all_namespace-cleanup.patch
From:  Bastian Blank and Daniel Hokka Zakrisson
Desc:  Adding support for namespace-cleanups (by default)

Patch: 240_all_pkgmgmt-vsomething.patch
From:  Daniel Hokka Zakrisson
Desc:  Unifying some distribution specific commands

Patch: 245_all_template.patch
From:  Daniel Hokka Zakrisson
Desc:  Create a vserver from a template archive

Patch: 250_all_vlogin.patch
From:  Daniel Hokka Zakrisson / Benedikt Boehm
Desc:  Adding support for pts inside vservers (upstream patch #4969)

Patch: 255_all_shell-completion.patch
From:  Thomas Champagne and Ben Voui(?)
Desc:  Adding bash/zsh completion scripts

gs patches #

The kernel got these additional gentoo kernel patches:

gs ~ # tar jxvf /usr/portage/distfiles/genpatches-2.6.17-9.base.tar.bz2
2.6.17/_README
2.6.17/1000_linux-2.6.17.1.patch
2.6.17/1001_linux-2.6.17.2.patch
2.6.17/1002_linux-2.6.17.3.patch
2.6.17/1003_linux-2.6.17.4.patch
2.6.17/1004_linux-2.6.17.5.patch
2.6.17/1005_linux-2.6.17.6.patch
2.6.17/1006_linux-2.6.17.7.patch
2.6.17/1007_linux-2.6.17.8.patch
2.6.17/1008_linux-2.6.17.9.patch
2.6.17/1009_linux-2.6.17.10.patch
2.6.17/1010_linux-2.6.17.11.patch
2.6.17/1700_sparc-obp64-naming.patch
2.6.17/1705_sparc-U1-hme-lockup.patch
2.6.17/1710_alpha-ev56-kconfig.patch
2.6.17/1715_sparc64-pgtable.patch
2.6.17/1900_nfs-stall.patch
2.6.17/2300_usb-insufficient-power.patch
2.6.17/2500_via-irq-quirk-revert.patch
2.6.17/2600_logips2pp.patch
2.6.17/2700_alsa-hda-lenovo-3000.patch
gs ~ # tar jxvf /usr/portage/distfiles/genpatches-2.6.17-9.extras.tar.bz2
2.6.17/4000_deprecate
Re: [Vserver] How do I nfs-mount inside a vserver?
On Wednesday, 13 September 2006 20:33, Herbert Poetzl wrote:
> On Wed, Sep 13, 2006 at 10:22:21AM +0200, Wilhelm Meier wrote:
> > Hi, I googled for a while but didn't find a solution for NFS-mounting inside the guest from a remote NFS server. I had to export the dirs on the NFS server to the guest AND to the host (why?). After that, the host answers the mount request.
>
> as usual: what tools, what host/guest distro?

In the previous posting I forgot to include this info (guest vs01 has ctxid 1001):

gs ~ # cat /proc/virtual/1001/status
UseCnt: 20
Tasks: 6
Flags: 000402020110
BCaps: 344c04ff
CCaps: 00070101

> > I gave the guest the ccaps secure_mount AND binary_mount. But a "mount 192.168.39.1:/home /home -o nolock,tcp" gives permission denied.
>
> should be sufficient with recent kernels to do an nfs mount, if the portmapper is reachable and working as expected
>
> > If I add CAP_SYS_ADMIN to the bcaps, it works fine. But that's not what I want.
>
> that's at least interesting, but could be an already fixed bug in older kernels
>
> > If I set up fstab.remote, it works (well, I don't know why!). What is the difference?
>
> main difference is that fstab.remote is executed on the host, but within the network context, which solves certain issues you see, like requiring host and guest ip to be allowed
>
> > I'm using 2.6.17-vs2.1.1-rc26-gentoo. Any ideas?
>
> well, let's do an strace of the actual mount to see where it fails, and check with rpcinfo and showmounts
>
> HTH, Herbert
--
Wilhelm Meier
email: [EMAIL PROTECTED]
___
Vserver mailing list
Vserver@list.linux-vserver.org
http://list.linux-vserver.org/mailman/listinfo/vserver
[Vserver] NFS-mounting in guest only with CAP_SYS_ADMIN ? binary_mount not working?
Hi,

I googled for a while but didn't find a solution for NFS-mounting inside the guest from a remote NFS server. I had to export the dirs on the NFS server to the guest AND to the host (why?). After that, the host answers the mount request.

I gave the guest the ccaps secure_mount AND binary_mount. But a "mount 192.168.39.1:/home /home -o nolock,tcp" gives permission denied. If I add CAP_SYS_ADMIN to the bcaps, it works fine. But that's not what I want. If I set up fstab.remote, it works (well, I don't know why!).

I'm using 2.6.17-vs2.1.1-rc26-gentoo. Any ideas?
--
Wilhelm
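For reference, the guest-side setup discussed in this thread boils down to two ccaps. The following is only a sketch: it assumes the util-vserver "flower-pot" config layout under /etc/vservers/<guest> with one ccap per line in the ccapabilities file, and the guest name vs01 is made up. VDIR defaults to a temp dir so the sketch is safe to try; point it at the real config dir on an actual host.

```shell
# Hypothetical guest config dir; on a real host this would be
# /etc/vservers/vs01 (guest name is an assumption).
VDIR="${VDIR:-$(mktemp -d)/vs01}"
mkdir -p "$VDIR"

# One ccap per line, as util-vserver expects.
printf 'secure_mount\nbinary_mount\n' > "$VDIR/ccapabilities"

# After restarting the guest, inside it:
#   mount 192.168.39.1:/home /home -o nolock,tcp
cat "$VDIR/ccapabilities"
```

Restarting the guest is required because the ccaps are applied when the context is created.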
Re: [Vserver] Nested contexts or chaining of context creation (proliferation)
On Saturday, 9 September 2006 16:54, Herbert Poetzl wrote:
> On Fri, Sep 08, 2006 at 09:27:45AM +0200, Wilhelm Meier wrote:
> > Hi, I saw this PROLIFIC context flag and I wonder (haven't tried until now) if with this flag it is possible to allow a context (not the root context) to create new contexts. Is this true?
>
> yes, that's the idea, but it isn't implemented (yet) (it's there for future development :)

well, that sounds promising! It would be a _very_ nice to have.

> > Are the contexts nested (assume no)?
>
> nope, but the structures for doing so are already there
>
> > How can I enter the new context from the creating non-root context?
>
> basically the same way you do on the host now, given that the context has the required capabilities and flags
>
> HTC, Herbert
--
Wilhelm
[Vserver] Nested contexts or chaining of context creation (proliferation)
Hi,

I saw this PROLIFIC context flag and I wonder (haven't tried until now) if with this flag it is possible to allow a context (not the root context) to create new contexts. Is this true? Are the contexts nested (assume no)? How can I enter the new context from the creating non-root context?
--
Wilhelm
[Vserver] Using SECURE_MOUNT
Hi,

I wonder how to use SECURE_MOUNT. I want to give a vserver secure access to a device, so that mounting the device does not introduce any new device nodes. What do I have to include in /etc/vservers/vsxx/bcapabilities? CAP_SYS_ADMIN? Does this always imply the nodev option for all mounts inside the vserver?
--
Wilhelm
Re: [Vserver] Shares and Reservations in the token-bucket-algorithm
On Wednesday, 26 July 2006 22:04, Herbert Poetzl wrote:

well, not all solaris does is a good idea per se, and I think the current hard cpu scheduler is much more powerful than the solaris proportional stuff (i.e. you can consider the solaris settings a subset of what you can achieve with the hard cpu scheduler)

If the solaris fss is a subset, how do I set the token bucket values? With 3 vservers and 0.5, 0.25, 0.25 as (fillrate/interval) for these, how do I get 2/3 for the first and 1/3 for the second vserver if the third vserver is idle, as in the solaris fss case? Where is the hidden parameter?

I take it that you 'simply' want fair scheduling between three guests in a 2:1:1 ratio when all three guests are hogging cpu, yes?

Yes, the above settings of (fillrate/interval) should achieve this.

in this case you simply forget the maximum values, i.e. set them to something very low to give some kind of minimum amount of tokens per time unit, e.g. 1/100, and use a set of 2:1:1 for the idle time values, e.g. 1/3, 1/6 and 1/6. once hard cpu and idle time is enabled for those guests, they will run in the specified ratio, as long as host processes do not consume cpu resources, in which case the remaining cpu resources will be divided 2:1:1 between them

But what happens if the third vserver falls idle? In the solaris case the remaining two vservers would get 2/3 and 1/3 of the cpus (if the host itself is idle as well). In the vserver case we get idle time now: 1/4 with the values above. Then this idle time is shared among the three vservers according to the (fillrate/interval)_2 for idle time? Well, I don't think this is the same as in the solaris case. (I mention the solaris case here because in some use cases this makes sense, not because I think it is superior. I want to get my understanding of vserver scheduling right.)

Where can one enable the idle time token bucket? The user-level tools don't seem to have support for this.
--
Wilhelm
Re: [Vserver] Shares and Reservations in the token-bucket-algorithm
On Thursday, 27 July 2006 13:20, Herbert Poetzl wrote:
On Thu, Jul 27, 2006 at 09:50:37AM +0200, Wilhelm Meier wrote:
On Wednesday, 26 July 2006 22:04, Herbert Poetzl wrote:

well, not all solaris does is a good idea per se, and I think the current hard cpu scheduler is much more powerful than the solaris proportional stuff (i.e. you can consider the solaris settings a subset of what you can achieve with the hard cpu scheduler)

If the solaris fss is a subset, how do I set the token bucket values? With 3 vservers and 0.5, 0.25, 0.25 as (fillrate/interval) for these, how do I get 2/3 for the first and 1/3 for the second vserver if the third vserver is idle, as in the solaris fss case? Where is the hidden parameter?

I take it that you 'simply' want fair scheduling between three guests in a 2:1:1 ratio when all three guests are hogging cpu, yes?

Yes, the above settings of (fillrate/interval) should achieve this.

in this case you simply forget the maximum values, i.e. set them to something very low to give some kind of minimum amount of tokens per time unit, e.g. 1/100, and use a set of 2:1:1 for the idle time values, e.g. 1/3, 1/6 and 1/6. once hard cpu and idle time is enabled for those guests, they will run in the specified ratio, as long as host processes do not consume cpu resources, in which case the remaining cpu resources will be divided 2:1:1 between them

But what happens if the third vserver falls idle? In the solaris case the remaining two vservers would get 2/3 and 1/3 of the cpus (if the host itself is idle as well).

same here, as there are 2/6 added for each idle time tick to the first guest and 1/6 for the second one, which still is 2:1 as in your 2/3 and 1/3 example ...

In the vserver case we get idle time now: 1/4 with the values above.

how do you come to this conclusion?

I try to summarize: we have three vservers with 1/2, 1/4, 1/4 as values for (fillrate/interval)_1 (not idle time).
If all three vservers have runnable processes, they get 1/2, 1/4, 1/4 of a cpu (if we have more than one cpu, we scale the values to sum up to the number of cpus). If the third vserver now goes idle, the first and second vservers still get 1/2 and 1/4 of a cpu (if they have tokensmax in the bucket, they can get more of a cpu for a limited burst time). In the long run, 1/4 of a cpu is left idle.

If we have set up the idle time bucket and the three vservers have the (fillrate/interval)_2 values 1/2, 1/4, 1/4 for this as well, the remaining 1/4 of the cpu (which is left idle by the normal token bucket) is given to the vservers. If the third vserver is still idle, the first vserver gets 1/4 * 1/2 from the idle token bucket, and this sums to 1/2 + 1/8 = 5/8. The second vserver sums up to 1/4 + 1/4*1/4 = 5/16. So the ratio between the first and second vserver is still 2:1. But we left 1/16 of a cpu idle. And this leads to a recursion, and then the two active vservers get 2/3 and 1/3 of a cpu. Yes, I think I got it ;-) Thank you!

Is this type of scheduler already in the stable version?

... as usual, all the features are supported by my 'hack' tools in various forms; this one is probably best to control with the vsched (0.02) or the vcmd tool (which is non-trivial to use, I guess), but it would definitely be better to get this functionality into mainline userspace tools ... http://vserver.13thfloor.at/Experimental/TOOLS/

Thank you for the hint!
--
Wilhelm
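The recursion described in this exchange can be checked mechanically. A small sketch (plain Python, not VServer code; the 1/2, 1/4, 1/4 shares and the "hand the idle remainder back out in the same ratio" rule are taken from the discussion above):

```python
# Three guests with base shares 1/2, 1/4, 1/4; guest 3 goes idle and
# the idle-time bucket redistributes the leftover CPU in the same ratio.
from fractions import Fraction as F

base = [F(1, 2), F(1, 4), F(1, 4)]
active = [True, True, False]

# Start with the plain token-bucket allocation: idle guests get nothing.
alloc = [b if a else F(0) for b, a in zip(base, active)]

# Repeatedly hand the idle remainder back out in the base ratio,
# as in the "recursion" described in the mail.
for _ in range(100):
    idle = 1 - sum(alloc)
    alloc = [x + idle * b if a else F(0)
             for x, b, a in zip(alloc, base, active)]

# First iteration gives 5/8 and 5/16 as in the mail; the limit is 2/3 and 1/3.
print(alloc[0], alloc[1])
```

After one round this reproduces the 5/8 and 5/16 from the mail, and the iteration converges towards 2/3 and 1/3, matching the Solaris FSS behaviour being discussed.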
Re: [Vserver] /proc/virtual/xid/sched Question
On Wednesday, 26 July 2006 17:27, Herbert Poetzl wrote:
> On Sun, Jul 23, 2006 at 09:29:50PM +0200, Wilhelm Meier wrote:
> > Hello, I get the following entries in
> >
> > gs ~ # more /proc/virtual/1001/sched
> > FillRate: 4,1
> > Interval: 32,8
> > TokensMin: 15
> > TokensMax: 125
> > PrioBias: 0
> > VaVaVoom: 0
> > cpu 0: 83 483 3919 552 0 R- 16 15 125 4/32 1/8
> >
> > What is the meaning of the digits after the colon on the lines starting with FillRate and Interval?
>
> the values are FillRate2 and Interval2 (the values used in the advanced idle time case)
>
> > And what are the lines starting with VaVaVoom and cpu 0?
>
> vavavoom is the priority bonus calculated for the guest when you use the priority extension of the scheduler (which allows giving a priority bonus to guests using little cpu over guests using a lot of cpu resources)

o.k., the name isn't really self-explanatory ;-) I think this is what I get if I switch to sched_prio in the vserver flags? How is the prio bonus calculated?

> the cpu N lines give the current scheduler state for each cpu; the fields are: user-ticks, sys-ticks, hold-ticks, token-time in ticks, idle-time in ticks, H/R on hold/running, I/- idle time/normal, tokens, tokens-min, tokens-max, fillrate/interval, fillrate2/interval2

thanks!

> HTH, Herbert
--
Wilhelm Meier
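Herbert's field list maps one-to-one onto the sample line. A throwaway parser as a sketch (my code, not part of util-vserver; the dictionary key names merely paraphrase his answer):

```python
# Parse a "cpu N:" line from /proc/virtual/<xid>/sched, following the
# field order given in the reply above.
def parse_cpu_line(line):
    head, _, rest = line.partition(":")
    f = rest.split()
    return {
        "cpu": int(head.split()[1]),
        "user_ticks": int(f[0]),
        "sys_ticks": int(f[1]),
        "hold_ticks": int(f[2]),
        "token_time": int(f[3]),   # in ticks
        "idle_time": int(f[4]),    # in ticks
        "flags": f[5],             # H/R = on hold/running, I/- = idle time/normal
        "tokens": int(f[6]),
        "tokens_min": int(f[7]),
        "tokens_max": int(f[8]),
        "fill": f[9],              # fillrate/interval
        "fill2": f[10],            # fillrate2/interval2
    }

info = parse_cpu_line("cpu 0: 83 483 3919 552 0 R- 16 15 125 4/32 1/8")
print(info["tokens"], info["fill"], info["fill2"])
```

On the sample line this yields tokens 16 with fill 4/32 and fill2 1/8, consistent with the FillRate/Interval header lines above it.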
[Vserver] Scheduling parameter and vschedcalc
Hello,

the util-vserver comes with a tool called vschedcalc to calculate the values for the token bucket algorithm. These lines are from vschedcalc:

# calculate token bucket
let interval=100*${fillrate}/${avgcpu}
let tokensmin=${hz}*${bursthold}*${fillrate}/${interval}
let tokensmax=${hz}*${maxburst}-${maxburst}*${interval}

I don't know if I understand the description right, but I think the line to compute tokensmax is wrong. The maxburst time is the time the vserver can consume tokens from the initial filling, which is tokensmax/hz, plus the additional time the vserver gets because the refilling rate keeps running for the maxburst time, which is (fillrate/interval)*maxburst. With this we get:

tokensmax = maxburst*hz - maxburst*(fillrate/interval)*hz

So, please help me if I misinterpreted things here.
--
Wilhelm Meier
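To make the comparison concrete, here are the two candidate formulas evaluated side by side with made-up sample numbers. This only restates the mail's arithmetic; it does not settle which formula vschedcalc should use. (Note also that shell `let` does integer arithmetic, so `fillrate/interval` would truncate to 0 there, which may be why the script avoids that form.)

```python
# Sample numbers (made up); hz = scheduler ticks per second.
hz, maxburst, fillrate, interval = 1000, 2, 25, 100

# tokensmax as computed by the quoted vschedcalc line:
tokensmax_script = hz * maxburst - maxburst * interval

# tokensmax as derived in the mail: a burst of `maxburst` seconds burns
# maxburst*hz tokens, but maxburst*(fillrate/interval)*hz are refilled
# during the burst.
tokensmax_mail = maxburst * hz - maxburst * (fillrate / interval) * hz

print(tokensmax_script, tokensmax_mail)
```

With these numbers the two formulas give 1800 vs 1500 tokens, so the difference the poster points out is real arithmetic, not a rounding detail.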
[Vserver] Shares and Reservations in the token-bucket-algorithm
Hello,

I have a question about the token bucket filter on top of the linux scheduler according to the documentation (and I must state that I didn't check the source until now ...). The (fillrate/interval) gives the share of the number of cpus one vserver can get at maximum. But what happens if only one vserver has runnable processes? Then it gets only (fillrate/interval) of all cpus (not taking tokensmax and tokensmin into account). Shouldn't this be called a reservation instead of a share? A share should be the amount of cpus a vserver gets if all vservers have runnable processes. If one vserver has no runnable processes, then the cpus should be given proportionally to the active vservers (at least this is what Solaris 10 does). In the paper http://www.cs.princeton.edu/~mef/research/vserver/paper.pdf I found the terms shares and reservations, but I can't find the point to set up both types of parameters. I would be glad if someone could explain this to me.
--
Wilhelm Meier
[Vserver] /proc/virtual/xid/sched Question
Hello,

I get the following entries in

gs ~ # more /proc/virtual/1001/sched
FillRate: 4,1
Interval: 32,8
TokensMin: 15
TokensMax: 125
PrioBias: 0
VaVaVoom: 0
cpu 0: 83 483 3919 552 0 R- 16 15 125 4/32 1/8

What is the meaning of the digits after the colon on the lines starting with FillRate and Interval? And what are the lines starting with VaVaVoom and cpu 0? Thanks for any explanation.
--
Wilhelm Meier
[Vserver] Display the xid of an INode
Hi,

I don't know if this is a stupid question, but how do I get the xid of a file which has been tagged? Is it showattr? But which option does the trick?
--
Wilhelm Meier
Re: [Vserver] Can't compile unionfs with linux-vserver patch
On Thursday, 20 July 2006 18:26, Francis Giraldeau wrote:
> Hi, I tried to compile unionfs-1.3 with linux 2.6.17.5 with the vserver patch 2.0.2, but I got this error:
>
> CC [M] /home/francis/rpm/BUILD/unionfs-1.3/inode.o
> /home/francis/rpm/BUILD/unionfs-1.3/inode.c: In function 'unionfs_link':
> /home/francis/rpm/BUILD/unionfs-1.3/inode.c:260: error: too few arguments to function 'vfs_unlink'
> [...]
>
> Here is a snippet of code from the vserver patch that shows the function definition change:
>
> -extern int vfs_unlink(struct inode *, struct dentry *);
> +extern int vfs_unlink(struct inode *, struct dentry *, struct nameidata *);
>
> In inode.c, the function call is missing a pointer to a nameidata struct:
>
> unionfs-1.3/inode.c:260 vfs_unlink(hidden_dir_dentry->d_inode, whiteout_dentry);
>
> What can be done to make unionfs compatible with vserver?

You can try these patches: http://mozart.informatik.fh-kl.de/download/Software/VServer/vserver.html

They aren't especially for unionfs-1.3, but they should be applicable.

> Thanks a lot in advance and have a nice day, Francis
--
Wilhelm Meier
Re: [Vserver] Gentoo eBuilds gone
On Monday, 17 July 2006 11:16, Christian Heim wrote:
> On Monday 17 July 2006 10:33, Oliver Welter wrote:
> > Hi Folks, after a portage sync I recognized that all ebuilds except an old kernel 2.6.15 and tools 2.0.1 have gone. What happened? Is this related to the bug series?
>
> Nope, we (Hollow and me) decided to move them to a separate overlay [1] and only put the stable ones (to decrease the overhead) into the tree. You will find the overlay hosted on the overlays.gentoo.org host [2].
>
> [1] http://overlays.gentoo.org/svn/proj/vps
> [2] http://overlays.gentoo.org/proj/vps/browser

Is there an rsync source to be used with gensync?
--
Wilhelm Meier
[Vserver] BME and CoW as split patches available?
Hi,

probably a simple question: are the BME and CoW link-breaking extensions available as single patches? If yes, where? And for kernel 2.6.15?

thx,
Wilhelm
Re: [Vserver] BME and CoW as split patches available?
On Thursday, 2 February 2006 12:09, Herbert Poetzl wrote:
> > And for kernel 2.6.15?
>
> nope, not publicly available atm, if you can make a good argument, we can arrange something though.

Well, ... I thought it would be interesting to see if it works together with the new beta OpenVZ 2.6.15 patches and unification of OpenVZ VPSes. Just curious.

thx,
Wilhelm
Re: [Vserver] BME and CoW as split patches available?
On Thursday, 2 February 2006 13:39, Herbert Poetzl wrote:
> On Thu, Feb 02, 2006 at 01:29:54PM +0100, Wilhelm Meier wrote:
> > On Thursday, 2 February 2006 12:09, Herbert Poetzl wrote:
> > > > And for kernel 2.6.15?
> > >
> > > nope, not publicly available atm, if you can make a good argument, we can arrange something though.
> >
> > Well, ... I thought it would be interesting to see if it works together with the new beta OpenVZ 2.6.15 patches and unification of OpenVZ VPSes. Just curious.
>
> well, let us know how it goes ...

is the argument good enough for you to supply the split bme and cow patches for 2.6.15?

> best, Herbert

thx,
Wilhelm
Re: [Vserver] How to vunify/vhashify on Gentoo
On Sunday, 22 January 2006 12:38, Enrico Scholz wrote:
> [EMAIL PROTECTED] (Wilhelm Meier) writes:
> > I'm using Gentoo as a host and also Gentoo as VPSs. If I try to vunify/vhashify two VPSs, I get:
> >
> > gs vservers # ln -s /etc/vservers/vs01 /etc/vservers/vs01c/apps/vunify/refserver.00
> > gs vservers # vserver vs01c unify
> > Can not determine packagemanagement style
> > failed to determine configfiles
>
> Does vhashify/vunify really make sense on Gentoo? AFAIK, Gentoo does not have a packagemanagement, and you have to recompile everything (which will probably produce different checksums).

Yes, but Gentoo has package management: the portage system.

> When you do a 'make install' from the same source tree, vhashify/vunify will still not work, because most 'make install's do not preserve timestamps. But because timestamps are used to check whether files are identical, resp. go into the calculation of the hash value, you will not gain very much with vhashify/vunify on Gentoo.

You have to use binary packages; then you will gain the same amount as with other distributions. And you have to compile things only once. This is o.k. since the compiler flags won't change from vserver to vserver. The only issue might be with the portage use-flags.

> Enrico
--
Wilhelm Meier
[Vserver] /proc/1 and PID of init (Debian and Gentoo guest)
Hi,

the init process of a VServer (a Gentoo VPS is started via init) naturally gets a PID != 1 in the host context, but this PID is remapped to 1 in the VPS context. Is this a simple mapping done by the vserver patches? I could not spot this in the patch set ... I think this is part of the virtualization of the procfs, similar to /proc/uptime. Can someone give me a hint?

Additionally, what happens in the case of a Debian VPS, which is started via /etc/init.d/rc 3? I can't read all the entries in /proc/1:

vs03:~# ls -l /proc/1
ls: cannot read symbolic link /proc/1/cwd: Permission denied
ls: cannot read symbolic link /proc/1/root: Permission denied
ls: cannot read symbolic link /proc/1/exe: Permission denied

and I can't find the init process of the Debian VPS in the spectator context or with the vps tool. So this entry in /proc is completely faked, I think. In the old-style util-vserver docs I found a flag fakeinit. Does this exist in the alpha util-vserver as well? Well, I think this is a typical newbie question, but I can't find the information.
--
Wilhelm Meier
[Vserver] tagxid and files in the host context
Hello,

the VServer paper states that files which belong originally to the host context 0 silently migrate to context id nnn if they were modified from context nnn. I made the tests below, but the file /mnt/x, which was created in the host context, did not migrate. Did I miss something?

gs mnt # uname -a
Linux gs 2.6.14-vs2.0.1-gentoo #1 SMP PREEMPT Sun Jan 1 18:49:51 CET 2006 i686 Intel(R) Pentium(R) M processor 1200MHz GenuineIntel GNU/Linux
gs mnt # touch x
gs mnt # vcontext --create --xid 10 /bin/bash
New security context is 10
gs mnt # ls -l
total 12
drwx------ 2 root root 12288 Jan 17 00:57 lost+found
-rw-r--r-- 1 root root     0 Jan 17 01:19 x
gs mnt # ls > x
gs mnt # exit
exit
gs mnt # ls -l
total 13
drwx------ 2 root root 12288 Jan 17 00:57 lost+found
-rw-r--r-- 1 root root    13 Jan 17 01:20 x
gs mnt # lsxid
0 .
0 ./lost+found
0 ./x
gs mnt # mount
/dev/hda1 on / type ext3 (rw,noatime)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
udev on /dev type tmpfs (rw,nosuid)
devpts on /dev/pts type devpts (rw)
/dev/hdb1 on /tftproot type ext3 (rw,noatime)
shm on /dev/shm type tmpfs (rw,noexec,nosuid,nodev)
nfsd on /proc/fs/nfs type nfsd (rw)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/dev/loop/0 on /mnt type ext2 (rw,tagxid)
gs mnt #
gs mnt # vcontext --create --xid 10 /bin/bash
New security context is 10
gs mnt # touch y
gs mnt # exit
exit
gs mnt # lsxid
0 .
0 ./lost+found
0 ./x
10 ./y
gs mnt #
--
Wilhelm Meier
Re: [Vserver] VServer-2.1.0-Patch changed vfs_mkdir, etc. - breaks unionfs
On Mon, Jan 09, 2006 at 12:50:35PM +0100, Wilhelm Meier wrote:
> > Hi, the development patchset 2.1.0 modified the interface to the vfs operations (vfs_mkdir, ...). This breaks the compilation of other vfs modules, e.g. unionfs. Why was this necessary? How to fill the new parameter struct nameidata?
>
> this is mainly 'broken' by the BME patches, which are basically a 'fix' to mainline kernels ...

Does that mean that the BME patches will be integrated into mainline in the near future?

> have a look at this patch and try to 'adapt' your vfs module to pass the correct information ...
>
> http://www.13thfloor.at/vserver/d_rel26/v2.1.0/split-2.6.14.4-vs2.1.0/36_2.6.14.4_bme.diff.hl

thanks,
Wilhelm

> best, Herbert
--
Wilhelm Meier
[Vserver] Getting the namespace of processes
Hi,

I want to extract the namespace attribute of a specific process, or of all processes. Some time ago there was a discussion about this topic, but I think the essence was that there are no tools to get this information. Or am I wrong? Is it possible to extract this information via /proc/...? I didn't find any hints about that.
--
Wilhelm Meier
[Vserver] VServer-2.1.0-Patch changed vfs_mkdir, etc. - breaks unionfs
Hi,

the development patchset 2.1.0 modified the interface to the vfs operations (vfs_mkdir, ...). This breaks the compilation of other vfs modules, e.g. unionfs. Why was this necessary? How to fill the new parameter struct nameidata?
--
Wilhelm Meier
[Vserver] How to vunify/vhashify on Gentoo
Hi,

I'm using Gentoo as a host and also Gentoo as VPSs. If I try to vunify/vhashify two VPSs, I get:

gs vservers # ln -s /etc/vservers/vs01 /etc/vservers/vs01c/apps/vunify/refserver.00
gs vservers # vserver vs01c unify
Can not determine packagemanagement style
failed to determine configfiles
gs vservers #

So, how can I fix this?
--
Wilhelm Meier
[Vserver] How to setup FSS?
Hello,

how do I set up fair share scheduling? I've read http://linux-vserver.org/Scheduler+Parameters and I understand that if all contexts have running processes, the fillrate/fillinterval gives the share of cpu capacity the context gets (roughly, if you neglect the effect of the other parameters). But what happens if only one context has running processes? Is this context then able to use the rest of the cpu capacity, or is it waiting?
--
Wilhelm Meier
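For experimenting with the parameters named on that page, the per-guest scheduler values can be written as one-value-per-file entries. This is only a sketch: it assumes the util-vserver "flower-pot" layout (/etc/vservers/<guest>/sched/ plus the flags file), and the guest name, numbers, and file names are assumptions to be checked against your util-vserver version. VDIR defaults to a temp dir so the sketch is safe to run; point it at the real config dir on an actual host.

```shell
# Hypothetical config dir; on a real host this would be /etc/vservers/vs01.
VDIR="${VDIR:-$(mktemp -d)/vs01}"
mkdir -p "$VDIR/sched"

echo 1   > "$VDIR/sched/fill-rate"   # tokens added per interval
echo 4   > "$VDIR/sched/interval"    # interval in ticks -> roughly 1/4 CPU
echo 50  > "$VDIR/sched/tokens-min"  # wake threshold
echo 500 > "$VDIR/sched/tokens-max"  # bucket size (burst headroom)

# Hard scheduling enforces the limit instead of only prioritizing.
echo sched_hard >> "$VDIR/flags"

cat "$VDIR/sched/fill-rate" "$VDIR/sched/interval"
```

Without a flag like sched_hard, a lone busy context should be able to consume the otherwise idle capacity; with it, the fill-rate/interval ratio is an enforced cap, which is exactly the share-vs-reservation distinction raised in this thread.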