[Be warned, my English is awful, and I hope my question is not too stupid]
Hi,
On Lenny, how can I use an NFS server in a VE?
In Host :
modprobe nfsd works (/proc/net/rpc is created),
but in the VE /proc/net/rpc is missing (a mount --bind from the host
into the VE doesn't work either). Did I miss anything?
# aptitude search vz
I have a template that I use which needs a certain capability enabled
each time I deploy it. Is there an easy way to set this capability
inside the template itself?
I am sick of having to use this every time:
vzctl set # --capa sys_admin:on --save
If there's a way to script this, that would be great.
Hi,
Add the capability (and any other settings) to your CT template config
file, like:
$ echo 'CAPABILITY="SYS_ADMIN:on"' >> /etc/vz/conf/ve-your-CT_template_name.conf-sample
Then deploy it ... (some capabilities need a restart)
$ vzctl set CTID --applyconfig template_cfg [...]
or use it when creating the container:
$ vzctl
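Putting the steps above together, a sketch (the file name, template name,
and CTID 101 are examples, not your real ones):

```shell
# Bake the capability into the sample config so every CT deployed
# from it inherits the setting. "ve-demo.conf-sample" stands in for
# /etc/vz/conf/ve-your-CT_template_name.conf-sample.
CONF=ve-demo.conf-sample
echo 'CAPABILITY="SYS_ADMIN:on"' >> "$CONF"
grep CAPABILITY "$CONF"    # verify the line landed in the sample

# On the hardware node you would then apply it to an existing CT:
#   vzctl set 101 --applyconfig demo --save
# (some capabilities only take effect after vzctl restart 101)
```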
> [Be warned, my English is awful, and I hope my question is not too stupid]
No problem; on this mailing list English is not the native language for
many people (my English is bad too :( )
Please read this manual:
http://wiki.openvz.org/NFS
You can't run a kernel-mode NFS server inside a container, but the
manual above describes the alternatives.
Also, you may use mount --bind: http://wiki.openvz.org/Bind_mounts to
share files between different VEs located on one hardware node (I use
it, it works fine; you may need to fix the container IDs in the mount
scripts after the node reboots).
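To make such a bind mount survive reboots, OpenVZ runs a per-CT mount
action script when the container starts. A sketch, assuming CTID 101 and
an example /vz/shared path (see the Bind_mounts page above for details):

```shell
#!/bin/bash
# /etc/vz/conf/101.mount -- run by vzctl when CT 101 starts.
# Bind a directory on the hardware node into the container's root.
source /etc/vz/vz.conf        # global defaults
source "${VE_CONFFILE}"       # per-CT config; sets VE_ROOT for this CT
mount --bind /vz/shared "${VE_ROOT}/mnt/shared"
```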
And try these manuals:
Hello,
You can add this to your VE config. The default VE config is
/etc/vz/conf/ve-basic.conf-sample or /etc/vz/conf/ve-vps.basic.conf-sample,
depending on the distro (you can change this via the CONFIGFILE option in
/etc/vz/vz.conf). So you can add
CAPABILITY=NET_ADMIN:on
into the default config, or create a separate sample config for this
template.
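The separate-sample route might look like this (the "custom" name and
CTID 101 are examples; the source sample path depends on your distro):

```shell
# Copy the distro sample and add the capability to the copy:
cp /etc/vz/conf/ve-basic.conf-sample /etc/vz/conf/ve-custom.conf-sample
echo 'CAPABILITY="NET_ADMIN:on"' >> /etc/vz/conf/ve-custom.conf-sample

# Then either set CONFIGFILE="custom" in /etc/vz/vz.conf so all new
# containers use it, or pick it per container at creation time:
#   vzctl create 101 --config custom
```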
I found it... it was one of those errors so blatant you don't even
notice them.
Somehow the /etc/hosts file in the CT looked like this:
10.1.10.101 testserver.mydomain.com testserver testserver
After removing the extra hostname, everything works.
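For reference, with the duplicate removed the line reads:

```
10.1.10.101 testserver.mydomain.com testserver
```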
BTW, I've since figured out why the odd routing