Re: [Lxc-users] Using common rootfs for multiple containers
Yes, this is possible. There are multiple approaches, for example:

1. Creating a snapshot (or outright copy) of a filesystem, then disposing of it when done.
   (1a) Manually creating a full copy
   (1b) Using a blockstore-provided snapshot facility such as LVM2
2. Using a snapshot-capable filesystem, and using a snapshot provided by the filesystem itself (ZFS, BTRFS, etc.)
3. Mounting read-only, with either of two solutions for writable portions of the filesystem. This class of solution is very similar to NFS-based root situations (ie. modern PXE-driven diskless network boot).
   (3a) 'tmpfs' or some other in-memory write solution where required.
   (3b) Union mounts.

My advice would be as follows.

== simplest ==
(1a) and (1b) are easiest *and* allow the use of arbitrary filesystems.

== medium hassle ==
(2) has become somewhat common but is more difficult, as is (3b).

== more hassle ==
(3a) is more hassle up front but is perhaps the neatest solution overall.
(3b) I have never gotten working, but it should be neat; it's just not going to be as widely supported by the various kernels out there as (3a) or (1b).

Personally I use (3a) and (1b).

- Walter
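PS. A rough sketch of approach (1b) with LVM2, assuming a base logical volume 'containers/base-rootfs' holding the shared template filesystem (all names here are illustrative):

# create a writable snapshot of the shared base rootfs for a new guest
lvcreate --snapshot --name guest1-rootfs --size 2G /dev/containers/base-rootfs

# mount it where the guest's lxc configuration expects its rootfs
mkdir -p /var/lib/lxc/guest1/rootfs
mount /dev/containers/guest1-rootfs /var/lib/lxc/guest1/rootfs

# when disposing of the guest, unmount and drop the snapshot
umount /var/lib/lxc/guest1/rootfs
lvremove /dev/containers/guest1-rootfs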
Re: [Lxc-users] Destination Host Unreachable from LXC guest
Assuming you have IP forwarding enabled in the LXC host's kernel (sysctl -w net.ipv4.ip_forward=1) as reported...

Check that you have allowed forwarding of packets to/from that interface with 'iptables-save' (dump current rules). If not, try adding some rules like:

# at filter table, allow input (receiving packets) from vboxnet0 interface
iptables -t filter -A INPUT -i vboxnet0 -j ACCEPT
# at filter table, allow output (sending packets) to vboxnet0 interface
iptables -t filter -A OUTPUT -o vboxnet0 -j ACCEPT
# at filter table, allow forwarding of packets arriving from vboxnet0 interface
iptables -t filter -A FORWARD -i vboxnet0 -j ACCEPT

If you then want to add NAT access to the internet for the LXC guest, something quick might look like:

iptables -t nat -A POSTROUTING -o vboxnet0 -j MASQUERADE
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

Also double-check in the LXC guest that you have no firewall rules active, or that they default to ACCEPT (again, use 'iptables-save').

Finally, if you want the guest to route beyond the host, check that the LXC guest has a default route configured.

For additional debugging, I'd recommend using tcpdump and ping within both the host and the guest.

- Walter
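PS. On the tcpdump side, something along these lines usually shows where the packets stop (interface names and addresses here are assumptions; substitute your own):

# on the host: watch ICMP on the guest-facing interface while pinging from the guest
tcpdump -ni vboxnet0 icmp
# on the host: confirm whether the packets actually leave via the external interface
tcpdump -ni eth0 icmp

# in the guest: ping the host-side address first, then something beyond the host
ping -c 3 192.168.56.1
ping -c 3 8.8.8.8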
Re: [Lxc-users] /dev/null permission denied
Since nobody else responded, even though I don't use this environment...

> Host OS: Debian Wheezy
> VM: Debian Wheezy

Here's the issue:

> INIT: version 2.88 booting
> /etc/init.d/rc: 424: /lib/lsb/init-functions: cannot create /dev/null: Permission denied
> /etc/init.d/rc: 85: /etc/init.d/rc: cannot create /dev/null: Permission denied
> mount: permission denied
> /etc/rcS.d/S02hostname.sh: 424: /lib/lsb/init-functions: cannot create /dev/null: Permission denied
> hostname: you must be root to change the host name
> /etc/rcS.d/S02mountkernfs.sh: 424: /lib/lsb/init-functions: cannot create /dev/null: Permission denied

The above is your distribution's init scripts trying to create devices within the guest filesystem. There are two ways around this:

1) Pre-create them (GOOD IDEA)
2) Grant the guest the rights to create them (BAD IDEA)

Solution (1) can probably be achieved with careful use of cp from the host, or lots of fussy `mknod` use. There are also probably ways to do this with various device-oriented filesystems, but it's best in both a security and a portability sense to use static device files instead.

Solution (2) is a BAD IDEA because the capability in question (`man capabilities`, CAP_SYS_ADMIN) simultaneously allows device creation *and* lots of really bad stuff that you don't want an attacker who gets root in your container to be able to do. For the record, the specific line that disallows the guest from creating the devices is:

> lxc.cap.drop= sys_admin

Again, my advice is: don't change this line. You were probably confused by the following lines which, in my understanding, define a sort of whitelist of device access from within the container without necessarily caring whether said devices actually exist within the guest filesystem (a separate issue). To clarify: the line above disallows device creation on the filesystem; the two lines below deal with device access.

> lxc.cgroup.devices.allow= c *:* rwm
> lxc.cgroup.devices.allow= b *:* rwm

If you are interested in using containers in anything like a production environment, I would suggest reviewing the configuration generated by `lxc-gentoo`, which contains some gathered wisdom around functional levels of security. It begins at https://github.com/globalcitizen/lxc-gentoo/blob/master/lxc-gentoo#L403 and includes additional recommendations such as avoiding mounting /sys or granting CAP_NET_ADMIN.

Enjoy,
Walter
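PS. As a rough sketch of option (1), running something like the following against the guest's /dev should cover the basic nodes (the rootfs path is an assumption; adjust it to wherever your guest filesystem actually lives):

cd /var/lib/lxc/wheezy/rootfs/dev   # assumed guest rootfs location
mknod -m 666 null    c 1 3
mknod -m 666 zero    c 1 5
mknod -m 666 full    c 1 7
mknod -m 666 random  c 1 8
mknod -m 666 urandom c 1 9
mknod -m 666 tty     c 5 0
mknod -m 600 console c 5 1
mknod -m 666 ptmx    c 5 2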
Re: [Lxc-users] Kernel Log Namespace Support?
PS. Possibly some potential for a cryptographic hook to TPM stuff here.

- Walter
[Lxc-users] Kernel Log Namespace Support?
I am sure there's a good reason why this doesn't yet exist... however, it would still be useful!

For instance, I note that the network namespace is leading the charge with the capacity to implement netfilter rules within a container... unfortunately the common -j LOG target becomes ~useless within a container, since it is impossible, when interpreting the resulting data at some future point in time, to reliably determine which container (or the host) the resulting kernel log entries came from.

I know there's a lot of stuff going on around capabilities right now... perhaps a capability to explicitly allow the setting of netfilter rules (on all interfaces within a container) wouldn't go astray. This would be separate from the existing network-related capabilities. The idea is that at least that way you could set -j LOG --log-prefix='[guest id] ' ... in order to better trust the generated entries. (It's also not out of the question that one may wish to process logs produced within guests differently... for example, to send them to a particular remote syslog server. Right now that's a bit iffy when it comes to kernel messages.) Ultimately, though, this only solves this very specific class of use case.

Another option might be a kernel option that logs the executing cgroup name at the beginning of each kernel log line. This would need to be secured against attempts to imitate it, though... resulting in some overhead.

Weighing more heavily against it, I recall a structured kernel log proposal being discussed on LWN someplace... perhaps https://lwn.net/Articles/464276/ but I believe there was a more recent update, which I can't find.

Has anyone given this some thought recently? Is there any information out there about a solution in the works?

Cheers,
Walter
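PS. For reference, the sort of per-guest tagging I have in mind looks roughly like this (the guest name, prefix string and rules are purely illustrative):

# inside guest 'web01': tag rejected ssh attempts so shared kernel log entries can be attributed
iptables -A INPUT -p tcp --dport 22 -j LOG --log-prefix='[lxc:web01] '
iptables -A INPUT -p tcp --dport 22 -j DROP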
[Lxc-users] Fwd: Time Namespace Support?
Apparently there was once a patch regarding time namespaces @ https://lwn.net/Articles/179825/ but it has vanished. For want of a better place to ask - does anyone know if we'll see that back soon?

Reason for asking: I am trying to run an NTP server in an LXC container and would prefer not to have to grant the container CAP_SYS_TIME. Rather, I would prefer that if CAP_SYS_TIME were absent then time manipulation would affect the container only, ie. using time namespaces, or, if time namespaces were not available, that it would fail (as occurs presently when CAP_SYS_TIME is dropped for a container).

Any idea if we are likely to see any features like this at some point soon?

This would also make LXC a whole lot more useful for simulating some WAN configurations (in combination with the sophisticated capabilities of the networking stack, re: latencies, lossiness, traffic generation, etc.), which is potentially something we are looking at shortly. It would also make LXC really useful for stimulating weird, time-related bugs in automated software testing (there's a whole lot of those out there!)

I do realize that most people these days tend toward UTC for the system clock... I still see a time namespace as valuable, though, for the above reasons.

- Walter
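PS. For context, the fail-today behaviour I'm describing comes from dropping the capability in the container configuration, roughly like this (the config path and the extra capabilities listed are illustrative, not a recommendation):

# in the container config for the NTP guest (path assumed, e.g. /etc/lxc/ntp01/lxc.conf)
lxc.cap.drop = sys_time sys_module sys_rawio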
Re: [Lxc-users] Control panel
> With somewhat more work I could add:
>
> * a wizard to create new containers (very simple at first, where it only creates one kind of container system)

This is a real minefield. All in all, I would leave guest creation out of any such script, as it feels like it's going to get code rot, generate bug reports and generally achieve little. If you don't believe me, try it :)

All other ideas on this thread are excellent. Thumbs up.

- Walter
Re: [Lxc-users] updated lxc template for debian squeeze - with attached script ; )
>>> ... I have read up on the OUI documentation and looking at the detail on the site LXC could opt for a 32bit OUI which would cost $600 for one block. The dev guys might want to setup a pledge program...
>
>> I will pay for it.
>
> I too am willing to pay the whole thing, so, halvsies? Or see how many others want to split even?

Sounds good. I guess we can nominate you as the finance go-to on this one then :)

Let us know details when they emerge.

- Walter
Re: [Lxc-users] updated lxc template for debian squeeze - with attached script ; )
>> I have been following this thread and have started to investigate if my company might be willing to donate a range of MACs to lxc. Give me about a week and I will know more.
>
> Well that was fun. After spending much of last week trying to figure out who was responsible for the OUI for my company I came to a dead end. So unfortunately I can't help. I have read up on the OUI documentation and looking at the detail on the site LXC could opt for a 32bit OUI which would cost $600 for one block. The dev guys might want to setup a pledge program or paypal donation account to see if they might raise the 600 bucks. I would donate for sure.

I will pay for it. Please let me know the procedure to purchase.

- Walter
Re: [Lxc-users] LXC Container Boot/Shutdown errors
> Gentoo has an unsupported script called lxc-gentoo that will ...
> Any help to resolve the above situations would be appreciated.

Hi Kelly,

I am one of the lxc-gentoo authors. However, I am only able to spend time on it sporadically and mostly merge other people's fixes and additions these days.

Feel free to fork on github and fix the script, or if you find a way to resolve the issue let us know and we can implement the required changes for you.

To communicate or post future issues with the script, please use: https://github.com/globalcitizen/lxc-gentoo/issues

Thanks... and I'm very glad to see fellow Gentoo users checking out lxc! :)

- Walter
Re: [Lxc-users] LXC and OVF
> ...ed resources to the container

Personal feeling (may be wrong): lxc is probably not quite at the point where a clear integration could be made. This is because there were (last time I checked, a couple of months back) still a number of issues around more basic network configuration (eg: the ability to push routes to a container from lxc) that prohibited the full externalisation of even basic config.

Because distributions tend to have their own manner of configuring static networking information, this seems to have led to a temporary workaround practice whereby people either manually configure nodes or deploy using DHCP. However, OVF itself appears to assume the capacity for the virtualisation system to inflict a desired configuration on the virtual environment.

(Perhaps a specific, guest-readable, host-writable file made available to the container could be used to provide this configuration to the guest at boot time? Via /proc?)

- Walter
Re: [Lxc-users] [lxc-devel] Restarting snmp service on the host, shutdown snmp on the guest.
>> I've just found something that has been annoying me:
>>
>> when I restart the snmpd daemon on my host, it shuts down the snmpd daemon on my container.
>
> This, and many similar cases, happens - most likely - due to bugs in system startup scripts on the host.

Just briefly: this type of problem is very common across multiple distributions at the moment. Multiple Gentoo daemons have had similar bugs lodged and resolved.

- Walter
Re: [Lxc-users] regular lxc development call?
Apologies that I am travelling in north Africa at the moment - somewhat sudden change of schedule - and will have highly sporadic availability until late January. Nevertheless, please keep me Cc'd on any developments for subsequent calls.

W

On 13/12/2010 7:05 PM, "Stéphane Graber" wrote:

On Tue, 2010-11-30 at 03:06 +, Serge E. Hallyn wrote:
> Quoting Daniel Lezcano (daniel.lezc...@f...

I'd like to attend that call, Skype ID: stgraber

Depending on how many people are going to attend and where they're from, I might be able to provide a conf number. I asked my company (Revolution Linux) and we can use our 1-800 number for the call. I can also invite people from other countries as long as they are on landline.

9:30am central is a bit early for me as I tend to arrive at the office around 10am central (9am eastern). I'm usually around from 9am eastern to 11:30am and 12:30pm to 5:30pm. Monday being usually quite busy so would like to avoid if possible :)

I guess it might be useful to have a list somewhere (wiki?) of people who'd like to attend with availabilities and timezone.

--
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com
Re: [Lxc-users] Proposal for an FHS-compliant default guest filesystem location
Glad to see some further discussion.

> Personally, I like and use /srv/lxc for my VMs and don't see any conflict with the FHS. It is, after all, a site local configuration sort of thing that gets set up when you build the images and comprises, potentially, entire FHS-like sub hierarchies for the VMs.

The thing is, the FHS does say "no program should rely on a specific subdirectory structure of /srv existing or data necessarily being stored in /srv", which makes it a poor choice when considering, for example, the default directory to which 'lxc-create -t <template>' template-script-generated guests are deployed. Handling 10,000 what-ifs in bash isn't super enjoyable... if things go down the 'make it ultra configurable' path (not a bad thing) then perhaps we need to mature the template scripts to use a shared library of bash functions...

Services that are more daemon-oriented have no problem reconfiguring their default path. For lxc-utils, however, this information is presently distributed throughout a number of places, some of which are ignored or overwritten by various distributions' packages, so it becomes somewhat harder to manage 'on the fly' reconfiguration... That's what originally prompted this post, actually.

So while /srv could work, I do think /var is more suitable in this case.

> > > (eg: /var/lib/lxc/<guestname>)
> > > - all use of /etc/lxc/<guestname>/rootfs should be considered deprecated
>
> For the cgroup mount point, I've been using /var/lib/cgroup and I think (believe) that was the consensus of a discussion quite some time ago and is what's recommended in some howtos.

References please? Judging from the existing /dev, /sys and /proc mounts, anything kernel-centric that is going to become a base expectation in future should probably not reside in /miles/of/subdirectories/that/are/potentially/mounted/later/in/the/boot/process/than/real/root/thereby/causing/untold/issues

Hope you can understand my point... basically, if you've mounted /cgroup then you're set. If you've mounted /var/lib/cgroup and then want to (re)mount /var, you have issues...

> For the container mount-points and storage of the registered configuration file(s), /var/lib/lxc works just fine and would be in agreement with the strategy of /var/lib/cgroup for the cgroups, IMHO.

Personally I see lxc.conf as more suited to /etc/lxc/guestname.conf or /etc/lxc/guestname/lxc.conf, but it's very much a don't-really-care scenario. /etc would be the traditional option. And since we are talking about virtualising an entire system, it's not without precedent to segregate the configuration information and the filesystem (eg: VMware installs often do this), eg: by leaving config in /etc/lxc/* and filesystems elsewhere. IMHO.

- Walter
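PS. To make the mount-ordering concern concrete, a minimal /etc/fstab sketch of the arrangement I'm arguing for (device names are illustrative only):

# cgroup hierarchy mounted directly under /, available before anything else is (re)mounted
cgroup          /cgroup   cgroup   defaults   0 0
# /var may well be a separate filesystem mounted later in the boot process;
# putting the cgroup hierarchy underneath it invites ordering problems
/dev/vg0/var    /var      ext4     defaults   0 2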
Re: [Lxc-users] Proposal for an FHS-compliant default guest filesystem location
> > Therefore I humbly propose:
> > - the establishment of /var/lib/lxc as the default top-level directory for guest filesystems
>
> AFAICS we are still using /var/cache/lxc right now.

Hrrm, interesting. I haven't seen that come through on my distribution's packages (Gentoo). Quick survey - what paths are other distributions encouraging their users to put guest root filesystems in?

> Which I like better than /var/lib/lxc. If it has 'lib' in the pathname, it should have libraries!

I'm not certain about that: /var/lib/mysql has been the default for MySQL databases, forever.

The FHS v2.3 (p33) states that /var/lib is for "variable state information" and "An application (or a group of inter-related applications) must use a subdirectory of /var/lib for its data. There is one required subdirectory, /var/lib/misc, which is intended for state files that don't need a subdirectory; the other subdirectories should only be present if the application in question is included in the distribution. /var/lib/ is the location that must be used for all distribution packaging support."

Here's what the FHS v2.3 (p31) doc says about /var/cache, 'Application cache data': "/var/cache is intended for cached data from applications. Such data is locally generated as a result of time-consuming I/O or calculation. The application must be able to regenerate or restore the data. Unlike /var/spool, the cached files can be deleted without data loss. The data must remain valid between invocations of the application and rebooting the system. Files located under /var/cache may be expired in an application specific manner, by the system administrator, or both. The application must always be able to recover from manual deletion of these files (generally because of a disk space shortage). No other requirements are made on the data format of the cache directories."

Basically /var/cache seems to be the wrong place for this type of data. My vote is still for /var/lib instead of /var/cache.

> > - all use of /etc/lxc/<guestname>/rootfs should be considered deprecated
>
> I don't see that being used on my system, or in the git commit you cited.

As per the example given in the previous post, I believe it's used in some of the template scripts. (Also I have a feeling it may be referenced in some distribution-specific documentation or packages, though I'm not sure exactly which/where.)

Again, thanks a lot to all for great software :)

- Walter
[Lxc-users] Possibly of interest - Chrome OS plans
Assuming this is not already known to everyone, though it was apparently published in late 2009...

Apparently Chrome OS plans to use containers to increase system security. See http://www.chromium.org/chromium-os/chromiumos-design-docs/system-hardening (in particular, 'minijail' and 'libminijail').

Update from August 20 this year: "we have minijail implemented, just not feature-complete". http://code.google.com/p/chromium-os/issues/detail?id=380

Code is available to browse here: http://git.chromium.org/gitweb/?p=minijail.git;a=tree

The code itself states: "XXX This is a very early implementation of the jailing logic. XXX Many features are missing or will be made more tunable."

Hope the above is of interest to some!

- Walter
[Lxc-users] Proposal for an FHS-compliant default guest filesystem location
Hi all,

I have been playing with LXC on and off for a few months now. It's great. Thanks so much to all developers and the wider user community for making yet another powerful set of functionality available to the free world! :) Now that's out of the way...

One higher-level issue I see at present is that the various distribution packages and lxc userspace/template scripts seem to have different concepts of the 'correct' destination for container-related files. While /etc/lxc may be a good choice for configuration files, guest root filesystems may be of considerable size and should definitely stay away from /etc/. Right now this is not the case. For example, 'lxc-create -t fedora -n fedora' will create /etc/lxc/fedora/rootfs

To see what 'the right approach' might be, I had a look at the Filesystem Hierarchy Standard v2.3 (2004) @ http://www.pathname.com/fhs/ Apparently the decision earlier this year to move the template scripts out of standard binary locations was made against this standard, so it would seem a good place to seek guidance. http://lxc.git.sourceforge.net/git/gitweb.cgi?p=lxc/lxc;a=commitdiff_plain;h=c01d62f21b21ba6c2b8b78ab3c2b37cc8f8fd265

Reading the document, it appears that one of the following may be a better location for guests' root filesystems:

/srv/lxc
/var/lxc
/var/lib/lxc

The reason *not* to use /srv/lxc is the following quote from page 15 of the FHS v2.3 PDF: "This setup will differ from host to host. Therefore, no program should rely on a specific subdirectory structure of /srv existing or data necessarily being stored in /srv"

The data should not be placed in /usr ("/usr is shareable, read-only data" - page 18). It thus appears that /var/lxc or /var/lib/lxc would be best: "/var is specified here in order to make it possible to mount /usr read-only. Everything that once went into /usr that is written to during system operation (as opposed to installation and software maintenance) must be in /var" (page 30)

Later the FHS states (page 30): "Applications must generally not add directories to the top level of /var. Such directories should only be added if they have some system-wide implication, and in consultation with the FHS mailing list"

Therefore I humbly propose:
- the establishment of /var/lib/lxc as the default top-level directory for guest filesystems (eg: /var/lib/lxc/<guestname>)
- all use of /etc/lxc/<guestname>/rootfs should be considered deprecated
- legacy installations may create /etc/lxc/<guestname>/rootfs symlinks to assist with migration

Thoughts?

- Walter
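PS. A rough sketch of what the migration step might look like for an existing guest, assuming a guest named 'fedora' created under the current default (commands are illustrative only, not an agreed procedure):

# move the guest rootfs out of /etc and into the proposed location
mkdir -p /var/lib/lxc/fedora
mv /etc/lxc/fedora/rootfs /var/lib/lxc/fedora/rootfs

# leave a compatibility symlink behind for tools still expecting the old path
ln -s /var/lib/lxc/fedora/rootfs /etc/lxc/fedora/rootfs

# if the guest configuration references the old path (config location assumed),
# point lxc.rootfs at the new one
sed -i 's|/etc/lxc/fedora/rootfs|/var/lib/lxc/fedora/rootfs|' /etc/lxc/fedora/config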