Re: [Lxc-users] LVM and XFS quota

2013-09-18 Thread Gary Ballantyne
On 18/09/13 16:43, Gary Ballantyne wrote:

> I am pretty sure that XFS needs to be *initially* mounted with the quota
> option --- but after rebooting I have lost the uquota.

Update:

If I create an ordinary (not LVM-backed) container, then shuffle things 
around so that /var/lib/lxc/vm0/rootfs is mounted on an XFS logical 
volume (with the uquota option), I can get quotas working in the container. 
*But* only if I mirror user accounts between the container and host.

I also tried bind-mounting an XFS mount point into the container via 
lxc.mount.entry. It seemed to work --- "mount" yields:

/dev/mapper/vg_rootfs-lv_temp on /mnt type xfs 
(rw,relatime,attr2,inode64,usrquota)

But, no joy using xfs_quota within the container.
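
For reference, the bind mount was configured with an entry along these 
lines in the container config --- the container-side path here is 
illustrative:

lxc.mount.entry = /mnt mnt none bind 0 0

i.e. the host's /mnt, which carries the usrquota-mounted LV, bound to 
mnt/ relative to the container's rootfs.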




[Lxc-users] LVM and XFS quota

2013-09-17 Thread Gary Ballantyne
Hi All

I have a container running over an XFS logical volume, and would like to 
employ user-level disk quotas.

I haven't managed to get this working, but it seems like I need something like:

mount -o remount,uquota /var/lib/lxc/vm0/rootfs/

The change seems to stick:

/dev/mapper/lxc-vm0 on /var/lib/lxc/vm0/rootfs type xfs (rw,uquota)

But,

xfs_quota -xc 'report -h' /var/lib/lxc/vm0/rootfs/

yields

xfs_quota: cannot setup path for mount /var/lib/lxc/vm0/rootfs/: No such device or address

I am pretty sure that XFS needs to be *initially* mounted with the quota 
option --- but after rebooting I have lost the uquota.
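
Presumably the by-hand equivalent --- untested, device name taken from 
the mount output above --- would be something like:

# umount /var/lib/lxc/vm0/rootfs
# mount -o uquota /dev/mapper/lxc-vm0 /var/lib/lxc/vm0/rootfs   # quota option present at the initial mount
# xfs_quota -xc 'report -h' /var/lib/lxc/vm0/rootfs

but that is no help once lxc-start is the thing doing the mounting.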

My guess is that I need to change the way lxc initially mounts the LV, 
but I've no idea how to go about that. Appreciate any insight anyone may 
have.

Cheers

Gary




Re: [Lxc-users] kernel bug?

2013-03-13 Thread Gary Ballantyne
On 14/03/13 16:31, Serge Hallyn wrote:
> Looks to me like the problem is a conflict between memory cgroup and
> xen:

Thanks Serge. This is the distro: 
http://cloud-images.ubuntu.com/releases/raring/alpha-2/ (ami-c842608d). 
And a stable version of quantal before that.

I will start by looking for problems in my cgroup stuff. Thanks for the 
pointer.



[Lxc-users] kernel bug?

2013-03-13 Thread Gary Ballantyne
Hi All

I have an intermittent, but crippling, problem on a raring EC2 instance 
(also seen on quantal). It's a (raring) LVM-backed container --- I use cgroups 
directly (via /sys/fs) and iptables in the instance (not sure if that's 
relevant at all).

Occasionally, when stopping or starting the container (there is just 
one), the instance becomes unreachable. Rebooting doesn't help, but 
starting/stopping the instance, typically at least twice, fixes things 
(the instance is reachable, and the container auto-starts).

There doesn't appear to be anything sinister in /var/log/dmesg (upon 
restart), but the AWS system log is pasted below. I *think* the first 
part corresponds to before the crash, and the interesting bit is:

[3587596.471053] [ cut here ]
[3587596.471071] Kernel BUG at 816c7c2c [verbose debug info
...
[3587596.472282] ---[ end trace dc5c4320e1320f1d ]---
[3587596.472292] Fixing recursive fault but reboot is needed!

The log doesn't mention lxc, but it is definitely starting/stopping a 
container that triggers the problem. Not sure if this is an lxc problem or 
even an EC2 problem --- but I'd greatly appreciate any ideas anyone might have.

Thanks

Gary


=

 Xen Minimal OS!
   start_info: 0xce2000(VA)
 nr_pages: 0x6a400
   shared_inf: 0x7e02e000(MA)
  pt_base: 0xce5000(VA)
nr_pt_frames: 0xb
 mfn_list: 0x99(VA)
mod_start: 0x0(VA)
  mod_len: 0
flags: 0x0
 cmd_line: root=/dev/sda1 ro 4
   stack:  0x94f860-0x96f860
MM: Init
   _text: 0x0(VA)
  _etext: 0x5ffbd(VA)
_erodata: 0x78000(VA)
  _edata: 0x80ae0(VA)
stack start: 0x94f860(VA)
_end: 0x98fe68(VA)
   start_pfn: cf3
 max_pfn: 6a400
Mapping memory range 0x100 - 0x6a40
setting 0x0-0x78000 readonly
skipped 0x1000
MM: Initialise page allocator for 103e000(103e000)-6a40(6a40)
MM: done
Demand map pfns at 6a401000-206a401000.
Heap resides at 206a402000-406a402000.
Initialising timer interface
Initialising console ... done.
gnttab_table mapped at 0x6a401000.
Initialising scheduler
Thread "Idle": pointer: 0x206a402010, stack: 0x13b
Initialising xenbus
Thread "xenstore": pointer: 0x206a4027c0, stack: 0x13c
Dummy main: start_info=0x96f960
Thread "main": pointer: 0x206a402f70, stack: 0x13d
"main" "root=/dev/sda1" "ro" "4"
vbd 2049 is hd0
*** BLKFRONT for device/vbd/2049 **


backend at /local/domain/0/backend/vbd/2550/2049
Failed to read /local/domain/0/backend/vbd/2550/2049/feature-barrier.
Failed to read /local/domain/0/backend/vbd/2550/2049/feature-flush-cache.
16777216 sectors of 512 bytes
**
vbd 2051 is hd1
*** BLKFRONT for device/vbd/2051 **


backend at /local/domain/0/backend/vbd/2550/2051
Failed to read /local/domain/0/backend/vbd/2550/2051/feature-barrier.
Failed to read /local/domain/0/backend/vbd/2550/2051/feature-flush-cache.
1835008 sectors of 512 bytes
**
vbd 2064 is hd2
*** BLKFRONT for device/vbd/2064 **


backend at /local/domain/0/backend/vbd/2550/2064
Failed to read /local/domain/0/backend/vbd/2550/2064/feature-barrier.
Failed to read /local/domain/0/backend/vbd/2550/2064/feature-flush-cache.
312705024 sectors of 512 bytes
**
vbd 51792 is hd3
*** BLKFRONT for device/vbd/51792 **


backend at /local/domain/0/backend/vbd/2550/51792
Failed to read /local/domain/0/backend/vbd/2550/51792/feature-barrier.
Failed to read /local/domain/0/backend/vbd/2550/51792/feature-flush-cache.
2097152 sectors of 512 bytes
**
vbd 51904 is hd4
*** BLKFRONT for device/vbd/51904 **


backend at /local/domain/0/backend/vbd/2550/51904
Failed to read /local/domain/0/backend/vbd/2550/51904/feature-barrier.
Failed to read /local/domain/0/backend/vbd/2550/51904/feature-flush-cache.
2097152 sectors of 512 bytes
**
vbd 51952 is hd5
*** BLKFRONT for device/vbd/51952 **


backend at /local/domain/0/backend/vbd/2550/51952
Failed to read /local/domain/0/backend/vbd/2550/51952/feature-barrier.
Failed to read /local/domain/0/backend/vbd/2550/51952/feature-flush-cache.
2097152 sectors of 512 bytes
**
vbd 268441344 is hd6
*** BLKFRONT for device/vbd/268441344 **


backend at /local/domain/0/backend/vbd/2550/268441344
Failed to read /local/domain/0/backend/vbd/2550/268441344/feature-barrier.
Failed to read 
/local/domain/0/backend/vbd/2550/268441344/feature-flush-cache.
33554432 sectors of 512 bytes
**

  Booting 'Ubuntu Raring Ringtail (development branch), kernel 3.8.0-6-generic'



root  (hd0)

  Filesystem type is ext2fs, using whole disk

kernel  /boot/vmlinuz-3.8.0-6-generic root=LABEL=cloudimg-rootfs ro console=hvc0

initrd  /boot/init

Re: [Lxc-users] total RAM limit

2013-02-02 Thread Gary Ballantyne
On Fri, 1 Feb 2013 10:24:13 -0600
Serge Hallyn  wrote:

> 
> Did you actually test with a memory hog program? I just noticed there
> appears to be a bug in that if I
> 
>   d=/sys/fs/cgroup/memory/a
>   mkdir $d
>   echo 1 > $d/memory.use_hierarchy
>   echo 5000 > $d/memory.limit_in_bytes
>   mkdir $d/b
> 
> then $d/b/memory.limit_in_bytes does not report the reduced limit.  However
> when I run a program which does char *c = malloc(1) after doing
>   echo $$ > $d/b/tasks
> 
> then I get killed by OOM.
> 
> -serge

I tested with a large array in ipython. Though, from your example, it seems I'm 
missing memory.use_hierarchy.

In principle, it seems like I need something like:

echo 1 > /sys/fs/cgroup/memory/lxc/memory.use_hierarchy

But, I can't do that before the container is started (no lxc directory) or 
after it is started (device busy).
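
One thing I haven't tried: pre-creating the lxc cgroup by hand, so that 
use_hierarchy can be set while it still has no children. Untested sketch:

# mkdir -p /sys/fs/cgroup/memory/lxc                        # create the group before lxc-start does
# echo 1 > /sys/fs/cgroup/memory/lxc/memory.use_hierarchy   # only settable while the group has no children
# echo 64M > /sys/fs/cgroup/memory/lxc/memory.limit_in_bytes
# lxc-start -d -n vm0                                        # container cgroup is then created under lxc/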




Re: [Lxc-users] total RAM limit

2013-01-31 Thread Gary Ballantyne

On 01/02/13 02:33, lxc-users-requ...@lists.sourceforge.net wrote:

On 2013-01-31 07:41, Gary Ballantyne wrote:

> # echo '64M' > /sys/fs/cgroup/memory/lxc/memory.limit_in_bytes
> # cat /sys/fs/cgroup/memory/lxc/memory.limit_in_bytes (returns 67108864)

Dear Gary,

what's the value of '/sys/fs/cgroup/memory/lxc/'? If it's not '1', all cgroup settings in 
"lxc" will not be used as a template for the container "lxc/foo".

It might be much simpler to add lxc config entries like 
'lxc.cgroup.memory.limit_in_bytes = 64M' to your container configuration.

And to limit use of swap, to my knowledge you also have to limit 
'memory.memsw...'. But I may be wrong; on my LXC environment I don't use swap at 
all.


greetings

Guido

Thanks Guido

Did you mean the value of 
/sys/fs/cgroup/memory/lxc/cgroup.clone_children? That guy is '1'.


Modifying the container configuration certainly works --- but I'd like 
to limit the total RAM available to all containers, rather than limit 
each individual container. I tried setting 
lxc.cgroup.memory.limit_in_bytes to various values (-1, 1G) to see if 
the container would pick up 
/sys/fs/cgroup/memory/lxc/memory.limit_in_bytes --- but no luck.




Re: [Lxc-users] total RAM limit

2013-01-30 Thread Gary Ballantyne

On 12/01/13 08:49, Stéphane Graber wrote:

On 01/11/2013 01:17 PM, Gary Ballantyne wrote:

Hello All

I understand that I can limit the RAM of a single container via
lxc.cgroup.memory.limit_in_bytes. But, is there a way to limit the total
RAM available to all containers (without limiting each individually)?

E.g., say we have 4G available. Rather than specifying a maximum number
of containers (16 with 250M say), I'd like to allocate 4G to all
containers, without a hard upper limit on the number of containers (16
in this case), and let the performance degrade gradually as more
containers are added. (I'm anticipating being able to use many more
containers this way, since our container's RAM usage is likely to be
bursty).

You can, but not through lxc configuration.

LXC uses the "lxc" directory in the cgroup hierarchy, so that your
container is typically at:
lxc/<container name>/

Manually changing the keys in the lxc directory will set a shared quota
for everything under it.

As a concrete example, on my laptop, the memory cgroup is mounted at:
/sys/fs/cgroup/memory/

And individual container cgroups are at:
/sys/fs/cgroup/memory/lxc/<container name>/

So setting /sys/fs/cgroup/memory/lxc/memory.limit_in_bytes would do what
you want.


Thanks Stéphane

I tried to test this, but without any success. This is what I did:

On a new quantal EC2 instance I set the memory limit to 64M and created 
a container (vm0):


# apt-get update
# apt-get install lxc lvm2
# umount /mnt
# pvcreate /dev/xvdb
# vgcreate lxc /dev/xvdb
# echo "prepend domain-name-servers 10.0.3.1;" >> /etc/dhcp/dhclient.conf
# echo '64M' > /sys/fs/cgroup/memory/lxc/memory.limit_in_bytes
# cat /sys/fs/cgroup/memory/lxc/memory.limit_in_bytes (returns 67108864)
# lxc-create -t ubuntu -n vm0 -B lvm
# lxc-start -d -n vm0
# ssh ubuntu@vm0

On vm0 I installed ipython and numpy

# apt-get update
# apt-get install ipython python-numpy

In IPython (on vm0) I created a large random array:

: import numpy as np
: A=np.random.rand(1,1)

In 'top', on the host (or in the container), I was hoping to see the 
memory limited per the cgroup, and the container start 
using swap --- but IPython cheerfully used most/all of the host's RAM 
(depending on how large the NumPy array was).


Perhaps I am missing a step?
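
A more direct check might be to skip the container and put a shell 
straight into the lxc cgroup with a memory hog --- untested sketch:

# echo $$ > /sys/fs/cgroup/memory/lxc/tasks
# python -c 'x = bytearray(512*1024*1024); raw_input()'   # should be capped / pushed to swap if the limit is enforced

That would at least separate "limit isn't applied" from "limit isn't 
inherited by the container's cgroup".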



[Lxc-users] total RAM limit

2013-01-11 Thread Gary Ballantyne

Hello All

I understand that I can limit the RAM of a single container via 
lxc.cgroup.memory.limit_in_bytes. But, is there a way to limit the total 
RAM available to all containers (without limiting each individually)?


E.g., say we have 4G available. Rather than specifying a maximum number 
of containers (16 with 250M say), I'd like to allocate 4G to all 
containers, without a hard upper limit on the number of containers (16 
in this case), and let the performance degrade gradually as more 
containers are added. (I'm anticipating being able to use many more 
containers this way, since our container's RAM usage is likely to be 
bursty).


Many thanks

Gary


[Lxc-users] apparmor and nfs

2012-10-18 Thread Gary Ballantyne
Hi

I use "lxc.aa_profile = unconfined" to get the NFS client to work in a 
container (precise host and container).

Is that the best approach?

Thanks

Gary



[Lxc-users] cachefilesd

2012-02-15 Thread Gary Ballantyne
Hi

Does anyone have experience with cachefilesd they can share?

I have an Ubuntu precise host and container (latest EC2 beta). I 
installed cachefilesd on both host and container. cachefilesd starts on 
the host, but not in the container.

The container's syslog complained about access to /dev/cachefilesd. I 
only kinda know what I am doing here, but I commented out 
"lxc.cgroup.devices.deny = a" in the config file. Now I see:

Feb 15 17:51:07 localhost cachefilesd[358]: About to bind cache
Feb 15 17:51:07 localhost cachefilesd[358]: CacheFiles bind failed: 
errno 95 (Operation not supported)
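
I realise a narrower alternative to commenting out the deny rule would be 
to whitelist just that device: check the node on the host, then allow it 
in the container config. The minor number below is hypothetical:

# ls -l /dev/cachefilesd                 # note the major,minor pair
lxc.cgroup.devices.allow = c 10:61 rwm   # hypothetical numbers --- use the pair reported above

The errno 95 looks like a different problem, though.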

Appreciate your thoughts ...

Thanks

Gary



[Lxc-users] minimum fstab?

2012-02-04 Thread Gary Ballantyne
Hello List

Various templates have differing fstab definitions (at least for 
ubuntu). For example, [1] includes only /proc and /sys, [2] further adds 
/dev/pts, and [3] further adds /var/lock and /var/run.
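
For concreteness, the bare-minimum version as in [1] is roughly the 
following (container name illustrative):

proc   /var/lib/lxc/NAME/rootfs/proc  proc   nodev,noexec,nosuid 0 0
sysfs  /var/lib/lxc/NAME/rootfs/sys   sysfs  defaults            0 0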

Could someone please explain the pros/cons of including more than /proc 
and /sys (which I assume is the bare minimum)?

Many thanks,

Gary

[1] https://github.com/hallyn/lxc/blob/master/templates/lxc-ubuntu.in
[2] 
http://www.activestate.com/blog/2011/10/virtualization-ec2-cloud-using-lxc
[3] 
https://github.com/dereks/lxc-ubuntu-x/blob/master/lxc-ubuntu-x/hooks.d/configure_fstab



Re: [Lxc-users] seeing a network pause when starting and stopping LXCs - how do I stop this?

2011-12-08 Thread Gary Ballantyne
On 08/12/11 19:39, Daniel Lezcano wrote:
> On 12/08/2011 12:38 AM, Joseph Heck wrote:
>> I've been seeing a pause in the whole networking stack when starting
>> and stopping LXC - it seems to be somewhat intermittent, but happens
>> reasonably consistently the first time I start up the LXC.
>>
>> I'm using ubuntu 11.10, which is using LXC 0.7.5
>>
>> I'm starting the container with lxc-start -d -n $CONTAINERNAME
> That could be the bridge configuration. Did you do 'brctl setfd br0 0' ?
>

FWIW, I see the same issue following [1], which has 'brctl setfd br0 0'.

[1] 
http://www.activestate.com/blog/2011/10/virtualization-ec2-cloud-using-lxc
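
In case it helps, the setfd setting can also be made persistent in 
/etc/network/interfaces rather than run by hand --- sketch, assuming a 
bridge over eth0:

auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0    # same effect as 'brctl setfd br0 0' at ifup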



[Lxc-users] non-root exploits?

2011-09-05 Thread Gary Ballantyne
Hello All,

Is there any known means for a non-root user, who is ssh'd into a 
container, to attack the host (e.g. read a file, reboot the machine ...)?

From what I have read, the (potential) trouble seems to be with root 
users. Is that true?

Many thanks,

Gary





Re: [Lxc-users] Many containers and too many open files

2011-08-15 Thread Gary Ballantyne
On 16/08/11 06:52, Andre Nathan wrote:
> Hi Gary
>
> On Tue, 2011-08-16 at 06:38 +1200, Gary Ballantyne wrote:
>> Unfortunately, I am still getting the same errors with a little over 40
>> containers.
> I also had this problem. It was solved after Daniel suggested me to
> increase the following sysctl setting:
>
>fs.inotify.max_user_instances
>
> HTH,
> Andre
>

Hi Andre

That did it, thanks very much.

With:

echo 1024 > /proc/sys/fs/inotify/max_user_instances

I can fire up (at least) 100 containers.
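
To make that survive a reboot, the equivalent /etc/sysctl.conf entry 
should be:

fs.inotify.max_user_instances = 1024

(reloaded with 'sysctl -p').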

Cheers

Gary





Re: [Lxc-users] Many containers and too many open files

2011-08-15 Thread Gary Ballantyne
On 15/08/11 19:52, Jäkel, Guido wrote:
>> Hi
>>
>> Going back through the list, I couldn't find whether this has been resolved.
>>
>> I had a similar problem today with a little over 40 containers:
>>
>> # lxc-start -n gary
>> lxc-start: Too many open files - failed to inotify_init
>> lxc-start: failed to add utmp handler to mainloop
>> lxc-start: mainloop exited with an error
>> lxc-start: Device or resource busy - failed to remove cgroup '/cgroup/gary'
>>
> Dear Gary,
>
> did you (re-)configure  /etc/security/limits.conf  on the lxc host to have an 
> adequate value for filehandles in such an environment? E.g.:
>
>   [...]
>   *   hard    nofile  65536
>   *   soft    nofile  65000
>   [...]
>
> Greetings
>
> Guido

Thanks Guido.

I didn't know about that. I made the change you suggested (there was a 
slight snag for Ubuntu: 
http://serverfault.com/questions/235356/open-file-descriptor-limits-conf-setting-isnt-read-by-ulimit-even-when-pam-limit ). 
ulimit is pasted below.
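
(The snag being that limits.conf is only honoured if pam_limits is 
enabled for the session --- on Ubuntu, roughly a line like this in 
/etc/pam.d/common-session:

session required pam_limits.so

per the link above.)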

Unfortunately, I am still getting the same errors with a little over 40 
containers.

Any further ideas?

Cheers

Gary

# ulimit -a
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 20
file size   (blocks, -f) unlimited
pending signals (-i) 16382
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 65000
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) unlimited
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited



Re: [Lxc-users] Many containers and too many open files

2011-08-15 Thread Gary Ballantyne

Hi

Going back through the list, I couldn't find whether this has been resolved.

I had a similar problem today with a little over 40 containers:

# lxc-start -n gary
lxc-start: Too many open files - failed to inotify_init
lxc-start: failed to add utmp handler to mainloop
lxc-start: mainloop exited with an error
lxc-start: Device or resource busy - failed to remove cgroup '/cgroup/gary'

(On Ubuntu 10.10 (EC2), LXC 0.7.2. Installed with this recipe: 
http://www.phenona.com/blog/using-lxc-linux-containers-in-amazon-ec2/ )


Appreciate your thoughts.

Cheers,

Gary


On 03/02/2011 02:46 PM, Andre Nathan wrote:
>  On Wed, 2011-03-02 at 14:24 +0100, Daniel Lezcano wrote:
>>>  I could paste my configuration files if you think it'd help you
>>>  reproducing the issue.
>>  Yes, please :)
>  Ok. The test host has a br0 interface which is not attached to any
>  physical interface:
>
> auto br0
> iface br0 inet static
>   address 192.168.0.1
>   netmask 255.255.0.0
>   broadcast 192.168.255.255
>   bridge_stp off
>   bridge_maxwait 5
>   pre-up /usr/sbin/brctl addbr br0
>   post-up /usr/sbin/brctl setfd br0 0
>   post-down /usr/sbin/brctl delbr br0
>
>  I use NAT for container access, translating to the host's eth0 address.
>  There is also a MARK rule that I use for bandwidth limiting. These
>  commands are run on the host startup:
>
>  iptables -t mangle -A PREROUTING -i br0 -j MARK --set-mark 2
>  iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source $ETH0_IP
>  iptables -P FORWARD DROP
>  iptables -A FORWARD -i br0 -o eth0 -j ACCEPT
>  iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
>  tc qdisc add dev eth0 root handle 1: htb
>
>  I'm using a custom container creation script based on the ubuntu
>  template that you can find here:
>
> http://andre.people.digirati.com.br/lxc-create.sh
>
>  It sets up the bandwidth limit for each container and populates the
>  container's rootfs (there is a usage message :). It creates
>  configuration files like this:
>
> lxc.utsname = c2
>
> lxc.network.type = veth
> lxc.network.link = br0
> lxc.network.flags = up
> lxc.network.ipv4 = 192.168.0.2/16 192.168.255.255
> lxc.network.name = eth0
> lxc.network.veth.pair = veth0.2
>
> lxc.tty = 4
> lxc.pts = 1024
> lxc.rootfs = /var/lib/lxc/c2/rootfs
> lxc.mount  = /var/lib/lxc/c2/fstab
>
> lxc.cgroup.devices.deny = a
> # /dev/null and zero
> lxc.cgroup.devices.allow = c 1:3 rwm
> lxc.cgroup.devices.allow = c 1:5 rwm
> # consoles
> lxc.cgroup.devices.allow = c 5:1 rwm
> lxc.cgroup.devices.allow = c 5:0 rwm
> #lxc.cgroup.devices.allow = c 4:0 rwm
> #lxc.cgroup.devices.allow = c 4:1 rwm
> # /dev/{,u}random
> lxc.cgroup.devices.allow = c 1:9 rwm
> lxc.cgroup.devices.allow = c 1:8 rwm
> lxc.cgroup.devices.allow = c 136:* rwm
> lxc.cgroup.devices.allow = c 5:2 rwm
> # rtc
> lxc.cgroup.devices.allow = c 254:0 rwm
>
> # capabilities
> lxc.cap.drop = audit_control audit_write fsetid kill ipc_lock
>  ipc_owner lease linux_immutable mac_admin mac_override net_bind_service
>  mknod setfcap setpcap sys_admin sys_boot sys_module sys_nice sys_pacct
>  sys_ptrace sys_rawio sys_resource sys_time sys_tty_config
>
>  and fstab like this:
>
> /bin /var/lib/lxc/c2/rootfs/bin ext4 bind,ro 0 0
> /lib /var/lib/lxc/c2/rootfs/lib ext4 bind,ro 0 0
> /lib64 /var/lib/lxc/c2/rootfs/lib64 ext4 bind,ro 0 0
> /sbin /var/lib/lxc/c2/rootfs/sbin ext4 bind,ro 0 0
> /usr /var/lib/lxc/c2/rootfs/usr ext4 bind,ro 0 0
> /etc/environment /var/lib/lxc/c2/rootfs/etc/environment none bind,ro 0 0
> /etc/resolv.conf /var/lib/lxc/c2/rootfs/etc/resolv.conf none bind,ro 0 0
> /etc/localtime /var/lib/lxc/c2/rootfs/etc/localtime none bind,ro 0 0
> /etc/network/if-down.d /var/lib/lxc/c2/rootfs/etc/network/if-down.d none bind,ro 0 0
> /etc/network/if-post-down.d /var/lib/lxc/c2/rootfs/etc/network/if-post-down.d none bind,ro 0 0
> /etc/network/if-pre-up.d /var/lib/lxc/c2/rootfs/etc/network/if-pre-up.d none bind,ro 0 0
> /etc/network/if-up.d /var/lib/lxc/c2/rootfs/etc/network/if-up.d none bind,ro 0 0
> /etc/login.defs /var/lib/lxc/c2/rootfs/etc/login.defs none bind,ro 0 0
> /etc/securetty /var/lib/lxc/c2/rootfs/etc/securetty none bind,ro 0 0
> /etc/pam.conf /var/lib/lxc/c2/rootfs/etc/pam.conf none bind,ro 0 0
> /etc/pam.d /var/lib/lxc/c2/rootfs/etc/pam.d none bind,ro 0 0
> /etc/security /var/lib/lxc/c2/rootfs/etc/security none bind,ro 0 0
> /etc/alternatives /var/lib/lxc/c2/rootfs/etc/alternatives none bind,ro 0 0
> proc /var/lib/lxc/c2/rootfs/proc proc ro,nodev,noexec,nosuid 0 0
> devpts /var/lib/lxc/c2/rootfs/dev/pts devpts defaults 0 0
> sysfs /var/lib/lxc/c2/rootfs/sys sysfs defaults 0 0
>
>
>  I think that's all. If you need any more info feel free to ask :)

Thanks Andre !



Re: [Lxc-users] Root-less containers?

2011-02-05 Thread Gary Ballantyne
On 2/6/2011 3:56 PM, John Drescher wrote:
>> Is this important if, say, a malicious user has access to a container?
>> Or, can a container be configured such that they could do little harm?
> 
> You can easily make a container have its own filesystem and no access
> to the host's filesystem or devices. Is that what you are getting at?
> 

Say we have a process P, which accepts an input file from the user.
Further, suppose that P allows access to the command line -- and so a
user can potentially execute any command in the container.

To prevent malicious use, one option is to parse the input -- but
running P in a container with minimal resources seems a much better option.

I am trying to put a proof-of-concept together, and the root vs. normal
user issue seemed relevant. Perhaps a better question would have been,
what is the practical difference between the container running as a root
user and a normal user?



Re: [Lxc-users] Root-less containers?

2011-02-05 Thread Gary Ballantyne
On 2/6/2011 10:44 AM, Daniel Lezcano wrote:
> On 02/04/2011 07:24 PM, Andre Nathan wrote:
>> Hello
>>
>> Is it possible to have everything inside a container (including init,
>> getty and whatever daemons are installed) being run as a normal user?
>> That is, can I have a container with no root user in /etc/passwd?
> 
> Not yet. The user namespace is partially implemented in the kernel and the 
> userspace tools do not make use of it for the moment.

Is this important if, say, a malicious user has access to a container?
Or, can a container be configured such that they could do little harm?
(Apologies if this is a stupid question, but it's very significant to
our project).



Re: [Lxc-users] Ubuntu sshd template

2011-02-03 Thread Gary Ballantyne

On 2/3/2011 1:47 PM, Trent W. Buck wrote:
> Gary Ballantyne
>  writes:
> 
>> # /usr/bin/lxc-execute -n foo -f
>> /usr/share/doc/lxc/examples/lxc-veth.conf /bin/bash
>>
>> The container fired up, and I could ping to/from the host. However, when
>> I left the container (with "exit") things got weird. In a second
>> terminal (already connected to the host), I got repeated errors of the form:
>>
>> [ 1396.169010] unregister_netdevice: waiting for lo to become free.
>> Usage count = 3.
> 
> I don't know about that one, sorry.  IIRC I got the lxc-ssh container to
> DTRT on 10.04, but it's entirely possible I was getting those dmesg
> errors and not seeing them, because I wasn't on a local tty.

Good point -- the errors are only shown on the local tty.

> UPDATE: oh, I see you're just using lxc-veth for bash... I dunno
> anything about that.  I guess you could be getting that when bash tries
> to initialize itself (e.g. setting $HOSTNAME)?  Do you get the same
> problems with /bin/dash or (say) /bin/pwd instead?

Same behavior with dash.

There is no science behind using lxc-veth, only that (a) it went well
in 9.10; (b) it appears to use a bridge (which I read somewhere was the
safest/easiest option); and (c) it seemed a reasonable place to start.

>> Where the bracketed number changes for each error. (A new error appears
>> every 10 seconds or so).
> 
> The bracketed number is the number of seconds since boot.
> The message is being emitted by the kernel.
> 
>> Any suggestions?
> 
> Show us your .conf.

Here is the .conf -- I have only changed .ipv4 from the lxc-veth.conf
that ships with the installation.

lxc.utsname = beta
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.hwaddr = 4a:49:43:49:79:bf
lxc.network.ipv4 = 10.89.233.55/24
lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3597

Here is /etc/network/interfaces (I have followed a recipe to set up the
bridge, but don't really know what I am doing here.)

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
#iface eth0 inet dhcp
iface eth0 inet manual

auto br0
iface br0 inet static
address 10.89.233.57
network 10.89.233.0
netmask 255.255.255.0
broadcast 10.89.233.255
gateway 10.89.233.1
bridge_ports eth0
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off

> Maybe show us some diagnostics, too

These are after a reboot, followed by # /usr/bin/lxc-execute -n foo -f
/usr/share/doc/lxc/examples/lxc-veth.conf /bin/bash.

After "exit"-ing the container, the usual errors started appearing in
the local tty and I was told (on the remote tty):

lxc-execute: Device or resource busy - failed to remove cgroup '/cgroup/foo'

I am not sure if it is helpful to repeat the diagnostics below at this
stage or not -- please let me know if it would be helpful. Cheers.

> lxc-ps auxf

From the container:

root@beta:/usr/share/doc/lxc/examples# lxc-ps auxf
CONTAINER  USER  PID %CPU %MEM   VSZ   RSS TTY   STAT START  TIME COMMAND
foo        root    1  0.0  0.1  2000   548 pts/0 S    10:22  0:00 /usr/lib/lxc/lxc-init -- /bin/bash
foo        root    2  0.0  0.3  5204  1772 pts/0 S    10:22  0:00 /bin/bash
foo        root   14  0.0  0.5  6332  2596 pts/0 S+   10:22  0:00  \_ /usr/bin/perl /usr/bin/lxc-ps auxf
           root   15  0.0  0.1  4556   964 pts/0 R+   10:22  0:00      \_ ps auxf

From the host:

# lxc-ps auxf
CONTAINER  USER  PID %CPU %MEM   VSZ   RSS TTY   STAT START  TIME COMMAND
           root    2  0.0  0.0     0     0 ?     S    Feb03  0:00 [kthreadd]
           root    3  0.0  0.0     0     0 ?     S    Feb03  0:00  \_ [ksoftirqd/0]
           root    4  0.0  0.0     0     0 ?     S    Feb03  0:00  \_ [migration/0]
           root    5  0.0  0.0     0     0 ?     S    Feb03  0:00  \_ [watchdog/0]
           root    6  0.0  0.0     0     0 ?     S    Feb03  0:01  \_ [events/0]
           root    7  0.0  0.0     0     0 ?     S    Feb03  0:00  \_ [cpuset]
           root    8  0.0  0.0     0     0 ?     S    Feb03  0:00  \_ [khelper]
           root    9  0.0  0.0     0     0 ?     S    Feb03  0:00  \_ [netns]
           root   10  0.0  0.0     0     0 ?     S    Feb03  0:00  \_ [async/mgr]
           root   11  0.0  0.0     0     0 ?     S    Feb03  0:00  \_ [pm]
           root   12  0.0  0.0     0     0 ?     S    Feb03  0:00  \_ [sync_supers]
           root   13  0.0  0.0     0     0 ?     S    Feb03  0:00  \_ [bdi-default]
           root   14  0.0  0.0     0     0 ?     S    Feb03  0:00  \_ [kintegrityd/0]
           root   15  0.0  0.0

Re: [Lxc-users] Ubuntu sshd template

2011-02-02 Thread Gary Ballantyne

On 2/2/2011 1:13 PM, Trent W. Buck wrote:
> Gary Ballantyne
>  writes:
> 
>> Would greatly appreciate any help getting the sshd template working on
>> my Ubuntu 9.10 host.
> 
> I recommend you upgrade to 10.04 LTS and try again.  9.10 will be
> end-of-lifed by Canonical in three months, after which time there will
> be no security patches released for 9.10.
> 
> (10.04 LTS packages receive support until 2013 or 2015, depending on
> whether they're considered "server" packages.)

Good idea. To test it out I installed 10.10 on a vmware server (10.10
uses lxc 0.7.2-1, 10.04 uses 0.6.5-1).

I installed openssh-client, openssh-server and bridge-utils; set up the
bridge [1] and the cgroup [2]; installed lxc.

[1] http://www.howtoforge.com/virtualization-with-kvm-on-ubuntu-10.10
[2] http://lxc.teegra.net/#_setup_of_the_controlling_host

To check everything was OK I changed the IP address in lxc-veth.conf, then:

# /usr/bin/lxc-execute -n foo -f
/usr/share/doc/lxc/examples/lxc-veth.conf /bin/bash

The container fired up, and I could ping to/from the host. However, when
I left the container (with "exit") things got weird. In a second
terminal (already connected to the host), I got repeated errors of the form:

[ 1396.169010] unregister_netdevice: waiting for lo to become free.
Usage count = 3.

Where the bracketed number changes for each error. (A new error appears
every 10 seconds or so).

Then, if I try to fire up the container again, the terminal freezes and
I have to reboot to get things back to normal (i.e. where I can
successfully run the lxc-execute command above).

Thinking this was an issue with vmware I repeated the process on an old
machine with the same result (the only difference was that there was
only one terminal open and the [ 1396.169010] error appeared in that
terminal)

Any suggestions?



Re: [Lxc-users] Ubuntu sshd template

2011-02-01 Thread Gary Ballantyne


On 2/1/2011 11:05 PM, Daniel Lezcano wrote:
> On 02/01/2011 12:16 AM, Gary Ballantyne wrote:
>> Hi
>>
>> Would greatly appreciate any help getting the sshd template working on
>> my Ubuntu 9.10 host.
>>
>> I can ssh to and from the container and host when the container is
>> generated by:
>>
>> lxc-execute -n foo2 -f /usr/share/doc/lxc/examples/lxc/lxc-veth-gb.conf
>> /bin/bash
>>
>> Here I have slightly modified the example conf file:
>>
>> # Container with network virtualized using a pre-configured bridge named
>> br0 and
>> # veth pair virtual network devices
>> lxc.utsname = gb1
>> lxc.network.type = veth
>> lxc.network.flags = up
>> lxc.network.link = br0
>> lxc.network.ipv4 = 10.89.233.15/24
>>
>> Also, I can create a sshd template using "./lxc-sshd create" (ip =
>> 10.89.233.13/24).
> 
> What version of lxc are you using ?

Sorry, stupid thing to leave out: 0.6.3-1 (installed with apt-get).

>> However, the container doesn't appear to start (or, at least, persist)
>> with the suggested command:
>>
>> # lxc-execute -n sshd /usr/sbin/sshd&
>> [1] 7612
>> # lxc-info -n sshd
>> 'sshd' is STOPPED
>> [1]+  Exit 139    lxc-execute -n sshd /usr/sbin/sshd
>>
>> But, I can start the container with:
>>
>> # lxc-start -n sshd&
>> [1] 7623
>> # lxc-info -n sshd
>> 'sshd' is RUNNING
>>
>> And I can ping the container:
>>
>> root@haulashore1:/usr/share/doc/lxc/examples# ping 10.89.233.13
>> PING 10.89.233.13 (10.89.233.13) 56(84) bytes of data.
>> 64 bytes from 10.89.233.13: icmp_seq=1 ttl=64 time=9.81 ms
>> 64 bytes from 10.89.233.13: icmp_seq=2 ttl=64 time=0.049 ms
>>
>> But, I can't ssh to the container:
>>
>> # ssh 10.89.233.13
>> ssh: connect to host 10.89.233.13 port 22: Connection refused
> 
> What is the output of lxc-ps --lxc ?

# lxc-start -n sshd &
[1] 8727

# lxc-ps --lxc
CONTAINERPID TTY  TIME CMD
sshd8728 ?00:00:00 init

>> (And, I can't connect with lxc-console)
>>
>> # lxc-console -n sshd
>> lxc-console: failed to connect to the tty service
> 
> Yeah, with the sshd template you can't because there is only the sshd
> daemon running inside, no mingetty respawned by an init process and the
> console configuration is not set for this template.
> 



[Lxc-users] Ubuntu sshd template

2011-01-31 Thread Gary Ballantyne
Hi

Would greatly appreciate any help getting the sshd template working on
my Ubuntu 9.10 host.

I can ssh to and from the container and host when the container is
generated by:

lxc-execute -n foo2 -f /usr/share/doc/lxc/examples/lxc/lxc-veth-gb.conf
/bin/bash

Here I have slightly modified the example conf file:

# Container with network virtualized using a pre-configured bridge named
br0 and
# veth pair virtual network devices
lxc.utsname = gb1
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.ipv4 = 10.89.233.15/24

Also, I can create a sshd template using "./lxc-sshd create" (ip =
10.89.233.13/24).

However, the container doesn't appear to start (or, at least, persist)
with the suggested command:

# lxc-execute -n sshd /usr/sbin/sshd &
[1] 7612
# lxc-info -n sshd
'sshd' is STOPPED
[1]+  Exit 139    lxc-execute -n sshd /usr/sbin/sshd

But, I can start the container with:

# lxc-start -n sshd &
[1] 7623
# lxc-info -n sshd
'sshd' is RUNNING

And I can ping the container:

root@haulashore1:/usr/share/doc/lxc/examples# ping 10.89.233.13
PING 10.89.233.13 (10.89.233.13) 56(84) bytes of data.
64 bytes from 10.89.233.13: icmp_seq=1 ttl=64 time=9.81 ms
64 bytes from 10.89.233.13: icmp_seq=2 ttl=64 time=0.049 ms

But, I can't ssh to the container:

# ssh 10.89.233.13
ssh: connect to host 10.89.233.13 port 22: Connection refused

(And, I can't connect with lxc-console)

# lxc-console -n sshd
lxc-console: failed to connect to the tty service

Thanks in advance,

Gary


