[Lxc-users] clones of clones are failing to start

2013-07-17 Thread Jay Taylor
Hi,

I'm not sure what I am doing wrong here.

This is an EC2 VM with /var/lib/lxc linked to a mounted BTRFS volume.

OS: Ubuntu 12.04 LTS
LXC Version: 0.9.0
FS: BTRFS

The current situation is that I have a base container, "base", which starts
fine.  This has then been cloned (with snapshot) to another container, e.g.
"test", which is also able to start fine.  But if I then clone (again with
snapshot) "test" to "test_1", the new container won't start.
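
For reference, the clone steps were along these lines (a sketch; lxc-clone
0.9 flags, container names as above):

sudo lxc-clone -s -o base -n test      # snapshot clone of "base": starts fine
sudo lxc-clone -s -o test -n test_1    # snapshot clone of "test": won't start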

Here is the output:
$ sudo lxc-start -n test_1
lxc-start: unknown capability mac_test_1
lxc-start: failed to drop capabilities
lxc-start: failed to setup the container
lxc-start: invalid sequence number 1. expected 2
lxc-start: failed to spawn 'test_1'

I'm not yet sure why this is happening.  Do you know what might be causing
this, or how I might best go about resolving it?

Thank you,
Jay Taylor


Re: [Lxc-users] clones of clones are failing to start

2013-07-17 Thread Jay Taylor
Here it is; it's completely vanilla:

$ cat /var/lib/lxc/test_1/config
lxc.mount = /var/lib/lxc/test_1/fstab
lxc.tty = 4
lxc.pts = 1024
lxc.devttydir = lxc
lxc.arch = x86_64
lxc.logfile = /var/log/lxc/test_1.log
lxc.cgroup.devices.deny = a
lxc.cgroup.devices.allow = c *:* m
lxc.cgroup.devices.allow = b *:* m
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 136:* rwm
lxc.cgroup.devices.allow = c 5:2 rwm
lxc.cgroup.devices.allow = c 254:0 rm
lxc.cgroup.devices.allow = c 10:229 rwm
lxc.cgroup.devices.allow = c 10:200 rwm
lxc.cgroup.devices.allow = c 1:7 rwm
lxc.cgroup.devices.allow = c 10:228 rwm
lxc.cgroup.devices.allow = c 10:232 rwm
lxc.utsname = test_1
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = lxcbr0
lxc.network.hwaddr = 00:16:3e:d8:0d:f2
lxc.cap.drop = sys_module
lxc.cap.drop = mac_test_1
lxc.cap.drop = mac_override
lxc.cap.drop = sys_time
lxc.rootfs = /var/lib/lxc/test_1/rootfs
lxc.pivotdir = lxc_putold


On Wed, Jul 17, 2013 at 1:07 PM, Tamas Papp  wrote:

> On 07/17/2013 10:01 PM, Jay Taylor wrote:
> > Hi,
> >
> > I'm not sure what I am doing wrong here.
> >
> > This is a EC2 VM with /var/lib/lxc linking to a mounted BTRFS volume.
> >
> > OS: Ubuntu 12.04 LTS
> > LXC Version: 0.9.0
> > FS: BTRFS
> >
> > The current situation is that I have a base container, "base", which
> starts fine.  This has then
> > been cloned (with snapshot) to another container, e.g. "test".  "test"
> is also able to start fine.
> >  Then if I clone (again with snapshot) "test" to "test_1", and it won't
> start.
> >
> > Here is the output:
> > $ sudo lxc-start -n test_1
> > lxc-start: unknown capability mac_test_1
> > lxc-start: failed to drop capabilities
> > lxc-start: failed to setup the container
> > lxc-start: invalid sequence number 1. expected 2
> > lxc-start: failed to spawn 'test_1'
> >
> > I not yet sure why this is happening.  Do you know what might be causing
> this or how I might best
> > go about resolving it?
> >
>
> How does your config file look?
>
> tamas


Re: [Lxc-users] clones of clones are failing to start

2013-07-17 Thread Jay Taylor
Yes, the actual name was "admin".  Yikes.  Is there a list of "don't do's"
anywhere?
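
For anyone else who hits this, repairing the mangled line by hand appears to
be enough (a sketch, assuming the config path shown above):

sudo sed -i 's/^lxc.cap.drop = mac_test_1$/lxc.cap.drop = mac_admin/' /var/lib/lxc/test_1/config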


On Wed, Jul 17, 2013 at 2:31 PM, Serge Hallyn wrote:

> Quoting Jay Taylor (j...@jaytaylor.com):
> > lxc.cap.drop = sys_module
> > lxc.cap.drop = mac_test_1
>
> Jinkeys - was the intermediate container clone name 'admin'?
>
> > lxc.cap.drop = mac_override
> > lxc.cap.drop = sys_time
> > lxc.rootfs = /var/lib/lxc/test_1/rootfs
> > lxc.pivotdir = lxc_putold
>
> clearly the updating of hostnames should always exempt lxc.cap.drop,
> and a few other lines.  Just how robust we can make this I'm not 100%
> sure.  (I.e. in a lxc.hook.mount = /opt/mycontainer/hooks/mycontainer.1,
> how can we know which 'mycontainer' strings should be replaced?)
>
> -serge
>


Re: [Lxc-users] lxc-ls --fancy is lying

2013-07-31 Thread Jay Taylor
Out of curiosity, did you see it in the past before you updated from
the PPA today?


On Wed, Jul 31, 2013 at 4:16 AM, Tamas Papp  wrote:

> hi All,
>
> I've seen something like this a couple of times in the past:
>
> # lxc-ls --fancy --stopped --fancy-format name,state
> NAME STATE
> 
> finance  STOPPED
> hammer   STOPPED
> ijc-cipool   STOPPED
> jay  STOPPED
> marvin4jsci  STOPPED
> marvinci STOPPED
> svntest  STOPPED
>
>
> In fact the VMs are running fine, but I updated lxc (from the daily PPA)
> this morning.
>
> Is this some kind of bug? Have others seen this issue?
>
>
> 10x
> tamas


[Lxc-users] Containers are all getting same IP address

2013-08-09 Thread Jay Taylor
Greetings,

I am hitting a problem with LXC where it's assigning the same IP address to
different containers:

sendhub_important_v7_10010   RUNNING  10.0.3.141  - NO
sendhub_important_v7_10017   RUNNING  10.0.3.141  - NO
sendhub_important_v7_10023   RUNNING  10.0.3.141  - NO
sendhub_important_v7_10053   RUNNING  10.0.3.141  - NO
sendhub_important_v7_10093   RUNNING  10.0.3.141  - NO
sendhub_important_v7_10103   RUNNING  10.0.3.141  - NO
sendhub_important_v7_10108   RUNNING  10.0.3.141  - NO
sendhub_important_v7_10128   RUNNING  10.0.3.141  - NO
sendhub_important_v7_10133   RUNNING  10.0.3.141  - NO
sendhub_important_v7_10143   RUNNING  10.0.3.141  - NO
sendhub_important_v7_10163   RUNNING  10.0.3.141  - NO
sendhub_important_v7_10173   RUNNING  10.0.3.141  - NO
sendhub_important_v7_10178   RUNNING  10.0.3.141  - NO
sendhub_web_v7_10028 RUNNING  10.0.3.141  - NO
sendhub_web_v7_10033 RUNNING  10.0.3.141  - NO
sendhub_web_v7_10038 RUNNING  10.0.3.141  - NO
sendhub_web_v7_10058 RUNNING  10.0.3.141  - NO
sendhub_web_v7_10063 RUNNING  10.0.3.141  - NO
sendhub_web_v7_10068 RUNNING  10.0.3.141  - NO
sendhub_web_v7_10073 RUNNING  10.0.3.141  - NO
sendhub_web_v7_10078 RUNNING  10.0.3.141  - NO
sendhub_web_v7_10083 RUNNING  10.0.3.141  - NO
sendhub_web_v7_10098 RUNNING  10.0.3.141  - NO
sendhub_web_v7_10113 RUNNING  10.0.3.141  - NO
sendhub_web_v7_10138 RUNNING  10.0.3.141  - NO
sendhub_web_v7_10148 RUNNING  10.0.3.141  - NO
sendhub_web_v7_10153 RUNNING  10.0.3.141  - NO
sendhub_web_v7_10168 RUNNING  10.0.3.141  - NO
sendhub_web_v7_10183 RUNNING  10.0.3.141  - NO
sendhub_web_v7_10188 RUNNING  10.0.3.141  - NO
sendhub_worker_v7_10043  RUNNING  10.0.3.141  - NO
sendhub_worker_v7_10088  RUNNING  10.0.3.141  - NO
sendhub_worker_v7_10118  RUNNING  10.0.3.141  - NO
sendhub_worker_v7_10123  RUNNING  10.0.3.141  - NO
sendhub_worker_v7_10158  RUNNING  10.0.3.141  - NO
sendhub_worker_v7_10193  RUNNING  10.0.3.141  - NO
sendhub_worker_v7_10198  RUNNING  10.0.3.141  - NO

Any ideas on what can cause this to happen?  Also, even across different
container hosts, the containers are all being assigned this same IP.
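
In case it helps with diagnosis, here is a quick check (a sketch) for
duplicate MACs across the container configs; it prints nothing when every
hwaddr is unique:

grep -h 'lxc.network.hwaddr' /var/lib/lxc/*/config | sort | uniq -d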


Re: [Lxc-users] Containers are all getting same IP address

2013-08-09 Thread Jay Taylor
Also, now when I try to clone "sendhub" to a new container, it doesn't even
get an IP address:

sendhub_important_v7_10300   RUNNING  -   - NO


On Fri, Aug 9, 2013 at 7:08 AM, Jay Taylor  wrote:

> Greetings,
>
> I am hitting a problem with LXC were it's assigned the same IP address to
> different containers:
>
> sendhub_important_v7_10010   RUNNING  10.0.3.141  - NO
> sendhub_important_v7_10017   RUNNING  10.0.3.141  - NO
> sendhub_important_v7_10023   RUNNING  10.0.3.141  - NO
> sendhub_important_v7_10053   RUNNING  10.0.3.141  - NO
> sendhub_important_v7_10093   RUNNING  10.0.3.141  - NO
> sendhub_important_v7_10103   RUNNING  10.0.3.141  - NO
> sendhub_important_v7_10108   RUNNING  10.0.3.141  - NO
> sendhub_important_v7_10128   RUNNING  10.0.3.141  - NO
> sendhub_important_v7_10133   RUNNING  10.0.3.141  - NO
> sendhub_important_v7_10143   RUNNING  10.0.3.141  - NO
> sendhub_important_v7_10163   RUNNING  10.0.3.141  - NO
> sendhub_important_v7_10173   RUNNING  10.0.3.141  - NO
> sendhub_important_v7_10178   RUNNING  10.0.3.141  - NO
> sendhub_web_v7_10028 RUNNING  10.0.3.141  - NO
> sendhub_web_v7_10033 RUNNING  10.0.3.141  - NO
> sendhub_web_v7_10038 RUNNING  10.0.3.141  - NO
> sendhub_web_v7_10058 RUNNING  10.0.3.141  - NO
> sendhub_web_v7_10063 RUNNING  10.0.3.141  - NO
> sendhub_web_v7_10068 RUNNING  10.0.3.141  - NO
> sendhub_web_v7_10073 RUNNING  10.0.3.141  - NO
> sendhub_web_v7_10078 RUNNING  10.0.3.141  - NO
> sendhub_web_v7_10083 RUNNING  10.0.3.141  - NO
> sendhub_web_v7_10098 RUNNING  10.0.3.141  - NO
> sendhub_web_v7_10113 RUNNING  10.0.3.141  - NO
> sendhub_web_v7_10138 RUNNING  10.0.3.141  - NO
> sendhub_web_v7_10148 RUNNING  10.0.3.141  - NO
> sendhub_web_v7_10153 RUNNING  10.0.3.141  - NO
> sendhub_web_v7_10168 RUNNING  10.0.3.141  - NO
> sendhub_web_v7_10183 RUNNING  10.0.3.141  - NO
> sendhub_web_v7_10188 RUNNING  10.0.3.141  - NO
> sendhub_worker_v7_10043  RUNNING  10.0.3.141  - NO
> sendhub_worker_v7_10088  RUNNING  10.0.3.141  - NO
> sendhub_worker_v7_10118  RUNNING  10.0.3.141  - NO
> sendhub_worker_v7_10123  RUNNING  10.0.3.141  - NO
> sendhub_worker_v7_10158  RUNNING  10.0.3.141  - NO
> sendhub_worker_v7_10193  RUNNING  10.0.3.141  - NO
> sendhub_worker_v7_10198  RUNNING  10.0.3.141  - NO
>
> Any ideas on what can cause this to happen?  Also, even across different
> container hosts, the containers are all being assigned this same ip.
>


Re: [Lxc-users] Containers are all getting same IP address

2013-08-09 Thread Jay Taylor
Hi Serge,

That's the thing: the containers all have unique MAC addresses.  They're being
created by `lxc-clone -B btrfs -s -o base_container -n <new_name>`.

Now every container I try to create or stop/start doesn't receive any IP
address.  And all of those duplicates no longer show any IP.

Here is lxc-ls --fancy:

sendhub STOPPED  - - NO
sendhub_important_v7_10012  RUNNING  - - NO
sendhub_important_v7_10019  RUNNING  - - NO
sendhub_important_v7_10024  RUNNING  - - NO
sendhub_important_v7_10054  RUNNING  - - NO
sendhub_important_v7_10069  RUNNING  - - NO
sendhub_important_v7_10074  RUNNING  - - NO
sendhub_important_v7_10089  RUNNING  - - NO
sendhub_important_v7_10094  RUNNING  - - NO
sendhub_important_v7_10104  RUNNING  - - NO
sendhub_important_v7_10129  RUNNING  - - NO
sendhub_important_v7_10139  RUNNING  - - NO
sendhub_important_v7_10144  RUNNING  - - NO
sendhub_important_v7_10149  RUNNING  - - NO
sendhub_important_v7_10164  RUNNING  - - NO
sendhub_important_v7_10174  RUNNING  - - NO
sendhub_important_v7_10179  RUNNING  - - NO
sendhub_important_v7_10189  RUNNING  - - NO
sendhub_scheduler_v7_10159  RUNNING  - - NO
sendhub_web_v7_10029RUNNING  - - NO
sendhub_web_v7_10034RUNNING  - - NO
sendhub_web_v7_10039RUNNING  - - NO
sendhub_web_v7_10059RUNNING  - - NO
sendhub_web_v7_10064RUNNING  - - NO
sendhub_web_v7_10084RUNNING  - - NO
sendhub_web_v7_10099RUNNING  - - NO
sendhub_web_v7_10109RUNNING  - - NO
sendhub_web_v7_10114RUNNING  - - NO
sendhub_web_v7_10119RUNNING  - - NO
sendhub_web_v7_10134RUNNING  - - NO
sendhub_web_v7_10154RUNNING  - - NO
sendhub_web_v7_10169RUNNING  - - NO
sendhub_web_v7_10184RUNNING  - - NO
sendhub_worker_v7_10044 RUNNING  - - NO
sendhub_worker_v7_10079 RUNNING  - - NO
sendhub_worker_v7_10124 RUNNING  - - NO
sendhub_worker_v7_10194 RUNNING  - - NO
sendhub_worker_v7_10199 RUNNING  - - NO

What could have gone wrong here? I have 5/5 nodes, all in this state
(meaning this problem has been reproduced across multiple hosts).


On Fri, Aug 9, 2013 at 8:02 AM, Serge Hallyn wrote:

> Quoting Jay Taylor (j...@jaytaylor.com):
> > Greetings,
> >
> > I am hitting a problem with LXC were it's assigned the same IP address to
> > different containers:
>
> How are you creating the containers?  You need to give each container a
> unique mac address.  lxc-clone should do this for you, as should
> lxc-create.  If you're manually copying the containers, then you'll
> have to do it by hand.
>
> can you show the lxc.network sections of two of the containers, and
> the commands you used to create them?
>


Re: [Lxc-users] Containers are all getting same IP address

2013-08-09 Thread Jay Taylor
Also, here is one of the container configs:

lxc.mount = /var/lib/lxc/sendhub_web_v7_10146/fstab
lxc.tty = 4
lxc.pts = 1024
lxc.devttydir = lxc
lxc.arch = x86_64
lxc.logfile = /var/log/lxc/sendhub_web_v7_10146.log
lxc.cgroup.devices.deny = a
lxc.cgroup.devices.allow = c *:* m
lxc.cgroup.devices.allow = b *:* m
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 136:* rwm
lxc.cgroup.devices.allow = c 5:2 rwm
lxc.cgroup.devices.allow = c 254:0 rm
lxc.cgroup.devices.allow = c 10:229 rwm
lxc.cgroup.devices.allow = c 10:200 rwm
lxc.cgroup.devices.allow = c 1:7 rwm
lxc.cgroup.devices.allow = c 10:228 rwm
lxc.cgroup.devices.allow = c 10:232 rwm
lxc.utsname = sendhub_web_v7_10146
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = lxcbr0
lxc.network.hwaddr = 00:16:3e:07:7b:8f
lxc.cap.drop = sys_module
lxc.cap.drop = mac_admin
lxc.cap.drop = mac_override
lxc.cap.drop = sys_time
lxc.rootfs = /var/lib/lxc/sendhub_web_v7_10146/rootfs
lxc.pivotdir = lxc_putold


On Fri, Aug 9, 2013 at 8:52 AM, Jay Taylor  wrote:

> Hi Serge,
>
> That's the thing..the containers all have unique addresses.  They're being
> created by `lxc-clone -B btrfs -s -o base_container -n 
>
> Now...every container I try to create or stop/start doesn't receive any IP
> address.  And all those duplicates no longer show any ip.
>
> here is lxc-ls --fancy:
>
>  sendhub STOPPED  - - NO
> sendhub_important_v7_10012  RUNNING  - - NO
> sendhub_important_v7_10019  RUNNING  - - NO
> sendhub_important_v7_10024  RUNNING  - - NO
> sendhub_important_v7_10054  RUNNING  - - NO
> sendhub_important_v7_10069  RUNNING  - - NO
> sendhub_important_v7_10074  RUNNING  - - NO
> sendhub_important_v7_10089  RUNNING  - - NO
> sendhub_important_v7_10094  RUNNING  - - NO
> sendhub_important_v7_10104  RUNNING  - - NO
> sendhub_important_v7_10129  RUNNING  - - NO
> sendhub_important_v7_10139  RUNNING  - - NO
> sendhub_important_v7_10144  RUNNING  - - NO
> sendhub_important_v7_10149  RUNNING  - - NO
> sendhub_important_v7_10164  RUNNING  - - NO
> sendhub_important_v7_10174  RUNNING  - - NO
> sendhub_important_v7_10179  RUNNING  - - NO
> sendhub_important_v7_10189  RUNNING  - - NO
> sendhub_scheduler_v7_10159  RUNNING  - - NO
> sendhub_web_v7_10029RUNNING  - - NO
> sendhub_web_v7_10034RUNNING  - - NO
> sendhub_web_v7_10039RUNNING  - - NO
> sendhub_web_v7_10059RUNNING  - - NO
> sendhub_web_v7_10064RUNNING  - - NO
> sendhub_web_v7_10084RUNNING  - - NO
> sendhub_web_v7_10099RUNNING  - - NO
> sendhub_web_v7_10109RUNNING  - - NO
> sendhub_web_v7_10114RUNNING  - - NO
> sendhub_web_v7_10119RUNNING  - - NO
> sendhub_web_v7_10134RUNNING  - - NO
> sendhub_web_v7_10154RUNNING  - - NO
> sendhub_web_v7_10169RUNNING  - - NO
> sendhub_web_v7_10184RUNNING  - - NO
> sendhub_worker_v7_10044 RUNNING  - - NO
> sendhub_worker_v7_10079 RUNNING  - - NO
> sendhub_worker_v7_10124 RUNNING  - - NO
> sendhub_worker_v7_10194 RUNNING  - - NO
> sendhub_worker_v7_10199 RUNNING  - - NO
>
> What could have gone wrong here? I have 5/5 nodes, all in this state
> (meaning this problem has been reproduced across multiple hosts).
>
>
> On Fri, Aug 9, 2013 at 8:02 AM, Serge Hallyn wrote:
>
>> Quoting Jay Taylor (j...@jaytaylor.com):
>> > Greetings,
>> >
>> > I am hitting a problem with LXC were it's assigned the same IP address
>> to
>> > different containers:
>>
>> How are you creating the containers?  You need to give each container a
>> unique mac address.  lxc-clone should do this for you, as should
>> lxc-create.  If you're manually copying the containers, then you'll
>> have to do it by hand.
>>
>> can you show the lxc.network sections of two of the containers, and
>> the commands you used to create them?
>>
>
>


Re: [Lxc-users] Containers are all getting same IP address

2013-08-09 Thread Jay Taylor
Continuing to dig into this...

Here is an example of the syslog when I attempt to `lxc-start -n <name>`
one of the containers:

Aug  9 19:24:13 ip-10-34-249-56 kernel: [1326297.257700] device vethRVTxKd
entered promiscuous mode
Aug  9 19:24:13 ip-10-34-249-56 kernel: [1326297.259693]
ADDRCONF(NETDEV_UP): vethRVTxKd: link is not ready
Aug  9 19:24:13 ip-10-34-249-56 kernel: [1326297.295988] init: Failed to
spawn network-interface (vethLoEcZK) pre-start process: unable to change
root directory: No such file or directory
*Aug  9 19:24:13 ip-10-34-249-56 kernel: a he: No toicoess:b etrk-interface
(vethRVTxKd) post-ppoes  roirecthr c633(vethRV  fstate*

It never gets to a login prompt.

Here is the syslog output when I start in daemon mode, `lxc-start -d -n
<name>`:

Aug  9 19:25:48 ip-10-34-249-56 kernel: 6>[1326391.843150] device
vethkKKsPf entered promiscuous mode
Aug  9 19:25:48 ip-10-34-249-56 kernel: [1326391.844146]
ADDRCONF(NETDEV_UP): vethkKKsPf: link is not ready
*Aug  9 19:25:48 ip-10-34-249-56 kernel: wrface (vethkKKsPf) pre-start
profkinte e]ink becomes ready*

Both times, there is always one line of crazy, gibberish-looking output.

Have any of you seen anything like this before?




On Fri, Aug 9, 2013 at 8:57 AM, Jay Taylor  wrote:

> Also, here is one of the container configs:
>
> lxc.mount = /var/lib/lxc/sendhub_web_v7_10146/fstab
> lxc.tty = 4
> lxc.pts = 1024
> lxc.devttydir = lxc
> lxc.arch = x86_64
> lxc.logfile = /var/log/lxc/sendhub_web_v7_10146.log
> lxc.cgroup.devices.deny = a
> lxc.cgroup.devices.allow = c *:* m
> lxc.cgroup.devices.allow = b *:* m
> lxc.cgroup.devices.allow = c 1:3 rwm
> lxc.cgroup.devices.allow = c 1:5 rwm
> lxc.cgroup.devices.allow = c 5:1 rwm
> lxc.cgroup.devices.allow = c 5:0 rwm
> lxc.cgroup.devices.allow = c 1:9 rwm
> lxc.cgroup.devices.allow = c 1:8 rwm
> lxc.cgroup.devices.allow = c 136:* rwm
> lxc.cgroup.devices.allow = c 5:2 rwm
> lxc.cgroup.devices.allow = c 254:0 rm
> lxc.cgroup.devices.allow = c 10:229 rwm
> lxc.cgroup.devices.allow = c 10:200 rwm
> lxc.cgroup.devices.allow = c 1:7 rwm
> lxc.cgroup.devices.allow = c 10:228 rwm
> lxc.cgroup.devices.allow = c 10:232 rwm
> lxc.utsname = sendhub_web_v7_10146
> lxc.network.type = veth
> lxc.network.flags = up
> lxc.network.link = lxcbr0
> lxc.network.hwaddr = 00:16:3e:07:7b:8f
> lxc.cap.drop = sys_module
> lxc.cap.drop = mac_admin
> lxc.cap.drop = mac_override
> lxc.cap.drop = sys_time
> lxc.rootfs = /var/lib/lxc/sendhub_web_v7_10146/rootfs
> lxc.pivotdir = lxc_putold
>
>
> On Fri, Aug 9, 2013 at 8:52 AM, Jay Taylor  wrote:
>
>> Hi Serge,
>>
>> That's the thing..the containers all have unique addresses.  They're
>> being created by `lxc-clone -B btrfs -s -o base_container -n 
>>
>> Now...every container I try to create or stop/start doesn't receive any
>> IP address.  And all those duplicates no longer show any ip.
>>
>> here is lxc-ls --fancy:
>>
>>  sendhub STOPPED  - - NO
>> sendhub_important_v7_10012  RUNNING  - - NO
>> sendhub_important_v7_10019  RUNNING  - - NO
>> sendhub_important_v7_10024  RUNNING  - - NO
>> sendhub_important_v7_10054  RUNNING  - - NO
>> sendhub_important_v7_10069  RUNNING  - - NO
>> sendhub_important_v7_10074  RUNNING  - - NO
>> sendhub_important_v7_10089  RUNNING  - - NO
>> sendhub_important_v7_10094  RUNNING  - - NO
>> sendhub_important_v7_10104  RUNNING  - - NO
>> sendhub_important_v7_10129  RUNNING  - - NO
>> sendhub_important_v7_10139  RUNNING  - - NO
>> sendhub_important_v7_10144  RUNNING  - - NO
>> sendhub_important_v7_10149  RUNNING  - - NO
>> sendhub_important_v7_10164  RUNNING  - - NO
>> sendhub_important_v7_10174  RUNNING  - - NO
>> sendhub_important_v7_10179  RUNNING  - - NO
>> sendhub_important_v7_10189  RUNNING  - - NO
>> sendhub_scheduler_v7_10159  RUNNING  - - NO
>> sendhub_web_v7_10029RUNNING  - - NO
>> sendhub_web_v7_10034RUNNING  - - NO
>> sendhub_web_v7_10039RUNNING  - - NO
>> sendhub_web_v7_10059RUNNING  - - NO
>> sendhub_web_v7_10064RUNNING  - - NO
>> sendhub_web_v7_10084RUNNING  - - NO
>> sendhub_web_v7_10099RUNNING  - - NO
>> sendhub_web_v7_10109RUNNING  - - NO
>> sendhub_web_v7_10114RUNNING  - - NO
>> sendhub_web_v7_10119RUNNING  - - NO
>> sendhub_web_v7_10134RUNNING  - - NO
>> sendhub_web_v7_10154RUNNING  - - NO
>> sendhub_

Re: [Lxc-users] Containers are all getting same IP address

2013-08-09 Thread Jay Taylor
My reply is inline below.


On Fri, Aug 9, 2013 at 12:10 PM, Serge Hallyn wrote:

> Quoting Jay Taylor (j...@jaytaylor.com):
> > lxc.network.hwaddr = 00:16:3e:07:7b:8f
>
> Can you show the result of 'grep lxc.network.hwaddr /var/lib/lxc/*/config'?
>

ubuntu@ip-10-34-249-56:~$ grep lxc.network.hwaddr /var/lib/lxc/*/config
/var/lib/lxc/analytics/config:lxc.network.hwaddr = 00:16:3e:72:a3:43
/var/lib/lxc/analytics_scheduler_v5_10001/config:lxc.network.hwaddr =
00:16:3e:92:8c:a9
/var/lib/lxc/base/config:lxc.network.hwaddr = 00:16:3e:5f:dd:91
/var/lib/lxc/base-playframework2/config:lxc.network.hwaddr =
00:16:3e:5b:64:bf
/var/lib/lxc/base-python/config:lxc.network.hwaddr = 00:16:3e:3e:59:1f
/var/lib/lxc/inbound-sms/config:lxc.network.hwaddr = 00:16:3e:ac:16:ca
/var/lib/lxc/inbound-sms_web_v5_10002/config:lxc.network.hwaddr =
00:16:3e:b0:f1:3b
/var/lib/lxc/s1/config:lxc.network.hwaddr = 00:16:3e:71:53:b4
/var/lib/lxc/s1_web_v72_10018/config:lxc.network.hwaddr = 00:16:3e:fc:c5:d9
/var/lib/lxc/s1_web_v72_10049/config:lxc.network.hwaddr = 00:16:3e:c7:6a:68
/var/lib/lxc/sendhub/config:lxc.network.hwaddr = 00:16:3e:00:ac:f5
/var/lib/lxc/sendhub_important_v7_10002/config:lxc.network.hwaddr =
00:16:3e:0b:95:26
/var/lib/lxc/sendhub_important_v7_10014/config:lxc.network.hwaddr =
00:16:3e:9a:29:71
/var/lib/lxc/sendhub_important_v7_10021/config:lxc.network.hwaddr =
00:16:3e:05:f8:9a
/var/lib/lxc/sendhub_important_v7_10056/config:lxc.network.hwaddr =
00:16:3e:ca:ec:8e
/var/lib/lxc/sendhub_important_v7_10061/config:lxc.network.hwaddr =
00:16:3e:36:5d:e6
/var/lib/lxc/sendhub_important_v7_10076/config:lxc.network.hwaddr =
00:16:3e:5d:63:ea
/var/lib/lxc/sendhub_important_v7_10091/config:lxc.network.hwaddr =
00:16:3e:6d:f5:38
/var/lib/lxc/sendhub_important_v7_10106/config:lxc.network.hwaddr =
00:16:3e:e0:2e:58
/var/lib/lxc/sendhub_important_v7_10121/config:lxc.network.hwaddr =
00:16:3e:36:24:33
/var/lib/lxc/sendhub_important_v7_10126/config:lxc.network.hwaddr =
00:16:3e:18:20:84
/var/lib/lxc/sendhub_important_v7_10131/config:lxc.network.hwaddr =
00:16:3e:05:15:f9
/var/lib/lxc/sendhub_important_v7_10156/config:lxc.network.hwaddr =
00:16:3e:e7:b3:36
/var/lib/lxc/sendhub_important_v7_10166/config:lxc.network.hwaddr =
00:16:3e:a5:a4:da
/var/lib/lxc/sendhub_important_v7_10171/config:lxc.network.hwaddr =
00:16:3e:20:02:43
/var/lib/lxc/sendhub_important_v7_10181/config:lxc.network.hwaddr =
00:16:3e:a8:f1:e3
/var/lib/lxc/sendhub_important_v7_10191/config:lxc.network.hwaddr =
00:16:3e:08:84:f8
/var/lib/lxc/sendhub_scheduler_v7_10026/config:lxc.network.hwaddr =
00:16:3e:63:44:6e
/var/lib/lxc/sendhub_scheduler_v7_10086/config:lxc.network.hwaddr =
00:16:3e:a3:f7:df
/var/lib/lxc/sendhub_scheduler_v7_10196/config:lxc.network.hwaddr =
00:16:3e:d8:4b:52
/var/lib/lxc/sendhub_web_v7_10031/config:lxc.network.hwaddr =
00:16:3e:2d:95:9f
/var/lib/lxc/sendhub_web_v7_10036/config:lxc.network.hwaddr =
00:16:3e:61:d1:d7
/var/lib/lxc/sendhub_web_v7_10066/config:lxc.network.hwaddr =
00:16:3e:17:3e:97
/var/lib/lxc/sendhub_web_v7_10071/config:lxc.network.hwaddr =
00:16:3e:9e:95:69
/var/lib/lxc/sendhub_web_v7_10081/config:lxc.network.hwaddr =
00:16:3e:c4:79:05
/var/lib/lxc/sendhub_web_v7_10096/config:lxc.network.hwaddr =
00:16:3e:9d:92:a4
/var/lib/lxc/sendhub_web_v7_10101/config:lxc.network.hwaddr =
00:16:3e:12:69:93
/var/lib/lxc/sendhub_web_v7_10111/config:lxc.network.hwaddr =
00:16:3e:d1:47:2b
/var/lib/lxc/sendhub_web_v7_10116/config:lxc.network.hwaddr =
00:16:3e:0d:8a:71
/var/lib/lxc/sendhub_web_v7_10136/config:lxc.network.hwaddr =
00:16:3e:77:8a:12
/var/lib/lxc/sendhub_web_v7_10141/config:lxc.network.hwaddr =
00:16:3e:12:c1:5c
/var/lib/lxc/sendhub_web_v7_10146/config:lxc.network.hwaddr =
00:16:3e:07:7b:8f
/var/lib/lxc/sendhub_web_v7_10151/config:lxc.network.hwaddr =
00:16:3e:4b:d3:d4
/var/lib/lxc/sendhub_web_v7_10176/config:lxc.network.hwaddr =
00:16:3e:90:e1:66
/var/lib/lxc/sendhub_web_v7_10186/config:lxc.network.hwaddr =
00:16:3e:ac:80:28
/var/lib/lxc/sendhub_worker_v7_10041/config:lxc.network.hwaddr =
00:16:3e:4a:b9:35
/var/lib/lxc/sendhub_worker_v7_10051/config:lxc.network.hwaddr =
00:16:3e:17:a7:5a
/var/lib/lxc/sendhub_worker_v7_10161/config:lxc.network.hwaddr =
00:16:3e:f9:ab:68
/var/lib/lxc/voice-api/config:lxc.network.hwaddr = 00:16:3e:2c:1f:e2
/var/lib/lxc/voice-api_web_v3_1/config:lxc.network.hwaddr =
00:16:3e:53:dc:b7


>
> Which version of lxc are you using again?
>

ubuntu@ip-10-34-249-56:~$ lxc-version
lxc version: 0.9.0

Re: [Lxc-users] Containers are all getting same IP address

2013-08-10 Thread Jay Taylor
After further investigation yesterday, I am not convinced it is an
IP-address issue.  The affected host machines are unable to start any
existing or newly created containers.  The incident that triggered the
issue was cloning 1 container into 10 new ones, and then launching them all
simultaneously.  Are there any known concurrency issues with LXC which
would explain why executing a lot of clone/start LXC commands at the same
time causes all of LXC to be rendered useless until the host machine is
physically rebooted?
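
In the meantime, one untested workaround sketch is to serialize the
clone/start calls through a single lock with flock(1) (the lock path and
container names here are made up):

sudo flock /var/lock/lxc-ops.lock lxc-clone -s -B btrfs -o base -n new1
sudo flock /var/lock/lxc-ops.lock lxc-start -d -n new1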


On Sat, Aug 10, 2013 at 11:42 AM, Tony Su  wrote:

> FYI
> I avoid the whole issue of assigning different IP addresses by creating
> my br devices using libvirt (vm manager).
>
> Using vm manager, I can create a virtual network that references the
> specified bridge device (all libvirt-created bridge devices have a
> "virbr" name instead of "br").  When you set up your bridge device using
> libvirt, you can configure a DHCP process for that virtual network
> without having to install and run a proper DHCP server.
>
> Once you create the bridge device using libvirt, it can be used by any
> virtualization technology, so for example although I'm not managing my
> LXC containers using libvirt, the bridge devices are still usable by
> KVM, Xen, LXC and QEMU on my machine. Once the bridge device is created,
> just reference it in your LXC container config file, and from within the
> container your eth0, if set up for DHCP, will automatically get its
> address.
>
> You can display the bridge devices that exist on your machine with the
> following command
> brctl show
>
> Although it's probably likely that a "regular" bridge device could be
> configured with DHCP and even be referenced by name, I find it much
> easier, and it avoids mistakes, to just let vm manager do the work
> for me.
>
> HTH,
> Tony
>
>
>
>
> On Fri, Aug 9, 2013 at 7:32 PM, Serge Hallyn 
> wrote:
> > Sorry, I can't figure out what's going wrong.  You have unique macaddrs
> > for each container, so the dnsmasq-lxc should be handing out unique
> > ip addresses.  What does /etc/network/interfaces in one of the containers
> > look like?
> >
> >> ubuntu@ip-10-34-249-56:~$ lxc-version
> >> lxc version: 0.9.0
> >
> > what about
> > dpkg -l | grep lxc
> > and
> > dpkg -l | grep dnsmasq
> > ?
> >
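
For reference, a sketch of the libvirt-defined network Tony describes above
(the network name, bridge name, and address range are all assumptions):

<network>
  <name>lxcnet</name>
  <bridge name='virbr1'/>
  <forward mode='nat'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.50' end='192.168.100.200'/>
    </dhcp>
  </ip>
</network>

Saved as lxcnet.xml, it can be defined and started with `virsh net-define
lxcnet.xml`, `virsh net-start lxcnet`, and `virsh net-autostart lxcnet`;
containers would then point at it with lxc.network.link = virbr1.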


Re: [Lxc-users] Containers are all getting same IP address

2013-08-10 Thread Jay Taylor
The systems all use btrfs volumes.

On Aug 10, 2013, at 3:59 PM, zoolook  wrote:

> On Sat, Aug 10, 2013 at 7:40 PM, Jay Taylor  wrote:
>> Are there any known concurrency issues with LXC which would
>> explain why executing a lot of clone/start LXC commands at the same time
>> causes all of LXC to be rendered useless until the host machine is
>> physically rebooted?
> 
> If you use LVM, yes.
> 
> If it is LVM, then you can work around it by waiting for all the 'lvm
> vgscan*' commands to finish.
> 
> 
> HTH,
> Norberto
> 
> [*] AFAIK, these commands are triggered by udev.
> 
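
For anyone who is on LVM, that wait could be scripted roughly like this (a
sketch; untested here, since these hosts are all btrfs):

# Wait for any in-flight vgscan processes to finish before the next lxc command.
while pgrep -f vgscan >/dev/null; do sleep 1; done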



Re: [Lxc-users] Containers are all getting same IP address

2013-08-12 Thread Jay Taylor
Thanks, will do this morning.

On Aug 12, 2013, at 6:10 AM, Serge Hallyn  wrote:

> Quoting Jay Taylor (j...@jaytaylor.com):
>> After further investigation yesterday, I am not convinced it is an
>> IP-address issue.  The affected host machines are unable to start any
>> existing or newly created containers.  The incident that triggered the
>> issue was cloning 1 container into 10 new ones, and then launching them all
>> simultaneously.  Are there any known concurrency issues with LXC which
>> would explain why executing a lot of clone/start LXC commands at the same
>> time causes all of LXC to be rendered useless until the host machine is
>> physically rebooted?
> 
> Assuming you're on a pretty stock ubuntu host, it would be helpful if
> you could file a bug with launchpad.  Start one of the containers which
> fails, then do
> 
>ubuntu-bug lxc



Re: [Lxc-users] Containers are all getting same IP address

2013-08-12 Thread Jay Taylor
I tried doing that, but it didn't work.

ubuntu-bug lxc

*** Collecting problem information

The collected information can be sent to the developers to improve the
application. This might take a few minutes.
.

*** Problem in lxc

The problem cannot be reported:

This is not an official Ubuntu package. Please remove any third party
package and try again.

Press any key to continue...



All my machines are using the PPA at https://launchpad.net/~ubuntu-lxc/,
which apparently is unable to accept bugs filed this way.



On Mon, Aug 12, 2013 at 6:10 AM, Serge Hallyn wrote:

> Quoting Jay Taylor (j...@jaytaylor.com):
> > After further investigation yesterday, I am not convinced it is an
> > IP-address issue.  The affected host machines are unable to start any
> > existing or newly created containers.  The incident that triggered the
> > issue was cloning 1 container into 10 new ones, and then launching them
> all
> > simultaneously.  Are there any known concurrency issues with LXC which
> > would explain why executing a lot of clone/start LXC commands at the same
> > time causes all of LXC to be rendered useless until the host machine is
> > physically rebooted?
>
> Assuming you're on a pretty stock ubuntu host, it would be helpful if
> you could file a bug with launchpad.  Start one of the containers which
> fails, then do
>
> ubuntu-bug lxc
>


Re: [Lxc-users] Containers are all getting same IP address

2013-08-14 Thread Jay Taylor
Hi Serge,

I added zfs support to the application and systems creating/hosting the
containers, and I have subsequently been unable to reproduce any issues.

As far as trying to reproduce it with btrfs, I've had some success.

The general system state is something like:
N containers already running happily
Launch N+ more containers in rapid succession (in parallel, not serially).

I've modified your test script to reflect more closely what my application
is actually doing, by slowly launching 10 containers and then using "&" to
rapidly fork an additional 10 clone/start operations.  I have it doing 2
cycles of this, and it eventually triggers the problem (it has taken up to
3 runs to trigger it).

And for reference, here is an exact copy of the scripts I used to reproduce
the problem:

test.sh:

#!/usr/bin/env bash

prefix=$1

test -z "${prefix}" && echo 'error: missing required parameter: prefix' 1>&2 && exit 1

path=/mnt

# Start from a fresh base container.
sudo lxc-destroy -n c1 2>/dev/null
sudo lxc-create -t ubuntu -B btrfs -n c1

# Clone and start 10 containers serially.
for i in `seq 1 10`; do
    sudo lxc-clone -s -B btrfs -P $path -o c1 -n $prefix$i
    sudo lxc-start -d -n $prefix$i
done
# Clone and start 10 more in parallel.
for i in `seq 11 20`; do
    echo $(sudo lxc-clone -s -B btrfs -P $path -o c1 -n $prefix$i; sudo lxc-start -d -n $prefix$i) &
done

sleep 10

# Create even more.
for i in `seq 21 30`; do
    sudo lxc-clone -s -B btrfs -P $path -o c1 -n $prefix$i
    sudo lxc-start -d -n $prefix$i
done
for i in `seq 31 40`; do
    echo $(sudo lxc-clone -s -B btrfs -P $path -o c1 -n $prefix$i; sudo lxc-start -d -n $prefix$i) &
done


stop.sh:

#!/usr/bin/env bash

prefix=$1

test -z "${prefix}" && echo 'error: missing required parameter: prefix' 1>&2 && exit 1

sudo lxc-destroy -n c1

# Stop and destroy all 40 clones in parallel.
for i in `seq 1 40`; do
    echo $(sudo lxc-stop -k -n $prefix$i; sudo lxc-destroy -n $prefix$i) &
done



bash ./test.sh x
bash ./test.sh y
bash ./test.sh z


If it doesn't manifest at first, try stopping/starting varying quantities
of containers for several cycles.  Eventually I consistently end up never
getting IP addresses:

x1  RUNNING  - - NO
x10 RUNNING  - - NO
x11 RUNNING  - - NO
x12 RUNNING  - - NO
x13 RUNNING  - - NO
x14 RUNNING  - - NO
x15 RUNNING  - - NO
x16 RUNNING  - - NO
x17 RUNNING  - - NO
x18 RUNNING  - - NO
x19 RUNNING  - - NO
x2  RUNNING  - - NO
x20 RUNNING  - - NO
x21 RUNNING  - - NO
x22 RUNNING  - - NO
x23 RUNNING  - - NO
x24 RUNNING  - - NO
x25 RUNNING  - - NO
x26 RUNNING  - - NO
x27 RUNNING  - - NO
x28 RUNNING  - - NO
x29 RUNNING  - - NO
x3  RUNNING  - - NO
x30 RUNNING  - - NO
x31 RUNNING  - - NO
x32 RUNNING  - - NO
x33 RUNNING  - - NO
x34 RUNNING  - - NO
x35 RUNNING  - - NO
x36 RUNNING  - - NO
x37 RUNNING  - - NO
x38 RUNNING  - - NO
x39 RUNNING  - - NO
x4  RUNNING  - - NO
x40 RUNNING  - - NO
x5  RUNNING  - - NO
x6  RUNNING  - - NO
x7  RUNNING  - - NO
x8  RUNNING  - - NO
x9  RUNNING  -     -     NO


On Wed, Aug 14, 2013 at 10:12 AM, Serge Hallyn wrote:

> Quoting Serge Hallyn (serge.hal...@ubuntu.com):
> > Quoting Jay Taylor (j...@jaytaylor.com):
> > > After further investigation yesterday, I am not convinced it is an
> > > IP-address issue.  The affected host machines are unable to start any
> > > existing or newly created containers.  The incident that triggered the
> > > issue was cloning 1 container into 10 new ones, and then launching
> them all
> > > simultaneously.  Are there any known concurrency issues with LXC which
> > > would explain why executing a lot of clone/start LXC commands at the
> same
> >
> > Known, no, but th

Re: [Lxc-users] Containers are all getting same IP address

2013-08-14 Thread Jay Taylor
One additional note:

Make sure the btrfs volume is on a fast disk.  I just tried with an AWS EBS
volume and was unable to reproduce the problem.  As soon as I switched to
using an ephemeral (local storage) disk, I was able to reproduce it after
only 2 runs of the test script.


On Wed, Aug 14, 2013 at 1:22 PM, Jay Taylor  wrote:

> Hi Serge,
>
> I added zfs support to the application and systems creating/hosting the
> containers, and I have subsequently been unable to reproduce any issues.
>
> As far as trying to reproduce it with btrfs, I've had some success.
>
> The general system state is something like:
> N containers already running happily
> Launch N+ more containers in rapid succession (in parallel, not serially).
>
> I've modified your test script to reflect more closely what my application
> is actually doing, by slowly launching 10 containers and then using "&" to
> rapidly fork an additional 10 clone/start operations.  I have it doing 2
> cycles of this, and it eventually triggers the problem (it has taken up to
> 3 runs to trigger it).
>
> And for reference, here is an exact copy of the scripts I used to reproduce
> the problem:
>
> test.sh:
>
> #!/usr/bin/env bash
>
> prefix=$1
>
> test -z "${prefix}" && echo 'error: missing required parameter: prefix' 1>&2 && exit 1
>
> path=/mnt
>
> sudo lxc-destroy -n c1 2>/dev/null
> sudo lxc-create -t ubuntu -B btrfs -n c1
>
> for i in `seq 1 10`; do
>     sudo lxc-clone -s -B btrfs -P $path -o c1 -n $prefix$i
>     sudo lxc-start -d -n $prefix$i
> done
> for i in `seq 11 20`; do
>     echo $(sudo lxc-clone -s -B btrfs -P $path -o c1 -n $prefix$i; sudo lxc-start -d -n $prefix$i) &
> done
>
> sleep 10
>
> # Create even more.
> for i in `seq 21 30`; do
>     sudo lxc-clone -s -B btrfs -P $path -o c1 -n $prefix$i
>     sudo lxc-start -d -n $prefix$i
> done
> for i in `seq 31 40`; do
>     echo $(sudo lxc-clone -s -B btrfs -P $path -o c1 -n $prefix$i; sudo lxc-start -d -n $prefix$i) &
> done
>
>
> stop.sh:
>
> #!/usr/bin/env bash
>
> prefix=$1
>
> test -z "${prefix}" && echo 'error: missing required parameter: prefix' 1>&2 && exit 1
>
> sudo lxc-destroy -n c1;
>
> for i in `seq 1 40`; do
> echo $(sudo lxc-stop -k -n $prefix$i; sudo lxc-destroy -n $prefix$i) &
> done
>
>
>
> bash ./test.sh x
> bash ./test.sh y
> bash ./test.sh z
>
>
> If it doesn't manifest at first, try stopping/starting varying quantities
> of containers for several cycles.  Eventually I consistently end up never
> getting IP addresses:
>
> x1  RUNNING  - - NO
> x10 RUNNING  - - NO
> x11 RUNNING  - - NO
> x12 RUNNING  - - NO
> x13 RUNNING  - - NO
> x14 RUNNING  - - NO
> x15 RUNNING  - - NO
> x16 RUNNING  - - NO
> x17 RUNNING  - - NO
> x18 RUNNING  - - NO
> x19 RUNNING  - - NO
> x2  RUNNING  - - NO
> x20 RUNNING  - - NO
> x21 RUNNING  - - NO
> x22 RUNNING  - - NO
> x23 RUNNING  - - NO
> x24 RUNNING  - - NO
> x25 RUNNING  - - NO
> x26 RUNNING  - - NO
> x27 RUNNING  - - NO
> x28 RUNNING  - - NO
> x29 RUNNING  - - NO
> x3  RUNNING  - - NO
> x30 RUNNING  - - NO
> x31 RUNNING  - - NO
> x32 RUNNING  - - NO
> x33 RUNNING  - - NO
> x34 RUNNING  - - NO
> x35 RUNNING  - - NO
> x36 RUNNING  - - NO
> x37 RUNNING  - - NO
> x38 RUNNING  - - NO
> x39 RUNNING  - - NO
> x4      RUNNING  - - NO
> x40 RUNNING  - - NO
> x5  RUNNING  - - NO
> x6  RUN

Re: [Lxc-users] Containers are all getting same IP address

2013-08-14 Thread Jay Taylor
My reply is inline below.


On Wed, Aug 14, 2013 at 1:30 PM, Serge Hallyn wrote:

> Quoting Jay Taylor (j...@jaytaylor.com):
> > Hi Serge,
> >
> > I added zfs support to the application and systems creating/hosting the
> > containers, and I have subsequently been unable to reproduce any issues.
>
> Thanks for the script, I'll play with that in a bit.  But to be clear:
> you're saying you can only reproduce this with btrfs and not with
> zfs?
>

That is correct.  Today I ran side-by-side tests on 2 identical systems
where the only difference was the filesystem, and only btrfs produced the
bad state.


>
> thanks,
> -serge
>


Re: [Lxc-users] Containers are all getting same IP address

2013-08-19 Thread Jay Taylor
Serge,

As requested, I've filed the issue:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1214085

Best,
Jay



On Fri, Aug 16, 2013 at 9:08 AM, Serge Hallyn wrote:

> Quoting Jay Taylor (j...@jaytaylor.com):
> > I tried doing that, but it didn't work.
> >
> > ubuntu-bug lxc
>
> Ah, this is actually a kernel bug, so we have a workaround :)
>
> can you do
>
> sudo ubuntu-bug linux
>
> and I'll subscribe to the bug.  So far I've still not reproduced it.
>


Re: [Lxc-users] Running LXC on ZFS, never comes back online after reboot

2013-08-28 Thread Jay Taylor
Serge,

As a follow-up on this issue, I've ported the application to use zfs-fuse
instead of the PPA version, and overall things are working well.  The only
new problem I've encountered is that when destroying a container, I
frequently get "dataset is busy" errors, but after adding up to 5 retries,
it consistently completes successfully.
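
For reference, the retry is nothing fancy; a minimal sketch (the container
name variable is assumed):

for attempt in 1 2 3 4 5; do
    # lxc-destroy returns non-zero on "dataset is busy"; retry after a pause.
    sudo lxc-destroy -n "$container" && break
    sleep 2
done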

Thanks for your advice!

Best,
Jay


On Tue, Aug 27, 2013 at 6:06 AM, Serge Hallyn wrote:

> Quoting Jay Taylor (j...@jaytaylor.com):
> > Greetings LXC folks,
> >
> > With LXC and ZFS on AWS, after I've created 1 or more containers, the
> > machine will never come back up after a reboot.
> >
> > One "fix" I've found for this is to always explicitly run `sudo zpool
> > export tank` before every system restart, but there are situations where this is
>
> It looks to me like a bug in the zfs implementation you are using.
>
> I've done my testing using zfs-fuse (on 12.04) with no problems.
>
> -serge
>


Re: [Lxc-users] reg iptables usage in containers

2013-09-25 Thread Jay Taylor
Hi Srini,

Learning the iptables rules can be tricky at first, especially when you're
new to LXC.  I highly recommend finding a way to automate the process.

Here is a real-world example of how iptables can be set up on a
per-container basis in LXC:

https://github.com/Sendhub/shipbuilder/blob/master/src/scripts.go#L38

This is a python script which is run to clone and launch a new container
and set up the iptables TCP port-forwarding for it.
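
As a minimal sketch of the underlying idea (the host port and container IP
here are made up), forwarding a host port to a container boils down to:

# DNAT incoming TCP on host port 10001 to the container's port 8000.
sudo iptables -t nat -A PREROUTING -p tcp --dport 10001 -j DNAT --to-destination 10.0.3.141:8000
# Allow the forwarded traffic through.
sudo iptables -A FORWARD -p tcp -d 10.0.3.141 --dport 8000 -j ACCEPT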

Hope this is useful.

Best regards,
Jay


On Tue, Sep 24, 2013 at 2:21 AM, Aarti Sawant wrote:

> hello,
>
> The link below might be useful for setting up iptables per container:
> http://openvz.org/Setting_up_an_iptables_firewall
>
> Thanks,
> Aarti Sawant
> NTTDATA OSS Center Pune
>
>
> On Tue, Sep 24, 2013 at 5:37 AM, srinivas k  wrote:
>
>> Hi Group.
>>
>> I am new to lxc and I am trying to create containers for the first time.
>>
>> My plan is to create 2 containers using lxc-create and do some networking
>> between the 2 containers using br0 as a bridge between them.
>>
>> What is the basic procedure to do the following?
>>
>> 1. How to set up iptables per container
>>
>> 2. How to filter incoming traffic per container using iptables, with
>> respect to that particular container
>>
>> Will be thankful for any help or pointers
>>
>> Regards
>> Srini
>>
>