I went with a different approach and wrote a charm to do the overlay
network using CoreOS's newly released rudder:
http://bazaar.launchpad.net/~hazmat/charms/trusty/rudder/trunk/view/head:/readme.txt


It works across all providers (including manual, and the DigitalOcean and
SoftLayer manual-based plugins), using UDP for encapsulation.
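
For reference, rudder keeps its overlay config in etcd under
/coreos.com/network/config; here's a minimal sketch of seeding it by hand
(the subnet and the localhost:4001 etcd endpoint are placeholder values,
and the charm handles this for you):

    # Seed the overlay network config that rudder reads from etcd.
    import json
    import requests

    config = {"Network": "10.10.0.0/16"}  # range to carve per-host subnets from
    requests.put(
        "http://localhost:4001/v2/keys/coreos.com/network/config",
        data={"value": json.dumps(config)},
    )

Each host then leases a subnet out of that range and tunnels cross-host
container traffic over UDP.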

cheers,

Kapil

ps. EC2 is/was broken today due to an archive error across regions.

On Wed, Aug 27, 2014 at 9:52 AM, Kapil Thangavelu <
kapil.thangav...@canonical.com> wrote:

>
> On Wed, Aug 27, 2014 at 9:17 AM, John A Meinel <john.mei...@canonical.com>
> wrote:
>
>> So I played around with manually assigning IP addresses to a machine and
>> using BTRFS to make the LXC instances cheap in terms of disk space.
>>
>> I had success bringing up LXC instances that I created directly, though I
>> haven't gotten to the point where I could use Juju for the intermediate
>> steps. See the attached document for the steps I used to set up several
>> addressable containers on an instance.
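>>
>> As a rough sketch of the cheap-clone step (assuming /var/lib/lxc sits on
>> a btrfs filesystem; container names and release are placeholders):
>>
>>     # One base container on a btrfs backing store, then snapshot clones.
>>     import subprocess
>>
>>     def sh(cmd):
>>         subprocess.check_call(cmd, shell=True)
>>
>>     sh("lxc-create -n base -t ubuntu -B btrfs -- -r trusty")
>>     for i in range(5):
>>         # -s snapshots the base's subvolume, so each clone is nearly free.
>>         sh("lxc-clone -s -B btrfs -o base -n c%d" % i)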
>>
>> However, I feel pretty good that Container Addressability would actually
>> be pretty straightforward to achieve with the new Networker. We need to
>> expose APIs for requesting an address for a new container, but then we
>> can configure all of the routing without too much difficulty.
>>
>> Also of note: because we are using MASQUERADE to route the traffic, we
>> don't need to put the bridge (br0) directly onto eth0. The open question
>> is whether MaaS will play nicely with routing rules: if you assign an IP
>> address to a container on a machine, will the routes end up directing the
>> traffic there? (I think they will, but we'd have to test to confirm it.)
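>>
>> Concretely, the masquerade-plus-routes side is roughly the following
>> (interface names and the 10.0.3.x address are made up for the example):
>>
>>     # NAT container traffic out the primary NIC, and steer one of the
>>     # machine's extra addresses to a container attached to br0.
>>     import subprocess
>>
>>     for cmd in [
>>         "iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE",
>>         "ip route add 10.0.3.10/32 dev br0",
>>     ]:
>>         subprocess.check_call(cmd, shell=True)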
>>
>> Ideally, we'd do the same thing everywhere, rather than have containers
>> routed one way in MaaS and a different way on EC2.
>>
>> It may be that in the field we need to not masquerade, so I'm open to
>> feedback here.
>>
>> I wrote this up a bit like how I would want to use dense containers for
>> scale testing, since you could then deploy actual workloads into each of
>> these LXCs if you wanted to (and had the horsepower :).
>>
>> I succeeded in putting 6 IPs on a single m3.medium, running 5 LXC
>> containers, and connecting to them from another machine running inside
>> the VPC.
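>>
>> The extra addresses are EC2 secondary private IPs on the instance's
>> network interface; with boto that's roughly (the region and ENI id are
>> placeholders):
>>
>>     # Request 5 extra private IPs on the instance's ENI.
>>     import boto.ec2
>>
>>     conn = boto.ec2.connect_to_region("us-east-1")
>>     conn.assign_private_ip_addresses(
>>         network_interface_id="eni-12345678",
>>         secondary_private_ip_address_count=5,
>>     )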
>>
>
>
> Thanks for exploring this, John. I'm excited about utilizing something
> like this for regular scale testing on the cheap (10 instances for 1 hr on
> the spot market with 200 containers per test ~ a 2k machine/unit env).
> FWIW, I use ansible to automate the provisioning and machine setup
> (aws/lxc/btrfs/ebs volume for btrfs) in EC2 via
> https://github.com/kapilt/juju-lxc/blob/master/ec2.yml. There are some
> other scripts in there (add.py) for provisioning the container with
> userdata (i.e. automating key installation and machine setup), which can
> obviate/automate several of these steps. Either EBS or instance ephemeral
> disk (SSD) is preferable to a loopback dev for perf testing, I think. Re
> uniform networking handling: it still feels like we're exploring here, and
> it's unclear if we have the knowledge base to dictate a common mechanism
> yet.
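>
> As an illustration of the userdata piece (contents invented, not what
> add.py actually injects), it's just cloud-config that installs a key on
> first boot:
>
>     # Minimal cloud-config userdata to drop in an ssh key at first boot.
>     userdata = "\n".join([
>         "#cloud-config",
>         "ssh_authorized_keys:",
>         "  - ssh-rsa AAAA... admin@example",
>     ])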
>
> cheers,
>
> Kapil
>
>
>