Re: Anyone try Weave in Mesos env ?

2015-11-26 Thread Sam Chen
Paul,
Appreciated

Regards,
Sam

On Thu, Nov 26, 2015 at 11:09 AM, Paul Bell  wrote:

> Hmm...I'm not sure there's really a "fix" for that (BTW: I assume you
> mean to fix high (or long) latency, i.e., to make it lower, faster). A
> network link is a network link, right? Like all hardware, it has its own
> physical characteristics which determine its latency's lower bound, below
> which it is physically impossible to go.
>
> Sounds to me as if you've got the whole Mesos + Docker + Weave thing
> figured out, at least as far as the basic connectivity and addressing is
> concerned. So there's not much more that I can tell you in that regard.
>
> Are you running Weave 1.2 (or above)? It incorporates their "fast path"
> technology based on the Linux kernel's Open vSwitch (*vide*:
> http://blog.weave.works/2015/11/13/weave-docker-networking-performance-fast-data-path/).
> But, remember, there's still the link in between endpoints. One can
> optimize the packet handling within an endpoint, but this could boil down
> to a case of "hurry up and wait".
>
> I would urge you to take this question up with the friendly,
> knowledgeable, and very helpful folks at Weave:
> https://groups.google.com/a/weave.works/forum/#!forum/weave-users .
>
> Cordially,
>
> Paul
>
> On Wed, Nov 25, 2015 at 9:31 PM, Sam  wrote:
>
>> Paul,
>> Yup, Weave and Docker. May I know how you fixed the latency issue over the
>> Internet? By tunnel, or...?
>>
>> Regards,
>> Sam
>>
>> Sent from my iPhone
>>
>> > On Nov 26, 2015, at 10:23 AM, Paul  wrote:
>> >
>> > Happy Thanksgiving to you, too.
>> >
>> > I tend to deploy the several Mesos nodes as VMware VMs.
>> >
>> > However, I've also run a cluster with the master on ESXi, slaves on ESXi, a
>> slave on bare metal, and an EC2 slave.
>> >
>> > But in my case all applications are Docker containers connected via
>> Weave.
>> >
>> > Does your present deployment involve Docker and Weave?
>> >
>> > -paul
>> >
>> >> On Nov 25, 2015, at 8:55 PM, Sam  wrote:
>> >>
>> >> Paul,
>> >> Happy Thanksgiving first. We are using AWS and Rackspace as a hybrid cloud
>> env, and we deployed the Mesos master in AWS, with part of the slaves in AWS
>> and part in Rackspace. I am wondering whether this works. And given the
>> network latency between the two clouds, can we deploy two masters, one in AWS
>> and one in Rackspace? And federation? Appreciated for your reply.
>> >>
>> >> Regards ,
>> >> Sam
>> >>
>> >> Sent from my iPhone
>> >>
>> >>> On Nov 26, 2015, at 9:47 AM, Paul  wrote:
>> >>>
>> >>> Hi Sam,
>> >>>
>> >>> Yeah, I have significant experience in this regard.
>> >>>
>> >>> We run Docker containers spread across several Mesos slave nodes.
>> The containers are all connected via Weave. It works very well.
>> >>>
>> >>> Can you describe what you have in mind?
>> >>>
>> >>> Cordially,
>> >>>
>> >>> Paul
>> >>>
>>  On Nov 25, 2015, at 8:03 PM, Sam  wrote:
>> 
>>  Guys,
>>  We are trying to use Weave in a hybrid cloud Mesos env; has anyone got
>> experience with it? Appreciated.
>>  Regards,
>>  Sam
>> 
>>  Sent from my iPhone
>>
>
>


Re: Anyone try Weave in Mesos env ?

2015-11-26 Thread Paul
Gladly, Weitao. It'd be my pleasure.

But give me a few hours to find some free time. 

I am today tasked with cooking a Thanksgiving turkey.

But I will try to find the time before noon today (I'm on the right coast in 
the USA).

-Paul

> On Nov 25, 2015, at 11:26 PM, Weitao  wrote:
> 
> Hi, Paul. Can you share your overall experience with the architecture with
> us? I am trying to do a similar thing.
> 
> 
>> On Nov 26, 2015, at 09:47, Paul  wrote:
>> 
>> experience


RE: Anyone try Weave in Mesos env ?

2015-11-26 Thread Ajay Bhatnagar
Have a look at Calico. I found it much easier to deploy in a cross-domain
setup, and you do not have to worry about networking containers in a multihost
setup, as it is pure layer-3 virtual networking.
https://github.com/projectcalico/calico-docker#calico-on-docker

Ajay


-Original Message-
From: Sam [mailto:usultra...@gmail.com] 
Sent: Wednesday, November 25, 2015 8:03 PM
To: user@mesos.apache.org
Subject: Anyone try Weave in Mesos env ?

Guys,
We are trying to use Weave in a hybrid cloud Mesos env; has anyone got
experience with it? Appreciated. Regards, Sam

Sent from my iPhone


Re: Anyone try Weave in Mesos env ?

2015-11-26 Thread Paul Bell
Hi Weitao,

I came up with this architecture as a way of distributing our application
across multiple nodes. Pre-Mesos, our application, delivered as a single
VMware VM, was not easily scalable. By breaking out the several application
components as Docker containers, we are now able (within limits imposed
chiefly by the application itself) to distribute & run those containers
across the several nodes in the Mesos cluster. Application containers that
need to talk to each other are connected via Weave's "overlay" (veth)
network.
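
By way of illustration, the "weaving" looks roughly like this at the command
line (a minimal sketch only; hostnames and addresses are hypothetical, and in
our case Marathon actually drives the container creation):

  weave launch                   # on host-1: start the Weave router
  weave launch host-1            # on host-2: start Weave and peer with host-1
  # "weave run" wraps "docker run -d", attaching the new container to the
  # overlay at an address of our choosing
  weave run 10.4.0.10/8 --name mongod-shard-1 mongo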

Not surprisingly, this architecture has some of the benefits that you'd
expect from Mesos, chief among them being high-availability (more on this
below), scalability, and hybrid Cloud deployment.

The core unit of deployment is an Ubuntu image (14.04 LTS) that I've
configured with the appropriate components:

Zookeeper
Mesos-master
Mesos-slave
Marathon
Docker
Weave
SSH (including RSA keys)
Our application


This image is presently downloaded by a customer as a VMware .ova file. We
typically ask the customer to convert the resulting VM to a so-called
VMware template from which she can easily deploy multiple VMs as needed.
Please note that although we've started with VMware as our virtualization
platform, I've successfully run cluster nodes on both EC2 and Azure.

I tend to describe the Ubuntu image as "polymorphic", i.e., it can be told
to assume one of two roles, either a "master" role or a "slave" role. A
master runs ZK, mesos-master, and Marathon. A slave runs mesos-slave,
Docker, Weave, and the application.
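
Illustratively, a role amounts to little more than which services get started
on the node (a sketch only, with hypothetical addresses and a single-master
quorum, not our actual provisioning):

  # "master" role: Zookeeper, then
  mesos-master --zk=zk://10.0.0.1:2181/mesos --quorum=1 --work_dir=/var/lib/mesos
  marathon --master zk://10.0.0.1:2181/mesos --zk zk://10.0.0.1:2181/marathon

  # "slave" role: Docker and Weave, then
  weave launch 10.0.0.2          # peer with another slave node
  mesos-slave --master=zk://10.0.0.1:2181/mesos --containerizers=docker,mesos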

We presently offer 3 canned deployment options:

   1. single-host, no HA
   2. multi-host, no HA (1 master, 3 slaves)
   3. multi-host, HA (3 masters, 3 slaves)

The single-host, no HA option exists chiefly to mimic the original
pre-Mesos deployment. But it has the added virtue, thanks to Mesos, of
allowing us to dynamically "grow" from a single-host to multiple hosts.

The multi-host, no HA option is presently geared toward a sharded MongoDB
backend where each slave runs a mongod container that is a single partition
(shard) of the larger database. This deployment option also lends itself
very nicely to adding a new slave node at the cluster level, and a new
mongod container at the application level - all without any downtime
whatsoever.
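
Concretely, growing the database looks something like this (a sketch; the
addresses are hypothetical, and in reality Marathon launches the container):

  # on the new slave: run the new shard's mongod on the Weave network
  weave run 10.4.0.12/8 --name mongod-shard-4 mongo mongod --shardsvr --port 27017
  # then register the new shard with the mongos router
  mongo --host 10.4.0.50 --eval 'sh.addShard("10.4.0.12:27017")'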

The multi-host, HA option offers the probably familiar *cluster-level* high
availability. I stress "cluster-level" because I think we have to
distinguish between HA at that level & HA at the application level. The
former is realized by the 3 master hosts, i.e., you can lose a master and a
new one will self-elect, thereby keeping the cluster up & running. But, to
my mind, at least, application level HA requires some co-operation on the
part of the application itself (e.g., checkpoint/restart). That said, it
*is* almost magical to watch Mesos re-launch an application container that
has crashed. But whether or not that re-launch results in coherent
application behavior is another matter.

An important home-grown component here is a Java program that automates
these functions:

create cluster - configures a host for a given role and starts Mesos
services. This is done via SSH
start application - distributes application containers across slave hosts.
This is done by talking to the Marathon REST API (see the sketch below)
stop application - again, via the Marathon REST API
stop cluster - stops Mesos services. Again, via SSH
destroy cluster - deconfigures the host (after which it has no defined
role); again, SSH
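
The start/stop calls, for instance, reduce to plain HTTP against Marathon's
v2 API, roughly as follows (a sketch; host and app names are hypothetical):

  # start application: POST an app definition
  curl -X POST http://master-1:8080/v2/apps \
       -H 'Content-Type: application/json' \
       -d '{
             "id": "/mongod-shard-1",
             "instances": 1, "cpus": 1.0, "mem": 2048,
             "container": {
               "type": "DOCKER",
               "docker": { "image": "mongo", "network": "BRIDGE" }
             }
           }'

  # stop application: DELETE the app
  curl -X DELETE http://master-1:8080/v2/apps/mongod-shard-1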


As I write, I see Ajay's e-mail arrive about Calico. I am aware of this
project and it seems quite solid. But I've never understood the need to
"worry about networking containers in multihost setup". Weave runs as a
Docker container and It Just Works. I've "woven" together slave nodes in a
cluster that spanned 3 different datacenters, one of them in EC2, without
any difficulty. Yes, I do have to assign Weave IP addresses to the several
containers, but this is hardly onerous. In fact, I've found it "liberating"
to select such addresses from a /8 CIDR address space, assigning them to
containers based on the container's purpose (e.g., MongoDB shard containers
might live at 10.4.0.X, etc.). Ultimately, this assignment boils down to
setting an environment variable that Marathon (or the mesos-slave executor)
will use when creating the container via "docker run".
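
Concretely, with Weave's Docker API proxy in play, that looks roughly like
this (a sketch; the address is hypothetical):

  eval $(weave env)   # point the Docker client at the Weave proxy
  # the proxy reads WEAVE_CIDR and attaches the container at that address
  docker run -d --name mongod-shard-1 -e WEAVE_CIDR=10.4.0.11/8 mongo

In a Marathon app definition, the same thing is just an entry in the "env"
map, e.g. "env": { "WEAVE_CIDR": "10.4.0.11/8" }.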

There is a whole lot more that I could say about the internals of this
architecture. But, if you're still interested, I'll await further questions
from you.

HTH.

Cordially,

Paul



RE: Anyone try Weave in Mesos env ?

2015-11-26 Thread Ajay Bhatnagar
With Calico you only create the virtual subnets; IP assignments are managed
dynamically by Calico without any manual intervention needed.
Cheers
Ajay
