Hi, Paul. Can you share your overall experience with the architecture with us? I am
trying to do something similar.
> On Nov 26, 2015, at 09:47, Paul wrote:
>
> experience
Hmm, I'm not sure there's really a "fix" for that (BTW: I assume you mean
to fix high (or long) latency, i.e., to make it lower, faster). A network
link is a network link, right? Like all hardware, it has its own physical
characteristics which determine its latency's lower bound, below which it
i
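Paul's point can be checked directly: the physical link sets a floor on latency that no tunnel or overlay can go below. A minimal sketch for measuring that baseline; the slave hostname is a placeholder, and `mtr` is assumed to be installed:

```shell
# Baseline round-trip time to a remote slave; tunnels and overlays
# can only add overhead on top of whatever this reports.
ping -c 10 slave1.example.com

# Per-hop latency report, to see where the time is actually spent.
mtr --report --report-cycles 10 slave1.example.com
```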
Paul,
Yup, Weave and Docker. May I know how you fixed the low latency issue over
the Internet? By tunnel, or something else?
Regards,
Sam
Sent from my iPhone
> On Nov 26, 2015, at 10:23 AM, Paul wrote:
>
> Happy Thanksgiving to you, too.
>
> I tend to deploy several Mesos nodes as VMware VMs.
>
> However
Happy Thanksgiving to you, too.
I tend to deploy several Mesos nodes as VMware VMs.
However, I've also run a cluster with the master on ESXi, slaves on ESXi, a slave on
bare metal, and an EC2 slave.
But in my case all applications are Docker containers connected via Weave.
Does your present depl
Paul,
Happy Thanksgiving first. We are using AWS and Rackspace as a hybrid cloud environment, and
we deployed the Mesos master in AWS, part of the slaves in AWS, and part of the slaves in
Rackspace. I am wondering whether that works. And since it has low latency in
networking, can we deploy two masters in both AWS and R
Hi Sam,
Yeah, I have significant experience in this regard.
We run Docker containers spread across several Mesos slave nodes. The
containers are all connected via Weave. It works very well.
Can you describe what you have in mind?
Cordially,
Paul
> On Nov 25, 2015, at 8:03 PM, Sam wrote:
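A minimal sketch of the kind of setup Paul describes, using Weave Net's CLI of that era; the hostnames and image name are placeholders, not details from this thread:

```shell
# On host A (e.g. an AWS instance), start the Weave router:
weave launch

# On host B (e.g. a Rackspace instance), launch and peer with host A:
weave launch host-a.example.com

# Point the Docker client at the Weave proxy so containers started
# here join the Weave network automatically:
eval $(weave env)
docker run -d --name web my-image   # placeholder image
```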
Guys,
We are trying to use Weave in a hybrid cloud Mesos environment; has anyone got experience
with it? Appreciated.
Regards,
Sam
Sent from my iPhone
On Tue, Nov 24, 2015 at 3:38 PM, Marco Massenzio wrote:
> The closest I could find is [0], but granted, much more detail could be
> desirable :)
Agreed! See also https://issues.apache.org/jira/browse/MESOS-3995
Neil
Community growth starts by talking with those interested in your
project. ApacheCon North America is coming, are you?
We are delighted to announce that the Call For Presentations (CFP) is
now open for ApacheCon North America. You can submit your proposed
sessions at
http://events.linuxfoundation.o
Yes, those'll be CommandExecutors; this is probably not the issue I
suggested it might be.
On Wed, Nov 25, 2015 at 11:02 AM James Vanns wrote:
> I don't know what the Chronos default is - but in the recent case I posted
> about, we use whatever the Chronos default is. I just checked their
> d
I don't know what the Chronos default is - but in the recent case I posted
about, we use whatever the Chronos default is. I just checked their
documentation and it states they use the Mesos command executor.
As far as our own framework, which exhibits similar behaviour, we don't
explicitly spec
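One way to see which executor a task actually ran under is the master's state endpoint (in Mesos of this vintage, `/master/state.json`); tasks launched with the built-in command executor typically show an empty `executor_id`. The host and port below are placeholders, and `jq` is assumed to be installed:

```shell
# List each task's state and executor_id from the master's state JSON.
curl -s http://master.example.com:5050/master/state.json \
  | jq '.frameworks[].tasks[] | {id, state, executor_id}'
```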
If you're using a custom executor, this could happen if you don't actually
exit the executor process. Is this using CommandExecutor or a custom one?
On Wed, Nov 25, 2015 at 5:01 AM James Vanns wrote:
> Er, I could. At the moment it's pretty huge so maybe I'll just try and
> trim it down a bit. I'
JFYI.
I finished the mesos-master migration and all works fine as expected.
--
Thanks,
Chengwei
On Wed, Nov 25, 2015 at 06:29:54PM +0800, Chengwei Yang wrote:
> OOPS,
>
> We forgot to disable firewalld on the new CentOS 7 VM; once firewalld was disabled,
> replication finished in seconds.
>
> as below.
OOPS,
We forgot to disable firewalld on the new CentOS 7 VM; once firewalld was disabled,
replication finished in seconds.
as below.
```
I1125 18:27:33.737843 2490 replica.cpp:369] Replica ignoring promise request
as it is in RECOVERING status
I1125 18:27:33.740927 2489 replica.cpp:655] Replica rece
```
while the other two mesos-masters (one leader and one follower) both repeat the
log below.
```
I1125 18:06:33.315208 28401 replica.cpp:638] Replica in VOTING status received
a broadcasted recover request
I1125 18:06:43.316341 28404 replica.cpp:638] Replica in VOTING status received
a broadcasted recover re
```
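The firewalld issue above can be handled either by disabling firewalld (as was done on the CentOS 7 VM) or, less drastically, by opening the master's port; 5050 below is only the default and assumes no custom `--port` was set:

```shell
# Option 1: what was done in the thread.
systemctl stop firewalld
systemctl disable firewalld

# Option 2: keep firewalld and allow the Mesos master port instead.
firewall-cmd --permanent --add-port=5050/tcp
firewall-cmd --reload
```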
Er, I could. At the moment it's pretty huge, so maybe I'll just try and trim
it down a bit. I've noticed that Chronos does the same, actually. There is
a task that is 'active' and still holding onto resources, yet it has already
completed unsuccessfully with a TASK_FAILED state (16 hrs ago!). Attached i
Hi All,
I did step 1 below and checked the logs from the newly started mesos-master, and it
continuously complained like below.
```
I1125 17:42:59.066706 2330 recover.cpp:188] Received a recover response from a
replica in EMPTY status
I1125 17:43:09.065188 2331 recover.cpp:111] Unable to finish the rec
```