--
Thanks,
Paul
Thank you, Sivaram.
That would seem to be 2 "votes" for upgrading.
-Paul
> On Aug 13, 2016, at 11:47 PM, Sivaram Kannan <sivara...@gmail.com> wrote:
>
>
> I don't remember the condition exactly, but I have faced similar issue in my
> deployments and have be
it, then I'd do my best to compel the
upgrade.
Docker version is also old, 1.6.2.
-Paul
On Wed, Aug 10, 2016 at 9:18 AM, Jeff Schroeder <jeffschroe...@computer.org>
wrote:
> Have you considered upgrading Mesos and Marathon? Those are quite old
> versions of both with some fairly glar
to Mesos & Marathon becoming unaware of
containers that they started.
I would be very grateful if someone could help me understand what's going
on here (so would our customer!).
Thanks.
-Paul
) as in 'hot'.
But, at least to English ears, that pronunciation feels a bit stilted. So I
think Rodrick's right to sound the 'o' as long, as in 'tone'.
-Paul
> On Jul 13, 2016, at 9:12 PM, Rodrick Brown <rodr...@orchard-app.com> wrote:
>
> Mess-O's
>
> Get Outlook for iOS
>
or ZK?
Thanks again.
-Paul
May 16 20:06:53 71 kernel: [193339.890848] INFO: task mesos-master:4013
blocked for more than 120 seconds.
May 16 20:06:53 71 kernel: [193339.890873] Not tainted
3.13.0-32-generic #57-Ubuntu
May 16 20:06:53 71 kernel: [193339.890889] "echo 0 >
/proc/sy
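For anyone hitting the same hung-task warning: a minimal sketch of inspecting the relevant sysctl (this is the standard kernel knob the truncated message refers to; nothing here is taken from the original log):

```shell
# The hung-task detector warns when a task sits in uninterruptible
# sleep (D state) longer than this many seconds (default 120).
cat /proc/sys/kernel/hung_task_timeout_secs

# Writing 0 only silences the warning; it does not unstick the task.
# The real fix is finding the I/O stall or lock the task is blocked on.
# echo 0 | sudo tee /proc/sys/kernel/hung_task_timeout_secs
```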
there other steps I can take to avoid this mildly calamitous
occurrence?
- (Also, I'd be grateful for more clarity on anything in steps 1-4 above
that is a bit hand-wavy!)
As always, thanks.
-Paul
://: the mesos-slave service refuses to start. It seems to
require a Unix socket.
There is one comment in the ticket that expresses the hope of being able to
use URLs of the tcp:// form.
Am I misunderstanding this fix and if not, what release of Mesos
incorporates it?
Thanks for your help.
-Paul
Hi June,
In addition to doing what Pradeep suggests, I also now & then run a single
node "cluster" that houses mesos-master, mesos-slave, and Marathon.
Works fine.
Cordially,
Paul
On Wed, Apr 13, 2016 at 12:36 PM, Pradeep Chhetri <
pradeep.chhetr...@gmail.com> wrote:
>
, then
simply a different strategic direction?
My 2 cents.
-Paul
On Mon, Apr 11, 2016 at 5:19 PM, Zameer Manji <zma...@apache.org> wrote:
> I have suggested this before and I will suggest it again here.
>
> I think the Apache Mesos project should build and distribute pac
ault
workdir?
Shua,
Thank you for this observation. Happily (I think), we do not have a custom
framework. Presently, Marathon is the only framework that we use.
-Paul
On Mon, Apr 11, 2016 at 8:12 AM, Shuai Lin <linshuai2...@gmail.com> wrote:
> If your product contains a custom framew
pose the
usual dimensions of B/R would come into play, e.g., hot/cold, application
consistent/crash consistent, etc.
Has anyone grappled with this issue and, if so, would you be so kind as to
share your experience and solutions?
Thank you.
-Paul
all
services on all nodes) is a standard feature in our product, should I
routinely run the above "rm" command when the mesos services are stopped?
Thanks for your help.
Cordially,
Paul
On Tue, Mar 29, 2016 at 6:16 PM, Greg Mann <g...@mesosphere.io> wrote:
> Check out
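For context, the "rm" command discussed above typically targets the agent's recovery metadata; a sketch assuming the default work dir /tmp/mesos (adjust for a custom --work_dir; the exact command in the original thread is truncated):

```shell
# Stop the agent first; removing its metadata while it runs is unsafe.
sudo service mesos-slave stop

# Wipe the agent's recovery state so it registers as a fresh agent.
# 'latest' is a symlink to the most recent slave ID directory.
sudo rm -rf /tmp/mesos/meta/slaves/latest

sudo service mesos-slave start
```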
interested in fully understanding the causal chain here before I
try to fix anything.
-Paul
On Tue, Mar 29, 2016 at 5:51 PM, Paul Bell <arach...@gmail.com> wrote:
> Whoa... interesting!
>
> The node *may* have been rebooted. Uptime says 2 days. I'll need to check
> my notes.
>
- or do I misunderstand?
Thank you again for your help.
-Paul
On Tue, Mar 29, 2016 at 5:36 PM, Greg Mann <g...@mesosphere.io> wrote:
> Paul,
> This would be relevant for any system which is automatically deleting
> files in /tmp. It looks like in Ubuntu, the default behav
Hi Greg,
Thanks very much for your quick reply.
I simply forgot to mention platform. It's Ubuntu 14.04 LTS and it's not
systemd. I will look at the link you provide.
Is there any chance that it might apply to non-systemd platforms?
Cordially,
Paul
On Tue, Mar 29, 2016 at 5:18 PM, Greg Mann
Mesos is version 0.23.0.
Thanks for your help.
-Paul
Log file created at: 2016/03/29 14:19:39
Running on machine: 71.100.202.193
Log line format: [IWEF]mmdd hh:mm:ss.uu threadid file:line] msg
I0329 14:19:39.512249 5870 logging.cpp:172] INFO level logging started!
I0329 14:19:39.512564 5870 m
me more, but I'd be interested to hear feedback on
these few points that I've raised.
Thanks again.
-Paul
On Thu, Feb 25, 2016 at 11:55 AM, Vinod Kone <vinodk...@gmail.com> wrote:
>
> > But an important MSP requirement is a unified view of their many
> tenants. So I am really t
unified view of their many tenants. So I am really trying to get a
sense for how well the recent Mesos/Marathon releases address this
requirement.
Thank you.
-Paul
seful from a load-balancing
perspective.
Just curious if it's ever been considered and if so - and rejected - why
rejected?
Thanks.
-Paul
hing "wrong" in my kernel upgrade steps?
Is anyone aware of such an issue in 3.19 or of work done post-3.13 in the
area of task termination & signal handling?
Thanks for your help.
-Paul
On Thu, Jan 14, 2016 at 5:14 PM, Paul Bell <arach...@gmail.com> wrote:
> I spoke
he VM.
FWIW, the 3 mongod containers (apparently stuck in their Killing docker
task/shutting down loop) are running at 100%CPU as evinced by both "docker
stats" and "top".
I would truly be grateful for some guidance on this - even a mere
work-around would be appreciated.
Thank you.
-Paul
modifying docker_stop_timeout. Back shortly
Thanks again.
-Paul
PS: what do you make of the "broken pipe" error in the docker.log?
*from /var/log/upstart/docker.log*
INFO[3054] GET /v1.15/images/mongo:2.6.8/json
INFO[3054] GET
/v1.21/images/mesos-20160114-153418-1674208327-5050-3
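On docker_stop_timeout: with the Mesosphere packaging of this era, agent flags were usually set by dropping a file into /etc/mesos-slave (the flag name is real; the 30-second value is just an example):

```shell
# Give containers 30 seconds between 'docker stop' (SIGTERM) and the
# follow-up SIGKILL, instead of killing them immediately.
echo '30secs' | sudo tee /etc/mesos-slave/docker_stop_timeout

# Restart the agent so the flag takes effect.
sudo service mesos-slave restart
```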
Hey Tim,
Thank you very much for your reply.
Yes, I am in the midst of trying to reproduce the problem. If successful
(so to speak), I will do as you ask.
Cordially,
Paul
On Thu, Jan 14, 2016 at 3:19 PM, Tim Chen <t...@mesosphere.io> wrote:
> Hi Paul,
>
> Looks like we've
I spoke too soon, I'm afraid.
Next time I did the stop (with zero timeout), I see the same phenomenon: a
mongo container showing repeated:
killing docker task
shutting down
What else can I try?
Thank you.
On Thu, Jan 14, 2016 at 5:07 PM, Paul Bell <arach...@gmail.com> wrote:
> Hi T
Gladly, Weitao. It'd be my pleasure.
But give me a few hours to find some free time.
I am today tasked with cooking a Thanksgiving turkey.
But I will try to find the time before noon today (I'm on the right coast in
the USA).
-Paul
> On Nov 25, 2015, at 11:26 PM, Weitao <z
There is a whole lot more that I could say about the internals of this
architecture. But, if you're still interested, I'll await further questions
from you.
HTH.
Cordially,
Paul
On Thu, Nov 26, 2015 at 7:16 AM, Paul <arach...@gmail.com> wrote:
> Gladly, Weitao. It'd be my pleasure
Hi Sam,
Yeah, I have significant experience in this regard.
We run Docker containers spread across several Mesos slave nodes. The
containers are all connected via Weave. It works very well.
Can you describe what you have in mind?
Cordially,
Paul
> On Nov 25, 2015, at 8:03 PM,
deployment involve Docker and Weave?
-paul
> On Nov 25, 2015, at 8:55 PM, Sam <usultra...@gmail.com> wrote:
>
> Paul,
> Happy Thanksgiving first. We are using AWS and Rackspace as a hybrid cloud environment,
> and we deployed the Mesos master in AWS, part of the Slaves in AWS, part of the Slaves
>
g within an endpoint, but this could boil down
to a case of "hurry up and wait".
I would urge you to take this question up with the friendly, knowledgeable,
and very helpful folks at Weave:
https://groups.google.com/a/weave.works/forum/#!forum/weave-users .
Cordially,
Paul
On Wed, Nov 25,
Jie,
Thank you.
That's odd behavior, no? That would seem to mean that the slave can never again
join the cluster, at least not from its original IP address.
What if the master bounces? Will it then tolerate the slave?
-Paul
On Nov 13, 2015, at 4:46 PM, Jie Yu <yujie@gmail.com> wrote:
Ah, now I get it.
And this comports with the behavior I am observing right now.
Thanks again, Jie.
-Paul
> On Nov 13, 2015, at 5:55 PM, Jie Yu <yujie@gmail.com> wrote:
>
> Paul, the slave will terminate after receiving a Shutdown message. The slave
> will be restart
happens if
it comes up 1 second after exceeding the timeout product?
(I'm dusting off some old notes and trying to refresh my memory about
problems I haven't seen in quite some time).
Thank you.
-Paul
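For what it's worth, the "timeout product" here is presumably the agent health-check window: slave_ping_timeout multiplied by max_slave_ping_timeouts, whose defaults in Mesos releases of this era were 15 seconds and 5. A quick sanity check of that arithmetic:

```shell
# Default agent health-check window: after this many seconds of
# missed pings, the master removes the agent.
slave_ping_timeout=15       # --slave_ping_timeout default, in seconds
max_slave_ping_timeouts=5   # --max_slave_ping_timeouts default
echo $(( slave_ping_timeout * max_slave_ping_timeouts ))   # -> 75
```

So an agent that reconnects even one second past that 75-second window is treated as removed.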
-qualified path under
/tmp/mesos/slaves, but this is readily available via "docker inspect".
-Paul
On Fri, Nov 6, 2015 at 7:17 AM, Paul Bell <arach...@gmail.com> wrote:
> Hi Mauricio,
>
> Yeah, I see your point; thank you.
>
> My approach would be akin to closing th
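On finding the sandbox path via "docker inspect", as mentioned above: a sketch (the container ID is a placeholder; Mesos-launched containers of this era bind-mounted the executor sandbox from under the agent work dir):

```shell
# The agent bind-mounts the task sandbox into the container, so the
# fully-qualified path under /tmp/mesos/slaves shows up in the
# inspect output for that container.
docker inspect <container-id> | grep '/tmp/mesos/slaves'
```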
Hi Mauricio,
Yeah, I see your point; thank you.
My approach would be akin to closing the barn door after the horse got out.
Both Mesos & Docker are doing their own writing of STDOUT. Docker's
rotation won't address Mesos's behavior.
I need to find a solution here.
-Paul
On Thu, Nov 5,
Hi Mauricio,
I'm grappling with the same issue.
I'm not yet sure if it represents a viable solution, but I plan to look at
Docker's log rotation facility. It was introduced in Docker 1.8.
If you beat me to it & it looks like a solution, please let us know!
Thanks.
Cordially,
Paul
>
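On Docker's log rotation (the json-file driver options, available from Docker 1.8): a sketch using the image mentioned earlier in the thread; the size and count values are arbitrary examples:

```shell
# Cap each container's json log at 10 MB and keep at most 3 rotated files.
docker run -d \
  --log-driver=json-file \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  mongo:2.6.8
```

Note this only rotates Docker's own json logs; as observed above, it does not change what Mesos writes to the sandbox stdout/stderr.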
or the mesos running of the image, versions are getting
confused.
I guess my first question is what additional information can I get from
marathon or mesos logs to help diagnose? I've checked the mesos-SLAVE.* but
haven't been able to garner anything interesting there.
Thanks for any help!
Paul
No different tags.
From: Rad Gruchalski [mailto:ra...@gruchalski.com]
Sent: Tuesday, October 06, 2015 11:39 AM
To: user@mesos.apache.org
Subject: Re: Old docker version deployed
Paul,
Are you using the same tag every time?
Kind regards,
Radek Gruchalski
ra...@gruchalski.com
: Re: Old docker version deployed
You could see the stdout/stderr of your container from mesos webui.
On Tue, Oct 6, 2015 at 5:30 PM, Paul Wolfe <paul.wo...@imc.nl> wrote:
Hello all,
I'm new to this list, so please let me know if there is a better/more
er": {
    "image": "docker-registry:8080/myapp:86",
    "network": "BRIDGE",
    "portMappings": [
      {
        "containerPort": 80,
        "hostPort": 0,
        "servicePort": 80,
        "protocol": "tcp"
      }
My marathon deploy json:
{
  "type": "DOCKER",
  "volumes": [
container, always use your image name "docker-registry:8080/myapp:86" as pull
and run parameters. I think maybe some machines have problems connecting to your
image registry.
On Tue, Oct 6, 2015 at 5:40 PM, Paul Wolfe <paul.wo...@imc.nl> wrote:
Thanks, Alexander; I will check out the vid.
I kind of assumed that this port was used for exactly the purpose you
mention.
Is TLS a possibility here?
-Paul
On Tue, Oct 6, 2015 at 8:15 AM, Alexander Rojas <alexan...@mesosphere.io>
wrote:
> Hi Paul,
>
> I can refer you to
or Nessus against my clustered
deployment.
So I am wondering what the best practices approach is to securing these
open ports.
Thanks for your help.
-Paul
/slaves/latest
But I know of no way to make such configuration changes without downtime.
And I'd very much like it if Mesos supported such dynamic changes. I
suppose this would require that the agent consult its default file on
demand, rather than once at start-up.
Cordially,
Paul
On Wed, Sep 23
Thank you, Benjamin.
So, I could periodically request the metrics endpoint, or stream the logs
(maybe via mesos.cli; or SSH)? What, roughly, does the "agent removed"
message look like in the logs?
Are there plans to offer a mechanism for event subscription?
Cordially,
Paul
On W
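On polling the metrics endpoint discussed above: a sketch, assuming a master at localhost:5050 (the /metrics/snapshot endpoint and the master/slave_removals counter are standard; the grep filter is just one way to pull the value out):

```shell
# Against a live master you would poll:
#   curl -s http://localhost:5050/metrics/snapshot | grep -o '"master/slave_removals[^,}]*'

# The same filter against a sample snapshot, to show the output shape:
echo '{"master/slave_removals":1.0,"master/uptime_secs":3600.0}' \
  | grep -o '"master/slave_removals[^,}]*'
# -> "master/slave_removals":1.0
```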
Alex and Marco,
Thanks very much for your really helpful explanations.
For better or worse, neither cpp nor Python are my things; Java's the go-to
language for me.
Cordially,
Paul
On Sat, Aug 29, 2015 at 5:23 AM, Marco Massenzio <ma...@mesosphere.io>
wrote:
> Hi Paul,
>
> +1 t
with a different CTR_ID effectively rendering that data
inaccessible. But docker start will restart the container and its old
data will still be there.
Thanks.
-Paul
, gents.
-Paul
On Fri, Aug 28, 2015 at 2:26 PM, Tim Chen t...@mesosphere.io wrote:
Hi Paul,
We don't [re]start a container since we assume once the task terminated
the container is no longer reused. In Mesos to allow tasks to reuse the
same executor and handle task logic accordingly people
at a customer site.
Thanks for all your help.
Cordially,
Paul
On Thu, Aug 13, 2015 at 9:41 PM, Klaus Ma kl...@cguru.net wrote:
I used to meet a similar issue with Zookeeper + Mesos; I resolved it by
removing 127.0.1.1 from /etc/hosts; here is an example:
klaus@klaus-OptiPlex-780
(UPID=master@127.0.1.1:5050)
I've tried clearing what ZK and mesos-master state I can find, but this
problem will not go away.
Would someone be so kind as to a) explain what is happening here and b)
suggest remedies?
Thanks very much.
-Paul
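A sketch of the /etc/hosts cleanup implied above (127.0.1.1 is the Debian/Ubuntu convention that can make the master advertise an unreachable address; the demo below edits a throwaway copy rather than the real file):

```shell
# Demonstrate the edit on a copy (never edit /etc/hosts blindly):
printf '127.0.0.1 localhost\n127.0.1.1 myhost\n' > /tmp/hosts.demo
sed -i '/^127\.0\.1\.1/d' /tmp/hosts.demo
cat /tmp/hosts.demo
# -> 127.0.0.1 localhost
```

Alternatively, start the master with an explicit address (mesos-master --ip=<host-IP>) so it never falls back to what /etc/hosts says.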
Log file created at: 2015
, master, marathon; all on same host) by SSHing
into the host doing service start commands.
Again, thanks very much; and more tomorrow.
Cordially,
Paul
On Thu, Aug 13, 2015 at 1:08 PM, haosdent haosd...@gmail.com wrote:
Hello, how do you start the master? And could you try using netstat -antp | grep
be curious to learn why you're finding Weave messy.
If you'd like to take it out-of-band (as it were), please feel free to
e-mail me directly.
Cordially,
Paul
On Wed, Aug 12, 2015 at 3:16 AM, Stephen Knight skni...@pivotal.io wrote:
Hi,
Is there a way to pass a custom flag to docker run through
Twitter: xds2000
E-mail: xiaods(AT)gmail.com
--
-- Paul Brett
in the code?
-- Paul
On Tue, Jun 2, 2015 at 12:53 PM, haosdent haosd...@gmail.com wrote:
Hi Adam,
1. Mesos Worker
2. Mesos Worker
3. No
4. Carefully. Take care with compatibility when upgrading.
On Wed, Jun 3, 2015 at 2:50 AM, Dave Lester d...@davelester.org wrote:
Hi Adam,
I've been using
Finally, a reason to travel to Deutschland! ;) Good luck with the new MUG!
Paul
On Mar 29, 2015 1:00 PM, Marc Zimmermann marc.zimmerm...@mmbash.de
wrote:
We’ve started a new users group in Cologne called Mesos-User-Group-Cologne
- please add us to your list!
http://www.meetup.com/Mesos-User
This is awesome! Thanks for all the hard work you all have put into this! I
am really excited to update to the latest stable version of Apache Mesos!
Regards,
Paul
Paul Otto
Principal DevOps Architect, Co-founder
Otto Ops LLC | *OttoOps.com http://OttoOps.com*
970.343.4561 office
720.381.2383
Hi Dave,
I would be interested in having Otto Ops LLC be added to that list. We have
been building a Mesos + Marathon + Docker infrastructure for Time Warner
Cable, and would be very interested in doing more with the community.
Regards,
Paul Otto
--
Paul Otto
Principal DevOps Engineer, Owner
exceeded its allocation and kill the process. For that, you need to enable
cgroups at which point allocation limits are enforced by the OS. Did I get
that right?
--
Thanks,
Paul
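On enabling that enforcement: the agent's --isolation flag is the usual switch (the flag and isolator names are real; the /etc/mesos-slave drop-file path is the Mesosphere packaging convention used elsewhere in this thread):

```shell
# Enforce CPU and memory limits via cgroups instead of the default
# posix isolators, which only monitor usage rather than enforce it.
echo 'cgroups/cpu,cgroups/mem' | sudo tee /etc/mesos-slave/isolation
sudo service mesos-slave restart
```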
-running tasks. I also haven't seen any configuration related to
resource revocation. Am I missing something?
--
Thanks,
Paul