Simple question about slave<->master latency?

2016-05-11 Thread James Vanns
Is the latency (perhaps the weighted rolling average) between master and a
slave measured? If so, is it recorded as an attribute of a slave object in
the scheduler API?

Cheers,

Jim

--
Senior Production Engineer
Industrial Light & Magic


RE: Marathon scaling application

2016-05-11 Thread suruchi.kumari
Hi,


1. I did not launch the Marathon job with a JSON file.

2. The version of Mesos is 0.27.2 and Marathon is 0.15.3.

3. OS on the nodes: Ubuntu 14.04 LTS.

4. Here are the slave logs:

E0511 00:41:43.982487  1460 slave.cpp:3800] Termination of executor 
'nginx.226620ca-1711-11e6-9f8a-fa163ecc33f1' of framework 
a039103f-aab7-4f15-8578-0d52ac8f60e0- failed: Unknown container: 
a0d72cc7-f02b-44d7-b93a-3b1df6e74414
E0511 01:41:44.518671  1457 slave.cpp:3729] Container 
'20095298-d0c5-4c23-ae0b-a0b9393ecfb4' for executor 
'nginx.847bdd1b-1719-11e6-9f8a-fa163ecc33f1' of framework 
a039103f-aab7-4f15-8578-0d52ac8f60e0- failed to start: None of the enabled 
containerizers (mesos) could create a container for the provided 
TaskInfo/ExecutorInfo message
E0511 01:41:44.518831  1457 slave.cpp:3800] Termination of executor 
'nginx.847bdd1b-1719-11e6-9f8a-fa163ecc33f1' of framework 
a039103f-aab7-4f15-8578-0d52ac8f60e0- failed: Unknown container: 
20095298-d0c5-4c23-ae0b-a0b9393ecfb4
E0511 02:41:44.632048  1462 slave.cpp:3729] Container 
'944a6719-b942-4a06-8d4a-08e1f624f62e' for executor 
'nginx.e6557acc-1721-11e6-9f8a-fa163ecc33f1' of framework 
a039103f-aab7-4f15-8578-0d52ac8f60e0- failed to start: None of the enabled 
containerizers (mesos) could create a container for the provided 
TaskInfo/ExecutorInfo message
E0511 02:41:44.632735  1457 slave.cpp:3800] Termination of executor 
'nginx.e6557acc-1721-11e6-9f8a-fa163ecc33f1' of framework 
a039103f-aab7-4f15-8578-0d52ac8f60e0- failed: Unknown container: 
944a6719-b942-4a06-8d4a-08e1f624f62e
E0511 03:41:44.781136  1464 slave.cpp:3729] Container 
'2677810f-0f42-45fd-87aa-329a9fbe5af0' for executor 
'nginx.482be42d-172a-11e6-9f8a-fa163ecc33f1' of framework 
a039103f-aab7-4f15-8578-0d52ac8f60e0- failed to start: None of the enabled 
containerizers (mesos) could create a container for the provided 
TaskInfo/ExecutorInfo message
E0511 03:41:44.782914  1460 slave.cpp:3800] Termination of executor 
'nginx.482be42d-172a-11e6-9f8a-fa163ecc33f1' of framework 
a039103f-aab7-4f15-8578-0d52ac8f60e0- failed: Unknown container: 
2677810f-0f42-45fd-87aa-329a9fbe5af0
E0511 04:41:44.891082  1463 slave.cpp:3729] Container 
'acefe126-7d69-4525-987c-bafbf1dd1d6f' for executor 
'nginx.aa066c3e-1732-11e6-9f8a-fa163ecc33f1' of framework 
a039103f-aab7-4f15-8578-0d52ac8f60e0- failed to start: None of the enabled 
containerizers (mesos) could create a container for the provided 
TaskInfo/ExecutorInfo message
E0511 04:41:44.891180  1463 slave.cpp:3800] Termination of executor 
'nginx.aa066c3e-1732-11e6-9f8a-fa163ecc33f1' of framework 
a039103f-aab7-4f15-8578-0d52ac8f60e0- failed: Unknown container: 
acefe126-7d69-4525-987c-bafbf1dd1d6f

E0510 10:27:25.997802  1352 process.cpp:1958] Failed to shutdown socket with fd 
10: Transport endpoint is not connected
E0511 05:39:43.651479  1351 slave.cpp:3252] Failed to update resources for 
container 53bb3453-31b2-4cf7-a9e1-5f700510eeb4 of executor 
'nginx.38f28ab0-169b-11e6-9f8a-fa163ecc33f1' running task 
nginx.38f28ab0-169b-11e6-9f8a-fa163ecc33f1 on status update for terminal task, 
destroying container: Failed to 'docker -H unix:///var/run/docker.sock inspect 
mesos-f986e4ba-91ba-4624-b685-4c004407c6db-S1.53bb3453-31b2-4cf7-a9e1-5f700510eeb4':
 exit status = exited with status 1 stderr = Cannot connect to the Docker 
daemon. Is the docker daemon running on this host?
E0511 05:39:43.651845  1351 slave.cpp:3252] Failed to update resources for 
container ec4e97ad-2365-4c29-9ed7-64cd9261c666 of executor 
'nginx.38f48682-169b-11e6-9f8a-fa163ecc33f1' running task 
nginx.38f48682-169b-11e6-9f8a-fa163ecc33f1 on status update for terminal task, 
destroying container: Failed to 'docker -H unix:///var/run/docker.sock inspect 
mesos-f986e4ba-91ba-4624-b685-4c004407c6db-S1.ec4e97ad-2365-4c29-9ed7-64cd9261c666':
 exit status = exited with status 1 stderr = Cannot connect to the Docker 
daemon. Is the docker daemon running on this host?
E0511 05:39:43.651983  1351 slave.cpp:3252] Failed to update resources for 
container 116be528-b81f-4e4c-b2a4-11bb10707031 of executor 
'nginx.413b3558-169b-11e6-9f8a-fa163ecc33f1' running task 
nginx.413b3558-169b-11e6-9f8a-fa163ecc33f1 on status update for terminal task, 
destroying container: Failed to 'docker -H unix:///var/run/docker.sock inspect 
mesos-f986e4ba-91ba-4624-b685-4c004407c6db-S1.116be528-b81f-4e4c-b2a4-11bb10707031':
 exit status = exited with status 1 stderr = Cannot connect to the Docker 
daemon. Is the docker daemon running on this host?
E0511 05:39:43.652032  1351 slave.cpp:3252] Failed to update resources for 
container f77a5a14-4eb0-4801-a520-6fd2b298a3e3 of executor 
'nginx.47bcdb9a-169b-11e6-9f8a-fa163ecc33f1' running task 
nginx.47bcdb9a-169b-11e6-9f8a-fa163ecc33f1 on status update for terminal task, 
destroying container: Failed to 'docker -H unix:///var/run/docker.sock inspect 

Re: Marathon scaling application

2016-05-11 Thread Ken Sipe
The logs indicate an issue with running Docker.
I would start by logging into the node you are having issues with and debugging
the Docker problem there. I suspect you can't run a Docker container manually.
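For example, something like the following on the affected agent (the exact
service command depends on how Docker was installed on Ubuntu 14.04):

  # check whether the Docker daemon is running and reachable
  sudo service docker status
  sudo docker info

  # try launching a container by hand
  sudo docker run --rm hello-world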

ken
> On May 11, 2016, at 4:43 AM,  
>  wrote:
> 
> Hi,
>  
> 1.   I did not launch the marathon  job with json file.
> 2.version of mesos is 0.27.2 and marathon is 0.15.3
> 3.   what OS is on the nodes :Ubuntu 14.04 LTS
> 4.   Here are the slave logs :-
>  
> [slave logs snipped; identical to those in the original message above]

Re: Marathon scaling application

2016-05-11 Thread Stephen Gran
Hi,

The logs say that the only enabled containerizer is mesos.  Perhaps you 
need to set that to mesos,docker.
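For example, on each agent (the --containerizers flag is standard; the
file-based config below assumes the Mesosphere packages and may differ on
your install):

  # Mesosphere packages read agent flags from files under /etc/mesos-slave/
  echo 'docker,mesos' | sudo tee /etc/mesos-slave/containerizers
  sudo service mesos-slave restart

  # or pass the flag directly when starting the agent:
  mesos-slave --master=zk://host:2181/mesos --containerizers=docker,mesos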

Cheers,

On 11/05/16 10:48, suruchi.kum...@accenture.com wrote:
> Hi,
>
> 1.I did not launch the marathon  job with json file.
>
> 2. version of mesos is 0.27.2 and marathon is 0.15.3
>
> 3.what OS is on the nodes :Ubuntu 14.04 LTS
>
> 4.Here are the slave logs :-
>
> [slave logs snipped; identical to those in the original message above]

Re: IP address as resource

2016-05-11 Thread Bharath Ravi Kumar
Hi Stefano,

Yes, I did look at Calico, Weave and similar projects, but didn't find them
relevant since they appear to solve a different (and more complex) problem.

On Sat, May 7, 2016 at 7:37 PM, Stefano Bianchi 
wrote:

> Did you look at Project Calico?
>
> 2016-05-07 3:45 GMT+02:00 Bharath Ravi Kumar :
>
>> Hi Zameer,
>>
>> Thanks for responding. I had reached out to user@ at the same time, but
>> haven't heard back. As for the specific feature in Aurora, since we're
>> still testing our internal system against various frameworks, I'd be
>> willing to try a patch to better evaluate the capability in question.
>> Looking forward to a response on user@aurora.
>>
>> On Fri, May 6, 2016 at 10:17 PM, Zameer Manji  wrote:
>>
>>> Bharath,
>>>
>>> Aurora is currently adding support for arbitrary resources with this
>>> exact usecase in mind. The code isn't complete yet and it hasn't been tried
>>> out in production. I suggest reaching out to the user@
>>>  for Aurora to get the latest
>>> update.
>>>
>>> On Fri, May 6, 2016 at 6:36 AM, Bharath Ravi Kumar 
>>> wrote:
>>>
 Hi,

 I'm aware of mesos' IP-per-container capability and the authors'
 reasons for not modeling an IP address as a resource on a host. However,
 for operational simplicity, I prefer an implementation that does not
 interact with multiple other services (e.g. an IPAM). I'm hence considering
 the following approach:

 a) Model the IP addresses available on a host as resources.
 b) Using the IP address (from the set) accepted by a framework, launch
 a task using the docker containerizer, with the IP address selected by the
 framework.
 c) For tasks that are not resource intensive, fall back on port range
 reservation and docker host mode networking.

It appears that Marathon doesn't support arbitrary resources, but
Apache Aurora might(?). I'd like to know if anyone else has attempted this
approach with either framework, any potential downsides to it, and any
similar alternatives.
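(A minimal sketch of how step (a) might look on the agent side, purely for
illustration; the resource name "ips" and the addresses are made up, and the
framework would need to understand this custom set-type resource in offers:)

  # advertise a set-type custom resource alongside the usual ones
  mesos-slave --master=zk://host:2181/mesos \
    --resources='cpus:4;mem:8192;ips:{10.0.1.10,10.0.1.11,10.0.1.12}'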

 Thanks,
 Bharath

 --
 Zameer Manji


>>
>


Re: We're using Mesos

2016-05-11 Thread haosdent
Sure, you could create a pull request on GitHub like
https://github.com/apache/mesos/pull/100

On Wed, May 11, 2016 at 11:22 PM, Lee Porte 
wrote:

> Hi,
>
> Please could we be included in the list of Powered by Mesos?
>
> Company name: Football Radar
> URL: http://www.footballradar.com
>
> Many thanks
>
> Lee
>



-- 
Best Regards,
Haosdent Huang


Re: We're using Mesos

2016-05-11 Thread Lee Porte
Good idea, opened at https://github.com/apache/mesos/pull/103

On Wed, May 11, 2016 at 4:31 PM, haosdent  wrote:

> Sure, you could create a merge request in github like
> https://github.com/apache/mesos/pull/100
>
> On Wed, May 11, 2016 at 11:22 PM, Lee Porte 
> wrote:
>
>> Hi,
>>
>> Please could we be included in the list of Powered by Mesos?
>>
>> Company name: Football Radar
>> URL: http://www.footballradar.com
>>
>> Many thanks
>>
>> Lee
>>
>
>
>
> --
> Best Regards,
> Haosdent Huang
>


RE: distributed file systems

2016-05-11 Thread Aaron Carey
What exactly do you mean by deploying a mesos cluster to run on ceph etc?

Do you mean having a clustered file system mounted via nfs to the hosts which 
contains the mesos binaries?

Or something to do with how jobs are executed?

--

Aaron Carey
Production Engineer - Cloud Pipeline
Industrial Light & Magic
London
020 3751 9150


From: james [gar...@verizon.net]
Sent: 11 May 2016 17:08
To: user@mesos.apache.org
Subject: Re: distributed file systems

[quoted message from james snipped; it appears in full as the next message in this digest]



Re: distributed file systems

2016-05-11 Thread james

Hello Rodrick,

That EFS looks interesting, but I could not find where to download the
source code (or a git repository). I also do not recall whether that
distributed file system needs (Linux) kernel hooks, or whether it sits
completely on top of the existing system code.

What are the license details? I'm not sure it is 100% open source.

Beegfs [A] is only partially open source, which does not fit what is needed
for experimentation. A robust community around open source code and tools,
such as GitHub, would have been worth mentioning. Equally important is a
community keen on sharing and supporting other efforts to replicate and use
the components of these cluster-centric codes. [B,C]


James

[A] http://www.beegfs.com/content/

[B] https://forums.aws.amazon.com/thread.jspa?threadID=217783

[C] 
http://searchaws.techtarget.com/news/4500272286/Amazon-EFS-stuck-in-beta-lacks-native-Windows-support





On 05/11/2016 01:07 AM, Rodrick Brown wrote:

[quoted message from Rodrick Brown snipped; see his original post later in this digest]




We're using Mesos

2016-05-11 Thread Lee Porte
Hi,

Please could we be included in the list of Powered by Mesos?

Company name: Football Radar
URL: http://www.footballradar.com

Many thanks

Lee


Re: distributed file systems

2016-05-11 Thread james

On 05/11/2016 10:09 AM, Aaron Carey wrote:

What exactly do you mean by deploying a mesos cluster to run on ceph etc?

Do you mean having a clustered file system mounted via nfs to the hosts which 
contains the
mesos binaries?


That would be one way to use a DFS, but we will also be evaluating low
latency across a variety of Linux kernels, along with uClibc-ng, musl and
other fundamental components, to weigh the trade-offs between performance
and the robustness of the supported frameworks.



Or something to do with how jobs are executed?


YES, exactly. In fact, it is most interesting to test High Performance
Computing (HPC) on top of Mesos. A simple way to think about one of the
ultimate goals of the research is how to completely replace Hadoop with
Mesos, a DFS and many other components. Portability, not limited to
Arm64v8, is a companion track related to HPC on clusters and bare-metal
hardware. Most HPC installations moving forward will also have to support
some typical (admin/web/security/AI/etc.) workloads as part of the
evaluation.

Mesos does seem flexible enough for these sorts of explorations and
experiments. There is no inclination to constrict ideas or the testing of
competing components; only the desire to experiment, share and refine the
component mix (a.k.a. the cluster-stack), all in open forums - i.e.
Cluster-Stack_A vs Cluster-Stack_B and so on, for various workloads on
fixed, modest cluster sizes.

Think of it as experimentation that will eventually lead to published
results, much as MIPS/MOPS and various benchmarks have done for hardware,
only here the cluster-stack is the variable under study.

All ideas, comments and criticisms are most welcome. HDFS is mostly
described as the main bottleneck to HPC efforts, and much of what folks are
doing with various DFS replacements is too strictly constrained, imho.



James



[rest of the quoted thread snipped]

Are you using New HTTP API Yet ?

2016-05-11 Thread Vladimir Vivien
Is anyone using the new Mesos HTTP Scheduler/Executor APIs to create
frameworks? If so:
- what language?
- are you using an existing binding as an API wrapper (which one)?
- or your own custom-built API wrapper?
- do you prefer the old bindings or the newer HTTP-based API?
- any links discussing your implementation that can be shared?
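
For context, a framework talking to the v1 scheduler API without any binding
does something roughly like this (master address is an example; the response
is a chunked, RecordIO-framed event stream):

  curl -v http://master.example.com:5050/api/v1/scheduler \
    -H 'Content-Type: application/json' \
    -H 'Accept: application/json' \
    -d '{
          "type": "SUBSCRIBE",
          "subscribe": {
            "framework_info": {"user": "root", "name": "example-framework"}
          }
        }'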

Thanks for your help.
-- 
Vladimir Vivien


RE: Marathon deploying MySQL and Wordpress application

2016-05-11 Thread suruchi.kumari
Hi,

I would like to know whether we can deploy MySQL and WordPress together
through the Marathon UI (using the command option).

Can I place the commands and environment variables of both MySQL and
WordPress together in the command field provided in the Marathon UI?

-Original Message-
From: Stephen Gran [mailto:stephen.g...@piksel.com]
Sent: 11 May 2016 15:26
To: user@mesos.apache.org
Subject: Re: Marathon scaling application

Hi,

The logs say that the only enabled containerizer is mesos.  Perhaps you need to 
set that to mesos,docker.

Cheers,

On 11/05/16 10:48, suruchi.kum...@accenture.com wrote:
> Hi,
>
> 1.I did not launch the marathon  job with json file.
>
> 2. version of mesos is 0.27.2 and marathon is 0.15.3
>
> 3.what OS is on the nodes :Ubuntu 14.04 LTS
>
> 4.Here are the slave logs :-
>
> [slave logs snipped; identical to those in the earlier "RE: Marathon scaling application" message]

Re: Marathon deploying MySQL and Wordpress application

2016-05-11 Thread Casey Bisson
Good day,

You might consider this WordPress implementation:

https://www.joyent.com/blog/wordpress-on-autopilot

It should all run in Marathon with the right service manifests. The Container 
Solutions team demonstrated the MySQL implementation in:

https://container-solutions.com/containerpilot-on-mantl/
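
For comparison, a common alternative to the UI command box is to post each
service as its own Marathon app definition. A rough sketch of just the MySQL
half (app id, image tag and password are placeholders; port mappings, volumes
and health checks omitted; WordPress would be a second app pointing at it):

  curl -X POST http://marathon.example.com:8080/v2/apps \
    -H 'Content-Type: application/json' \
    -d '{
          "id": "/mysql",
          "cpus": 0.5,
          "mem": 512,
          "instances": 1,
          "container": {
            "type": "DOCKER",
            "docker": {"image": "mysql:5.6", "network": "BRIDGE"}
          },
          "env": {"MYSQL_ROOT_PASSWORD": "changeme"}
        }'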

—Casey


> On May 11, 2016, at 10:18 PM,  
>  wrote:
> 
> Hi,
> 
> I would like to know that can we deploy MySQL and WordPress together through 
> Marathon UI (using the command option of Marathon UI).
> 
> Can I place both commands and environment variables of MySQL and WordPress 
> together in the white space of the command option provided  in the Marathon 
> UI.



RE: Marathon MySQL and Wordpress Deployment

2016-05-11 Thread suruchi.kumari
Hi,

I would like to know whether we can deploy MySQL and WordPress together
through the Marathon UI (using the command option).

Can we put the commands of both, along with their environment variables, in
the command field together? Is it possible to run them that way?




-Original Message-
From: Stephen Gran [mailto:stephen.g...@piksel.com]
Sent: 11 May 2016 15:26
To: user@mesos.apache.org
Subject: Re: Marathon scaling application

Hi,

The logs say that the only enabled containerizer is mesos.  Perhaps you need to 
set that to mesos,docker.

Cheers,

On 11/05/16 10:48, suruchi.kum...@accenture.com wrote:
> Hi,
>
> 1.I did not launch the marathon  job with json file.
>
> 2. version of mesos is 0.27.2 and marathon is 0.15.3
>
> 3.what OS is on the nodes :Ubuntu 14.04 LTS
>
> 4.Here are the slave logs :-
>
> [slave logs snipped; identical to those in the earlier "RE: Marathon scaling application" message]

Re: distributed file systems

2016-05-11 Thread Rodrick Brown
Does EFS count? :-)

  

https://aws.amazon.com/efs/

--

Rodrick Brown / Systems Engineer

+1 917 445 6839 / rodr...@orchardplatform.com

Orchard Platform

101 5th Avenue, 4th Floor, New York, NY 10003

http://www.orchardplatform.com

Orchard Blog (http://www.orchardplatform.com/blog/) | Marketplace Lending
Meetup (http://www.meetup.com/Peer-to-Peer-Lending-P2P/)

On May 10 2016, at 9:07 pm, james gar...@verizon.net wrote:

> Hello,
>
> Has anyone customized/compiled mesos and successfully deployed a mesos
> cluster to run on cephfs, orangefs [1], or any other distributed file
> systems?
>
> If so, some detail on your setup would be appreciated.
>
> [1]
> http://www.phoronix.com/scan.php?page=news_item=OrangeFS-Lands-Linux-4.6




RE: Marathon scaling application

2016-05-11 Thread suruchi.kumari
Hi,

Thank you ..it worked :)

-Original Message-
From: Stephen Gran [mailto:stephen.g...@piksel.com]
Sent: 11 May 2016 15:26
To: user@mesos.apache.org
Subject: Re: Marathon scaling application

Hi,

The logs say that the only enabled containerizer is mesos.  Perhaps you need to 
set that to mesos,docker.

Cheers,

On 11/05/16 10:48, suruchi.kum...@accenture.com wrote:
> Hi,
>
> 1.I did not launch the marathon  job with json file.
>
> 2. version of mesos is 0.27.2 and marathon is 0.15.3
>
> 3.what OS is on the nodes :Ubuntu 14.04 LTS
>
> 4.Here are the slave logs :-
>
> [slave logs snipped; identical to those in the earlier "RE: Marathon scaling application" message]

Re: Enable s3a for fetcher

2016-05-11 Thread Ken Sipe
Jamie,

The general philosophy is that services should depend very little on the base
image (some would say not at all). There has been an HDFS client on the base
image which we have leveraged while we work on higher priorities, but it was
always our intent to remove it. Another example (and another enabler of this
working) is that there is a Java JRE on the base image; it would be a bad idea
to get addicted to it :)

That said, it has always been our intention to support different protocols,
such as retrieving artifacts from HDFS, which other services (such as Chronos)
could leverage. It makes sense that we support S3 retrieval as well. It does
mean that we need a pluggable way to hook in handlers for protocols other
than HTTP. We have had some discussion around it and have a design idea in
place. At this point it is a matter of priority and timing.

ken
> On May 10, 2016, at 1:21 PM, Briant, James  
> wrote:
> 
> I’m happy to have default IAM role on the box that can read-only fetch from 
> my s3 bucket. s3a gets the credentials from AWS instance metadata. It works.
> 
> If hadoop is gone, does that mean that hfds: URIs don’t work either?
> 
> Are you saying dcos and mesos are diverging? Mesos explicitly supports hdfs 
> and s3.
> 
> In the absence of S3, how do you propose I make large binaries available to 
> my cluster, and only to my cluster, on AWS?
> 
> Jamie
> 
> From: Cody Maloney >
> Reply-To: "user@mesos.apache.org " 
> >
> Date: Tuesday, May 10, 2016 at 10:58 AM
> To: "user@mesos.apache.org " 
> >
> Subject: Re: Enable s3a for fetcher
> 
> The s3 fetcher stuff inside of DC/OS is not supported. The `hadoop` binary 
> has been entirely removed from DC/OS 1.8 already. There have been various 
> proposals to make it so the mesos fetcher is much more pluggable / extensible 
> (https://issues.apache.org/jira/browse/MESOS-2731 
>  for instance). 
> 
> Generally speaking people want a lot of different sorts of fetching, and 
> there are all sorts of questions of how to properly get auth to the various 
> chunks (if you're using s3a:// presumably you need to get credentials there 
> somehow. Otherwise you could just use http://). Need to design / build that 
> into Mesos and DC/OS to be able to use this stuff.
> 
> Cody
> 
> On Tue, May 10, 2016 at 9:55 AM Briant, James  > wrote:
>> I want to use s3a: urls in fetcher. I’m using dcos 1.7 which has hadoop 2.5 
>> on its agents. This version has the necessary hadoop-aws and aws-sdk:
>> 
>> hadoop--afadb46fe64d0ee7ce23dbe769e44bfb0767a8b9]$ ls 
>> usr/share/hadoop/tools/lib/ | grep aws
>> aws-java-sdk-1.7.4.jar
>> hadoop-aws-2.5.0-cdh5.3.3.jar
>> 
>> What config/scripts do I need to hack to get these guys on the classpath so 
>> that "hadoop fs -copyToLocal” works?
>> 
>> Thanks,
>> Jamie
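
(If it helps while you experiment: one rough, untested way to get those jars
onto the hadoop CLI's classpath is HADOOP_CLASSPATH; the path below is a
placeholder for wherever that tools/lib directory lives on your agents, and
depending on the Hadoop build you may also need fs.s3a.impl set to
org.apache.hadoop.fs.s3a.S3AFileSystem in core-site.xml:)

  export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:/path/to/hadoop/share/hadoop/tools/lib/*"
  hadoop fs -copyToLocal s3a://my-bucket/artifact.tar.gz /tmp/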



Re: Enable s3a for fetcher

2016-05-11 Thread Ken Sipe
To Joseph's point: the HDFS and S3 challenges are DC/OS issues, not a Mesos
issue. We do, however, need Mesos to support custom protocols for the fetcher.
At our current pace of releases, that sounds not too far away.
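
(For reference, today the stock fetcher simply shells out to a Hadoop client
for hdfs:/s3:/s3n: URIs, so on a plain Mesos agent the hook point is the
--hadoop_home flag; the install path here is only an example:)

  mesos-slave --master=zk://host:2181/mesos \
    --hadoop_home=/opt/hadoop    # fetcher runs /opt/hadoop/bin/hadoop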

ken
> On May 10, 2016, at 2:20 PM, Joseph Wu  wrote:
> 
> Mesos does not explicitly support HDFS and S3.  Rather, Mesos will assume you 
> have a hadoop binary and use it (blindly) for certain types of URIs.  If the 
> hadoop binary is not present, the mesos-fetcher will fail to fetch your HDFS 
> or S3 URIs.
> 
> Mesos does not ship/package hadoop, so these URIs are not expected to work 
> out of the box (for plain Mesos distributions).  In all cases, the operator 
> must preconfigure hadoop on each node (similar to how Docker in Mesos works).
> 
> Here's the epic tracking the modularization of the mesos-fetcher (I estimate 
> it'll be done by 0.30):
> https://issues.apache.org/jira/browse/MESOS-3918 
> 
> 
> ^ Once done, it should be easier to plug in more fetchers, such as one for 
> your use-case.
> 
> On Tue, May 10, 2016 at 11:21 AM, Briant, James 
> > wrote:
> I’m happy to have default IAM role on the box that can read-only fetch from 
> my s3 bucket. s3a gets the credentials from AWS instance metadata. It works.
> 
> If hadoop is gone, does that mean that hfds: URIs don’t work either?
> 
> Are you saying dcos and mesos are diverging? Mesos explicitly supports hdfs 
> and s3.
> 
> In the absence of S3, how do you propose I make large binaries available to 
> my cluster, and only to my cluster, on AWS?
> 
> Jamie
> 
> From: Cody Maloney >
> Reply-To: "user@mesos.apache.org " 
> >
> Date: Tuesday, May 10, 2016 at 10:58 AM
> To: "user@mesos.apache.org " 
> >
> Subject: Re: Enable s3a for fetcher
> 
> The s3 fetcher stuff inside of DC/OS is not supported. The `hadoop` binary 
> has been entirely removed from DC/OS 1.8 already. There have been various 
> proposals to make it so the mesos fetcher is much more pluggable / extensible 
> (https://issues.apache.org/jira/browse/MESOS-2731 
>  for instance). 
> 
> Generally speaking people want a lot of different sorts of fetching, and 
> there are all sorts of questions of how to properly get auth to the various 
> chunks (if you're using s3a:// presumably you need to get credentials there 
> somehow. Otherwise you could just use http://). Need to design / build that 
> into Mesos and DC/OS to be able to use this stuff.
> 
> Cody
> 
> On Tue, May 10, 2016 at 9:55 AM Briant, James  > wrote:
> I want to use s3a: urls in fetcher. I’m using dcos 1.7 which has hadoop 2.5 
> on its agents. This version has the necessary hadoop-aws and aws-sdk:
> 
> hadoop--afadb46fe64d0ee7ce23dbe769e44bfb0767a8b9]$ ls 
> usr/share/hadoop/tools/lib/ | grep aws
> aws-java-sdk-1.7.4.jar
> hadoop-aws-2.5.0-cdh5.3.3.jar
> 
> What config/scripts do I need to hack to get these guys on the classpath so 
> that "hadoop fs -copyToLocal” works?
> 
> Thanks,
> Jamie
> 



Re: Enable s3a for fetcher

2016-05-11 Thread Ken Sipe
Jamie,

I'm in Europe this week, so the timing of my responses is out of sync /
delayed. There are two issues to work with here. The first is having a
pluggable Mesos fetcher; it sounds like that is scheduled for 0.30. The other
is what is available on DC/OS. Could you move that discussion to that mailing
list? I will definitely work with you on getting this resolved.

ken
> On May 10, 2016, at 3:45 PM, Briant, James  
> wrote:
> 
> Ok. Thanks Joseph. I will figure out how to get a more recent hadoop onto my 
> dcos agents then.
> 
> Jamie
> 
> From: Joseph Wu >
> Reply-To: "user@mesos.apache.org " 
> >
> Date: Tuesday, May 10, 2016 at 1:40 PM
> To: user >
> Subject: Re: Enable s3a for fetcher
> 
> I can't speak to what DCOS does or will do (you can ask on the associated 
> mailing list: us...@dcos.io ).
> 
> We will be maintaining existing functionality for the fetcher, which means 
> supporting the schemes:
> * file
> * http, https, ftp, ftps
> * hdfs, hftp, s3, s3n  <--  These rely on hadoop.
> 
> And we will retain the --hadoop_home agent flag, which you can use to specify 
> the hadoop binary.
> 
> Other schemes might work right now, if you hack around with your node setup.  
> But there's no guarantee that your hack will work between Mesos versions.  In 
> future, we will associate a fetcher plugin for each scheme.  And you will be 
> able to load custom fetcher plugins for additional schemes.
> TLDR: no "nerfing" and less hackiness :)
> 
> On Tue, May 10, 2016 at 12:58 PM, Briant, James 
> > wrote:
>> This is the mesos latest documentation:
>> 
>> If the requested URI is based on some other protocol, then the fetcher tries 
>> to utilise a local Hadoop client and hence supports any protocol supported 
>> by the Hadoop client, e.g., HDFS, S3. See the slave configuration 
>> documentation  
>> for how to configure the slave with a path to the Hadoop client. [emphasis 
>> added]
>> 
>> What you are saying is that dcos simply wont install hadoop on agents?
>> 
>> Next question then: will you be nerfing fetcher.cpp, or will I be able to 
>> install hadoop on the agents myself, such that mesos will recognize s3a?
>> 
>> 
>> From: Joseph Wu >
>> Reply-To: "user@mesos.apache.org " 
>> >
>> Date: Tuesday, May 10, 2016 at 12:20 PM
>> To: user >
>> 
>> Subject: Re: Enable s3a for fetcher
>> 
>> Mesos does not explicitly support HDFS and S3.  Rather, Mesos will assume 
>> you have a hadoop binary and use it (blindly) for certain types of URIs.  If 
>> the hadoop binary is not present, the mesos-fetcher will fail to fetch your 
>> HDFS or S3 URIs.
>> 
>> Mesos does not ship/package hadoop, so these URIs are not expected to work 
>> out of the box (for plain Mesos distributions).  In all cases, the operator 
>> must preconfigure hadoop on each node (similar to how Docker in Mesos works).
>> 
>> Here's the epic tracking the modularization of the mesos-fetcher (I estimate 
>> it'll be done by 0.30):
>> https://issues.apache.org/jira/browse/MESOS-3918 
>> 
>> 
>> ^ Once done, it should be easier to plug in more fetchers, such as one for 
>> your use-case.
>> 
>> On Tue, May 10, 2016 at 11:21 AM, Briant, James wrote:
>>> I’m happy to have a default IAM role on the box that can read-only fetch from 
>>> my s3 bucket. s3a gets the credentials from AWS instance metadata. It works.
>>> 
>>> If hadoop is gone, does that mean that hdfs: URIs don’t work either?
>>> 
>>> Are you saying dcos and mesos are diverging? Mesos explicitly supports hdfs 
>>> and s3.
>>> 
>>> In the absence of S3, how do you propose I make large binaries available to 
>>> my cluster, and only to my cluster, on AWS?
>>> 
>>> Jamie
>>> 
>>> From: Cody Maloney
>>> Reply-To: "user@mesos.apache.org"
>>> Date: Tuesday, May 10, 2016 at 10:58 AM
>>> To: "user@mesos.apache.org"
>>> Subject: Re: Enable s3a for fetcher
>>> 
>>> The s3 fetcher stuff inside of DC/OS is not supported. The `hadoop` binary 
>>> has been entirely removed from DC/OS 1.8 already. There have been various 
>>> proposals to make it so the mesos fetcher is much more pluggable / extensible

Re: Marathon scaling application

2016-05-11 Thread Ken Sipe
It is hard to say with the information provided.  I would check the slave log 
on the failing node; I suspect the failure is recorded there.

Otherwise, more information is necessary:
1. the marathon job (did you launch with a json file? that would be helpful; 
   see the sketch below for the sort of definition I mean)
2. the slave logs

It would also be useful to know:
1. the version of mesos and marathon
2. what OS is on the nodes
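
For reference, a minimal sketch of the kind of app definition that helps; every 
value here is an illustrative placeholder, not your actual app:

  {
    "id": "/my-app",
    "cmd": "sleep 3600",
    "instances": 5,
    "cpus": 0.5,
    "mem": 256
  }

Posting it with something like

  curl -X POST http://<marathon-host>:8080/v2/apps \
       -H 'Content-Type: application/json' -d @my-app.json

keeps the full definition alongside the logs, which makes failures much easier 
to reason about.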

ken

> On May 11, 2016, at 3:10 AM, suruchi.kum...@accenture.com wrote:
> 
> I have a problem scaling applications through Marathon.
>  
> I have a setup of two slave nodes. The first slave node has CPU=1 and RAM=2GB, 
> and the second node has CPU=4 and RAM=8GB.
>  
> It is able to scale to a maximum of 5 instances on the first node, but when I 
> tried scaling further, the host changed to the second slave node, the task 
> failed to start, and the error in the debug section of the Marathon UI shows 
> "Abnormal executor termination".
>  
> I would like to know why it is not getting scheduled on the other slave node.
>  
> Can you please help me with this issue?
>  
> Thanks
> 
> 


Re: How is the OS X environment created with Mesos

2016-05-11 Thread DiGiorgio, Mr. Rinaldo S.

On May 5, 2016, at 13:28, haosdent wrote:

>There is no explicit statement about what Mesos means when it runs a task as 
>some other user.
I think this just ensures that the running user of the task is the user you 
specify. In Mesos, it just calls [setuid](http://linux.die.net/man/2/setuid) to 
change the user; it does not execute anything like the user's bashrc script.
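In shell terms, the difference is roughly the one between a plain su and a 
login su (illustrative user name):

  su someuser -c 'env'      # uid changes; no login scripts (~/.profile etc.) run
  su - someuser -c 'env'    # login shell: profile scripts run, fresh login env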

I have been unable to solve this problem for the last few days. I am wondering 
if you have any ideas.



When Mesos starts a task on an OSX machine, the task is run with setuid to the 
user I have asked for.  When the task runs as that user, I cannot get that user 
to have a default login keychain.  I want to initialize the environment so that 
the user has something that looks like this:

 existinguser$ security login-keychain
     "/Users/rinaldo/Library/Keychains/login.keychain"


I have tried many options to create the above keychain for the other user that 
is running in a process that was created by mesos and changed to that user with 
setuid.

I understand that is likely not a Mesos issue. I am hoping someone on this 
alias has come across this issue or something similar.  I have tried the 
following and they have all failed.

su -c   as existinguser

/bin/login as existinguser
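
Roughly the kind of invocations I mean (user name, empty keychain password, and 
paths are illustrative):

  su existinguser -c 'security create-keychain -p "" /Users/existinguser/Library/Keychains/login.keychain'
  su existinguser -c 'security login-keychain -d user -s /Users/existinguser/Library/Keychains/login.keychain'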

OSX is not open source, so it is difficult to understand what it does to create 
a user environment.  The “security” application has many options to create 
keychains, but when I use those options the keychains end up in

    "/Library/Keychains/System.keychain"


I have not investigated how a user is able to create a keychain in 
System.keychain when running as a user in a Mesos-created process.


Rinaldo




On Thu, May 5, 2016 at 7:41 PM, DiGiorgio, Mr. Rinaldo S. wrote:
Hi,

Recently I noticed that the Mesos Jenkins plugin supports the setting 
of environment variables. Somewhere between 0.26 and 0.28.1, settings like

USER=
HOME=

were required to get things to work the way they had worked. I have 
been able to set the environment this way but I have some concerns about it.
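
For example, something along these lines (user name and path are purely 
illustrative):

  USER=builduser
  HOME=/Users/builduser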

There is no explicit statement about what Mesos means when it runs a 
task as some other user.  Clearly it is not running some of the scripts 
normally run during login.  This was a constant source of confusion with 
Jenkins. If someone can state exactly what is done to create the user 
environment on each platform and how it differs from the others, it will save 
countless hours of debugging IMO. I realize OSX is an odd system -- Linux at 
times, Apple-specific at times in areas that conflict with Linux, but this will 
only get more complicated when Windows agents become available.



Rinaldo



--
Best Regards,
Haosdent Huang



RE: Enable s3a for fetcher

2016-05-11 Thread Aaron Carey
We'd be very excited to see a pluggable mesos fetcher!


--

Aaron Carey
Production Engineer - Cloud Pipeline
Industrial Light & Magic
London
020 3751 9150


From: Ken Sipe [kens...@gmail.com]
Sent: 11 May 2016 08:40
To: user@mesos.apache.org
Subject: Re: Enable s3a for fetcher
