Re: Anybody hiring mess experience engineers?

2017-02-04 Thread Stephen Boesch
Please take job inquiries/offers off the main channel. Thanks.

2017-02-04 12:19 GMT-08:00 Vaibhav Khanduja :

> Thanks Brock.
>
> Since I am based in Santa Clara, CA, I was wondering if anything is
> located locally. The skills you need, though, definitely match mine – Spark,
> HPC, etc.
>
> From: Brock Palen 
> Date: Saturday, February 4, 2017 at 11:21 AM
> To: Vaibhav Khanduja 
> Subject: Re: Anybody hiring mess experience engineers?
>
> The posting is closed, but we have not interviewed yet.  I have attached it.
>
> We have a 5000-core ARM cluster that needs Mesos deployed on it, along with
> tools like Presto, HBase, and Spark.
>
>
> Brock Palen
> www.umich.edu/~brockp
> Director Advanced Research Computing - TS
> XSEDE Campus Champion
> bro...@umich.edu
> (734) 936-1985
>
> On Sat, Feb 4, 2017 at 2:07 PM, Vaibhav Khanduja <
> vaibhavkhand...@gmail.com> wrote:
>
>>
>>
>


Re: what's the pronunciation of "MESOS"?

2016-08-09 Thread Stephen Boesch
@Jared / Yu Wei:  Mesos is essentially a Spanish word, so MAY-sos would
travel well.

2016-08-09 11:35 GMT-07:00 Ken Sipe :

> Apparently it depends on whether you are British or not :)
> http://dictionary.cambridge.org/us/pronunciation/english/the-mesosphere
>
> apparently the absence of “phere” changes everything:
> https://www.howtopronounce.com/mesos/
>
> and for those looking for which percentile they are in:
> http://www.basilmarket.com/How-do-You-pronounce-mesos-Thread-b5eAL-1
>
>
> On Aug 9, 2016, at 12:30 PM, Charles Allen 
> wrote:
>
> My wife thought I was crazy sitting here mumbling "mAY-sohs" "MEH-sohs"
> "Mee-sohs"
>
> On Mon, Aug 8, 2016 at 9:22 PM Yu Wei  wrote:
>
>> Thanks Joe.
>>
>> It's really interesting.
>>
>>
>> Jared, (韦煜)
>> Software developer
>> Interested in open source software, big data, Linux
>>
>>
>> --
>> *From:* Joseph Jacks 
>> *Sent:* Tuesday, August 9, 2016 10:53 AM
>> *To:* user@mesos.apache.org
>> *Subject:* Re: what's the pronunciation of "MESOS"?
>>
>> "MAY-zoss" is most common and correct.
>>
>> "MEH-zoss" is second most common and also correct, I think.
>>
>> "MEE-zoss" is third most common, but incorrect.
>>
>> JJ.
>>
>> On Aug 8, 2016, at 10:48 PM, Yu Wei  wrote:
>>
>>
>> Thx,
>>
>> Jared, (韦煜)
>> Software developer
>> Interested in open source software, big data, Linux
>>
>>
>


Re: Help interpreting output from running java test-framework example

2015-09-17 Thread Stephen Boesch
Compared to YARN, Mesos is just faster: Mesos has a smaller startup time,
and the delay between tasks is smaller. The run times for a 100GB terasort
tended towards a 110-second median on Mesos vs. about double that on YARN.

Unfortunately we require mature multi-tenancy/isolation/queues support,
which is still in the initial stages of WIP for Mesos. So we will need to
use YARN for the near and likely medium term.



2015-09-17 15:52 GMT-07:00 Marco Massenzio <ma...@mesosphere.io>:

> Hey Stephen,
>
>> Spark on Mesos is twice as fast as YARN on our 20-node cluster. In
>> addition, Mesos is handling data sizes that YARN simply dies on, but
>> Mesos is still just taking linearly increased time compared to smaller
>> data sizes.
>
>
> Obviously delighted to hear that, BUT me not much like "but" :)
> I've added Tim who is one of the main contributors to our Mesos/Spark
> bindings, and it would be great to hear your use case/experience and find
> out whether we can improve on that front too!
>
> As the case may be, we could also jump on a hangout if it makes
> conversation easier/faster.
>
> Cheers,
>
> *Marco Massenzio*
>
> *Distributed Systems Engineer*
> http://codetrips.com
>
> On Wed, Sep 9, 2015 at 1:33 PM, Stephen Boesch <java...@gmail.com> wrote:
>
>> Thanks Vinod. I went back to see the logs and found nothing interesting.
>> However, in the process I found that my Spark port was pointing to 7077
>> instead of 5050. After re-running... Spark on Mesos worked!
>>
>> Spark on Mesos is twice as fast as YARN on our 20-node cluster. In
>> addition, Mesos is handling data sizes that YARN simply dies on, but
>> Mesos is still just taking linearly increased time compared to smaller
>> data sizes.
>>
>> We have significant additional work to incorporate Mesos into operations
>> and support, but given the strong performance and stability characteristics
>> we are initially seeing here, that effort is likely to get underway.
>>
>>
>>
>> 2015-09-09 12:54 GMT-07:00 Vinod Kone <vinodk...@gmail.com>:
>>
>>> sounds like it. can you see what the slave/agent and executor logs say?
>>>
>>> On Tue, Sep 8, 2015 at 11:46 AM, Stephen Boesch <java...@gmail.com>
>>> wrote:
>>>
>>>>
>>>> I am in the process of learning how to run a mesos cluster with the
>>>> intent for it to be the resource manager for Spark.  As a small step in
>>>> that direction a basic test of mesos was performed, as suggested by the
>>>> Mesos Getting Started page.
>>>>
>>>> In the following output we see tasks launched and resources offered on
>>>> a 20 node cluster:
>>>>
>>>> [stack@yarnmaster-8245 build]$ ./src/examples/java/test-framework
>>>> $(hostname -s):5050
>>>> I0908 18:40:10.900964 31959 sched.cpp:157] Version: 0.23.0
>>>> I0908 18:40:10.918957 32000 sched.cpp:254] New master detected at
>>>> master@10.64.204.124:5050
>>>> I0908 18:40:10.921525 32000 sched.cpp:264] No credentials provided.
>>>> Attempting to register without authentication
>>>> I0908 18:40:10.928963 31997 sched.cpp:448] Framework registered with
>>>> 20150908-182014-2093760522-5050-15313-
>>>> Registered! ID = 20150908-182014-2093760522-5050-15313-
>>>> Received offer 20150908-182014-2093760522-5050-15313-O0 with cpus: 16.0
>>>> and mem: 119855.0
>>>> Launching task 0 using offer 20150908-182014-2093760522-5050-15313-O0
>>>> Launching task 1 using offer 20150908-182014-2093760522-5050-15313-O0
>>>> Launching task 2 using offer 20150908-182014-2093760522-5050-15313-O0
>>>> Launching task 3 using offer 20150908-182014-2093760522-5050-15313-O0
>>>> Launching task 4 using offer 20150908-182014-2093760522-5050-15313-O0
>>>> Received offer 20150908-182014-2093760522-5050-15313-O1 with cpus: 16.0
>>>> and mem: 119855.0
>>>> Received offer 20150908-182014-2093760522-5050-15313-O2 with cpus: 16.0
>>>> and mem: 119855.0
>>>> Received offer 20150908-182014-2093760522-5050-15313-O3 with cpus: 16.0
>>>> and mem: 119855.0
>>>> Received offer 20150908-182014-2093760522-5050-15313-O4 with cpus: 16.0
>>>> and mem: 119855.0
>>>> Received offer 20150908-182014-2093760522-5050-15313-O5 with cpus: 16.0
>>>> and mem: 119855.0
>>>> Received offer 20150908-182014-2093760522-5050-15313-O6 with cpus: 16.0
>>>> and mem: 119855.0
>>>> Received offer 20150908-

Documentation for Multi-Tenancy support

2015-09-15 Thread Stephen Boesch
I was unable to locate that documentation on the main doc site:

http://mesos.apache.org/documentation/latest/

Possibly the terminology is different for Mesos? Or is there a different
approach?

We are looking for policies/profiles that can be applied to either groups
of users or a role account and that provide quotas on cluster resources.  In
YARN these are typically embodied as queues.

Pointers on how to translate the Yarn constructs to Mesos would also be
helpful.

thanks!
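For anyone mapping YARN constructs onto Mesos, the closest analogue to YARN queues is Mesos roles with weights (and, in later releases, quota). Below is a hedged sketch only: the flag names come from the Mesos master documentation, but the role names ("analytics", "etl"), the weights, and the addresses are invented for illustration, and exact flag availability depends on your Mesos version.

```shell
# Sketch: approximating YARN-style queues with Mesos roles and weights.
# Role names and weight values here are illustrative, not prescriptive.
mesos-master \
  --zk=zk://10.0.0.1:2181/mesos \
  --quorum=1 \
  --work_dir=/var/lib/mesos \
  --roles=analytics,etl \
  --weights=analytics=3,etl=1

# A framework then registers under a role; e.g. Spark's Mesos scheduler
# can be pointed at a role via its configuration (property assumed):
#   spark.mesos.role=analytics
```

The weights bias the allocator's fair-share decisions between roles, which is the rough equivalent of YARN queue capacities.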


Re: Help interpreting output from running java test-framework example

2015-09-09 Thread Stephen Boesch
Thanks Vinod. I went back to see the logs and found nothing interesting.
However, in the process I found that my Spark port was pointing to 7077
instead of 5050. After re-running... Spark on Mesos worked!

Spark on Mesos is twice as fast as YARN on our 20-node cluster. In
addition, Mesos is handling data sizes that YARN simply dies on, but
Mesos is still just taking linearly increased time compared to smaller
data sizes.

We have significant additional work to incorporate Mesos into operations
and support, but given the strong performance and stability characteristics
we are initially seeing here, that effort is likely to get underway.
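For reference, the port fix described above comes down to the Spark master URL: port 7077 is the standalone Spark master, while the Mesos master listens on 5050 and takes a mesos:// scheme. A minimal sketch; the host address and jar path are placeholders:

```shell
# Wrong for this setup (standalone Spark master): spark://<host>:7077
# Right for Mesos: mesos://<mesos-master-host>:5050
spark-submit \
  --master mesos://10.64.204.124:5050 \
  --class org.apache.spark.examples.SparkPi \
  /path/to/spark-examples.jar 100
```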



2015-09-09 12:54 GMT-07:00 Vinod Kone <vinodk...@gmail.com>:

> sounds like it. can you see what the slave/agent and executor logs say?
>
> On Tue, Sep 8, 2015 at 11:46 AM, Stephen Boesch <java...@gmail.com> wrote:
>
>>
>> I am in the process of learning how to run a mesos cluster with the
>> intent for it to be the resource manager for Spark.  As a small step in
>> that direction a basic test of mesos was performed, as suggested by the
>> Mesos Getting Started page.
>>
>> In the following output we see tasks launched and resources offered on a
>> 20 node cluster:
>>
>> [stack@yarnmaster-8245 build]$ ./src/examples/java/test-framework
>> $(hostname -s):5050
>> I0908 18:40:10.900964 31959 sched.cpp:157] Version: 0.23.0
>> I0908 18:40:10.918957 32000 sched.cpp:254] New master detected at
>> master@10.64.204.124:5050
>> I0908 18:40:10.921525 32000 sched.cpp:264] No credentials provided.
>> Attempting to register without authentication
>> I0908 18:40:10.928963 31997 sched.cpp:448] Framework registered with
>> 20150908-182014-2093760522-5050-15313-
>> Registered! ID = 20150908-182014-2093760522-5050-15313-
>> Received offer 20150908-182014-2093760522-5050-15313-O0 with cpus: 16.0
>> and mem: 119855.0
>> Launching task 0 using offer 20150908-182014-2093760522-5050-15313-O0
>> Launching task 1 using offer 20150908-182014-2093760522-5050-15313-O0
>> Launching task 2 using offer 20150908-182014-2093760522-5050-15313-O0
>> Launching task 3 using offer 20150908-182014-2093760522-5050-15313-O0
>> Launching task 4 using offer 20150908-182014-2093760522-5050-15313-O0
>> Received offer 20150908-182014-2093760522-5050-15313-O1 with cpus: 16.0
>> and mem: 119855.0
>> Received offer 20150908-182014-2093760522-5050-15313-O2 with cpus: 16.0
>> and mem: 119855.0
>> Received offer 20150908-182014-2093760522-5050-15313-O3 with cpus: 16.0
>> and mem: 119855.0
>> Received offer 20150908-182014-2093760522-5050-15313-O4 with cpus: 16.0
>> and mem: 119855.0
>> Received offer 20150908-182014-2093760522-5050-15313-O5 with cpus: 16.0
>> and mem: 119855.0
>> Received offer 20150908-182014-2093760522-5050-15313-O6 with cpus: 16.0
>> and mem: 119855.0
>> Received offer 20150908-182014-2093760522-5050-15313-O7 with cpus: 16.0
>> and mem: 119855.0
>> Received offer 20150908-182014-2093760522-5050-15313-O8 with cpus: 16.0
>> and mem: 119855.0
>> Received offer 20150908-182014-2093760522-5050-15313-O9 with cpus: 16.0
>> and mem: 119855.0
>> Received offer 20150908-182014-2093760522-5050-15313-O10 with cpus: 16.0
>> and mem: 119855.0
>> Received offer 20150908-182014-2093760522-5050-15313-O11 with cpus: 16.0
>> and mem: 119855.0
>> Received offer 20150908-182014-2093760522-5050-15313-O12 with cpus: 16.0
>> and mem: 119855.0
>> Received offer 20150908-182014-2093760522-5050-15313-O13 with cpus: 16.0
>> and mem: 119855.0
>> Received offer 20150908-182014-2093760522-5050-15313-O14 with cpus: 16.0
>> and mem: 119855.0
>> Received offer 20150908-182014-2093760522-5050-15313-O15 with cpus: 16.0
>> and mem: 119855.0
>> Received offer 20150908-182014-2093760522-5050-15313-O16 with cpus: 16.0
>> and mem: 119855.0
>> Received offer 20150908-182014-2093760522-5050-15313-O17 with cpus: 16.0
>> and mem: 119855.0
>> Received offer 20150908-182014-2093760522-5050-15313-O18 with cpus: 16.0
>> and mem: 119855.0
>> Received offer 20150908-182014-2093760522-5050-15313-O19 with cpus: 16.0
>> and mem: 119855.0
>> Received offer 20150908-182014-2093760522-5050-15313-O20 with cpus: 16.0
>> and mem: 119855.0
>> Status update: task 0 is in state TASK_LOST
>> Aborting because task 0 is in unexpected state TASK_LOST with reason
>> 'REASON_EXECUTOR_TERMINATED' from source 'SOURCE_SLAVE' with message
>> 'Executor terminated'
>> I0908 18:40:12.466081 31996 sched.cpp:1625] Asked to abort the driver
>> I0908 18:40:12.467051 31996 sched.cpp:861] Aborting framework
>> '20150908-182014-2093760522-5050-15313-'
>> I0908 18:40:12.468053 31959 sched.cpp:1591] Asked to stop the driver
>> I0908 18:40:12.468683 31991 sched.cpp:835] Stopping framework
>> '20150908-182014-2093760522-5050-15313-'
>>
>>
>> Why did the task transition to TASK_LOST ?   Is there a misconfiguration
>> on the cluster?
>>
>
>


Help interpreting output from running java test-framework example

2015-09-08 Thread Stephen Boesch
I am in the process of learning how to run a mesos cluster with the intent
for it to be the resource manager for Spark.  As a small step in that
direction a basic test of mesos was performed, as suggested by the Mesos
Getting Started page.

In the following output we see tasks launched and resources offered on a 20
node cluster:

[stack@yarnmaster-8245 build]$ ./src/examples/java/test-framework
$(hostname -s):5050
I0908 18:40:10.900964 31959 sched.cpp:157] Version: 0.23.0
I0908 18:40:10.918957 32000 sched.cpp:254] New master detected at
master@10.64.204.124:5050
I0908 18:40:10.921525 32000 sched.cpp:264] No credentials provided.
Attempting to register without authentication
I0908 18:40:10.928963 31997 sched.cpp:448] Framework registered with
20150908-182014-2093760522-5050-15313-
Registered! ID = 20150908-182014-2093760522-5050-15313-
Received offer 20150908-182014-2093760522-5050-15313-O0 with cpus: 16.0 and
mem: 119855.0
Launching task 0 using offer 20150908-182014-2093760522-5050-15313-O0
Launching task 1 using offer 20150908-182014-2093760522-5050-15313-O0
Launching task 2 using offer 20150908-182014-2093760522-5050-15313-O0
Launching task 3 using offer 20150908-182014-2093760522-5050-15313-O0
Launching task 4 using offer 20150908-182014-2093760522-5050-15313-O0
Received offer 20150908-182014-2093760522-5050-15313-O1 with cpus: 16.0 and
mem: 119855.0
Received offer 20150908-182014-2093760522-5050-15313-O2 with cpus: 16.0 and
mem: 119855.0
Received offer 20150908-182014-2093760522-5050-15313-O3 with cpus: 16.0 and
mem: 119855.0
Received offer 20150908-182014-2093760522-5050-15313-O4 with cpus: 16.0 and
mem: 119855.0
Received offer 20150908-182014-2093760522-5050-15313-O5 with cpus: 16.0 and
mem: 119855.0
Received offer 20150908-182014-2093760522-5050-15313-O6 with cpus: 16.0 and
mem: 119855.0
Received offer 20150908-182014-2093760522-5050-15313-O7 with cpus: 16.0 and
mem: 119855.0
Received offer 20150908-182014-2093760522-5050-15313-O8 with cpus: 16.0 and
mem: 119855.0
Received offer 20150908-182014-2093760522-5050-15313-O9 with cpus: 16.0 and
mem: 119855.0
Received offer 20150908-182014-2093760522-5050-15313-O10 with cpus: 16.0
and mem: 119855.0
Received offer 20150908-182014-2093760522-5050-15313-O11 with cpus: 16.0
and mem: 119855.0
Received offer 20150908-182014-2093760522-5050-15313-O12 with cpus: 16.0
and mem: 119855.0
Received offer 20150908-182014-2093760522-5050-15313-O13 with cpus: 16.0
and mem: 119855.0
Received offer 20150908-182014-2093760522-5050-15313-O14 with cpus: 16.0
and mem: 119855.0
Received offer 20150908-182014-2093760522-5050-15313-O15 with cpus: 16.0
and mem: 119855.0
Received offer 20150908-182014-2093760522-5050-15313-O16 with cpus: 16.0
and mem: 119855.0
Received offer 20150908-182014-2093760522-5050-15313-O17 with cpus: 16.0
and mem: 119855.0
Received offer 20150908-182014-2093760522-5050-15313-O18 with cpus: 16.0
and mem: 119855.0
Received offer 20150908-182014-2093760522-5050-15313-O19 with cpus: 16.0
and mem: 119855.0
Received offer 20150908-182014-2093760522-5050-15313-O20 with cpus: 16.0
and mem: 119855.0
Status update: task 0 is in state TASK_LOST
Aborting because task 0 is in unexpected state TASK_LOST with reason
'REASON_EXECUTOR_TERMINATED' from source 'SOURCE_SLAVE' with message
'Executor terminated'
I0908 18:40:12.466081 31996 sched.cpp:1625] Asked to abort the driver
I0908 18:40:12.467051 31996 sched.cpp:861] Aborting framework
'20150908-182014-2093760522-5050-15313-'
I0908 18:40:12.468053 31959 sched.cpp:1591] Asked to stop the driver
I0908 18:40:12.468683 31991 sched.cpp:835] Stopping framework
'20150908-182014-2093760522-5050-15313-'


Why did the task transition to TASK_LOST?  Is there a misconfiguration on
the cluster?
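As the reply thread suggests, the agent (slave) and executor logs are the place to look for TASK_LOST with REASON_EXECUTOR_TERMINATED. A sketch of where those logs typically live, assuming the agent was started with --log_dir=/var/log/mesos and the default --work_dir=/tmp/mesos; the IDs in the sandbox path are placeholders:

```shell
# Agent log: look for the task/executor termination messages
grep -i -e 'task 0' -e 'executor' /var/log/mesos/mesos-slave.INFO | tail -n 50

# Executor sandbox: the executor's stdout/stderr live under the work_dir
ls /tmp/mesos/slaves/<agent-id>/frameworks/<framework-id>/executors/<executor-id>/runs/latest/
```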


Re: Basic installation question

2015-09-05 Thread Stephen Boesch
Yes I had started the slaves as

service mesos-slave start

But I had not done it the correct way on the master, which is supposed to be:

service mesos-master start

The slaves do appear after having made that correction: thanks.
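For completeness, the working configuration amounts to pointing both roles at the same ZooKeeper URL and starting each with its own service script. A sketch assuming the Mesosphere packages and their default config paths (/etc/mesos/zk, /etc/mesos-master/quorum); the IP is the redacted one from this thread:

```shell
# On the master node
echo "zk://10.xx.xx.124:2181/mesos" | sudo tee /etc/mesos/zk
echo 1 | sudo tee /etc/mesos-master/quorum
sudo service mesos-master start

# On each slave node
echo "zk://10.xx.xx.124:2181/mesos" | sudo tee /etc/mesos/zk
sudo service mesos-slave start
```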


2015-09-05 14:55 GMT-07:00 Marco Massenzio <ma...@mesosphere.io>:

> Stephen:
>
> Klaus is correct: you are starting the Master in "standalone" mode, not
> with ZooKeeper support. You need to add the --zk=zk://10.xx.xx.124:2181/mesos
> --quorum=1 options (at the very least).
>
> As you correctly noted, the /mesos znode is empty and thus the agent
> nodes cannot find the elected Master leader (also, if you are running
> more than one Master, they won't 'know' about each other and won't be able
> to elect a leader).
>
> To check that your settings work, you can (a) look in Master logs (it will
> log a lot of info when connecting to ZK) and (b) see that under /mesos a
> number of json.info_nn nodes will appear (whose contents are JSON so
> you can double check that the contents make sense).
>
> You can find more info here[0].
>
> [0]
> http://codetrips.com/2015/08/16/apache-mesos-leader-master-discovery-using-zookeeper-part-2/
>
> *Marco Massenzio*
>
> *Distributed Systems Engineer*
> http://codetrips.com
>
> On Fri, Sep 4, 2015 at 5:33 PM, Stephen Boesch <java...@gmail.com> wrote:
>
>>
>> I installed using yum -y install mesos. That did work.
>>
>> Now the master and slaves do not see each other.
>>
>>
>> Here is the master:
>> $ ps -ef | grep mesos | grep -v grep
>> stack30236 17902  0 00:09 pts/400:00:04
>> /mnt/mesos/build/src/.libs/lt-mesos-master --work_dir=/tmp/mesos
>> --ip=10.xx.xx.124
>>
>>
>> Here is one of the 20 slaves:
>>
>>  ps -ef | grep mesos | grep -v grep
>> root 26086 1  0 00:10 ?00:00:00 /usr/sbin/mesos-slave
>> --master=zk://10.xx.xx.124:2181/mesos --log_dir=/var/log/mesos
>> root 26092 26086  0 00:10 ?00:00:00 logger -p user.info -t
>> mesos-slave[26086]
>> root 26093 26086  0 00:10 ?00:00:00 logger -p user.err -t
>> mesos-slave[26086]
>>
>>
>> Note the slave and master are on correct same ip address
>>
>> The /etc/mesos/zk seems to be set properly : and I do see the /mesos node
>> in zookeeper is updated after restarting the master
>>
>> However the zookeeper node is empty:
>>
>> [zk: localhost:2181(CONNECTED) 10] ls /mesos
>> []
>>
>> The node is world accessible so no permission issue:
>>
>> [zk: localhost:2181(CONNECTED) 12] getAcl /mesos
>> 'world,'anyone
>> : cdrwa
>>
>> Why is the zookeeper node empty?  Is this the reason the  master and
>> slaves are not connecting?
>>
>> 2015-09-04 14:56 GMT-07:00 craig w <codecr...@gmail.com>:
>>
>>> No problem, they have a "downloads" link in their menu:
>>> https://mesosphere.com/downloads/
>>> On Sep 4, 2015 5:43 PM, "Stephen Boesch" <java...@gmail.com> wrote:
>>>
>>>> @Craig: that is an incomplete answer, given that such links are not
>>>> presented in an obvious manner. Maybe you managed to find a link on
>>>> their site that provides prebuilt binaries for CentOS 7; if so, please share it.
>>>>
>>>>
>>>> I had previously found a link on their site for prebuilt binaries, but
>>>> it is based on CDH4 (which is not possible for my company). It is also
>>>> old.
>>>>
>>>> https://docs.mesosphere.com/tutorials/install_centos_rhel/
>>>>
>>>>
>>>> 2015-09-04 14:27 GMT-07:00 craig w <codecr...@gmail.com>:
>>>>
>>>>> Mesosphere has packages prebuilt, go to their site to find how to
>>>>> install
>>>>> On Sep 4, 2015 5:11 PM, "Stephen Boesch" <java...@gmail.com> wrote:
>>>>>
>>>>>>
>>>>>> After following the directions here:
>>>>>> http://mesos.apache.org/gettingstarted/
>>>>>>
>>>>>> Which for centos7 includes the following:
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>   # Change working directory.
>>>>>> $ cd mesos
>>>>>>
>>>>>> # Bootstrap (Only required if building from git repository).
>>>>>> $ ./bootstrap
>>>>>>
>>>>>> # Configure and build.
>>

Basic installation question

2015-09-04 Thread Stephen Boesch
After following the directions here:
http://mesos.apache.org/gettingstarted/

Which for centos7 includes the following:




  # Change working directory.
$ cd mesos

# Bootstrap (Only required if building from git repository).
$ ./bootstrap

# Configure and build.
$ mkdir build
$ cd build
$ ../configure
$ make

In order to speed up the build and reduce verbosity of the logs, you can
append -j  V=0 to make.

# Run test suite.
$ make check

# Install (Optional).
$ make install



But the installation is not correct afterwards: here is the bin directory:

$ ll bin
total 92
-rw-r--r--.  1 stack stack 1769 Jul 17 23:14 valgrind-mesos-tests.sh.in
-rw-r--r--.  1 stack stack 1769 Jul 17 23:14 valgrind-mesos-slave.sh.in
-rw-r--r--.  1 stack stack 1772 Jul 17 23:14 valgrind-mesos-master.sh.in
-rw-r--r--.  1 stack stack 1769 Jul 17 23:14 valgrind-mesos-local.sh.in
-rw-r--r--.  1 stack stack 1026 Jul 17 23:14 mesos-tests.sh.in
-rw-r--r--.  1 stack stack  901 Jul 17 23:14 mesos-tests-flags.sh.in
-rw-r--r--.  1 stack stack 1019 Jul 17 23:14 mesos-slave.sh.in
-rw-r--r--.  1 stack stack 1721 Jul 17 23:14 mesos-slave-flags.sh.in
-rw-r--r--.  1 stack stack 1366 Jul 17 23:14 mesos.sh.in
-rw-r--r--.  1 stack stack 1026 Jul 17 23:14 mesos-master.sh.in
-rw-r--r--.  1 stack stack  858 Jul 17 23:14 mesos-master-flags.sh.in
-rw-r--r--.  1 stack stack 1023 Jul 17 23:14 mesos-local.sh.in
-rw-r--r--.  1 stack stack  935 Jul 17 23:14 mesos-local-flags.sh.in
-rw-r--r--.  1 stack stack 1466 Jul 17 23:14 lldb-mesos-tests.sh.in
-rw-r--r--.  1 stack stack 1489 Jul 17 23:14 lldb-mesos-slave.sh.in
-rw-r--r--.  1 stack stack 1492 Jul 17 23:14 lldb-mesos-master.sh.in
-rw-r--r--.  1 stack stack 1489 Jul 17 23:14 lldb-mesos-local.sh.in
-rw-r--r--.  1 stack stack 1498 Jul 17 23:14 gdb-mesos-tests.sh.in
-rw-r--r--.  1 stack stack 1527 Jul 17 23:14 gdb-mesos-slave.sh.in
-rw-r--r--.  1 stack stack 1530 Jul 17 23:14 gdb-mesos-master.sh.in
-rw-r--r--.  1 stack stack 1521 Jul 17 23:14 gdb-mesos-local.sh.in
drwxr-xr-x.  2 stack stack 4096 Jul 17 23:21 .
drwxr-xr-x. 11 stack stack 4096 Sep  4 20:08 ..

So... two things:

(a) What is missing from the installation instructions?

(b) Is there an *up-to-date* rpm/yum installation for CentOS 7?
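One thing worth checking: after `make` completes, the runnable wrapper scripts are generated under build/bin, while the top-level bin/ holds only the *.sh.in templates listed above. A sketch of running the built binaries locally, with illustrative values for the IP and work_dir:

```shell
# The built, runnable scripts live under build/bin, not the top-level bin/.
cd mesos/build
./bin/mesos-master.sh --ip=127.0.0.1 --work_dir=/tmp/mesos

# In another terminal, start a slave against it:
./bin/mesos-slave.sh --master=127.0.0.1:5050 --work_dir=/tmp/mesos
```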


Re: Basic installation question

2015-09-04 Thread Stephen Boesch
@Craig: that is an incomplete answer, given that such links are not
presented in an obvious manner. Maybe you managed to find a link on
their site that provides prebuilt binaries for CentOS 7; if so, please share it.


I had previously found a link on their site for prebuilt binaries, but it is
based on CDH4 (which is not possible for my company). It is also old.

https://docs.mesosphere.com/tutorials/install_centos_rhel/


2015-09-04 14:27 GMT-07:00 craig w <codecr...@gmail.com>:

> Mesosphere has packages prebuilt, go to their site to find how to install
> On Sep 4, 2015 5:11 PM, "Stephen Boesch" <java...@gmail.com> wrote:
>
>>
>> After following the directions here:
>> http://mesos.apache.org/gettingstarted/
>>
>> Which for centos7 includes the following:
>>
>>
>>
>>
>>   # Change working directory.
>> $ cd mesos
>>
>> # Bootstrap (Only required if building from git repository).
>> $ ./bootstrap
>>
>> # Configure and build.
>> $ mkdir build
>> $ cd build
>> $ ../configure
>> $ make
>>
>> In order to speed up the build and reduce verbosity of the logs, you can
>> append-j  V=0 to make.
>>
>> # Run test suite.
>> $ make check
>>
>> # Install (Optional).
>> $ make install
>>
>>
>>
>> But the installation is not correct afterwards: here is the bin directory:
>>
>> $ ll bin
>> total 92
>> -rw-r--r--.  1 stack stack 1769 Jul 17 23:14 valgrind-mesos-tests.sh.in
>> -rw-r--r--.  1 stack stack 1769 Jul 17 23:14 valgrind-mesos-slave.sh.in
>> -rw-r--r--.  1 stack stack 1772 Jul 17 23:14 valgrind-mesos-master.sh.in
>> -rw-r--r--.  1 stack stack 1769 Jul 17 23:14 valgrind-mesos-local.sh.in
>> -rw-r--r--.  1 stack stack 1026 Jul 17 23:14 mesos-tests.sh.in
>> -rw-r--r--.  1 stack stack  901 Jul 17 23:14 mesos-tests-flags.sh.in
>> -rw-r--r--.  1 stack stack 1019 Jul 17 23:14 mesos-slave.sh.in
>> -rw-r--r--.  1 stack stack 1721 Jul 17 23:14 mesos-slave-flags.sh.in
>> -rw-r--r--.  1 stack stack 1366 Jul 17 23:14 mesos.sh.in
>> -rw-r--r--.  1 stack stack 1026 Jul 17 23:14 mesos-master.sh.in
>> -rw-r--r--.  1 stack stack  858 Jul 17 23:14 mesos-master-flags.sh.in
>> -rw-r--r--.  1 stack stack 1023 Jul 17 23:14 mesos-local.sh.in
>> -rw-r--r--.  1 stack stack  935 Jul 17 23:14 mesos-local-flags.sh.in
>> -rw-r--r--.  1 stack stack 1466 Jul 17 23:14 lldb-mesos-tests.sh.in
>> -rw-r--r--.  1 stack stack 1489 Jul 17 23:14 lldb-mesos-slave.sh.in
>> -rw-r--r--.  1 stack stack 1492 Jul 17 23:14 lldb-mesos-master.sh.in
>> -rw-r--r--.  1 stack stack 1489 Jul 17 23:14 lldb-mesos-local.sh.in
>> -rw-r--r--.  1 stack stack 1498 Jul 17 23:14 gdb-mesos-tests.sh.in
>> -rw-r--r--.  1 stack stack 1527 Jul 17 23:14 gdb-mesos-slave.sh.in
>> -rw-r--r--.  1 stack stack 1530 Jul 17 23:14 gdb-mesos-master.sh.in
>> -rw-r--r--.  1 stack stack 1521 Jul 17 23:14 gdb-mesos-local.sh.in
>> drwxr-xr-x.  2 stack stack 4096 Jul 17 23:21 .
>> drwxr-xr-x. 11 stack stack 4096 Sep  4 20:08 ..
>>
>> So .. two things:
>>
>> (a) what is missing from the installation instructions?
>>
>> (b) Is there an *up to date *rpm/yum installation for centos7?
>>
>>
>>
>>
>>
>>
>>


Re: Basic installation question

2015-09-04 Thread Stephen Boesch
Thanks Marco,
Your prior email was on-target:  i was pointing to the $mesos/bin not
$mesos/build/bin. I am moving forward now to next steps.

   Thanks also for the links to the downloads: our automated VM installs
will likely want to use those.


2015-09-04 14:51 GMT-07:00 Marco Massenzio <ma...@mesosphere.io>:

> Hey Stephen,
>
> the Mesos packages for download from Mesosphere are available here:
> https://mesosphere.com/downloads/
> (for Mesos, just click on the Getting Started button - sorry, no direct
> URL - it will show the steps to install on the supported distros using
> apt-get/yum).
>
> Those work and I obviously recommend them :)
> But I think you wanted the "full developer experience" as you pointed to
> the make steps.
>
> Also, if you haven't looked at the tutorials in a while (as you seem to
> imply in your message) I would recommend you give them another shot: we've
> been doing some work on revamping them and making them more accessible.
>
>
>
> *Marco Massenzio*
>
> *Distributed Systems Engineer*
> http://codetrips.com
>
> On Fri, Sep 4, 2015 at 2:38 PM, Stephen Boesch <java...@gmail.com> wrote:
>
>> @Craig: that is an incomplete answer, given that such links are not
>> presented in an obvious manner. Maybe you managed to find a link on
>> their site that provides prebuilt binaries for CentOS 7; if so, please share it.
>>
>>
>> I had previously found a link on their site for prebuilt binaries, but it
>> is based on CDH4 (which is not possible for my company). It is also old.
>>
>> https://docs.mesosphere.com/tutorials/install_centos_rhel/
>>
>>
>> 2015-09-04 14:27 GMT-07:00 craig w <codecr...@gmail.com>:
>>
>>> Mesosphere has packages prebuilt, go to their site to find how to install
>>> On Sep 4, 2015 5:11 PM, "Stephen Boesch" <java...@gmail.com> wrote:
>>>
>>>>
>>>> After following the directions here:
>>>> http://mesos.apache.org/gettingstarted/
>>>>
>>>> Which for centos7 includes the following:
>>>>
>>>>
>>>>
>>>>
>>>>   # Change working directory.
>>>> $ cd mesos
>>>>
>>>> # Bootstrap (Only required if building from git repository).
>>>> $ ./bootstrap
>>>>
>>>> # Configure and build.
>>>> $ mkdir build
>>>> $ cd build
>>>> $ ../configure
>>>> $ make
>>>>
>>>> In order to speed up the build and reduce verbosity of the logs, you
>>>> can append-j  V=0 to make.
>>>>
>>>> # Run test suite.
>>>> $ make check
>>>>
>>>> # Install (Optional).
>>>> $ make install
>>>>
>>>>
>>>>
>>>> But the installation is not correct afterwards: here is the bin
>>>> directory:
>>>>
>>>> $ ll bin
>>>> total 92
>>>> -rw-r--r--.  1 stack stack 1769 Jul 17 23:14 valgrind-mesos-tests.sh.in
>>>> -rw-r--r--.  1 stack stack 1769 Jul 17 23:14 valgrind-mesos-slave.sh.in
>>>> -rw-r--r--.  1 stack stack 1772 Jul 17 23:14
>>>> valgrind-mesos-master.sh.in
>>>> -rw-r--r--.  1 stack stack 1769 Jul 17 23:14 valgrind-mesos-local.sh.in
>>>> -rw-r--r--.  1 stack stack 1026 Jul 17 23:14 mesos-tests.sh.in
>>>> -rw-r--r--.  1 stack stack  901 Jul 17 23:14 mesos-tests-flags.sh.in
>>>> -rw-r--r--.  1 stack stack 1019 Jul 17 23:14 mesos-slave.sh.in
>>>> -rw-r--r--.  1 stack stack 1721 Jul 17 23:14 mesos-slave-flags.sh.in
>>>> -rw-r--r--.  1 stack stack 1366 Jul 17 23:14 mesos.sh.in
>>>> -rw-r--r--.  1 stack stack 1026 Jul 17 23:14 mesos-master.sh.in
>>>> -rw-r--r--.  1 stack stack  858 Jul 17 23:14 mesos-master-flags.sh.in
>>>> -rw-r--r--.  1 stack stack 1023 Jul 17 23:14 mesos-local.sh.in
>>>> -rw-r--r--.  1 stack stack  935 Jul 17 23:14 mesos-local-flags.sh.in
>>>> -rw-r--r--.  1 stack stack 1466 Jul 17 23:14 lldb-mesos-tests.sh.in
>>>> -rw-r--r--.  1 stack stack 1489 Jul 17 23:14 lldb-mesos-slave.sh.in
>>>> -rw-r--r--.  1 stack stack 1492 Jul 17 23:14 lldb-mesos-master.sh.in
>>>> -rw-r--r--.  1 stack stack 1489 Jul 17 23:14 lldb-mesos-local.sh.in
>>>> -rw-r--r--.  1 stack stack 1498 Jul 17 23:14 gdb-mesos-tests.sh.in
>>>> -rw-r--r--.  1 stack stack 1527 Jul 17 23:14 gdb-mesos-slave.sh.in
>>>> -rw-r--r--.  1 stack stack 1530 Jul 17 23:14 gdb-mesos-master.sh.in
>>>> -rw-r--r--.  1 stack stack 1521 Jul 17 23:14 gdb-mesos-local.sh.in
>>>> drwxr-xr-x.  2 stack stack 4096 Jul 17 23:21 .
>>>> drwxr-xr-x. 11 stack stack 4096 Sep  4 20:08 ..
>>>>
>>>> So .. two things:
>>>>
>>>> (a) what is missing from the installation instructions?
>>>>
>>>> (b) Is there an *up to date *rpm/yum installation for centos7?
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>
>