Help with the documentation is always welcome :).

Thanks!
On Jan 18, 2016 12:00 PM, "Matthew J. Loppatto" <mloppa...@keywcorp.com>
wrote:

> I was looking through more of my mesos slave stderr logs and found a
> message in one of them that said there was no class defined for
> myriad_executor.  Adding the following to my yarn-site.xml file resolved
> this issue:
>
> <property>
>         <name>yarn.nodemanager.aux-services.myriad_executor.class</name>
>         <value>org.apache.myriad.executor.MyriadExecutorAuxService</value>
> </property>
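For context, that class property pairs with the aux-services list entry from the setup guide; a combined sketch (the list value here is an assumption based on the Myriad docs — keep whatever services your yarn-site.xml already lists):

```xml
<!-- Sketch: the aux-services entry plus the class mapping above. If your
     yarn-site.xml already lists other services (e.g. a shuffle service),
     append myriad_executor to the existing comma-separated value. -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>myriad_executor</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.myriad_executor.class</name>
  <value>org.apache.myriad.executor.MyriadExecutorAuxService</value>
</property>
```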
>
> Now my node manager is in a running state and appears to be staying there
> :)
>
> I'm not sure if this is related to the stderr message below, but all seems
> to be working now.
>
> As a first time user of Myriad, I was planning on adding info to the setup
> guide to clarify things that weren't immediately obvious to me while
> setting it up.  Let me know if there would be any interest in this.
>
> Thanks everyone for your help!
> Matt
>
> ________________________________________
> From: Matthew J. Loppatto [mloppa...@keywcorp.com]
> Sent: Monday, January 18, 2016 7:53 AM
> To: dev@myriad.incubator.apache.org
> Subject: RE: Myriad Vagrant Setup Issue
>
> Hi all,
>
> My latest stdout logs are empty.  My latest stderr logs in
> /tmp/mesos/slaves/... show only the following:
>
> ABORT: (/tmp/mesos-build/mesos-repo/3rdparty/libprocess/src/subprocess.cpp:177):
> Failed to os::execvpe in childMain: Argument list too long
> *** Aborted at 1452885649 (unix time) try "date -d @1452885649" if you are
> using GNU date ***
> PC: @     0x7f670062fcc9 (unknown)
> *** SIGABRT (@0x41a4) received by PID 16804 (TID 0x7f66f7119700) from PID
> 16804; stack trace: ***
>     @     0x7f67009ce340 (unknown)
>     @     0x7f670062fcc9 (unknown)
>     @     0x7f67006330d8 (unknown)
>     @           0x40ac42 _Abort()
>     @           0x40ac7c _Abort()
>     @     0x7f670234a1ed process::childMain()
>     @     0x7f670234c23d std::_Function_handler<>::_M_invoke()
>     @     0x7f67006f347d (unknown)
>
>
> ________________________________________
> From: Swapnil Daingade [sdaing...@maprtech.com]
> Sent: Friday, January 15, 2016 5:03 PM
> To: dev@myriad.incubator.apache.org
> Subject: Re: Myriad Vagrant Setup Issue
>
> Hi Matt,
>
> Looks like the Mesos slave now launches the NodeManager mesos task.
> However, the NodeManager seems to be dying after a while.
>
> The log files for the NodeManager Mesos task (that Darin mentioned) below
> should help figure out why the NodeManager died.
> Could you please post those files as well?
>
> Regards
> Swapnil
>
>
> On Fri, Jan 15, 2016 at 11:42 AM, Darin Johnson <dbjohnson1...@gmail.com>
> wrote:
>
> > Matt, if you can't access the UI, on the slave you should still be able to
> > access stderr and stdout by going to:
> >
> > /tmp/mesos/slaves/<Hash representing slaveID>/frameworks/<Hash representing
> > frameworkID>/executors/myriad_executor<Hash>/runs/latest/stderr
> >
> > /tmp/mesos/slaves/<Hash representing slaveID>/frameworks/<Hash representing
> > frameworkID>/executors/myriad_executor<Hash>/runs/latest/stdout
> >
> > Replace /tmp/mesos/ with your work dir (likely /var/run/mesos/ or
> > /tmp/mesos).  The error messages here are usually informative.
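The two paths above can be collapsed into a one-liner that prints the newest executor stderr; WORKDIR and the path pattern are assumptions here — set them to match your slave's --work_dir and executor naming:

```shell
# Hypothetical helper: find every Myriad executor stderr under the slave
# work dir and print the most recently modified one.
WORKDIR=/tmp/mesos
find "$WORKDIR/slaves" -path '*/executors/myriad_executor*' -name stderr 2>/dev/null \
  | xargs -r ls -t 2>/dev/null | head -1
```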
> >
> > On Fri, Jan 15, 2016 at 11:13 AM, Matthew J. Loppatto <
> > mloppa...@keywcorp.com> wrote:
> >
> > > Hey Darin,
> > >
> > > For some reason my Mesos UI hangs when loading the logs, but I posted the
> > > contents of my mesos slave logs in /var/log/mesos to this public Gist:
> > > https://gist.github.com/FearTheParrot/b00aa7eee9ae169498d3
> > >
> > > Matt
> > >
> > > -----Original Message-----
> > > From: Darin Johnson [mailto:dbjohnson1...@gmail.com]
> > > Sent: Friday, January 15, 2016 10:55 AM
> > > To: Dev
> > > Subject: Re: Myriad Vagrant Setup Issue
> > >
> > > Hey Matt, if you look at the Mesos UI, is there any information in the
> > > stderr or stdout of the slave host it's staging on?
> > >
> > > Darin
> > >
> > > On Fri, Jan 15, 2016 at 10:36 AM, Matthew J. Loppatto <
> > > mloppa...@keywcorp.com> wrote:
> > >
> > > > I've gotten a little farther on this issue by increasing the mesos
> > > > slave memory from 2 GB to 4 GB.  The node manager task gets launched
> > > > and sits in the STAGING state for a minute, and then the
> > > > mesos-slave.INFO log shows:
> > > >
> > > > I0115 15:19:12.114537 30903 slave.cpp:3841] Terminating executor
> > > > myriad_executor20160115-145750-344821002-5050-30838-000020160115-145750-344821002-5050-30838-O18020160115-145750-344821002-5050-30838-S0
> > > > of framework 20160115-145750-344821002-5050-30838-0000 because it did
> > > > not register within 1mins
> > > >
> > > > I then increased the mesos slave's executor_registration_timeout
> > > > setting from 1mins to 5mins to see if that would make a difference,
> > > > but I still get the following in the log:
> > > >
> > > > I0115 15:19:12.114537 30903 slave.cpp:3841] Terminating executor
> > > > myriad_executor20160115-145750-344821002-5050-30838-000020160115-145750-344821002-5050-30838-O18020160115-145750-344821002-5050-30838-S0
> > > > of framework 20160115-145750-344821002-5050-30838-0000 because it did
> > > > not register within 5mins
> > > >
> > > > Is there any guidance on why the Myriad executor fails to register
> > > > with the Mesos slave?
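For reference, the timeout mentioned above is an agent command-line flag; a hypothetical invocation might look like this (the master URL and work dir are placeholders, and the exact flag set should be confirmed with `mesos-slave --help`):

```shell
mesos-slave --master=zk://localhost:2181/mesos \
            --work_dir=/tmp/mesos \
            --executor_registration_timeout=5mins
```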
> > > >
> > > > Thanks,
> > > > Matt
> > > >
> > > > -----Original Message-----
> > > > From: Matthew J. Loppatto
> > > > Sent: Thursday, January 14, 2016 2:25 PM
> > > > To: 'dev@myriad.incubator.apache.org'
> > > > Subject: RE: Myriad Vagrant Setup Issue
> > > >
> > > > Sarjeet,
> > > >
> > > > Thanks for the reply.  I modified the medium profile in my
> > > > myriad-config-default.yml file to use 1 cpu and 1024 MB mem and am
> > > > seeing a similar issue in the YARN resource manager log:
> > > >
> > > > Offer not sufficient for task with, cpu: 1.4, memory: 2432.0, ports:
> > > > 1001
> > > >
> > > > If I try lowering the medium profile memory below 1024 I get the
> > > > following message in the log:
> > > >
> > > > NodeManager from vagrant-ubuntu-trusty-64 doesn’t satisfy minimum
> > > > allocations, Sending SHUTDOWN signal to NodeManager.
> > > >
> > > > Increasing the memory of the VM to 6 GB also didn't solve the issue.
> > > > Are there any other measures I can take to resolve the insufficient
> > > > resource messages?
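A back-of-the-envelope reading of the two numbers (the overhead values below are inferred by subtraction from the log line, not documented Myriad constants): the profile's cpu/mem is not what the scheduler requests — executor and JVM overhead is added on top, so the offer must cover profile plus overhead.

```shell
# Inferred overhead on top of the edited "medium" profile:
# 1.4 - 1.0 cpu and 2432 - 1024 MB (deduced from the log, not Myriad constants).
awk 'BEGIN {
  profile_cpu = 1.0;  profile_mem = 1024.0
  over_cpu    = 0.4;  over_mem    = 1408.0
  printf "need cpu: %.1f, mem: %.1f\n", profile_cpu + over_cpu, profile_mem + over_mem
}'
# prints: need cpu: 1.4, mem: 2432.0  -- matching the RM's offer check
```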
> > > >
> > > > Thanks,
> > > > Matt
> > > >
> > > > -----Original Message-----
> > > > From: sarjeet singh [mailto:sarje...@usc.edu]
> > > > Sent: Thursday, January 14, 2016 12:41 PM
> > > > To: dev@myriad.incubator.apache.org
> > > > Subject: Re: Myriad Vagrant Setup Issue
> > > >
> > > > Matthew,
> > > >
> > > > You can modify the profile configurations for NodeManagers in
> > > > myriad-config-default.yml and reduce the medium (default) NM
> > > > configuration to match your VM capacity, so a default NM (medium
> > > > profile) can launch without any issue.
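The edit described above looks roughly like this in myriad-config-default.yml (a sketch: the field names follow the Myriad default config, and the values are sized down for a small VM — verify against your own file):

```yaml
profiles:
  medium:        # the default profile used when launching a NodeManager
    cpu: 1       # reduced from the stock value to fit the VM
    mem: 1024    # MB
```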
> > > >
> > > > - Sarjeet Singh
> > > >
> > > > On Thu, Jan 14, 2016 at 10:56 PM, Matthew J. Loppatto <
> > > > mloppa...@keywcorp.com> wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > I'm trying to set up Myriad for an R&D project at my company, but
> > > > > I'm having some trouble even getting the Vagrant VM working
> > > > > properly.  I followed the instructions here:
> > > > >
> > > > > https://github.com/apache/incubator-myriad/blob/master/docs/vagrant.md
> > > > >
> > > > > with some minor corrections, but the Node Manager fails to start.
> > > > > It looks like a resource issue based on the log output.  The Mesos
> > > > > UI shows a slave process with 2 cpu and 2 GB mem, but the log
> > > > > states the task requires 4 cpu and 5.5 GB mem.
> > > > >
> > > > > I've detailed my configuration and log output in this public Gist:
> > > > >
> > > > > https://gist.github.com/FearTheParrot/626259c23a854645fcbf
> > > > >
> > > > > Would it be possible to provision the Mesos slave with more
> > > > > resources while also reducing the profile size of the Node Manager?
> > > > > The Vagrant VM only has 4 GB ram and 2 cpu.
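Resizing the VM itself is a Vagrantfile change; a sketch of the relevant provider block (the block name and existing values depend on the incubator-myriad Vagrantfile, so treat this as illustrative):

```ruby
# Illustrative VirtualBox provider block; run `vagrant reload`
# after editing so the new sizing takes effect.
config.vm.provider "virtualbox" do |vb|
  vb.memory = 4096   # MB for the VM
  vb.cpus   = 2
end
```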
> > > > >
> > > > > Any help would be appreciated.
> > > > >
> > > > > Thanks!
> > > > > Matt
> > > > >
> > > >
> > >
> >
>
