Based on the count, that looks like all ~150 submissions.
I couldn't find any Myriad-specific ones, so I guess none of us submitted a
Myriad talk to MesosCon.
Found this though:
Experiences running HPC & Big Data frameworks on Cray Analytics Platform
"... Cray .. leverage Apache Mesos and create
Yeah I didn't see one either.
Darin
On Wed, Mar 23, 2016 at 1:10 PM, Sarjeet Singh
wrote:
> I couldn't find any link for a Myriad talk for the MesosCon voting.
> Anyone?
>
> Though, I did find these proposal docs:
>
> Developers: http://bit.ly/1RpZPvj
> Users:
I couldn't find any link for a Myriad talk for the MesosCon voting.
Anyone?
Though, I did find these proposal docs:
Developers: http://bit.ly/1RpZPvj
Users: http://bit.ly/1Mspaxp
*It seems the deadline for the proposal voting is today, March 23, 2016.*
-Sarjeet
DarinJ created MYRIAD-192:
-
Summary: Better Support Cgroups
Key: MYRIAD-192
URL: https://issues.apache.org/jira/browse/MYRIAD-192
Project: Myriad
Issue Type: Bug
Components: Scheduler
[
https://issues.apache.org/jira/browse/MYRIAD-188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Adam B updated MYRIAD-188:
--
Fix Version/s: Myriad 0.1.1
> Zero sized node managers can cause the Resource Manager to crash with an NPE
>
[
https://issues.apache.org/jira/browse/MYRIAD-153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Adam B updated MYRIAD-153:
--
Fix Version/s: Myriad 0.1.1
> Placeholder tasks yarn_container_* is not cleaned after yarn job is complete.
>
Swanil,
I concur and want to keep both the Mesos and Docker networking options
available, and putting the configuration for both in should be a priority.
However, one has to be careful with this, as the NMs register with the RM
via heartbeats using their container port (not the host port); this
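For illustration, the kind of yarn-site.xml port settings in play look roughly
like this (the property names are standard YARN; the values are just examples,
not what Myriad actually generates):

  <!-- Sketch: pin the NM ports so the port advertised to the RM in heartbeats
       is one that is actually reachable from outside the container
       (example values, adjust to the real port mapping). -->
  <property>
    <name>yarn.nodemanager.address</name>
    <value>0.0.0.0:45454</value>
  </property>
  <property>
    <name>yarn.nodemanager.webapp.address</name>
    <value>0.0.0.0:8042</value>
  </property>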
Hey Björn, sorry for the delay. Looking at the difference between the
exceptions and my own experience, I believe you left some cgroup configs in
the yarn-site.xml of the node manager.
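If it helps, the usual suspects look something like this (standard YARN
property names; the values are typical examples, not necessarily what is in
your file):

  <!-- Cgroup-related NM settings that can conflict if left over from an
       earlier manual setup (example values). -->
  <property>
    <name>yarn.nodemanager.container-executor.class</name>
    <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
  </property>
  <property>
    <name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
    <value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value>
  </property>
  <property>
    <name>yarn.nodemanager.linux-container-executor.cgroups.hierarchy</name>
    <value>/hadoop-yarn</value>
  </property>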
On Mar 18, 2016 2:58 AM, "Björn Hagemeier"
wrote:
> Hi Darin,
>
> thanks a lot for this.