Spark should be setting the FrameworkInfo.webui_url on scheduler startup.
https://github.com/apache/spark/blob/v1.4.1/core/src/main/scala/org/apache/spark/deploy/mesos/MesosClusterDispatcher.scala#L76
What versions of Mesos and Spark are you using?
On Tue, Jul 28, 2015 at 3:40 PM, Philip Weaver
Hi everyone,
I’m trying to get access to the Spark web UI from the Mesos master, but with no
success: the host name is displayed properly, but the link is not active, just
text. Maybe it’s a well-known issue or I misconfigured something, but this
problem is really annoying.
When running spark-submit, it quits before it writes any logs; when I look at
the log directory, it only has empty files!
On Tue, Jul 28, 2015 at 2:34 PM, Vinod Kone vinodk...@gmail.com wrote:
can you paste the logs?
On Tue, Jul 28, 2015 at 2:31 PM, Haripriya Ayyalasomayajula
aharipriy...@gmail.com wrote:
Well,
Hi Mesos users,
I am wondering if anyone is using this isolator (i.e.,
--isolation=filesystem/shared)? If not, we plan to remove it from the
source code in favor of the upcoming, more general Linux filesystem
isolator (https://reviews.apache.org/r/36429/).
- Jie
0. How do I go about the issue of HA at the scheduler level?
One alternative to having to do your own leader election is to use a
meta-framework like Marathon or Aurora to automatically restart your
scheduler. There will be a short downtime during the failover, but as soon
as the new scheduler
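As a hedged sketch of that meta-framework approach: a minimal Marathon app definition that keeps one instance of a (hypothetical) scheduler binary running and restarts it whenever it exits. The id, cmd path, and ZooKeeper address are all placeholders.

```json
{
  "id": "/my-framework-scheduler",
  "cmd": "/opt/my-framework/bin/scheduler --master zk://zk1:2181/mesos",
  "cpus": 0.5,
  "mem": 512.0,
  "instances": 1
}
```

POSTing something like this to Marathon's /v2/apps endpoint is what delegates the restart-on-failure part; leader election inside your framework is still your own problem if you run more than one instance.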
So, I don't mean to sound like a newbie here, but in running my current
setup, which has 4.6.3 (and I tried to run 4.8), how can I get Mesos 0.23 to
compile? Is this something I need to change in certain files? In certain
steps? Is this something that should be filed as a bug in Mesos to handle the
Hi,
I just set up mesos-dns with my mesos+marathon cluster, and it appears to
be working fine, but I can't get SRV records.
mesos-dns is executed by running: $ sudo /usr/local/mesos-dns/mesos-dns
-config /usr/local/mesos-dns/config.json
and is verified to be working by running dig from another machine
Hi,
Is it possible to build a custom executor which is not associated with a
particular scheduler framework? I want to be able to write a custom executor
which is available to multiple schedulers (eg Marathon, Chronos and our own
custom scheduler). Is this possible? I couldn't quite figure out
A simple nginx reverse proxy will get you most of the way there, but only
for the master webui. Since the tasks' sandboxes are hosted on each slave's
webui, you would also have to reverse proxy each slave's webui in order for
sandboxes to be publicly accessible. More complicated, but not
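For illustration, a minimal nginx fragment along those lines - the internal host names and the path-per-slave scheme are assumptions for the sketch, not a recommended layout:

```nginx
server {
    listen 80;

    # Master web UI
    location / {
        proxy_pass http://mesos-master.internal:5050;
    }

    # One such block per slave, so sandbox links stay reachable
    location /slave1/ {
        proxy_pass http://mesos-slave1.internal:5051/;
    }
}
```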
Do the mesos-slaves have their own UI, separate from the master's? If so,
what's the URL to get to it? I just tried http://mesos-slave:5051 and got a
blank page.
On Tue, Jul 28, 2015 at 6:51 AM, Adam Bordelon a...@mesosphere.io wrote:
A simple nginx reverse proxy will get you most of the way there,
Hi Haripiya,
When you run Spark on Mesos, it needs to run:
- the Spark driver
- the Mesos scheduler
and both need to be visible to the outside world on a public interface IP. You
need to tell Spark and Mesos which interface to bind to - by default they
resolve the node hostname to an IP, and this is the loopback address in your
What does
dig _search3d._tcp.marathon.mesos SRV
give you?
See also http://mesosphere.github.io/mesos-dns/docs/naming.html
Cheers,
Michael
--
Michael Hausenblas
Ireland, Europe
http://mhausenblas.info/
On 28 Jul 2015, at 17:14, Itamar Ostricher ita...@yowza3d.com wrote:
Hi Itamar,
You need to change the dns name you’re querying for a bit:
Use dig _search3d._tcp.marathon.mesos
See: https://mesosphere.github.io/mesos-dns/docs/naming.html#srv-records
Andras
From: Itamar Ostricher [mailto:ita...@yowza3d.com]
Sent: Tuesday, July 28, 2015 12:15 PM
Yes. You can mix and match languages. In fact, a major Mesos framework does
this - Aurora. Its scheduler is written in Java and its executor is
written in Python. I've experimented myself with writing the scheduler in
Golang and the executor in Erlang.
In addition to this, making your executor
ah, thanks! missed that part... it did the trick :-)
On Tue, Jul 28, 2015 at 7:25 PM Andras Kerekes
andras.kere...@ishisystems.com wrote:
Hi Itamar,
You need to change the dns name you’re querying for a bit:
Use *dig _search3d._tcp.marathon.mesos*
See:
Hi @Araon, if you want to develop your own custom framework, you could check
out this document first:
https://github.com/apache/mesos/blob/master/docs/app-framework-development-guide.md
I want to be able to write a custom executor which is available to
multiple schedulers (eg Marathon, Chronos and our
Hi all,
I am trying to use Spark 1.4.1 with Mesos 0.23.0.
When I try to start my spark-shell, it gives me the following warning :
Scheduler driver bound to loopback interface! Cannot communicate with
remote master(s). You might want to set
Can you explain what your motivations are and what your new custom executor
will do?
Tim
On Tue, Jul 28, 2015 at 5:08 AM, Aaron Carey aca...@ilm.com wrote:
Hi,
Is it possible to build a custom executor which is not associated with a
particular scheduler framework? I want to be able to
Did you set the LIBPROCESS_IP env variable as the warning suggested?
On Tue, Jul 28, 2015 at 11:16 AM, Haripriya Ayyalasomayajula
aharipriy...@gmail.com wrote:
Hi all,
I am trying to use Spark 1.4.1 with Mesos 0.23.0.
When I try to start my spark-shell, it gives me the following warning :
If you are not using any DNS-like service, create two files called ip and
hostname under /etc/mesos-master/ and put in them the IP of the eth interface.
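A sketch of those two files, assuming the eth interface IP is 10.0.1.5 and the host is master1.example.com (both placeholders); the init scripts pass each file's content to the master as the matching flag:

```shell
# run as root on the master; the IP and hostname are placeholders
echo "10.0.1.5"            > /etc/mesos-master/ip
echo "master1.example.com" > /etc/mesos-master/hostname
# then restart the master, e.g.: service mesos-master restart
```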
Original message
From: Haripriya Ayyalasomayajula aharipriy...@gmail.com
Date: 28/07/2015 20:18
I am trying to do this
export LIBPROCESS_IP=zk://my_ipaddress:2181/mesos
./bin/spark-shell
It gives me this error and aborts
WARNING: Logging before InitGoogleLogging() is written to STDERR
F0728 15:43:39.361445 13209 process.cpp:847] Parsing
LIBPROCESS_IP=zk://my_ipaddress:2181/ failed:
LIBPROCESS_IP is the IP address that you want the scheduler (driver) to
bind to. It has nothing to do with the ZooKeeper address.
In other words, do
export LIBPROCESS_IP=scheduler_ip_address
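To make the distinction concrete - a sketch assuming the driver host's routable address is 192.168.1.10 (a placeholder):

```shell
# LIBPROCESS_IP takes a bare, bindable IP address - not the zk:// master URL
export LIBPROCESS_IP=192.168.1.10
# the ZooKeeper URL belongs on the master side instead, e.g.:
#   ./bin/spark-shell --master mesos://zk://my_ipaddress:2181/mesos
```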
On Tue, Jul 28, 2015 at 2:00 PM, Haripriya Ayyalasomayajula
aharipriy...@gmail.com wrote:
I am
Well, when I try doing that I get this error:
Failed to initialize: Failed to bind on scheduler_ip_address Cannot assign
requested address: Cannot assign requested address [99]
When I do a
ps -ef | grep mesos
on both my master and slave nodes, it works fine. And, I am also able to
ping both
I also have this problem, thanks!
On Tue, Jul 28, 2015 at 3:34 PM, Anton Kirillov antonv.kiril...@gmail.com
wrote:
Hi everyone,
I’m trying to get access to the Spark web UI from the Mesos master, but with no
success: the host name is displayed properly, but the link is not active, just
text. Maybe
For me, it's 0.23.0 and 1.4.0, respectively.
On Tue, Jul 28, 2015 at 4:08 PM, Adam Bordelon a...@mesosphere.io wrote:
Spark should be setting the FrameworkInfo.webui_url on scheduler startup.
spark-env.sh works, as it will be sourced by spark-submit/spark-shell, or you
can just set it before you call spark-shell yourself.
Tim
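A hedged sketch of what that spark-env.sh entry might look like (the address is a placeholder, and SPARK_LOCAL_IP is an assumption about a companion setting, not something the thread confirms is required):

```shell
# conf/spark-env.sh - sourced by spark-shell/spark-submit
export LIBPROCESS_IP=192.168.1.10   # interface the Mesos scheduler driver binds to
export SPARK_LOCAL_IP=192.168.1.10  # interface Spark itself binds to (assumption)
```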
On Tue, Jul 28, 2015 at 1:43 PM, Haripriya Ayyalasomayajula
aharipriy...@gmail.com wrote:
Hi,
Where can I set the libprocess_ip env variable? spark_env.sh?
Hi,
Where can I set the libprocess_ip env variable? spark_env.sh? That's the
only place I can think of. Can you please point me to any related
documentation?
On Tue, Jul 28, 2015 at 12:46 PM, Nikolaos Ballas neXus
nikolaos.bal...@nexusgroup.com wrote:
If you are not using any dns like service
can you paste the logs?
On Tue, Jul 28, 2015 at 2:31 PM, Haripriya Ayyalasomayajula
aharipriy...@gmail.com wrote:
Well, when I try doing that I get this error:
Failed to initialize: Failed to bind on scheduler_ip_address Cannot assign
requested address: Cannot assign requested address [99]