Maybe your master or Zeppelin server is running out of memory, and the more data
it receives, the more memory swapping it has to do. Something to check.
Get Outlook for Android
On Wed, May 17, 2017 at 11:14 AM -0400, "Junaid Nasir" wrote:
I have a large data
Thanks. It looks like they posted the release just now because it wasn't
showing before.
On Fri, May 5, 2017 at 11:04 AM -0400, "Jules Damji" wrote:
Go to this link http://spark.apache.org/downloads.html
Cheers,
Jules
Hi
Website says it is released. Where can it be downloaded?
Thanks
Get Outlook for Android
So what was the answer?
Sent from my Verizon, Samsung Galaxy smartphone
Original message
From: Andrew Holway
Date: 1/15/17 11:37 AM (GMT-05:00)
To: Marco Mistroni
Cc: Neil Jonkers, User
Anyone got a good guide for getting the Spark master to talk to remote workers
inside Docker containers? I followed the tips I found by searching, but it still
doesn't work. Spark 1.6.2.
I exposed all the ports and tried setting the local IP inside the container to
the host IP, but Spark complains it can't bind the UI ports.
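For what it's worth, a sketch of the two workarounds usually suggested (the image name, host IP, and ports below are made up; treat this as an untested assumption, not a recipe). Binding the host IP inside the container fails because that address doesn't exist in the container's network namespace:

```shell
# Option 1: host networking -- the container shares the host's interfaces,
# so SPARK_LOCAL_IP can be the real host address (10.0.0.5 is made up):
docker run --net=host -e SPARK_LOCAL_IP=10.0.0.5 spark-worker

# Option 2: bridge networking -- pin the worker's ports, publish them,
# and advertise the host address to the master via SPARK_PUBLIC_DNS:
docker run -p 7078:7078 -p 8081:8081 \
  -e SPARK_PUBLIC_DNS=10.0.0.5 \
  -e SPARK_WORKER_PORT=7078 \
  -e SPARK_WORKER_WEBUI_PORT=8081 \
  spark-worker
```

The key point in either case is that the address Spark binds to must exist inside the container, while the address it advertises must be reachable from outside it.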
Replying for info, since it's not identical to your request but in the same
spirit.
Darren
Original message
From: Chetan Khatri <chetan.opensou...@gmail.com>
Date: 1/4/17 6:34 AM (GMT-05:00)
To: Lars Albertsson <la...@ma
uma...@me.com>
Date: 9/2/16 4:03 AM (GMT-05:00) To: Mich Talebzadeh
<mich.talebza...@gmail.com> Cc: Jakob Odersky <ja...@odersky.com>, ayan guha
<guha.a...@gmail.com>, Tal Grynbaum <tal.grynb...@gmail.com>, darren
<dar...@ontrenet.com>, kant kodali <kanth...@gm
This topic is a concern for us as well. In the data science world no one uses
native Scala or Java by choice. It's R and Python, and Python is growing. Yet
in Spark, Python is third in line for feature support, if at all.
This is why we have decoupled from Spark in our project. It's really
This is fantastic news.
Original message
From: Paolo Patierno
Date: 7/3/16 4:41 AM (GMT-05:00)
To: user@spark.apache.org
Subject: AMQP extension for Apache Spark Streaming (messaging/IoT)
Hi all,
I'm
Original message
From: Malcolm Lockyer <malcolm.lock...@hapara.com>
Date: 05/30/2016 10:40 PM (GMT-05:00)
To: user@spark.apache.org
Subject: Re: Spark + Kafka processing trouble
On Tue, May 31, 2016 at 1:56 PM, Darren Govon
So you are calling a SQL query (to a single database) within a spark operation
distributed across your workers?
Original message
From: Malcolm Lockyer
Date: 05/30/2016 9:45 PM (GMT-05:00)
Hi, I have a Python egg with a __main__.py in it. I am able to execute the egg
by itself fine.
Is there a way to just submit the egg to Spark and have it run? It seems an
external .py script is needed, which would be unfortunate if true.
Thanks
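One commonly suggested workaround, sketched here with entirely made-up file and module names (and assuming the egg's logic is also importable as a function, not only runnable via its __main__.py): ship the egg with --py-files and point spark-submit at a tiny stub driver, since spark-submit wants a primary .py file.

```shell
# Stub driver (all names hypothetical):
cat > driver.py <<'EOF'
from myapp import main   # assumed entry point inside the egg
main()
EOF

# Ship the egg as a dependency; the stub is the required primary .py file.
spark-submit --master spark://master:7077 \
  --py-files myapp-0.1.egg driver.py
```

So yes, under this approach an external .py is still needed, but it can be a two-line shim rather than real code.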
Date: 03/02/2016 5:43 PM (GMT-05:00)
To: Darren Govoni <dar...@ontrenet.com>, Jules Damji <dmat...@comcast.net>,
Joshua Sorrell <jsor...@gmail.com>
Cc: user@spark.apache.org
Subject: Re: Does pyspark still lag far behind the Scala API in terms of
features
Plenty of people g
DataFrames are essentially structured tables with schemas. So where does the
untyped data sit before it becomes structured, if not in a traditional RDD?
For us, almost all the processing comes before there is any structure to it.
Sent from my Verizon Wireless 4G LTE smartphone
This might be hard to do. One generalization of this problem is the longest
path problem: https://en.m.wikipedia.org/wiki/Longest_path_problem
Given a node (e.g. A), find the longest path. All interior relations are
transitive and can be inferred.
But finding a distributed Spark way of doing it in P time would be
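A small aside on the complexity point: on general graphs longest path is indeed NP-hard, but if the transitive relations form a DAG it drops to linear time with a memoized DFS. A minimal single-machine sketch (the edge list and node names are invented for illustration; this says nothing about distributing it):

```python
from collections import defaultdict

def longest_path_from(edges, start):
    """Length (in edges) of the longest path from `start`, assuming a DAG."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    memo = {}

    def dfs(u):
        if u in memo:
            return memo[u]
        best = 0
        for v in adj[u]:
            best = max(best, 1 + dfs(v))
        memo[u] = best
        return best

    return dfs(start)

# A -> B -> C -> D is the longest path from A: 3 edges
print(longest_path_from([("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")], "A"))
```

The sketch leans on acyclicity: with cycles this DFS would recurse forever, which is exactly where the NP-hardness bites.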
I meant to write 'last task in stage'.
Original message
From: Darren Govoni <dar...@ontrenet.com>
Date: 02/16/2016 6:55 AM (GMT-05:00)
To: Abhishek Modi <abshkm...@gmail.com>, user@spark.apache.org
I think this is part of the bigger issue of serious deadlock conditions
occurring in Spark that many of us have posted about.
Would the task in question be the past task of a stage by chance?
Original message
From: Abhishek Modi
Why not deploy it, then build a custom distribution with Scala 2.11 and just
overlay it?
Original message
From: Nuno Santos
Date: 01/25/2016 7:38 AM (GMT-05:00)
To: user@spark.apache.org
Subject:
: "Sanders, Isaac B" <sande...@rose-hulman.edu>
Date: 01/25/2016 8:59 AM (GMT-05:00)
To: Ted Yu <yuzhih...@gmail.com>
Cc: Darren Govoni <dar...@ontrenet.com>, Renu Yadav <yren...@gmail.com>, Muthu
Jayakumar <bablo...@gmail.com>, user@spark.apache.or
4 PM (GMT-05:00)
To: Renu Yadav <yren...@gmail.com>
Cc: Darren Govoni <dar...@ontrenet.com>, Muthu Jayakumar <bablo...@gmail.com>,
Ted Yu <yuzhih...@gmail.com>, user@spark.apache.org
Subject: Re: 10hrs of Scheduler Delay
I am not getting anywhere with any of the su
2/2016 3:50 PM (GMT-05:00)
To: Darren Govoni <dar...@ontrenet.com>, "Sanders, Isaac B"
<sande...@rose-hulman.edu>, Ted Yu <yuzhih...@gmail.com>
Cc: user@spark.apache.org
Subject: Re: 10hrs of Scheduler Delay
Does increasing the number of partitions help? You cou
Me too. I had to shrink my dataset to get it to work. For us at least Spark
seems to have scaling issues.
Original message
From: "Sanders, Isaac B"
Date: 01/21/2016 11:18 PM (GMT-05:00)
To:
I've experienced this same problem. Always the last stage hangs. Indeterminate.
No errors in the logs. I run Spark 1.5.2. Can't find an explanation, but it's
definitely a showstopper.
Original message
From: Ted Yu
I would also be interested in some best practices for making this work.
Where will the writeup be posted? On the Mesosphere website?
Original message
From: Sathish Kumaran Vairavelu
Date: 01/19/2016
What's the rationale behind that? It certainly limits the kind of flow logic we
can do in one statement.
Original message
From: David Russell
Date: 01/18/2016 10:44 PM (GMT-05:00)
To:
Hi,
I've had this nagging problem where a task will hang and then the
entire job hangs. Using PySpark, Spark 1.5.1.
The job output looks like this, and hangs after the last task:
..
15/12/29 17:00:38 INFO BlockManagerInfo: Added broadcast_0_piece0 in
Here's the executor trace.
Thread 58: Executor task launch
worker-3 (RUNNABLE)
java.net.SocketInputStream.socketRead0(Native Method)
java.net.SocketInputStream.read(SocketInputStream.java:152)
I'll throw a thought in here.
DataFrames are nice if your data is uniform and clean, with a consistent
schema. However, in many big data problems this is seldom the case.
Original message
From: Chris Fregly
I use Python too. I'm actually surprised it's not the primary language, since
it is by far more used in data science than Java and Scala combined.
If I had a second choice of scripting language for general apps, I'd want
Groovy over Scala.
Sent from my Verizon Wireless 4G LTE smartphone
to me doesn't give me a direction to look without the actual logs
from $SPARK_HOME or the stderr from the worker UI.
Just IMHO; maybe someone knows what this means, but it seems like it
could be caused by a lot of things.
On 12/2/2015 6:48 PM, Darren Govoni wrote:
Hi all,
Wondering if someone ca
Hi all,
Wondering if someone can provide some insight why this pyspark app is
just hanging. Here is output.
...
15/12/03 01:47:05 INFO TaskSetManager: Starting task 21.0 in stage 0.0
(TID 21, 10.65.143.174, PROCESS_LOCAL, 1794787 bytes)
15/12/03 01:47:05 INFO TaskSetManager: Starting task
I agree 100%. Making the model requires large data and many CPUs.
Using it does not.
This is a very useful side effect of ML models.
If MLlib can't use models outside Spark, that's a real shame.
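To illustrate the point (purely a sketch; the weights, feature values, and the idea that they were exported from a trained model are all invented): once training is done, scoring a simple model is just arithmetic that needs no cluster at all.

```python
import math

# Hypothetical coefficients, e.g. dumped from a logistic regression
# trained elsewhere on the big cluster.
WEIGHTS = [0.8, -1.2, 0.5]
BIAS = 0.1

def predict(features):
    """Score one example: sigmoid(w . x + b). No Spark required."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))

print(round(predict([1.0, 0.5, 2.0]), 3))  # ~0.786
```

The heavy lifting is all in fitting WEIGHTS; applying them is a dot product and a sigmoid, which any runtime can do.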
Original message
From:
Hi,
I read on this page
http://spark.apache.org/docs/latest/streaming-kafka-integration.html
about Python support for "receiverless" Kafka integration (Approach 2),
but it says it's incomplete as of version 1.4.
Has this been updated in version 1.5?
val aDstream = ...
val distinctStream = aDstream.transform(_.distinct())
but the elements in distinctStream are not distinct.
Did I use it wrong?
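One possible explanation (an assumption on my part, not confirmed in this thread): transform runs distinct() on each batch RDD independently, so duplicates are removed within a batch but the same element can still appear in later batches. A plain-Python sketch of that per-batch behaviour (batch contents invented):

```python
def per_batch_distinct(batches):
    """Mimic aDstream.transform(_.distinct()): dedupe within each batch only."""
    return [sorted(set(batch)) for batch in batches]

# 3 survives in both batches -- distinct is per batch, not across the stream.
print(per_batch_distinct([[1, 2, 2, 3], [3, 3, 4]]))  # [[1, 2, 3], [3, 4]]
```

If stream-wide dedup is what's wanted, it needs state carried across batches rather than a per-batch transform.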
Thanks, Shao
On Wed, Mar 18, 2015 at 3:34 PM, Shao, Saisai saisai.s...@intel.com wrote:
Yeah, as I said, your job processing time is much larger than the sliding
window, and streaming jobs are executed one by one in sequence, so the next
job will wait until the first job is finished, so the
On Wed, Mar 18, 2015 at 8:31 PM, Shao, Saisai saisai.s...@intel.com wrote:
From the log you pasted I think this (-rw-r--r-- 1 root root 80K Mar
18 16:54 shuffle_47_519_0.data) is not shuffle spilled data, but the
final shuffle result.
Why is the shuffle result written to disk?
As I
is larger than the sliding window, so maybe your computation power cannot
reach the QPS you wanted.
I think you need to identify the bottleneck first, and then try to tune your
code, balance the data, and add more computation resources.
Thanks
Jerry
*From:* Darren Hoo