Spark Streaming does "NOT" crash unceremoniously – please maintain responsible
and objective communication and facts
From: Akhil Das [mailto:ak...@sigmoidanalytics.com]
Sent: Monday, May 18, 2015 2:28 PM
To: Evo Eftimov
Cc: Dmitry Goldenberg; user@spark.apache.org
Subject: Re: Spark Streaming and reducing latency
> as Evo says Spark Streaming DOES crash in "unceremonious way" when the free
> RAM available for In-Memory Cached RDDs gets exhausted
>
>
>
> *From:* Akhil Das [mailto:ak...@sigmoidanalytics.com]
> *Sent:* Monday, May 18, 2015 2:03 PM
> *To:* Evo Eftimov
> *Cc:* Dmitry Goldenberg; user@spark.apache.org
From: Akhil Das [mailto:ak...@sigmoidanalytics.com]
Sent: Monday, May 18, 2015 2:28 PM
To: Evo Eftimov
Cc: Dmitry Goldenberg; user@spark.apache.org
Subject: Re: Spark Streaming and reducing latency
we = Sigmoid
back-pressuring mechanism = Stopping the receiver from receiving more data
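"Stopping the receiver" when downstream can't keep up is, at its core, a bounded buffer between producer and consumer: when the buffer is full, the producer blocks instead of the system crashing. A minimal sketch of that idea in plain Python (this is illustrative only, not Spark's or Sigmoid's actual receiver code; all names are made up):

```python
import queue
import threading

# Bounded buffer: when it is full, put() blocks, which is the
# "stop the receiver" behaviour -- the producer pauses until the
# consumer drains some records, so nothing is dropped or crashes.
buffer = queue.Queue(maxsize=5)

received = []

def consumer():
    while True:
        item = buffer.get()
        if item is None:      # sentinel: no more data
            break
        received.append(item)

t = threading.Thread(target=consumer)
t.start()

# Producer ("receiver"): blocks automatically whenever the buffer
# already holds 5 un-consumed records.
for i in range(100):
    buffer.put(i)
buffer.put(None)
t.join()

print(len(received))  # prints 100 -- all records delivered, none lost
```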
From: Akhil Das [mailto:ak...@sigmoidanalytics.com]
Sent: Monday, May 18, 2015 2:03 PM
To: Evo Eftimov
Cc: Dmitry Goldenberg; user@spark.apache.org
Subject: Re: Spark Streaming and reducing latency
We fix the rate at which the receivers should consume at any given point of
time. Also we have a back-pressuring mechanism attached to the receivers so it
won't simply crash in the "unceremonious way" like Evo said. Mesos has some
sort …
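A fixed consume rate of the kind described can be pictured as a token bucket: the receiver may take at most N records per second and must wait once the budget for the current moment is spent. A minimal sketch of that pattern (plain Python, purely illustrative; this is an assumption about the general technique, not Sigmoid's implementation):

```python
import time

class RateLimiter:
    """Token bucket: allow at most `rate` permits per second."""

    def __init__(self, rate):
        self.rate = rate
        self.tokens = 0.0               # start empty: no initial burst
        self.last = time.monotonic()

    def acquire(self):
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at one second's budget.
        self.tokens = min(self.rate,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens < 1:
            # Not enough budget: wait until the next token accrues.
            time.sleep((1 - self.tokens) / self.rate)
            self.last = time.monotonic()
            self.tokens = 1.0
        self.tokens -= 1

limiter = RateLimiter(rate=100)         # cap: 100 records/second
start = time.monotonic()
for _ in range(50):                     # 50 records should take ~0.5 s
    limiter.acquire()
elapsed = time.monotonic() - start
print(elapsed >= 0.4)                   # the cap actually slowed us down
```

A receiver wrapped in such a limiter can never outrun the configured rate, which is what keeps memory use bounded even under a burst of input.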
From: Evo Eftimov [mailto:evo.efti...@isecc.com]
Sent: Monday, May 18, 2015 12:13 PM
To: 'Dmitry Goldenberg'; 'Akhil Das'
Cc: 'user@spark.apache.org'
Subject: RE: Spark Streaming and reducing latency
You can use spark.strea…
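Spark Streaming does expose receiver-side rate limiting through configuration. One knob in this family is the per-receiver maximum rate; a spark-defaults.conf sketch (the value is illustrative, and whether this is the exact setting Evo had in mind cannot be confirmed from this message):

```properties
# Cap each receiver at 100 records per second (value illustrative)
spark.streaming.receiver.maxRate  100
```

Later Spark releases also added a dynamic alternative, spark.streaming.backpressure.enabled, which adjusts the ingestion rate automatically from batch scheduling delays instead of using a fixed cap.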
From: Dmitry Goldenberg
Sent: Monday, May 18, 2015 11:46 AM
To: Akhil Das
Cc: user@spark.apache.org
Subject: Re: Spark Streaming and reducing latency
Thanks, Akhil. So what do folks typically do to increase/contract the capacity?
Do you plug in some cluster auto-scaling solution to make this elastic?
Does Spark have any hooks …
… environment. Storm for example implements this pattern, or you can just put
together your own solution
From: Akhil Das [mailto:ak...@sigmoidanalytics.com]
Sent: Sunday, May 17, 2015 4:04 PM
To: dgoldenberg
Cc: user@spark.apache.org
Subject: Re: Spark Streaming and reducing latency
With receiver …
> Are there techniques, in that case, to ensure the consumers don't get
> overwhelmed with new data?
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Streaming-and-reducing-latency-tp22922.html
… overwhelmed with new data?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Streaming-and-reducing-latency-tp22922.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.