Spark Streaming does “NOT” crash unceremoniously – please maintain responsible
and objective communication and facts
From: Akhil Das [mailto:ak...@sigmoidanalytics.com]
Sent: Monday, May 18, 2015 2:28 PM
To: Evo Eftimov
Cc: Dmitry Goldenberg; user@spark.apache.org
Subject: Re: Spark Streaming and reducing latency
we = Sigmoid
back-pressuring mechanism = Stopping the receiver from …
> as Evo says Spark Streaming DOES crash in “unceremonious way” when the free
> RAM available for In Memory Cached RDDs gets exhausted
>
> *From:* Akhil Das [mailto:ak...@sigmoidanalytics.com]
> *Sent:* Monday, May 18, 2015 2:03 PM
> *To:* Evo Eftimov
> *Cc:* Dmitry Goldenberg; user@spark.apache.org
>
> *Subject:* Re: Spark Streaming and reducing latency
>
> We fix the receivers' rate at which they should consume at any given point
> of time. Also we have a back-pressuring mechanism …
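Akhil's "fix the receivers' rate" idea can be hand-rolled as a custom receiver.
Below is a minimal sketch of that pattern, not Sigmoid's actual mechanism: the
class name, the one-second rate window, and the synthetic fetchNext() source
are all illustrative assumptions.

    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.receiver.Receiver

    // Sketch of a receiver that caps its own ingest rate with a simple
    // per-second budget. fetchNext() is a synthetic stand-in for a real
    // source read.
    class ThrottledReceiver(maxRecordsPerSec: Long)
        extends Receiver[String](StorageLevel.MEMORY_AND_DISK_SER) {

      def onStart(): Unit = {
        new Thread("throttled-receiver") {
          override def run(): Unit = receive()
        }.start()
      }

      def onStop(): Unit = {} // receive() exits once isStopped() turns true

      private def receive(): Unit = {
        var windowStart = System.currentTimeMillis()
        var sentThisWindow = 0L
        while (!isStopped()) {
          store(fetchNext()) // hand one record to Spark
          sentThisWindow += 1
          if (sentThisWindow >= maxRecordsPerSec) {
            // Budget for this second is spent: stop consuming until it rolls over.
            val elapsed = System.currentTimeMillis() - windowStart
            if (elapsed < 1000) Thread.sleep(1000 - elapsed)
            windowStart = System.currentTimeMillis()
            sentThisWindow = 0
          }
        }
      }

      private def fetchNext(): String = "record-" + System.nanoTime()
    }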
From: Evo Eftimov [mailto:evo.efti...@isecc.com]
Sent: Monday, May 18, 2015 12:13 PM
To: 'Dmitry Goldenberg'; 'Akhil Das'
Cc: 'user@spark.apache.org'
Subject: RE: Spark Streaming and reducing latency
You can use
spark.streaming.receiver.maxRate (default: not set)
Maximum rate (number of records per second) at which each receiver will receive
data. Effectively, each stream will consume at most this number of records per
second.
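A minimal example of applying this setting; the 10,000 records/sec cap is an
arbitrary illustration, tune it to your cluster's measured throughput:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    // Cap every receiver at 10,000 records/sec so a burst at the source
    // cannot outrun the cluster's processing capacity.
    val conf = new SparkConf()
      .setAppName("RateCappedStream")
      .set("spark.streaming.receiver.maxRate", "10000")
    val ssc = new StreamingContext(conf, Seconds(1))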
…-4-1410542878200 not found
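The stray line above appears to be the tail of the familiar "Could not compute
split, block input-X-Y not found" failure, which surfaces when a receiver
block is evicted or lost before its batch gets processed. One possible
mitigation, sketched under assumptions (socketTextStream and the host/port are
stand-ins for the real source), is to receive at a storage level that spills
to disk and replicates:

    import org.apache.spark.SparkConf
    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    // MEMORY_AND_DISK_SER_2 lets input blocks spill to disk (and be
    // replicated to a second executor) instead of being dropped when
    // executor memory fills up.
    val conf  = new SparkConf().setAppName("SpillableIngest")
    val ssc   = new StreamingContext(conf, Seconds(1))
    val lines = ssc.socketTextStream("localhost", 9999,
      StorageLevel.MEMORY_AND_DISK_SER_2)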
From: Dmitry Goldenberg
Sent: Monday, May 18, 2015 11:46 AM
To: Akhil Das
Cc: user@spark.apache.org
Subject: Re: Spark Streaming and reducing latency
Thanks, Akhil. So what do folks typically do to increase/contract the capacity?
Do you plug in some cluster auto-scaling solution to make this elastic?
Does Spark have any hooks for instrumenting auto-scaling?
In other words, how do you avoid overwhelming the receivers in a scenario when
your system …
… environment. Storm for example implements this pattern, or you
can just put together your own solution.
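One way to put together your own feedback loop, as Evo suggests, is to watch
each batch's scheduling delay through Spark's StreamingListener and expose an
overload flag for the ingest side (or an ops/auto-scaling hook) to act on. The
5-second threshold and the DelayWatcher name are illustrative assumptions:

    import java.util.concurrent.atomic.AtomicBoolean
    import org.apache.spark.streaming.scheduler.{StreamingListener,
      StreamingListenerBatchCompleted}

    // Flips `overloaded` whenever a batch waited more than 5 s to be
    // scheduled: a simple signal that processing is falling behind ingest.
    class DelayWatcher(overloaded: AtomicBoolean) extends StreamingListener {
      override def onBatchCompleted(
          batch: StreamingListenerBatchCompleted): Unit = {
        val delayMs = batch.batchInfo.schedulingDelay.getOrElse(0L)
        overloaded.set(delayMs > 5000)
      }
    }

    // Wiring: ssc.addStreamingListener(new DelayWatcher(overloadFlag))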
From: Akhil Das [mailto:ak...@sigmoidanalytics.com]
Sent: Sunday, May 17, 2015 4:04 PM
To: dgoldenberg
Cc: user@spark.apache.org
Subject: Re: Spark Streaming and reducing latency
With receiver based streaming, you can actually
specify spark.streaming.blockInterval which is the interval at which the
receiver will fetch data from the source. Default value is 200ms and hence
if your batch duration is 1 second, it will produce 5 blocks of data. And
yes, with Spark Streaming, when …
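The arithmetic Akhil describes is blocks per batch = batch duration /
blockInterval, so 1000 ms / 200 ms = 5 blocks, which also means 5 tasks per
receiver per batch. A minimal sketch of setting both knobs, using the default
values he quotes:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    // 1 s batches cut into 200 ms blocks: 5 blocks, hence 5 map tasks,
    // per receiver per batch. A smaller blockInterval raises parallelism.
    val conf = new SparkConf()
      .setAppName("BlockIntervalDemo")
      .set("spark.streaming.blockInterval", "200ms")
    val ssc = new StreamingContext(conf, Seconds(1))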