Re: Explanation regarding Spark Streaming

2016-08-06 Thread Mich Talebzadeh

RE: Explanation regarding Spark Streaming

2016-08-06 Thread Mohammed Guller
> Hi, I think the default storage level
> <http://spark.apache.org/docs/latest/programming-guide.html#rdd-persistence>
> is MEMORY_ONLY.
>
> HTH
>
> Dr Mich Talebzadeh

Re: Explanation regarding Spark Streaming

2016-08-06 Thread Mich Talebzadeh

RE: Explanation regarding Spark Streaming

2016-08-06 Thread Mohammed Guller
… performance even worse.

Mohammed

> Hi, Thanks for explanation, but it does not prove Spark will OOM at some point. You …

Re: Explanation regarding Spark Streaming

2016-08-06 Thread Mich Talebzadeh
>> … at the same rate.
>>
>> Also keep in mind that windowing operations on a DStream implicitly
>> persist every RDD in a DStream in memory.
>>
>> Mohammed
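Mohammed's point about windowed DStreams can be made concrete with a back-of-envelope model. This is plain Python, not Spark code, and all of the numbers (window length, per-batch footprint) are hypothetical illustrations: a window covering W seconds of a stream that arrives as one RDD per batch interval must keep roughly W / batch-interval RDDs resident at once.

```python
# Back-of-envelope model (NOT Spark code): how many batch RDDs a windowed
# DStream must keep in memory simultaneously. All figures are hypothetical.

def batches_retained(window_sec: int, batch_interval_sec: int) -> int:
    # One RDD arrives per batch interval; the window spans window_sec of them.
    return window_sec // batch_interval_sec

window_sec = 300         # assumed 5-minute window
batch_interval_sec = 60  # 60-second batches, as in the original question
batch_size_mb = 200      # hypothetical per-batch footprint

retained = batches_retained(window_sec, batch_interval_sec)
print(retained)                   # 5 RDDs held at once for this window
print(retained * batch_size_mb)   # ~1000 MB resident just for window state
```

With the default MEMORY_ONLY storage level mentioned earlier in the thread, all of that window state competes for executor heap, which is why long windows over short batch intervals raise memory pressure.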

RE: Explanation regarding Spark Streaming

2016-08-06 Thread Jacek Laskowski
> On Fri, Aug 5, 2016 at 12:48 AM, Mohammed Guller <moham...@glassbeam.com> wrote:
> > and eventually you will run out of memory.
>
> Why? Mind elaborating?
>
> Jacek

RE: Explanation regarding Spark Streaming

2016-08-05 Thread Mohammed Guller
On Fri, Aug 5, 2016 at 12:48 AM, Mohammed Guller <moham...@glassbeam.com> wrote:
> and eventually you will run out of memory.

Re: Explanation regarding Spark Streaming

2016-08-04 Thread Jacek Laskowski
On Fri, Aug 5, 2016 at 12:48 AM, Mohammed Guller wrote:
> and eventually you will run out of memory.

Why? Mind elaborating?

Jacek

Re: Explanation regarding Spark Streaming

2016-08-04 Thread Mich Talebzadeh
> … will run out of memory.
>
> Mohammed
>
> Author: Big Data Analytics with Spark
> <http://www.amazon.com/Big-Data-Analytics-Spark-Practitioners/dp/1484209656/>

RE: Explanation regarding Spark Streaming

2016-08-04 Thread Mohammed Guller
> Hi, I have a query. Q1: What will happen if a Spark Streaming job has batchDurationTime of 60 sec and the processing time of the complete pipeline is greater than 60 sec?
>
> Saurav Sinha

Explanation regarding Spark Streaming

2016-08-04 Thread Saurav Sinha
Hi, I have a query.

Q1. What will happen if a Spark Streaming job has batchDurationTime as 60 sec and the processing time of the complete pipeline is greater than 60 sec?

--
Thanks and Regards,
Saurav Sinha
Contact: 9742879062
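The scenario the thread converges on (Mohammed's "eventually you will run out of memory") can be sketched with a toy queueing model. This is plain Python, not Spark; the 60 s batch interval comes from the original question, while the 90 s processing time and strictly sequential processing are assumptions. When batches arrive faster than they finish, the backlog of unprocessed (and cached) batches grows without bound:

```python
# Toy simulation (NOT Spark code) of batch backlog growth when the
# processing time per batch exceeds the batch interval. The 90 s
# processing time is a hypothetical example.

def backlog_after(elapsed_sec: int,
                  batch_interval_sec: int = 60,
                  processing_sec: int = 90) -> int:
    arrived = elapsed_sec // batch_interval_sec    # one batch every interval
    processed = elapsed_sec // processing_sec      # one batch at a time
    return arrived - processed                     # batches waiting in queue

for t in (600, 3600, 36000):
    print(t, backlog_after(t))   # backlog grows roughly linearly with time
```

After 10 minutes the model already shows 4 queued batches; after 10 hours, 200. Each queued batch holds its received data in memory, which is the mechanism behind the OOM claim debated in the thread, unless processing is sped up below the batch interval or an ingestion-limiting mechanism such as Spark's backpressure support keeps the queue bounded.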