@Tathagata Das, so basically you are saying it is supported out of the box,
but we should expect a significant performance hit - is that right?
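
For context, here is a minimal sketch of the kind of job we have in mind
(all names are illustrative, and the socket source is just a stand-in for
our Kafka input). My understanding is that the inverse-reduce form of
reduceByKeyAndWindow lets Spark update the window incrementally instead of
re-reducing the whole window on every slide, and that persisting the
windowed stream as MEMORY_AND_DISK_SER makes evicted blocks spill to disk
in serialized form:

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}

object LargeWindowSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("large-window-sketch")
    val ssc = new StreamingContext(conf, Seconds(10))
    // Checkpointing is required by the inverse-reduce window below.
    ssc.checkpoint("/tmp/checkpoint")

    // Stand-in source; we actually read from Kafka.
    val lines = ssc.socketTextStream("localhost", 9999)
    val counts = lines.flatMap(_.split(" ")).map(word => (word, 1L))

    // Incremental window: add batches entering the window and subtract
    // batches leaving it, rather than re-reducing the whole hour of data
    // on every 10-second slide.
    val windowed = counts.reduceByKeyAndWindow(
      (a: Long, b: Long) => a + b,  // merge counts entering the window
      (a: Long, b: Long) => a - b,  // remove counts leaving the window
      Seconds(3600),                // window length: one hour
      Seconds(10)                   // slide interval
    )

    // Serialized memory-and-disk storage: evicted blocks go to disk
    // instead of being dropped, at the cost of ser/deser overhead.
    windowed.persist(StorageLevel.MEMORY_AND_DISK_SER)
    windowed.print()

    ssc.start()
    ssc.awaitTermination()
  }
}

Does that look like a reasonable way to soften the hit you describe, or is
the disk churn unavoidable once the window no longer fits in memory?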



On Tue, Feb 24, 2015 at 5:37 AM, Tathagata Das <t...@databricks.com> wrote:

> The default persistence level is MEMORY_AND_DISK, so the LRU policy will
> evict blocks to disk rather than drop them, and the streaming app will not
> fail. However, since data will constantly be written to and read back from
> disk as windows are processed, the performance won't be great. So it is
> best to have sufficient memory to keep all the window data in memory.
>
> TD
>
> On Mon, Feb 23, 2015 at 8:26 AM, Shao, Saisai <saisai.s...@intel.com>
> wrote:
>
>> I don't think current Spark Streaming supports window operations whose
>> data exceeds the available memory. Internally, Spark Streaming keeps all
>> the data belonging to the effective window in memory; if memory is not
>> enough, the BlockManager will discard blocks according to an LRU policy,
>> so something unexpected may occur.
>>
>> Thanks
>> Jerry
>>
>> -----Original Message-----
>> From: avilevi3 [mailto:avile...@gmail.com]
>> Sent: Monday, February 23, 2015 12:57 AM
>> To: user@spark.apache.org
>> Subject: spark streaming window operations on a large window size
>>
>> Hi guys,
>>
>> Does Spark Streaming support window operations on a sliding window whose
>> data is larger than the available memory?
>> Currently we are using Kafka as input, but we could change that if needed.
>>
>> thanks
>> Avi
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-spark-user-list.1001560.n3.nabble.com/spark-streaming-window-operations-on-a-large-window-size-tp21764.html
>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>> For additional commands, e-mail: user-h...@spark.apache.org
>>
>>
>
