From: Sandeep Giri [mailto:sand...@knowbigdata.com]
Sent: Thursday, October 29, 2015 3:09 PM
To: user <user@spark.apache.org>; dev <d...@spark.apache.org>
Subject: Maintaining overall cumulative data in Spark Streaming
Dear All,

If a continuous stream of text is coming in and you have to keep publishing the overall word count so far since 0:00 today, what would you do?

Publishing the results for a window is easy, but if we have to keep aggregating the results, how should we go about it?

I have tried to keep an
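One common approach in the Spark 1.x Streaming API (current as of this thread) is `updateStateByKey`, which carries per-key state across batches instead of only within a window. The sketch below is illustrative, not a tested job: the socket source on `localhost:9999` and the checkpoint path are assumptions, and resetting at 0:00 each day would additionally require restarting the job at midnight or keying the state by date. The pure update function can be understood on its own; the Spark wiring is shown in comments since it needs a running cluster.

```python
# A minimal sketch of cumulative word counting with updateStateByKey.
# The update function merges one batch's partial counts for a word
# into that word's running total.

def update_count(new_values, running_count):
    """new_values: partial counts for one word in the current batch.
    running_count: the total so far (None the first time a word is seen).
    Returns the new cumulative total for that word."""
    return sum(new_values) + (running_count or 0)

# Spark Streaming wiring (hypothetical source and checkpoint path):
#
# from pyspark import SparkContext
# from pyspark.streaming import StreamingContext
#
# sc = SparkContext(appName="CumulativeWordCount")
# ssc = StreamingContext(sc, batchDuration=10)
# ssc.checkpoint("hdfs:///tmp/wordcount-checkpoint")  # required for stateful ops
#
# lines = ssc.socketTextStream("localhost", 9999)
# totals = (lines.flatMap(lambda line: line.split())
#                .map(lambda word: (word, 1))
#                .updateStateByKey(update_count))
# totals.pprint()  # cumulative counts since the job started
# ssc.start(); ssc.awaitTermination()
```

Because the state is checkpointed, the cumulative counts can survive driver restarts; without `ssc.checkpoint(...)` a stateful stream will fail at startup.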