Thanks for your answers. I know Kafka's model, but I would rather avoid
having to set up both Spark and Kafka to handle my use case. I wonder
if it might be possible to handle it using Spark's standard streams?
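A single streaming aggregation over one of Spark's built-in sources -- e.g. the file source -- needs no Kafka at all. A minimal sketch, assuming Spark 2.0's Structured Streaming (the path and schema here are made up for illustration):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

val spark = SparkSession.builder.appName("no-kafka-agg").getOrCreate()
import spark.implicits._

// File sources need an explicit schema.
val schema = new StructType().add("key", StringType).add("value", LongType)

// Watch a directory for new JSON files -- a built-in source, no Kafka.
val events = spark.readStream.schema(schema).json("/data/incoming")

// One level of streaming aggregation is supported out of the box.
val counts = events.groupBy($"key").count()

counts.writeStream
  .outputMode("complete")
  .format("console")
  .start()
  .awaitTermination()
```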
--
Arnaud Bailly
twitter: abailly
skype: arnaud-bailly
linkedin: http://fr.linkedin.com/in/arnaudbailly/
fine with devoting some time to it.
--
Arnaud Bailly
On Thu, Jul 7, 2016 at 2:17 PM, Sivakumaran S <siva.kuma...@me.com> wrote:
> Arnaud,
>
> You could aggregate the first table
Jul 2016 12:55, "Sivakumaran S" <siva.kuma...@me.com> wrote:
> Hi Arnaud,
>
> Sorry for the doubt, but what exactly is multiple aggregation? What is the
> use case?
>
> Regards,
>
> Sivakumaran
>
>
> On 07-Jul-2016, at 11:18 AM, Arnaud Bailly
reads from this output
Does this make sense?
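Sketching that two-query workaround (paths and schemas are hypothetical): the first query persists its aggregate, and a second, independent query treats that output directory as its source. One caveat: in Spark 2.0 the file sink only supported append mode, while aggregations required complete mode, so this exact pairing was not yet possible.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.sum
import org.apache.spark.sql.types._

val spark = SparkSession.builder.getOrCreate()
import spark.implicits._

val schema = new StructType().add("key", StringType).add("value", LongType)
val events = spark.readStream.schema(schema).json("/data/incoming")

// Query 1: first aggregation, persisted to intermediate storage.
events.groupBy($"key").count()
  .writeStream
  .outputMode("complete")            // aggregations need complete mode...
  .format("parquet")                 // ...but the 2.0 file sink was append-only,
  .option("path", "/data/stage1")    // which is exactly the limitation at issue
  .option("checkpointLocation", "/chk/stage1")
  .start()

// Query 2: an independent query reads the intermediate output back as a
// stream and aggregates it a second time.
val stage1Schema = new StructType().add("key", StringType).add("count", LongType)
val total = spark.readStream.schema(stage1Schema)
  .parquet("/data/stage1")
  .agg(sum("count"))
```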
Furthermore, I would like to understand the technical hurdles that are
currently preventing Spark SQL from implementing multiple aggregations.
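For reference, "multiple aggregation" here means chaining two aggregations in one streaming query, which the Spark 2.0 analyzer rejects. A minimal sketch (schema and path are made up):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.sum
import org.apache.spark.sql.types._

val spark = SparkSession.builder.getOrCreate()
import spark.implicits._

val schema = new StructType().add("key", StringType).add("value", LongType)
val events = spark.readStream.schema(schema).json("/data/incoming")

// Two aggregations chained in a single streaming query:
val perKey = events.groupBy($"key").count()
val total  = perKey.agg(sum("count"))

// Starting this query fails in Spark 2.0 with an AnalysisException:
// "Multiple streaming aggregations are not supported with
//  streaming DataFrames/Datasets"
total.writeStream.outputMode("complete").format("console").start()
```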
Thanks,
--
Arnaud Bailly
stream-to-stream JOINs in Spark's code?
Thanks,
--
Arnaud Bailly
On Thu, Jul 7, 2016 at 9:17 AM, Tathagata Das <tathagata.das1...@gmail.com>
wrote:
> We will look into streaming-streaming joins
data streams? If I
have a query that aggregates some value over some key, and I delete all
instances of that key, I would expect the query to output a result removing
the key's aggregated value. The same is true for updates...
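One detail worth noting here: Structured Streaming reads its sources as append-only logs, so a delete in the source is never observed by the query, and no retraction is emitted. What later releases (after 2.0, in Spark 2.1.1+) did add is an "update" output mode that re-emits only the aggregate rows that changed in each trigger. A sketch, with hypothetical paths:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

val spark = SparkSession.builder.getOrCreate()
import spark.implicits._

val schema = new StructType().add("key", StringType).add("value", LongType)
val events = spark.readStream.schema(schema).json("/data/incoming")

val counts = events.groupBy($"key").count()

// "update" mode re-emits only the aggregate rows that changed in a trigger.
// There is still no retraction when source rows are deleted: the engine
// never sees the delete, since sources are treated as append-only.
counts.writeStream
  .outputMode("update")              // not yet available in Spark 2.0
  .format("console")
  .option("checkpointLocation", "/chk/counts")
  .start()
```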
Thanks for any insights you might want to share.
Regards,
--
Arnaud Bailly