Hello,

I understand that multiple aggregations over streaming dataframes are not
currently supported in Spark 2.0. Is there a workaround? Off the top of
my head, I can think of a two-stage approach (rough sketch below):
 - a first query writes its aggregated output to disk/memory using "complete" mode
 - a second query reads from this output and applies the next aggregation
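
To make the idea concrete, here is a minimal sketch of what I have in mind.
The socket source, port, table name, and column names are just placeholders,
and the second stage is a plain batch query, not a streaming one:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.sum

    val spark = SparkSession.builder.appName("two-stage-agg").getOrCreate()
    import spark.implicits._

    // Stage 1: streaming query performing the first aggregation and
    // writing it to an in-memory table in "complete" mode.
    val lines = spark.readStream
      .format("socket")                  // placeholder source
      .option("host", "localhost")
      .option("port", 9999)
      .load()                            // one column: value: String

    val firstAgg = lines.groupBy($"value").count()

    val stage1 = firstAgg.writeStream
      .outputMode("complete")
      .format("memory")                  // memory sink supports complete mode
      .queryName("word_counts")          // exposed as a queryable table
      .start()

    // Stage 2: a batch query over the materialized table applies the
    // second aggregation. It only sees a snapshot of the first query's
    // results, so it would have to be re-run to pick up new data.
    val secondAgg = spark.table("word_counts")
      .agg(sum($"count").as("total"))

    secondAgg.show()

The obvious drawback is that the second stage is not continuous: it has to
be re-executed (or scheduled) to reflect new results from the first query.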

Does this make sense?

Furthermore, I would like to understand what technical hurdles are
preventing Spark SQL from supporting multiple aggregations right now.

Thanks,
-- 
Arnaud Bailly

twitter: abailly
skype: arnaud-bailly
linkedin: http://fr.linkedin.com/in/arnaudbailly/
