Re: Spark DataSets and multiple write(.) calls

2018-11-20 Thread Gourav Sengupta
Hi, this is interesting. Could you please share the code for this and, if possible, the source schema? It would also be great if you could share a sample file. Regards, Gourav Sengupta

Re: Spark DataSets and multiple write(.) calls

2018-11-20 Thread Michael Shtelma
You can also cache the data frame on disk if it does not fit into memory. An alternative would be to write the data frame out as Parquet and then read it back; you can check whether the whole pipeline runs faster this way than with the standard cache. Best, Michael
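A minimal sketch of both variants, assuming spark is the SparkSession, df is the expensive DataFrame, and the paths are placeholders:

    import org.apache.spark.storage.StorageLevel

    // Variant 1: persist the DataFrame on disk instead of in memory
    val cached = df.persist(StorageLevel.DISK_ONLY)
    cached.write.parquet("/out/sink1")
    cached.write.parquet("/out/sink2")

    // Variant 2: materialize the DataFrame as Parquet once, then read it back;
    // for large data this can beat the built-in cache
    df.write.parquet("/tmp/intermediate")
    val reread = spark.read.parquet("/tmp/intermediate")
    reread.write.parquet("/out/sink1")
    reread.write.parquet("/out/sink2")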

Re: Spark DataSets and multiple write(.) calls

2018-11-20 Thread Dipl.-Inf. Rico Bergmann
Hi! Thanks, Vadim, for your answer. But this would be like caching the dataset, right? Or is checkpointing faster than persisting to memory or disk? I attach a PDF of my dataflow program. If I could compute outputs 1-5 in parallel, the output of flatmap1 and groupBy could be reused,
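A sketch of how that shared prefix could be computed once and reused; the operator names below are hypothetical stand-ins, since the real pipeline is only in the attached PDF:

    import spark.implicits._

    val shared = input                     // assume input: Dataset[String]
      .flatMap(_.split(";"))               // stand-in for "flatmap1"
      .groupBy($"value")                   // the shared groupBy
      .count()
      .persist()                           // computed once, reused by each write

    shared.write.parquet("/out/output1")
    shared.where($"count" > 1).write.parquet("/out/output2")
    // ... outputs 3-5 would reuse `shared` the same way

The writes still run one after another on the driver, but the expensive flatmap/groupBy stages execute only once.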

Re: Spark DataSets and multiple write(.) calls

2018-11-19 Thread Vadim Semenov
You can use checkpointing: in this case Spark will write out an RDD to whatever destination you specify, and the RDD can then be reused from the checkpointed state, avoiding recomputation.
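A minimal sketch using Dataset.checkpoint(), with expensiveDf and the paths as placeholders:

    // A checkpoint directory must be set before checkpointing
    spark.sparkContext.setCheckpointDir("hdfs:///tmp/checkpoints")

    // checkpoint() is eager by default: it materializes the Dataset right away,
    // and later actions read the checkpointed files instead of recomputing
    val stable = expensiveDf.checkpoint()

    stable.write.parquet("/out/sink1")
    stable.write.parquet("/out/sink2")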

Re: Spark DataSets and multiple write(.) calls

2018-11-19 Thread Dipl.-Inf. Rico Bergmann
Thanks for your advice. But I'm using batch processing. Does anyone have a solution for the batch processing case? Best, Rico.

Re: Spark DataSets and multiple write(.) calls

2018-11-19 Thread Magnus Nilsson
I had the same requirements. As far as I know, the only way is to extend the ForeachWriter, cache the micro-batch result, and write to each output. https://docs.databricks.com/spark/latest/structured-streaming/foreach.html Unfortunately it seems as if
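For the streaming case, Spark 2.4 also added foreachBatch, which covers the same requirement without a custom ForeachWriter; a sketch, with streamingDf and the sink paths as placeholders:

    import org.apache.spark.sql.DataFrame

    streamingDf.writeStream
      .foreachBatch { (batch: DataFrame, batchId: Long) =>
        batch.persist()                    // cache the micro-batch once
        batch.write.mode("append").parquet("/out/sink1")
        batch.write.mode("append").json("/out/sink2")
        batch.unpersist()
      }
      .start()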

Spark DataSets and multiple write(.) calls

2018-11-19 Thread Dipl.-Inf. Rico Bergmann
Hi! I have a SparkSQL program with one input and 6 outputs (write). When executing this program, every call to write(.) executes the plan. My problem is that I want all these writes to happen in parallel (inside one execution plan), because all writes have a common and compute-intensive
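A sketch of the shape of the problem, with hypothetical names (heavyTransform stands in for the common prefix):

    // Each write(...) is an action that re-runs the shared upstream stages
    val expensive = input.map(heavyTransform)   // common, compute-intensive prefix

    expensive.write.parquet("/out/output1")     // executes the full plan
    expensive.write.parquet("/out/output2")     // executes it again, ... up to 6

    // Simplest mitigation: persist the common prefix before the writes
    val shared = expensive.persist()
    shared.write.parquet("/out/output1")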