ForeachWriter works at the record level, so you cannot do bulk ingest into
KairosDB even though it supports bulk inserts; writing one record at a time
will be slow. Instead, you can provide your own Sink implementation, which
operates at the batch (DataFrame) level.
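For illustration, here is a rough sketch of what such a Sink could look
like on Spark 2.x. One caveat: the Sink trait lives in
org.apache.spark.sql.execution.streaming, an internal package, so it may
change between releases. The KairosClient object, the kairos.url option,
and the com.example package below are hypothetical placeholders, not part
of Spark or any KairosDB library:

import org.apache.spark.sql.{DataFrame, Row, SQLContext}
import org.apache.spark.sql.execution.streaming.Sink
import org.apache.spark.sql.sources.StreamSinkProvider
import org.apache.spark.sql.streaming.OutputMode

// Hypothetical bulk client: builds one JSON payload from all rows and
// POSTs it in a single request to KairosDB's /api/v1/datapoints endpoint.
object KairosClient {
  def sendBulk(url: String, rows: Array[Row]): Unit = {
    // serialize rows to KairosDB datapoint JSON and POST them (omitted)
  }
}

// Receives each micro-batch as a whole DataFrame, so all rows can be
// sent in one bulk request instead of one insert per record.
class KairosSink(options: Map[String, String]) extends Sink {
  override def addBatch(batchId: Long, data: DataFrame): Unit = {
    // collect() is acceptable here only because the batches are small
    // (~10 MB); for larger batches, write per partition on the executors.
    val rows = data.collect()
    KairosClient.sendBulk(options("kairos.url"), rows)
  }
}

// Lets the sink be referenced from writeStream.format(...).
class KairosSinkProvider extends StreamSinkProvider {
  override def createSink(
      sqlContext: SQLContext,
      parameters: Map[String, String],
      partitionColumns: Seq[String],
      outputMode: OutputMode): Sink =
    new KairosSink(parameters)
}

You would then start the query with something like:

df.writeStream
  .format("com.example.KairosSinkProvider") // fully-qualified provider class
  .option("kairos.url", "http://kairos-host:8080")
  .start()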

Thanks,
http://www.snappydata.io/blog

On Thu, Jun 21, 2018 at 10:54 AM, subramgr <subramanian.gir...@gmail.com>
wrote:

> Hi Spark Mailing list,
>
> We are looking for pushing the output of the structured streaming query
> output to KairosDB. (time series database)
>
> What would be the recommended way of doing this? Do we implement the *Sink*
> trait or do we use the *ForEachWriter*
>
> At each trigger point, if I do a *dataset.collect()*, the size of the data
> is not huge; it should be in the low tens of MB.
>
> Any suggestions?
>
> Thanks
> Girish