Thanks for the update. I'm interested in writing the results to MySQL as
well. Could you shed some light on how you set up the driver/connection
pool, etc., or share a code sample?
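
To make the question concrete, here is roughly what I have in mind on my
end, building on the .updateStateByKey[Long](updateFn, numPartitions)
change you described and then writing each partition out over plain JDBC.
Everything below is only a sketch: the JDBC URL, credentials, table and
column names, the "counts" DStream, and the partition count of 8 are
placeholders from my own setup, and I suspect you swap the raw
DriverManager call for a real pool (e.g. a lazily initialized HikariCP
DataSource per executor).

import java.sql.DriverManager

// counts: DStream[(String, Long)]; updateFn as in the message below
val state = counts.updateStateByKey[Long](updateFn, 8)

state.foreachRDD { rdd =>
  rdd.foreachPartition { partition =>
    // One connection per partition per batch; the MySQL connector jar
    // has to be on the executor classpath.
    val conn = DriverManager.getConnection(
      "jdbc:mysql://localhost:3306/mydb", "user", "password")
    val stmt = conn.prepareStatement(
      "INSERT INTO counts (k, v) VALUES (?, ?) ON DUPLICATE KEY UPDATE v = VALUES(v)")
    try {
      conn.setAutoCommit(false)
      partition.foreach { case (key, value) =>
        stmt.setString(1, key)
        stmt.setLong(2, value)
        stmt.addBatch()
      }
      stmt.executeBatch()
      conn.commit()
    } finally {
      stmt.close()
      conn.close()
    }
  }
}

Is that roughly the shape of it, or do you keep the connection/pool alive
across batches?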

On Thu, Sep 25, 2014 at 4:00 PM, maddenpj <madde...@gmail.com> wrote:

> Update for posterity: once again I solved the problem shortly after
> posting to the mailing list. updateStateByKey uses the default
> partitioner, which in my case appeared to be set to a single partition.
>
> Changing my call from .updateStateByKey[Long](updateFn) to
> .updateStateByKey[Long](updateFn, numPartitions) resolved it for me.
