The Spark Cassandra connector does something like this! But I don't think it
actually implements a custom partitioner; I believe it just leverages the
token-aware policy and batches writes within a partition by default, though
you can also batch across partitions that share the same replica.
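
For context, here is a minimal, untested sketch of the pattern described
above, using the DataStax Java driver's TokenAwarePolicy and an UNLOGGED
batch whose statements all share one partition key, so the driver can route
the whole batch to an owning replica. The keyspace, table, and columns are
made up for illustration, and this assumes driver 3.x:

// Sketch: token-aware routing of a single-partition UNLOGGED batch.
// Keyspace "my_keyspace" and table "events" are hypothetical.
import com.datastax.driver.core.*;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
import com.datastax.driver.core.policies.TokenAwarePolicy;

public class TokenAwareBatchWrite {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                // TokenAwarePolicy sends each statement to a replica that
                // owns the statement's routing (partition) key.
                .withLoadBalancingPolicy(
                        new TokenAwarePolicy(DCAwareRoundRobinPolicy.builder().build()))
                .build();
        Session session = cluster.connect("my_keyspace");

        PreparedStatement ps = session.prepare(
                "INSERT INTO events (device_id, ts, value) VALUES (?, ?, ?)");

        // All rows share the same partition key (device_id), so the driver
        // computes one routing key and the batch lands on an owning replica.
        BatchStatement batch = new BatchStatement(BatchStatement.Type.UNLOGGED);
        String deviceId = "device-42";
        for (long ts = 0; ts < 10; ts++) {
            batch.add(ps.bind(deviceId, ts, Math.random()));
        }
        session.execute(batch);

        cluster.close();
    }
}

Batching across partitions that land on the same replica works the same way
in principle, but the driver only derives a batch's routing key from its
first statement, so you would have to group statements by replica yourself.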

On Thu, Oct 27, 2016 at 8:41 AM, Shannon Carey <sca...@expedia.com> wrote:

> It certainly seems possible to write a Partitioner that does what you
> describe. I started implementing one but didn't have time to finish it. I
> think the main difficulty is in properly dealing with partition ownership
> changes in Cassandra: if you are maintaining state in Flink and the
> partitioning changes, your job might produce inaccurate output. If, on the
> other hand, you are only using the partitioner immediately before the
> output, dynamic partitioning changes might be OK. (A rough sketch of such
> a partitioner follows below the quoted thread.)
>
>
> From: kant kodali <kanth...@gmail.com>
> Date: Thursday, October 27, 2016 at 3:17 AM
> To: <user@flink.apache.org>
> Subject: Can we do batch writes on cassandra using flink while leveraging
> the locality?
>
> Can we do batch writes on Cassandra using Flink while leveraging the
> locality? For example, batch writes in Cassandra put pressure on the
> coordinator, but since the connectors are built to leverage locality, I
> was wondering whether we could do a batch of writes on the node where the
> batch belongs?
>
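
To make the Partitioner idea above concrete, here is a rough, untested
sketch (assuming the DataStax Java driver 3.x and Flink's DataStream API;
the contact point, keyspace, and string keys are placeholders). It maps each
key to a replica that owns its token and then to a Flink channel; as noted
above, it snapshots ring ownership and will not track ownership changes:

// Sketch: a Flink Partitioner that groups records by the Cassandra replica
// owning their partition key's token. All names here are hypothetical.
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Host;
import com.datastax.driver.core.Metadata;
import org.apache.flink.api.common.functions.Partitioner;

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Set;

public class CassandraReplicaPartitioner implements Partitioner<String> {

    private final String keyspace;              // hypothetical keyspace name
    private transient Metadata metadata;        // rebuilt per task after deserialization

    public CassandraReplicaPartitioner(String keyspace) {
        this.keyspace = keyspace;
    }

    private Metadata metadata() {
        if (metadata == null) {
            // One control connection per task; the contact point is an assumption.
            metadata = Cluster.builder()
                    .addContactPoint("127.0.0.1")
                    .build()
                    .getMetadata();
        }
        return metadata;
    }

    @Override
    public int partition(String key, int numPartitions) {
        // For a single text partition key, the routing key is its UTF-8 bytes.
        ByteBuffer routingKey = ByteBuffer.wrap(key.getBytes(StandardCharsets.UTF_8));
        Set<Host> replicas = metadata().getReplicas(keyspace, routingKey);
        if (replicas.isEmpty()) {
            return (key.hashCode() & Integer.MAX_VALUE) % numPartitions; // fallback
        }
        // Map the first owning replica to a channel. This is a snapshot of
        // the ring, so ownership changes in Cassandra are not picked up
        // (the caveat raised above).
        Host owner = replicas.iterator().next();
        return (owner.getAddress().hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}

It would be wired in ahead of the Cassandra sink with something like
stream.partitionCustom(new CassandraReplicaPartitioner("my_keyspace"), keySelector),
so that records destined for the same replica end up in the same subtask.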
