It certainly seems possible to write a Partitioner that does what you describe. 
I started implementing one but didn't have time to finish it. I think the main 
difficulty is properly handling partition ownership changes in Cassandra… if 
you are maintaining state in Flink and the token ownership changes, your job 
might produce inaccurate output. If, on the other hand, you use the partitioner 
only immediately before the output, dynamic partitioning changes might be OK.
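To make the idea concrete, here is a minimal sketch of the core of such a partitioner: mapping a Cassandra Murmur3 token to a Flink channel by slicing the token ring into contiguous ranges. It assumes Cassandra's default Murmur3Partitioner (tokens span the full signed 64-bit range) and assumes, hypothetically, that subtask i is co-located with the node owning the i-th slice of the ring; the class and method names are illustrative, not part of any real connector API. In a real job this logic would sit inside an org.apache.flink.api.common.functions.Partitioner and would need to consult the driver's token metadata to handle ownership changes.

```java
// Hypothetical sketch: map a Murmur3 token to one of numPartitions
// contiguous slices of the Cassandra token ring. Assumes the default
// Murmur3Partitioner, whose tokens cover the full signed 64-bit range.
public class TokenRangePartitioner {

    // Returns the channel (0..numPartitions-1) whose ring slice
    // contains the given token.
    public static int partition(long token, int numPartitions) {
        // Shift the signed token into [0, 2^64) and normalize to [0, 1).
        double fractionOfRing =
                (token - (double) Long.MIN_VALUE) / Math.pow(2, 64);
        int channel = (int) (fractionOfRing * numPartitions);
        // Guard the top edge: double rounding can yield exactly 1.0.
        return Math.min(channel, numPartitions - 1);
    }
}
```

A token at the very start of the ring lands on channel 0, a token near zero lands in the middle, and the top of the ring maps to the last channel; the real open question, as noted above, is keeping this mapping in sync when nodes join or leave.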


From: kant kodali <kanth...@gmail.com>
Date: Thursday, October 27, 2016 at 3:17 AM
To: <user@flink.apache.org>
Subject: Can we do batch writes on cassandra using flink while leveraging the 
locality?

Can we do batch writes on Cassandra using Flink while leveraging the locality? 
For example, batch writes in Cassandra put pressure on the coordinator, but 
since the connectors are built to leverage locality, I was wondering if we 
could send each batch of writes to the node where that batch belongs?
