Soheil, 
As Jeff mentioned, you will need to provide more information. There are no known 
issues I can think of that would cause such behavior. It would be great if you 
could provide a reduced test case so we can try to reproduce the behavior, or at 
least help you debug the issue. Could you detail the Cassandra version, the 
number of nodes, the keyspace definition, the replication factor and consistency 
level (RF / CL), and perhaps a bit of your client code that does the writes? Did 
you get back any errors on the client or on the server side? These details would 
help us assist you further.
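In particular, if your consumer writes with executeAsync, a failed write is easy to 
miss unless you attach a callback or block on the future. A rough sketch of what I 
mean (assuming the DataStax Java driver 3.x; the keyspace, table, and column names 
below are only placeholders, not your schema):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.ResultSetFuture;
    import com.datastax.driver.core.Session;
    import com.google.common.util.concurrent.FutureCallback;
    import com.google.common.util.concurrent.Futures;
    import com.google.common.util.concurrent.MoreExecutors;

    public class AsyncWriteLogging {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect("my_keyspace")) {

                // Placeholder insert; substitute your own prepared statement.
                ResultSetFuture future = session.executeAsync(
                        "INSERT INTO events (id, event_ts, payload) VALUES (1, toTimestamp(now()), 'x')");

                // Without a callback (or a blocking get), an async failure is silently dropped.
                Futures.addCallback(future, new FutureCallback<ResultSet>() {
                    public void onSuccess(ResultSet rs) { /* write acknowledged */ }
                    public void onFailure(Throwable t) {
                        System.err.println("Write failed: " + t);
                    }
                }, MoreExecutors.directExecutor());

                // Block here only to keep this sketch self-contained.
                future.getUninterruptibly();
            }
        }
    }

Logging failures this way (or simply calling getUninterruptibly on each future) should 
tell us whether the missing rows ever reached the cluster at all.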
Thanks,
Dinesh 

    On Saturday, April 21, 2018, 11:06:12 AM PDT, Soheil Pourbafrani 
<soheil.i...@gmail.com> wrote:  
 
 I consume data from Kafka and insert it into a Cassandra cluster using the Java 
API. The table has four primary key columns, one of which is a millisecond-precision 
timestamp. But when I run the code, only 120 to 190 rows are inserted and the rest 
of the incoming data is ignored!
What could be causing this? Insert code that builds the key columns in a way that 
makes rows overwrite each other, an improper cluster configuration, something else?
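
For reference, here is a minimal sketch of the overwrite case I suspect, assuming the 
DataStax Java driver 3.x; the table, column names, and values are made up for 
illustration and are not my actual schema:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.PreparedStatement;
    import com.datastax.driver.core.Session;

    public class OverwriteSketch {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect("demo_ks")) {

                // Hypothetical table:
                //   CREATE TABLE events (sensor_id text, day text, hour int, event_ts bigint,
                //                        payload text,
                //                        PRIMARY KEY ((sensor_id), day, hour, event_ts));
                PreparedStatement ps = session.prepare(
                        "INSERT INTO events (sensor_id, day, hour, event_ts, payload) VALUES (?, ?, ?, ?, ?)");

                // If two Kafka records map to the same four key values (e.g. the same
                // millisecond timestamp), the second INSERT silently replaces the first:
                // Cassandra treats INSERT as an upsert and reports no error.
                session.execute(ps.bind("sensor-1", "2018-04-21", 11, 1524303972000L, "first"));
                session.execute(ps.bind("sensor-1", "2018-04-21", 11, 1524303972000L, "second"));

                // A SELECT on this key now returns one row ("second"), not two.
            }
        }
    }

If many Kafka records land in the same millisecond with the same other key columns, 
that alone would explain seeing only 120 to 190 rows.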
