Hi,

I'm currently doing some tests comparing 2 use cases:

- 6 servers
- 2400 ops per second from 100 clients
- enough ops total to take 30 minutes (2400 * 60 * 30 = 4.32 million ops)
- 120 million 1KB records loaded to start
Case A: mix of 95% reads, 5% updates
Case B: mix of 95% reads, 5% inserts

From a client API standpoint, these cases are identical. The only difference is that in Case A I only do operations on the 120 million keys I already inserted, while in Case B I do operations on new keys.

I was expecting similar performance from both, but I'm having problems with Case B. In particular, after running continuously for 5-10 minutes, latencies go way up and my throughput drops to a couple of operations per second. I left it running for a while in case compactions were going on, but it never returned to its original throughput. I have no problems with Case A.

Any ideas on what's going on here? It may be related to the concurrent read workload, since the original load of the 120 million records (inserts only, with no reads running) completed fine, and that took several hours.

Thanks,
Adam
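
P.S. In case it helps, this is roughly the shape of the client loop, reduced to a stand-alone sketch. The in-memory dict, key format, and pacing below are placeholders rather than my actual client code; the only real difference between the two cases is how keys for the write operations are chosen.

import random
import time

NUM_RECORDS = 120_000_000    # keys preloaded before the test
READ_RATIO = 0.95            # 95% reads, 5% writes
OPS_PER_SEC = 2400           # aggregate rate across all 100 clients
VALUE = b"x" * 1024          # 1KB record payload

store = {}                   # placeholder for the real client connection


def write_key(case, writes_so_far):
    if case == "A":
        # Case A: update an existing key from the preloaded range
        return "user%d" % random.randrange(NUM_RECORDS)
    # Case B: insert a brand-new key beyond the preloaded range
    return "user%d" % (NUM_RECORDS + writes_so_far)


def run(case, total_ops):
    writes = 0
    for _ in range(total_ops):
        if random.random() < READ_RATIO:
            # reads always target the preloaded key range in both cases
            store.get("user%d" % random.randrange(NUM_RECORDS))
        else:
            store[write_key(case, writes)] = VALUE
            writes += 1
        time.sleep(1.0 / OPS_PER_SEC)   # crude pacing; the real clients run in parallel


run("B", OPS_PER_SEC * 60 * 30)         # 2400 * 60 * 30 = 4.32 million ops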
