Hi Alexis,
Assuming that bulk-loading a batch of sequential keys performs better than
accessing them one by one, the main question becomes whether we really need to
access all the keys that were bulk-loaded into the cache. In other words, the
cache hit rate is the key issue. If the rate is high, even though
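To make the comparison concrete, here is a minimal sketch of the two access
patterns, written against the standalone RocksDB Java API rather than Flink's
state backend internals (the DB path, key range, and key encoding are
placeholders for illustration):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.RocksIterator;

public class SequentialReadSketch {

    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();
        byte[] upperBound = longToBytes(1_000);

        try (RocksDB db = RocksDB.open("/tmp/example-db")) {
            // Pattern A: one point lookup per key; each get() hits or misses
            // the block cache independently.
            for (long k = 0; k < 1_000; k++) {
                byte[] value = db.get(longToBytes(k));
                // process(value) ...
            }

            // Pattern B: a single range scan over the same sequential keys;
            // adjacent keys share data blocks, so far fewer blocks need to be
            // fetched and cached.
            try (RocksIterator it = db.newIterator()) {
                for (it.seek(longToBytes(0));
                     it.isValid() && Arrays.compare(it.key(), upperBound) < 0;
                     it.next()) {
                    byte[] value = it.value();
                    // process(value) ...
                }
            }
        }
    }

    // Big-endian encoding so byte-wise key order matches numeric order.
    private static byte[] longToBytes(long v) {
        return ByteBuffer.allocate(Long.BYTES).putLong(v).array();
    }
}
```

The range scan touches each data block only once, so whether bulk loading pays
off depends entirely on how many of those keys are actually read afterwards,
i.e. the hit rate discussed above.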
Hi Marek, sorry for the late reply; it was the Spring Festival in China.
When the upsert keys are empty (i.e. they can't be deduced) or differ from the
primary key of the sink, Flink will generate an upsert materializer when
`table.exec.sink.upsert-materialize = FORCE`. You can see the code here [1].
You
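For reference, a minimal sketch of setting that option programmatically
(assuming a plain `TableEnvironment`; everything apart from the
`table.exec.sink.upsert-materialize` option itself is just illustrative
boilerplate):

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class ForceUpsertMaterializeExample {

    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Ask the planner to always add the upsert materializer in front of
        // the sink. Valid values are NONE, AUTO (default) and FORCE.
        tEnv.getConfig()
            .getConfiguration()
            .setString("table.exec.sink.upsert-materialize", "FORCE");

        // ... register sources/sinks and run the upsert pipeline as usual ...
    }
}
```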
Hi Zakelly,
Thanks for the information; that's interesting. Would you say that reading
a subset from RocksDB is fast enough to be pretty much negligible, or could
it be a bottleneck if the state of each key is "large"? Again assuming the
number of distinct partition keys is large.
Regards,
Alexis