Hey team,

Is there a reason not to deprecate Scan.setCaching and Scan.getCaching?

Way back, setCaching was the primary way to control how much data comes back
per RPC. But it’s awkward for that purpose because rows vary in size, so a row
count says nothing about bytes. These days setMaxResultSize is a much better
API for the job.
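
For example, a rough sketch of the byte-based approach (table name and sizes
here are just placeholders, and error handling is omitted):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.*;

    // Bound each scan RPC by bytes instead of guessing a row count.
    Scan scan = new Scan();
    scan.setMaxResultSize(2L * 1024 * 1024); // ~2 MB per RPC, however wide the rows are
    // old style: scan.setCaching(100);      // 100 rows per RPC -- could be 10 KB or 1 GB

    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("my_table"));
         ResultScanner scanner = table.getScanner(scan)) {
      for (Result result : scanner) {
        // process result
      }
    }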

Another reason to use setCaching is for small scans where you just want N
results and close the scanner after fetching them. But these days setLimit is
a better way to achieve that, e.g. something like the sketch below.
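
(Again just a sketch, reusing the table handle from above; the start row and
limit are placeholders.)

    import org.apache.hadoop.hbase.util.Bytes;

    // Fetch at most 5 rows and stop -- no caching guesswork needed.
    Scan small = new Scan()
        .withStartRow(Bytes.toBytes("row-prefix"))
        .setLimit(5);
    try (ResultScanner scanner = table.getScanner(small)) {
      for (Result result : scanner) {
        // at most 5 results come back
      }
    }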

I can’t think of another reason why one would want to use setCaching
exclusively, but I’m open to opinions.

Thoughts? I can file a JIRA and submit a patch if there is consensus.
