> > With a new interface we enable ourselves to rethink a bit of the API to be
> > simpler.
> >
> > I am currently proposing the following methods in the simpler AsyncTable
> > interface:
> > exists(Get): ResponsePromise<Boolean>
> > exists(List<Get>): ResponsePromise<Boolean[]>
> > get(Get): ResponsePromise<Result>
> > get(List<Get>): ResponsePromise<Result[]>
> > mutate(Mutation): ResponsePromise<Void> - Instead of separate Put, Delete,
> > Increment, IncrementValue and Append methods
> > checkAndMutate(byte[], byte[], byte[], CompareOp, byte[], Mutation):
> > ResponsePromise<Void> - Will not accept Append and Increment
> >
> 
> Over in the issue, you are thinking 'not accept Append and Increment' because
> they are one-at-a-time-nonce-dependent... Is the above call one-at-a-time?
Indeed. I did this because the current API seems to have been designed this way 
precisely so that multiple nonce calls cannot be issued at once.
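
To make the proposed surface concrete, here is a rough Java sketch of it. This is just my illustration, not committed code: CompletableFuture stands in for the proposed ResponsePromise, and Get, Result, Mutation, RowMutations and CompareOp are placeholder stubs for the real HBase types.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Placeholder stand-ins for the real HBase client types.
class Get { final String row; Get(String row) { this.row = row; } }
class Result { }
abstract class Mutation { }
class RowMutations { }  // a group of mutations against a single row
enum CompareOp { LESS, LESS_OR_EQUAL, EQUAL, NOT_EQUAL, GREATER_OR_EQUAL, GREATER }

// Sketch of the simpler AsyncTable interface from the proposal above.
// CompletableFuture plays the role of ResponsePromise.
interface AsyncTable {
    CompletableFuture<Boolean> exists(Get get);
    CompletableFuture<Boolean[]> exists(List<Get> gets);
    CompletableFuture<Result> get(Get get);
    CompletableFuture<Result[]> get(List<Get> gets);
    // One entry point instead of separate put/delete/increment/append methods.
    CompletableFuture<Void> mutate(Mutation mutation);
    // Would reject Append and Increment, since those are
    // one-at-a-time nonce-dependent.
    CompletableFuture<Void> checkAndMutate(byte[] row, byte[] family,
        byte[] qualifier, CompareOp op, byte[] value, Mutation mutation);
    // Same check, but applying a batch of mutations to a single row.
    CompletableFuture<Void> checkAndMutate(byte[] row, byte[] family,
        byte[] qualifier, CompareOp op, byte[] value, RowMutations mutations);
}
```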
> > checkAndMutate(byte[], byte[], byte[], CompareOp, byte[], RowMutations):
> > ResponsePromise<Void> - Will not accept Append and Increment
> >
> 
> The thinking on the above method is that if doing bulk checkAndMutate, that
> they should all be inside a single row?
That is correct, although with the async API you can still have multiple 
requests to different rows in flight at once.
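
A toy illustration of that point: each check-and-mutate is atomic on its own row, but promises for different rows can be outstanding concurrently. MiniAsyncTable and checkAndPut are made-up names for this sketch, with CompletableFuture again standing in for ResponsePromise.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Toy table: per-row check-and-mutate is atomic, but many such requests
// to *different* rows can be in flight at the same time.
class MiniAsyncTable {
    private final Map<String, String> store = new ConcurrentHashMap<>();

    // Compare-and-set on a single row; returns a promise of success.
    // expected == null means "only put if the row is absent".
    CompletableFuture<Boolean> checkAndPut(String row, String expected, String value) {
        return CompletableFuture.supplyAsync(() -> {
            if (expected == null) {
                return store.putIfAbsent(row, value) == null;
            }
            return store.replace(row, expected, value);
        });
    }

    String get(String row) { return store.get(row); }
}

public class CheckAndMutateSketch {
    public static void main(String[] args) {
        MiniAsyncTable table = new MiniAsyncTable();
        // Fire check-and-mutates against different rows concurrently ...
        List<CompletableFuture<Boolean>> pending = Arrays.asList(
            table.checkAndPut("row1", null, "v1"),
            table.checkAndPut("row2", null, "v2"),
            table.checkAndPut("row3", null, "v3"));
        // ... then wait for all the promises to complete.
        CompletableFuture.allOf(pending.toArray(new CompletableFuture[0])).join();
        System.out.println(table.get("row2")); // v2
    }
}
```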
> PromiseKeeper is a loaded term! See https://promisekeepers.org/#what ...
> (smile)
Ah I did not know since we don’t have those here. So it is better to rename? 
Any nice suggestions?
> > There will be a new AsyncResultScanner which handles incoming batches of
> > result. It will not be possible to call next() on it, since this does not make
> > sense in an async context. There will be however a way to request a new
> > batch with a promise.
> >
> >
> I like this.... no next. What are you thinking as a means of specifying
> 'batches'? We've been trying to move away from specifying batches in terms
> of row count to instead do batches of a particular size (See HBASE-13441).
As I currently implemented it, you can request nextBatch promises until 
isClosed() returns true. So if batches become size-based in the future, it is 
easy to change the internals without touching the interface.
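
Roughly, the caller's side looks like the sketch below. This is only my illustration of the shape of the loop: AsyncResultScanner, nextBatch() and isClosed() follow the proposal, CompletableFuture stands in for ResponsePromise, and plain strings stand in for Results. The batch here is completed immediately, where a real implementation would complete the promise from an RPC callback.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Sketch of an AsyncResultScanner that hands out batches via promises
// instead of exposing a blocking next().
class AsyncResultScanner {
    private final List<String> rows;  // stand-in for HBase Results
    private final int batchSize;      // row-count batches for now; the
                                      // internals could switch to size-based
                                      // batching without changing this API
    private int pos = 0;

    AsyncResultScanner(List<String> rows, int batchSize) {
        this.rows = rows;
        this.batchSize = batchSize;
    }

    boolean isClosed() {
        return pos >= rows.size();
    }

    // The caller asks for the next batch and gets a promise that completes
    // when the batch is available.
    CompletableFuture<List<String>> nextBatch() {
        int end = Math.min(pos + batchSize, rows.size());
        List<String> batch = new ArrayList<>(rows.subList(pos, end));
        pos = end;
        // A real implementation would complete this from an RPC callback;
        // here the batch is available immediately.
        return CompletableFuture.completedFuture(batch);
    }
}

public class ScannerSketch {
    public static void main(String[] args) {
        AsyncResultScanner scanner =
            new AsyncResultScanner(Arrays.asList("r1", "r2", "r3", "r4", "r5"), 2);
        List<String> seen = new ArrayList<>();
        // Keep requesting batch promises until the scanner reports closed.
        while (!scanner.isClosed()) {
            scanner.nextBatch().thenAccept(seen::addAll).join();
        }
        System.out.println(seen); // [r1, r2, r3, r4, r5]
    }
}
```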
