Pull requests are available that complete this feature:
https://github.com/apache/hbase/pull/6168,
https://github.com/apache/hbase/pull/6167. I would appreciate reviews
from anyone interested.
On Tue, Jul 23, 2024 at 9:16 AM Charles Connell wrote:
Thank you for the feedback. I will work on the approach that Bryan
suggests. I will need
https://issues.apache.org/jira/browse/HBASE-28346 merged before I can
put up a PR.
On Sat, Jul 20, 2024 at 11:34 AM 张铎(Duo Zhang) wrote:
Ah, you are right, we can add a flag to let the new client set it to a
non-default value.
In that case I prefer we implement the 'partial result' logic. Sleeping
on the server side is not a good idea.
Bryan Beaudreault wrote on Sat, Jul 20, 2024 at 23:09:
Since the protocol is protobuf, it should be quite simple. We can add a new
field supports_partial to the AggregationRequest proto. Only new clients
would set this to true, and that would trigger the partial results on the
client. We have a similar concept for how we handle supporting
retryImmediat
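As a rough sketch of the proto change Bryan describes (the message name, field name, and field number here are illustrative assumptions, not the committed design), the idea is that an unset field defaults to false, so old clients automatically get the existing behavior:

```protobuf
// Hypothetical addition to the aggregation request message.
// Field name and number are assumptions for illustration only.
// Old clients never set it, so it reads as false and the server
// keeps the current whole-range behavior.
message AggregateRequest {
  // ... existing fields (interpreter class, scan, etc.) ...
  optional bool supports_partial = 5 [default = false];
}
```

Because protobuf treats unknown and unset optional fields as their defaults, this kind of additive flag is wire-compatible in both directions.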
I do not think it is easy to change the current implementation to be
'partial results'.
The current assumption of request/response is 'send a range and the
agg type'/'return the agg result of the whole range'.
If you want to make it possible to return earlier, I think we need to
tell the client t
I forgot to respond to your last question.
I'm looking through the implementation of the following classes/methods:
- RawAsyncTableImpl - onLocateComplete()
- CoprocessorServiceBuilderImpl (inner class of above) - execute()
- CoprocessorCallback (inner interface of AsyncTable)
- AsyncAggregationC
There actually is prior art on the partial-data approach: that's how scans
work generally. MultiGets also return partial results when the
throttle/limit is exceeded via MultiResultTooLargeException (which is
immediately retried to fetch results for the rest of the batch). So this is
a common pattern.
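The partial-result pattern described in this thread can be sketched generically: the server returns at most a limited number of items per call plus a resume cursor, and the client loops, aggregating across responses, until the cursor is exhausted. The names below (Page, fetch, sumAll) are illustrative only and are not HBase APIs:

```java
import java.util.ArrayList;
import java.util.List;

public class PartialResultDemo {
    // A "page" of results plus the position to resume from (-1 = done).
    record Page(List<Integer> items, int nextCursor) {}

    static final int[] DATA = {1, 2, 3, 4, 5, 6, 7};

    // Server side: return at most `limit` items starting at `cursor`,
    // instead of blocking until the whole range is processed.
    static Page fetch(int cursor, int limit) {
        int end = Math.min(cursor + limit, DATA.length);
        List<Integer> items = new ArrayList<>();
        for (int i = cursor; i < end; i++) {
            items.add(DATA[i]);
        }
        return new Page(items, end < DATA.length ? end : -1);
    }

    // Client side: aggregate (here, a sum) across partial responses,
    // retrying immediately with the returned cursor.
    static int sumAll(int pageSize) {
        int sum = 0;
        int cursor = 0;
        while (cursor != -1) {
            Page p = fetch(cursor, pageSize);
            for (int v : p.items()) {
                sum += v;
            }
            cursor = p.nextCursor();
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumAll(3)); // sums 1..7 across pages of 3
    }
}
```

This mirrors how a scan or throttled MultiGet resumes: each response carries enough state for the client to continue where the server left off, so no single RPC has to run unboundedly long.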