For the first code snippet, it seems the start row / stop row can be specified
to narrow the range of the scan.
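
As a rough sketch (reusing rowKeys, filterList and table from your first
snippet; withStartRow / withStopRow are the current Scan methods, older
clients use setStartRow / setStopRow instead):

    // Row keys sort lexicographically by bytes, so the smallest and largest
    // requested keys bound the range the scan actually needs to touch.
    List<byte[]> sortedKeys = rowKeys.stream()
        .map(Bytes::toBytes)
        .sorted(Bytes.BYTES_COMPARATOR)
        .collect(Collectors.toList());

    Scan scan = new Scan();
    scan.withStartRow(sortedKeys.get(0));
    // the stop row is exclusive, so append a trailing 0x00 byte
    // to keep the largest key inside the range
    scan.withStopRow(Bytes.add(sortedKeys.get(sortedKeys.size() - 1), new byte[] {0}));
    scan.setFilter(filterList);
    ResultScanner resultScanner = table.getScanner(scan);

This still filters row by row, but the regionservers only scan the slice of
the table between the two keys instead of the whole table.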

For your last question:

bq. all the filters in the filterList are RowFilters ...

The filter list currently does not do that optimization, even when all the
filters in the list are RowFilters.

Cheers

On Wed, Aug 15, 2018 at 2:07 AM Biplob Biswas <revolutioni...@gmail.com>
wrote:

> Hi,
>
> During our implementation for fetching multiple records from an HBase
> table, we came across a discussion regarding the best way to get records
> out.
>
> The first implementation is something like:
>
>     FilterList filterList = new FilterList(Operator.MUST_PASS_ONE);
>     for (String rowKey : rowKeys) {
>       filterList.addFilter(new RowFilter(CompareOp.EQUAL,
>           new BinaryComparator(Bytes.toBytes(rowKey))));
>     }
>
>     Scan scan = new Scan();
>     scan.setFilter(filterList);
>     ResultScanner resultScanner = table.getScanner(scan);
>
>
> and the second implementation is something like this:
>
>     List<Get> listGet = rowKeys.stream()
>         .map(entry -> {
>           Get get = new Get(Bytes.toBytes(entry));
>           return get;
>         })
>         .collect(Collectors.toList());
>     Result[] results = table.get(listGet);
>
>
> The only difference I see directly is that the filterList would do a full
> table scan whereas the multiget wouldn't.
>
> But what other benefits does one have over the other? Also, when HBase finds
> out that all the filters in the filterList are RowFilters, would it perform
> some kind of optimization and do a multiget rather than a full table scan?
>
>
> Thanks & Regards
> Biplob Biswas
>
