cgivre commented on code in PR #3000:
URL: https://github.com/apache/drill/pull/3000#discussion_r2200897541


##########
contrib/storage-hbase/src/main/java/org/apache/drill/exec/store/hbase/HBaseGroupScan.java:
##########
@@ -382,7 +402,7 @@ public String getDigest() {
   public String toString() {
     return "HBaseGroupScan [HBaseScanSpec="

Review Comment:
   Can we update this method to use `PlanStringBuilder`, as shown below? Please update with the appropriate fields:
   ```
     @Override
     public String toString() {
       return new PlanStringBuilder(this)
         .field("HbaseScanSpec", hbaseScanSpec)
         .field("columns", columns)
         .field("maxRecords", maxRecords)
         .toString();
     }
   ```
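
   For reference, `PlanStringBuilder` renders a plan node as `ClassName [field=value, ...]`. A minimal stdlib sketch of the expected output shape (the field values below are invented for illustration, not taken from the plugin):

```java
import java.util.StringJoiner;

public class PlanStringSketch {
  // Builds the "ClassName [k=v, k=v]" shape that PlanStringBuilder produces;
  // field names/values here are made up for illustration only.
  static String render() {
    StringJoiner s = new StringJoiner(", ", "HBaseGroupScan [", "]");
    s.add("HbaseScanSpec=" + "[tableName=browser]");
    s.add("columns=" + "[`row_key`]");
    s.add("maxRecords=" + 10);
    return s.toString();
  }

  public static void main(String[] args) {
    System.out.println(render());
    // HBaseGroupScan [HbaseScanSpec=[tableName=browser], columns=[`row_key`], maxRecords=10]
  }
}
```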



##########
contrib/storage-hbase/src/main/java/org/apache/drill/exec/store/hbase/HBaseGroupScan.java:
##########
@@ -396,6 +416,33 @@ public List<SchemaPath> getColumns() {
     return columns;
   }
 
+  @JsonProperty
+  public int getMaxRecords() {
+    return maxRecords;
+  }
+
+  /**
+   * Default is not to support limit pushdown.
+   */
+  @Override
+  @JsonIgnore

Review Comment:
   Is there a reason for the `JsonIgnore`? 
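
   For background on why the annotation matters, here is a minimal Jackson sketch (hypothetical classes, not the actual Drill code): scan operators are serialized to JSON as part of the physical plan, and `@JsonIgnore` keeps a getter out of that JSON while `@JsonProperty` includes it, so ignoring a field here means it would not survive (de)serialization.

```java
import com.fasterxml.jackson.annotation.JsonIgnore;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonIgnoreDemo {
  // Hypothetical stand-in for a scan operator; not the real Drill class.
  public static class Scan {
    @JsonProperty
    public int getMaxRecords() { return 15; }      // appears in the JSON

    @JsonIgnore
    public boolean isExecutable() { return true; } // excluded from the JSON
  }

  static String toJson(Object o) {
    try {
      return new ObjectMapper().writeValueAsString(o);
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }

  public static void main(String[] args) {
    System.out.println(toJson(new Scan())); // {"maxRecords":15}
  }
}
```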



##########
contrib/storage-hbase/src/main/java/org/apache/drill/exec/store/hbase/HBaseSubScan.java:
##########
@@ -98,6 +101,11 @@ public HBaseStoragePlugin getStorageEngine(){
     return hbaseStoragePlugin;
   }
 
+  @JsonIgnore
+  public int getMaxRecords() {

Review Comment:
   Again, is there a reason for the `JsonIgnore`?



##########
contrib/storage-hbase/src/main/java/org/apache/drill/exec/store/hbase/HBaseRecordReader.java:
##########
@@ -84,12 +84,16 @@ public class HBaseRecordReader extends AbstractRecordReader implements DrillHBas
 
   private final Connection connection;
 
-  public HBaseRecordReader(Connection connection, HBaseSubScan.HBaseSubScanSpec subScanSpec, List<SchemaPath> projectedColumns) {
+  public HBaseRecordReader(Connection connection, HBaseSubScan.HBaseSubScanSpec subScanSpec, List<SchemaPath> projectedColumns, int maxRecords) {
     this.connection = connection;
     hbaseTableName = TableName.valueOf(
         Preconditions.checkNotNull(subScanSpec, "HBase reader needs a sub-scan spec").getTableName());
     hbaseScan = new Scan(subScanSpec.getStartRow(), subScanSpec.getStopRow());
     hbaseScanColumnsOnly = new Scan();
+    //todo Whether to remove if condition judgment

Review Comment:
   I'm not sure what this comment means; please remove it or explain further. Just so you're aware, Calcite provides the logic for deciding when to push the limit down and when not to, so I think we are good here. To verify this, you can add a unit test with the query:
   
   ```sql
   SELECT * 
   FROM hbase
   LIMIT 10 OFFSET 5
   ```
   and you should see the `maxRecords` parameter set to 15. 
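   
   The arithmetic behind that expectation can be sketched as follows (illustrative only, not Drill code): with `LIMIT 10 OFFSET 5`, the scan must produce `offset + fetch` rows before the offset is applied downstream, so the pushed-down `maxRecords` is 15.

```java
public class LimitPushdownMath {
  // A pushed-down limit must cover both the skipped offset rows and the
  // fetched rows, so the scan's record cap is their sum.
  static int maxRecords(int offset, int fetch) {
    return offset + fetch;
  }

  public static void main(String[] args) {
    System.out.println(maxRecords(5, 10)); // 15, for LIMIT 10 OFFSET 5
  }
}
```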
   
   If you look at the unit tests for the Phoenix plugin, you can see a good example of this:
   
   https://github.com/apache/drill/blob/78a52087bb85bcc7034157544193155b87503111/contrib/storage-phoenix/src/test/java/org/apache/drill/exec/store/phoenix/PhoenixSQLTest.java#L92-L128
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
