ndimiduk commented on code in PR #5384:
URL: https://github.com/apache/hbase/pull/5384#discussion_r1323173236


##########
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:
##########
@@ -1697,8 +1722,27 @@ protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset,
          headerBuf = HEAP.allocate(hdrSize);
          readAtOffset(is, headerBuf, hdrSize, false, offset, pread);
          headerBuf.rewind();
+
+          // The caller didn't provide an anticipated block size and headerBuf was null, this is
+          // probably the first time this HDFS block has been read. The value we just read has not
+          // had HBase checksum validation ; assume it was not protected by HDFS checksum either.
+          // Sanity check the value. If it doesn't seem right, either trigger fall-back to hdfs
+          // checksum or abort the read.
+          //
+          // TODO: Should we also check the value vs. some multiple of fileContext.getBlocksize() ?
+          onDiskSizeWithHeader = getOnDiskSizeWithHeader(headerBuf, checksumSupport);
+          if (!checkOnDiskSizeWithHeader(onDiskSizeWithHeader)) {
+            if (verifyChecksum) {
+              invalidateNextBlockHeader();
+              span.addEvent("Falling back to HDFS checksumming.", attributesBuilder.build());
+              return null;
+            } else {
+              throw new IOException("Read invalid onDiskSizeWithHeader=" + onDiskSizeWithHeader);
+            }
+          }
+        } else {
+          onDiskSizeWithHeader = getOnDiskSizeWithHeader(headerBuf, checksumSupport);

Review Comment:
   This discussion is from an outdated portion of the PR, but I think its concern has been addressed.
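   For readers following along outside GitHub, here is a minimal, illustrative sketch of the kind of bounds check a helper such as `checkOnDiskSizeWithHeader` might apply to a size value read straight off disk before any checksum has validated it. The class name, constants, and bounds below are assumptions for illustration only; they are not taken from `HFileBlock`.
   
   ```java
   /**
    * Illustrative sketch only: a plausible sanity check for a block size read
    * off disk before checksum validation. The bounds used here (an assumed
    * header size and an arbitrary maximum block size) are hypothetical and do
    * not reflect the actual HFileBlock implementation.
    */
   public final class OnDiskSizeSanityCheck {
   
     // Hypothetical bounds; the real code derives its limits from the HFile context.
     private static final int ASSUMED_HEADER_SIZE = 33;           // bytes
     private static final long ASSUMED_MAX_BLOCK_SIZE = 1L << 30; // 1 GiB cap, arbitrary
   
     /** Returns true when the size read from the header looks plausible. */
     static boolean looksPlausible(long onDiskSizeWithHeader) {
       // The value must at least cover its own header and must not be absurdly
       // large; anything outside this window is treated as a likely corrupt read.
       return onDiskSizeWithHeader >= ASSUMED_HEADER_SIZE
           && onDiskSizeWithHeader <= ASSUMED_MAX_BLOCK_SIZE;
     }
   
     public static void main(String[] args) {
       System.out.println(looksPlausible(65_000));   // true: a typical block size
       System.out.println(looksPlausible(-1));       // false: corrupt or unset
       System.out.println(looksPlausible(1L << 40)); // false: implausibly large
     }
   }
   ```
   
   The point of such a check, as the diff's comment explains, is to cheaply reject an obviously bad length so the caller can fall back to HDFS checksumming (or abort the read) rather than trust a garbage value.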


