Re: [DISCUSS] Support/Fate of HBase v1 in Hadoop
+1 for option 2. I also agree with the idea of upgrading HBase 2.2 to 2.5.

Shilun Fan

> HBase v1 has been EOL for a while now, so option 2 probably makes sense.
> While you are at it, you should probably update the hbase2 version as well,
> because 2.2.x is also very old and EOL. 2.5.x is the currently maintained
> release line for hbase2, with 2.5.7 being the latest. We're soon going to
> release 2.6.0 as well.
>
> On Tue, Mar 5, 2024 at 6:56 AM Ayush Saxena wrote:
>
> > Hi Folks,
> > As of now we have two profiles for HBase: one for HBase v1 (1.7.1) & the
> > other for v2 (2.2.4). The versions are specified here [1]; how to build
> > with each profile is described here [2].
> >
> > As of now our Jenkins builds by default run "only" against HBase v1, so
> > we have seen the HBase v2 profile silently break a couple of times.
> >
> > Considering there are stable HBase v2 releases as per [3], & HBase v2 is
> > hardly new anymore, here are some options we can consider:
> >
> > * Make the HBase v2 profile the default & let the HBase v1 profile stay
> > in our code.
> > * Ditch the HBase v1 profile & support only the HBase v2 profile.
> > * Let everything stay as is, & just add a Jenkins job/GitHub action which
> > compiles the HBase v2 profile as well, so we make sure no change breaks
> > it.
> >
> > Personally I would go with the second option: the last HBase v1 release
> > was about 2 years back, it might be pulling in some problematic
> > transitive dependencies, & dropping it opens scope for us to support
> > HBase 3.x once they have a stable release in the future.
> >
> > Let me know your thoughts!!!
> >
> > -Ayush
> >
> > [1]
> > https://github.com/apache/hadoop/blob/dae871e3e0783e1fe6ea09131c3f4650abfa8a1d/hadoop-project/pom.xml#L206-L207
> > [2]
> > https://github.com/apache/hadoop/blob/dae871e3e0783e1fe6ea09131c3f4650abfa8a1d/BUILDING.txt#L168-L172
> > [3] https://hbase.apache.org/downloads.html
Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1323/

No changes

-1 overall

The following subsystems voted -1:
    asflicense hadolint mvnsite pathlen unit

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    Failed junit tests:
        hadoop.fs.TestFileUtil
        hadoop.ipc.TestIPC
        hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
        hadoop.hdfs.server.namenode.ha.TestHAMetrics
        hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion
        hadoop.hdfs.server.namenode.ha.TestInitializeSharedEdits
        hadoop.hdfs.server.namenode.ha.TestLossyRetryInvocationHandler
        hadoop.hdfs.server.namenode.ha.TestStateTransitionFailure
        hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap
        hadoop.hdfs.server.namenode.ha.TestEditLogTailer
        hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain
        hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
        hadoop.hdfs.TestDecommission
        hadoop.hdfs.TestLeaseRecovery2
        hadoop.hdfs.server.namenode.ha.TestStandbyIsHot
        hadoop.hdfs.TestDFSInotifyEventInputStream
        hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages
        hadoop.hdfs.server.federation.router.TestRouterQuota
        hadoop.hdfs.server.federation.resolver.order.TestLocalResolver
        hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver
        hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat
        hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
        hadoop.mapreduce.lib.input.TestLineRecordReader
        hadoop.mapred.TestLineRecordReader
        hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter
        hadoop.resourceestimator.solver.impl.TestLpSolver
        hadoop.resourceestimator.service.TestResourceEstimatorService
        hadoop.yarn.sls.TestSLSRunner
        hadoop.yarn.sls.TestReservationSystemInvariants
        hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceAllocator
        hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceHandlerImpl
        hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore
        hadoop.yarn.server.resourcemanager.TestClientRMService
        hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker

    cc:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1318/artifact/out/diff-compile-cc-root.txt [4.0K]

    javac:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1318/artifact/out/diff-compile-javac-root.txt [488K]

    checkstyle:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1318/artifact/out/diff-checkstyle-root.txt [14M]

    hadolint:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1318/artifact/out/diff-patch-hadolint.txt [4.0K]

    mvnsite:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1318/artifact/out/patch-mvnsite-root.txt [572K]

    pathlen:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1318/artifact/out/pathlen.txt [12K]

    pylint:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1318/artifact/out/diff-patch-pylint.txt [20K]

    shellcheck:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1318/artifact/out/diff-patch-shellcheck.txt [72K]

    whitespace:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1318/artifact/out/whitespace-eol.txt [12M]
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1318/artifact/out/whitespace-tabs.txt [1.3M]

    javadoc:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1318/artifact/out/patch-javadoc-root.txt [36K]

    unit:
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1318/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [224K]
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1318/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [516K]
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1318/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [36K]
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1318/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt [16K]
        https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1318/artifact/out/patch-uni
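To chase one of these failures locally, a standard single-test Maven invocation against the owning module is usually enough. A minimal sketch, assuming a checked-out branch-2.10 tree; the module path is taken from the unit artifact names above, and the -Dtest flag is stock Maven surefire:

    # Re-run one failed test in its module (example: hadoop.fs.TestFileUtil,
    # which lives in hadoop-common-project/hadoop-common per the unit links).
    cd hadoop-common-project/hadoop-common
    mvn -Dtest=TestFileUtil test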
Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/640/

No changes

-1 overall

The following subsystems voted -1:
    blanks hadolint mvnsite pathlen spotbugs unit xml

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    XML : Parsing Error(s):
        hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

    spotbugs : module:hadoop-common-project/hadoop-common
        Possible null pointer dereference in org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) due to return value of called method; dereferenced at ValueQueue.java:[line 332]

    spotbugs : module:hadoop-common-project
        Possible null pointer dereference in org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) due to return value of called method; dereferenced at ValueQueue.java:[line 332]

    spotbugs : module:hadoop-hdfs-project/hadoop-hdfs-client
        Redundant nullcheck of sockStreamList, which is known to be non-null, in org.apache.hadoop.hdfs.PeerCache.getInternal(DatanodeID, boolean) at PeerCache.java:[line 158]

    spotbugs : module:hadoop-hdfs-project/hadoop-hdfs
        Redundant nullcheck of oldLock, which is known to be non-null, in org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory) at DataStorage.java:[line 695]
        Redundant nullcheck of metaChannel, which is known to be non-null, in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long, FileInputStream, FileChannel, String) at MappableBlockLoader.java:[line 138]
        Redundant nullcheck of blockChannel, which is known to be non-null, in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId) at MemoryMappableBlockLoader.java:[line 75]
        Redundant nullcheck of blockChannel, which is known to be non-null, in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId) at NativePmemMappableBlockLoader.java:[line 85]
        Redundant nullcheck of metaChannel, which is known to be non-null, in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$PmemMappedRegion, long, FileInputStream, FileChannel, String) at NativePmemMappableBlockLoader.java:
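For anyone wanting to reproduce the spotbugs findings outside CI, a sketch assuming the spotbugs-maven-plugin is wired into the Hadoop build (Hadoop moved from findbugs to spotbugs; the goals below are the plugin's standard ones, not something specific to this job):

    # Generate the spotbugs report for one flagged module.
    cd hadoop-hdfs-project/hadoop-hdfs
    mvn -DskipTests compile spotbugs:spotbugs   # report written to target/spotbugsXml.xml
    # Or fail the build on any finding:
    mvn -DskipTests compile spotbugs:check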
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1519/

No changes

-1 overall

The following subsystems voted -1:
    blanks hadolint pathlen spotbugs unit xml

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    XML : Parsing Error(s):
        hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

    spotbugs : module:hadoop-common-project/hadoop-common
        Possible null pointer dereference in org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) due to return value of called method; dereferenced at ValueQueue.java:[line 332]

    spotbugs : module:hadoop-common-project
        Possible null pointer dereference in org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) due to return value of called method; dereferenced at ValueQueue.java:[line 332]

    spotbugs : module:hadoop-hdfs-project/hadoop-hdfs-client
        Redundant nullcheck of sockStreamList, which is known to be non-null, in org.apache.hadoop.hdfs.PeerCache.getInternal(DatanodeID, boolean) at PeerCache.java:[line 158]

    spotbugs : module:hadoop-hdfs-project/hadoop-hdfs-httpfs
        Redundant nullcheck of xAttrs, which is known to be non-null, in org.apache.hadoop.fs.http.client.HttpFSFileSystem.getXAttr(Path, String) at HttpFSFileSystem.java:[line 1373]

    spotbugs : module:hadoop-yarn-project/hadoop-yarn
        org.apache.hadoop.yarn.service.ServiceScheduler$1.load(ConfigFile) may return null, but is declared @Nonnull, at ServiceScheduler.java:[line 555]

    spotbugs : module:hadoop-hdfs-project/hadoop-hdfs-rbf
        Redundant nullcheck of dns, which is known to be non-null, in org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getCachedDatanodeReport(HdfsConstants$DatanodeReportType) at RouterRpcServer.java:[line 1092]

    spotbugs : module:hadoop-hdfs-project
        Redundant nullcheck of xAttrs, which is known to be non-null, in org.apache.hadoop.fs.http.client.HttpFSFileSystem.getXAttr(Path, String) at HttpFSFileSystem.java:[line 1373]
        Redundant nullcheck of sockStreamList, which is known to be non-null, in org.apache.hadoop.hdfs.PeerCache.getInternal(DatanodeID, boolean) at PeerCache.java:[line 158]
        Redundant nullcheck of dns, which is known to be non-null, in org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getCachedDatanodeReport(HdfsConstants$DatanodeReportType) at RouterRpcServer.java:[line 1092]
Re: [DISCUSS] Support/Fate of HBase v1 in Hadoop
HBase v1 has been EOL for a while now, so option 2 probably makes sense. While
you are at it, you should probably update the hbase2 version as well, because
2.2.x is also very old and EOL. 2.5.x is the currently maintained release line
for hbase2, with 2.5.7 being the latest. We're soon going to release 2.6.0 as
well.

On Tue, Mar 5, 2024 at 6:56 AM Ayush Saxena wrote:

> Hi Folks,
> As of now we have two profiles for HBase: one for HBase v1 (1.7.1) & the
> other for v2 (2.2.4). The versions are specified here [1]; how to build
> with each profile is described here [2].
>
> As of now our Jenkins builds by default run "only" against HBase v1, so we
> have seen the HBase v2 profile silently break a couple of times.
>
> Considering there are stable HBase v2 releases as per [3], & HBase v2 is
> hardly new anymore, here are some options we can consider:
>
> * Make the HBase v2 profile the default & let the HBase v1 profile stay
> in our code.
> * Ditch the HBase v1 profile & support only the HBase v2 profile.
> * Let everything stay as is, & just add a Jenkins job/GitHub action which
> compiles the HBase v2 profile as well, so we make sure no change breaks it.
>
> Personally I would go with the second option: the last HBase v1 release
> was about 2 years back, it might be pulling in some problematic transitive
> dependencies, & dropping it opens scope for us to support HBase 3.x once
> they have a stable release in the future.
>
> Let me know your thoughts!!!
>
> -Ayush
>
> [1]
> https://github.com/apache/hadoop/blob/dae871e3e0783e1fe6ea09131c3f4650abfa8a1d/hadoop-project/pom.xml#L206-L207
> [2]
> https://github.com/apache/hadoop/blob/dae871e3e0783e1fe6ea09131c3f4650abfa8a1d/BUILDING.txt#L168-L172
> [3] https://hbase.apache.org/downloads.html
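If the hbase2 bump suggested above is attempted, and assuming the HBase 2 line is controlled by a Maven property on the pom lines referenced in [1] (hbase.two.version is my guess at the property name, and the profile switch is recalled from BUILDING.txt [2]), Maven would let you trial a newer release without a code change:

    # Hypothetical trial build against HBase 2.5.7; property name and profile
    # switch are assumptions to be verified against [1] and [2].
    mvn clean install -DskipTests -Dhbase.profile=2.0 -Dhbase.two.version=2.5.7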
[DISCUSS] Support/Fate of HBase v1 in Hadoop
Hi Folks,
As of now we have two profiles for HBase: one for HBase v1 (1.7.1) & the
other for v2 (2.2.4). The versions are specified here [1]; how to build with
each profile is described here [2].

As of now our Jenkins builds by default run "only" against HBase v1, so we
have seen the HBase v2 profile silently break a couple of times.

Considering there are stable HBase v2 releases as per [3], & HBase v2 is
hardly new anymore, here are some options we can consider:

* Make the HBase v2 profile the default & let the HBase v1 profile stay in
our code.
* Ditch the HBase v1 profile & support only the HBase v2 profile.
* Let everything stay as is, & just add a Jenkins job/GitHub action which
compiles the HBase v2 profile as well, so we make sure no change breaks it.

Personally I would go with the second option: the last HBase v1 release was
about 2 years back, it might be pulling in some problematic transitive
dependencies, & dropping it opens scope for us to support HBase 3.x once they
have a stable release in the future.

Let me know your thoughts!!!

-Ayush

[1]
https://github.com/apache/hadoop/blob/dae871e3e0783e1fe6ea09131c3f4650abfa8a1d/hadoop-project/pom.xml#L206-L207
[2]
https://github.com/apache/hadoop/blob/dae871e3e0783e1fe6ea09131c3f4650abfa8a1d/BUILDING.txt#L168-L172
[3] https://hbase.apache.org/downloads.html
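For context, a minimal sketch of how the two profiles are selected at build time; the -Dhbase.profile=2.0 switch is recalled from BUILDING.txt and should be checked against [2]:

    # Default build: currently compiles against the HBase v1 profile (1.7.1).
    mvn clean install -DskipTests
    # Opt in to the HBase v2 profile (2.2.4) instead; flag per BUILDING.txt [2].
    mvn clean install -DskipTests -Dhbase.profile=2.0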