[ANNOUNCE] New HBase committer Tianhang Tang(唐天航)
On behalf of the Apache HBase PMC, I am pleased to announce that Tianhang Tang (thangTang) has accepted the PMC's invitation to become a committer on the project. We appreciate all of Tianhang's generous contributions thus far and look forward to his continued involvement. Congratulations and welcome, Tianhang Tang!
Re: [DISCUSS] Add Ozone as dependency to hbase-asyncfs ?
Then Hadoop should add one, and although we would need a reflection-based check in the interim, we can converge toward the ideal. In any case I believe we can avoid a direct dependency on Ozone, and we should strongly avoid taking such unnecessary dependencies. The Hadoop and HBase build dependency sets are already very large, and we and other users are being hit with significant security issue remediation work, much of which represents compatibility problems and is not upstreamable (like protobuf 2 removal in 2.x). We struggle with the existing dependencies enough already at my employer.

> On Mar 15, 2023, at 1:53 PM, Sean Busbey wrote:
>
> the check that Stephen is referring to is for logic around lease recovery
> and not stream flush/sync. the lease recovery is specific to DFS IIRC and
> doesn't have a FileSystem marker.
>
>> On Wed, Mar 15, 2023 at 3:22 PM Andrew Purtell wrote:
>>
>> So we can test StreamCapabilities in code, in the worst case by wrapping
>> some probe code during startup with try-catch and examining the exception.
>>
>>> On Wed, Mar 15, 2023 at 1:09 PM Viraj Jasani wrote:
>>>
>>> As of today, both WAL impls (fshlog and asyncfs) throw
>>> StreamLacksCapabilityException if the FSDataOutputStream probe fails for
>>> Hflush/Hsync:
>>>
>>> StreamLacksCapabilityException(StreamCapabilities.HFLUSH)
>>> and
>>> StreamLacksCapabilityException(StreamCapabilities.HSYNC)
>>>
>>>> On Wed, Mar 15, 2023 at 12:51 PM Andrew Purtell wrote:
>>>>
>>>> Does Hadoop have a marker interface that lets an application know its
>>>> FileSystem instances can support hsync/hflush? Ideally all we should
>>>> need to do is test with instanceof for that marker and use reflection
>>>> (in the worst case) to get a handle to the hsync or hflush method, and
>>>> then call it. This approach should be taken wherever we have a
>>>> requirement to use a special WAL-specific API provided by the underlying
>>>> FileSystem, so we can abstract it sufficiently to not require a direct
>>>> dependency on Ozone or S3A or any non-HDFS filesystem.
>>>>
>>>>> On Wed, Mar 15, 2023 at 12:31 PM Tak Lon (Stephen) Wu <tak...@apache.org> wrote:
>>>>>
>>>>> Hi team,
>>>>> [...]
Re: [DISCUSS] Add Ozone as dependency to hbase-asyncfs ?
Inline

> On Mar 15, 2023, at 2:11 PM, Wei-Chiu Chuang wrote:
>
> hsync/hflush, or any input/output stream APIs for that matter, can be
> probed using the StreamCapabilities.hasCapability() API.
>
> lease recovery isn't (DistributedFileSystem.recoverLease()). safe mode
> check isn't. There are a number of HDFS-specific APIs that HBase uses.
>
> I'm all for abstracting out FS implementation details. But it would be
> overkill to try and add every single FS-specific API to the generic
> FileSystem API interface.
>
> One idea that Stephen had was to add a RecoverableFileSystem interface in
> Hadoop which adds lease recovery capability, and then HDFS or Ozone can
> implement this interface.

Yes, please.

> In a less ideal world, I imagine we could have one hbase module for
> utilities that does FS-specific tasks. That way it is future proofing.

Also a good idea.

>> On Wed, Mar 15, 2023 at 12:50 PM Andrew Purtell wrote:
>>
>> Does Hadoop have a marker interface that lets an application know its
>> FileSystem instances can support hsync/hflush? Ideally all we should need
>> to do is test with instanceof for that marker and use reflection (in the
>> worst case) to get a handle to the hsync or hflush method, and then call
>> it. This approach should be taken wherever we have a requirement to use a
>> special WAL-specific API provided by the underlying FileSystem, so we can
>> abstract it sufficiently to not require a direct dependency on Ozone or
>> S3A or any non-HDFS filesystem.
>>
>>> On Wed, Mar 15, 2023 at 12:31 PM Tak Lon (Stephen) Wu wrote:
>>>
>>> Hi team,
>>> [...]
Re: [DISCUSS] Add Ozone as dependency to hbase-asyncfs ?
hsync/hflush, or any input/output stream APIs for that matter, can be probed using the StreamCapabilities.hasCapability() API.

lease recovery isn't (DistributedFileSystem.recoverLease()). safe mode check isn't. There are a number of HDFS-specific APIs that HBase uses.

I'm all for abstracting out FS implementation details. But it would be overkill to try and add every single FS-specific API to the generic FileSystem API interface.

One idea that Stephen had was to add a RecoverableFileSystem interface in Hadoop which adds lease recovery capability, and then HDFS or Ozone can implement this interface.

In a less ideal world, I imagine we could have one hbase module for utilities that does FS-specific tasks. That way it is future proofing.

> On Wed, Mar 15, 2023 at 12:50 PM Andrew Purtell wrote:
>
> Does Hadoop have a marker interface that lets an application know its
> FileSystem instances can support hsync/hflush? Ideally all we should need
> to do is test with instanceof for that marker and use reflection (in the
> worst case) to get a handle to the hsync or hflush method, and then call
> it. This approach should be taken wherever we have a requirement to use a
> special WAL-specific API provided by the underlying FileSystem, so we can
> abstract it sufficiently to not require a direct dependency on Ozone or
> S3A or any non-HDFS filesystem.
>
>> On Wed, Mar 15, 2023 at 12:31 PM Tak Lon (Stephen) Wu wrote:
>>
>> Hi team,
>> [...]
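The RecoverableFileSystem idea above can be sketched in a few lines. Note this interface is hypothetical (it does not exist in Hadoop today), and the toy filesystem, method signature, and retry loop below are illustrative assumptions, not HBase's actual `RecoverLeaseFSUtils` logic:

```java
// Hypothetical marker/capability interface: filesystems that support lease
// recovery implement it, so HBase can test with instanceof instead of
// hard-coding DistributedFileSystem. Checked IOExceptions are elided here
// to keep the sketch minimal.
interface RecoverableFileSystem {
    /** Try to recover the lease on a path; true once the file is closed. */
    boolean recoverLease(String path);
}

// Toy filesystem whose leases recover on the second attempt, standing in
// for HDFS/Ozone behavior during RS crash recovery.
class ToyFs implements RecoverableFileSystem {
    private int attempts = 0;

    @Override
    public boolean recoverLease(String path) {
        return ++attempts >= 2;
    }
}

class LeaseRecovery {
    // Generic entry point: recover the lease when the filesystem supports
    // it; otherwise there is nothing to recover and we can proceed.
    static boolean recoverFileLease(Object fs, String path) {
        if (!(fs instanceof RecoverableFileSystem)) {
            return true; // e.g. LocalFileSystem: no leases to recover
        }
        RecoverableFileSystem rfs = (RecoverableFileSystem) fs;
        while (!rfs.recoverLease(path)) {
            // a real implementation would sleep/backoff between attempts
        }
        return true;
    }
}
```

With this shape, hbase-asyncfs needs no compile-time knowledge of which concrete filesystem is behind the interface.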
Re: [DISCUSS] Add Ozone as dependency to hbase-asyncfs ?
the check that Stephen is referring to is for logic around lease recovery and not stream flush/sync. the lease recovery is specific to DFS IIRC and doesn't have a FileSystem marker.

> On Wed, Mar 15, 2023 at 3:22 PM Andrew Purtell wrote:
>
> So we can test StreamCapabilities in code, in the worst case by wrapping
> some probe code during startup with try-catch and examining the exception.
>
>> On Wed, Mar 15, 2023 at 1:09 PM Viraj Jasani wrote:
>>
>> As of today, both WAL impls (fshlog and asyncfs) throw
>> StreamLacksCapabilityException if the FSDataOutputStream probe fails for
>> Hflush/Hsync:
>>
>> StreamLacksCapabilityException(StreamCapabilities.HFLUSH)
>> and
>> StreamLacksCapabilityException(StreamCapabilities.HSYNC)
>>
>>> On Wed, Mar 15, 2023 at 12:51 PM Andrew Purtell wrote:
>>>
>>> Does Hadoop have a marker interface that lets an application know its
>>> FileSystem instances can support hsync/hflush?
>>> [...]
Re: [DISCUSS] Add Ozone as dependency to hbase-asyncfs ?
So we can test StreamCapabilities in code, in the worst case by wrapping some probe code during startup with try-catch and examining the exception.

> On Wed, Mar 15, 2023 at 1:09 PM Viraj Jasani wrote:
>
> As of today, both WAL impls (fshlog and asyncfs) throw
> StreamLacksCapabilityException if the FSDataOutputStream probe fails for
> Hflush/Hsync:
>
> StreamLacksCapabilityException(StreamCapabilities.HFLUSH)
> and
> StreamLacksCapabilityException(StreamCapabilities.HSYNC)
>
>> On Wed, Mar 15, 2023 at 12:51 PM Andrew Purtell wrote:
>>
>> Does Hadoop have a marker interface that lets an application know its
>> FileSystem instances can support hsync/hflush?
>> [...]
Re: [DISCUSS] Add Ozone as dependency to hbase-asyncfs ?
As of today, both WAL impls (fshlog and asyncfs) throw StreamLacksCapabilityException if the FSDataOutputStream probe fails for Hflush/Hsync:

StreamLacksCapabilityException(StreamCapabilities.HFLUSH)
and
StreamLacksCapabilityException(StreamCapabilities.HSYNC)

> On Wed, Mar 15, 2023 at 12:51 PM Andrew Purtell wrote:
>
> Does Hadoop have a marker interface that lets an application know its
> FileSystem instances can support hsync/hflush? Ideally all we should need
> to do is test with instanceof for that marker and use reflection (in the
> worst case) to get a handle to the hsync or hflush method, and then call
> it. This approach should be taken wherever we have a requirement to use a
> special WAL-specific API provided by the underlying FileSystem, so we can
> abstract it sufficiently to not require a direct dependency on Ozone or
> S3A or any non-HDFS filesystem.
>
>> On Wed, Mar 15, 2023 at 12:31 PM Tak Lon (Stephen) Wu wrote:
>>
>> Hi team,
>> [...]
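The probe described above can be sketched as follows. The real types are `org.apache.hadoop.fs.StreamCapabilities` and HBase's `StreamLacksCapabilityException`; the minimal self-contained stand-ins here just mirror that contract so the fail-fast pattern is visible:

```java
// Stand-in for org.apache.hadoop.fs.StreamCapabilities: a stream reports
// whether it supports a named capability such as "hflush" or "hsync".
interface StreamCapabilities {
    String HFLUSH = "hflush";
    String HSYNC = "hsync";
    boolean hasCapability(String capability);
}

// Stand-in for HBase's StreamLacksCapabilityException, carrying the name
// of the missing capability.
class StreamLacksCapabilityException extends Exception {
    StreamLacksCapabilityException(String capability) {
        super(capability);
    }
}

class WalStreamProbe {
    /**
     * Fail fast at WAL creation if the output stream cannot provide the
     * durability guarantees the WAL requires.
     */
    static void checkWalCapabilities(StreamCapabilities out)
            throws StreamLacksCapabilityException {
        if (!out.hasCapability(StreamCapabilities.HFLUSH)) {
            throw new StreamLacksCapabilityException(StreamCapabilities.HFLUSH);
        }
        if (!out.hasCapability(StreamCapabilities.HSYNC)) {
            throw new StreamLacksCapabilityException(StreamCapabilities.HSYNC);
        }
    }
}
```

Because the probe goes through the generic capability interface, it works the same for HDFS, Ozone, or any other filesystem whose streams advertise hflush/hsync.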
Re: [DISCUSS] Add Ozone as dependency to hbase-asyncfs ?
Does Hadoop have a marker interface that lets an application know its FileSystem instances can support hsync/hflush? Ideally all we should need to do is test with instanceof for that marker and use reflection (in the worst case) to get a handle to the hsync or hflush method, and then call it. This approach should be taken wherever we have a requirement to use a special WAL-specific API provided by the underlying FileSystem, so we can abstract it sufficiently to not require a direct dependency on Ozone or S3A or any non-HDFS filesystem.

> On Wed, Mar 15, 2023 at 12:31 PM Tak Lon (Stephen) Wu wrote:
>
> Hi team,
> [...]

--
Best regards,
Andrew

Unrest, ignorance distilled, nihilistic imbeciles -
It's what we’ve earned
Welcome, apocalypse, what’s taken you so long?
Bring us the fitting end that we’ve been counting on
  - A23, Welcome, Apocalypse
[DISCUSS] Add Ozone as dependency to hbase-asyncfs ?
Hi team,

Recently, Wei-Chiu and I have been discussing whether HBase can use Ozone as another storage backend for the WAL (see the hsync and hflush JIRAs [1]) and for HFiles. For HFiles it is already pluggable, by configuring the file system to use the Ozone File System.

But we found that the WAL is a bit different. In particular, RecoverLeaseFSUtils#recoverFileLease [2] checks whether the file system is an instance of HDFS, so that WAL recovery can execute file lease recovery after an RS crash. Here, if we would like to add Ozone, it does not matter whether we import it as a direct dependency to perform similar lease recovery or reach it via reflection by class name in a plaintext String; we still need to somehow introduce Ozone as another supported file system. (We can discuss how to implement this better as well.)

We also found other places, e.g. FSUtils and HFileSystem, that use DistributedFileSystem, but it should be possible to move them into either hbase-asyncfs or a new FS-related component to separate the use of different supported file systems.

So, we're wondering if anyone would have any objections to adding Ozone as a dependency to hbase-asyncfs? Or if you have a better idea how this could be done without adding Ozone as a dependency, please feel free to comment on this thread.

[1] Ozone is working on support for hsync and hflush,
https://issues.apache.org/jira/browse/HDDS-7593,
https://issues.apache.org/jira/browse/HDDS-4353
[2] RecoverLeaseFSUtils#recoverFileLease,
https://github.com/apache/hbase/blob/master/hbase-asyncfs/src/main/java/org/apache/hadoop/hbase/util/RecoverLeaseFSUtils.java#L53-L63

Thanks,
Stephen
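The "reflection by class name in a plaintext String" alternative mentioned above can be sketched without any Ozone (or Hadoop) compile-time dependency. The class names below are the real HDFS/Ozone ones, but the surrounding `FsProbe` helper is a hypothetical illustration of the pattern, not existing HBase code:

```java
// Hedged sketch: detect optional filesystem implementations by probing the
// classpath, so hbase-asyncfs need not declare them as build dependencies.
class FsProbe {
    /** Returns true if the named class can be loaded from the classpath. */
    static boolean isOnClasspath(String className) {
        try {
            // initialize=false: just check presence, run no static initializers
            Class.forName(className, false, FsProbe.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException | LinkageError e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Whether these print true depends entirely on what jars are present.
        System.out.println(
            isOnClasspath("org.apache.hadoop.hdfs.DistributedFileSystem"));
        System.out.println(
            isOnClasspath("org.apache.hadoop.fs.ozone.OzoneFileSystem"));
    }
}
```

The trade-off the thread discusses still applies: reflection avoids the dependency, but the class-name strings silently break if an upstream project renames the class, which is why a Hadoop-level interface is the preferred long-term fix.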
[jira] [Created] (HBASE-27721) LEAK: RefCnt.release() was not called before it's garbage-collected in regionserver
chiranjeevi created HBASE-27721:
---
Summary: LEAK: RefCnt.release() was not called before it's garbage-collected in regionserver
Key: HBASE-27721
URL: https://issues.apache.org/jira/browse/HBASE-27721
Project: HBase
Issue Type: Bug
Components: regionserver
Affects Versions: 2.5.2
Reporter: chiranjeevi

2023-03-15 07:15:12,119 ERROR [RpcServer.default.FPBQ.Fifo.handler=50,queue=2,port=16020] util.ResourceLeakDetector: LEAK: RefCnt.release() was not called before it's garbage-collected. See https://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records:
Created at:
org.apache.hadoop.hbase.nio.RefCnt.<init>(RefCnt.java:59)
org.apache.hadoop.hbase.nio.RefCnt.create(RefCnt.java:54)
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.<init>(MemStoreLABImpl.java:108)
sun.reflect.GeneratedConstructorAccessor34.newInstance(Unknown Source)
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
java.lang.reflect.Constructor.newInstance(Constructor.java:423)
org.apache.hadoop.hbase.util.ReflectionUtils.instantiate(ReflectionUtils.java:54)
org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(ReflectionUtils.java:42)
org.apache.hadoop.hbase.regionserver.MemStoreLAB.newInstance(MemStoreLAB.java:116)
org.apache.hadoop.hbase.regionserver.SegmentFactory.createMutableSegment(SegmentFactory.java:81)
org.apache.hadoop.hbase.regionserver.AbstractMemStore.resetActive(AbstractMemStore.java:93)
org.apache.hadoop.hbase.regionserver.AbstractMemStore.<init>(AbstractMemStore.java:83)
org.apache.hadoop.hbase.regionserver.DefaultMemStore.<init>(DefaultMemStore.java:78)
sun.reflect.GeneratedConstructorAccessor62.newInstance(Unknown Source)
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
java.lang.reflect.Constructor.newInstance(Constructor.java:423)
org.apache.hadoop.hbase.util.ReflectionUtils.instantiate(ReflectionUtils.java:54)
org.apache.hadoop.hbase.util.ReflectionUtils.newInstance(ReflectionUtils.java:91)
org.apache.hadoop.hbase.regionserver.HStore.getMemstore(HStore.java:371)
org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:280)
org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:6359)
org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1114)
org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:)
java.util.concurrent.FutureTask.run(FutureTask.java:266)
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
java.util.concurrent.FutureTask.run(FutureTask.java:266)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
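The contract the leak detector above enforces can be illustrated with a minimal reference counter. This is a self-contained sketch of the general technique, not HBase's actual `org.apache.hadoop.hbase.nio.RefCnt` (which builds on Netty's `AbstractReferenceCounted`):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal reference counter: every create/retain must be paired with a
// release, or the deallocator never runs and the backing resource leaks,
// which is exactly what the ResourceLeakDetector report flags.
class RefCnt {
    private final AtomicInteger cnt = new AtomicInteger(1); // created held
    private final Runnable deallocator;

    RefCnt(Runnable deallocator) {
        this.deallocator = deallocator;
    }

    /** Take an additional reference. */
    RefCnt retain() {
        cnt.incrementAndGet();
        return this;
    }

    /** Drop a reference; returns true when the last one frees the resource. */
    boolean release() {
        int remaining = cnt.decrementAndGet();
        if (remaining == 0) {
            deallocator.run();
            return true;
        }
        if (remaining < 0) {
            throw new IllegalStateException("released more times than retained");
        }
        return false;
    }
}
```

If an object like this becomes garbage-collectible while its count is still above zero, no code path ever called the final `release()`, which is the bug class the JIRA reports against MemStoreLABImpl.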
[jira] [Created] (HBASE-27720) TestClusterRestartFailover is flakey
Nick Dimiduk created HBASE-27720:
Summary: TestClusterRestartFailover is flakey
Key: HBASE-27720
URL: https://issues.apache.org/jira/browse/HBASE-27720
Project: HBase
Issue Type: Task
Components: test
Affects Versions: 2.5.4
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk

I'm seeing failures like this in PR,

{noformat}
[ERROR] Failures:
[ERROR] org.apache.hadoop.hbase.master.TestClusterRestartFailoverSplitWithoutZk.test
[ERROR] Run 1: TestClusterRestartFailoverSplitWithoutZk>TestClusterRestartFailover.test:143 serverNode should be deleted after SCP finished expected null, but was:
[ERROR] Run 2: TestClusterRestartFailoverSplitWithoutZk>TestClusterRestartFailover.test:147 serverCrashSubmittedCount(8) should be equal expected:<4> but was:<8>
[ERROR] Run 3: TestClusterRestartFailoverSplitWithoutZk>TestClusterRestartFailover.test:147 serverCrashSubmittedCount(12) should be equal expected:<4> but was:<12>
{noformat}

Looks like subsequent runs would have passed, but for the firm metric count assertion.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Resolved] (HBASE-27715) Refactoring the long tryAdvanceEntry method in WALEntryStream
[ https://issues.apache.org/jira/browse/HBASE-27715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Duo Zhang resolved HBASE-27715.
---
Hadoop Flags: Reviewed
Resolution: Fixed

Pushed to master and branch-2.

Thanks [~heliangjun] and [~zghao] for reviewing!

> Refactoring the long tryAdvanceEntry method in WALEntryStream
> -
>
> Key: HBASE-27715
> URL: https://issues.apache.org/jira/browse/HBASE-27715
> Project: HBase
> Issue Type: Task
> Components: Replication
> Reporter: Duo Zhang
> Assignee: Duo Zhang
> Priority: Major
> Fix For: 2.6.0, 3.0.0-alpha-4
>
> Let's make it more readable and add more logs, for debugging.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Created] (HBASE-27719) Fix surefire corrupted channel
Nick Dimiduk created HBASE-27719:
Summary: Fix surefire corrupted channel
Key: HBASE-27719
URL: https://issues.apache.org/jira/browse/HBASE-27719
Project: HBase
Issue Type: Task
Components: build, test
Reporter: Nick Dimiduk

Testing, at least with hbase-server, emits a warning,

{noformat}
[WARNING] Corrupted channel by directly writing to native stream in forked JVM 1. See FAQ web page and the dump file /home/jenkins/jenkins-home/workspace/Base-PreCommit-GitHub-PR_PR-5036/yetus-jdk8-hadoop2-check/src/hbase-server/target/surefire-reports/2023-03-14T13-20-30_932-jvmRun1.dumpstream
{noformat}

I cannot tell if this is causing maven to recognize the test run as a failure, though in this case, the test run was marked as failing, with two tests rerun and passing on their second attempt.

Surefire apparently uses stdio to communicate with maven, so when our code under test produces output on the channel, it corrupts that communication.
https://maven.apache.org/surefire/maven-surefire-plugin/examples/process-communication.html#the-tcp-ip-communication-channel

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
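Per the Surefire process-communication page linked above, one candidate fix is switching the fork channel from stdio to TCP/IP so test output cannot corrupt the control stream. A sketch of the plugin configuration follows; the `forkNode` implementation class is the one named in those docs (Surefire 3.0.0-M5+), and it should be verified against the Surefire version the build actually uses:

```xml
<!-- Hedged sketch: use Surefire's TCP/IP fork channel instead of stdio,
     so System.out writes from code under test cannot corrupt the
     plugin <-> forked-JVM control channel. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <forkNode implementation="org.apache.maven.plugin.surefire.extensions.SurefireForkNodeFactory"/>
  </configuration>
</plugin>
```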