[jira] [Created] (HBASE-27948) Report memstore on-heap and off-heap size as jmx metrics in sub=Memory bean

2023-06-22 Thread Jing Yu (Jira)
Jing Yu created HBASE-27948:
---

 Summary: Report memstore on-heap and off-heap size as jmx metrics 
in sub=Memory bean
 Key: HBASE-27948
 URL: https://issues.apache.org/jira/browse/HBASE-27948
 Project: HBase
  Issue Type: Improvement
Reporter: Jing Yu
Assignee: Jing Yu


Currently we only report the "memStoreSize" jmx metric in the sub=Memory bean. 
The RS UI shows "Memstore On-Heap Size" and "Memstore Off-Heap Size". It would 
be useful to report them in JMX as well.

In addition, the "memStoreSize" metric under sub=Memory is 0 for some reason 
(while the one under sub=Server is not). Need to do some digging to see if it 
is a bug.
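
For reference, a minimal sketch of how one might dump the sub=Memory bean while 
digging into this. The bean name is assumed from HBase's usual Hadoop metrics2 
naming and should be verified against /jmx on a live RS; run it inside a JVM 
that hosts a RegionServer:
{code:java}
import java.lang.management.ManagementFactory;
import javax.management.MBeanAttributeInfo;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class DumpMemoryBean {
  public static void main(String[] args) throws Exception {
    MBeanServer server = ManagementFactory.getPlatformMBeanServer();
    // Bean name assumed from the metrics2 convention; confirm via /jmx.
    ObjectName memory =
      new ObjectName("Hadoop:service=HBase,name=RegionServer,sub=Memory");
    for (MBeanAttributeInfo attr : server.getMBeanInfo(memory).getAttributes()) {
      System.out.println(attr.getName() + " = "
        + server.getAttribute(memory, attr.getName()));
    }
  }
}
{code}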



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] Move default Hadoop version to 3.3.x

2023-06-22 Thread Wei-Chiu Chuang
I am +1 to use a feature branch.

On Tue, Jun 20, 2023 at 10:20 AM Tak Lon (Stephen) Wu 
wrote:

> or maybe we create a new feature branch hadoop-33-ozone that has these
> interfaces and ozone related support, then we put all the feature changes
> into this feature branch and merge later?
>
> The problem I see is that it's hard and very confusing to maintain two
> hadoop3 profiles, and I can see sooner or later hadoop 3.2.x could be EOL.
>
> Thanks,
> Stephen
>
>
>
> On Fri, Jun 16, 2023 at 11:32 AM Viraj Jasani  wrote:
>
> > How about using a new hadoop 3.3 profile for features that are explicitly
> > present in 3.3 (like FileSystem changes)? When the time comes, we switch to
> > the 3.3 profile by default and drop the old hadoop 3 profile that supports
> > 3.2.x versions as of today?
> >
> >
> > On Fri, Jun 16, 2023 at 7:11 AM 张铎(Duo Zhang) 
> > wrote:
> >
> >> In general, in HBase, we will use the last patch release of the oldest
> >> supported hadoop release line as our default hadoop dependency.
> >>
> >> For example, since we claim that 3.x will support hadoop 3.2.x and
> >> 3.3.x, then we will declare the default hadoop version as 3.2.4.
> >>
> >> I think we can discuss whether to move up to 3.3.6 as the default
> >> version, if there are no compatibility issues when communicating with
> >> 3.2.x hadoop clusters.
> >>
> >> But if we want to use the features which are only provided in 3.3.6,
> >> then we should be careful as this means our users can not build hbase
> >> with 3.2.x any more, which means we have dropped the support for
> >> 3.2.x.
> >>
> >> Thanks.
> >>
> >> Wei-Chiu Chuang  wrote on Fri, Jun 16, 2023 at 06:03:
> >> >
> >> > Hi HBase devs,
> >> >
> >> > Over the past few years HBase has supported Hadoop 3.2.x as its default
> >> > version, but it also works on Hadoop 3.3.x.
> >> >
> >> > I'm wondering if it makes sense to move the current default
> >> > hadoop.version from 3.2.4 to 3.3.x.
> >> >
> >> > Why?
> >> >
> >> > 1. From a stability and security point of view, Hadoop 3.3 is the most
> >> > up-to-date release line. And all HBase tests pass using 3.3.x. There
> >> > hasn't been a new Hadoop 3.2.x release for over a year.
> >> >
> >> > 2. We have a feature (using HBase on Ozone) that depends on an API in
> >> > Hadoop 3.3.6 that is not yet in any 3.2 release line. Moving the default
> >> > hadoop version to 3.3.6 will save a lot of hassle.
> >> >
> >> > Thoughts?
> >> >
> >> > Best,
> >> > Weichiu
> >>
> >
>
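
For reference, overriding the Hadoop dependency at build time is already
possible without a new profile. A sketch of the relevant invocations (property
and profile names as currently used in the poms; worth double-checking per
branch):

  # master (hadoop3-only): override the default Hadoop 3 version
  mvn clean install -DskipTests -Dhadoop-three.version=3.3.6

  # branch-2: select the hadoop-3.0 profile, then override the version
  mvn clean install -DskipTests -Dhadoop.profile=3.0 -Dhadoop-three.version=3.3.6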


[jira] [Resolved] (HBASE-27904) A random data generator tool leveraging bulk load.

2023-06-22 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved HBASE-27904.
--
Fix Version/s: (was: 2.6.0)
 Hadoop Flags: Reviewed
   Resolution: Fixed

> A random data generator tool leveraging bulk load.
> --
>
> Key: HBASE-27904
> URL: https://issues.apache.org/jira/browse/HBASE-27904
> Project: HBase
>  Issue Type: New Feature
>  Components: util
>Reporter: Himanshu Gwalani
>Assignee: Himanshu Gwalani
>Priority: Major
> Fix For: 3.0.0-beta-1
>
>
> As of now, there is no data generator tool in HBase leveraging bulk load. 
> Since bulk load skips the client write path, it's much faster to generate data 
> and use it for load/performance tests where client writes are not a mandate.
> {*}Example{*}: Any tooling over HBase that needs x TBs of HBase Table for load 
> testing.
> {*}Requirements{*}:
> 1. Tooling should generate RANDOM data on the fly and should not require any 
> pre-generated data (CSV/XML files) as input.
> 2. Tooling should support pre-split tables (number of splits to be taken as 
> input).
> 3. Data should be UNIFORMLY distributed across all regions of the table.
> *High-level Steps*
> 1. A table will be created (pre-split with the number of splits as input)
> 2. The mapper of a custom Map Reduce job will generate random key-value pairs 
> and ensure that they are equally distributed across all regions of the table.
> 3. 
> [HFileOutputFormat2|https://github.com/apache/hbase/blob/master/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java]
>  will be used to add a reducer to the MR job and create HFiles based on the 
> key-value pairs generated by the mapper. 
> 4. Bulk load those HFiles into the respective regions of the table using 
> [LoadIncrementalHFiles|https://hbase.apache.org/2.2/devapidocs/org/apache/hadoop/hbase/tool/LoadIncrementalHFiles.html]
> *Results*
> We built a POC of this tool in our organization and tested it with an 11-node 
> HBase cluster (running HBase + Hadoop services). The tool 
> generated:
> 1. *100* *GB* of data in *6 minutes*
> 2. *340 GB* of data in *13 minutes*
> 3. *3.5 TB* of data in *3 hours and 10 minutes*
> *Usage*
> hbase org.apache.hadoop.hbase.util.bulkdatagenerator.BulkDataGeneratorTool 
> -mapper-count 100 -table TEST_TABLE_1 -rows-per-mapper 100 -split-count 
> 100 -delete-if-exist -table-options "NORMALIZATION_ENABLED=false"
>  
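
For readers unfamiliar with the bulk-load plumbing the tool builds on, a 
condensed driver sketch of the HFileOutputFormat2 + LoadIncrementalHFiles flow 
described above. Names like BulkGenSketch and RandomKVMapper, the table name, 
and the output path are placeholders, not the tool's actual code; the input 
format/path for the mapper is omitted for brevity:
{code:java}
import java.io.IOException;
import java.util.Random;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
import org.apache.hadoop.hbase.tool.LoadIncrementalHFiles;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class BulkGenSketch {

  // Placeholder mapper: emits one random cell per input record.
  public static class RandomKVMapper
      extends Mapper<LongWritable, Text, ImmutableBytesWritable, KeyValue> {
    private final Random rand = new Random();

    @Override
    protected void map(LongWritable key, Text value, Context ctx)
        throws IOException, InterruptedException {
      byte[] row = Bytes.toBytes(Long.toHexString(rand.nextLong()));
      KeyValue kv = new KeyValue(row, Bytes.toBytes("cf"), Bytes.toBytes("q"),
        Bytes.toBytes(rand.nextLong()));
      ctx.write(new ImmutableBytesWritable(row), kv);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "bulk-data-generator");
    job.setJarByClass(BulkGenSketch.class);
    job.setMapperClass(RandomKVMapper.class);
    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
    job.setMapOutputValueClass(KeyValue.class);
    try (Connection conn = ConnectionFactory.createConnection(conf);
        Table table = conn.getTable(TableName.valueOf("TEST_TABLE_1"));
        RegionLocator locator = conn.getRegionLocator(table.getName());
        Admin admin = conn.getAdmin()) {
      // Wires in the sorting reducer and a partitioner keyed on region
      // boundaries, so each reducer writes the HFiles for one region.
      HFileOutputFormat2.configureIncrementalLoad(job, table, locator);
      Path out = new Path("/tmp/bulkgen");
      FileOutputFormat.setOutputPath(job, out);
      if (job.waitForCompletion(true)) {
        // Atomically hands the generated HFiles to the owning regions.
        new LoadIncrementalHFiles(conf).doBulkLoad(out, admin, table, locator);
      }
    }
  }
}
{code}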



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Branching for 2.6 code line (branch-2.6)

2023-06-22 Thread Bryan Beaudreault
Thanks!

We're looking into one other emergent issue that we uncovered during the
rollout of server side TLS on RegionServers. It seems nettyDirectMemory has
increased substantially when under load with it enabled. Details in
https://issues.apache.org/jira/browse/HBASE-27947.
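
For anyone reproducing the sampling: a minimal sketch, assuming the shaded
netty from hbase-thirdparty is on the classpath (the relocation prefix is
worth verifying for your build):

  // Sample netty's direct memory counter from inside the RS JVM.
  long used = org.apache.hbase.thirdparty.io.netty.util.internal
      .PlatformDependent.usedDirectMemory();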


On Thu, Jun 22, 2023 at 12:02 PM 张铎(Duo Zhang) 
wrote:

> PR is ready
>
> https://github.com/apache/hbase/pull/5305
>
> PTAL.
>
> Thanks.
>
张铎(Duo Zhang)  wrote on Thu, Jun 22, 2023 at 21:40:
> >
> > Ah, missed your last comment on HBASE-27782.
> >
> > Let me take a look.
> >
> > Netty has some rules about how the exceptions are passed through the
> > pipeline (especially the order, forward or backward...) but honestly I
> > always forget it just a day later after I finished the code...
> >
> > > Bryan Beaudreault  wrote on Sat, Jun 17, 2023 at 00:43:
> > >
> > > In terms of TLS:
> > >
> > > - All of our clients (many thousands) in production are using the
> > > NettyRpcConnection with TLS enabled. However, these clients are currently
> > > connecting to the RegionServer/HMaster through an haproxy process local to
> > > each server which handles SSL termination. So not quite end-to-end yet.
> > > - On the server side, most of our QA environment (a thousand regionservers
> > > and ~200 hmasters) are running it. So these are accepting TLS from clients
> > > and using TLS for intra-cluster communication.
> > >
> > > The migration is tricky for us due to the scale and the fact that we need
> > > to migrate off haproxy at the same time. Hopefully we should have some of
> > > production running end-to-end TLS within the next month or so.
> > >
> > > From what we've seen in QA so far, there have not been any major issues. We
> > > also couldn't discern any performance issues in testing, though we were
> > > comparing against our legacy haproxy setup and can't really compare against
> > > kerberos.
> > >
> > > One outstanding issue is https://issues.apache.org/jira/browse/HBASE-27782,
> > > which we still see periodically. It doesn't seem to cause actual issues,
> > > since the RpcClient still handles it gracefully, but it does cause noise
> > > and may have implications.
> > >
> > > On Fri, Jun 16, 2023 at 11:41 AM 张铎(Duo Zhang) 
> > > wrote:
> > >
> > > > So any updates here?
> > > >
> > > > Do we have any good news about the TLS usage in production so we can
> > > > move forward on release 2.6.x?
> > > >
> > > > Thanks.
> > > >
> > > > > Andrew Purtell  wrote on Fri, Apr 7, 2023 at 09:37:
> > > > >
> > > > > Agreed, that sounds like a good plan.
> > > > >
> > > > > > On Wed, Mar 29, 2023 at 7:31 AM 张铎(Duo Zhang) <palomino...@gmail.com>
> > > > > > wrote:
> > > > >
> > > > > > I think we could follow the old pattern when we cut a new release
> > > > > > branch. That is, after the new release branch is cut and the new
> > > > > > minor release is out, we will do a final release of the oldest
> > > > > > release line and then mark it as EOL.
> > > > > >
> > > > > > So here, I think once we cut branch-2.6 and release 2.6.0, we can do
> > > > > > a final release for 2.4.x and mark 2.4.x as EOL.
> > > > > >
> > > > > > Thanks.
> > > > > >
> > > > > > > Bryan Beaudreault  wrote on Mon, Mar 27, 2023 at 09:57:
> > > > > >
> > > > > > > Primary development on hbase-backup and TLS is complete. There are
> > > > > > > a couple of minor things I may want to add to TLS in the future,
> > > > > > > such as pluggable cert verification. But those are not needed for
> > > > > > > the initial release IMO.
> > > > > > >
> > > > > > > We are almost ready to integrate hbase-backup in production. We've
> > > > > > > fixed a few minor things (all committed) but otherwise it's worked
> > > > > > > well so far in tests.
> > > > > > >
> > > > > > > We are a bit delayed in integrating TLS. I'm hopeful it will happen
> > > > > > > in the next 2-3 months. It's a big project for us, so not quick,
> > > > > > > but definitely on the roadmap.
> > > > > > >
> > > > > > > It seems like cloudera may be closer to integrating TLS in
> > > > > > > production. Balazs recently filed and fixed HBASE-27673 related to
> > > > > > > mTLS. Maybe he can chime in on his status, or let me know if I am
> > > > > > > totally off base :)
> > > > > > >
> > > > > > > On Sun, Mar 26, 2023 at 9:25 PM Andrew Purtell <andrew.purt...@gmail.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Before we open a new code line should we discuss EOL of 2.4?
> > > > > > > > After the first 2.6 release? It's not required of course but cuts
> > > > > > > > down the amount of labor to have two 2.x code lines (presumably,
> > > > > > > > one as stable and one as next) rather than three. Perhaps even
> > > > > > > > before that, should we move the stable pointer to the latest 2.5
> > > > > > > > release?
> > > > > > > >
> > > > > > > > >
> > > > > > > > > On Mar 26, 2023, at 5

[jira] [Created] (HBASE-27947) RegionServer OOM under load when TLS is enabled

2023-06-22 Thread Bryan Beaudreault (Jira)
Bryan Beaudreault created HBASE-27947:
-

 Summary: RegionServer OOM under load when TLS is enabled
 Key: HBASE-27947
 URL: https://issues.apache.org/jira/browse/HBASE-27947
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Bryan Beaudreault


We are rolling out the server-side TLS settings to all of our QA clusters. This 
has mostly gone fine, except on 1 cluster. Most clusters, including this one, 
have a sampled {{nettyDirectMemory}} usage of about 30-100mb. This cluster 
tends to get bursts of traffic, in which case it would typically jump to 
400-500mb. Again this is sampled, so it could have been higher than that. When 
we enabled SSL on this cluster, we started seeing bursts up to at least 4gb. 
This exceeded our {{-XX:MaxDirectMemorySize}}, which caused OOMs and general 
chaos on the cluster.
 
We've gotten it under control a little bit by setting 
{{-Dorg.apache.hbase.thirdparty.io.netty.maxDirectMemory}} and 
{{-Dorg.apache.hbase.thirdparty.io.netty.tryReflectionSetAccessible}}. We've 
set netty's maxDirectMemory to be approximately equal to 
({{-XX:MaxDirectMemorySize}} - BucketCacheSize - ReservoirSize). Now we are 
seeing netty's own OutOfDirectMemoryError, which is still causing pain for 
clients but at least insulates the other components of the regionserver.
 
We're still digging into exactly why this is happening. The cluster clearly has 
a bad access pattern, but it doesn't seem like SSL should increase the memory 
footprint by 5-10x like we're seeing.
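 
For the record, the resulting flag combination looks roughly like the following 
in hbase-env.sh. The sizes are illustrative only, not a recommendation; the 
netty value assumes ~4g is reserved for the bucket cache and reservoir:
{code}
# Illustrative sizes -- derive yours from MaxDirectMemorySize minus
# BucketCacheSize minus ReservoirSize, as described above.
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
  -XX:MaxDirectMemorySize=8g \
  -Dorg.apache.hbase.thirdparty.io.netty.maxDirectMemory=4294967296 \
  -Dorg.apache.hbase.thirdparty.io.netty.tryReflectionSetAccessible=true"
{code}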



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Branching for 2.6 code line (branch-2.6)

2023-06-22 Thread Duo Zhang
PR is ready

https://github.com/apache/hbase/pull/5305

PTAL.

Thanks.

张铎(Duo Zhang)  wrote on Thu, Jun 22, 2023 at 21:40:
>
> Ah, missed your last comment on HBASE-27782.
>
> Let me take a look.
>
> Netty has some rules about how the exceptions are passed through the
> pipeline (especially the order, forward or backward...) but honestly I
> always forget it just a day later after I finished the code...
>
> > Bryan Beaudreault  wrote on Sat, Jun 17, 2023 at 00:43:
> >
> > In terms of TLS:
> >
> > - All of our clients (many thousands) in production are using the
> > NettyRpcConnection with TLS enabled. However, these clients are currently
> > connecting to the RegionServer/HMaster through an haproxy process local to
> > each server which handles SSL termination. So not quite end-to-end yet.
> > - On the server side, most of our QA environment (a thousand regionservers
> > and ~200 hmasters) are running it. So these are accepting TLS from clients
> > and using TLS for intra-cluster communication.
> >
> > The migration is tricky for us due to the scale and the fact that we need
> > to migrate off haproxy at the same time. Hopefully we should have some of
> > production running end-to-end TLS within the next month or so.
> >
> > From what we've seen in QA so far, there have not been any major issues. We
> > also couldn't discern any performance issues in testing, though we were
> > comparing against our legacy haproxy setup and can't really compare against
> > kerberos.
> >
> > One outstanding issue is https://issues.apache.org/jira/browse/HBASE-27782,
> > which we still see periodically. It doesn't seem to cause actual issues,
> > since the RpcClient still handles it gracefully, but it does cause noise
> > and may have implications.
> >
> > On Fri, Jun 16, 2023 at 11:41 AM 张铎(Duo Zhang) 
> > wrote:
> >
> > > So any updates here?
> > >
> > > Do we have any good news about the TLS usage in production so we can
> > > move forward on release 2.6.x?
> > >
> > > Thanks.
> > >
> > > > Andrew Purtell  wrote on Fri, Apr 7, 2023 at 09:37:
> > > >
> > > > Agreed, that sounds like a good plan.
> > > >
> > > > On Wed, Mar 29, 2023 at 7:31 AM 张铎(Duo Zhang) 
> > > wrote:
> > > >
> > > > > I think we could follow the old pattern when we cut a new release
> > > > > branch. That is, after the new release branch is cut and the new minor
> > > > > release is out, we will do a final release of the oldest release line
> > > > > and then mark it as EOL.
> > > > >
> > > > > So here, I think once we cut branch-2.6 and release 2.6.0, we can do a
> > > > > final release for 2.4.x and mark 2.4.x as EOL.
> > > > >
> > > > > Thanks.
> > > > >
> > > > > > Bryan Beaudreault  wrote on Mon, Mar 27, 2023 at 09:57:
> > > > >
> > > > > > Primary development on hbase-backup and TLS is complete. There are a
> > > > > > couple of minor things I may want to add to TLS in the future, such
> > > > > > as pluggable cert verification. But those are not needed for the
> > > > > > initial release IMO.
> > > > > >
> > > > > > We are almost ready to integrate hbase-backup in production. We've
> > > > > > fixed a few minor things (all committed) but otherwise it's worked
> > > > > > well so far in tests.
> > > > > >
> > > > > > We are a bit delayed in integrating TLS. I'm hopeful it will happen
> > > > > > in the next 2-3 months. It's a big project for us, so not quick, but
> > > > > > definitely on the roadmap.
> > > > > >
> > > > > > It seems like cloudera may be closer to integrating TLS in
> > > > > > production. Balazs recently filed and fixed HBASE-27673 related to
> > > > > > mTLS. Maybe he can chime in on his status, or let me know if I am
> > > > > > totally off base :)
> > > > > >
> > > > > > On Sun, Mar 26, 2023 at 9:25 PM Andrew Purtell <andrew.purt...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Before we open a new code line should we discuss EOL of 2.4? After
> > > > > > > the first 2.6 release? It's not required of course but cuts down
> > > > > > > the amount of labor to have two 2.x code lines (presumably, one as
> > > > > > > stable and one as next) rather than three. Perhaps even before
> > > > > > > that, should we move the stable pointer to the latest 2.5 release?
> > > > > > >
> > > > > > > >
> > > > > > > > On Mar 26, 2023, at 5:59 PM, 张铎  wrote:
> > > > > > > >
> > > > > > > > Bump.
> > > > > > > >
> > > > > > > > I believe the mTLS and backup related code have all been
> > > > > > > > finished on branch-2?
> > > > > > > >
> > > > > > > > Are there any other things which block us making the branch-2.6
> > > > > > > > branch?
> > > > > > > >
> > > > > > > > Thanks.
> > > > > > > >
> > > > > > > > Mallikarjun  wrote on Mon, Oct 17, 2022 at 02:09:
> > > > > > > >
> > > > > > >> On hbase-backup, we have been using it in production for more
> > > > > > >> than 1 year. I can vouch for it to be stable enough to be in a
> > > > > > >> release version so that more
> > 

[jira] [Resolved] (HBASE-27936) NPE in StoreFileReader.passesGeneralRowPrefixBloomFilter()

2023-06-22 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-27936.
---
Fix Version/s: 2.6.0
   2.5.6
   3.0.0-beta-1
 Hadoop Flags: Reviewed
   Resolution: Fixed

Pushed to branch-2.5+.

Thanks [~vjasani] for reviewing!

> NPE in StoreFileReader.passesGeneralRowPrefixBloomFilter()
> --
>
> Key: HBASE-27936
> URL: https://issues.apache.org/jira/browse/HBASE-27936
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: Liangjun He
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 2.6.0, 2.5.6, 3.0.0-beta-1
>
>
> When executing itbll, we encountered the following NPE:
> {code:java}
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileReader.passesGeneralRowPrefixBloomFilter(StoreFileReader.java:352)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileReader.passesBloomFilter(StoreFileReader.java:265)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.shouldUseScanner(StoreFileScanner.java:483)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.selectScannersFrom(StoreScanner.java:467)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:320)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:289)
> at 
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.createScanner(Compactor.java:544)
> at 
> org.apache.hadoop.hbase.regionserver.compactions.Compactor$1.createScanner(Compactor.java:269)
> at 
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Compactor.java:358)
> at 
> org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:64)
> at 
> org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:122)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1176)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:2407)
> at 
> org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.doCompaction(CompactSplit.java:667)
> at 
> org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.run(CompactSplit.java:716)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:750)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27946) Introduce HA hdfs+hbase colocated pod definition

2023-06-22 Thread Nick Dimiduk (Jira)
Nick Dimiduk created HBASE-27946:


 Summary: Introduce HA hdfs+hbase colocated pod definition
 Key: HBASE-27946
 URL: https://issues.apache.org/jira/browse/HBASE-27946
 Project: HBase
  Issue Type: Sub-task
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk


Like the hdfs+hbase colocated pod definition but with an HA deployment strategy.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Branching for 2.6 code line (branch-2.6)

2023-06-22 Thread Duo Zhang
Ah, missed your last comment on HBASE-27782.

Let me take a look.

Netty has some rules about how the exceptions are passed through the
pipeline (especially the order, forward or backward...) but honestly I
always forget it just a day later after I finished the code...

Bryan Beaudreault  wrote on Sat, Jun 17, 2023 at 00:43:
>
> In terms of TLS:
>
> - All of our clients (many thousands) in production are using the
> NettyRpcConnection with TLS enabled. However, these clients are currently
> connecting to the RegionServer/HMaster through an haproxy process local to
> each server which handles SSL termination. So not quite end-to-end yet.
> - On the server side, most of our QA environment (a thousand regionservers
> and ~200 hmasters) are running it. So these are accepting TLS from clients
> and using TLS for intra-cluster communication.
>
> The migration is tricky for us due to the scale and the fact that we need
> to migrate off haproxy at the same time. Hopefully we should have some of
> production running end-to-end TLS within the next month or so.
>
> From what we've seen in QA so far, there have not been any major issues. We
> also couldn't discern any performance issues in testing, though we were
> comparing against our legacy haproxy setup and can't really compare against
> kerberos.
>
> One outstanding issue is https://issues.apache.org/jira/browse/HBASE-27782,
> which we still see periodically. It doesn't seem to cause actual issues,
> since the RpcClient still handles it gracefully, but it does cause noise
> and may have implications.
>
> On Fri, Jun 16, 2023 at 11:41 AM 张铎(Duo Zhang) 
> wrote:
>
> > So any updates here?
> >
> > Do we have any good news about the TLS usage in production so we can
> > move forward on release 2.6.x?
> >
> > Thanks.
> >
> > > Andrew Purtell  wrote on Fri, Apr 7, 2023 at 09:37:
> > >
> > > Agreed, that sounds like a good plan.
> > >
> > > On Wed, Mar 29, 2023 at 7:31 AM 张铎(Duo Zhang) 
> > wrote:
> > >
> > > > I think we could follow the old pattern when we cut a new release
> > > > branch. That is, after the new release branch is cut and the new minor
> > > > release is out, we will do a final release of the oldest release line
> > > > and then mark it as EOL.
> > > >
> > > > So here, I think once we cut branch-2.6 and release 2.6.0, we can do a
> > > > final release for 2.4.x and mark 2.4.x as EOL.
> > > >
> > > > Thanks.
> > > >
> > > > Bryan Beaudreault  wrote on Mon, Mar 27, 2023 at 09:57:
> > > >
> > > > > Primary development on hbase-backup and TLS is complete. There are a
> > > > > couple of minor things I may want to add to TLS in the future, such as
> > > > > pluggable cert verification. But those are not needed for the initial
> > > > > release IMO.
> > > > >
> > > > > We are almost ready to integrate hbase-backup in production. We've
> > > > > fixed a few minor things (all committed) but otherwise it's worked
> > > > > well so far in tests.
> > > > >
> > > > > We are a bit delayed in integrating TLS. I'm hopeful it will happen in
> > > > > the next 2-3 months. It's a big project for us, so not quick, but
> > > > > definitely on the roadmap.
> > > > >
> > > > > It seems like cloudera may be closer to integrating TLS in
> > > > > production. Balazs recently filed and fixed HBASE-27673 related to
> > > > > mTLS. Maybe he can chime in on his status, or let me know if I am
> > > > > totally off base :)
> > > > >
> > > > > On Sun, Mar 26, 2023 at 9:25 PM Andrew Purtell <andrew.purt...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Before we open a new code line should we discuss EOL of 2.4? After
> > > > > > the first 2.6 release? It's not required of course but cuts down the
> > > > > > amount of labor to have two 2.x code lines (presumably, one as
> > > > > > stable and one as next) rather than three. Perhaps even before that,
> > > > > > should we move the stable pointer to the latest 2.5 release?
> > > > > >
> > > > > > >
> > > > > > > On Mar 26, 2023, at 5:59 PM, 张铎  wrote:
> > > > > > >
> > > > > > > Bump.
> > > > > > >
> > > > > > > I believe the mTLS and backup related code have all been finished
> > > > > > > on branch-2?
> > > > > > >
> > > > > > > Are there any other things which block us making the branch-2.6
> > > > > > > branch?
> > > > > > >
> > > > > > > Thanks.
> > > > > > >
> > > > > > > Mallikarjun  wrote on Mon, Oct 17, 2022 at 02:09:
> > > > > > >
> > > > > > >> On hbase-backup, we have been using it in production for more
> > > > > > >> than 1 year. I can vouch for it to be stable enough to be in a
> > > > > > >> release version so that more people can use it and polish it
> > > > > > >> further.
> > > > > > >>
> > > > > > >>> On Sun, Oct 16, 2022, 11:25 PM Andrew Purtell <andrew.purt...@gmail.com>
> > > > > > >>> wrote:
> > > > > > >>>
> > > > > > >>> My understanding is some folks evaluating and polishing TLS for
> > > > > > >>> their production are also considering hbase-backup in the

[jira] [Created] (HBASE-27945) Introduce hdfs+hbase colocated pod definition

2023-06-22 Thread Nick Dimiduk (Jira)
Nick Dimiduk created HBASE-27945:


 Summary: Introduce hdfs+hbase colocated pod definition
 Key: HBASE-27945
 URL: https://issues.apache.org/jira/browse/HBASE-27945
 Project: HBase
  Issue Type: Sub-task
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk


Implement a deployment strategy that supports short-circuit reads by forcing 
the data node and region server processes to colocate, deploying them as 
sibling containers within the same pod.
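
A minimal sketch of the shape such a pod definition might take. Image names, 
paths, and volume wiring are placeholders, not the eventual definition; the key 
point is the shared domain-socket volume that short-circuit reads require:
{code}
# Illustrative only -- not the committed pod definition.
apiVersion: v1
kind: Pod
metadata:
  name: hdfs-hbase-colocated
spec:
  containers:
    - name: datanode
      image: example/hadoop:3.3.6          # placeholder image
      args: ["hdfs", "datanode"]
      volumeMounts:
        - name: dn-socket                  # dfs.domain.socket.path lives here
          mountPath: /var/lib/hdfs
    - name: regionserver
      image: example/hbase:2.6.0           # placeholder image
      args: ["hbase", "regionserver", "start"]
      volumeMounts:
        - name: dn-socket                  # same socket dir enables SCR
          mountPath: /var/lib/hdfs
  volumes:
    - name: dn-socket
      emptyDir: {}
{code}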



--
This message was sent by Atlassian Jira
(v8.20.10#820010)