Re: export and import the data

2023-05-04 Thread Davide Vergari
If the data is in HBase tables, you can create a snapshot of each table and
then export it with the ExportSnapshot MapReduce job (it should already be
available on 0.98.x). For data that is not in HBase, you can use distcp.
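
For concreteness, a rough sketch of that workflow follows, assuming the HBase
0.98 client API and the stock ExportSnapshot tool; the table name, snapshot
name, and cluster URIs are placeholders, not anything from this thread:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.snapshot.ExportSnapshot;
import org.apache.hadoop.util.ToolRunner;

public class SnapshotMigrationSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();

    // 1. Take a snapshot of the table on the old cluster
    //    (same idea as: snapshot 'my_table', 'my_table_snap' in the HBase shell).
    HBaseAdmin admin = new HBaseAdmin(conf);
    try {
      admin.snapshot("my_table_snap", TableName.valueOf("my_table"));
    } finally {
      admin.close();
    }

    // 2. Copy the snapshot to the new cluster with the ExportSnapshot MapReduce job
    //    (same idea as: hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot
    //                     -snapshot my_table_snap -copy-to hdfs://new-cluster:8020/hbase).
    int rc = ToolRunner.run(conf, new ExportSnapshot(), new String[] {
        "-snapshot", "my_table_snap", "-copy-to", "hdfs://new-cluster:8020/hbase" });

    // 3. On the new cluster, restore it from the HBase shell:
    //    restore_snapshot 'my_table_snap' (or clone_snapshot to a new table name).
    // 4. Non-HBase HDFS data can be copied separately with distcp, e.g.:
    //    hadoop distcp hdfs://old-cluster:8020/data hdfs://new-cluster:8020/data
    System.exit(rc);
  }
}

Compared with the older Export/Import MapReduce jobs, the snapshot route copies
HFiles at the HDFS level and avoids scanning the whole table through the
regionservers.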

On Thu, 4 May 2023 at 17:13,  wrote:

> Jignesh, how much data? Is the data currently in HBase format?
> Very kindly, Sean
>
>
> > On 05/04/2023 11:03 AM Jignesh Patel  wrote:
> >
> >
> > We are in the process of moving our Hadoop cluster to a new OS; however,
> > we are using a very old version of Hadoop:
> > Hadoop 2.6
> > and HBase 0.98.7.
> >
> > So how do we export and import the data from the cluster with the old OS
> > to the cluster with the new OS? We are trying to keep the same
> > Hadoop/HBase versions.
> >
> > -Jignesh
>


Re: export and import the data

2023-05-04 Thread sck
Jignesh, how much data? Is the data currently in HBase format?

Very kindly, Sean


> On 05/04/2023 11:03 AM Jignesh Patel  wrote:
>
>
> We are in the process of moving our Hadoop cluster to a new OS; however, we
> are using a very old version of Hadoop:
> Hadoop 2.6
> and HBase 0.98.7.
>
> So how do we export and import the data from the cluster with the old OS to
> the cluster with the new OS? We are trying to keep the same Hadoop/HBase
> versions.
>
> -Jignesh


export and import the data

2023-05-04 Thread Jignesh Patel
We are in the process of moving our Hadoop cluster to a new OS; however, we
are using a very old version of Hadoop:
Hadoop 2.6
and HBase 0.98.7.

So how do we export and import the data from the cluster with the old OS to
the cluster with the new OS? We are trying to keep the same Hadoop/HBase
versions.

-Jignesh


Re: Questions about offPeakCompactionTracker

2023-05-04 Thread Duo Zhang
Sure, go ahead and file an issue; no need to rush into changing the actual code yet.
Or you can make the change internally first and test it, then post the results to the issue as well.

章啸  wrote on Thu, May 4, 2023 at 13:35:

> I found that off-peak compaction cannot make full use of the cluster
> resources during off-peak hours to compact the HFiles generated during peak
> hours, because only one off-peak compaction can run at a time.
> This does not seem very reasonable, so I would like to try changing it here
> (remove the static modifier so that there is only one off-peak compaction
> per store at a time), but I do not understand the original intent of this
> design, or whether such a change would cause any problems.
> I think I can create a JIRA and submit my proposed change there (a rough
> sketch of the idea appears after the quoted thread below).
>
> > On May 4, 2023, at 11:25, 张铎  wrote:
> >
> > I went through the git blame:
> >
> > https://github.com/apache/hbase/blob/5998a0f349824adf823f79a52530e97dfc624b92/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/OffPeakCompactions.java
> >
> > The purpose of this AtomicBoolean is essentially to replace some of the
> > logic in that file; the file itself was removed in the HBASE-7437 change:
> >
> > https://github.com/apache/hbase/commit/c9d33bef3f74cc771be1574db191666c2bc043d2#diff-bb21d9a53c6b006a954b4a981483fae7dae1c635298f24d208c6be80df1153a4
> >
> > You can read the comments there for the explanation. The idea is that the
> > number of off-peak compactions is tracked globally, so there can only be
> > one at a time; see the implementation of tryStartOffPeakRequest below it.
> >
> > This code is already ten years old, so if you feel it is no longer
> > appropriate, we can certainly discuss changing it. What specific problem
> > did you run into?
> >
> >  wrote on Sun, April 23, 2023 at 09:30:
> >
> >>
> >> Hi, community experts. I have a question about off-peak compaction. In
> >> HStore there is a static member, which was introduced by HBASE-7437
> >> while fixing a bug from HBASE-7822:
> >>
> >> private static final AtomicBoolean offPeakCompactionTracker = new
> >>   AtomicBoolean();
> >>
> >> Then, when a compaction is requested, the different stores within the
> >> same regionserver have to compete for this offPeakCompactionTracker, so
> >> during off-peak hours only one store at a time can run a compaction with
> >> the off-peak parameter configuration.
> >>
> >> // Normal case - coprocessor is not overriding file selection.
> >> if (!compaction.hasSelection()) {
> >>   boolean isUserCompaction = priority == Store.PRIORITY_USER;
> >>   boolean mayUseOffPeak = offPeakHours.isOffPeakHour()
> >>       && offPeakCompactionTracker.compareAndSet(false, true);
> >>   try {
> >>     compaction.select(this.filesCompacting, isUserCompaction, mayUseOffPeak,
> >>         forceMajor && filesCompacting.isEmpty());
> >>   } catch (IOException e) {
> >>     if (mayUseOffPeak) {
> >>       offPeakCompactionTracker.set(false);
> >>     }
> >>     throw e;
> >>   }
> >>   assert compaction.hasSelection();
> >>   if (mayUseOffPeak && !compaction.getRequest().isOffPeak()) {
> >>     // Compaction policy doesn't want to take advantage of off-peak.
> >>     offPeakCompactionTracker.set(false);
> >>   }
> >> }
> >>
> >>
> >> I have a couple of questions about this:
> >>
> >> 1. Why does off-peak compaction need to be mutually exclusive across
> >>    different stores at the regionserver level? (I could not find any
> >>    related JIRA or design document about this.)
> >> 2. What problems would there be if the static modifier were removed?
> >>
> >>
>
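
To make the proposal in this thread concrete, below is a rough standalone
sketch (illustrative only, not the actual HBase code; the class and method
names are made up for the example) contrasting the current regionserver-wide
gate with the per-store gate suggested above:

import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch: contrasts the current regionserver-wide off-peak gate
// (a static AtomicBoolean shared by every store in the JVM) with the
// per-store gate proposed in this thread.
public class OffPeakGateSketch {

  // Current behaviour: one flag per JVM (regionserver), so at most one store
  // in the whole regionserver can select files with off-peak settings at a time.
  private static final AtomicBoolean rsWideTracker = new AtomicBoolean();

  // Proposed behaviour: one flag per store instance, so each store runs at most
  // one off-peak compaction, but different stores no longer block each other.
  private final AtomicBoolean perStoreTracker = new AtomicBoolean();

  // compareAndSet(false, true) succeeds only for the first caller; any other
  // store requesting at the same moment falls back to normal compaction settings.
  boolean tryClaimOffPeakRsWide() {
    return rsWideTracker.compareAndSet(false, true);
  }

  boolean tryClaimOffPeakPerStore() {
    return perStoreTracker.compareAndSet(false, true);
  }

  // The claim must be released on every exit path (as the quoted try/catch in
  // HStore does), otherwise off-peak compactions would never run again.
  void releaseRsWide() {
    rsWideTracker.set(false);
  }

  void releasePerStore() {
    perStoreTracker.set(false);
  }
}

Whether the per-store variant is acceptable (for example, how many off-peak
compactions could then run concurrently on a busy regionserver) is exactly the
kind of question the proposed JIRA discussion is meant to settle.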


Re: [ANNOUNCE] New HBase committer Nihal Jain

2023-05-04 Thread Nihal Jain
Thank you so much everyone. :)

Appreciate the invitation and thanks again for all the support the team has
given me along the way.

Regards,
Nihal

On Thu, 4 May, 2023, 09:42 ramkrishna vasudevan, <
ramkrishna.s.vasude...@gmail.com> wrote:

> Congratulations !!!
>
> On Thu, May 4, 2023 at 8:39 AM 张铎(Duo Zhang) 
> wrote:
>
> > Congratulations!
> >
> > > Viraj Jasani  wrote on Wed, May 3, 2023 at 23:47:
> >
> > > Congratulations Nihal!! Very well deserved!!
> > >
> > > > On Wed, May 3, 2023 at 5:12 AM Nick Dimiduk  wrote:
> > >
> > > > Hello!
> > > >
> > > > On behalf of the Apache HBase PMC, I am pleased to announce that
> > > > Nihal Jain has accepted the PMC's invitation to become a committer
> > > > on the project. We appreciate all of Nihal's generous contributions
> > > > thus far and look forward to his continued involvement.
> > > >
> > > > Congratulations and welcome, Nihal Jain!
> > > >
> > > > Thanks,
> > > > Nick
> > > >
> > >
> >
>

