Re: How to delete row with Long.MAX_VALUE timestamp

2020-05-12 Thread Bharath Vissapragada
Interesting behavior. Out of curiosity, I just tried this out on my local setup
(master/HEAD) to see whether we can trick HBase into deleting this bad row, and
the following worked for me. I don't know how you ended up with that row,
though (a bad bulk load? Just guessing).

To get a table with a Long.MAX_VALUE timestamp in the first place, I commented
out the pieces of HBase code that override the timestamp with the current
millis on the region server (otherwise I just see the expected behavior of a
current-ms timestamp).

*Step 1: Create a table and generate the problematic row*

hbase(main):002:0> create 't1', 'f'
Created table t1

-- patch hbase to accept Long.MAX_VALUE ts ---

hbase(main):005:0> put 't1', 'row1', 'f:a', 'val', 9223372036854775807
Took 0.0054 seconds

-- make sure the put with the ts is present --
hbase(main):006:0> scan 't1'
ROW                COLUMN+CELL
 row1              column=f:a, timestamp=*9223372036854775807*, value=val
1 row(s)
Took 0.0226 seconds

*Step 2: Hand craft an HFile with the delete marker*

 ...for this row/column, with the max timestamp. [Let me know if you want the
code; I can put it somewhere. I just used the StoreFileWriter utility.]
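
In rough outline, the hand-crafting looks something like this (a simplified,
untested sketch rather than the exact code I ran; StoreFileWriter is an
internal/private API, so the builder details can differ across versions, and
the class name and output file name below are just placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;
import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
import org.apache.hadoop.hbase.regionserver.StoreFileWriter;
import org.apache.hadoop.hbase.util.Bytes;

public class WriteDeleteMarkerHFile {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Write a single-cell HFile under the column family dir we will bulk load from.
    Path out = new Path("file:///tmp/hfiles/f/delete-marker-hfile");
    FileSystem fs = out.getFileSystem(conf);
    StoreFileWriter writer = new StoreFileWriter.Builder(conf, new CacheConfig(conf), fs)
        .withFilePath(out)
        .withFileContext(new HFileContextBuilder().build())
        .build();
    // Delete marker for row1/f:a at Long.MAX_VALUE, matching the stuck cell.
    writer.append(new KeyValue(Bytes.toBytes("row1"), Bytes.toBytes("f"),
        Bytes.toBytes("a"), Long.MAX_VALUE, KeyValue.Type.Delete));
    writer.close();
  }
}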

-- dump the contents of hfile using the utility ---

$ bin/hbase hfile -f file:///tmp/hfiles/f/bf84f424544f4675880494e09b750ce8 -p
..
Scanned kv count -> 1
K: row1/f:a/LATEST_TIMESTAMP/Delete/vlen=0/seqid=0 V:   <-- Delete marker

*Step 3: Bulk load this HFile with the delete marker*

bin/hbase completebulkload file:///tmp/hfiles t1
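
(If you'd rather drive the load programmatically than via the shell command,
something along these lines should work too -- again only a sketch; it uses the
LoadIncrementalHFiles API from the 1.x/2.x lines, which newer branches
deprecate in favour of BulkLoadHFiles, and the class name is a placeholder:)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles; // o.a.h.hbase.tool.* on 2.x+

public class BulkLoadDeleteMarker {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName tn = TableName.valueOf("t1");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin();
         Table table = conn.getTable(tn);
         RegionLocator locator = conn.getRegionLocator(tn)) {
      // Same effect as "bin/hbase completebulkload file:///tmp/hfiles t1".
      new LoadIncrementalHFiles(conf).doBulkLoad(new Path("file:///tmp/hfiles"),
          admin, table, locator);
    }
  }
}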

*Step 4: Make sure the delete marker is inserted correctly.*

hbase(main):001:0> scan 't1'
..

0 row(s)
Took 0.1387 seconds

-- Raw scan to make sure the delete marker is inserted and nothing funky is
happening ---

hbase(main):003:0> scan 't1', {RAW=>true}
ROW                COLUMN+CELL
 row1              column=f:a, timestamp=9223372036854775807, type=Delete
 row1              column=f:a, timestamp=9223372036854775807, value=val
1 row(s)
Took 0.0044 seconds

Thoughts?

On Tue, May 12, 2020 at 2:00 PM Alexander Batyrshin <0x62...@gmail.com>
wrote:

> Table is ~ 10TB SNAPPY data. I don’t have such a big time window on
> production for re-inserting all data.
>
> I don’t know how we got those cells. I can only assume that this is
> phoenix and/or replaying from WAL after region server crash.
>
> > On 12 May 2020, at 18:25, Wellington Chevreuil <
> wellington.chevre...@gmail.com> wrote:
> >
> > How large is this table? Can you afford re-insert all current data on a
> > new, temp table? If so, you could write a mapreduce job that scans this
> > table and rewrite all its cells to this new, temp table. I had verified
> > that 1.4.10 does have the timestamp replacing logic here:
> >
> https://github.com/apache/hbase/blob/branch-1.4/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java#L3395
> >
> > So if you re-insert all this table cells into a new one, the timestamps
> > would be inserted correctly and you would then be able to delete those.
> > Now, how those cells managed to get inserted with max timestamp? Was this
> > cluster running on an old version that then got upgraded to 1.4.10?
> >
> >
> > On Tue, 12 May 2020 at 13:49, Alexander Batyrshin <0x62...@gmail.com>
> > wrote:
> >
> >> Any ideas how to delete these rows?
> >>
> >> I see only this way:
> >> - backup data from region that contains “damaged” rows
> >> - close region
> >> - remove region files from HDFS
> >> - assign region
> >> - copy needed rows from backup to recreated region
> >>
> >>> On 30 Apr 2020, at 21:00, Alexander Batyrshin <0x62...@gmail.com>
> wrote:
> >>>
> >>> The same effect for CF:
> >>>
> >>> d =
> >>
> org.apache.hadoop.hbase.client.Delete.new("\x0439d58wj434dd".to_s.to_java_bytes)
> >>> d.deleteFamily("d".to_s.to_java_bytes,
> >> 9223372036854775807.to_java(Java::long))
> >>> table.delete(d)
> >>>
> >>> ROW
> COLUMN+CELL
> >>> \x0439d58wj434ddcolumn=d:,
> >> timestamp=1588269277879, type=DeleteFamily
> >>>
> >>>
>  On 29 Apr 2020, at 18:30, Wellington Chevreuil <wellington.chevre...@gmail.com>
> >> wrote:
> 
>  Well, it's weird that puts with such TS values were allowed, according
> >> to
>  current code state. Can you afford delete the whole CF for those rows?
> 
>  On Wed, 29 Apr 2020 at 14:41, junhyeok park <runnerren...@gmail.com>
>  wrote:
> 
> > I've been through the same thing. I use 2.2.0
> >
> > On Wed, 29 Apr 2020 at 

Re: How to delete row with Long.MAX_VALUE timestamp

2020-05-12 Thread Alexander Batyrshin
The table is ~10 TB of SNAPPY-compressed data. I don't have that big a time
window on production for re-inserting all the data.

I don't know how we got those cells. I can only assume it was Phoenix and/or
WAL replay after a region server crash.

> On 12 May 2020, at 18:25, Wellington Chevreuil 
>  wrote:
> 
> How large is this table? Can you afford re-insert all current data on a
> new, temp table? If so, you could write a mapreduce job that scans this
> table and rewrite all its cells to this new, temp table. I had verified
> that 1.4.10 does have the timestamp replacing logic here:
> https://github.com/apache/hbase/blob/branch-1.4/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java#L3395
>  
> 
> 
> So if you re-insert all this table cells into a new one, the timestamps
> would be inserted correctly and you would then be able to delete those.
> Now, how those cells managed to get inserted with max timestamp? Was this
> cluster running on an old version that then got upgraded to 1.4.10?
> 
> 
> On Tue, 12 May 2020 at 13:49, Alexander Batyrshin <0x62...@gmail.com>
> wrote:
> 
>> Any ideas how to delete these rows?
>> 
>> I see only this way:
>> - backup data from region that contains “damaged” rows
>> - close region
>> - remove region files from HDFS
>> - assign region
>> - copy needed rows from backup to recreated region
>> 
>>> On 30 Apr 2020, at 21:00, Alexander Batyrshin <0x62...@gmail.com> wrote:
>>> 
>>> The same effect for CF:
>>> 
>>> d =
>> org.apache.hadoop.hbase.client.Delete.new("\x0439d58wj434dd".to_s.to_java_bytes)
>>> d.deleteFamily("d".to_s.to_java_bytes,
>> 9223372036854775807.to_java(Java::long))
>>> table.delete(d)
>>> 
>>> ROW  COLUMN+CELL
>>> \x0439d58wj434ddcolumn=d:,
>> timestamp=1588269277879, type=DeleteFamily
>>> 
>>> 
 On 29 Apr 2020, at 18:30, Wellington Chevreuil <wellington.chevre...@gmail.com>
>> wrote:
 
 Well, it's weird that puts with such TS values were allowed, according
>> to
 current code state. Can you afford delete the whole CF for those rows?
 
 On Wed, 29 Apr 2020 at 14:41, junhyeok park <runnerren...@gmail.com>
 wrote:
 
> I've been through the same thing. I use 2.2.0
> 
> On Wed, 29 Apr 2020 at 10:32 PM, Alexander Batyrshin <0x62...@gmail.com>
> wrote:
> 
>> As you can see in example I already tried DELETE operation with
>> timestamp
>> = Long.MAX_VALUE without any success.
>> 
>>> On 29 Apr 2020, at 12:41, Wellington Chevreuil <wellington.chevre...@gmail.com>
>> wrote:
>>> 
>>> That's expected behaviour [1]. If you are "travelling to the future",
> you
>>> need to do a delete specifying Long.MAX_VALUE timestamp as the
> timestamp
>>> optional parameter in the delete operation [2], if you don't specify
>>> timestamp on the delete, it will assume current time for the delete
>> marker,
>>> which will be smaller than the Long.MAX_VALUE set to your cells, so
> scans
>>> wouldn't filter it.
>>> 
>>> [1] https://hbase.apache.org/book.html#version.delete 
>>> [2]
>>> 
>> 
> 
>> https://github.com/apache/hbase/blob/branch-1.4/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Delete.java#L98
>>> 
>>> On Wed, 29 Apr 2020 at 08:57, Alexander Batyrshin <0x62...@gmail.com>
>>> wrote:
>>> 
 Hello all,
 We had faced with strange situation: table has rows with
> Long.MAX_VALUE
 timestamp.
 These rows impossible to delete, because DELETE mutation uses
 System.currentTimeMillis() timestamp.
 Is there any way to delete these rows?
 We use HBase-1.4.10
 
 Example:
 
 hbase(main):037:0> scan 'TRACET', { ROWPREFIXFILTER =>
>> 

Re: How to delete row with Long.MAX_VALUE timestamp

2020-05-12 Thread Wellington Chevreuil
How large is this table? Can you afford to re-insert all current data into a
new, temp table? If so, you could write a mapreduce job that scans this
table and rewrites all its cells to the new, temp table. I have verified
that 1.4.10 does have the timestamp-replacing logic here:
https://github.com/apache/hbase/blob/branch-1.4/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java#L3395

So if you re-insert all of this table's cells into a new one, the timestamps
would be inserted correctly and you would then be able to delete those cells.
Now, how did those cells manage to get inserted with the max timestamp? Was this
cluster running on an old version that then got upgraded to 1.4.10?
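
A minimal sketch of what such a job could look like (untested, just to
illustrate the idea -- the class name and target table name are placeholders;
the important part is re-adding each cell without an explicit timestamp, so
the region server stamps it with current time per the logic linked above):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.mapreduce.Job;

public class RewriteCellsJob {

  static class RewriteMapper extends TableMapper<ImmutableBytesWritable, Put> {
    @Override
    protected void map(ImmutableBytesWritable row, Result result, Context context)
        throws IOException, InterruptedException {
      Put put = new Put(row.get());
      for (Cell cell : result.rawCells()) {
        // Re-add without a timestamp: the region server assigns current time.
        put.addColumn(CellUtil.cloneFamily(cell), CellUtil.cloneQualifier(cell),
            CellUtil.cloneValue(cell));
      }
      context.write(row, put);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "rewrite-cells-without-timestamps");
    job.setJarByClass(RewriteCellsJob.class);
    Scan scan = new Scan();   // latest version of each cell
    scan.setCaching(500);
    TableMapReduceUtil.initTableMapperJob("TRACET", scan, RewriteMapper.class,
        ImmutableBytesWritable.class, Put.class, job);
    // Map-only job writing Puts straight into the temp table.
    TableMapReduceUtil.initTableReducerJob("TRACET_TMP", null, job);
    job.setNumReduceTasks(0);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}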


On Tue, 12 May 2020 at 13:49, Alexander Batyrshin <0x62...@gmail.com>
wrote:

> Any ideas how to delete these rows?
>
> I see only this way:
> - backup data from region that contains “damaged” rows
> - close region
> - remove region files from HDFS
> - assign region
> - copy needed rows from backup to recreated region
>
> > On 30 Apr 2020, at 21:00, Alexander Batyrshin <0x62...@gmail.com> wrote:
> >
> > The same effect for CF:
> >
> > d =
> org.apache.hadoop.hbase.client.Delete.new("\x0439d58wj434dd".to_s.to_java_bytes)
> > d.deleteFamily("d".to_s.to_java_bytes,
> 9223372036854775807.to_java(Java::long))
> > table.delete(d)
> >
> > ROW  COLUMN+CELL
> >  \x0439d58wj434ddcolumn=d:,
> timestamp=1588269277879, type=DeleteFamily
> >
> >
> >> On 29 Apr 2020, at 18:30, Wellington Chevreuil <
> wellington.chevre...@gmail.com >
> wrote:
> >>
> >> Well, it's weird that puts with such TS values were allowed, according
> to
> >> current code state. Can you afford delete the whole CF for those rows?
> >>
> >> On Wed, 29 Apr 2020 at 14:41, junhyeok park <runnerren...@gmail.com>
> >> wrote:
> >>
> >>> I've been through the same thing. I use 2.2.0
> >>>
> >>> On Wed, 29 Apr 2020 at 10:32 PM, Alexander Batyrshin <0x62...@gmail.com>
> >>> wrote:
> >>>
>  As you can see in example I already tried DELETE operation with
> timestamp
>  = Long.MAX_VALUE without any success.
> 
> > On 29 Apr 2020, at 12:41, Wellington Chevreuil <
>  wellington.chevre...@gmail.com >
> wrote:
> >
> > That's expected behaviour [1]. If you are "travelling to the future",
> >>> you
> > need to do a delete specifying Long.MAX_VALUE timestamp as the
> >>> timestamp
> > optional parameter in the delete operation [2], if you don't specify
> > timestamp on the delete, it will assume current time for the delete
>  marker,
> > which will be smaller than the Long.MAX_VALUE set to your cells, so
> >>> scans
> > wouldn't filter it.
> >
> > [1] https://hbase.apache.org/book.html#version.delete
> > [2]
> >
> 
> >>>
> https://github.com/apache/hbase/blob/branch-1.4/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Delete.java#L98
> >
> > On Wed, 29 Apr 2020 at 08:57, Alexander Batyrshin <0x62...@gmail.com>
> > wrote:
> >
> >> Hello all,
> >> We had faced with strange situation: table has rows with
> >>> Long.MAX_VALUE
> >> timestamp.
> >> These rows impossible to delete, because DELETE mutation uses
> >> System.currentTimeMillis() timestamp.
> >> Is there any way to delete these rows?
> >> We use HBase-1.4.10
> >>
> >> Example:
> >>
> >> hbase(main):037:0> scan 'TRACET', { ROWPREFIXFILTER =>
>  "\x0439d58wj434dd",
> >> RAW=>true, VERSIONS=>10}
> >> ROW
> >>> COLUMN+CELL
> >> \x0439d58wj434dd   column=d:_0,
> >> timestamp=9223372036854775807, value=x
> >>
> >>
> >> hbase(main):045:0* delete 'TRACET', "\x0439d58wj434dd", "d:_0"
> >> 0 row(s) in 0.0120 seconds
> >>
> >> hbase(main):046:0> scan 'TRACET', { ROWPREFIXFILTER =>
>  "\x0439d58wj434dd",
> >> RAW=>true, VERSIONS=>10}
> >> ROW
> >>> COLUMN+CELL
> >> \x0439d58wj434dd   column=d:_0,
> >> timestamp=9223372036854775807, value=x
> >> \x0439d58wj434dd   column=d:_0,
> >> timestamp=1588146570005, type=Delete
> >>
> >>
> >> hbase(main):047:0> delete 'TRACET', "\x0439d58wj434dd", "d:_0",
> >> 9223372036854775807
> >> 0 row(s) in 0.0110 seconds
> >>
> >> hbase(main):048:0> scan 'TRACET', { ROWPREFIXFILTER =>
>  "\x0439d58wj434dd",
> >> RAW=>true, VERSIONS=>10}
> >> ROW
> >>> COLUMN+CELL
> >> \x0439d58wj434dd   

Re: How to delete row with Long.MAX_VALUE timestamp

2020-05-12 Thread Alexander Batyrshin
Any ideas how to delete these rows?

I see only this way:
- backup data from region that contains “damaged” rows
- close region
- remove region files from HDFS
- assign region
- copy needed rows from backup to recreated region

> On 30 Apr 2020, at 21:00, Alexander Batyrshin <0x62...@gmail.com> wrote:
> 
> The same effect for CF:
> 
> d = 
> org.apache.hadoop.hbase.client.Delete.new("\x0439d58wj434dd".to_s.to_java_bytes)
> d.deleteFamily("d".to_s.to_java_bytes, 
> 9223372036854775807.to_java(Java::long))
> table.delete(d)
> 
> ROW  COLUMN+CELL
>  \x0439d58wj434ddcolumn=d:, 
> timestamp=1588269277879, type=DeleteFamily
> 
> 
>> On 29 Apr 2020, at 18:30, Wellington Chevreuil <wellington.chevre...@gmail.com>
>> wrote:
>> 
>> Well, it's weird that puts with such TS values were allowed, according to
>> current code state. Can you afford delete the whole CF for those rows?
>> 
>> On Wed, 29 Apr 2020 at 14:41, junhyeok park <runnerren...@gmail.com>
>> wrote:
>> 
>>> I've been through the same thing. I use 2.2.0
>>> 
>>> On Wed, 29 Apr 2020 at 10:32 PM, Alexander Batyrshin <0x62...@gmail.com>
>>> wrote:
>>> 
 As you can see in example I already tried DELETE operation with timestamp
 = Long.MAX_VALUE without any success.
 
> On 29 Apr 2020, at 12:41, Wellington Chevreuil <wellington.chevre...@gmail.com>
> wrote:
> 
> That's expected behaviour [1]. If you are "travelling to the future",
>>> you
> need to do a delete specifying Long.MAX_VALUE timestamp as the
>>> timestamp
> optional parameter in the delete operation [2], if you don't specify
> timestamp on the delete, it will assume current time for the delete
 marker,
> which will be smaller than the Long.MAX_VALUE set to your cells, so
>>> scans
> wouldn't filter it.
> 
> [1] https://hbase.apache.org/book.html#version.delete 
> 
> [2]
> 
 
>>> https://github.com/apache/hbase/blob/branch-1.4/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Delete.java#L98
>>>  
>>> 
> 
> On Wed, 29 Apr 2020 at 08:57, Alexander Batyrshin <0x62...@gmail.com>
> wrote:
> 
>> Hello all,
>> We had faced with strange situation: table has rows with
>>> Long.MAX_VALUE
>> timestamp.
>> These rows impossible to delete, because DELETE mutation uses
>> System.currentTimeMillis() timestamp.
>> Is there any way to delete these rows?
>> We use HBase-1.4.10
>> 
>> Example:
>> 
>> hbase(main):037:0> scan 'TRACET', { ROWPREFIXFILTER =>
 "\x0439d58wj434dd",
>> RAW=>true, VERSIONS=>10}
>> ROW
>>> COLUMN+CELL
>> \x0439d58wj434dd   column=d:_0,
>> timestamp=9223372036854775807, value=x
>> 
>> 
>> hbase(main):045:0* delete 'TRACET', "\x0439d58wj434dd", "d:_0"
>> 0 row(s) in 0.0120 seconds
>> 
>> hbase(main):046:0> scan 'TRACET', { ROWPREFIXFILTER =>
 "\x0439d58wj434dd",
>> RAW=>true, VERSIONS=>10}
>> ROW
>>> COLUMN+CELL
>> \x0439d58wj434dd   column=d:_0,
>> timestamp=9223372036854775807, value=x
>> \x0439d58wj434dd   column=d:_0,
>> timestamp=1588146570005, type=Delete
>> 
>> 
>> hbase(main):047:0> delete 'TRACET', "\x0439d58wj434dd", "d:_0",
>> 9223372036854775807
>> 0 row(s) in 0.0110 seconds
>> 
>> hbase(main):048:0> scan 'TRACET', { ROWPREFIXFILTER =>
 "\x0439d58wj434dd",
>> RAW=>true, VERSIONS=>10}
>> ROW
>>> COLUMN+CELL
>> \x0439d58wj434dd   column=d:_0,
>> timestamp=9223372036854775807, value=x
>> \x0439d58wj434dd   column=d:_0,
>> timestamp=1588146678086, type=Delete
>> \x0439d58wj434dd   column=d:_0,
>> timestamp=1588146570005, type=Delete
>> 
>> 
>> 
>> 
 
 
>>> 
> 



Re: Celebrating our 10th birthday for Apache HBase

2020-05-12 Thread Mich Talebzadeh
Hi,

I will be presenting on HBase to one of the major European banks this
Friday, 15th May.

Does anyone have the latest bullet points on new features of HBase, so I can
add them to my presentation material?

Many thanks,

Dr Mich Talebzadeh






LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Sat, 2 May 2020 at 08:19, Mich Talebzadeh 
wrote:

> Hi,
>
> I have worked with HBase for many years and I think it is a great product.
> It does what it says on the tin, so to speak.
>
> Ironically, if you look around at the NoSQL competitors, most of them are
> backed by start-ups, whereas HBase is supported as part of the Apache
> suite of products by vendors like Cloudera, Hortonworks, MapR, etc.
>
> For those who would prefer to use SQL on top, there is Apache Phoenix,
> which makes life easier for the SQL-savvy world to work on HBase.
> Problem solved.
>
> For TCO, HBase is still good value for money compared to others. You don't
> need expensive RAM or SSDs with HBase. That makes it easy to onboard in no
> time. Also, HBase can be used in a variety of different business
> applications, whereas other commercial ones are focused on narrower niche
> markets.
>
> Last but not least, happy 10th anniversary, and I hope HBase will go from
> strength to strength and we will keep using it for years to come!
>
>
>
>
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Sat, 2 May 2020 at 07:28, Yu Li  wrote:
>
>> Dear HBase developers and users,
>> 亲爱的HBase开发者和用户们,
>>
>> It has been a decade since Apache HBase became an Apache top level project
>> [1]. Ten years is a big milestone and deserves a good celebration. Do you
>> have anything to say to us? Maybe some wishes, good stories or just a
>> happy
>> birthday blessing? Looking forward to your voices.
>>
>> 大家好!距离 HBase 成为 Apache 顶级项目 (TLP) 已经过去了整整 10 年
>> [1],这是一个值得纪念的里程碑。在这个特殊的时刻,您有什么想对 HBase 说的吗?分享您和 HBase 之间发生的故事,表达您对 HBase
>> 的期待,或者是一句简单的“生日快乐”祝福?期待听到您的声音。
>>
>> Best Regards,
>> Yu (on behalf of the Apache HBase PMC)
>>
>> 祝好!
>> Yu (代表Apache HBase PMC)
>>
>> [1] https://whimsy.apache.org/board/minutes/HBase.html#2010-04-21
>>
>

