Re: [ANNOUNCE] New HBase committer Baiqiang Zhao

2021-07-11 Thread zheng wang
Congratulations~






Re: [ANNOUNCE] New HBase Committer Xiaolin Ha

2021-05-16 Thread zheng wang
Congratulations~




Re: [ANNOUNCE] New HBase committer Geoffrey Jacoby

2021-04-13 Thread zheng wang
Congratulations!






Re: [ANNOUNCE] New HBase PMC Huaxiang Sun

2021-04-13 Thread zheng wang
Congratulations!






Re: EOL branch-1 and all 1.x ?

2021-04-01 Thread zheng wang
+1 on EOL.





Re: HBase 2.1.0 (garbled Chinese subject)

2021-01-02 Thread zheng wang
2.0 / 2.1.0 bug; see https://issues.apache.org/jira/browse/HBASE-23008




-- Original message --
From: "zheng wang" <18031...@qq.com>
Date: 2021-01-02 (Sat) 7:43
To: "user-zh"

Re: HBase 2.1.0 (garbled Chinese subject)

2021-01-02 Thread zheng wang

??2




-- Original message --
From: "user-zh" <2326130...@qq.com>
Date: 2020-12-31 (Thu) 2:27
To: "user-zh"

Re: [ANNOUNCE] New HBase committer Yulin Niu

2020-12-03 Thread zheng wang
Congratulations!






Re: [ANNOUNCE] New HBase committer Xin Sun

2020-12-03 Thread zheng wang
Congratulations!






Re: [ANNOUNCE] Please welcome Viraj Jasani to the Apache HBase PMC

2020-10-06 Thread zheng wang
Congratulations Viraj!




---Original---
From: "Andrew Purtell"

Re: [ANNOUNCE] New HBase Committer Zheng Wang

2020-09-24 Thread zheng wang
Thanks all. Will continue to contribute~






Re: WALs (garbled Chinese subject)

2020-08-03 Thread zheng wang
hbase




-- Original message --
From: "user-zh"
https://issues.apache.org/jira/browse/HBASE-16721

Re: WALs (garbled Chinese subject)

2020-08-03 Thread zheng wang
-- Original message --
From: "user-zh"
https://issues.apache.org/jira/browse/HBASE-16721

Re: hbase replication / WALs (garbled Chinese subject)

2020-07-23 Thread zheng wang
??






Re: hbase replication / WALs (garbled Chinese subject)

2020-07-23 Thread zheng wang
qq
??







Re: hbase replication / WALs (garbled Chinese subject)

2020-07-23 Thread zheng wang







Re: replication / hbase (garbled Chinese subject)

2020-07-22 Thread zheng wang
Regarding CPU, see: https://blog.csdn.net/liangwenmail/article/details/87874067





Re: HBase 2.1.0 - NoSuchMethodException org.apache.hadoop.fs.LocalFileSystem.setStoragePolicy

2020-07-22 Thread zheng wang
And getMethod does.




-- Original message --
From: "user"

https://github.com/apache/hbase/blob/branch-2.1/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CommonFSUtils.java#L533.
I am using hadoop 3.0.0 and in FilterFileSystem (which LocalFileSystem
extends from) I do see the method setStoragePolicy
<https://github.com/apache/hadoop/blob/release-3.0.0-RC1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java#L637>.

Can someone explain how is this NoSuchMethodException is being thrown or I
am looking at the wrong code path for LocalFileSystem?

On Tue, Jul 21, 2020 at 7:04 PM Sean Busbey wrote:

> > > From the code at
> > > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java#L274
> > > it appears even if no storage policy is specified it will take HOT.
> > >
> > > Can you explain this a bit more how can I get around this error or in a
> > > single node hbase cluster I should be ignoring this?
> > >
> > > On Tue, Jul 21, 2020 at 3:03 PM zheng wang <18031...@qq.com> wrote:
> > >
> > > > LocalFileSystem? The setStoragePolicy could only be used in
> > > > distributed hdfs.
> > > >
> > > > -- Original message --
> > > > From: "user" <subharaj.ma...@gmail.com>
> > > > Date: 2020-07-21 (Tue) 5:58
> > > > To: "Hbase-User" <user@hbase.apache.org>
> > > > Subject: HBase 2.1.0 - NoSuchMethodException
> > > > org.apache.hadoop.fs.LocalFileSystem.setStoragePolicy
> > > >
> > > > Hi
> > > >
> > > > I am using HBase 2.1.0 with Hadoop 3.0.0. In hbase master logs I am
> > > > seeing a warning like below
> > > >
> > > > 2020-07-20 06:02:24,859 WARN [StoreOpener-1588230740-1]
> > > > util.CommonFSUtils: FileSystem doesn't support setStoragePolicy;
> > > > HDFS-6584, HDFS-9345 not available. This is normal and expected on
> > > > earlier Hadoop versions.
> > > > java.lang.NoSuchMethodException:
> > > > org.apache.hadoop.fs.LocalFileSystem.setStoragePolicy(org.apache.hadoop.fs.Path, java.lang.String)
> > > >         at java.lang.Class.getDeclaredMethod(Class.java:2130)
> > > >         at org.apache.hadoop.hbase.util.CommonFSUtils.invokeSetStoragePolicy(CommonFSUtils.java:577)
> > > >         at org.apache.hadoop.hbase.util.CommonFSUtils.setStor...

Re: HBase 2.1.0 - NoSuchMethodException org.apache.hadoop.fs.LocalFileSystem.setStoragePolicy

2020-07-22 Thread zheng wang
Seems getDeclaredMethod does not include methods inherited from the parent class.
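To illustrate the point, here is a minimal standalone sketch (not the actual CommonFSUtils code): getDeclaredMethod only sees methods declared directly on a class, while getMethod also resolves public methods inherited from a superclass, which matches FilterFileSystem declaring setStoragePolicy and LocalFileSystem merely inheriting it.

import java.lang.reflect.Method;

public class ReflectionLookupDemo {
    // Stand-ins for FilterFileSystem (declares the method) and
    // LocalFileSystem (only inherits it); the real Hadoop classes are not needed here.
    static class Base {
        public void setStoragePolicy(String path, String policy) { }
    }
    static class Child extends Base { }

    public static void main(String[] args) throws Exception {
        // getMethod resolves public methods inherited from Base.
        Method m = Child.class.getMethod("setStoragePolicy", String.class, String.class);
        System.out.println("getMethod found: " + m);

        // getDeclaredMethod looks only at Child itself, so it throws
        // NoSuchMethodException, matching the warning seen in the logs.
        try {
            Child.class.getDeclaredMethod("setStoragePolicy", String.class, String.class);
        } catch (NoSuchMethodException e) {
            System.out.println("getDeclaredMethod threw: " + e);
        }
    }
}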





Re: HBase 2.1.0 - NoSuchMethodException org.apache.hadoop.fs.LocalFileSystem.setStoragePolicy

2020-07-22 Thread zheng wang
Are you sure you are using hadoop 3.0.0?




-- Original message --
From: "user"

https://github.com/apache/hbase/blob/branch-2.1/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CommonFSUtils.java#L533.
I am using hadoop 3.0.0 and in FilterFileSystem (which LocalFileSystem
extends from) I do see the method setStoragePolicy
<https://github.com/apache/hadoop/blob/release-3.0.0-RC1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java#L637>.

Can someone explain how is this NoSuchMethodException is being thrown or I
am looking at the wrong code path for LocalFileSystem?

On Tue, Jul 21, 2020 at 7:04 PM Sean Busbey wrote:

> From the code at
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java#L274
> it appears even if no storage policy is specified it will take HOT.
>
> Can you explain this a bit more how can I get around this error or in a
> single node hbase cluster I should be ignoring this?
>
> On Tue, Jul 21, 2020 at 3:03 PM zheng wang <18031...@qq.com> wrote:
>
> > LocalFileSystem? The setStoragePolicy could only be used in
> > distributed hdfs.
> >
> > -- Original message --
> > From: "user" <subharaj.ma...@gmail.com>
> > Date: 2020-07-21 (Tue) 5:58
> > To: "Hbase-User"

Re: hbase replication / WALs (garbled Chinese subject)

2020-07-21 Thread zheng wang
2.0.x ??2.1.0??







Re: hbase replication / WALs (garbled Chinese subject)

2020-07-21 Thread zheng wang
See this WAL-related JIRA: https://issues.apache.org/jira/browse/HBASE-23008












Re: HBase 2.1.0 - NoSuchMethodException org.apache.hadoop.fs.LocalFileSystem.setStoragePolicy

2020-07-21 Thread zheng wang
This log line is just a warning that cannot be made to disappear for now, but it will
not impact anything, so you can just ignore it in local mode.


-- Original message --
From: "user"

From the code at
https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java#L274
it appears even if no storage policy is specified it will take HOT.

Can you explain this a bit more how can I get around this error or in a
single node hbase cluster I should be ignoring this?

On Tue, Jul 21, 2020 at 3:03 PM zheng wang <18031...@qq.com> wrote:

> LocalFileSystem? The setStoragePolicy could only be used in
> distributed hdfs.
>
> -- Original message --
> From: "user" <subharaj.ma...@gmail.com>
> Date: 2020-07-21 (Tue) 5:58
> To: "Hbase-User"

Re: HBase 2.1.0 - NoSuchMethodException org.apache.hadoop.fs.LocalFileSystem.setStoragePolicy

2020-07-21 Thread zheng wang
LocalFileSystem? The setStoragePolicy could only be used on distributed HDFS.
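For context, a rough sketch (not code from HBase itself) of checking which FileSystem implementation is actually in use before calling setStoragePolicy; the "/hbase" path and the "HOT" policy name are illustrative placeholders, and DistributedFileSystem.setStoragePolicy is the HDFS-side API this thread is referring to.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class StoragePolicyCheck {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        // In local/standalone mode this prints LocalFileSystem (or similar),
        // and storage policies simply do not apply.
        System.out.println("FileSystem in use: " + fs.getClass().getName());

        if (fs instanceof DistributedFileSystem) {
            // Placeholder path, standard HDFS policy name.
            ((DistributedFileSystem) fs).setStoragePolicy(new Path("/hbase"), "HOT");
        } else {
            System.out.println("Not HDFS; skipping setStoragePolicy.");
        }
    }
}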





Re: replication / hbase (garbled Chinese subject)

2020-07-21 Thread zheng wang
1??gc??
2ssd
3??hbasecpu








Re: replication / hbase (garbled Chinese subject)

2020-07-20 Thread zheng wang
replication??GC??IO??





Re: replication / hbase (garbled Chinese subject)

2020-07-20 Thread zheng wang
??replication??





Re: Could not iterate StoreFileScanner - during compaction

2020-07-03 Thread zheng wang
Hi,
"cur=10259783_10101578851/hb:B/1490097148981/Put/vlen=16591695"
"Invalid onDisksize=-969694035: expected to be at least 33 and at most
2147483647, or -1"

I guess there is a very big cell causing the block size to exceed
Integer.MAX_VALUE, leading to an overflow error.
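A small worked example of the suspected overflow (the ~3.3 GB figure below is back-calculated from the negative value in the error message, not measured on the cluster): a block length above Integer.MAX_VALUE wraps around when narrowed to int.

public class OnDiskSizeOverflowDemo {
    public static void main(String[] args) {
        // Hypothetical on-disk block length of ~3.3 GB, i.e. larger than
        // Integer.MAX_VALUE (2147483647).
        long onDiskSize = 3_325_273_261L;

        // Narrowing to int wraps around and yields the negative value
        // reported in the compaction error.
        System.out.println((int) onDiskSize); // prints -969694035
    }
}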





-- Original message --
From: "Mohamed Meeran"

Re: [DISCUSS] Removing problematic terms from our project

2020-06-25 Thread zheng wang
I like "controller".

"Coordinator" is a bit long for me to write and speak.
"Manager" and "Admin" are already used elsewhere in HBase.




-- Original message --
From: "Andrew Purtell"
https://www.merriam-webster.com/dictionary/master


Re: [DISCUSS] Removing problematic terms from our project

2020-06-23 Thread zheng wang
IMO, "master" is OK if not used together with "slave".


-1/+1/+1/+1


-- Original message --
From: "Andrew Purtell"

https://www.merriam-webster.com/dictionary/master for examples. In
particular, the progression of an artisan was from "apprentice" to
"journeyman" to "master". A master smith, carpenter, or artist would run a
shop managing lots of workers and apprentices who would hope to become
masters of their own someday. So "master" and "worker" can still go
together.

Since it's the least problematic term, and by far the hardest term to
change (both within HBase and with effects on downstream projects such as
Ambari), I'm -0 (nonbinding) on changing "master".

Geoffrey

On Mon, Jun 22, 2020 at 1:32 PM Rushabh Shah wrote:

- HBASE-12677 <https://issues.apache.org/jira/browse/HBASE-12677>:
  Update replication docs to clarify terminology
- HBASE-13852 <https://issues.apache.org/jira/browse/HBASE-13852>:
  Replace master-slave terminology in book, site, and javadoc with a more
  modern vocabulary
- HBASE-24576 <https://issues.apache.org/jira/browse/HBASE-24576>:
  Changing "whitelist" and "blacklist" in our docs and project

In response to this proposal, a member of the PMC asked if the term
'master' used by itself would be fine, because we only have use of 'slave'
in replication documentation and that is easily addressed. In response to
this question, others on the PMC suggested that even if only 'master' is
used, in this context it is still a problem.

For folks who are surprised or lacking context on the details of this
discussion, one PMC member offered a link to this draft RFC as background:
https://tools.ietf.org/id/draft-knodel-terminology-00.html

There was general support for removing the term "master" / "hmaster" from
our code base and using the terms "coordinator" or "leader" instead. In the
context of replication, "worker" makes less sense and perhaps "destination"
or "follower" would be more appropriate terms.

One PMC member's thoughts on language and non-native English speakers is
worth including in its entirety:

While words like blacklist/whitelist/slave clearly have those negative
references, word master might not have the same impact for non native
English speakers like myself where the literal translation to my mother
tongue does not have this same bad connotation. Replacing all references
for word *master* on our docs/codebase is a huge effort, I guess such a
decision would be more suitable for native English speakers folks, and
maybe we should consider the opinion of contributors from that ethinic
minority as well?

These are good questions for public discussion.

We have a consensus in the PMC, at this time, that is supportive of making
the above discussed terminology changes. However, we also have concerns
about what it would take to accomplish meaningful changes. Several on the
PMC offered support in the form of cycles to review pull requests and
patches, and two PMC members offered personal bandwidth for creating and
releasing new code lines as needed to complete a deprecation cycle.

Unfortunately, the terms "master" and "hmaster" appear throughout our code
base in class names, user facing API subject to our project compatibility
guidelines, and configuration variable names, which are also implicated by
compatibility guidelines given the impact of changes to operators and
operations. The changes being discussed are not backwards compatible
changes and cannot be executed with swiftness while simultaneously
preserving compatibility. There must be a deprecation cycle. First, we must
tag all implicated public API and configuration variables as deprecated,
and release HBase 3 with these deprecations in place. Then, we must
undertake rename and removal as appropriate, and release the result as
HBase 4.

One PMC member raised a question in this context included here in entirety:

Are we willing to commit to rolling through the major versions at a pace
that's necessary to make this transition as swift as reasonably possible?

This is a question for all of us. For the PMC, who would supervise the
effort, perhaps contribute to it, and certainly vote on the release
candidates. For contributors and potential contributors, who would provide
the necessary patches. For committers, who would be required to review and
commit the relevant changes.

Although there has been some initial discussion, there is no singular
proposal, or plan, or set of decisions made at this time. Wrestling with
this concern and the competing concerns involved with addressing it
(motivation for change versus 

Re: [ANNOUNCE] Please welcome Lijin Bin to the HBase PMC

2020-05-25 Thread zheng wang
Congratulations~




-- Original message --
From: "Guanghao Zhang"

Re: how to scan for all values which don't have given timestamps?

2020-05-11 Thread zheng wang
Maybe you can split it into two scans.
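A hedged sketch of the two-scan idea for excluding a single timestamp T (the table name and timestamp value are placeholders): scan [0, T) and [T+1, MAX) and merge the results on the client, since Scan.setTimeRange treats the upper bound as exclusive.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanExcludingTimestamp {
    public static void main(String[] args) throws Exception {
        long excluded = 1588146570005L; // the timestamp to skip (example value)
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("my_table"))) {

            Scan before = new Scan().setTimeRange(0L, excluded);                // [0, T)
            Scan after = new Scan().setTimeRange(excluded + 1, Long.MAX_VALUE); // [T+1, MAX)

            for (Scan scan : new Scan[] { before, after }) {
                try (ResultScanner scanner = table.getScanner(scan)) {
                    for (Result r : scanner) {
                        System.out.println(Bytes.toString(r.getRow()));
                    }
                }
            }
        }
    }
}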




-- Original message --
From: "Vitaliy Semochkin"
https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/TimestampsFilter.html
Which filter should I use?

Regards,
Vitaliy

Re: How to delete row with Long.MAX_VALUE timestamp

2020-04-29 Thread zheng wang
It seems that Long.MAX_VALUE is a special value: if you set it as the timestamp,
it will be changed to the current time.
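For reference, this is roughly what an explicit-timestamp delete looks like through the Java client (the row key is a placeholder; the table and column come from the quoted example). Whether it actually removes a cell stamped Long.MAX_VALUE depends on the LATEST_TIMESTAMP handling discussed in this thread, so treat it as an illustration of the API rather than a confirmed fix.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class DeleteMaxTimestampCell {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("TRACET"))) {

            Delete delete = new Delete(Bytes.toBytes("row-key-here")); // placeholder row key
            // Pin the version explicitly (9223372036854775807 == Long.MAX_VALUE)
            // instead of letting the client substitute the current time.
            delete.addColumn(Bytes.toBytes("d"), Bytes.toBytes("_0"), Long.MAX_VALUE);
            table.delete(delete);
        }
    }
}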






-- Original message --
From: "Wellington Chevreuil"
[1] https://hbase.apache.org/book.html#version.delete
[2] https://github.com/apache/hbase/blob/branch-1.4/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Delete.java#L98

On Wed, Apr 29, 2020 at 08:57, Alexander Batyrshin <0x62...@gmail.com> wrote:

 Hello all,
 We had faced with strange situation: table has rows with Long.MAX_VALUE
 timestamp.
 These rows impossible to delete, because DELETE mutation uses
 System.currentTimeMillis() timestamp.
 Is there any way to delete these rows?
 We use HBase-1.4.10

 Example:

 hbase(main):037:0> scan 'TRACET', { ROWPREFIXFILTER => "\x0439d58wj434dd", RAW => true, VERSIONS => 10}
 ROW                              COLUMN+CELL
  \x0439d58wj434dd                column=d:_0, timestamp=9223372036854775807, value=x

 hbase(main):045:0* delete 'TRACET', "\x0439d58wj434dd", "d:_0"
 0 row(s) in 0.0120 seconds

 hbase(main):046:0> scan 'TRACET', { ROWPREFIXFILTER => "\x0439d58wj434dd", RAW => true, VERSIONS => 10}
 ROW                              COLUMN+CELL
  \x0439d58wj434dd                column=d:_0, timestamp=9223372036854775807, value=x
  \x0439d58wj434dd                column=d:_0, timestamp=1588146570005, type=Delete

 hbase(main):047:0> delete 'TRACET', "\x0439d58wj434dd", "d:_0", 9223372036854775807
 0 row(s) in 0.0110 seconds

 hbase(main):048:0> scan 'TRACET', { ROWPREFIXFILTER => "\x0439d58wj434dd", RAW => true, VERSIONS => 10}
 ROW                              COLUMN+CELL
  \x0439d58wj434dd                column=d:_0, timestamp=9223372036854775807, value=x
  \x0439d58wj434dd                column=d:_0, timestamp=1588146678086, type=Delete
  \x0439d58wj434dd                column=d:_0, timestamp=1588146570005, type=Delete




Re: Why HBase Delete Reverts Back to Previous Value instead of Totally Deleting it

2019-11-26 Thread zheng wang
Maybe you used the wrong Delete method: addColumn can only delete one version,
while addColumns deletes all versions.
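A short sketch of the difference (row, family, and qualifier names are placeholders): addColumn only marks the latest (or one specified) version for deletion, so an older version becomes visible again, which is what looks like the delete "reverting"; addColumns marks all versions.

import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.util.Bytes;

public class DeleteVersionsExample {
    public static void main(String[] args) {
        byte[] row = Bytes.toBytes("row1");
        byte[] family = Bytes.toBytes("cf");
        byte[] qualifier = Bytes.toBytes("col");

        // Deletes only the most recent version; a previous version reappears.
        Delete latestOnly = new Delete(row).addColumn(family, qualifier);

        // Deletes all versions of the cell, so nothing older comes back.
        Delete allVersions = new Delete(row).addColumns(family, qualifier);

        System.out.println(latestOnly);
        System.out.println(allVersions);
    }
}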



--Original--
From: "Trixia Belleza"

Re: a problem of long STW because of GC ref-proc

2019-09-29 Thread zheng wang
Even if it were set to 64KB, there would still be more than 1,000,000 soft references,
and it would still take too long.


This "GC ref-proc" processed about 500,000 soft references and cost ~700ms:


2019-09-18T03:16:42.088+0800: 125161.477: 
[GC remark 
2019-09-18T03:16:42.088+0800: 125161.477: 
[Finalize Marking, 0.0018076 secs] 
2019-09-18T03:16:42.089+0800: 125161.479: 
[GC ref-proc
2019-09-18T03:16:42.089+0800: 125161.479: [SoftReference, 
499278 refs, 0.1382086 secs]
2019-09-18T03:16:42.228+0800: 125161.617: [WeakReference, 3750 
refs, 0.0049171 secs]
2019-09-18T03:16:42.233+0800: 125161.622: [FinalReference, 1040 
refs, 0.0009375 secs]
2019-09-18T03:16:42.234+0800: 125161.623: [PhantomReference, 0 
refs, 21921 refs, 0.0058014 secs]
2019-09-18T03:16:42.239+0800: 125161.629: [JNI Weak Reference, 
0.0001070 secs]
, 0.6667733 secs] 
2019-09-18T03:16:42.756+0800: 125162.146: 
[Unloading, 0.0224078 secs]
, 0.6987032 secs] 


-- Original message --
From: "OpenInx"
Date: 2019-09-30 (Mon) 10:27
To: "Hbase-User"
Subject: Re: a problem of long STW because of GC ref-proc



100% get is not the right reason for choosing 16KB I think, because  if you
read a block, there's larger possibility that we
will read the adjacent cells in the same block... I think caching a 16KB
block or caching a 64KB block in BucketCache won't
make a big difference ?  (but if you cell byte size is quite small,  then
it will have so many cells encoded in a 64KB block,
then block with smaller size will be better because we search the cells in
a block one by one , means O(N) complexity).


On Mon, Sep 30, 2019 at 10:08 AM zheng wang <18031...@qq.com> wrote:

> Yes,it will be remission by your advise,but there only get request in our
> business,so 16KB is better.
> IMO,the locks of offset will always be used,so is the strong reference a
> better choice?
>
>
>
>
> -- Original message --
> From: "OpenInx"
> Date: 2019-09-30 (Mon) 9:46
> To: "Hbase-User"
>
> Subject: Re: a problem of long STW because of GC ref-proc
>
>
>
> Seems your block size is very small (16KB), so there will be
> 70*1024*1024/16=4587520 block (at most) in your BucketCache.
> For each block, the RS will maintain a soft reference idLock and a
> BucketEntry in its bucket cache.  So maybe you can try to
> enlarge the block size ?
>
> On Sun, Sep 29, 2019 at 10:14 PM zheng wang <18031...@qq.com> wrote:
>
> > Hi~
> >
> >
> > My live cluster env config below:
> > hbase version:cdh6.0.1(apache hbase2.0.0)
> > hbase config: bucketCache(70g),blocksize(16k)
> >
> >
> > java version:1.8.0_51
> > javaconfig:heap(32g),-XX:+UseG1GC  -XX:MaxGCPauseMillis=100
> > -XX:+ParallelRefProcEnabled
> >
> >
> > About 1-2days ,regionServer would occur a old gen gc that cost 1~2s in
> > remark phase:
> >
> >
> > 2019-09-29T01:55:45.186+0800: 365222.053:
> > [GC remark
> > 2019-09-29T01:55:45.186+0800: 365222.053:
> > [Finalize Marking, 0.0016327 secs]
> > 2019-09-29T01:55:45.188+0800: 365222.054:
> > [GC ref-proc
> > 2019-09-29T01:55:45.188+0800: 365222.054: [SoftReference,
> > 1264586 refs, 0.3151392 secs]
> > 2019-09-29T01:55:45.503+0800: 365222.370: [WeakReference,
> > 4317 refs, 0.0024381 secs]
> > 2019-09-29T01:55:45.505+0800: 365222.372:
> [FinalReference,
> > 9791 refs, 0.0037445 secs]
> > 2019-09-29T01:55:45.509+0800: 365222.376:
> > [PhantomReference, 0 refs, 1963 refs, 0.0018941 secs]
> > 2019-09-29T01:55:45.511+0800: 365222.378: [JNI Weak
> > Reference, 0.0001156 secs]
> > , 1.4554361 secs]
> > 2019-09-29T01:55:46.643+0800: 365223.510:
> > [Unloading, 0.0211370 secs]
> > , 1.4851728 secs]
> >
> > The SoftReference seems used by offsetLock in BucketCache, there is two
> > questions :
> > 1:SoftReference proc cost 0.31s,but why GC ref-proc cost 1.45s at all?
> > 2:Is this a good choice to use SoftReference here?

Re: a problem of long STW because of GC ref-proc

2019-09-29 Thread zheng wang
Yes, it would be relieved by your advice, but our workload is get-only, so 16KB is better.
IMO, the offset locks will always be in use, so would a strong reference be a better choice?
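For completeness, if one did want to try the larger block size suggested in the quoted reply below, it can be changed per column family and followed by a major compaction so existing HFiles get rewritten; a rough sketch against the HBase 2.x admin API, with table and family names as placeholders.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.util.Bytes;

public class RaiseBlockSize {
    public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("my_table"); // placeholder table name
        byte[] family = Bytes.toBytes("cf");             // placeholder family name

        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {

            // Copy the existing family descriptor and only bump BLOCKSIZE to 64 KB.
            ColumnFamilyDescriptor existing = admin.getDescriptor(table).getColumnFamily(family);
            ColumnFamilyDescriptor updated = ColumnFamilyDescriptorBuilder.newBuilder(existing)
                    .setBlocksize(64 * 1024)
                    .build();
            admin.modifyColumnFamily(table, updated);

            // New HFiles use the new block size; major compact to rewrite old ones.
            admin.majorCompact(table);
        }
    }
}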




-- Original message --
From: "OpenInx"
Date: 2019-09-30 (Mon) 9:46
To: "Hbase-User"
Subject: Re: a problem of long STW because of GC ref-proc



Seems your block size is very small (16KB), so there will be
70*1024*1024/16=4587520 block (at most) in your BucketCache.
For each block, the RS will maintain a soft reference idLock and a
BucketEntry in its bucket cache.  So maybe you can try to
enlarge the block size ?

On Sun, Sep 29, 2019 at 10:14 PM zheng wang <18031...@qq.com> wrote:

> Hi~
>
>
> My live cluster env config below:
> hbase version:cdh6.0.1(apache hbase2.0.0)
> hbase config: bucketCache(70g),blocksize(16k)
>
>
> java version:1.8.0_51
> javaconfig:heap(32g),-XX:+UseG1GC  -XX:MaxGCPauseMillis=100
> -XX:+ParallelRefProcEnabled
>
>
> About 1-2days ,regionServer would occur a old gen gc that cost 1~2s in
> remark phase:
>
>
> 2019-09-29T01:55:45.186+0800: 365222.053:
> [GC remark
> 2019-09-29T01:55:45.186+0800: 365222.053:
> [Finalize Marking, 0.0016327 secs]
> 2019-09-29T01:55:45.188+0800: 365222.054:
> [GC ref-proc
> 2019-09-29T01:55:45.188+0800: 365222.054: [SoftReference,
> 1264586 refs, 0.3151392 secs]
> 2019-09-29T01:55:45.503+0800: 365222.370: [WeakReference,
> 4317 refs, 0.0024381 secs]
> 2019-09-29T01:55:45.505+0800: 365222.372: [FinalReference,
> 9791 refs, 0.0037445 secs]
> 2019-09-29T01:55:45.509+0800: 365222.376:
> [PhantomReference, 0 refs, 1963 refs, 0.0018941 secs]
> 2019-09-29T01:55:45.511+0800: 365222.378: [JNI Weak
> Reference, 0.0001156 secs]
> , 1.4554361 secs]
> 2019-09-29T01:55:46.643+0800: 365223.510:
> [Unloading, 0.0211370 secs]
> , 1.4851728 secs]
>
> The SoftReference seems used by offsetLock in BucketCache, there is two
> questions :
> 1:SoftReference proc cost 0.31s,but why GC ref-proc cost 1.45s at all?
> 2:Is this a good choice to use SoftReference here?

a problem of long STW because of GC ref-proc

2019-09-29 Thread zheng wang
Hi~


My live cluster env config below:
hbase version:cdh6.0.1(apache hbase2.0.0)
hbase config: bucketCache(70g),blocksize(16k)


java version:1.8.0_51
javaconfig:heap(32g),-XX:+UseG1GC  -XX:MaxGCPauseMillis=100 
-XX:+ParallelRefProcEnabled


About every 1-2 days, the regionServer would hit an old-gen GC that cost 1~2s in the
remark phase:


2019-09-29T01:55:45.186+0800: 365222.053: 
[GC remark 
2019-09-29T01:55:45.186+0800: 365222.053: 
[Finalize Marking, 0.0016327 secs] 
2019-09-29T01:55:45.188+0800: 365222.054: 
[GC ref-proc
2019-09-29T01:55:45.188+0800: 365222.054: [SoftReference, 
1264586 refs, 0.3151392 secs]
2019-09-29T01:55:45.503+0800: 365222.370: [WeakReference, 4317 
refs, 0.0024381 secs]
2019-09-29T01:55:45.505+0800: 365222.372: [FinalReference, 9791 
refs, 0.0037445 secs]
2019-09-29T01:55:45.509+0800: 365222.376: [PhantomReference, 0 
refs, 1963 refs, 0.0018941 secs]
2019-09-29T01:55:45.511+0800: 365222.378: [JNI Weak Reference, 
0.0001156 secs]
, 1.4554361 secs] 
2019-09-29T01:55:46.643+0800: 365223.510: 
[Unloading, 0.0211370 secs]
, 1.4851728 secs]

The SoftReference seems to be used by the offsetLock in BucketCache. There are two
questions:
1: SoftReference processing cost 0.31s, but why did GC ref-proc cost 1.45s in total?
2: Is SoftReference a good choice here?