[jira] [Commented] (HBASE-10486) ProtobufUtil Append & Increment deserialization lost cell level timestamp

2014-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13895464#comment-13895464
 ] 

Hadoop QA commented on HBASE-10486:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12627784/hbase-10486-v2.patch
  against trunk revision .
  ATTACHMENT ID: 12627784

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100.

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8641//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8641//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8641//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8641//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8641//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8641//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8641//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8641//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8641//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8641//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8641//console

This message is automatically generated.

 ProtobufUtil Append & Increment deserialization lost cell level timestamp
 -

 Key: HBASE-10486
 URL: https://issues.apache.org/jira/browse/HBASE-10486
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.96.1
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
 Fix For: 0.98.1

 Attachments: hbase-10486-v2.patch, hbase-10486.patch


 When we deserialize Append & Increment, we use the wrong timestamp value during 
 deserialization in the trunk & 0.98 code, and discard the value in the 0.96 code base. 
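The bug class is easy to illustrate without HBase itself. Below is a minimal plain-Java sketch (hypothetical names; `LATEST_TIMESTAMP` stands in for `HConstants.LATEST_TIMESTAMP`, and neither method is the actual ProtobufUtil code) contrasting a deserializer that stamps every cell with the operation-level timestamp against one that preserves a cell-level timestamp when one was set:

```java
// Hypothetical sketch, not the HBase implementation: shows how per-cell
// timestamps get lost when deserialization substitutes the mutation-level
// timestamp for every cell.
public class TimestampDemo {
    // Stand-in for HConstants.LATEST_TIMESTAMP (meaning "no explicit ts set").
    static final long LATEST_TIMESTAMP = Long.MAX_VALUE;

    // Lossy variant: always uses the operation-level timestamp.
    static long deserializeLossy(long cellTs, long mutationTs) {
        return mutationTs;
    }

    // Preserving variant: keeps the cell-level timestamp when one was set,
    // falling back to the mutation-level timestamp otherwise.
    static long deserializePreserving(long cellTs, long mutationTs) {
        return cellTs != LATEST_TIMESTAMP ? cellTs : mutationTs;
    }
}
```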



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-3909) Add dynamic config

2014-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13895475#comment-13895475
 ] 

Hadoop QA commented on HBASE-3909:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12627787/HBASE-3909-backport-from-fb-for-trunk-6.patch
  against trunk revision .
  ATTACHMENT ID: 12627787

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 3 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+private UpdateConfigurationRequest(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
+private UpdateConfigurationResponse(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
+   * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>
+ * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>
+private UpdateConfigurationRequest(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
+private UpdateConfigurationResponse(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
+   * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>
+ * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.regionserver.wal.TestLogRolling.testLogRollOnPipelineRestart(TestLogRolling.java:485)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8642//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8642//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8642//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8642//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8642//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8642//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8642//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8642//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8642//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8642//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8642//console

This message is automatically generated.

 Add dynamic config
 --

 Key: HBASE-3909
 URL: https://issues.apache.org/jira/browse/HBASE-3909
 Project: HBase
  Issue Type: New Feature
Reporter: stack
Assignee: Subbu M Iyer
 Attachments: 3909-102812.patch, 3909-102912.patch, 3909-v1.patch, 
 3909.v1, 3909_090712-2.patch, HBASE-3909-backport-from-fb-for-trunk-2.patch, 
 HBASE-3909-backport-from-fb-for-trunk-3.patch, 
 HBASE-3909-backport-from-fb-for-trunk-4.patch, 
 HBASE-3909-backport-from-fb-for-trunk-5.patch, 
 HBASE-3909-backport-from-fb-for-trunk-6.patch, 
 

[jira] [Updated] (HBASE-3909) Add dynamic config

2014-02-08 Thread binlijin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin updated HBASE-3909:


Attachment: HBASE-3909-backport-from-fb-for-trunk-7.patch

 Add dynamic config
 --

 Key: HBASE-3909
 URL: https://issues.apache.org/jira/browse/HBASE-3909
 Project: HBase
  Issue Type: New Feature
Reporter: stack
Assignee: Subbu M Iyer
 Attachments: 3909-102812.patch, 3909-102912.patch, 3909-v1.patch, 
 3909.v1, 3909_090712-2.patch, HBASE-3909-backport-from-fb-for-trunk-2.patch, 
 HBASE-3909-backport-from-fb-for-trunk-3.patch, 
 HBASE-3909-backport-from-fb-for-trunk-4.patch, 
 HBASE-3909-backport-from-fb-for-trunk-5.patch, 
 HBASE-3909-backport-from-fb-for-trunk-6.patch, 
 HBASE-3909-backport-from-fb-for-trunk-7.patch, 
 HBASE-3909-backport-from-fb-for-trunk.patch, HBase Cluster Config 
 Details.xlsx, patch-v2.patch, testMasterNoCluster.stack


 I'm sure this issue exists already, at least as part of the discussion around 
 making online schema edits possible, but no harm in this having its own issue.  
 Ted started a conversation on this topic up on dev, and Todd suggested we 
 look at how Hadoop did it over in HADOOP-7001



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-3909) Add dynamic config

2014-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13895513#comment-13895513
 ] 

Hadoop QA commented on HBASE-3909:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12627802/HBASE-3909-backport-from-fb-for-trunk-7.patch
  against trunk revision .
  ATTACHMENT ID: 12627802

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+private UpdateConfigurationRequest(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
+private UpdateConfigurationResponse(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
+   * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>
+ * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>
+private UpdateConfigurationRequest(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
+private UpdateConfigurationResponse(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
+   * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>
+ * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8643//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8643//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8643//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8643//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8643//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8643//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8643//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8643//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8643//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8643//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8643//console

This message is automatically generated.

 Add dynamic config
 --

 Key: HBASE-3909
 URL: https://issues.apache.org/jira/browse/HBASE-3909
 Project: HBase
  Issue Type: New Feature
Reporter: stack
Assignee: Subbu M Iyer
 Attachments: 3909-102812.patch, 3909-102912.patch, 3909-v1.patch, 
 3909.v1, 3909_090712-2.patch, HBASE-3909-backport-from-fb-for-trunk-2.patch, 
 HBASE-3909-backport-from-fb-for-trunk-3.patch, 
 HBASE-3909-backport-from-fb-for-trunk-4.patch, 
 HBASE-3909-backport-from-fb-for-trunk-5.patch, 
 HBASE-3909-backport-from-fb-for-trunk-6.patch, 
 HBASE-3909-backport-from-fb-for-trunk-7.patch, 
 HBASE-3909-backport-from-fb-for-trunk.patch, HBase Cluster Config 
 Details.xlsx, patch-v2.patch, testMasterNoCluster.stack


 I'm sure this issue exists already, at least as part 

[jira] [Created] (HBASE-10487) Avoid allocating new KeyValue for appended kvs which don't have existing(old) values

2014-02-08 Thread Feng Honghua (JIRA)
Feng Honghua created HBASE-10487:


 Summary: Avoid allocating new KeyValue for appended kvs which 
don't have existing(old) values
 Key: HBASE-10487
 URL: https://issues.apache.org/jira/browse/HBASE-10487
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: Feng Honghua
Assignee: Feng Honghua


in HRegion.append, a new KeyValue will be allocated no matter whether there is an 
existing kv for the appended cell. We can improve here by avoiding the allocation 
of a new KeyValue for a kv without an existing value, by reusing the passed-in kv 
and only updating its timestamp to 'now' (its original timestamp is latest, so it 
can be updated)
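The proposed optimization can be sketched in plain Java (hypothetical `Kv` type and `append` helper, not HBase's KeyValue or HRegion code): when there is no existing value, return the caller's kv with only its timestamp refreshed; only allocate and copy bytes when a merge is actually needed.

```java
import java.util.Arrays;

// Illustrative sketch of "reuse the passed-in kv when there is no old value";
// names are hypothetical stand-ins for the real HBase types.
public class AppendDemo {
    static final class Kv {
        final byte[] value;
        long ts;
        Kv(byte[] value, long ts) { this.value = value; this.ts = ts; }
    }

    // Returns the kv to write: a freshly merged kv when an old value exists,
    // otherwise the passed-in kv itself with only its timestamp updated.
    static Kv append(Kv passedIn, byte[] oldValue, long now) {
        if (oldValue == null) {
            passedIn.ts = now;   // no allocation, no bytes-copying
            return passedIn;
        }
        // Old value exists: build old + new, as append semantics require.
        byte[] merged = Arrays.copyOf(oldValue, oldValue.length + passedIn.value.length);
        System.arraycopy(passedIn.value, 0, merged, oldValue.length, passedIn.value.length);
        return new Kv(merged, now);
    }
}
```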



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10487) Avoid allocating new KeyValue for appended kvs which don't have existing(old) values

2014-02-08 Thread Feng Honghua (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Honghua updated HBASE-10487:
-

Attachment: HBASE-10487-trunk_v1.patch

patch attached

 Avoid allocating new KeyValue for appended kvs which don't have existing(old) 
 values
 

 Key: HBASE-10487
 URL: https://issues.apache.org/jira/browse/HBASE-10487
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: Feng Honghua
Assignee: Feng Honghua
 Attachments: HBASE-10487-trunk_v1.patch


 in HRegion.append, a new KeyValue will be allocated no matter whether there is 
 an existing kv for the appended cell. We can improve here by avoiding the 
 allocation of a new KeyValue for a kv without an existing value, by reusing the 
 passed-in kv and only updating its timestamp to 'now' (its original timestamp 
 is latest, so it can be updated)



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10487) Avoid allocating new KeyValue and bytes-copying for appended kvs which don't have existing(old) values

2014-02-08 Thread Feng Honghua (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Honghua updated HBASE-10487:
-

Description: in HRegion.append, a new KeyValue will be allocated no matter 
there is existing kv for the appended cell, we can improve here by avoiding the 
allocating of new KeyValue and according bytes-copying for kv which don't have 
existing(old) values by reusing the passed-in kv and only updating its 
timestamp to 'now'(its original timestamp is latest, so can be updated)  (was: 
in HRegion.append, a new KeyValue will be allocated no matter there is existing 
kv for the appended cell, we can improve here by avoiding the allocating of new 
KeyValue for kv without existing value by reusing the passed-in kv and only 
update its timestamp to 'now'(its original timestamp is latest, so can be 
updated))
Summary: Avoid allocating new KeyValue and bytes-copying for appended 
kvs which don't have existing(old) values  (was: Avoid allocating new KeyValue 
for appended kvs which don't have existing(old) values)

 Avoid allocating new KeyValue and bytes-copying for appended kvs which don't 
 have existing(old) values
 --

 Key: HBASE-10487
 URL: https://issues.apache.org/jira/browse/HBASE-10487
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: Feng Honghua
Assignee: Feng Honghua
 Attachments: HBASE-10487-trunk_v1.patch


 in HRegion.append, a new KeyValue will be allocated no matter whether there is 
 an existing kv for the appended cell. We can improve here by avoiding the 
 allocation of a new KeyValue, and the corresponding bytes-copying, for kvs 
 which don't have existing (old) values, by reusing the passed-in kv and only 
 updating its timestamp to 'now' (its original timestamp is latest, so it can 
 be updated)



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10487) Avoid allocating new KeyValue and according bytes-copying for appended kvs which don't have existing(old) values

2014-02-08 Thread Feng Honghua (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Honghua updated HBASE-10487:
-

Summary: Avoid allocating new KeyValue and according bytes-copying for 
appended kvs which don't have existing(old) values  (was: Avoid allocating new 
KeyValue and bytes-copying for appended kvs which don't have existing(old) 
values)

 Avoid allocating new KeyValue and according bytes-copying for appended kvs 
 which don't have existing(old) values
 

 Key: HBASE-10487
 URL: https://issues.apache.org/jira/browse/HBASE-10487
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: Feng Honghua
Assignee: Feng Honghua
 Attachments: HBASE-10487-trunk_v1.patch


 in HRegion.append, a new KeyValue will be allocated no matter whether there is 
 an existing kv for the appended cell. We can improve here by avoiding the 
 allocation of a new KeyValue, and the corresponding bytes-copying, for kvs 
 which don't have existing (old) values, by reusing the passed-in kv and only 
 updating its timestamp to 'now' (its original timestamp is latest, so it can 
 be updated)



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10487) Avoid allocating new KeyValue and according bytes-copying for appended kvs which don't have existing(old) values

2014-02-08 Thread Feng Honghua (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Honghua updated HBASE-10487:
-

Description: in HRegion.append, new KeyValues will be allocated and do 
according bytes-copying no matter whether there are existing kv for the 
appended cells, we can improve here by avoiding the allocating of new KeyValue 
and according bytes-copying for kv which don't have existing(old) values by 
reusing the passed-in kv and only updating its timestamp to 'now'(its original 
timestamp is latest, so can be updated)  (was: in HRegion.append, a new 
KeyValue will be allocated no matter there is existing kv for the appended 
cell, we can improve here by avoiding the allocating of new KeyValue and 
according bytes-copying for kv which don't have existing(old) values by reusing 
the passed-in kv and only updating its timestamp to 'now'(its original 
timestamp is latest, so can be updated))

 Avoid allocating new KeyValue and according bytes-copying for appended kvs 
 which don't have existing(old) values
 

 Key: HBASE-10487
 URL: https://issues.apache.org/jira/browse/HBASE-10487
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: Feng Honghua
Assignee: Feng Honghua
 Attachments: HBASE-10487-trunk_v1.patch


 in HRegion.append, new KeyValues will be allocated, with the corresponding 
 bytes-copying, no matter whether there are existing kvs for the appended 
 cells. We can improve here by avoiding the allocation of a new KeyValue, and 
 the corresponding bytes-copying, for kvs which don't have existing (old) 
 values, by reusing the passed-in kv and only updating its timestamp to 'now' 
 (its original timestamp is latest, so it can be updated)



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10463) Filter on columns containing numerics yield wrong results

2014-02-08 Thread Deepa Vasanthkumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13895586#comment-13895586
 ] 

Deepa Vasanthkumar commented on HBASE-10463:


Thanks [~ndimiduk]
The approach was just to create a Comparator class which extends 
WritableByteArrayComparable, and provide a compareTo which uses BigDecimal 
comparison. 

I am interested in contributing this. I will go through the article 
http://hbase.apache.org/book/submitting.patches.html and will update shortly. 
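The ordering problem behind this issue can be shown without any HBase types. The sketch below is illustrative only (the real comparator would plug into WritableByteArrayComparable as Deepa describes): bytes of ASCII-encoded numbers compare lexicographically, so "9" sorts after "10", while a BigDecimal compare orders them numerically.

```java
import java.math.BigDecimal;
import java.nio.charset.StandardCharsets;

// Illustrative sketch: lexicographic byte compare (what raw filters do)
// versus a BigDecimal-based numeric compare (what the proposal adds).
public class NumericCompareDemo {
    // Unsigned lexicographic comparison of byte arrays.
    static int lexCompare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    // Numeric comparison: parse both values as BigDecimal, then compare.
    static int numericCompare(byte[] a, byte[] b) {
        return new BigDecimal(new String(a, StandardCharsets.UTF_8))
            .compareTo(new BigDecimal(new String(b, StandardCharsets.UTF_8)));
    }
}
```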






 Filter on columns containing numerics yield wrong results
 -

 Key: HBASE-10463
 URL: https://issues.apache.org/jira/browse/HBASE-10463
 Project: HBase
  Issue Type: Improvement
  Components: Filters
Affects Versions: 0.94.8
Reporter: Deepa Vasanthkumar
   Original Estimate: 168h
  Remaining Estimate: 168h

 Used SingleColumnValueFilter with CompareFilter.CompareOp.GREATER_OR_EQUAL 
 for filtering the scan result. 
 However the columns which have numeric value, scan result is not correct, 
 because of lexicographic comparison.
 Does HBase support numeric value filters (for equal, greater or equal..) for 
 columns ? If not, can we add it?



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10485) PrefixFilter#filterKeyValue() should perform filtering on row key

2014-02-08 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13895682#comment-13895682
 ] 

Lars Hofhansl commented on HBASE-10485:
---

Looks. All filters need to be consistent between filterRow, filterRowKey, and 
filterKeyValue, or the results are undefined.
I'd probably write it as:
{code}
  return (filterRowKey(v.getBuffer(), v.getRowOffset(), v.getRowLength())) ? 
ReturnCode.SKIP : ReturnCode.INCLUDE;
{code}
(But that's just a nit)

 PrefixFilter#filterKeyValue() should perform filtering on row key
 -

 Key: HBASE-10485
 URL: https://issues.apache.org/jira/browse/HBASE-10485
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 10485-0.94.txt, 10485-v1.txt


 Niels reported an issue under the thread 'Trouble writing custom filter for 
 use in FilterList' where his custom filter, used in a FilterList along with 
 PrefixFilter, produced unexpected results.
 His test can be found here:
 https://github.com/nielsbasjes/HBase-filter-problem
 This is due to PrefixFilter#filterKeyValue() using 
 FilterBase#filterKeyValue(), which returns ReturnCode.INCLUDE.
 When FilterList.Operator.MUST_PASS_ONE is specified, 
 FilterList#filterKeyValue() would return ReturnCode.INCLUDE even when the row 
 key prefix doesn't match, while the other filter's filterKeyValue() returns 
 ReturnCode.NEXT_COL.
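The failure mode can be modeled with a toy MUST_PASS_ONE (a simplified stand-in, not the actual FilterList implementation): the list includes a row if any member includes it, so a filter that unconditionally answers "include" from filterKeyValue masks every other filter's verdict.

```java
import java.util.List;
import java.util.function.Predicate;

// Toy model of FilterList.Operator.MUST_PASS_ONE semantics: each predicate
// plays the role of a filter's filterKeyValue() returning INCLUDE (true) or
// something more restrictive (false). Hypothetical, for illustration only.
public class MustPassOneDemo {
    // A cell passes the list if at least one filter includes it.
    static boolean mustPassOne(List<Predicate<String>> filters, String rowKey) {
        return filters.stream().anyMatch(f -> f.test(rowKey));
    }
}
```

With a PrefixFilter-like member that always returns true (the pre-fix behavior), every row passes regardless of the other filters; once the prefix member actually checks the row key, non-matching rows are excluded as expected.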



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10485) PrefixFilter#filterKeyValue() should perform filtering on row key

2014-02-08 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13895686#comment-13895686
 ] 

Ted Yu commented on HBASE-10485:


bq. Looks.

Did you mean 'Looks good' ?

 PrefixFilter#filterKeyValue() should perform filtering on row key
 -

 Key: HBASE-10485
 URL: https://issues.apache.org/jira/browse/HBASE-10485
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 10485-0.94.txt, 10485-v1.txt


 Niels reported an issue under the thread 'Trouble writing custom filter for 
 use in FilterList' where his custom filter, used in a FilterList along with 
 PrefixFilter, produced unexpected results.
 His test can be found here:
 https://github.com/nielsbasjes/HBase-filter-problem
 This is due to PrefixFilter#filterKeyValue() using 
 FilterBase#filterKeyValue(), which returns ReturnCode.INCLUDE.
 When FilterList.Operator.MUST_PASS_ONE is specified, 
 FilterList#filterKeyValue() would return ReturnCode.INCLUDE even when the row 
 key prefix doesn't match, while the other filter's filterKeyValue() returns 
 ReturnCode.NEXT_COL.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10485) PrefixFilter#filterKeyValue() should perform filtering on row key

2014-02-08 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13895692#comment-13895692
 ] 

Lars Hofhansl commented on HBASE-10485:
---

bq. Did you mean 'Looks good' ?

Yes :)

 PrefixFilter#filterKeyValue() should perform filtering on row key
 -

 Key: HBASE-10485
 URL: https://issues.apache.org/jira/browse/HBASE-10485
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 10485-0.94.txt, 10485-v1.txt


 Niels reported an issue under the thread 'Trouble writing custom filter for 
 use in FilterList' where his custom filter, used in a FilterList along with 
 PrefixFilter, produced unexpected results.
 His test can be found here:
 https://github.com/nielsbasjes/HBase-filter-problem
 This is due to PrefixFilter#filterKeyValue() using 
 FilterBase#filterKeyValue(), which returns ReturnCode.INCLUDE.
 When FilterList.Operator.MUST_PASS_ONE is specified, 
 FilterList#filterKeyValue() would return ReturnCode.INCLUDE even when the row 
 key prefix doesn't match, while the other filter's filterKeyValue() returns 
 ReturnCode.NEXT_COL.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10485) PrefixFilter#filterKeyValue() should perform filtering on row key

2014-02-08 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13895695#comment-13895695
 ] 

Ted Yu commented on HBASE-10485:


[~apurtell]:
Do you want this in 0.98 ?

 PrefixFilter#filterKeyValue() should perform filtering on row key
 -

 Key: HBASE-10485
 URL: https://issues.apache.org/jira/browse/HBASE-10485
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 10485-0.94.txt, 10485-v1.txt


 Niels reported an issue under the thread 'Trouble writing custom filter for 
 use in FilterList' where his custom filter, used in a FilterList along with 
 PrefixFilter, produced unexpected results.
 His test can be found here:
 https://github.com/nielsbasjes/HBase-filter-problem
 This is due to PrefixFilter#filterKeyValue() using 
 FilterBase#filterKeyValue(), which returns ReturnCode.INCLUDE.
 When FilterList.Operator.MUST_PASS_ONE is specified, 
 FilterList#filterKeyValue() would return ReturnCode.INCLUDE even when the row 
 key prefix doesn't match, while the other filter's filterKeyValue() returns 
 ReturnCode.NEXT_COL.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10413) Tablesplit.getLength returns 0

2014-02-08 Thread Lukas Nalezenec (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13895729#comment-13895729
 ] 

Lukas Nalezenec commented on HBASE-10413:
-

Let's make RegionSizeCalculator @InterfaceAudience.Private. Users are not 
expected to call this directly, right?
 - I am not sure - I have no experience with using the InterfaceAudience 
annotation. A lot of developers use a heavily customized TableInputFormat; 
they may want to use this class. I have changed it to Private. (Btw: I was 
told to change it from Private to Public in a previous code review.)

Instead of TableSplit.setLength(), you can override the ctor. TableSplit acts 
like an immutable data-bean-like object.
 - That means there will be a ctor with 6 parameters. IMO that is too much, but 
if you really want me to do it I will.

 In some cases, the regions might split or merge concurrently between getting 
the startEndKeys and asking the cluster for the regions. In this case, for that 
range, we might default to 0, but it should be ok I think. We are not just 
estimating the region sizes here.
 - I think it's not worth doing - it will be rare, and the difference will be 
insignificant most of the time.
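The approach under discussion can be sketched as follows (simplified, with hypothetical names; the actual patch revolves around RegionSizeCalculator and TableInputFormat): each split carries a length looked up from a precomputed map of region sizes, defaulting to 0 for regions that split or merged in between.

```java
import java.util.Map;

// Illustrative sketch: a split whose getLength() reports bytes on HDFS for
// its region, so the MapReduce framework can sort splits by size.
public class SplitLengthDemo {
    static final class TableSplit {
        final String regionName;
        final long length;   // bytes on HDFS for this region; 0 if unknown
        TableSplit(String regionName, long length) {
            this.regionName = regionName;
            this.length = length;
        }
        long getLength() { return length; }
    }

    // Regions that split/merge between the two lookups simply default to 0,
    // which is acceptable since the length is only an ordering hint.
    static TableSplit makeSplit(String regionName, Map<String, Long> regionSizes) {
        return new TableSplit(regionName, regionSizes.getOrDefault(regionName, 0L));
    }
}
```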


 Tablesplit.getLength returns 0
 --

 Key: HBASE-10413
 URL: https://issues.apache.org/jira/browse/HBASE-10413
 Project: HBase
  Issue Type: Bug
  Components: Client, mapreduce
Affects Versions: 0.96.1.1
Reporter: Lukas Nalezenec
Assignee: Lukas Nalezenec
 Attachments: HBASE-10413-2.patch, HBASE-10413-3.patch, 
 HBASE-10413-4.patch, HBASE-10413.patch


 InputSplits should be sorted by length, but TableSplit does not contain a real 
 getLength implementation:
 {code}
   @Override
   public long getLength() {
     // Not clear how to obtain this... seems to be used only for sorting splits
     return 0;
   }
 {code}
 This is causing us problems with scheduling - we have jobs that are supposed 
 to finish in limited time, but they often get stuck in the last mapper 
 working on a large region.
 Can we implement this method? 
 What is the best way?
 We were thinking about estimating the size by the size of the files on HDFS.
 We would like to get a Scanner from the TableSplit, use startRow, stopRow and 
 column families to get the corresponding region, then compute the size on 
 HDFS for the given region and column family. 
 Update:
 This ticket was about a production issue - I talked with the guy who worked 
 on this, and he said our production issue was probably not directly caused by 
 getLength() returning 0. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10413) Tablesplit.getLength returns 0

2014-02-08 Thread Lukas Nalezenec (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Nalezenec updated HBASE-10413:


Attachment: HBASE-10413-5.patch

Fix after code review.
TableSplit still contains setLength().






[jira] [Commented] (HBASE-10413) Tablesplit.getLength returns 0

2014-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13895767#comment-13895767
 ] 

Hadoop QA commented on HBASE-10413:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12627835/HBASE-10413-5.patch
  against trunk revision .
  ATTACHMENT ID: 12627835

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.util.TestHBaseFsck

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8644//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8644//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8644//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8644//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8644//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8644//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8644//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8644//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8644//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8644//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8644//console

This message is automatically generated.






[jira] [Resolved] (HBASE-4064) Two concurrent unassigning of the same region caused the endless loop of Region has been PENDING_CLOSE for too long...

2014-02-08 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh resolved HBASE-4064.
---

Resolution: Won't Fix

closing, ancient.

 Two concurrent unassigning of the same region caused the endless loop of 
 Region has been PENDING_CLOSE for too long...
 

 Key: HBASE-4064
 URL: https://issues.apache.org/jira/browse/HBASE-4064
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.90.3
Reporter: Jieshan Bean
 Fix For: 0.90.7

 Attachments: HBASE-4064-v1.patch, HBASE-4064_branch90V2.patch, 
 disableflow.png


 1. If there is a stale RegionState object in PENDING_CLOSE in 
 regionsInTransition (the RegionState was left behind by some exception and 
 should have been removed - that's why I call it stale), but the region is not 
 currently assigned anywhere, TimeoutMonitor will fall into an endless loop:
 2011-06-27 10:32:21,326 INFO 
 org.apache.hadoop.hbase.master.AssignmentManager: Regions in transition timed 
 out:  test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f. 
 state=PENDING_CLOSE, ts=1309141555301
 2011-06-27 10:32:21,326 INFO 
 org.apache.hadoop.hbase.master.AssignmentManager: Region has been 
 PENDING_CLOSE for too long, running forced unassign again on 
 region=test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f.
 2011-06-27 10:32:21,438 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Starting unassignment of 
 region test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f. 
 (offlining)
 2011-06-27 10:32:21,441 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Attempted to unassign 
 region test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f. but it is 
 not currently assigned anywhere
 2011-06-27 10:32:31,207 INFO 
 org.apache.hadoop.hbase.master.AssignmentManager: Regions in transition timed 
 out:  test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f. 
 state=PENDING_CLOSE, ts=1309141555301
 2011-06-27 10:32:31,207 INFO 
 org.apache.hadoop.hbase.master.AssignmentManager: Region has been 
 PENDING_CLOSE for too long, running forced unassign again on 
 region=test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f.
 2011-06-27 10:32:31,215 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Starting unassignment of 
 region test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f. 
 (offlining)
 2011-06-27 10:32:31,215 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Attempted to unassign 
 region test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f. but it is 
 not currently assigned anywhere
 2011-06-27 10:32:41,164 INFO 
 org.apache.hadoop.hbase.master.AssignmentManager: Regions in transition timed 
 out:  test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f. 
 state=PENDING_CLOSE, ts=1309141555301
 2011-06-27 10:32:41,164 INFO 
 org.apache.hadoop.hbase.master.AssignmentManager: Region has been 
 PENDING_CLOSE for too long, running forced unassign again on 
 region=test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f.
 2011-06-27 10:32:41,172 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Starting unassignment of 
 region test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f. 
 (offlining)
 2011-06-27 10:32:41,172 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Attempted to unassign 
 region test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f. but it is 
 not currently assigned anywhere
  ...
 2. In the following scenario, two concurrent unassign calls for the same 
 region can lead to the above problem:
 The first unassign call's RPC succeeds; the master watches the 
 RS_ZK_REGION_CLOSED event and, while processing it, creates a 
 ClosedRegionHandler to remove the region's state on the master.
 While the ClosedRegionHandler is running in an 
 hbase.master.executor.closeregion.threads thread (A), another unassign call 
 for the same region runs in another thread (B).
 When thread B evaluates if (!regions.containsKey(region)), this.regions still 
 contains the region info; now the CPU switches to thread A.
 Thread A removes the region from this.regions and regionsInTransition, then 
 control switches back to thread B. Thread B continues and throws an exception 
 with the message Server null returned java.lang.NullPointerException: Passed 
 server is null for 9a6e26d40293663a79523c58315b930f, but without removing the 
 newly added RegionState from regionsInTransition, so it can never be removed.
  public void unassign(HRegionInfo region, boolean force) {
    LOG.debug("Starting unassignment of region " +
      region.getRegionNameAsString() + " (offlining)");
    synchronized (this.regions) {
      // 
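 The check-then-act race described above can be reproduced in miniature. The 
 class and method names below are hypothetical, not the AssignmentManager code; 
 the point is that the containsKey check and the later use of the entry are two 
 separate steps unless made atomic:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical model of the race: another thread (like ClosedRegionHandler
// above) may remove the entry between the check and the use.
public class UnassignRace {
    private final Map<String, String> regions = new ConcurrentHashMap<>();

    void assign(String region, String server) {
        regions.put(region, server);
    }

    // Buggy shape: check and use are two separate steps.
    String unassignRacy(String region) {
        if (!regions.containsKey(region)) {
            return "not assigned";
        }
        // A concurrent remove can land here, making the get() return null.
        String server = regions.get(region);
        return server == null ? "null server" : "unassigned from " + server;
    }

    // Fixed shape: one atomic removal decides the outcome, so a concurrent
    // remover and this caller can never both see the region as present.
    String unassignAtomic(String region) {
        String server = regions.remove(region);
        return server == null ? "not assigned" : "unassigned from " + server;
    }
}
```

 The atomic form also guarantees the in-transition state is cleaned up by 
 whichever thread wins, instead of leaking a RegionState forever.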

[jira] [Commented] (HBASE-10482) ReplicationSyncUp doesn't clean up its ZK, needed for tests

2014-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13895792#comment-13895792
 ] 

Hudson commented on HBASE-10482:


FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #84 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/84/])
HBASE-10482 ReplicationSyncUp doesn't clean up its ZK, needed for tests (stack: 
rev 1565837)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSyncUp.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationSyncUpTool.java


 ReplicationSyncUp doesn't clean up its ZK, needed for tests
 ---

 Key: HBASE-10482
 URL: https://issues.apache.org/jira/browse/HBASE-10482
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.96.1, 0.94.16
Reporter: Jean-Daniel Cryans
Assignee: Jean-Daniel Cryans
 Fix For: 0.98.1, 0.99.0, 0.94.17

 Attachments: HBASE-10249.patch


 TestReplicationSyncUpTool failed again:
 https://builds.apache.org/job/HBase-TRUNK/4895/testReport/junit/org.apache.hadoop.hbase.replication/TestReplicationSyncUpTool/testSyncUpTool/
 It's not super obvious why only one of the two tables is replicated; the test 
 could use some more logging, but I understand it this way:
 The first ReplicationSyncUp gets started and for some reason it cannot 
 replicate the data:
 {noformat}
 2014-02-06 21:32:19,811 INFO  [Thread-1372] 
 regionserver.ReplicationSourceManager(203): Current list of replicators: 
 [1391722339091.SyncUpTool.replication.org,1234,1, 
 quirinus.apache.org,37045,1391722237951, 
 quirinus.apache.org,33502,1391722238125] other RSs: []
 2014-02-06 21:32:19,811 INFO  [Thread-1372.replicationSource,1] 
 regionserver.ReplicationSource(231): Replicating 
 db42e7fc-7f29-4038-9292-d85ea8b9994b - 783c0ab2-4ff9-4dc0-bb38-86bf31d1d817
 2014-02-06 21:32:19,892 TRACE [Thread-1372.replicationSource,2] 
 regionserver.ReplicationSource(596): No log to process, sleeping 100 times 1
 2014-02-06 21:32:19,911 TRACE [Thread-1372.replicationSource,1] 
 regionserver.ReplicationSource(596): No log to process, sleeping 100 times 1
 2014-02-06 21:32:20,094 TRACE [Thread-1372.replicationSource,2] 
 regionserver.ReplicationSource(596): No log to process, sleeping 100 times 2
 ...
 2014-02-06 21:32:23,414 TRACE [Thread-1372.replicationSource,1] 
 regionserver.ReplicationSource(596): No log to process, sleeping 100 times 8
 2014-02-06 21:32:23,673 INFO  [ReplicationExecutor-0] 
 replication.ReplicationQueuesZKImpl(169): Moving 
 quirinus.apache.org,37045,1391722237951's hlogs to my queue
 2014-02-06 21:32:23,768 DEBUG [ReplicationExecutor-0] 
 replication.ReplicationQueuesZKImpl(396): Creating 
 quirinus.apache.org%2C37045%2C1391722237951.1391722243779 with data 10803
 2014-02-06 21:32:23,842 DEBUG [ReplicationExecutor-0] 
 replication.ReplicationQueuesZKImpl(396): Creating 
 quirinus.apache.org%2C37045%2C1391722237951.1391722243779 with data 10803
 2014-02-06 21:32:24,297 TRACE [Thread-1372.replicationSource,2] 
 regionserver.ReplicationSource(596): No log to process, sleeping 100 times 9
 2014-02-06 21:32:24,314 TRACE [Thread-1372.replicationSource,1] 
 regionserver.ReplicationSource(596): No log to process, sleeping 100 times 9
 {noformat}
 Finally it gives up:
 {noformat}
 2014-02-06 21:32:30,873 DEBUG [Thread-1372] 
 replication.TestReplicationSyncUpTool(323): SyncUpAfterDelete failed at retry 
 = 0, with rowCount_ht1TargetPeer1 =100 and rowCount_ht2TargetAtPeer1 =200
 {noformat}
 The syncUp tool has an ID you can follow, grep for 
 syncupReplication1391722338885 or just the timestamp, and you can see it 
 doing things after that. The reason is that the tool closes the 
 ReplicationSourceManager but not the ZK connection, so events _still_ come in 
 and NodeFailoverWorker _still_ tries to recover queues but then there's 
 nothing to process them.
 Later in the logs you can see:
 {noformat}
 2014-02-06 21:32:37,381 INFO  [ReplicationExecutor-0] 
 replication.ReplicationQueuesZKImpl(169): Moving 
 quirinus.apache.org,33502,1391722238125's hlogs to my queue
 2014-02-06 21:32:37,567 INFO  [ReplicationExecutor-0] 
 replication.ReplicationQueuesZKImpl(239): Won't transfer the queue, another 
 RS took care of it because of: KeeperErrorCode = NoNode for 
 /1/replication/rs/quirinus.apache.org,33502,1391722238125/lock
 {noformat}
 There shouldn't be any racing, but by now someone has already moved 
 quirinus.apache.org,33502,1391722238125 away.
 FWIW I can't even make the test fail on my machine so I'm not 100% sure 
 closing the ZK connection fixes the issue, but at least it's the right thing 
 to do.
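 The fix direction - close the ZooKeeper connection along with the manager so 
 no more events arrive after shutdown - can be sketched generically. The 
 interfaces below are hypothetical stand-ins, not the ReplicationSyncUp code:

```java
// Hypothetical resources standing in for ReplicationSourceManager and the
// ZooKeeper watcher: closing only the first leaves the second delivering
// events (like NodeFailoverWorker recovering queues) with no one to act.
public class SyncUpShutdown {
    interface Manager { void close(); }
    interface ZkConnection { void close(); }

    // Close both, even if closing the manager throws.
    static void shutdown(Manager manager, ZkConnection zk) {
        try {
            manager.close();
        } finally {
            zk.close(); // the step the report says was missing
        }
    }
}
```

 The try/finally ordering matters: the event source (ZK) is torn down even when 
 the manager's close fails, so no watcher fires against dead machinery.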





[jira] [Commented] (HBASE-9631) add murmur3 hash

2014-02-08 Thread Gaurav Menghani (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13895821#comment-13895821
 ] 

Gaurav Menghani commented on HBASE-9631:


[~apurtell] I don't think that was an *increase*. That seems to be the 
probability of the BloomFilter being correct when it returns true. The false 
positive rate should be (1 - 0.99...). I would be surprised if the Bloom Filter 
is wrong with a probability of 0.99.. or more. Please correct me if I am wrong.
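The distinction being made - false-positive rate p versus correctness 1 - p - can be checked against the standard textbook Bloom filter estimate (1 - e^(-kn/m))^k for m bits, k hash functions, and n inserted keys. This is the generic formula, not code from the patch:

```java
public class BloomFpRate {
    // Standard textbook estimate of the false-positive probability for a
    // Bloom filter with m bits, k hash functions, and n inserted keys.
    static double falsePositiveRate(long m, int k, long n) {
        return Math.pow(1.0 - Math.exp(-(double) k * n / m), k);
    }

    public static void main(String[] args) {
        // 10 bits per key with 7 hashes: fp rate is under 1%, so the
        // "probability of being correct on a true answer" is the 0.99-ish
        // figure being discussed, not the error rate.
        double p = falsePositiveRate(10_000, 7, 1_000);
        System.out.printf("fp rate = %.4f, correctness = %.4f%n", p, 1.0 - p);
    }
}
```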

 add murmur3 hash
 

 Key: HBASE-9631
 URL: https://issues.apache.org/jira/browse/HBASE-9631
 Project: HBase
  Issue Type: New Feature
  Components: util
Affects Versions: 0.98.0
Reporter: Liang Xie
Assignee: Liang Xie
 Fix For: 0.98.0

 Attachments: HBase-9631-v2.txt, HBase-9631.txt


 MurmurHash3 is the successor to MurmurHash2. It comes in 3 variants - a 
 32-bit version that targets low latency for hash table use and two 128-bit 
 versions for generating unique identifiers for large blocks of data, one each 
 for x86 and x64 platforms.
 Several open source projects have already added murmur3, e.g. Cassandra and 
 Mahout.
 I just ported the murmur3 implementation from MAHOUT-862. For compatibility, 
 let's keep the default hash algorithm (murmur2) unchanged.
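 The 32-bit variant mentioned above is small enough to sketch. This is a 
 from-memory rendering of the public-domain MurmurHash3 x86_32 block mix and 
 finalization, for illustration only - not the MAHOUT-862 port in the patch:

```java
public class Murmur3 {
    // MurmurHash3 x86_32: little-endian 4-byte blocks, a tail switch with
    // deliberate fall-through, and an avalanche finalization mix.
    static int hash32(byte[] data, int seed) {
        final int c1 = 0xcc9e2d51, c2 = 0x1b873593;
        int h1 = seed;
        int i = 0;
        int nblocks = data.length / 4;
        for (int b = 0; b < nblocks; b++, i += 4) {
            int k1 = (data[i] & 0xff) | ((data[i + 1] & 0xff) << 8)
                   | ((data[i + 2] & 0xff) << 16) | ((data[i + 3] & 0xff) << 24);
            k1 *= c1; k1 = Integer.rotateLeft(k1, 15); k1 *= c2;
            h1 ^= k1; h1 = Integer.rotateLeft(h1, 13); h1 = h1 * 5 + 0xe6546b64;
        }
        int k1 = 0;
        switch (data.length & 3) {   // process the 1-3 leftover tail bytes
            case 3: k1 ^= (data[i + 2] & 0xff) << 16;
            case 2: k1 ^= (data[i + 1] & 0xff) << 8;
            case 1: k1 ^= (data[i] & 0xff);
                    k1 *= c1; k1 = Integer.rotateLeft(k1, 15); k1 *= c2;
                    h1 ^= k1;
        }
        h1 ^= data.length;           // fold in the length, then avalanche
        h1 ^= h1 >>> 16; h1 *= 0x85ebca6b;
        h1 ^= h1 >>> 13; h1 *= 0xc2b2ae35;
        h1 ^= h1 >>> 16;
        return h1;
    }
}
```

 Keeping murmur2 as the default, as the comment says, avoids invalidating 
 anything persisted with the old hash while still making murmur3 available.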





[jira] [Resolved] (HBASE-10309) Add support to delete empty regions in 0.94.x series

2014-02-08 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved HBASE-10309.
---

   Resolution: Won't Fix
Fix Version/s: (was: 0.94.17)

Alright. Lemme close this then.

 Add support to delete empty regions in 0.94.x series
 

 Key: HBASE-10309
 URL: https://issues.apache.org/jira/browse/HBASE-10309
 Project: HBase
  Issue Type: New Feature
Reporter: AcCud

 My use case: I have several tables where keys start with a timestamp. Because 
 of this, combined with the fact that I have set a 15-day retention period, 
 empty regions accumulate over time.
 I am sure that no writes will ever occur in these regions.
 It would be nice to have a tool to delete regions without having to stop the 
 cluster.
 The easiest way for me would be a tool that deletes all empty regions, but it 
 would also be fine to specify which region to delete.
 Something like:
 deleteRegion tableName region





[jira] [Commented] (HBASE-10482) ReplicationSyncUp doesn't clean up its ZK, needed for tests

2014-02-08 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13895837#comment-13895837
 ] 

Lars Hofhansl commented on HBASE-10482:
---

[~jdcryans], want this in 0.94 too? Or are you just debugging in 0.96+ for now?






[jira] [Resolved] (HBASE-10174) Back port HBASE-9667 'NullOutputStream removed from Guava 15' to 0.94

2014-02-08 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved HBASE-10174.
---

   Resolution: Won't Fix
Fix Version/s: (was: 0.94.17)

We aren't moving on this. Let me close. Interested parties can use the patch 
here. It is fixed in 0.96 and later.

 Back port HBASE-9667 'NullOutputStream removed from Guava 15' to 0.94
 -

 Key: HBASE-10174
 URL: https://issues.apache.org/jira/browse/HBASE-10174
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 10174-v2.txt, 10174-v3.txt, 9667-0.94.patch


 On the user mailing list, under the thread 'Guava 15', Kristoffer Sjögren 
 reported a NoClassDefFoundError when he used Guava 15.
 The issue has been fixed in 0.96+ by HBASE-9667.
 This JIRA ports the fix to the 0.94 branch.


