[jira] [Commented] (HBASE-11520) Simplify offheap cache config by removing the confusing hbase.bucketcache.percentage.in.combinedcache

2014-07-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14063204#comment-14063204
 ] 

stack commented on HBASE-11520:
---

bq. When bucketCacheIOEngineName is heap it is correct to calculate the 
memory size by mu.getMax() * bucketCachePercentage  But when it is offheap, 
size calculation based on max heap memory looks strange no?

Yeah.  It is fallout from the way in which BUCKET_CACHE_SIZE_KEY can be either 
MB or a float between 0 and 1.  I am reluctant to change this for 1.0.  Someone 
may be depending on this 'behavior'.  I intend to add more on BC to refguide 
describing options.  Will include doc on this little vagary.
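Until that refguide doc lands, the dual meaning of BUCKET_CACHE_SIZE_KEY can be sketched roughly as below. This is a hypothetical illustration, not the HBase source; the class and method names and the exact cutoff are assumptions:

```java
// Hypothetical sketch (not the HBase source): one setting read either as
// a fraction of max heap (values <= 1.0) or as a size in megabytes.
public class BucketCacheSizeSketch {
    static long cacheSizeBytes(float configured, long maxHeapBytes) {
        if (configured <= 1.0f) {
            // Treated as a percentage of the max heap -- even when the
            // engine is offheap, which is the vagary discussed above.
            return (long) (configured * maxHeapBytes);
        }
        // Otherwise treated as a flat size in megabytes.
        return (long) configured * 1024L * 1024L;
    }

    public static void main(String[] args) {
        long heap = 4L * 1024 * 1024 * 1024; // pretend a 4 GB max heap
        System.out.println(cacheSizeBytes(0.4f, heap));  // fraction of heap
        System.out.println(cacheSizeBytes(8192f, heap)); // plain megabytes
    }
}
```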

Resizeable CBC would be great though I'd say resizing an offheap BC is probably 
low priority; the important resizing is in the heap and you have the LruBC 
doing that already.

Thanks for the +1.  Let me commit.  The TestReplicaWithCluster is unrelated.

 Simplify offheap cache config by removing the confusing 
 hbase.bucketcache.percentage.in.combinedcache
 ---

 Key: HBASE-11520
 URL: https://issues.apache.org/jira/browse/HBASE-11520
 Project: HBase
  Issue Type: Sub-task
  Components: io
Affects Versions: 0.99.0
Reporter: stack
Assignee: stack
 Fix For: 0.99.0, 2.0.0

 Attachments: 11520.txt, 11520v2.txt, 11520v3.txt, 11520v3.txt


 Remove hbase.bucketcache.percentage.in.combinedcache.  It is unnecessary 
 complication of block cache config.  Let L1 config setup be as it is whether 
 a L2 present or not, just set hfile.block.cache.size (not 
 hbase.bucketcache.size * (1.0 - 
 hbase.bucketcache.percentage.in.combinedcache)).  For L2, let 
 hbase.bucketcache.size be the actual size of the bucket cache, not 
 hbase.bucketcache.size * hbase.bucketcache.percentage.in.combinedcache.
 Attached patch removes the config. and updates docs.  Adds tests to confirm 
 configs are as expected whether a CombinedBlockCache deploy or a strict L1+L2 
 deploy.
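 Under the simplified scheme described above, a deploy might be configured 
 roughly as in this illustrative hbase-site.xml fragment (the example values 
 are assumptions, not recommendations):

```xml
<!-- L1: set directly as a fraction of heap, no combinedcache arithmetic -->
<property>
  <name>hfile.block.cache.size</name>
  <value>0.4</value>
</property>
<!-- L2: hbase.bucketcache.size is the actual size of the bucket cache -->
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>offheap</value>
</property>
<property>
  <name>hbase.bucketcache.size</name>
  <value>4096</value>
</property>
```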



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11520) Simplify offheap cache config by removing the confusing hbase.bucketcache.percentage.in.combinedcache

2014-07-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11520:
--

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Applied to master and branch-1. Thanks for the reviews, lads.

 Simplify offheap cache config by removing the confusing 
 hbase.bucketcache.percentage.in.combinedcache
 ---

 Key: HBASE-11520
 URL: https://issues.apache.org/jira/browse/HBASE-11520
 Project: HBase
  Issue Type: Sub-task
  Components: io
Affects Versions: 0.99.0
Reporter: stack
Assignee: stack
 Fix For: 0.99.0, 2.0.0

 Attachments: 11520.txt, 11520v2.txt, 11520v3.txt, 11520v3.txt


 Remove hbase.bucketcache.percentage.in.combinedcache.  It is unnecessary 
 complication of block cache config.  Let L1 config setup be as it is whether 
 a L2 present or not, just set hfile.block.cache.size (not 
 hbase.bucketcache.size * (1.0 - 
 hbase.bucketcache.percentage.in.combinedcache)).  For L2, let 
 hbase.bucketcache.size be the actual size of the bucket cache, not 
 hbase.bucketcache.size * hbase.bucketcache.percentage.in.combinedcache.
 Attached patch removes the config. and updates docs.  Adds tests to confirm 
 configs are as expected whether a CombinedBlockCache deploy or a strict L1+L2 
 deploy.





[jira] [Commented] (HBASE-11400) Edit, consolidate, and update Compression and data encoding docs

2014-07-16 Thread Misty Stanley-Jones (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14063206#comment-14063206
 ] 

Misty Stanley-Jones commented on HBASE-11400:
-

Sorry, the problem is in the pom due to my mistake in HBASE-11521. I will 
attach a supplemental patch to pom.xml. The images go in src/main/docbkx/images/

 Edit, consolidate, and update Compression and data encoding docs
 

 Key: HBASE-11400
 URL: https://issues.apache.org/jira/browse/HBASE-11400
 Project: HBase
  Issue Type: Improvement
  Components: documentation
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
Priority: Minor
 Attachments: HBASE-11400-1.patch, HBASE-11400-2.patch, 
 HBASE-11400-3.patch, HBASE-11400-4.patch, HBASE-11400-5.patch, 
 HBASE-11400-6.patch, HBASE-11400-7.patch, HBASE-11400.patch, 
 data_block_diff_encoding.png, data_block_no_encoding.png, 
 data_block_prefix_encoding.png


 Current docs are here: http://hbase.apache.org/book.html#compression.test
 It could use some editing and expansion.





[jira] [Updated] (HBASE-11400) Edit, consolidate, and update Compression and data encoding docs

2014-07-16 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-11400:


Attachment: pom.xml.patch

 Edit, consolidate, and update Compression and data encoding docs
 

 Key: HBASE-11400
 URL: https://issues.apache.org/jira/browse/HBASE-11400
 Project: HBase
  Issue Type: Improvement
  Components: documentation
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
Priority: Minor
 Attachments: HBASE-11400-1.patch, HBASE-11400-2.patch, 
 HBASE-11400-3.patch, HBASE-11400-4.patch, HBASE-11400-5.patch, 
 HBASE-11400-6.patch, HBASE-11400-7.patch, HBASE-11400.patch, 
 data_block_diff_encoding.png, data_block_no_encoding.png, 
 data_block_prefix_encoding.png, pom.xml.patch


 Current docs are here: http://hbase.apache.org/book.html#compression.test
 It could use some editing and expansion.





[jira] [Commented] (HBASE-11518) doc update for how to create non-shared HConnection

2014-07-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14063207#comment-14063207
 ] 

Hadoop QA commented on HBASE-11518:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12655984/HBASE-11518-master-v1.patch
  against trunk revision .
  ATTACHMENT ID: 12655984

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestReplicaWithCluster

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10084//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10084//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10084//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10084//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10084//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10084//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10084//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10084//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10084//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10084//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10084//console

This message is automatically generated.

 doc update for how to create non-shared HConnection
 ---

 Key: HBASE-11518
 URL: https://issues.apache.org/jira/browse/HBASE-11518
 Project: HBase
  Issue Type: Bug
  Components: documentation
Affects Versions: 0.96.2, 0.94.21, 0.98.4
Reporter: Qiang Tian
Assignee: Qiang Tian
Priority: Minor
 Attachments: HBASE-11518-master-v1.patch, hbase-11518-master.patch


 creation for non-shared connections has changed since hbase-3777, but the doc 
 still remains the same...
 a simple test program:
 public class testHbase2 {
 public static void main(String[] args) throws Exception {
 Configuration conf = HBaseConfiguration.create();
 conf.set("hbase.zookeeper.quorum", "localhost");
 conf.set("hbase.zookeeper.property.clientPort", "2181");
 conf.set("hbase.client.instance.id", "2");
 HBaseAdmin admin = new HBaseAdmin(conf);
 conf.set("hbase.client.instance.id", "3");
 HBaseAdmin admin2 = new HBaseAdmin(conf);
 }
 }
   public static HConnection getConnection(final Configuration conf)
   throws IOException {
 HConnectionKey connectionKey = new HConnectionKey(conf);
 LOG.info("###create new connectionkey: " + connectionKey);
 synchronized (CONNECTION_INSTANCES) {
 14/07/15 18:06:08 INFO client.HConnectionManager: ###create new 
 connectionkey: HConnectionKey{properties={hbase.rpc.timeout=60, 
 hbase.client.instance.id=2, hbase.zookeeper.quorum=localhost, 
 hbase.client.pause=100, hbase.zookeeper.property.clientPort=2181, 
 zookeeper.znode.parent=/hbase, hbase.client.retries.number=35}, 
 username='tianq'}
 14/07/15 18:06:08 INFO client.HConnectionManager: ###create new connection###
 14/07/15 

[jira] [Commented] (HBASE-11400) Edit, consolidate, and update Compression and data encoding docs

2014-07-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14063208#comment-14063208
 ] 

Hadoop QA commented on HBASE-11400:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12655999/pom.xml.patch
  against trunk revision .
  ATTACHMENT ID: 12655999

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10087//console

This message is automatically generated.

 Edit, consolidate, and update Compression and data encoding docs
 

 Key: HBASE-11400
 URL: https://issues.apache.org/jira/browse/HBASE-11400
 Project: HBase
  Issue Type: Improvement
  Components: documentation
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
Priority: Minor
 Attachments: HBASE-11400-1.patch, HBASE-11400-2.patch, 
 HBASE-11400-3.patch, HBASE-11400-4.patch, HBASE-11400-5.patch, 
 HBASE-11400-6.patch, HBASE-11400-7.patch, HBASE-11400.patch, 
 data_block_diff_encoding.png, data_block_no_encoding.png, 
 data_block_prefix_encoding.png, pom.xml.patch


 Current docs are here: http://hbase.apache.org/book.html#compression.test
 It could use some editing and expansion.





[jira] [Commented] (HBASE-7336) HFileBlock.readAtOffset does not work well with multiple threads

2014-07-16 Thread Liang Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14063209#comment-14063209
 ] 

Liang Xie commented on HBASE-7336:
--

Yes, I observed a similar problem. A long time ago I had a rough idea to 
implement a multi-stream/multi-reader prototype; maybe I can share the patch 
once it's ready :)

 HFileBlock.readAtOffset does not work well with multiple threads
 

 Key: HBASE-7336
 URL: https://issues.apache.org/jira/browse/HBASE-7336
 Project: HBase
  Issue Type: Sub-task
  Components: Performance
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Critical
 Fix For: 0.94.4, 0.95.0

 Attachments: 7336-0.94.txt, 7336-0.96.txt


 HBase grinds to a halt when many threads scan along the same set of blocks 
 and neither short-circuit reads nor block caching is enabled for the dfs 
 client ... disabling the block cache makes sense on very large scans.
 It turns out that synchronizing on istream in HFileBlock.readAtOffset is the 
 culprit.
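 A toy illustration of that bottleneck (hypothetical code, not the HBase 
 source): reader threads all funneling through one synchronized stream object 
 serialize, so adding more scan threads stops helping.

```java
// Toy model of the readAtOffset pattern described above: every reader
// must hold the shared stream's monitor to seek + read, so concurrent
// readers queue up behind one another.
public class SerializedReads {
    // Stands in for the single shared FSDataInputStream.
    static final Object istream = new Object();

    static long readAtOffset(long offset) {
        synchronized (istream) { // every reader thread queues here
            // seek + read would happen while holding the lock
            return offset;       // placeholder for the bytes read
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] readers = new Thread[4];
        for (int i = 0; i < readers.length; i++) {
            final long off = i * 64L;
            readers[i] = new Thread(() -> readAtOffset(off));
            readers[i].start();
        }
        for (Thread t : readers) t.join();
        System.out.println("done");
    }
}
```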





[jira] [Commented] (HBASE-11520) Simplify offheap cache config by removing the confusing hbase.bucketcache.percentage.in.combinedcache

2014-07-16 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14063215#comment-14063215
 ] 

Anoop Sam John commented on HBASE-11520:


bq.Resizeable CBC would be great though I'd say resizing an offheap BC is 
probably low priority;
My plan is for the onheap part only.  The L1 can be resized anyway.  (In the 
case of the combined cache, it won't resize as of today.)
One more point: in CombinedCache, when both L1 and L2 are onheap, the resize 
will be applied to the L1 only.

Just one more thing came to mind now.
We do have checks on cluster start that the sum of the memstore size and 
block cache size can be at most 80%.  When BucketCache with onheap is used, 
it can take more than 80% of memory!  We must change this checking logic also.
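That start-up check might be sketched as follows. This is a hypothetical 
illustration, not the HBase source; the 0.8 cutoff comes from the comment 
above, and the class and method names are invented:

```java
// Hypothetical sketch of the cluster-start sanity check discussed above:
// with an onheap bucket cache, its heap fraction must be counted too,
// otherwise the combined caches can exceed 0.8 of the heap unnoticed.
public class HeapSanityCheck {
    static void check(float memstore, float l1BlockCache, float onheapBucketCache) {
        float total = memstore + l1BlockCache + onheapBucketCache;
        if (total > 0.8f) {
            throw new IllegalStateException(
                "memstore + block cache fractions exceed 0.8 of heap: " + total);
        }
    }

    public static void main(String[] args) {
        check(0.4f, 0.3f, 0.0f); // fine: 0.7 of heap
        try {
            check(0.4f, 0.3f, 0.2f); // onheap bucket cache pushes it to 0.9
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```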



 Simplify offheap cache config by removing the confusing 
 hbase.bucketcache.percentage.in.combinedcache
 ---

 Key: HBASE-11520
 URL: https://issues.apache.org/jira/browse/HBASE-11520
 Project: HBase
  Issue Type: Sub-task
  Components: io
Affects Versions: 0.99.0
Reporter: stack
Assignee: stack
 Fix For: 0.99.0, 2.0.0

 Attachments: 11520.txt, 11520v2.txt, 11520v3.txt, 11520v3.txt


 Remove hbase.bucketcache.percentage.in.combinedcache.  It is unnecessary 
 complication of block cache config.  Let L1 config setup be as it is whether 
 a L2 present or not, just set hfile.block.cache.size (not 
 hbase.bucketcache.size * (1.0 - 
 hbase.bucketcache.percentage.in.combinedcache)).  For L2, let 
 hbase.bucketcache.size be the actual size of the bucket cache, not 
 hbase.bucketcache.size * hbase.bucketcache.percentage.in.combinedcache.
 Attached patch removes the config. and updates docs.  Adds tests to confirm 
 configs are as expected whether a CombinedBlockCache deploy or a strict L1+L2 
 deploy.





[jira] [Commented] (HBASE-11523) JSON serialization of PE Options is broke

2014-07-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14063220#comment-14063220
 ] 

Hadoop QA commented on HBASE-11523:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12655988/11520v3.txt
  against trunk revision .
  ATTACHMENT ID: 12655988

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestReplicaWithCluster

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10085//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10085//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10085//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10085//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10085//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10085//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10085//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10085//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10085//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10085//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10085//console

This message is automatically generated.

 JSON serialization of PE Options is broke
 -

 Key: HBASE-11523
 URL: https://issues.apache.org/jira/browse/HBASE-11523
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.99.0, 0.96.3, 0.98.5

 Attachments: 11520v3.txt


 I see this when I try to run a PE MR job on master:
 {code}
 4/07/15 22:02:27 INFO mapreduce.Job: Task Id : 
 attempt_1405482830657_0004_m_15_2, Status : FAILED
 Error: org.codehaus.jackson.map.exc.UnrecognizedPropertyException: 
 Unrecognized field "blockEncoding" (Class 
 org.apache.hadoop.hbase.PerformanceEvaluation$TestOptions), not marked as 
 ignorable
  at [Source: java.io.StringReader@41c7d592; line: 1, column: 37] (through 
 reference chain: org.apache.hadoop.hbase.TestOptions[blockEncoding])
   at 
 org.codehaus.jackson.map.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:53)
   at 
 org.codehaus.jackson.map.deser.StdDeserializationContext.unknownFieldException(StdDeserializationContext.java:246)
   at 
 org.codehaus.jackson.map.deser.StdDeserializer.reportUnknownProperty(StdDeserializer.java:604)
   at 
 org.codehaus.jackson.map.deser.StdDeserializer.handleUnknownProperty(StdDeserializer.java:590)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.handleUnknownProperty(BeanDeserializer.java:689)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:514)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.deserialize(BeanDeserializer.java:350)
   at 
 org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2402)
   at 
 org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1602)
   at 
 

[jira] [Commented] (HBASE-11521) Modify pom.xml to copy the images/ and css/ directories to the right location for the Ref Guide to see them correctly

2014-07-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14063230#comment-14063230
 ] 

Hudson commented on HBASE-11521:


FAILURE: Integrated in HBase-TRUNK #5310 (See 
[https://builds.apache.org/job/HBase-TRUNK/5310/])
HBASE-11521 Modify pom.xml to copy the images/ and css/ directories to the 
right location for the Ref Guide to see them correctly (Misty Stanley-Jones) 
(stack: rev 58982e2027228676a059dff845b14ed5b4f2bb9f)
* pom.xml


 Modify pom.xml to copy the images/ and css/ directories to the right location 
 for the Ref Guide to see them correctly
 -

 Key: HBASE-11521
 URL: https://issues.apache.org/jira/browse/HBASE-11521
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
Priority: Critical
 Fix For: 2.0.0

 Attachments: HBASE-11521.patch


 Currently, images are broken in the html-single version of the Ref Guide and 
 a CSS file is missing from it too. This change fixes those issues.





[jira] [Commented] (HBASE-11520) Simplify offheap cache config by removing the confusing hbase.bucketcache.percentage.in.combinedcache

2014-07-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14063231#comment-14063231
 ] 

Hudson commented on HBASE-11520:


FAILURE: Integrated in HBase-TRUNK #5311 (See 
[https://builds.apache.org/job/HBase-TRUNK/5311/])
HBASE-11520 Simplify offheap cache config by removing the confusing 
hbase.bucketcache.percentage.in.combinedcache (stack: rev 
8a481b87b57035aca9f6ff2833104eb073e2e889)
* src/main/docbkx/book.xml
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheConfig.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/package-info.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java
Add a upgrading to 1.0 section to the upgrade part of the book; talk about 
HBASE-11520 removing hbase.bucketcache.percentage.in.combinedcache (stack: rev 
a99b71da5774500af5d72b47dcfd7a7cf2a9eb00)
* src/main/docbkx/upgrading.xml


 Simplify offheap cache config by removing the confusing 
 hbase.bucketcache.percentage.in.combinedcache
 ---

 Key: HBASE-11520
 URL: https://issues.apache.org/jira/browse/HBASE-11520
 Project: HBase
  Issue Type: Sub-task
  Components: io
Affects Versions: 0.99.0
Reporter: stack
Assignee: stack
 Fix For: 0.99.0, 2.0.0

 Attachments: 11520.txt, 11520v2.txt, 11520v3.txt, 11520v3.txt


 Remove hbase.bucketcache.percentage.in.combinedcache.  It is unnecessary 
 complication of block cache config.  Let L1 config setup be as it is whether 
 a L2 present or not, just set hfile.block.cache.size (not 
 hbase.bucketcache.size * (1.0 - 
 hbase.bucketcache.percentage.in.combinedcache)).  For L2, let 
 hbase.bucketcache.size be the actual size of the bucket cache, not 
 hbase.bucketcache.size * hbase.bucketcache.percentage.in.combinedcache.
 Attached patch removes the config. and updates docs.  Adds tests to confirm 
 configs are as expected whether a CombinedBlockCache deploy or a strict L1+L2 
 deploy.





[jira] [Updated] (HBASE-11064) Odd behaviors of TableName for empty namespace

2014-07-16 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated HBASE-11064:


Attachment: HBASE-11064.3.patch

Attached patch. Thanks.

 Odd behaviors of TableName for empty namespace
 --

 Key: HBASE-11064
 URL: https://issues.apache.org/jira/browse/HBASE-11064
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.3
Reporter: Hiroshi Ikeda
Assignee: Rekha Joshi
Priority: Trivial
 Fix For: 0.99.0, 1.0.0, 0.98.5

 Attachments: HBASE-11064.1.patch, HBASE-11064.2.patch, 
 HBASE-11064.2.patch, HBASE-11064.3.patch


 In the class TableName,
 {code}
 public static byte [] isLegalFullyQualifiedTableName(final byte[] tableName) {
 ...
 int namespaceDelimIndex = ...
 if (namespaceDelimIndex == 0 || namespaceDelimIndex == -1){
   isLegalTableQualifierName(tableName);
 } else {
 ...
 {code}
 That means, for example, giving ":a" as the argument throws an exception 
 which says invalid qualifier, instead of invalid namespace.
 Also, TableName.valueOf(String) and valueOf(byte[]) can create an instance 
 with empty namespace, which is inconsistent.
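 The edge case can be seen in a toy reimplementation of the split 
 (hypothetical code, not the TableName source): a leading ':' yields an 
 empty namespace, yet the branch quoted above routes the whole name through 
 the qualifier check, so the error message blames the wrong part.

```java
// Toy sketch of splitting "namespace:qualifier". A name like ":a" has an
// empty namespace, which is the inconsistent case described above.
public class NamespaceParseSketch {
    static String[] split(String fullName) {
        int delim = fullName.indexOf(':');
        if (delim <= 0) {
            // No delimiter (delim == -1) or an empty namespace (":a");
            // the remainder is treated as the qualifier.
            return new String[] { "", fullName.substring(delim + 1) };
        }
        return new String[] { fullName.substring(0, delim),
                              fullName.substring(delim + 1) };
    }

    public static void main(String[] args) {
        String[] parts = split(":a");
        System.out.println("namespace='" + parts[0] + "' qualifier='" + parts[1] + "'");
    }
}
```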





[jira] [Commented] (HBASE-11517) TestReplicaWithCluster turns zombie

2014-07-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14063291#comment-14063291
 ] 

Hudson commented on HBASE-11517:


FAILURE: Integrated in HBase-1.0 #47 (See 
[https://builds.apache.org/job/HBase-1.0/47/])
HBASE-11517 TestReplicaWithCluster turns zombie -- ADDS TIMEOUTS SO CAN DEBUG 
ZOMBIE (stack: rev 7175f51f0817e93fe2c46aa39937fdff73e383d6)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestReplicaWithCluster.java


 TestReplicaWithCluster turns zombie
 ---

 Key: HBASE-11517
 URL: https://issues.apache.org/jira/browse/HBASE-11517
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Attachments: 10930v4.txt, 11517.timeouts.txt, 
 HBASE-11517_v1-mantonov.patch


 Happened a few times for me fixing unrelated findbugs.  Here is example: 
 https://builds.apache.org/job/PreCommit-HBASE-Build/10065//consoleFull  See 
 how it is hanging creating a table:
 pool-1-thread-1 prio=10 tid=0x7f1714657000 nid=0x4b7f waiting on 
 condition [0x7f16e9f8]
java.lang.Thread.State: TIMED_WAITING (sleeping)
   at java.lang.Thread.sleep(Native Method)
   at 
 org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:539)
   at 
 org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:424)
   at 
 org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1185)
   at 
 org.apache.hadoop.hbase.client.TestReplicaWithCluster.testCreateDeleteTable(TestReplicaWithCluster.java:138)





[jira] [Commented] (HBASE-11520) Simplify offheap cache config by removing the confusing hbase.bucketcache.percentage.in.combinedcache

2014-07-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14063290#comment-14063290
 ] 

Hudson commented on HBASE-11520:


FAILURE: Integrated in HBase-1.0 #47 (See 
[https://builds.apache.org/job/HBase-1.0/47/])
HBASE-11520 Simplify offheap cache config by removing the confusing 
hbase.bucketcache.percentage.in.combinedcache (stack: rev 
14b331ccab8ef18b00ff7b4cf9127e22c74c4bca)
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
* src/main/docbkx/book.xml
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/package-info.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheConfig.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java


 Simplify offheap cache config by removing the confusing 
 hbase.bucketcache.percentage.in.combinedcache
 ---

 Key: HBASE-11520
 URL: https://issues.apache.org/jira/browse/HBASE-11520
 Project: HBase
  Issue Type: Sub-task
  Components: io
Affects Versions: 0.99.0
Reporter: stack
Assignee: stack
 Fix For: 0.99.0, 2.0.0

 Attachments: 11520.txt, 11520v2.txt, 11520v3.txt, 11520v3.txt


 Remove hbase.bucketcache.percentage.in.combinedcache.  It is unnecessary 
 complication of block cache config.  Let L1 config setup be as it is whether 
 a L2 present or not, just set hfile.block.cache.size (not 
 hbase.bucketcache.size * (1.0 - 
 hbase.bucketcache.percentage.in.combinedcache)).  For L2, let 
 hbase.bucketcache.size be the actual size of the bucket cache, not 
 hbase.bucketcache.size * hbase.bucketcache.percentage.in.combinedcache.
 Attached patch removes the config. and updates docs.  Adds tests to confirm 
 configs are as expected whether a CombinedBlockCache deploy or a strict L1+L2 
 deploy.





[jira] [Created] (HBASE-11524) TestReplicaWithCluster#testChangeTable and TestReplicaWithCluster#testCreateDeleteTable fail

2014-07-16 Thread Qiang Tian (JIRA)
Qiang Tian created HBASE-11524:
--

 Summary: TestReplicaWithCluster#testChangeTable and 
TestReplicaWithCluster#testCreateDeleteTable fail
 Key: HBASE-11524
 URL: https://issues.apache.org/jira/browse/HBASE-11524
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.99.0
Reporter: Qiang Tian


git bisect points to HBASE-11367.

build server run: (I did not get it in my local test)
{quote}
java.lang.Exception: test timed out after 3 milliseconds
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:539)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:424)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1185)
at 
org.apache.hadoop.hbase.client.TestReplicaWithCluster.testCreateDeleteTable(TestReplicaWithCluster.java:138)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

{quote}

suspected log messages:
{quote}
2014-07-15 23:52:09,272 INFO  2014-07-15 23:52:09,263 WARN  
[PostOpenDeployTasks:44a7fe2589d83138452640fecb7cae80] 
handler.OpenRegionHandler$PostOpenDeployTasksThread(326): Exception running 
postOpenDeployTasks; region=44a7fe2589d83138452640fecb7cae80
java.lang.NullPointerException: No connection
  at org.apache.hadoop.hbase.MetaTableAccessor.getHTable(MetaTableAccessor.java:180)
  at org.apache.hadoop.hbase.MetaTableAccessor.getMetaHTable(MetaTableAccessor.java:193)
  at org.apache.hadoop.hbase.MetaTableAccessor.putToMetaTable(MetaTableAccessor.java:941)
  at org.apache.hadoop.hbase.MetaTableAccessor.updateLocation(MetaTableAccessor.java:1300)
  at org.apache.hadoop.hbase.MetaTableAccessor.updateRegionLocation(MetaTableAccessor.java:1278)
  at org.apache.hadoop.hbase.regionserver.HRegionServer.postOpenDeployTasks(HRegionServer.java:1724)
  at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler$PostOpenDeployTasksThread.run(OpenRegionHandler.java:321)

coordination.ZkOpenRegionCoordination(231): Opening of region {ENCODED = 
44a7fe2589d83138452640fecb7cae80, NAME = 
'testCreateDeleteTable,,1405493529036_0001.44a7fe2589d83138452640fecb7cae80.', 
STARTKEY = '', ENDKEY = '', REPLICA_ID = 1} failed, transitioning from 
OPENING to FAILED_OPEN in ZK, expecting version 1

2014-07-15 23:52:09,272 INFO  [RS_OPEN_REGION-bdvm101:18352-1] 
regionserver.HRegion(1239): Closed 
testCreateDeleteTable,,1405493529036.8573bc63fc7f328cf926a28f22c0db07.
2014-07-15 23:52:09,272 DEBUG [RS_OPEN_REGION-bdvm101:23828-0] 
zookeeper.ZKAssign(805): regionserver:23828-0x1473df0c2ef0002, 
quorum=localhost:61041, baseZNode=/hbase Transitioning 
44a7fe2589d83138452640fecb7cae80 from RS_ZK_REGION_OPENING to 
RS_ZK_REGION_FAILED_OPEN

{quote}




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11524) TestReplicaWithCluster#testChangeTable and TestReplicaWithCluster#testCreateDeleteTable fail

2014-07-16 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063308#comment-14063308
 ] 

Mikhail Antonov commented on HBASE-11524:
-

HBASE-11517 - does it look related?

 TestReplicaWithCluster#testChangeTable and 
 TestReplicaWithCluster#testCreateDeleteTable fail
 

 Key: HBASE-11524
 URL: https://issues.apache.org/jira/browse/HBASE-11524
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.99.0
Reporter: Qiang Tian

 git bisect points to HBASE-11367.
 build server run: (I did not get it in my local test)
 {quote}
 java.lang.Exception: test timed out after 3 milliseconds
   at java.lang.Thread.sleep(Native Method)
   at 
 org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:539)
   at 
 org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:424)
   at 
 org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1185)
   at 
 org.apache.hadoop.hbase.client.TestReplicaWithCluster.testCreateDeleteTable(TestReplicaWithCluster.java:138)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
   at 
 org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
 {quote}
 suspected log messages:
 {quote}
 2014-07-15 23:52:09,272 INFO  2014-07-15 23:52:09,263 WARN  
 [PostOpenDeployTasks:44a7fe2589d83138452640fecb7cae80] 
 handler.OpenRegionHandler$PostOpenDeployTasksThread(326): Exception running 
 postOpenDeployTasks; region=44a7fe2589d83138452640fecb7cae80
 java.lang.NullPointerException: No connection
     at org.apache.hadoop.hbase.MetaTableAccessor.getHTable(MetaTableAccessor.java:180)
     at org.apache.hadoop.hbase.MetaTableAccessor.getMetaHTable(MetaTableAccessor.java:193)
     at org.apache.hadoop.hbase.MetaTableAccessor.putToMetaTable(MetaTableAccessor.java:941)
     at org.apache.hadoop.hbase.MetaTableAccessor.updateLocation(MetaTableAccessor.java:1300)
     at org.apache.hadoop.hbase.MetaTableAccessor.updateRegionLocation(MetaTableAccessor.java:1278)
     at org.apache.hadoop.hbase.regionserver.HRegionServer.postOpenDeployTasks(HRegionServer.java:1724)
     at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler$PostOpenDeployTasksThread.run(OpenRegionHandler.java:321)
 coordination.ZkOpenRegionCoordination(231): Opening of region {ENCODED => 
 44a7fe2589d83138452640fecb7cae80, NAME => 
 'testCreateDeleteTable,,1405493529036_0001.44a7fe2589d83138452640fecb7cae80.', 
 STARTKEY => '', ENDKEY => '', REPLICA_ID => 1} failed, transitioning from 
 OPENING to FAILED_OPEN in ZK, expecting version 1
 2014-07-15 23:52:09,272 INFO  [RS_OPEN_REGION-bdvm101:18352-1] 
 regionserver.HRegion(1239): Closed 
 testCreateDeleteTable,,1405493529036.8573bc63fc7f328cf926a28f22c0db07.
 2014-07-15 23:52:09,272 DEBUG [RS_OPEN_REGION-bdvm101:23828-0] 
 zookeeper.ZKAssign(805): regionserver:23828-0x1473df0c2ef0002, 
 quorum=localhost:61041, baseZNode=/hbase Transitioning 
 44a7fe2589d83138452640fecb7cae80 from RS_ZK_REGION_OPENING to 
 RS_ZK_REGION_FAILED_OPEN
 {quote}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11339) HBase MOB

2014-07-16 Thread Jingcheng Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingcheng Du updated HBASE-11339:
-

Attachment: (was: HBase MOB Design.pdf)

 HBase MOB
 -

 Key: HBASE-11339
 URL: https://issues.apache.org/jira/browse/HBASE-11339
 Project: HBase
  Issue Type: New Feature
  Components: regionserver, Scanners
Reporter: Jingcheng Du
Assignee: Jingcheng Du
 Attachments: HBase MOB Design.pdf


   It's quite useful to save medium-sized binary data such as images and 
  documents in Apache HBase. Unfortunately, directly saving binary MOBs (medium 
  objects) to HBase leads to worse performance because of the frequent splits 
  and compactions they cause.
   In this design, the MOB data is stored in a more efficient way that keeps 
  high write/read performance and guarantees data consistency in 
  Apache HBase.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11339) HBase MOB

2014-07-16 Thread Jingcheng Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingcheng Du updated HBASE-11339:
-

Attachment: HBase MOB Design.pdf




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11524) TestReplicaWithCluster#testChangeTable and TestReplicaWithCluster#testCreateDeleteTable fail

2014-07-16 Thread Qiang Tian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qiang Tian updated HBASE-11524:
---

Affects Version/s: (was: 0.99.0)
   2.0.0




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11524) TestReplicaWithCluster#testChangeTable and TestReplicaWithCluster#testCreateDeleteTable fail

2014-07-16 Thread Qiang Tian (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063334#comment-14063334
 ] 

Qiang Tian commented on HBASE-11524:


Thanks [~mantonov]! Yes, I think so... debugging shows the shortCircuitConnection is 
closed.
Is HBASE-11517 committed? I cannot see it..





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11524) TestReplicaWithCluster#testChangeTable and TestReplicaWithCluster#testCreateDeleteTable fail

2014-07-16 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063335#comment-14063335
 ] 

Mikhail Antonov commented on HBASE-11524:
-

[~tianq] there was an intermediate commit to help get more debug logs. See 
TestReplicaWithCluster turns zombie – ADDS TIMEOUTS SO CAN DEBUG ZOMBIE (stack: 
rev 7175f51f0817e93fe2c46aa39937fdff73e383d6)

If you have that commit fetched in your environment, you can pick the patch I 
attached to that jira and apply it on top of this one. After that, the 
test passes for me. The error seems to be that with the recent changes each server 
holds an instance of HConnection, but when we have two miniclusters in the same 
test, calling #shutdown() on either of them closes all connections in the JVM, for 
both miniclusters.

Let me know if that patch works for you.
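The failure mode described above can be sketched in miniature. This is an illustrative stand-in, not HBase's actual HConnection machinery: the class and method names below are hypothetical, but the shape is the same, a JVM-wide registry of connections means one cluster's shutdown also kills the other cluster's connection.

```java
import java.util.ArrayList;
import java.util.List;

class Connection {
    private boolean closed = false;
    void close() { closed = true; }
    boolean isClosed() { return closed; }
}

class ConnectionRegistry {
    // One static list shared by every "cluster" in this JVM.
    static final List<Connection> ALL = new ArrayList<>();

    static Connection create() {
        Connection c = new Connection();
        ALL.add(c);
        return c;
    }

    // Closes ALL connections in the process, not just the caller's.
    static void shutdownAll() {
        for (Connection c : ALL) c.close();
    }
}

public class MiniClusterShutdownDemo {
    public static void main(String[] args) {
        Connection clusterOne = ConnectionRegistry.create();
        Connection clusterTwo = ConnectionRegistry.create();
        // Shutting down "cluster one" tears down the shared registry...
        ConnectionRegistry.shutdownAll();
        // ...so cluster two's connection is also dead, and its next meta
        // update would fail with "No connection", as in the log above.
        System.out.println(clusterTwo.isClosed());
    }
}
```

With a per-cluster (non-static) registry, shutting down one minicluster would leave the other's connections intact.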




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11524) TestReplicaWithCluster#testChangeTable and TestReplicaWithCluster#testCreateDeleteTable fail

2014-07-16 Thread Qiang Tian (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063340#comment-14063340
 ] 

Qiang Tian commented on HBASE-11524:


oops, I already have it.. (there were quite a few commits yesterday; I missed 
it.. :-))
My test still fails, but compared with my git bisect run, it takes much less 
time:

{quote}
HBase - Server ................ FAILURE [01:58 min]
{quote}

It looks like without your change, the test hangs for about 15 minutes and no 
TEST-org.apache.hadoop.hbase.client.TestReplicaWithCluster.xml is created.








--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HBASE-10378) Divide HLog interface into User and Implementor specific interfaces

2014-07-16 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan reassigned HBASE-10378:
--

Assignee: ramkrishna.s.vasudevan

 Divide HLog interface into User and Implementor specific interfaces
 ---

 Key: HBASE-10378
 URL: https://issues.apache.org/jira/browse/HBASE-10378
 Project: HBase
  Issue Type: Sub-task
  Components: wal
Reporter: Himanshu Vashishtha
Assignee: ramkrishna.s.vasudevan
 Attachments: 10378-1.patch, 10378-2.patch


 HBASE-5937 introduces the HLog interface as a first step to support multiple 
 WAL implementations. This interface is a good start, but has some 
 limitations/drawbacks in its current state, such as:
 1) There is no clear distinction b/w User and Implementor APIs: it 
 provides APIs both for WAL users (append, sync, etc.) and for WAL 
 implementors (Reader/Writer interfaces, etc.). There are APIs which are very 
 much implementation-specific (getFileNum, etc.) that a user such as the 
 RegionServer shouldn't know about.
 2) There are about 14 methods in FSHLog which are not present in the HLog 
 interface but are used in several places in the unit test code. These tests 
 typecast HLog to FSHLog, which makes it very difficult to test multiple WAL 
 implementations without doing some ugly checks.
 I'd like to propose some changes to the HLog interface that would ease the 
 multi-WAL story:
 1) Have two interfaces, WAL and WALService. WAL provides APIs for 
 implementors. WALService provides APIs for users (such as the RegionServer).
 2) A skeleton implementation of the above two interfaces as the base class for 
 other WAL implementations (AbstractWAL). It provides the fields required by all 
 subclasses (fs, conf, log dir, etc.). Make a minimal set of test-only methods 
 and add this set to AbstractWAL.
 3) HLogFactory returns a WALService reference when creating a WAL instance; 
 if a user needs to access impl-specific APIs (there are unit tests which get 
 the WAL from an HRegionServer and then call impl-specific APIs), use AbstractWAL 
 type casting.
 4) Make TestHLog abstract and let all implementors provide their respective 
 test classes extending TestHLog (TestFSHLog, for example).
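The proposed split could look roughly like the sketch below. This is a hedged illustration of the idea only: the interface and method names (WALService, WAL, AbstractWAL, append, sync, getFileNum) are taken from the description above as placeholders, not from an actual patch.

```java
// User-facing WAL API: what a RegionServer-like caller needs.
interface WALService {
    long append(String regionName, byte[] entry);
    void sync();
}

// Implementor-facing API: extends the user API with internals that
// callers should not depend on (getFileNum is the example from above).
interface WAL extends WALService {
    long getFileNum();
}

// Skeleton base class holding state shared by all implementations
// (stands in for fs, conf, log dir, plus test-only accessors).
abstract class AbstractWAL implements WAL {
    protected long fileNum = 0;
    protected long sequence = 0;
    public long getFileNum() { return fileNum; }
}

class SimpleWAL extends AbstractWAL {
    public long append(String regionName, byte[] entry) { return ++sequence; }
    public void sync() { /* no-op in this sketch */ }
}

public class WalSplitDemo {
    public static void main(String[] args) {
        // Users hold the narrow WALService type; tests that need
        // impl-specific APIs downcast to AbstractWAL instead of FSHLog.
        WALService wal = new SimpleWAL();
        long seq = wal.append("region-1", new byte[0]);
        System.out.println(seq);
        System.out.println(((AbstractWAL) wal).getFileNum());
    }
}
```

The point of the split is that swapping in another WAL implementation only requires extending AbstractWAL; user code compiled against WALService never sees the change.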



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10378) Divide HLog interface into User and Implementor specific interfaces

2014-07-16 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-10378:
---

Assignee: (was: ramkrishna.s.vasudevan)




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-4593) Design and document the official procedure for posting patches, commits, commit messages, etc. to smooth process and make integration with tools easier

2014-07-16 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063440#comment-14063440
 ] 

Mike Drob commented on HBASE-4593:
--

bq. Save your changes to a patch: git diff --no-prefix origin/master > 
HBASE-XXXX.patch

This changed recently, and should be just {{git diff origin/master > 
HBASE-XXXX.patch}}.

I've seen other projects prefer naming patches {{PROJECT-XXX.patch.txt}} 
because that means the patch will open in the browser instead of prompting for a 
download.

Over on Accumulo, we suggest that folks commit locally and then use 
{{git format-patch --stdout}} instead of git diff, because it preserves the 
authorship and commit message. Of course, committers would then need to apply it 
using {{git am}} or {{git am --signoff}}. YMMV.

 Design and document the official procedure for posting patches, commits, 
 commit messages, etc. to smooth process and make integration with tools easier
 ---

 Key: HBASE-4593
 URL: https://issues.apache.org/jira/browse/HBASE-4593
 Project: HBase
  Issue Type: Task
  Components: documentation
Reporter: Jonathan Gray
Assignee: Misty Stanley-Jones

 I have been building a tool (currently called reposync) to help me keep the 
 internal FB hbase-92-based branch up-to-date with the public branches.
 Various inconsistencies in our process have made it difficult to automate a 
 lot of this stuff.
 I'd like to work with everyone to come up with the official best practices 
 and stick to them.
 I welcome all suggestions. Among the things I'd like to nail down:
 - Commit message format
 - Best practice and commit message format for multiple commits
 - Multiple commits per jira vs. one jira per commit; what are the exceptions and 
 when
 - Affects vs. Fix versions
 - Potential usage of [tags] in commit messages for things like book, scripts, 
 shell... maybe even whatever is in the components field?
 - Increased usage of JIRA tags or labels to mark exactly which repos a JIRA 
 has been committed to (potentially even internal repos? ways for a tool to 
 keep track in JIRA?)
 We also need to be stricter about some things if we want to follow Apache 
 guidelines. For example, the final version of every patch must be attached to 
 JIRA so that the author properly assigns it to Apache.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11523) JSON serialization of PE Options is broke

2014-07-16 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063576#comment-14063576
 ] 

Nick Dimiduk commented on HBASE-11523:
--

Mapred mode has worked in the past. I think this tool is becoming complex 
enough that it's time we add some unit tests.

Is DataBlockEncoding not serializable by Jackson because of its enum 
declaration? Compression and BloomType both appear to work.

The attached patch is for HBASE-11520, not this one :)

 JSON serialization of PE Options is broke
 -

 Key: HBASE-11523
 URL: https://issues.apache.org/jira/browse/HBASE-11523
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.99.0, 0.96.3, 0.98.5

 Attachments: 11520v3.txt


 I see this when I try to run a PE MR job on master:
 {code}
 4/07/15 22:02:27 INFO mapreduce.Job: Task Id : 
 attempt_1405482830657_0004_m_15_2, Status : FAILED
 Error: org.codehaus.jackson.map.exc.UnrecognizedPropertyException: 
 Unrecognized field "blockEncoding" (Class 
 org.apache.hadoop.hbase.PerformanceEvaluation$TestOptions), not marked as 
 ignorable
  at [Source: java.io.StringReader@41c7d592; line: 1, column: 37] (through 
 reference chain: org.apache.hadoop.hbase.TestOptions["blockEncoding"])
   at 
 org.codehaus.jackson.map.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:53)
   at 
 org.codehaus.jackson.map.deser.StdDeserializationContext.unknownFieldException(StdDeserializationContext.java:246)
   at 
 org.codehaus.jackson.map.deser.StdDeserializer.reportUnknownProperty(StdDeserializer.java:604)
   at 
 org.codehaus.jackson.map.deser.StdDeserializer.handleUnknownProperty(StdDeserializer.java:590)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.handleUnknownProperty(BeanDeserializer.java:689)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:514)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.deserialize(BeanDeserializer.java:350)
   at 
 org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2402)
   at 
 org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1602)
   at 
 org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:255)
   at 
 org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:210)
   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
 {code}
 The JSON serialization of PE Options does not seem to be working.  If I add a 
 setter, it does work, so unless I hear otherwise, it seems like adding a setter 
 for each PE option is the way to go (I like the new JSON serialization).





[jira] [Commented] (HBASE-11523) JSON serialization of PE Options is broke

2014-07-16 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063587#comment-14063587
 ] 

Nick Dimiduk commented on HBASE-11523:
--

It looks like bloomFilter parsing isn't working either. We need:

{noformat}
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
@@ -1615,7 +1615,7 @@ public class PerformanceEvaluation extends Configured 
implements Tool {
 continue;
   }
 
-  final String bloomFilter = "--bloomFilter";
+  final String bloomFilter = "--bloomFilter=";
   if (cmd.startsWith(bloomFilter)) {
 opts.bloomType = 
BloomType.valueOf(cmd.substring(bloomFilter.length()));
 continue;
{noformat}
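For reference, the reason the trailing '=' matters: the parser strips the prefix with cmd.substring(prefix.length()), so when the prefix omits the '=', the leading '=' stays attached to the value and BloomType.valueOf() rejects it. A tiny sketch of that parsing step (hypothetical helper, mirroring but not copying the PE code):

```java
public class PrefixParseDemo {
    // Mirrors the PE option parsing: strip a "--name=" prefix and
    // return the remainder as the option value. With the '=' omitted
    // from the prefix, the '=' is left attached to the value.
    static String parse(String cmd, String prefix) {
        return cmd.startsWith(prefix) ? cmd.substring(prefix.length()) : null;
    }

    public static void main(String[] args) {
        String cmd = "--bloomFilter=ROWCOL";
        System.out.println(parse(cmd, "--bloomFilter"));   // "=ROWCOL" -> valueOf would fail
        System.out.println(parse(cmd, "--bloomFilter="));  // "ROWCOL"  -> parses cleanly
    }
}
```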



[jira] [Commented] (HBASE-11523) JSON serialization of PE Options is broke

2014-07-16 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063596#comment-14063596
 ] 

Nick Dimiduk commented on HBASE-11523:
--

I'm seeing this on all fields.

{noformat}
UnrecognizedPropertyException: Unrecognized field "autoFlush" (Class 
org.apache.hadoop.hbase.PerformanceEvaluation$TestOptions)
{noformat}

Fix one and then you get the next, and the next (blockEncoding, bloomType, 
compression, filterAll...). Did our Jackson version change?

Reading the Jackson doc, we can manually annotate, maintain getters/setters, or 
mark all fields public. I prefer the public approach as TestOptions is really 
just a struct.
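A quick illustration of the public-field approach: field-based binding (as in Jackson 1.x's field introspection) considers only public fields when no accessors are present, which is why a struct-style options class would just work. A hypothetical reflection sketch of that visibility rule (the field names below are illustrative, not the real option set):

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;

public class FieldVisibilityDemo {
    // Hypothetical struct-style options class, as proposed for TestOptions.
    public static class Options {
        public boolean autoFlush = true;   // visible to field-based binding
        public int rows = 1024;            // visible
        private String secret = "hidden";  // invisible without an accessor
    }

    // Names of the fields a public-field binder would consider.
    static List<String> bindableFields(Class<?> c) {
        List<String> names = new ArrayList<>();
        for (Field f : c.getDeclaredFields()) {
            if (Modifier.isPublic(f.getModifiers())) {
                names.add(f.getName());
            }
        }
        return names;
    }

    public static void main(String[] args) {
        // Prints only the public fields, e.g. autoFlush and rows.
        System.out.println(bindableFields(Options.class));
    }
}
```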



[jira] [Updated] (HBASE-11523) JSON serialization of PE Options is broke

2014-07-16 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-11523:
-

Attachment: HBASE-11523.00.nd.patch

Here's a patch that makes fields public, allowing Jackson's bean-based 
serialization magic to just work. Also fixed the previously mentioned 
{{--bloomFilter}} issue. Confirmed on a local-mode rig with

{noformat}
$ ./bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=1 
--compress=GZ --bloomFilter=ROWCOL --blockEncoding=PREF
$ ./bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=1 
--compress=GZ --bloomFilter=ROWCOL --blockEncoding=PREFIX_TREE 
--writeToWAL=false --sampleRate=0.1 randomRead 2
{noformat}



[jira] [Commented] (HBASE-11523) JSON serialization of PE Options is broke

2014-07-16 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063614#comment-14063614
 ] 

Nick Dimiduk commented on HBASE-11523:
--

--mapred is true by default, always has been as far as I know.



[jira] [Commented] (HBASE-11523) JSON serialization of PE Options is broke

2014-07-16 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063618#comment-14063618
 ] 

Nick Dimiduk commented on HBASE-11523:
--

I'm +1 for either patch. I'd prefer to not maintain accessors, but whatever you 
folks feel is best.

[~apurtell] Have you spun those 0.98 bits yet? It would be a shame to ship with 
broken PerfEval, but I won't block a release over it.



[jira] [Updated] (HBASE-11523) JSON serialization of PE Options is broke

2014-07-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11523:
--

Attachment: 11523v2.txt

Patch that includes your bloomFilter fix, Nick.

On the DataBlockEncoding, I don't think it is because it is an enum. The issue 
is that there are no setters on the test options, so Jackson can't set them.

I didn't 'choose' mapred.  I just launched a job this way:

{code}
HADOOP_CLASSPATH=/home/stack/conf_hbase:`./hbase-2.0.0-SNAPSHOT/bin/hbase 
classpath`  ./hadoop-2.4.1-SNAPSHOT/bin/hadoop --config /home/stack/conf_hadoop 
org.apache.hadoop.hbase.PerformanceEvaluation --valueSize=8192 --valueRandom 
--rows=10 sequentialWrite 1000
{code}





[jira] [Updated] (HBASE-11517) TestReplicaWithCluster turns zombie

2014-07-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11517:
--

Status: Patch Available  (was: Open)

Let's see what hadoopqa looks like.  Changing the test seems good to me.  Thanks, 
Mikhail.

 TestReplicaWithCluster turns zombie
 ---

 Key: HBASE-11517
 URL: https://issues.apache.org/jira/browse/HBASE-11517
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Attachments: 10930v4.txt, 11517.timeouts.txt, 
 HBASE-11517_v1-mantonov.patch


 Happened a few times for me fixing unrelated findbugs.  Here is example: 
 https://builds.apache.org/job/PreCommit-HBASE-Build/10065//consoleFull  See 
 how it is hanging creating a table:
 pool-1-thread-1 prio=10 tid=0x7f1714657000 nid=0x4b7f waiting on 
 condition [0x7f16e9f8]
java.lang.Thread.State: TIMED_WAITING (sleeping)
   at java.lang.Thread.sleep(Native Method)
   at 
 org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:539)
   at 
 org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:424)
   at 
 org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1185)
   at 
 org.apache.hadoop.hbase.client.TestReplicaWithCluster.testCreateDeleteTable(TestReplicaWithCluster.java:138)





[jira] [Commented] (HBASE-11523) JSON serialization of PE Options is broke

2014-07-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063629#comment-14063629
 ] 

stack commented on HBASE-11523:
---

[~ndimiduk] Yeah, that is what I see.  We could annotate or make fields public; I 
thought setters were more the way to go.

Jackson doesn't seem to have changed in a while:

65019f0a » Michael Stack 2012-05-17 HBASE-6034 Upgrade Hadoop dependencies 
(pom.xml line 929): <jackson.version>1.8.8</jackson.version>

Will commit my patch since I 'tested' it?



[jira] [Commented] (HBASE-7336) HFileBlock.readAtOffset does not work well with multiple threads

2014-07-16 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063630#comment-14063630
 ] 

Lars Hofhansl commented on HBASE-7336:
--

Local mode is not the same. It does not actually run HDFS, just a pretty ad hoc 
DFS wrapper.
Tests are only valid when run against real HDFS - even when HDFS is in 
single-node mode.

With actual HDFS I have not observed your numbers, [~vrodionov].


 HFileBlock.readAtOffset does not work well with multiple threads
 

 Key: HBASE-7336
 URL: https://issues.apache.org/jira/browse/HBASE-7336
 Project: HBase
  Issue Type: Sub-task
  Components: Performance
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Critical
 Fix For: 0.94.4, 0.95.0

 Attachments: 7336-0.94.txt, 7336-0.96.txt


 HBase grinds to a halt when many threads scan along the same set of blocks 
 and neither read short circuit is nor block caching is enabled for the dfs 
 client ... disabling the block cache makes sense on very large scans.
 It turns out that synchronizing in istream in HFileBlock.readAtOffset is the 
 culprit.





[jira] [Commented] (HBASE-11517) TestReplicaWithCluster turns zombie

2014-07-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063631#comment-14063631
 ] 

Hadoop QA commented on HBASE-11517:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12655995/HBASE-11517_v1-mantonov.patch
  against trunk revision .
  ATTACHMENT ID: 12655995

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10091//console

This message is automatically generated.



[jira] [Created] (HBASE-11525) Region server holding in region states is out of sync with meta

2014-07-16 Thread Jimmy Xiang (JIRA)
Jimmy Xiang created HBASE-11525:
---

 Summary: Region server holding in region states is out of sync 
with meta
 Key: HBASE-11525
 URL: https://issues.apache.org/jira/browse/HBASE-11525
 Project: HBase
  Issue Type: Bug
  Components: Region Assignment
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang


In RegionStates, we remove a region from the region list a region server hosts 
once the region is offline. However, in meta, we do this only when the region 
is assigned to a new region server. We should keep them in sync so that we can 
claim they are consistent.





[jira] [Commented] (HBASE-11523) JSON serialization of PE Options is broke

2014-07-16 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063635#comment-14063635
 ] 

Nick Dimiduk commented on HBASE-11523:
--

Sure thing. Fire away.

 JSON serialization of PE Options is broke
 -

 Key: HBASE-11523
 URL: https://issues.apache.org/jira/browse/HBASE-11523
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.99.0, 0.96.3, 0.98.5

 Attachments: 11520v3.txt, 11523v2.txt, HBASE-11523.00.nd.patch


 I see this when I try to run a PE MR job on master:
 {code}
 4/07/15 22:02:27 INFO mapreduce.Job: Task Id : 
 attempt_1405482830657_0004_m_15_2, Status : FAILED
 Error: org.codehaus.jackson.map.exc.UnrecognizedPropertyException: 
 Unrecognized field blockEncoding (Class 
 org.apache.hadoop.hbase.PerformanceEvaluation$TestOptions), not marked as 
 ignorable
  at [Source: java.io.StringReader@41c7d592; line: 1, column: 37] (through 
 reference chain: org.apache.hadoop.hbase.TestOptions[blockEncoding])
   at 
 org.codehaus.jackson.map.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:53)
   at 
 org.codehaus.jackson.map.deser.StdDeserializationContext.unknownFieldException(StdDeserializationContext.java:246)
   at 
 org.codehaus.jackson.map.deser.StdDeserializer.reportUnknownProperty(StdDeserializer.java:604)
   at 
 org.codehaus.jackson.map.deser.StdDeserializer.handleUnknownProperty(StdDeserializer.java:590)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.handleUnknownProperty(BeanDeserializer.java:689)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:514)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.deserialize(BeanDeserializer.java:350)
   at 
 org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2402)
   at 
 org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1602)
   at 
 org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:255)
   at 
 org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:210)
   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
 {code}
 The JSON serialization of PE Options does not seem to be working.  If I add a 
 setter, it does work, so unless I hear otherwise, it seems adding a setter 
 for each PE option is the way to go (I like the new JSON serialization).





[jira] [Updated] (HBASE-11523) JSON serialization of PE Options is broke

2014-07-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11523:
--

   Resolution: Fixed
Fix Version/s: (was: 0.98.5)
   (was: 0.96.3)
   2.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to master and branch-1 (bugfix).

I don't see the getters in 0.98 [~ndimiduk].  The TestOptions class data members are 
public, so my guess is the JSON serialization works in 0.98 and 0.96.  To be 
tested.

 JSON serialization of PE Options is broke
 -

 Key: HBASE-11523
 URL: https://issues.apache.org/jira/browse/HBASE-11523
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.99.0, 2.0.0

 Attachments: 11520v3.txt, 11523v2.txt, HBASE-11523.00.nd.patch


 I see this when I try to run a PE MR job on master:
 {code}
 4/07/15 22:02:27 INFO mapreduce.Job: Task Id : 
 attempt_1405482830657_0004_m_15_2, Status : FAILED
 Error: org.codehaus.jackson.map.exc.UnrecognizedPropertyException: 
 Unrecognized field blockEncoding (Class 
 org.apache.hadoop.hbase.PerformanceEvaluation$TestOptions), not marked as 
 ignorable
  at [Source: java.io.StringReader@41c7d592; line: 1, column: 37] (through 
 reference chain: org.apache.hadoop.hbase.TestOptions[blockEncoding])
   at 
 org.codehaus.jackson.map.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:53)
   at 
 org.codehaus.jackson.map.deser.StdDeserializationContext.unknownFieldException(StdDeserializationContext.java:246)
   at 
 org.codehaus.jackson.map.deser.StdDeserializer.reportUnknownProperty(StdDeserializer.java:604)
   at 
 org.codehaus.jackson.map.deser.StdDeserializer.handleUnknownProperty(StdDeserializer.java:590)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.handleUnknownProperty(BeanDeserializer.java:689)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:514)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.deserialize(BeanDeserializer.java:350)
   at 
 org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2402)
   at 
 org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1602)
   at 
 org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:255)
   at 
 org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:210)
   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
 {code}
 The JSON serialization of PE Options does not seem to be working.  If I add a 
 setter, it does work, so unless I hear otherwise, it seems adding a setter 
 for each PE option is the way to go (I like the new JSON serialization).





[jira] [Commented] (HBASE-11523) JSON serialization of PE Options is broke

2014-07-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063645#comment-14063645
 ] 

stack commented on HBASE-11523:
---

So the change making TestOptions data members non-public is what broke MR.  Let me 
see...

 JSON serialization of PE Options is broke
 -

 Key: HBASE-11523
 URL: https://issues.apache.org/jira/browse/HBASE-11523
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.99.0, 2.0.0

 Attachments: 11520v3.txt, 11523v2.txt, HBASE-11523.00.nd.patch


 I see this when I try to run a PE MR job on master:
 {code}
 4/07/15 22:02:27 INFO mapreduce.Job: Task Id : 
 attempt_1405482830657_0004_m_15_2, Status : FAILED
 Error: org.codehaus.jackson.map.exc.UnrecognizedPropertyException: 
 Unrecognized field blockEncoding (Class 
 org.apache.hadoop.hbase.PerformanceEvaluation$TestOptions), not marked as 
 ignorable
  at [Source: java.io.StringReader@41c7d592; line: 1, column: 37] (through 
 reference chain: org.apache.hadoop.hbase.TestOptions[blockEncoding])
   at 
 org.codehaus.jackson.map.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:53)
   at 
 org.codehaus.jackson.map.deser.StdDeserializationContext.unknownFieldException(StdDeserializationContext.java:246)
   at 
 org.codehaus.jackson.map.deser.StdDeserializer.reportUnknownProperty(StdDeserializer.java:604)
   at 
 org.codehaus.jackson.map.deser.StdDeserializer.handleUnknownProperty(StdDeserializer.java:590)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.handleUnknownProperty(BeanDeserializer.java:689)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:514)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.deserialize(BeanDeserializer.java:350)
   at 
 org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2402)
   at 
 org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1602)
   at 
 org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:255)
   at 
 org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:210)
   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
 {code}
 The JSON serialization of PE Options does not seem to be working.  If I add a 
 setter, it does work, so unless I hear otherwise, it seems adding a setter 
 for each PE option is the way to go (I like the new JSON serialization).





[jira] [Commented] (HBASE-7336) HFileBlock.readAtOffset does not work well with multiple threads

2014-07-16 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063652#comment-14063652
 ] 

Vladimir Rodionov commented on HBASE-7336:
--

The HBase mini cluster runs HDFS, and the code path for accessing files/data is the 
same as in real cluster mode, I think. At least, PackageSender/PackageReceiver(s) and 
all the IPC stuff for the NN and DN are present in the stack traces.

 HFileBlock.readAtOffset does not work well with multiple threads
 

 Key: HBASE-7336
 URL: https://issues.apache.org/jira/browse/HBASE-7336
 Project: HBase
  Issue Type: Sub-task
  Components: Performance
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Critical
 Fix For: 0.94.4, 0.95.0

 Attachments: 7336-0.94.txt, 7336-0.96.txt


 HBase grinds to a halt when many threads scan along the same set of blocks 
 and neither short-circuit reads nor block caching is enabled for the dfs 
 client ... disabling the block cache makes sense on very large scans.
 It turns out that synchronizing on istream in HFileBlock.readAtOffset is the 
 culprit.
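
The locking pattern named as the culprit can be sketched in plain Java. This is a hypothetical illustration, not HBase or HDFS code: `PositionalReadDemo`, `lockedRead`, and `positionalRead` are made-up names. The point is that a shared stream cursor forces every reader to serialize on one lock through seek-then-read, whereas a positional read (pread) carries its own offset and needs no shared lock:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Hypothetical illustration of the contention pattern described above: a
// shared cursor that must be guarded by one lock, versus FileChannel's
// positional read, which is independent of the channel's position.
public class PositionalReadDemo {
  private final FileChannel channel;

  public PositionalReadDemo(FileChannel channel) { this.channel = channel; }

  // Pattern 1: seek-then-read on shared state; every reader queues on this lock.
  public synchronized int lockedRead(byte[] dst, long offset) throws IOException {
    channel.position(offset);                  // shared cursor: must be guarded
    return channel.read(ByteBuffer.wrap(dst));
  }

  // Pattern 2: positional read (pread); no shared cursor, so no lock needed.
  public int positionalRead(byte[] dst, long offset) throws IOException {
    return channel.read(ByteBuffer.wrap(dst), offset);
  }

  public static void main(String[] args) throws IOException {
    Path p = Files.createTempFile("pread-demo", ".bin");
    Files.write(p, "hello world".getBytes());
    try (FileChannel ch = FileChannel.open(p, StandardOpenOption.READ)) {
      PositionalReadDemo demo = new PositionalReadDemo(ch);
      byte[] buf = new byte[5];
      demo.positionalRead(buf, 6);             // reads "world" without locking
      System.out.println(new String(buf));
    }
    Files.delete(p);
  }
}
```

Under this sketch, many concurrent scanners using pattern 2 proceed independently, while pattern 1 serializes them all on a single monitor.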





[jira] [Commented] (HBASE-7336) HFileBlock.readAtOffset does not work well with multiple threads

2014-07-16 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063662#comment-14063662
 ] 

Vladimir Rodionov commented on HBASE-7336:
--

[~xieliang007], I am working on efficient support for multiple scanners over a single 
store file as well. We need this for several purposes:

* Improve single task granularity during query execution (currently, it is a 
single region)
* Improve scanner performance during compaction(s)
* Improve compaction performance during normal operations.

The patch will be submitted soon, once all testing is done.

 HFileBlock.readAtOffset does not work well with multiple threads
 

 Key: HBASE-7336
 URL: https://issues.apache.org/jira/browse/HBASE-7336
 Project: HBase
  Issue Type: Sub-task
  Components: Performance
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Critical
 Fix For: 0.94.4, 0.95.0

 Attachments: 7336-0.94.txt, 7336-0.96.txt


 HBase grinds to a halt when many threads scan along the same set of blocks 
 and neither short-circuit reads nor block caching is enabled for the dfs 
 client ... disabling the block cache makes sense on very large scans.
 It turns out that synchronizing on istream in HFileBlock.readAtOffset is the 
 culprit.





[jira] [Commented] (HBASE-11523) JSON serialization of PE Options is broke

2014-07-16 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063667#comment-14063667
 ] 

Nick Dimiduk commented on HBASE-11523:
--

I believe you're correct. Checking out 
{{96681210a7bd63a66ee2f70419b2e39360ac2f50~1}}, it works.

 JSON serialization of PE Options is broke
 -

 Key: HBASE-11523
 URL: https://issues.apache.org/jira/browse/HBASE-11523
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.99.0, 2.0.0

 Attachments: 11520v3.txt, 11523v2.txt, HBASE-11523.00.nd.patch


 I see this when I try to run a PE MR job on master:
 {code}
 4/07/15 22:02:27 INFO mapreduce.Job: Task Id : 
 attempt_1405482830657_0004_m_15_2, Status : FAILED
 Error: org.codehaus.jackson.map.exc.UnrecognizedPropertyException: 
 Unrecognized field blockEncoding (Class 
 org.apache.hadoop.hbase.PerformanceEvaluation$TestOptions), not marked as 
 ignorable
  at [Source: java.io.StringReader@41c7d592; line: 1, column: 37] (through 
 reference chain: org.apache.hadoop.hbase.TestOptions[blockEncoding])
   at 
 org.codehaus.jackson.map.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:53)
   at 
 org.codehaus.jackson.map.deser.StdDeserializationContext.unknownFieldException(StdDeserializationContext.java:246)
   at 
 org.codehaus.jackson.map.deser.StdDeserializer.reportUnknownProperty(StdDeserializer.java:604)
   at 
 org.codehaus.jackson.map.deser.StdDeserializer.handleUnknownProperty(StdDeserializer.java:590)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.handleUnknownProperty(BeanDeserializer.java:689)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:514)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.deserialize(BeanDeserializer.java:350)
   at 
 org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2402)
   at 
 org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1602)
   at 
 org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:255)
   at 
 org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:210)
   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
 {code}
 The JSON serialization of PE Options does not seem to be working.  If I add a 
 setter, it does work, so unless I hear otherwise, it seems adding a setter 
 for each PE option is the way to go (I like the new JSON serialization).





[jira] [Commented] (HBASE-11523) JSON serialization of PE Options is broke

2014-07-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063675#comment-14063675
 ] 

Hadoop QA commented on HBASE-11523:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12656058/HBASE-11523.00.nd.patch
  against trunk revision .
  ATTACHMENT ID: 12656058

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.http.TestHttpServerLifecycle.testStoppingTwiceServerIsAllowed(TestHttpServerLifecycle.java:127)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10092//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10092//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10092//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10092//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10092//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10092//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10092//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10092//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10092//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10092//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10092//console

This message is automatically generated.

 JSON serialization of PE Options is broke
 -

 Key: HBASE-11523
 URL: https://issues.apache.org/jira/browse/HBASE-11523
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.99.0, 2.0.0

 Attachments: 11520v3.txt, 11523v2.txt, HBASE-11523.00.nd.patch


 I see this when I try to run a PE MR job on master:
 {code}
 4/07/15 22:02:27 INFO mapreduce.Job: Task Id : 
 attempt_1405482830657_0004_m_15_2, Status : FAILED
 Error: org.codehaus.jackson.map.exc.UnrecognizedPropertyException: 
 Unrecognized field blockEncoding (Class 
 org.apache.hadoop.hbase.PerformanceEvaluation$TestOptions), not marked as 
 ignorable
  at [Source: java.io.StringReader@41c7d592; line: 1, column: 37] (through 
 reference chain: org.apache.hadoop.hbase.TestOptions[blockEncoding])
   at 
 org.codehaus.jackson.map.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:53)
   at 
 org.codehaus.jackson.map.deser.StdDeserializationContext.unknownFieldException(StdDeserializationContext.java:246)
   at 
 org.codehaus.jackson.map.deser.StdDeserializer.reportUnknownProperty(StdDeserializer.java:604)
   at 
 org.codehaus.jackson.map.deser.StdDeserializer.handleUnknownProperty(StdDeserializer.java:590)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.handleUnknownProperty(BeanDeserializer.java:689)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:514)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.deserialize(BeanDeserializer.java:350)
   at 
 

[jira] [Commented] (HBASE-11523) JSON serialization of PE Options is broke

2014-07-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063687#comment-14063687
 ] 

Andrew Purtell commented on HBASE-11523:


Glad to hear there's no issue with 0.98, but I will confirm this when 
evaluating the RC.

 JSON serialization of PE Options is broke
 -

 Key: HBASE-11523
 URL: https://issues.apache.org/jira/browse/HBASE-11523
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.99.0, 2.0.0

 Attachments: 11520v3.txt, 11523v2.txt, HBASE-11523.00.nd.patch


 I see this when I try to run a PE MR job on master:
 {code}
 4/07/15 22:02:27 INFO mapreduce.Job: Task Id : 
 attempt_1405482830657_0004_m_15_2, Status : FAILED
 Error: org.codehaus.jackson.map.exc.UnrecognizedPropertyException: 
 Unrecognized field blockEncoding (Class 
 org.apache.hadoop.hbase.PerformanceEvaluation$TestOptions), not marked as 
 ignorable
  at [Source: java.io.StringReader@41c7d592; line: 1, column: 37] (through 
 reference chain: org.apache.hadoop.hbase.TestOptions[blockEncoding])
   at 
 org.codehaus.jackson.map.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:53)
   at 
 org.codehaus.jackson.map.deser.StdDeserializationContext.unknownFieldException(StdDeserializationContext.java:246)
   at 
 org.codehaus.jackson.map.deser.StdDeserializer.reportUnknownProperty(StdDeserializer.java:604)
   at 
 org.codehaus.jackson.map.deser.StdDeserializer.handleUnknownProperty(StdDeserializer.java:590)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.handleUnknownProperty(BeanDeserializer.java:689)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:514)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.deserialize(BeanDeserializer.java:350)
   at 
 org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2402)
   at 
 org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1602)
   at 
 org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:255)
   at 
 org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:210)
   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
 {code}
 The JSON serialization of PE Options does not seem to be working.  If I add a 
 setter, it does work, so unless I hear otherwise, it seems adding a setter 
 for each PE option is the way to go (I like the new JSON serialization).





[jira] [Commented] (HBASE-11523) JSON serialization of PE Options is broke

2014-07-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063702#comment-14063702
 ] 

Hadoop QA commented on HBASE-11523:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12656058/HBASE-11523.00.nd.patch
  against trunk revision .
  ATTACHMENT ID: 12656058

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestReplicaWithCluster

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10090//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10090//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10090//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10090//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10090//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10090//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10090//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10090//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10090//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10090//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10090//console

This message is automatically generated.

 JSON serialization of PE Options is broke
 -

 Key: HBASE-11523
 URL: https://issues.apache.org/jira/browse/HBASE-11523
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.99.0, 2.0.0

 Attachments: 11520v3.txt, 11523v2.txt, HBASE-11523.00.nd.patch


 I see this when I try to run a PE MR job on master:
 {code}
 4/07/15 22:02:27 INFO mapreduce.Job: Task Id : 
 attempt_1405482830657_0004_m_15_2, Status : FAILED
 Error: org.codehaus.jackson.map.exc.UnrecognizedPropertyException: 
 Unrecognized field blockEncoding (Class 
 org.apache.hadoop.hbase.PerformanceEvaluation$TestOptions), not marked as 
 ignorable
  at [Source: java.io.StringReader@41c7d592; line: 1, column: 37] (through 
 reference chain: org.apache.hadoop.hbase.TestOptions[blockEncoding])
   at 
 org.codehaus.jackson.map.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:53)
   at 
 org.codehaus.jackson.map.deser.StdDeserializationContext.unknownFieldException(StdDeserializationContext.java:246)
   at 
 org.codehaus.jackson.map.deser.StdDeserializer.reportUnknownProperty(StdDeserializer.java:604)
   at 
 org.codehaus.jackson.map.deser.StdDeserializer.handleUnknownProperty(StdDeserializer.java:590)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.handleUnknownProperty(BeanDeserializer.java:689)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:514)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.deserialize(BeanDeserializer.java:350)
   at 
 org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2402)
   at 
 org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1602)
   at 
 

[jira] [Updated] (HBASE-10398) HBase book updates for Replication after HBASE-10322

2014-07-16 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-10398:
---

   Resolution: Fixed
Fix Version/s: 2.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks for the patch Misty.

 HBase book updates for Replication after HBASE-10322
 

 Key: HBASE-10398
 URL: https://issues.apache.org/jira/browse/HBASE-10398
 Project: HBase
  Issue Type: Task
  Components: documentation
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Misty Stanley-Jones
 Fix For: 2.0.0

 Attachments: HBASE-10398-1.patch, HBASE-10398-2.patch, 
 HBASE-10398.patch








[jira] [Created] (HBASE-11526) mvn site goal fails in trunk

2014-07-16 Thread Ted Yu (JIRA)
Ted Yu created HBASE-11526:
--

 Summary: mvn site goal fails in trunk
 Key: HBASE-11526
 URL: https://issues.apache.org/jira/browse/HBASE-11526
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu


I got the following error when running command 'mvn compile site -DskipTests'
{code}
[ERROR] Failed to execute goal 
com.agilejava.docbkx:docbkx-maven-plugin:2.0.15:generate-html (multipage) on 
project hbase: Error executing ant tasks: 
/homes/hortonzy/trunk/src/main/docbkx/images not found. - [Help 1]
{code}





[jira] [Updated] (HBASE-11526) mvn site goal fails in trunk

2014-07-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-11526:
---

Description: 
I got the following error when running command 'mvn compile site -DskipTests'
{code}
[ERROR] Failed to execute goal 
com.agilejava.docbkx:docbkx-maven-plugin:2.0.15:generate-html (multipage) on 
project hbase: Error executing ant tasks: 
/homes/zy/trunk/src/main/docbkx/images not found. - [Help 1]
{code}

  was:
I got the following error when running command 'mvn compile site -DskipTests'
{code}
[ERROR] Failed to execute goal 
com.agilejava.docbkx:docbkx-maven-plugin:2.0.15:generate-html (multipage) on 
project hbase: Error executing ant tasks: 
/homes/hortonzy/trunk/src/main/docbkx/images not found. - [Help 1]
{code}


 mvn site goal fails in trunk
 

 Key: HBASE-11526
 URL: https://issues.apache.org/jira/browse/HBASE-11526
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu

 I got the following error when running command 'mvn compile site -DskipTests'
 {code}
 [ERROR] Failed to execute goal 
 com.agilejava.docbkx:docbkx-maven-plugin:2.0.15:generate-html (multipage) on 
 project hbase: Error executing ant tasks: 
 /homes/zy/trunk/src/main/docbkx/images not found. - [Help 1]
 {code}





[jira] [Commented] (HBASE-11523) JSON serialization of PE Options is broke

2014-07-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063765#comment-14063765
 ] 

stack commented on HBASE-11523:
---

This 'trivial change in PE' of mine, changing TestOptions data members from public 
to package private, is what broke the serialization.

{code}
96681210 » stack 2014-06-26 HBASE-11415 [PE] Dump config before running test
boolean nomapred = false;
boolean filterAll = false;
int startRow = 0;
float size = 1.0f;
int perClientRunRows = DEFAULT_ROWS_PER_GB;
int numClientThreads = 1;
int totalRows = DEFAULT_ROWS_PER_GB;
float sampleRate = 1.0f;
double traceRate = 0.0;
String tableName = TABLE_NAME;
boolean flushCommits = true;
boolean writeToWAL = true;
boolean autoFlush = false;
boolean oneCon = false;
boolean useTags = false;
int noOfTags = 1;
boolean reportLatency = false;
int multiGet = 0;
{code}

I did it so I could get insight into the configs PE was running with.  I wanted 
to use JSON to print out the configs so I didn't have to keep amending toString 
as new configs were added.  I added getters so I could control what the JSON 
emitted (private members would be ignored).
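The visibility rule at play can be demonstrated with plain reflection; the class below is an illustrative stand-in, not the real TestOptions. A default bean mapper such as Jackson auto-detects only public fields (or public getters), which is the same view `getFields()` gives:

```java
import java.lang.reflect.Field;

// Stand-in for the PerformanceEvaluation.TestOptions situation:
// getFields() returns only public members -- the view a field-based
// serializer works from -- while getDeclaredFields() returns everything.
public class VisibilityDemo {
    static class Options {
        public int startRow = 0;   // public: visible to getFields()
        int totalRows = 1048576;   // package private: invisible to getFields()
    }

    public static void main(String[] args) {
        for (Field f : Options.class.getFields()) {
            System.out.println("serializable: " + f.getName());
        }
        for (Field f : Options.class.getDeclaredFields()) {
            System.out.println("declared: " + f.getName());
        }
    }
}
```

Only startRow shows up in the public view; a public getter (e.g. a hypothetical getTotalRows()) would expose totalRows again as a bean property without making the field itself public.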

Sorry about that.  Thanks for the help [~ndimiduk]

I suppose we could go back to all public, but then we can't hide TestOptions 
data members, which is probably not going to be needed anyway.

Yeah, PE is getting unwieldy.  Start thinking about breaking it out?  A module 
of its own with tests...



 JSON serialization of PE Options is broke
 -

 Key: HBASE-11523
 URL: https://issues.apache.org/jira/browse/HBASE-11523
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.99.0, 2.0.0

 Attachments: 11520v3.txt, 11523v2.txt, HBASE-11523.00.nd.patch


 I see this when I try to run a PE MR job on master:
 {code}
 4/07/15 22:02:27 INFO mapreduce.Job: Task Id : 
 attempt_1405482830657_0004_m_15_2, Status : FAILED
 Error: org.codehaus.jackson.map.exc.UnrecognizedPropertyException: 
 Unrecognized field blockEncoding (Class 
 org.apache.hadoop.hbase.PerformanceEvaluation$TestOptions), not marked as 
 ignorable
  at [Source: java.io.StringReader@41c7d592; line: 1, column: 37] (through 
 reference chain: org.apache.hadoop.hbase.TestOptions[blockEncoding])
   at 
 org.codehaus.jackson.map.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:53)
   at 
 org.codehaus.jackson.map.deser.StdDeserializationContext.unknownFieldException(StdDeserializationContext.java:246)
   at 
 org.codehaus.jackson.map.deser.StdDeserializer.reportUnknownProperty(StdDeserializer.java:604)
   at 
 org.codehaus.jackson.map.deser.StdDeserializer.handleUnknownProperty(StdDeserializer.java:590)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.handleUnknownProperty(BeanDeserializer.java:689)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:514)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.deserialize(BeanDeserializer.java:350)
   at 
 org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2402)
   at 
 org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1602)
   at 
 org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:255)
   at 
 org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:210)
   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
 {code}
 The JSON serialization of PE Options does not seem to be working.  If I add a 
 setter it does work, so unless I hear otherwise, it seems like adding a setter 
 for each PE option is the way to go (I like the new JSON serialization).





[jira] [Updated] (HBASE-11516) Track time spent in executing coprocessors in each region.

2014-07-16 Thread Srikanth Srungarapu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Srungarapu updated HBASE-11516:


Attachment: HBASE-11516_v2.patch

Adding a new patch, which implements coprocessor timing as part of the metrics 
framework.

 Track time spent in executing coprocessors in each region.
 --

 Key: HBASE-11516
 URL: https://issues.apache.org/jira/browse/HBASE-11516
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors
Affects Versions: 0.98.4
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor
 Fix For: 0.98.5

 Attachments: HBASE-11516.patch, HBASE-11516_v2.patch


 Currently, the time spent in executing coprocessors is not yet being tracked. 
 This feature can be handy for debugging coprocessors in case of any trouble.





[jira] [Updated] (HBASE-11517) TestReplicaWithCluster turns zombie

2014-07-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11517:
--

Attachment: 11517v2.txt

Attaching Mikhail's patch at -p1 instead so it applies.

 TestReplicaWithCluster turns zombie
 ---

 Key: HBASE-11517
 URL: https://issues.apache.org/jira/browse/HBASE-11517
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Attachments: 10930v4.txt, 11517.timeouts.txt, 11517v2.txt, 
 HBASE-11517_v1-mantonov.patch


 Happened a few times for me fixing unrelated findbugs.  Here is example: 
 https://builds.apache.org/job/PreCommit-HBASE-Build/10065//consoleFull  See 
 how it is hanging creating a table:
 pool-1-thread-1 prio=10 tid=0x7f1714657000 nid=0x4b7f waiting on 
 condition [0x7f16e9f8]
java.lang.Thread.State: TIMED_WAITING (sleeping)
   at java.lang.Thread.sleep(Native Method)
   at 
 org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:539)
   at 
 org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:424)
   at 
 org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1185)
   at 
 org.apache.hadoop.hbase.client.TestReplicaWithCluster.testCreateDeleteTable(TestReplicaWithCluster.java:138)





[jira] [Resolved] (HBASE-4907) Port 89-fb changes to trunk

2014-07-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-4907.
--

Resolution: Implemented

Marked resolved as (mostly) implemented.

 Port 89-fb changes to trunk
 ---

 Key: HBASE-4907
 URL: https://issues.apache.org/jira/browse/HBASE-4907
 Project: HBase
  Issue Type: Improvement
  Components: Client, regionserver
Reporter: Nicolas Spiegelberg
Assignee: Nicolas Spiegelberg
Priority: Blocker
  Labels: noob

 A super task to track the progress of porting 89-fb functionality and fixes to 
 trunk.  Note that these tasks are focused on RegionServer functionality.  
 89-specific master functionality doesn't have a time frame for porting.





[jira] [Resolved] (HBASE-8900) TestRSKilledWhenMasterInitializing.testCorrectnessWhenMasterFailOver is flakey

2014-07-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-8900.
--

Resolution: Won't Fix

Resolving.  If someone wants to fix up the test that'd be good, but I'm closing 
out this old issue.

 TestRSKilledWhenMasterInitializing.testCorrectnessWhenMasterFailOver is flakey
 --

 Key: HBASE-8900
 URL: https://issues.apache.org/jira/browse/HBASE-8900
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: stack
Assignee: ramkrishna.s.vasudevan
 Attachments: 8900.txt


 Failed here:
 https://builds.apache.org/job/hbase-0.95-on-hadoop2/169/testReport/junit/org.apache.hadoop.hbase.regionserver/TestRSKilledWhenMasterInitializing/testCorrectnessWhenMasterFailOver/
 and
 http://54.241.6.143/job/HBase-0.95-Hadoop-2/579/org.apache.hbase$hbase-server/testReport/junit/org.apache.hadoop.hbase.regionserver/TestRSKilledWhenMasterInitializing/org_apache_hadoop_hbase_regionserver_TestRSKilledWhenMasterInitializing/
 {code}
 java.lang.Exception: test timed out after 12 milliseconds
   at java.lang.Thread.sleep(Native Method)
   at 
 org.apache.hadoop.hbase.zookeeper.ZKAssign.blockUntilNoRIT(ZKAssign.java:1002)
   at 
 org.apache.hadoop.hbase.regionserver.TestRSKilledWhenMasterInitializing.testCorrectnessWhenMasterFailOver(TestRSKilledWhenMasterInitializing.java:177)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
   at 
 org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
 {code}
 and with this:
 {code}
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.regionserver.TestRSKilledWhenMasterInitializing.tearDownAfterClass(TestRSKilledWhenMasterInitializing.java:83)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
   at org.junit.runners.Suite.runChild(Suite.java:127)
   at org.junit.runners.Suite.runChild(Suite.java:26)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
   at java.lang.Thread.run(Thread.java:662)
 {code}





[jira] [Commented] (HBASE-11523) JSON serialization of PE Options is broke

2014-07-16 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063784#comment-14063784
 ] 

Nick Dimiduk commented on HBASE-11523:
--

Might as well just print all options for now.  We don't need all options to be 
serialized for mapred, but I figured it wouldn't hurt anything.  If you want to 
limit that stuff, it's better to go back to specifying accessors, but make sure 
everything needed by clients in mapred is included.

Is a module overkill?  We talked a while back about moving it out of test and 
packaging it in hbase-server.  I think some basic smoke-tests are in order at 
least.

 JSON serialization of PE Options is broke
 -

 Key: HBASE-11523
 URL: https://issues.apache.org/jira/browse/HBASE-11523
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.99.0, 2.0.0

 Attachments: 11520v3.txt, 11523v2.txt, HBASE-11523.00.nd.patch







[jira] [Resolved] (HBASE-3475) MetaScanner and MetaReader are very similar; purge one

2014-07-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-3475.
--

Resolution: Invalid

Resolving as invalid.  MetaReader does not exist anymore.  Enis raises a good 
point that there is still work to do so there is only one place to go when 
scanning meta, but that can be done in a new issue.

 MetaScanner and MetaReader are very similar; purge one
 --

 Key: HBASE-3475
 URL: https://issues.apache.org/jira/browse/HBASE-3475
 Project: HBase
  Issue Type: Improvement
Reporter: stack
Assignee: Andrey Stepachev
  Labels: noob

 MetaScanner in the client package is a little more involved, but MetaReader 
 does something similar.  Both allow you to specify a Visitor on .META. and 
 -ROOT-.  We should dump one of them.





[jira] [Resolved] (HBASE-11526) mvn site goal fails in trunk

2014-07-16 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-11526.


Resolution: Duplicate

We already have JIRAs open for broken images.

 mvn site goal fails in trunk
 

 Key: HBASE-11526
 URL: https://issues.apache.org/jira/browse/HBASE-11526
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu

 I got the following error when running command 'mvn compile site -DskipTests'
 {code}
 [ERROR] Failed to execute goal 
 com.agilejava.docbkx:docbkx-maven-plugin:2.0.15:generate-html (multipage) on 
 project hbase: Error executing ant tasks: 
 /homes/zy/trunk/src/main/docbkx/images not found. - [Help 1]
 {code}





[jira] [Created] (HBASE-11527) Cluster free memory limit check should consider L2 block cache size also when L2 cache is onheap.

2014-07-16 Thread Anoop Sam John (JIRA)
Anoop Sam John created HBASE-11527:
--

 Summary: Cluster free memory limit check should consider L2 block 
cache size also when L2 cache is onheap.
 Key: HBASE-11527
 URL: https://issues.apache.org/jira/browse/HBASE-11527
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.99.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.99.0, 2.0.0








[jira] [Updated] (HBASE-11521) Modify pom.xml to copy the images/ and css/ directories to the right location for the Ref Guide to see them correctly

2014-07-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11521:
--

Attachment: 11521.amendment.txt

Fix for breakage in site since this patch was committed last night.
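For reference, the kind of copy step involved would look something like the following maven-resources-plugin sketch; the execution id, phase, and output paths here are illustrative, not necessarily what the committed amendment uses:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-resources-plugin</artifactId>
  <executions>
    <execution>
      <id>copy-docbkx-images</id> <!-- illustrative id -->
      <phase>pre-site</phase>
      <goals><goal>copy-resources</goal></goals>
      <configuration>
        <!-- put images/ and css/ next to the generated single-page guide -->
        <outputDirectory>${basedir}/target/docbkx/html-single</outputDirectory>
        <resources>
          <resource>
            <directory>src/main/docbkx/images</directory>
            <targetPath>images</targetPath>
          </resource>
          <resource>
            <directory>src/main/docbkx/css</directory>
            <targetPath>css</targetPath>
          </resource>
        </resources>
      </configuration>
    </execution>
  </executions>
</plugin>
```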

 Modify pom.xml to copy the images/ and css/ directories to the right location 
 for the Ref Guide to see them correctly
 -

 Key: HBASE-11521
 URL: https://issues.apache.org/jira/browse/HBASE-11521
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
Priority: Critical
 Fix For: 2.0.0

 Attachments: 11521.amendment.txt, HBASE-11521.patch


 Currently, images are broken in the html-single version of the Ref Guide and 
 a CSS file is missing from it too. This change fixes those issues.





[jira] [Commented] (HBASE-11517) TestReplicaWithCluster turns zombie

2014-07-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063822#comment-14063822
 ] 

Hadoop QA commented on HBASE-11517:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12656081/11517v2.txt
  against trunk revision .
  ATTACHMENT ID: 12656081

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100.

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10093//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10093//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10093//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10093//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10093//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10093//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10093//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10093//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10093//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10093//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10093//console

This message is automatically generated.

 TestReplicaWithCluster turns zombie
 ---

 Key: HBASE-11517
 URL: https://issues.apache.org/jira/browse/HBASE-11517
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Attachments: 10930v4.txt, 11517.timeouts.txt, 11517v2.txt, 
 HBASE-11517_v1-mantonov.patch







[jira] [Commented] (HBASE-11521) Modify pom.xml to copy the images/ and css/ directories to the right location for the Ref Guide to see them correctly

2014-07-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063826#comment-14063826
 ] 

stack commented on HBASE-11521:
---

Committed amendment.

 Modify pom.xml to copy the images/ and css/ directories to the right location 
 for the Ref Guide to see them correctly
 -

 Key: HBASE-11521
 URL: https://issues.apache.org/jira/browse/HBASE-11521
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
Priority: Critical
 Fix For: 2.0.0

 Attachments: 11521.amendment.txt, HBASE-11521.patch


 Currently, images are broken in the html-single version of the Ref Guide and 
 a CSS file is missing from it too. This change fixes those issues.





[jira] [Resolved] (HBASE-899) Support for specifying a timestamp and numVersions on a per-column basis

2014-07-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-899.
-

Resolution: Won't Fix

Resolving. Old.

 Support for specifying a timestamp and numVersions on a per-column basis
 

 Key: HBASE-899
 URL: https://issues.apache.org/jira/browse/HBASE-899
 Project: HBase
  Issue Type: New Feature
Reporter: Doğacan Güney

 This is just an idea, and it may be better to wait until after the planned API 
 changes.  But I think it would be useful to support fetching different 
 timestamps and version counts for different columns.
 Example:
 If a row has 2 columns, col1: and col2:, I want to be able to ask for 
 (during scan or read time, doesn't matter) 2 versions of col1: (maybe even 
 between timestamps t1 and t2) but only 1 version of col2:.  This would be 
 especially handy if during an MR job you have to read 2 versions of a small 
 column, but do not want the overhead of reading 2 versions of every other 
 column too.
 (Also, the mechanism is already there.  I mean, making the changes to support 
 a per-column timestamp/numVersions is ridiculously easy :)





[jira] [Commented] (HBASE-11523) JSON serialization of PE Options is broke

2014-07-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063832#comment-14063832
 ] 

Hudson commented on HBASE-11523:


FAILURE: Integrated in HBase-TRUNK #5312 (See 
[https://builds.apache.org/job/HBase-TRUNK/5312/])
HBASE-11523 JSON serialization of PE Options is broke (stack: rev 
9d8ad39a4c09fb369d93fec34fca511efbcfcf09)
* hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java


 JSON serialization of PE Options is broke
 -

 Key: HBASE-11523
 URL: https://issues.apache.org/jira/browse/HBASE-11523
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.99.0, 2.0.0

 Attachments: 11520v3.txt, 11523v2.txt, HBASE-11523.00.nd.patch







[jira] [Updated] (HBASE-11517) TestReplicaWithCluster turns zombie

2014-07-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11517:
--

Attachment: 11517v2.txt

Retrying.  Also trying locally; some issue running hbase-server tests.

 TestReplicaWithCluster turns zombie
 ---

 Key: HBASE-11517
 URL: https://issues.apache.org/jira/browse/HBASE-11517
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Attachments: 10930v4.txt, 11517.timeouts.txt, 11517v2.txt, 
 11517v2.txt, HBASE-11517_v1-mantonov.patch







[jira] [Updated] (HBASE-11512) Write region open/close events to WAL

2014-07-16 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-11512:
--

Status: Patch Available  (was: Open)

 Write region open/close events to WAL
 -

 Key: HBASE-11512
 URL: https://issues.apache.org/jira/browse/HBASE-11512
 Project: HBase
  Issue Type: Sub-task
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Attachments: hbase-11512_v1.patch


 Similar to writing flush events to WAL (HBASE-11511) and compaction events to 
 WAL (HBASE-2231), we should write region open and close events to WAL. 
 This is especially important for secondary region replicas, since we can use 
 this information to pick up primary regions' files from secondary replicas.
 However, we may need this for regular inter cluster replication as well, see 
 issues HBASE-10343 and HBASE-9465. 
 A design doc for secondary replica replication can be found at HBASE-11183. 





[jira] [Resolved] (HBASE-962) Isolation level support in HBase transactions

2014-07-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-962.
-

Resolution: Later

Resolving as 'later'.  Issue is stale.  We won't do different isolation levels 
in HBase, at least not in the foreseeable future, but it may be done on top of 
HBase.

 Isolation level support in HBase transactions
 -

 Key: HBASE-962
 URL: https://issues.apache.org/jira/browse/HBASE-962
 Project: HBase
  Issue Type: Improvement
  Components: io, IPC/RPC, regionserver
Reporter: Jaeyun Noh

 There will be some cases where we don't need a strict transaction isolation 
 level.  Just a ReadCommitted isolation level will be sufficient in some 
 applications (non-repeatable reads and phantoms possible).
 And it would be great to be able to specify which isolation level will be used 
 in an HBase transaction via an enumeration.
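A minimal sketch of the enumeration-based setting being asked for; the class and method names here are hypothetical stand-ins, not HBase's actual client API:

```java
// Hypothetical sketch of a per-operation isolation level setting.
public class IsolationDemo {
    enum IsolationLevel { READ_COMMITTED, READ_UNCOMMITTED }

    // Stand-in for an HBase Scan/Get carrying the requested level.
    static class Scan {
        private IsolationLevel level = IsolationLevel.READ_COMMITTED; // default
        Scan setIsolationLevel(IsolationLevel l) { this.level = l; return this; }
        IsolationLevel getIsolationLevel() { return level; }
    }

    public static void main(String[] args) {
        Scan scan = new Scan().setIsolationLevel(IsolationLevel.READ_UNCOMMITTED);
        System.out.println(scan.getIsolationLevel());
    }
}
```

The server side would then relax read guarantees (e.g. skip MVCC waits) when it sees READ_UNCOMMITTED on the request.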





[jira] [Resolved] (HBASE-1085) More variety in Performance Evaluation

2014-07-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1085.
--

Resolution: Invalid

Resolving.  Mostly implemented: we can have random cell sizes now (though not 
random column counts as this asks for), and we've also implemented multiple 
clients.

 More variety in Performance Evaluation
 --

 Key: HBASE-1085
 URL: https://issues.apache.org/jira/browse/HBASE-1085
 Project: HBase
  Issue Type: Task
  Components: Performance
Reporter: stack

 + Use smaller cells (small cells are a common use case; can bring on OOMEs)
 + In this report, http://www.cs.duke.edu/~kcd/hadoop/kcd-hadoop-report.pdf, a 
 test is done testing performance as column families increase.  Could do same 
 increasing columns inside a family.
 + We currently do one client and one server; add multiple clients to one 
 server





[jira] [Resolved] (HBASE-1120) Can't 'close_region' the catalog '-ROOT-,,0' region

2014-07-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1120.
--

Resolution: Invalid

Resolving invalid.  No -ROOT- any more.

 Can't 'close_region' the catalog '-ROOT-,,0' region
 ---

 Key: HBASE-1120
 URL: https://issues.apache.org/jira/browse/HBASE-1120
 Project: HBase
  Issue Type: Bug
Reporter: stack

 {code}
 hbase(main):003:0 close_region '-ROOT-,,0'
 NativeException: java.io.IOException: java.io.IOException: 
 java.lang.NullPointerException
  at org.apache.hadoop.hbase.master.HMaster.modifyTable(HMaster.java:830)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
  at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
  at java.lang.reflect.Method.invoke(Method.java:597)
 ...
 {code}
 Presumption is that it's a region that can be found in .META. whereas I have 
 a cluster where master and regionserver don't agree on -ROOT- and want to 
 close '-ROOT-,,0' but get the NPE when I try.





[jira] [Commented] (HBASE-11293) Master and Region servers fail to start when hbase.master.ipc.address=0.0.0.0, hbase.regionserver.ipc.address=0.0.0.0 and Kerberos is enabled

2014-07-16 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063872#comment-14063872
 ] 

Enis Soztutar commented on HBASE-11293:
---

[~devaraj] should we go forward with this patch? 

 Master and Region servers fail to start when 
 hbase.master.ipc.address=0.0.0.0, hbase.regionserver.ipc.address=0.0.0.0 and 
 Kerberos is enabled
 -

 Key: HBASE-11293
 URL: https://issues.apache.org/jira/browse/HBASE-11293
 Project: HBase
  Issue Type: Bug
Reporter: Michael Harp
Assignee: Devaraj Das
 Attachments: 11293-1.txt


 Setting 
 {code}
 hbase.master.ipc.address=0.0.0.0
 hbase.regionserver.ipc.address=0.0.0.0
 {code}
 causes the _HOST substitution in hbase/_h...@example.com to result in 
 hbase/0:0:0:0:0:0:0:0...@example.com which in turn causes kerberos 
 authentication to fail.
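The failure mode can be sketched in a few lines (hypothetical Python mimicking the _HOST expansion; substitute_host is an illustrative stand-in, not Hadoop's actual SecurityUtil code):

```python
import socket

def substitute_host(principal: str, bind_address: str) -> str:
    """Sketch of Hadoop-style _HOST substitution in a Kerberos principal."""
    if "_HOST" not in principal:
        return principal
    # A wildcard bind address has no single canonical hostname, so the raw
    # address leaks into the principal and the KDC rejects it.
    host = bind_address if bind_address == "0.0.0.0" else socket.getfqdn(bind_address)
    return principal.replace("_HOST", host)

bad = substitute_host("hbase/_HOST@EXAMPLE.COM", "0.0.0.0")
# bad is "hbase/0.0.0.0@EXAMPLE.COM" -- an invalid service principal,
# analogous to the hbase/0:0:0:0:0:0:0:0...@EXAMPLE.COM seen above
```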





[jira] [Commented] (HBASE-11293) Master and Region servers fail to start when hbase.master.ipc.address=0.0.0.0, hbase.regionserver.ipc.address=0.0.0.0 and Kerberos is enabled

2014-07-16 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063876#comment-14063876
 ] 

Devaraj Das commented on HBASE-11293:
-

Yes, will get to it this week.






[jira] [Commented] (HBASE-11482) Optimize HBase TableInput/OutputFormats for exposing tables and snapshots as Spark RDDs

2014-07-16 Thread Ted Malaska (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063880#comment-14063880
 ] 

Ted Malaska commented on HBASE-11482:
-

Can I take this? I'm working on Spark-2447 and I've got a first cut at 
interacting with HBase at https://github.com/tmalaska/SparkOnHBase

Next on my list was to add support for tables and even bulk loads. Adding 
snapshots shouldn't be that hard.

Let me know
Thanks

 Optimize HBase TableInput/OutputFormats for exposing tables and snapshots as 
 Spark RDDs
 ---

 Key: HBASE-11482
 URL: https://issues.apache.org/jira/browse/HBASE-11482
 Project: HBase
  Issue Type: New Feature
Reporter: Andrew Purtell

 A core concept of Apache Spark is the resilient distributed dataset (RDD), a 
 fault-tolerant collection of elements that can be operated on in parallel. 
 One can create RDDs referencing a dataset in any external storage system 
 offering a Hadoop InputFormat, like HBase's TableInputFormat and 
 TableSnapshotInputFormat. 
 Ensure the integration is reasonable and provides good performance. 
 Add the ability to save RDDs back to HBase with a {{saveAsHBaseTable}} 
 action, implicitly creating necessary schema on demand.
 Add support for {{filter}} transformations that push predicates down to the 
 server as HBase filters. 
 Consider supporting conversions between Scala and Java types and HBase data 
 using the HBase types library.
 Consider an option to lazily and automatically produce a snapshot only when 
 needed, in a coordinated way. (Concurrently executing workers may want to 
 materialize a table snapshot RDD at the same time.)





[jira] [Updated] (HBASE-11482) Optimize HBase TableInput/OutputFormats for exposing tables and snapshots as Spark RDDs

2014-07-16 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11482:
---

Assignee: Ted Malaska






[jira] [Commented] (HBASE-11482) Optimize HBase TableInput/OutputFormats for exposing tables and snapshots as Spark RDDs

2014-07-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063887#comment-14063887
 ] 

Andrew Purtell commented on HBASE-11482:


Go for it [~ted.m]]






[jira] [Commented] (HBASE-11523) JSON serialization of PE Options is broke

2014-07-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063891#comment-14063891
 ] 

Hudson commented on HBASE-11523:


FAILURE: Integrated in HBase-1.0 #48 (See 
[https://builds.apache.org/job/HBase-1.0/48/])
HBASE-11523 JSON serialization of PE Options is broke (stack: rev 
f2b6a6b2e0285d425d33533f98bf5db7fe8a6b9c)
* hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java


 JSON serialization of PE Options is broke
 -

 Key: HBASE-11523
 URL: https://issues.apache.org/jira/browse/HBASE-11523
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.99.0, 2.0.0

 Attachments: 11520v3.txt, 11523v2.txt, HBASE-11523.00.nd.patch


 I see this when I try to run a PE MR job on master:
 {code}
 14/07/15 22:02:27 INFO mapreduce.Job: Task Id : 
 attempt_1405482830657_0004_m_15_2, Status : FAILED
 Error: org.codehaus.jackson.map.exc.UnrecognizedPropertyException: 
 Unrecognized field blockEncoding (Class 
 org.apache.hadoop.hbase.PerformanceEvaluation$TestOptions), not marked as 
 ignorable
  at [Source: java.io.StringReader@41c7d592; line: 1, column: 37] (through 
 reference chain: org.apache.hadoop.hbase.TestOptions[blockEncoding])
   at 
 org.codehaus.jackson.map.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:53)
   at 
 org.codehaus.jackson.map.deser.StdDeserializationContext.unknownFieldException(StdDeserializationContext.java:246)
   at 
 org.codehaus.jackson.map.deser.StdDeserializer.reportUnknownProperty(StdDeserializer.java:604)
   at 
 org.codehaus.jackson.map.deser.StdDeserializer.handleUnknownProperty(StdDeserializer.java:590)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.handleUnknownProperty(BeanDeserializer.java:689)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:514)
   at 
 org.codehaus.jackson.map.deser.BeanDeserializer.deserialize(BeanDeserializer.java:350)
   at 
 org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2402)
   at 
 org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1602)
   at 
 org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:255)
   at 
 org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:210)
   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
 {code}
 The JSON serialization of PE Options does not seem to be working. If I add a 
 setter, it does work, so unless I hear otherwise it seems like adding a setter 
 for each PE option is the way to go (I like the new JSON serialization).
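The failure can be modeled with a toy bean mapper (illustrative Python, not Jackson itself; class and method names here are assumptions). It binds a JSON field only when the target class exposes a matching setter, which is why adding setters makes deserialization work:

```python
import json

class UnrecognizedPropertyError(Exception):
    pass

class ToyBeanMapper:
    """Toy model of Jackson-style bean binding: a JSON field can only be
    bound if the target class exposes a matching setter."""
    @staticmethod
    def read_value(payload, cls):
        obj = cls()
        for field, value in json.loads(payload).items():
            setter = getattr(obj, "set_" + field, None)
            if setter is None:
                raise UnrecognizedPropertyError("Unrecognized field %r" % field)
            setter(value)
        return obj

class TestOptionsNoSetter:
    """Like TestOptions before the fix: the field has no setter, so
    binding fails just as in the MR task stack trace above."""
    pass

class TestOptionsWithSetter:
    def __init__(self):
        self.block_encoding = "NONE"
    def set_blockEncoding(self, value):  # the one-line fix per option
        self.block_encoding = value

opts = ToyBeanMapper.read_value('{"blockEncoding": "FAST_DIFF"}',
                                TestOptionsWithSetter)
# opts.block_encoding is now "FAST_DIFF"; feeding the same payload to
# TestOptionsNoSetter raises UnrecognizedPropertyError instead
```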





[jira] [Comment Edited] (HBASE-11482) Optimize HBase TableInput/OutputFormats for exposing tables and snapshots as Spark RDDs

2014-07-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063887#comment-14063887
 ] 

Andrew Purtell edited comment on HBASE-11482 at 7/16/14 6:44 PM:
-

Go for it [~ted.m]


was (Author: apurtell):
Go for it [~ted.m]]






[jira] [Commented] (HBASE-11482) Optimize HBase TableInput/OutputFormats for exposing tables and snapshots as Spark RDDs

2014-07-16 Thread Ted Malaska (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063894#comment-14063894
 ] 

Ted Malaska commented on HBASE-11482:
-

Wow thanks.  I will do my best.

Thanks






[jira] [Resolved] (HBASE-1214) HRegionServer fails to stop if hdfs client it tries to recover a block

2014-07-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1214.
--

Resolution: Cannot Reproduce

Resolving as can't repro.  Old issue.

 HRegionServer fails to stop if hdfs client it tries to recover a block 
 ---

 Key: HBASE-1214
 URL: https://issues.apache.org/jira/browse/HBASE-1214
 Project: HBase
  Issue Type: Bug
  Components: master, regionserver
Affects Versions: 0.19.0
 Environment: 4 Node cluster with poor hardware: (DN+RS / DN+RS / RS / 
 MA+NN)
 1 Gb memory each 
 Ubuntu linux
 Java 6
Reporter: Jean-Adrien
Priority: Critical

 One of my region servers fell into a long GC pause that made it unresponsive 
 for about 10 minutes.
 As I can see in the log, its DFSClient component was sending a file to HDFS; 
 the send timed out when it recovered from GC:
 h5. 1. Region server log after recovering from GC
 {noformat}
 2009-02-21 01:22:26,454 WARN org.apache.hadoop.hdfs.DFSClient: 
 DFSOutputStream ResponseProcessor exception  
 for block blk_7545556036225037274_1820952java.io.IOException: Bad 
 response 1 for block 
 blk_7545556036225037274_1820952 from datanode 192.168.1.10:50010
 at 
 org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2342)
 {noformat}
 h5. 2. corresponding error in receiving datanode
 {noformat}
 2009-02-21 01:23:56,608 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Exception in receiveBlock for block 
 blk_7545556036225037274_1820952 java.io.EOFException: while trying to 
 read 65557 bytes
 [...]
 2009-02-21 01:23:59,076 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
 PacketResponder 0 for block 
 blk_7545556036225037274_1820952 Interrupted.
 2009-02-21 01:24:01,484 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
 PacketResponder 0 for block 
 blk_7545556036225037274_1820952 terminating
 2009-02-21 01:24:01,485 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
 writeBlock 
blk_7545556036225037274_1820952 received exception java.io.EOFException: 
 while trying to read 65557 bytes
 {noformat}
 Since the region server missed its lease to report to the master, it has to 
 close all its regions before recovering, and reopen them when the master asks.
 From this point, it tries to _recover_ this block, as seen in the region 
 server log:
 h5. 3. Region server log in an endless loop to recover
 {noformat}
 009-02-21 01:22:29,327 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery 
 for block 
 blk_7545556036225037274_1820952 bad datanode[1] 192.168.1.10:50010
 2009-02-21 01:22:29,327 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery 
 for block 
 blk_7545556036225037274_1820952 in pipeline 192.168.1.13:50010, 
192.168.1.10:50010: bad datanode 192.168.1.10:50010
 2009-02-21 01:22:29,689 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery 
 for block 
 blk_7545556036225037274_1820952 failed  because recovery from primary 
 datanode 192.168.1.13:50010 failed 2 times. Will retry...
 {noformat}
 All datanodes fail this _recover_ request with the following exception:
 h5. 4.
 {noformat}
 2009-02-21 01:24:18,650 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
 1 on 50020, call 
recoverBlock(blk_7545556036225037274_1820952, false, 
 [Lorg.apache.hadoop.hdfs.protocol.DatanodeInfo;@311c4f) from 
192.168.1.13:56968: error: org.apache.hadoop.ipc.RemoteException: 
 java.io.IOException: 
blk_7545556036225037274_1820952 is already commited, storedBlock == null.
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:4536)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:402)
 at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:452)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:892)
 org.apache.hadoop.ipc.RemoteException: java.io.IOException: 
 blk_7545556036225037274_1820952 is already commited, 
storedBlock == null.
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:4536)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:402)
 at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 

[jira] [Resolved] (HBASE-1469) TestMurmurHash data4.length - 4 failing

2014-07-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1469.
--

Resolution: Won't Fix

Resolving old, stale issue.

 TestMurmurHash data4.length - 4 failing
 ---

 Key: HBASE-1469
 URL: https://issues.apache.org/jira/browse/HBASE-1469
 Project: HBase
  Issue Type: Bug
Reporter: stack

 The data4 case in the murmur hash test is failing. I commented it out for now:
 {code}
 // TODO: This if failing St.Ack
 // assertEquals(-969272041, hash.hash(data4, 4, data4.length-4, seed));
 {code}





[jira] [Resolved] (HBASE-1603) MR failed RetriesExhaustedException: Trying to contact region server Some server for region TestTable...

2014-07-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1603.
--

Resolution: Cannot Reproduce

 MR failed RetriesExhaustedException: Trying to contact region server Some 
 server for region TestTable...
 --

 Key: HBASE-1603
 URL: https://issues.apache.org/jira/browse/HBASE-1603
 Project: HBase
  Issue Type: Bug
Reporter: stack
 Attachments: debugging-v4.patch


 Here is the master.  Region 
 TestTable,\x00\x00\x06\x05\x01\x05\x07\x09\x08\x00,1246462685358 was split at 
 16:11:42,865.  My MR job failed at 18:12:26,462 with this:
 {code}
 2009-07-01 18:12:26,462 WARN org.apache.hadoop.mapred.TaskTracker: Error 
 running child
 org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to contact 
 region server Some server for region TestTable,�,1246464670313, row '��   
 ', but failed after 10 attempts.
 Exceptions:
 ...
 {code}
 Why after ten attempts did the client not find the region?
 {code}
 2009-07-01 16:11:42,865 [IPC Server handler 2 on 60001] INFO 
 org.apache.hadoop.hbase.master.ServerManager: Received MSG_REPORT_SPLIT: 
 TestTable,\x00\x00\x06\x05\x01\x05\x07\x09\x08\x00,1246462685358: Daughters; 
 TestTable,\x00\x00\x06\x05\x01\x05\x07\x09\x08\x00,1246464670313, 
 TestTable,\x00\x01\x01\x04\x04\x07\x02\x08\x08\x03,1246464670313 from 
 aa0-000-15.u.powerset.com,60020,1246461673026; 1 of 3
 2009-07-01 16:11:42,866 [IPC Server handler 2 on 60001] INFO 
 org.apache.hadoop.hbase.master.RegionManager: Assigning region 
 TestTable,\x00\x00\x06\x05\x01\x05\x07\x09\x08\x00,1246464670313 to 
 aa0-000-15.u.powerset.com,60020,1246461673026
 2009-07-01 16:11:42,866 [IPC Server handler 2 on 60001] INFO 
 org.apache.hadoop.hbase.master.RegionManager: Assigning region 
 TestTable,\x00\x01\x01\x04\x04\x07\x02\x08\x08\x03,1246464670313 to 
 aa0-000-15.u.powerset.com,60020,1246461673026
 2009-07-01 16:11:45,905 [IPC Server handler 8 on 60001] INFO 
 org.apache.hadoop.hbase.master.ServerManager: Received 
 MSG_REPORT_PROCESS_OPEN: 
 TestTable,\x00\x01\x01\x04\x04\x07\x02\x08\x08\x03,1246464670313 from 
 aa0-000-15.u.powerset.com,60020,1246461673026; 1 of 3
 2009-07-01 16:11:45,905 [IPC Server handler 8 on 60001] INFO 
 org.apache.hadoop.hbase.master.ServerManager: Received MSG_REPORT_OPEN: 
 TestTable,\x00\x00\x06\x05\x01\x05\x07\x09\x08\x00,1246464670313 from 
 aa0-000-15.u.powerset.com,60020,1246461673026; 2 of 3
 2009-07-01 16:11:45,906 [IPC Server handler 8 on 60001] INFO 
 org.apache.hadoop.hbase.master.ServerManager: Received MSG_REPORT_OPEN: 
 TestTable,\x00\x01\x01\x04\x04\x07\x02\x08\x08\x03,1246464670313 from 
 aa0-000-15.u.powerset.com,60020,1246461673026; 3 of 3
 2009-07-01 16:11:45,906 [HMaster] INFO 
 org.apache.hadoop.hbase.master.RegionServerOperation: 
 TestTable,\x00\x00\x06\x05\x01\x05\x07\x09\x08\x00,1246464670313 open on 
 208.76.44.142:60020
 2009-07-01 16:11:45,906 [HMaster] INFO 
 org.apache.hadoop.hbase.master.RegionServerOperation: updating row 
 TestTable,\x00\x00\x06\x05\x01\x05\x07\x09\x08\x00,1246464670313 in region 
 .META.,,1 with startcode 1246461673026 and server 208.76.44.142:60020
 2009-07-01 16:11:45,908 [HMaster] INFO 
 org.apache.hadoop.hbase.master.RegionServerOperation: 
 TestTable,\x00\x01\x01\x04\x04\x07\x02\x08\x08\x03,1246464670313 open on 
 208.76.44.142:60020
 2009-07-01 16:11:45,908 [HMaster] INFO 
 org.apache.hadoop.hbase.master.RegionServerOperation: updating row 
 TestTable,\x00\x01\x01\x04\x04\x07\x02\x08\x08\x03,1246464670313 in region 
 .META.,,1 with startcode 1246461673026 and server 208.76.44.142:60020
 2009-07-01 17:46:42,670 [IPC Server handler 0 on 60001] INFO 
 org.apache.hadoop.hbase.master.ServerManager: Received MSG_REPORT_SPLIT: 
 TestTable,\x00\x00\x06\x05\x01\x05\x07\x09\x08\x00,1246464670313: Daughters; 
 TestTable,\x00\x00\x06\x05\x01\x05\x07\x09\x08\x00,1246470379467, 
 TestTable,\x00\x00\x08\x04\x05\x07\x02\x05\x04\x08,1246470379467 from 
 aa0-000-15.u.powerset.com,60020,1246461673026; 5 of 7
 {code}
 Here is over on the regionserver:
 {code}
 2009-07-01 16:11:42,865 [IPC Server handler 2 on 60001] INFO 
 org.apache.hadoop.hbase.master.ServerManager: Received MSG_REPORT_SPLIT: 
 TestTable,\x00\x00\x06\x05\x01\x05\x07\x09\x08\x00,1246462685358: Daughters; 
 TestTable,\x00\x00\x06\x05\x01\x05\x07\x09\x08\x00,1246464670313, 
 TestTable,\x00\x01\x01\x04\x04\x07\x02\x08\x08\x03,1246464670313 from 
 aa0-000-15.u.powerset.com,60020,1246461673026; 1 of 3
 2009-07-01 16:11:42,866 [IPC Server handler 2 on 60001] INFO 
 org.apache.hadoop.hbase.master.RegionManager: Assigning region 
 TestTable,\x00\x00\x06\x05\x01\x05\x07\x09\x08\x00,1246464670313 to 
 aa0-000-15.u.powerset.com,60020,1246461673026
 2009-07-01 16:11:42,866 [IPC Server handler 2 on 

[jira] [Resolved] (HBASE-1775) Not all public methods have complete (or correct) javadoc

2014-07-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1775.
--

Resolution: Won't Fix

Resolving this unspecific issue; we are addressing it piecemeal elsewhere.

 Not all public methods have complete (or correct) javadoc
 -

 Key: HBASE-1775
 URL: https://issues.apache.org/jira/browse/HBASE-1775
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.20.0
Reporter: Jim Kellerman
Assignee: Jim Kellerman

 In http://wiki.apache.org/hadoop/Hbase/HowToContribute it clearly states:
 {code}
  o All public classes and methods should have informative Javadoc comments. 
 {code}
 (it should probably read correct and informative)
 I have seen missing input parameters and return values. Paying attention to 
 details like this will also lessen the number of questions seen on the 
 hbase-user mailing list.





[jira] [Resolved] (HBASE-1736) If RS can't talk to master, pause; more importantly, don't split (Currently we do and splits are lost and table is wounded)

2014-07-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1736.
--

Resolution: Invalid

All is different now, 5 years later.

 If RS can't talk to master, pause; more importantly, don't split (Currently 
 we do and splits are lost and table is wounded)
 ---

 Key: HBASE-1736
 URL: https://issues.apache.org/jira/browse/HBASE-1736
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
Priority: Critical

 What I saw was the master shutting itself down because it had lost its ZK 
 lease. Fine. The RS, though, doesn't look like it can deal with this situation.
 We'll see stuff like this:
 {code}
 ...failed on connection exception: java.net.ConnectException: Connection 
 refused
 at 
 org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:744)
 at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:722)
 at org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:328)
 at $Proxy0.regionServerReport(Unknown Source)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:470)
 at java.lang.Thread.run(Unknown Source)
 Caused by: java.net.ConnectException: Connection refused
 at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
 at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)
 at 
 org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
 at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
 at 
 org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:305)
 at 
 org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:826)
 at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:707)
 ... 4 more
 {code}
 ... all over the regionserver as it tries to send heartbeat to master on this 
 broken connection.
 On split, we close the parent and add the children to the catalog, but when 
 we try to tell the master about the split, it fails. That means the children 
 never get deployed; meantime the parent is offline.
 This issue is about going through the regionserver and, anywhere it has a 
 connection to the master, making sure on fault that no damage is done to the 
 table and that the regionserver puts a pause on splitting.





[jira] [Resolved] (HBASE-2088) Review code base to purge wacky things done to compensate for lack of hdfs sync

2014-07-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-2088.
--

Resolution: Won't Fix

Stale

 Review code base to purge wacky things done to compensate for lack of hdfs 
 sync
 ---

 Key: HBASE-2088
 URL: https://issues.apache.org/jira/browse/HBASE-2088
 Project: HBase
  Issue Type: Task
Reporter: stack

 Like we did with the compaction limiting thread and region server safe 
 mode after the transition to 0.20. apurtell





[jira] [Resolved] (HBASE-2096) Runtime error on HMaster and HRegionServer because dynamic instantiation using getConstructor fails

2014-07-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-2096.
--

Resolution: Won't Fix

Stale

 Runtime error on HMaster and HRegionServer because dynamic instantiation 
 using getConstructor fails
 ---

 Key: HBASE-2096
 URL: https://issues.apache.org/jira/browse/HBASE-2096
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.0
Reporter: Andrei Dragomir
 Attachments: hbase_runtime_configuration_error_getConstructor.patch


 When starting the regionserver (issue also reproduces on master), I get the 
 following error: 
 ---
 2010-01-06 15:10:48,208 INFO 
 org.apache.hadoop.hbase.regionserver.HRegionServer: 
 vmInputArguments=[-Xmx1000m, -XX:+HeapDumpOnOutOfMemoryError, 
 -XX:+UseConcMarkSweepGC, -XX:+CMSIncrementalMode, 
 -Dhbase.log.dir=/Users/adragomi/hbase/bin/../logs, 
 -Dhbase.log.file=hbase-adragomi-regionserver-adragomi-mac.corp.adobe.com.log, 
 -Dhbase.home.dir=/Users/adragomi/hbase/bin/.., -Dhbase.id.str=adragomi, 
 -Dhbase.root.logger=INFO,DRFA]
 2010-01-06 15:10:48,211 ERROR 
 org.apache.hadoop.hbase.regionserver.HRegionServer: Can not start region 
 server because java.lang.NoSuchMethodException: 
 org.apache.hadoop.hbase.regionserver.HRegionServer.init(org.apache.hadoop.hbase.HBaseConfiguration)
   at java.lang.Class.getConstructor0(Class.java:2706)
   at java.lang.Class.getConstructor(Class.java:1657)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.doMain(HRegionServer.java:2313)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2394)
 ---





[jira] [Resolved] (HBASE-2104) No exception thrown while during scanning no connection can be made to the regionserver

2014-07-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-2104.
--

Resolution: Won't Fix

Stale

 No exception thrown while during scanning no connection can be made to the 
 regionserver
 ---

 Key: HBASE-2104
 URL: https://issues.apache.org/jira/browse/HBASE-2104
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.20.1
Reporter: Robert Hofstra

 When a regionserver is stopped (shutdown or crash) and at the same moment a 
 client performs a scan on that regionserver, no exception is thrown at the 
 client side, nor is a reconnect to another regionserver attempted.
 ResultScanner.Iterator.hasNext() just returns false, so the client 
 assumes there are no more records.
 In ScannerCallable.call I noticed that the java.net.ConnectException is 
 caught and an empty Result array is returned.
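For illustration, the failure mode described above reduces to the following pattern. This is a hypothetical, self-contained stand-in, not HBase's actual ScannerCallable code: the fetch swallows the connection failure and returns an empty batch, so the caller cannot distinguish "end of scan" from "region server gone".

```java
import java.net.ConnectException;

public class SwallowedScanFailure {
    // Stand-in for one scanner RPC: on a dead server, the exception is
    // caught and an empty batch returned instead of being propagated.
    static String[] fetchNextBatch(boolean serverDown) {
        try {
            if (serverDown) {
                throw new ConnectException("Connection refused");
            }
            return new String[] {"row1", "row2"};
        } catch (ConnectException e) {
            // The bug pattern: failure is silently masked as "no more rows".
            return new String[0];
        }
    }

    public static void main(String[] args) {
        // Iterator semantics built on this batch report a clean end-of-scan
        // even though the server is unreachable.
        System.out.println(fetchNextBatch(true).length > 0);  // prints false
    }
}
```

A fix would rethrow (or retry against another region server) rather than returning the empty array.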





[jira] [Resolved] (HBASE-2148) Document how to enable/disable compression on a table; write an article... point at lzo howto

2014-07-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-2148.
--

Resolution: Fixed

See refguide

 Document how to enable/disable compression on a table; write an article... 
 point at lzo howto
 -

 Key: HBASE-2148
 URL: https://issues.apache.org/jira/browse/HBASE-2148
 Project: HBase
  Issue Type: Task
Reporter: stack







[jira] [Resolved] (HBASE-2145) bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --miniCluster randomRead 1 don't work

2014-07-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-2145.
--

Resolution: Fixed

This works now.

 bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --miniCluster 
 randomRead 1 don't work
 -

 Key: HBASE-2145
 URL: https://issues.apache.org/jira/browse/HBASE-2145
 Project: HBase
  Issue Type: Bug
Reporter: stack

 I see this in the 0.20.3RC.  It's been there a while, I'd guess.  Not enough to 
 sink the RC, I'd say.  In fact, all PE args could do with a review.
 {code}
 ...
 10/01/18 21:25:31 DEBUG zookeeper.ZooKeeperWrapper: Failed to read: 
 org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
 = ConnectionLoss for /hbase/master
 10/01/18 21:25:32 INFO zookeeper.ClientCnxn: Attempting connection to server 
 localhost/fe80:0:0:0:0:0:0:1%1:2181
 10/01/18 21:25:32 WARN zookeeper.ClientCnxn: Exception closing session 0x0 to 
 sun.nio.ch.SelectionKeyImpl@7284aa02
 java.net.ConnectException: Connection refused
 at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
 at 
 sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
 10/01/18 21:25:32 WARN zookeeper.ClientCnxn: Ignoring exception during 
 shutdown input
 java.nio.channels.ClosedChannelException
 at 
 sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
 at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
 at 
 org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
 10/01/18 21:25:32 WARN zookeeper.ClientCnxn: Ignoring exception during 
 shutdown output
 java.nio.channels.ClosedChannelException
 at 
 sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
 at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
 at 
 org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
 10/01/18 21:25:32 WARN zookeeper.ZooKeeperWrapper: Failed to create /hbase -- 
 check quorum servers, currently=localhost:2181
 org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
 = ConnectionLoss for /hbase
 at 
 org.apache.zookeeper.KeeperException.create(KeeperException.java:90)
 at 
 org.apache.zookeeper.KeeperException.create(KeeperException.java:42)
 at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:608)
 at 
 org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.ensureExists(ZooKeeperWrapper.java:405)
 at 
 org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.ensureParentExists(ZooKeeperWrapper.java:428)
 at 
 org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.writeMasterAddress(ZooKeeperWrapper.java:516)
 at 
 org.apache.hadoop.hbase.master.HMaster.writeAddressToZooKeeper(HMaster.java:263)
 at org.apache.hadoop.hbase.master.HMaster.&lt;init&gt;(HMaster.java:245)
 at 
 org.apache.hadoop.hbase.LocalHBaseCluster.&lt;init&gt;(LocalHBaseCluster.java:94)
 at 
 org.apache.hadoop.hbase.MiniHBaseCluster.&lt;init&gt;(MiniHBaseCluster.java:61)
 at 
 org.apache.hadoop.hbase.MiniHBaseCluster.&lt;init&gt;(MiniHBaseCluster.java:53)
 at 
 org.apache.hadoop.hbase.PerformanceEvaluation.runTest(PerformanceEvaluation.java:871)
 at 
 org.apache.hadoop.hbase.PerformanceEvaluation.doCommandLine(PerformanceEvaluation.java:981)
 at 
 org.apache.hadoop.hbase.PerformanceEvaluation.main(PerformanceEvaluation.java:1001)
 10/01/18 21:25:34 INFO zookeeper.ClientCnxn: Attempting connection to server 
 localhost/0:0:0:0:0:0:0:1:2181
 10/01/18 21:25:34 WARN zookeeper.ClientCnxn: Exception closing session 0x0 to 
 sun.nio.ch.SelectionKeyImpl@52a34783
 java.net.ConnectException: Connection refused
 at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
 at 
 sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
 10/01/18 21:25:34 WARN zookeeper.ClientCnxn: Ignoring exception during 
 shutdown input
 java.nio.channels.ClosedChannelException
 at 
 sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
 at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
 at 
 org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
 10/01/18 21:25:34 WARN zookeeper.ClientCnxn: Ignoring exception during 
 shutdown output
 

[jira] [Updated] (HBASE-2171) Alter statement in the hbase shell doesn't match documentation.

2014-07-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-2171:
-

Assignee: Alex Newman

 Alter statement in the hbase shell doesn't match documentation.
 ---

 Key: HBASE-2171
 URL: https://issues.apache.org/jira/browse/HBASE-2171
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.20.2, 0.20.3
 Environment: linux 
 java -version
 java version 1.6.0_16
 Java(TM) SE Runtime Environment (build 1.6.0_16-b01)
 Java HotSpot(TM) 64-Bit Server VM (build 14.2-b01, mixed mode)
Reporter: Alex Newman
Assignee: Alex Newman

 The documentation claims this should work. Perhaps this jira could be a 
 starting point for a more detailed explanation of alter
 HBASE SHELL COMMANDS:
  alter Alter column family schema;  pass table name and a dictionary
specifying new column family schema. Dictionaries are described
below in the GENERAL NOTES section.  Dictionary must include name
of column family to alter.  For example,
To change or add the 'f1' column family in table 't1' from defaults
to instead keep a maximum of 5 cell VERSIONS, do:
hbase> alter 't1', {NAME => 'f1', VERSIONS => 5}
To delete the 'f1' column family in table 't1', do:
hbase> alter 't1', {NAME => 'f1', METHOD => 'delete'}
You can also change table-scope attributes like MAX_FILESIZE,
MEMSTORE_FLUSHSIZE and READONLY.
For example, to change the max size of a family to 128MB, do:
hbase> alter 't1', {METHOD => 'table_att', MAX_FILESIZE => 
 '134217728'}
 
 HBase Shell; enter 'help<RETURN>' for list of supported commands.
 Version: 0.20.3, r902334, Mon Jan 25 13:13:08 PST 2010
 hbase(main):001:0> drop 't3'
 0 row(s) in 0.0060 seconds
 0 row(s) in 0.0050 seconds
 0 row(s) in 0.1560 seconds
 hbase(main):002:0> create 't3'
 0 row(s) in 2.1050 seconds
 hbase(main):003:0> disable 't3'
 0 row(s) in 2.0980 seconds
 hbase(main):004:0> alter 't3', {NAME => 'f1', VERSIONS => 5}
 NativeException: java.lang.NullPointerException: null
  





[jira] [Resolved] (HBASE-2176) HRegionInfo reported empty on regions in meta, leading to them being deleted, although the regions contain data and exist

2014-07-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-2176.
--

Resolution: Won't Fix

stale

 HRegionInfo reported empty on regions in meta, leading to them being deleted, 
 although the regions contain data and exist
 -

 Key: HBASE-2176
 URL: https://issues.apache.org/jira/browse/HBASE-2176
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.0
Reporter: Andrei Dragomir
Priority: Critical
 Attachments: 799255.txt


 We ran some tests on our cluster, and getting back reports about 
 WrongRegionException, on some rows. After looking at the data, we see that we 
 have gaps between regions, like this:
 {noformat}
 demo__users,user_8949795897,1264089193398  l2:60030  736660864  
 user_8949795897  user_8950697145 - end key
 demo__users,user_8953502603,1263992844343  l5:60030  593335873  
 user_8953502603 - should be start key here   user_8956071605
 {noformat}
 Fact: we had 28 regions that were reported with empty HRegionInfo, and 
 deleted from .META.. 
 Fact: we recovered our data entirely, without any issues, by running the 
 .META. restore script from table contents (bin/add_table.rb)
 Fact: on our regionservers, we have three days with no logs. To the best of 
 our knowledge, the machines were not rebooted, the processes were running. 
 During these three days, on the master, the only entry in the logs 
 (repeated), every second, is a .META. scan:
 {noformat}
 2010-01-23 00:01:27,816 INFO org.apache.hadoop.hbase.master.BaseScanner: 
 RegionManager.rootScanner scan of 1 row(s) of meta region {server: 
 10.72.135.7:60020, regionname: -ROOT-,,0, startKey: } complete
 2010-01-23 00:01:34,413 INFO org.apache.hadoop.hbase.master.ServerManager: 6 
 region servers, 0 dead, average load 1113.7
 2010-01-23 00:02:23,645 INFO org.apache.hadoop.hbase.master.BaseScanner: 
 RegionManager.metaScanner scanning meta region {server: 10.72.135.10:60020, 
 regionname: .META.,,1, startKey: }
 2010-01-23 00:02:26,002 INFO org.apache.hadoop.hbase.master.BaseScanner: 
 RegionManager.metaScanner scan of 6679 row(s) of meta region {server: 
 10.72.135.10:60020, regionname: .META.,,1, startKey: } complete
 2010-01-23 00:02:26,002 INFO org.apache.hadoop.hbase.master.BaseScanner: All 
 1 .META. region(s) scanned
 2010-01-23 00:02:27,821 INFO org.apache.hadoop.hbase.master.BaseScanner: 
 RegionManager.rootScanner scanning meta region {server: 10.72.135.7:60020, 
 regionname: -ROOT-,,0, startKey: }
 ...
 {noformat}
 In the master logs, we see a pretty normal evolution: region r0 is split into 
 r1 and r2. Now, r1 exists and is good, r2 does not exist in .META. anymore, 
 because it was reported as having empty HRegionInfo. The only thing in the 
 master logs that is weird is that the message about updating the region in 
 meta comes up twice:
 {noformat}
 2010-01-27 22:46:45,007 INFO 
 org.apache.hadoop.hbase.master.RegionServerOperation: 
 demo__users,user_8950697145,1264089193398 open on 10.72.135.7:60020
 2010-01-27 22:46:45,010 INFO 
 org.apache.hadoop.hbase.master.RegionServerOperation: Updated row 
 demo__users,user_8950697145,1264089193398 in region .META.,,1 with 
 startcode=1264661019484, server=10.72.135.7:60020
 2010-01-27 22:46:45,010 INFO 
 org.apache.hadoop.hbase.master.RegionServerOperation: 
 demo__users,user_8950697145,1264089193398 open on 10.72.135.7:60020
 2010-01-27 22:46:45,012 INFO 
 org.apache.hadoop.hbase.master.RegionServerOperation: Updated row 
 demo__users,user_8950697145,1264089193398 in region .META.,,1 with 
 startcode=1264661019484, server=10.72.135.7:60020
 {noformat}
 Attached you will find the entire forensics work, with explanations, in a 
 text file. 
 Suppositions:
 Our entire cluster was in a really weird state. All the regionservers are 
 missing logs for three days, and to the best of our knowledge they were 
 running, and in this time the master has ONLY .META. scan messages, every 
 second, reporting 6 regionservers live, out of 7 total. 
 Also, during this time, we get filesystem closed messages on a regionservers 
 with one of the missing regions. This is after the gap in the logs. 
 How we suppose the data in .META. was lost:
 1. Race conditions in ServerManager / RegionManager. In our logs, we have 
 about 3 or 4 CMEs (ConcurrentModificationException) in these classes (see the attached file)
 2. Data loss in HDFS. On a regionserver, we get filesystem closed messages
 3. Data could not be read from HDFS (highly unlikely, there are no weird data 
 read messages)
 4. Race condition leading to loss of the HRegionInfo from memory, and then 
 persisted as empty. 





[jira] [Updated] (HBASE-11517) TestReplicaWithCluster turns zombie

2014-07-16 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-11517:


Status: Patch Available  (was: Open)

 TestReplicaWithCluster turns zombie
 ---

 Key: HBASE-11517
 URL: https://issues.apache.org/jira/browse/HBASE-11517
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Attachments: 10930v4.txt, 11517.timeouts.txt, 11517v2.txt, 
 11517v2.txt, HBASE-11517_v1-mantonov.patch


 Happened a few times for me fixing unrelated findbugs.  Here is example: 
 https://builds.apache.org/job/PreCommit-HBASE-Build/10065//consoleFull  See 
 how it is hanging creating a table:
 pool-1-thread-1 prio=10 tid=0x7f1714657000 nid=0x4b7f waiting on 
 condition [0x7f16e9f8]
java.lang.Thread.State: TIMED_WAITING (sleeping)
   at java.lang.Thread.sleep(Native Method)
   at 
 org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:539)
   at 
 org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:424)
   at 
 org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1185)
   at 
 org.apache.hadoop.hbase.client.TestReplicaWithCluster.testCreateDeleteTable(TestReplicaWithCluster.java:138)





[jira] [Updated] (HBASE-11517) TestReplicaWithCluster turns zombie

2014-07-16 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-11517:


Status: Open  (was: Patch Available)

 TestReplicaWithCluster turns zombie
 ---

 Key: HBASE-11517
 URL: https://issues.apache.org/jira/browse/HBASE-11517
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Attachments: 10930v4.txt, 11517.timeouts.txt, 11517v2.txt, 
 11517v2.txt, HBASE-11517_v1-mantonov.patch


 Happened a few times for me fixing unrelated findbugs.  Here is example: 
 https://builds.apache.org/job/PreCommit-HBASE-Build/10065//consoleFull  See 
 how it is hanging creating a table:
 pool-1-thread-1 prio=10 tid=0x7f1714657000 nid=0x4b7f waiting on 
 condition [0x7f16e9f8]
java.lang.Thread.State: TIMED_WAITING (sleeping)
   at java.lang.Thread.sleep(Native Method)
   at 
 org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:539)
   at 
 org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:424)
   at 
 org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1185)
   at 
 org.apache.hadoop.hbase.client.TestReplicaWithCluster.testCreateDeleteTable(TestReplicaWithCluster.java:138)





[jira] [Commented] (HBASE-11516) Track time spent in executing coprocessors in each region.

2014-07-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063949#comment-14063949
 ] 

Andrew Purtell commented on HBASE-11516:


Why not track coprocessor upcall latency the same way we track HFile read and 
write op latencies? Start with HFile.java, look for the 'LATENCY_BUFFER_SIZE' 
constant, surrounding members and methods, and related code. 

 Track time spent in executing coprocessors in each region.
 --

 Key: HBASE-11516
 URL: https://issues.apache.org/jira/browse/HBASE-11516
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors
Affects Versions: 0.98.4
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor
 Fix For: 0.98.5

 Attachments: HBASE-11516.patch, HBASE-11516_v2.patch


 Currently, the time spent in executing coprocessors is not yet being tracked. 
 This feature can be handy for debugging coprocessors in case of any trouble.





[jira] [Comment Edited] (HBASE-11516) Track time spent in executing coprocessors in each region.

2014-07-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063949#comment-14063949
 ] 

Andrew Purtell edited comment on HBASE-11516 at 7/16/14 7:19 PM:
-

Why not track coprocessor upcall latency the same way we track HFile read and 
write op latencies? Start with HFile.java, look for the 'LATENCY_BUFFER_SIZE' 
constant, surrounding members and methods, and related code.  This will 
aggregate the latency of all upcalls to all coprocessors over the metrics 
reporting period, but that would be more actionable than a single global 
incrementing counter, in my opinion. You could further refine this by tracking 
latencies using a small sample buffer in the coprocessor environment instead of 
with a global sample buffer and report them per coprocessor class. 


was (Author: apurtell):
Why not track coprocessor upcall latency the same way we track HFile read and 
write op latencies? Start with HFile.java, look for the 'LATENCY_BUFFER_SIZE' 
constant, surrounding members and methods, and related code. 
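A small fixed-size sample buffer of the kind this comment describes might look like the sketch below. This is hypothetical illustration only; HBase's actual HFile metrics code (around LATENCY_BUFFER_SIZE) differs in detail. One such buffer per coprocessor environment would give the per-class latencies suggested above.

```java
// Ring buffer of recent latency samples; oldest samples are overwritten,
// so the buffer always holds the most recent window of upcalls.
public class LatencySampleBuffer {
    private final long[] samples;
    private int next;   // next slot to overwrite
    private int count;  // number of valid samples (<= capacity)

    public LatencySampleBuffer(int capacity) {
        this.samples = new long[capacity];
    }

    // Record one upcall latency in nanoseconds.
    public synchronized void record(long nanos) {
        samples[next] = nanos;
        next = (next + 1) % samples.length;
        if (count < samples.length) count++;
    }

    // Drained by the metrics reporter each reporting period.
    public synchronized double mean() {
        if (count == 0) return 0.0;
        long sum = 0;
        for (int i = 0; i < count; i++) sum += samples[i];
        return (double) sum / count;
    }

    public static void main(String[] args) {
        LatencySampleBuffer buf = new LatencySampleBuffer(4);
        buf.record(10); buf.record(20); buf.record(30);
        System.out.println(buf.mean());  // prints 20.0
    }
}
```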

 Track time spent in executing coprocessors in each region.
 --

 Key: HBASE-11516
 URL: https://issues.apache.org/jira/browse/HBASE-11516
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors
Affects Versions: 0.98.4
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor
 Fix For: 0.98.5

 Attachments: HBASE-11516.patch, HBASE-11516_v2.patch


 Currently, the time spent in executing coprocessors is not yet being tracked. 
 This feature can be handy for debugging coprocessors in case of any trouble.





[jira] [Commented] (HBASE-11524) TestReplicaWithCluster#testChangeTable and TestReplicaWithCluster#testCreateDeleteTable fail

2014-07-16 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063956#comment-14063956
 ] 

Mikhail Antonov commented on HBASE-11524:
-

Just to double check, the patch I referred to (the last one, 
https://issues.apache.org/jira/secure/attachment/12655995/HBASE-11517_v1-mantonov.patch)
 hasn't been committed, it's attached to that related jira. Does the test fail 
with this patch manually applied on top of current master? 

If so, are there more specific logs for me to look at?

 TestReplicaWithCluster#testChangeTable and 
 TestReplicaWithCluster#testCreateDeleteTable fail
 

 Key: HBASE-11524
 URL: https://issues.apache.org/jira/browse/HBASE-11524
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Qiang Tian

 git bisect points to HBASE-11367.
 build server run: (I did not get it in my local test)
 {quote}
 java.lang.Exception: test timed out after 3 milliseconds
   at java.lang.Thread.sleep(Native Method)
   at 
 org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:539)
   at 
 org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:424)
   at 
 org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1185)
   at 
 org.apache.hadoop.hbase.client.TestReplicaWithCluster.testCreateDeleteTable(TestReplicaWithCluster.java:138)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
   at 
 org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
 {quote}
 suspected log messages:
 {quote}
 2014-07-15 23:52:09,263 WARN  
 [PostOpenDeployTasks:44a7fe2589d83138452640fecb7cae80] 
 handler.OpenRegionHandler$PostOpenDeployTasksThread(326): Exception running 
 postOpenDeployTasks; region=44a7fe2589d83138452640fecb7cae80
 java.lang.NullPointerException: No connection
 at 
 org.apache.hadoop.hbase.MetaTableAccessor.getHTable(MetaTableAccessor.java:180)
 at 
 org.apache.hadoop.hbase.MetaTableAccessor.getMetaHTable(MetaTableAccessor.java:193)
 at 
 org.apache.hadoop.hbase.MetaTableAccessor.putToMetaTable(MetaTableAccessor.java:941)
 at 
 org.apache.hadoop.hbase.MetaTableAccessor.updateLocation(MetaTableAccessor.java:1300)
 at 
 org.apache.hadoop.hbase.MetaTableAccessor.updateRegionLocation(MetaTableAccessor.java:1278)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.postOpenDeployTasks(HRegionServer.java:1724)
 at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler$PostOpenDeployTasksThread.run(OpenRegionHandler.java:321)
 2014-07-15 23:52:09,272 INFO  
 coordination.ZkOpenRegionCoordination(231): Opening of region {ENCODED => 
 44a7fe2589d83138452640fecb7cae80, NAME => 
 'testCreateDeleteTable,,1405493529036_0001.44a7fe2589d83138452640fecb7cae80.',
  STARTKEY => '', ENDKEY => '', REPLICA_ID => 1} failed, transitioning from 
 OPENING to FAILED_OPEN in ZK, expecting version 1
 2014-07-15 23:52:09,272 INFO  [RS_OPEN_REGION-bdvm101:18352-1] 
 regionserver.HRegion(1239): Closed 
 testCreateDeleteTable,,1405493529036.8573bc63fc7f328cf926a28f22c0db07.
 2014-07-15 23:52:09,272 DEBUG [RS_OPEN_REGION-bdvm101:23828-0] 
 zookeeper.ZKAssign(805): regionserver:23828-0x1473df0c2ef0002, 
 quorum=localhost:61041, baseZNode=/hbase Transitioning 
 44a7fe2589d83138452640fecb7cae80 from RS_ZK_REGION_OPENING to 
 RS_ZK_REGION_FAILED_OPEN
 {quote}





[jira] [Commented] (HBASE-11517) TestReplicaWithCluster turns zombie

2014-07-16 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063954#comment-14063954
 ] 

Mikhail Antonov commented on HBASE-11517:
-

linked to another similar failure.

 TestReplicaWithCluster turns zombie
 ---

 Key: HBASE-11517
 URL: https://issues.apache.org/jira/browse/HBASE-11517
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Attachments: 10930v4.txt, 11517.timeouts.txt, 11517v2.txt, 
 11517v2.txt, HBASE-11517_v1-mantonov.patch


 Happened a few times for me fixing unrelated findbugs.  Here is example: 
 https://builds.apache.org/job/PreCommit-HBASE-Build/10065//consoleFull  See 
 how it is hanging creating a table:
 pool-1-thread-1 prio=10 tid=0x7f1714657000 nid=0x4b7f waiting on 
 condition [0x7f16e9f8]
java.lang.Thread.State: TIMED_WAITING (sleeping)
   at java.lang.Thread.sleep(Native Method)
   at 
 org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:539)
   at 
 org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:424)
   at 
 org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1185)
   at 
 org.apache.hadoop.hbase.client.TestReplicaWithCluster.testCreateDeleteTable(TestReplicaWithCluster.java:138)





[jira] [Commented] (HBASE-11517) TestReplicaWithCluster turns zombie

2014-07-16 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063958#comment-14063958
 ] 

Mikhail Antonov commented on HBASE-11517:
-

Retrying. Test passes locally when run isolated for me. Will try the whole 
suite (sorry for that p1 thing, exported patch from idea vs. 'git diff ')

 TestReplicaWithCluster turns zombie
 ---

 Key: HBASE-11517
 URL: https://issues.apache.org/jira/browse/HBASE-11517
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Attachments: 10930v4.txt, 11517.timeouts.txt, 11517v2.txt, 
 11517v2.txt, HBASE-11517_v1-mantonov.patch


 Happened a few times for me fixing unrelated findbugs.  Here is example: 
 https://builds.apache.org/job/PreCommit-HBASE-Build/10065//consoleFull  See 
 how it is hanging creating a table:
 pool-1-thread-1 prio=10 tid=0x7f1714657000 nid=0x4b7f waiting on 
 condition [0x7f16e9f8]
java.lang.Thread.State: TIMED_WAITING (sleeping)
   at java.lang.Thread.sleep(Native Method)
   at 
 org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:539)
   at 
 org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:424)
   at 
 org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1185)
   at 
 org.apache.hadoop.hbase.client.TestReplicaWithCluster.testCreateDeleteTable(TestReplicaWithCluster.java:138)





[jira] [Created] (HBASE-11528) The restoreSnapshot operation should delete the rollback snapshot upon a successful restore

2014-07-16 Thread churro morales (JIRA)
churro morales created HBASE-11528:
--

 Summary: The restoreSnapshot operation should delete the rollback 
snapshot upon a successful restore
 Key: HBASE-11528
 URL: https://issues.apache.org/jira/browse/HBASE-11528
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.20
Reporter: churro morales
Assignee: churro morales
Priority: Minor


We take a snapshot: rollbackSnapshot prior to doing a restore such that if 
the restore fails we can revert the table back to its pre-restore state.  If we 
are successful in restoring the table, we should delete the rollbackSnapshot 
when the restoreSnapshot operation successfully completes.
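The proposed flow can be sketched as follows. This is a hypothetical, self-contained illustration; the SnapshotOps interface stands in for the real HBase admin API, and the rollback snapshot name is invented for the example:

```java
import java.util.ArrayList;
import java.util.List;

public class RestoreWithRollback {
    // Stand-in for the snapshot-related admin operations.
    interface SnapshotOps {
        void snapshot(String name, String table);
        void restore(String snapshotName, String table);
        void deleteSnapshot(String name);
    }

    static void restoreSnapshot(SnapshotOps ops, String snapshotName, String table) {
        String rollback = "rollback-" + table;   // hypothetical naming scheme
        ops.snapshot(rollback, table);           // safety net before restoring
        try {
            ops.restore(snapshotName, table);
        } catch (RuntimeException e) {
            ops.restore(rollback, table);        // revert to pre-restore state
            throw e;
        }
        ops.deleteSnapshot(rollback);            // success: drop the safety net
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        SnapshotOps ops = new SnapshotOps() {
            public void snapshot(String n, String t) { log.add("snapshot:" + n); }
            public void restore(String s, String t) { log.add("restore:" + s); }
            public void deleteSnapshot(String n) { log.add("delete:" + n); }
        };
        restoreSnapshot(ops, "mySnapshot", "t1");
        // prints [snapshot:rollback-t1, restore:mySnapshot, delete:rollback-t1]
        System.out.println(log);
    }
}
```

The key point of the issue is the final deleteSnapshot call on the success path, which the pre-patch behavior omitted.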





[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-07-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064020#comment-14064020
 ] 

Hudson commented on HBASE-10322:


FAILURE: Integrated in HBase-TRUNK #5313 (See 
[https://builds.apache.org/job/HBase-TRUNK/5313/])
HBASE-10398 HBase book updates for Replication after HBASE-10322. (Misty) 
(anoopsamjohn: rev da8f0a336d9a3b516fc1f5d33c462b1ef4996117)
* src/main/docbkx/book.xml
* src/main/docbkx/security.xml
* src/main/docbkx/ops_mgt.xml


 Strip tags from KV while sending back to client on reads
 

 Key: HBASE-10322
 URL: https://issues.apache.org/jira/browse/HBASE-10322
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10322.patch, HBASE-10322_V2.patch, 
 HBASE-10322_V3.patch, HBASE-10322_V4.patch, HBASE-10322_V5.patch, 
 HBASE-10322_V6.patch, HBASE-10322_codec.patch


 Right now we have some inconsistency wrt sending back tags on read. We do 
 this in scan when using Java client(Codec based cell block encoding). But 
 during a Get operation or when a pure PB based Scan comes we are not sending 
 back the tags.  So any of the below fix we have to do
 1. Send back tags in missing cases also. But sending back visibility 
 expression/ cell ACL is not correct.
 2. Don't send back tags in any case. This will be a problem when a tool like 
 ExportTool uses the scan to export the table data. We will miss exporting the 
 cell visibility/ACL.
 3. Send back tags based on some condition. It has to be on a per-scan basis. 
 The simplest way is to pass some kind of attribute in Scan which says whether to 
 send back tags or not. But trusting something the scan specifies might not 
 be correct, IMO. Then comes the way of checking the user who is doing the 
 scan: only send back tags when an HBase superuser is doing the scan. So 
 in a case like the Export Tool's, the execution should happen as a super 
 user.
 So IMO we should go with #3.
 Patch coming soon.
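Option 3 above reduces to a simple conditional strip on the read path. The sketch below is a hypothetical illustration only; Cell here is a simplified stand-in, not HBase's real Cell/KeyValue type, and the superuser check stands in for the real security machinery:

```java
import java.util.Collections;
import java.util.List;
import java.util.Set;

public class TagStripper {
    // Simplified stand-in for a cell carrying tags.
    static class Cell {
        final byte[] value;
        final List<String> tags;
        Cell(byte[] value, List<String> tags) { this.value = value; this.tags = tags; }
    }

    private final Set<String> superusers;

    TagStripper(Set<String> superusers) { this.superusers = superusers; }

    // Keep tags only for superusers, so a tool like ExportTool run as a
    // superuser still sees visibility/ACL tags; everyone else gets the
    // same value with tags stripped.
    Cell forClient(Cell cell, String requestingUser) {
        if (superusers.contains(requestingUser)) {
            return cell;
        }
        return new Cell(cell.value, Collections.emptyList());
    }

    public static void main(String[] args) {
        TagStripper ts = new TagStripper(Set.of("hbase"));
        Cell c = new Cell(new byte[] {1}, List.of("visibility:secret"));
        System.out.println(ts.forClient(c, "hbase").tags.size());  // prints 1
        System.out.println(ts.forClient(c, "alice").tags.size());  // prints 0
    }
}
```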





[jira] [Updated] (HBASE-11528) The restoreSnapshot operation should delete the rollback snapshot upon a successful restore

2014-07-16 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HBASE-11528:
---

Attachment: HBASE-11528-0.94.patch

 The restoreSnapshot operation should delete the rollback snapshot upon a 
 successful restore
 ---

 Key: HBASE-11528
 URL: https://issues.apache.org/jira/browse/HBASE-11528
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.20
Reporter: churro morales
Assignee: churro morales
Priority: Minor
 Attachments: HBASE-11528-0.94.patch


 We take a snapshot: rollbackSnapshot prior to doing a restore such that if 
 the restore fails we can revert the table back to its pre-restore state.  If 
 we are successful in restoring the table, we should delete the 
 rollbackSnapshot when the restoreSnapshot operation successfully completes.





[jira] [Commented] (HBASE-11521) Modify pom.xml to copy the images/ and css/ directories to the right location for the Ref Guide to see them correctly

2014-07-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064021#comment-14064021
 ] 

Hudson commented on HBASE-11521:


FAILURE: Integrated in HBase-TRUNK #5313 (See 
[https://builds.apache.org/job/HBase-TRUNK/5313/])
HBASE-11521 Modify pom.xml to copy the images/ and css/ directories to the 
right location for the Ref Guide to see them correctly AMENDMENT (stack: rev 
8603da0d0db35adee5ba9af1565aeef1f5ca53f0)
* pom.xml


 Modify pom.xml to copy the images/ and css/ directories to the right location 
 for the Ref Guide to see them correctly
 -

 Key: HBASE-11521
 URL: https://issues.apache.org/jira/browse/HBASE-11521
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
Priority: Critical
 Fix For: 2.0.0

 Attachments: 11521.amendment.txt, HBASE-11521.patch


 Currently, images are broken in the html-single version of the Ref Guide and 
 a CSS file is missing from it too. This change fixes those issues.





[jira] [Commented] (HBASE-10398) HBase book updates for Replication after HBASE-10322

2014-07-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064022#comment-14064022
 ] 

Hudson commented on HBASE-10398:


FAILURE: Integrated in HBase-TRUNK #5313 (See 
[https://builds.apache.org/job/HBase-TRUNK/5313/])
HBASE-10398 HBase book updates for Replication after HBASE-10322. (Misty) 
(anoopsamjohn: rev da8f0a336d9a3b516fc1f5d33c462b1ef4996117)
* src/main/docbkx/book.xml
* src/main/docbkx/security.xml
* src/main/docbkx/ops_mgt.xml


 HBase book updates for Replication after HBASE-10322
 

 Key: HBASE-10398
 URL: https://issues.apache.org/jira/browse/HBASE-10398
 Project: HBase
  Issue Type: Task
  Components: documentation
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Misty Stanley-Jones
 Fix For: 2.0.0

 Attachments: HBASE-10398-1.patch, HBASE-10398-2.patch, 
 HBASE-10398.patch







