[jira] [Created] (HBASE-16418) Reduce duration of sleep waiting for region reopen in IntegrationTestBulkLoad#installSlowingCoproc()

2016-08-15 Thread Ted Yu (JIRA)
Ted Yu created HBASE-16418:
--

 Summary: Reduce duration of sleep waiting for region reopen in 
IntegrationTestBulkLoad#installSlowingCoproc()
 Key: HBASE-16418
 URL: https://issues.apache.org/jira/browse/HBASE-16418
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Priority: Minor


Currently we have the following code:
{code}
desc.addCoprocessor(SlowMeCoproScanOperations.class.getName());
HBaseTestingUtility.modifyTableSync(admin, desc);
//sleep for sometime. Hope is that the regions are closed/opened before
//the sleep returns. TODO: do this better
Thread.sleep(3);
{code}
Instead of sleeping for a fixed duration, we should detect when the regions have 
reopened, e.g. with a custom coprocessor.
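The fixed sleep can be replaced by a bounded poll that re-checks a readiness condition until it holds or a deadline passes. A self-contained sketch of that pattern (the condition shown is only a stand-in; the real check would ask the cluster whether the regions have reopened):

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

public class BoundedPoll {
  /**
   * Polls {@code condition} every {@code intervalMs} ms until it is true or
   * {@code timeoutMs} ms have elapsed; returns whether it became true.
   */
  static boolean waitFor(BooleanSupplier condition, long timeoutMs, long intervalMs)
      throws InterruptedException {
    long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
    while (System.nanoTime() < deadline) {
      if (condition.getAsBoolean()) {
        return true;
      }
      Thread.sleep(intervalMs);
    }
    return condition.getAsBoolean();  // one last check at the deadline
  }

  public static void main(String[] args) throws InterruptedException {
    long start = System.currentTimeMillis();
    // Stand-in for "have the regions reopened?": becomes true after ~200 ms.
    boolean ok = waitFor(() -> System.currentTimeMillis() - start > 200, 5000, 50);
    System.out.println(ok);  // true
  }
}
```

The poll returns as soon as the condition holds instead of always paying the worst-case sleep.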



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16418) Reduce duration of sleep waiting for region reopen in IntegrationTestBulkLoad#installSlowingCoproc()

2016-08-15 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421805#comment-15421805
 ] 

Ted Yu commented on HBASE-16418:


{code}
Thread.sleep(1 * 1000l);
  } else {
LOG.debug("All regions updated.");
break;
  }
} while (status.getFirst() != 0 && i++ < 500);
{code}
modifyTableSync() already waits up to 500 seconds (500 one-second sleeps).
So the extra wait in IntegrationTestBulkLoad looks unnecessary.






[jira] [Commented] (HBASE-16416) Make NoncedRegionServerCallable extends RegionServerCallable

2016-08-15 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421961#comment-15421961
 ] 

Guanghao Zhang commented on HBASE-16416:


[~stack] Any ideas?

> Make NoncedRegionServerCallable extends RegionServerCallable
> 
>
> Key: HBASE-16416
> URL: https://issues.apache.org/jira/browse/HBASE-16416
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Minor
> Attachments: HBASE-16416.patch
>
>
> After HBASE-16308, there is a new class NoncedRegionServerCallable, which 
> extends AbstractRegionServerCallable. But it duplicates some methods of 
> RegionServerCallable, so we can make NoncedRegionServerCallable extend 
> RegionServerCallable instead.





[jira] [Comment Edited] (HBASE-7621) REST server doesn't support binary row keys

2016-08-15 Thread Keith David Winkler (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421936#comment-15421936
 ] 

Keith David Winkler edited comment on HBASE-7621 at 8/16/16 12:40 AM:
--

A more concise and complete description of the problem, I hope: 

org.apache.hadoop.hbase.util.Bytes.toStringBinary does not work for encoding 
row keys in URLs for two reasons.

(1) It escapes characters with a backslash x instead of % (\x02 instead of %02) 
and backslash is NOT a valid URL character.
(2) It escapes a SUBSET of the characters which must be escaped.  For example 
it does not escape "|", which is not a valid URL character and must be escaped.

This problem makes RemoteHTable unusable for tables with arbitrary binary keys. 
 Users cannot do the URI escape before calling RemoteHTable methods in all 
cases because, in the put methods, for example, the passed row key 
(pre-escaped) is also added to the request body, where it should NOT be 
escaped.  
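For illustration, percent-encoding arbitrary key bytes per RFC 3986 avoids both problems: every byte outside the unreserved set becomes %XX, which is always URL-safe. A minimal sketch (not HBase code):

```java
public class PercentEncode {
  // RFC 3986 unreserved characters may appear literally in a URL path segment;
  // every other byte must be percent-encoded as %XX.
  static boolean unreserved(int b) {
    return (b >= 'A' && b <= 'Z') || (b >= 'a' && b <= 'z')
        || (b >= '0' && b <= '9') || b == '-' || b == '.' || b == '_' || b == '~';
  }

  static String encode(byte[] key) {
    StringBuilder sb = new StringBuilder();
    for (byte b : key) {
      int v = b & 0xFF;
      if (unreserved(v)) {
        sb.append((char) v);
      } else {
        sb.append(String.format("%%%02X", v));
      }
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    // A binary key containing 0x02 and '|' -- both must be escaped in a URL.
    byte[] key = {'r', 'o', 'w', 0x02, '|', 'x'};
    System.out.println(encode(key));  // row%02%7Cx
  }
}
```

Note this covers only the URL side; the same raw bytes would still go unescaped into the request body.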

Note the annotations on RemoteHTable:

{code}
@InterfaceAudience.Public
@InterfaceStability.Stable
public class RemoteHTable implements Table {
{code}

was (Author: kdwinkler):
A more concise and complete description of the problem I hope: 

org.apache.hadoop.hbase.util.Bytes.toStringBinary does not work for encoding 
row keys in URLs for two reasons.

(1) It escapes characters with a backslash x instead of % (\x02 instead of %02) 
and backslash is NOT a valid URL character.
(2) It escapes a SUBSET of the characters which must be escaped.  For example 
it does not escape "|", which is not a valid URL character and must be escaped.

This problem makes RemoteHTable unusable for tables with arbitrary binary keys. 
 Users cannot do the URI escape before calling RemoteHTable methods in all 
cases because, in the put methods, for example, the passed row key 
(pre-escaped) is also added to the request body, where it should NOT be 
escaped.  



> REST server doesn't support binary row keys
> ---
>
> Key: HBASE-7621
> URL: https://issues.apache.org/jira/browse/HBASE-7621
> Project: HBase
>  Issue Type: Bug
>  Components: REST
>Affects Versions: 0.94.0, 0.95.2, 0.98.4
>Reporter: Craig Muchinsky
>
> The REST server doesn't seem to support using binary (MD5 for example) row 
> keys. I believe the root cause of this is the use of Bytes.toBytes() in the 
> RowSpec.parseRowKeys() method. Based on the use of Bytes.toStringBinary() 
> within RemoteHTable.buildRowSpec(), I believe the converse function 
> Bytes.toBytesBinary() should be used for row key parsing in 
> RowSpec.parseRowKeys().
> I also noticed that the RemoteHTable.buildRowSpec() method isn't URL encoding 
> the row key, which is a mismatch to the logic in RowSpec.parseRowKeys() which 
> performs URL decoding for both the start and stop row keys.
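To illustrate the proposed roundtrip, here is a deliberately simplified model of the \xNN escaping. The real Bytes.toStringBinary/toBytesBinary in HBase pass through a somewhat different printable set, so treat this only as a sketch of why the converse parser is needed on the server side:

```java
public class BinaryString {
  // Simplified model of Bytes.toStringBinary: printable ASCII passes through,
  // everything else becomes \xNN. (The real HBase method escapes a similar set.)
  static String toStringBinary(byte[] b) {
    StringBuilder sb = new StringBuilder();
    for (byte x : b) {
      int v = x & 0xFF;
      if (v >= 0x20 && v < 0x7F && v != '\\') {
        sb.append((char) v);
      } else {
        sb.append(String.format("\\x%02X", v));
      }
    }
    return sb.toString();
  }

  // The converse parser, analogous to Bytes.toBytesBinary.
  static byte[] toBytesBinary(String s) {
    java.io.ByteArrayOutputStream out = new java.io.ByteArrayOutputStream();
    for (int i = 0; i < s.length(); i++) {
      char c = s.charAt(i);
      if (c == '\\' && i + 3 < s.length() && s.charAt(i + 1) == 'x') {
        out.write(Integer.parseInt(s.substring(i + 2, i + 4), 16));
        i += 3;  // skip "xNN"
      } else {
        out.write(c);
      }
    }
    return out.toByteArray();
  }

  public static void main(String[] args) {
    byte[] key = {'k', 0x02, (byte) 0xFF};
    String s = toStringBinary(key);
    System.out.println(s);  // k\x02\xFF
    System.out.println(java.util.Arrays.equals(key, toBytesBinary(s)));  // true
  }
}
```

Decoding with plain Bytes.toBytes, as RowSpec.parseRowKeys does today, would keep the literal backslash-x text instead of restoring the original bytes.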





[jira] [Commented] (HBASE-7621) REST server doesn't support binary row keys

2016-08-15 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421975#comment-15421975
 ] 

Andrew Purtell commented on HBASE-7621:
---

Repeating my above comment

RemoteHTable was contributed primarily to aid unit tests. We should fix these 
issues with it and/or move it into the tests package. I vote for moving it 
under src/test/. There are many HTTP client libraries of far fuller 
functionality than we would accomplish, not being in the business of writing 
HTTP clients... 






[jira] [Commented] (HBASE-16414) Improve performance for RPC encryption with Apache Common Crypto

2016-08-15 Thread Colin Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421985#comment-15421985
 ] 

Colin Ma commented on HBASE-16414:
--

Hi, [~tedyu], the link to the review board has been added ([review 
board|https://reviews.apache.org/r/51089/]); thanks for reviewing the patch.
I'll update the patch to address the problems reported by Hadoop QA.

> Improve performance for RPC encryption with Apache Common Crypto
> 
>
> Key: HBASE-16414
> URL: https://issues.apache.org/jira/browse/HBASE-16414
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC
>Affects Versions: 2.0.0
>Reporter: Colin Ma
>Assignee: Colin Ma
> Attachments: HBASE-16414.001.patch, HbaseRpcEncryptionWithCrypoto.docx
>
>
> HBase RPC encryption is enabled by setting “hbase.rpc.protection” to 
> "privacy". With token authentication, it uses the DIGEST-MD5 mechanism for 
> secure authentication and data protection. DIGEST-MD5 encrypts with DES, 
> 3DES or RC4, all of which are very slow, especially for Scan; this becomes 
> the bottleneck of RPC throughput.
> Apache Commons Crypto is a cryptographic library optimized with AES-NI. It 
> provides a Java API at both the cipher level and the stream level, so 
> developers can implement high-performance AES encryption/decryption with 
> minimal code and effort. Compared with the current implementation, 
> org.apache.hadoop.hbase.io.crypto.aes.AES, Crypto supports both the JCE 
> Cipher and the OpenSSL Cipher, and the latter performs better. Users can 
> configure the cipher type; the default is the JCE Cipher.
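For context, both backends are driven through the standard JCE Cipher API. A minimal, self-contained AES/CTR roundtrip sketch (the all-zero demo key and IV are for illustration only; real code must use random ones, and this is not HBase's actual wiring):

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class JceAesDemo {
  public static void main(String[] args) throws Exception {
    // Fixed key/IV for the demo only; never reuse a CTR key/IV pair in practice.
    byte[] key = new byte[16];  // AES-128
    byte[] iv = new byte[16];
    SecretKeySpec keySpec = new SecretKeySpec(key, "AES");
    IvParameterSpec ivSpec = new IvParameterSpec(iv);

    // Encrypt with AES in counter mode (a stream mode, so no padding).
    Cipher enc = Cipher.getInstance("AES/CTR/NoPadding");
    enc.init(Cipher.ENCRYPT_MODE, keySpec, ivSpec);
    byte[] plain = "scan result payload".getBytes(StandardCharsets.UTF_8);
    byte[] cipherText = enc.doFinal(plain);

    // Decrypt with the same key/IV and verify the roundtrip.
    Cipher dec = Cipher.getInstance("AES/CTR/NoPadding");
    dec.init(Cipher.DECRYPT_MODE, keySpec, ivSpec);
    byte[] roundTrip = dec.doFinal(cipherText);

    System.out.println(Arrays.equals(plain, roundTrip));  // true
  }
}
```

Commons Crypto exposes the same transformation names, so the OpenSSL-backed cipher can be swapped in without changing the calling code.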





[jira] [Commented] (HBASE-16395) ShortCircuitLocalReads Failed when enabled replication

2016-08-15 Thread yang ming (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421989#comment-15421989
 ] 

yang ming commented on HBASE-16395:
---

!attached-

> ShortCircuitLocalReads Failed when enabled replication
> --
>
> Key: HBASE-16395
> URL: https://issues.apache.org/jira/browse/HBASE-16395
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 0.94.20
>Reporter: yang ming
>Priority: Critical
>
> I had sent an email to u...@hbase.apache.org, but received no help.
> The cluster has shortCircuitLocalReads enabled:
> <property>
> <name>dfs.client.read.shortcircuit</name>
> <value>true</value>
> </property>
> When replication was enabled, we found a large number of error logs:
> 1. shortCircuitLocalReads (fails every time).
> 2. Try reading via the datanode on targetAddr (succeeds).
> How can we make shortCircuitLocalReads succeed when replication is enabled?
> 2016-08-03 10:46:21,721 DEBUG 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Opening 
> log for replication dn7%2C60020%2C1470136216957.1470192327030 at 16999670
> 2016-08-03 10:46:21,723 WARN org.apache.hadoop.hdfs.DFSClient: 
> BlockReaderLocal requested with incorrect offset: Offset 0 and length 
> 17073479 don't match block blk_4137524355009640437_53760530 ( blockLen 
> 16999670 )
> 2016-08-03 10:46:21,723 WARN org.apache.hadoop.hdfs.DFSClient: 
> BlockReaderLocal: Removing blk_4137524355009640437_53760530 from cache 
> because local file 
> /sdd/hdfs/dfs/data/blocksBeingWritten/blk_4137524355009640437 could not be 
> opened.
> 2016-08-03 10:46:21,724 INFO org.apache.hadoop.hdfs.DFSClient: Failed to read 
> block blk_4137524355009640437_53760530 on local machinejava.io.IOException: 
> Offset 0 and length 17073479 don't match block 
> blk_4137524355009640437_53760530 ( blockLen 16999670 )
> at org.apache.hadoop.hdfs.BlockReaderLocal.(BlockReaderLocal.java:287)
> at 
> org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:171)
> at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:358)
> at org.apache.hadoop.hdfs.DFSClient.access$800(DFSClient.java:74)
> at 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:2073)
> at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2224)
> at java.io.DataInputStream.read(DataInputStream.java:149)
> at java.io.DataInputStream.readFully(DataInputStream.java:195)
> at java.io.DataInputStream.readFully(DataInputStream.java:169)
> at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1508)
> at org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1486)
> at org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1475)
> at org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1470)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.(SequenceFileLogReader.java:55)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:178)
> at org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:734)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationHLogReaderManager.openReader(ReplicationHLogReaderManager.java:69)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:574)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:364)
> 2016-08-03 10:46:21,724 INFO org.apache.hadoop.hdfs.DFSClient: Try reading 
> via the datanode on /192.168.7.139:50010





[jira] [Commented] (HBASE-16395) ShortCircuitLocalReads Failed when enabled replication

2016-08-15 Thread yang ming (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421996#comment-15421996
 ] 

yang ming commented on HBASE-16395:
---

!https://issues.apache.org/jira/secure/attachment/12823800/image.png!






[jira] [Commented] (HBASE-16395) ShortCircuitLocalReads Failed when enabled replication

2016-08-15 Thread Dima Spivak (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15422004#comment-15422004
 ] 

Dima Spivak commented on HBASE-16395:
-

Did you subscribe to the mailing list? As you can see, I saw and replied to 
your original message...






[jira] [Commented] (HBASE-16341) Missing bit on "Regression: Random Read/WorkloadC slower in 1.x than 0.98"

2016-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421959#comment-15421959
 ] 

Hudson commented on HBASE-16341:


FAILURE: Integrated in Jenkins build HBase-1.3 #817 (See 
[https://builds.apache.org/job/HBase-1.3/817/])
HBASE-16341 Missing bit on "Regression: Random Read/WorkloadC slower in (stack: 
rev f320166142c00b8bb60574d3ba7b586c07c6780c)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcScheduler.java


> Missing bit on "Regression: Random Read/WorkloadC slower in 1.x than 0.98"
> --
>
> Key: HBASE-16341
> URL: https://issues.apache.org/jira/browse/HBASE-16341
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-16341.master.001.patch, HBASE-16341.patch, 
> HBASE-16341.patch
>
>
> [~larsgeorge] found a missing bit in HBASE-15971 "Regression: Random 
> Read/WorkloadC slower in 1.x than 0.98" Let me fix here. Let me quote the man:
> {code}
> BTW, in constructor we do this
> ```String callQueueType = conf.get(CALL_QUEUE_TYPE_CONF_KEY,
> CALL_QUEUE_TYPE_FIFO_CONF_VALUE);
> ```
> (edited)
> [8:19]  
> but in `onConfigurationChange()` we do
> ```String callQueueType = conf.get(CALL_QUEUE_TYPE_CONF_KEY,
>   CALL_QUEUE_TYPE_DEADLINE_CONF_VALUE);
> ```
> (edited)
> {code}
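The quoted mismatch can be reproduced in miniature: two call sites reading the same key with different fallback defaults silently disagree whenever the key is unset, so a config reload changes behavior even though nothing changed. A self-contained stand-in (names are illustrative, not the actual SimpleRpcScheduler code):

```java
import java.util.HashMap;
import java.util.Map;

public class DefaultMismatchDemo {
  // Stand-in for the config key both call sites read.
  static final String KEY = "callqueue.type";

  static String atConstruction(Map<String, String> conf) {
    return conf.getOrDefault(KEY, "fifo");      // default used in the constructor
  }

  static String onConfigurationChange(Map<String, String> conf) {
    return conf.getOrDefault(KEY, "deadline");  // divergent default used on reload
  }

  public static void main(String[] args) {
    // The operator never set the key, so both reads should agree -- they don't.
    Map<String, String> conf = new HashMap<>();
    System.out.println(atConstruction(conf));         // fifo
    System.out.println(onConfigurationChange(conf));  // deadline
  }
}
```

The fix is simply to use the same default constant at both call sites.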







[jira] [Updated] (HBASE-16395) ShortCircuitLocalReads Failed when enabled replication

2016-08-15 Thread yang ming (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yang ming updated HBASE-16395:
--
Attachment: 63D4.tm.png






[jira] [Comment Edited] (HBASE-16395) ShortCircuitLocalReads Failed when enabled replication

2016-08-15 Thread yang ming (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15422008#comment-15422008
 ] 

yang ming edited comment on HBASE-16395 at 8/16/16 1:35 AM:


I have not received 
!https://issues.apache.org/jira/secure/attachment/12823808/63D4.tm.png!


was (Author: yangming860101):
!https://issues.apache.org/jira/secure/attachment/12823808/63D4.tm.png!

> ShortCircuitLocalReads Failed when enabled replication
> --
>
> Key: HBASE-16395
> URL: https://issues.apache.org/jira/browse/HBASE-16395
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 0.94.20
>Reporter: yang ming
>Priority: Critical
> Attachments: 63D4.tm.png, image.png
>
>
> I had sended an email to u...@hbase.apache.org,but received no help.
> The cluster enabled shortCircuitLocalReads.
> 
> dfs.client.read.shortcircuit
> true
> 
> When enabled replication,we found a large number of error logs.
> 1.shortCircuitLocalReads(fail everytime).
> 2.Try reading via the datanode on targetAddr(success).
> How to make shortCircuitLocalReads successfully when enabled replication?
> 2016-08-03 10:46:21,721 DEBUG 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Opening 
> log for replication dn7%2C60020%2C1470136216957.1470192327030 at 16999670
> 2016-08-03 10:46:21,723 WARN org.apache.hadoop.hdfs.DFSClient: 
> BlockReaderLocal requested with incorrect offset: Offset 0 and length 
> 17073479 don't match block blk_4137524355009640437_53760530 ( blockLen 
> 16999670 )
> 2016-08-03 10:46:21,723 WARN org.apache.hadoop.hdfs.DFSClient: 
> BlockReaderLocal: Removing blk_4137524355009640437_53760530 from cache 
> because local file 
> /sdd/hdfs/dfs/data/blocksBeingWritten/blk_4137524355009640437 could not be 
> opened.
> 2016-08-03 10:46:21,724 INFO org.apache.hadoop.hdfs.DFSClient: Failed to read 
> block blk_4137524355009640437_53760530 on local machinejava.io.IOException: 
> Offset 0 and length 17073479 don't match block 
> blk_4137524355009640437_53760530 ( blockLen 16999670 )
> at org.apache.hadoop.hdfs.BlockReaderLocal.(BlockReaderLocal.java:287)
> at 
> org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:171)
> at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:358)
> at org.apache.hadoop.hdfs.DFSClient.access$800(DFSClient.java:74)
> at 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:2073)
> at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2224)
> at java.io.DataInputStream.read(DataInputStream.java:149)
> at java.io.DataInputStream.readFully(DataInputStream.java:195)
> at java.io.DataInputStream.readFully(DataInputStream.java:169)
> at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1508)
> at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1486)
> at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1475)
> at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1470)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<init>(SequenceFileLogReader.java:55)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:178)
> at org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:734)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationHLogReaderManager.openReader(ReplicationHLogReaderManager.java:69)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:574)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:364)
> 2016-08-03 10:46:21,724 INFO org.apache.hadoop.hdfs.DFSClient: Try reading 
> via the datanode on /192.168.7.139:50010



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-16395) ShortCircuitLocalReads Failed when enabled replication

2016-08-15 Thread yang ming (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15422008#comment-15422008
 ] 

yang ming edited comment on HBASE-16395 at 8/16/16 1:35 AM:


*I have not received a confirmation request by email*
!https://issues.apache.org/jira/secure/attachment/12823808/63D4.tm.png!


was (Author: yangming860101):
I have not received 
!https://issues.apache.org/jira/secure/attachment/12823808/63D4.tm.png!

> ShortCircuitLocalReads Failed when enabled replication
> --
>
> Key: HBASE-16395
> URL: https://issues.apache.org/jira/browse/HBASE-16395
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 0.94.20
>Reporter: yang ming
>Priority: Critical
> Attachments: 63D4.tm.png, image.png
>
>
> I had sent an email to u...@hbase.apache.org, but received no help.
> The cluster has shortCircuitLocalReads enabled:
> <property>
>   <name>dfs.client.read.shortcircuit</name>
>   <value>true</value>
> </property>
> After enabling replication, we found a large number of error logs:
> 1. shortCircuitLocalReads (fails every time).
> 2. Try reading via the datanode on targetAddr (succeeds).
> How can shortCircuitLocalReads succeed when replication is enabled?
> 2016-08-03 10:46:21,721 DEBUG 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Opening 
> log for replication dn7%2C60020%2C1470136216957.1470192327030 at 16999670
> 2016-08-03 10:46:21,723 WARN org.apache.hadoop.hdfs.DFSClient: 
> BlockReaderLocal requested with incorrect offset: Offset 0 and length 
> 17073479 don't match block blk_4137524355009640437_53760530 ( blockLen 
> 16999670 )
> 2016-08-03 10:46:21,723 WARN org.apache.hadoop.hdfs.DFSClient: 
> BlockReaderLocal: Removing blk_4137524355009640437_53760530 from cache 
> because local file 
> /sdd/hdfs/dfs/data/blocksBeingWritten/blk_4137524355009640437 could not be 
> opened.
> 2016-08-03 10:46:21,724 INFO org.apache.hadoop.hdfs.DFSClient: Failed to read 
> block blk_4137524355009640437_53760530 on local machine java.io.IOException: 
> Offset 0 and length 17073479 don't match block 
> blk_4137524355009640437_53760530 ( blockLen 16999670 )
> at org.apache.hadoop.hdfs.BlockReaderLocal.<init>(BlockReaderLocal.java:287)
> at 
> org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:171)
> at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:358)
> at org.apache.hadoop.hdfs.DFSClient.access$800(DFSClient.java:74)
> at 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:2073)
> at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2224)
> at java.io.DataInputStream.read(DataInputStream.java:149)
> at java.io.DataInputStream.readFully(DataInputStream.java:195)
> at java.io.DataInputStream.readFully(DataInputStream.java:169)
> at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1508)
> at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1486)
> at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1475)
> at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1470)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<init>(SequenceFileLogReader.java:55)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:178)
> at org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:734)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationHLogReaderManager.openReader(ReplicationHLogReaderManager.java:69)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:574)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:364)
> 2016-08-03 10:46:21,724 INFO org.apache.hadoop.hdfs.DFSClient: Try reading 
> via the datanode on /192.168.7.139:50010



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16414) Improve performance for RPC encryption with Apache Common Crypto

2016-08-15 Thread Colin Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15422040#comment-15422040
 ] 

Colin Ma commented on HBASE-16414:
--

Hi [~ghelmling], can you help review the design? I think it will bring a large 
performance improvement for security + DIGEST-MD5 + RPC protection, 
especially for Scan.

> Improve performance for RPC encryption with Apache Common Crypto
> 
>
> Key: HBASE-16414
> URL: https://issues.apache.org/jira/browse/HBASE-16414
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC
>Affects Versions: 2.0.0
>Reporter: Colin Ma
>Assignee: Colin Ma
> Attachments: HBASE-16414.001.patch, HbaseRpcEncryptionWithCrypoto.docx
>
>
> HBase RPC encryption is enabled by setting “hbase.rpc.protection” to 
> "privacy". With token authentication, it uses the DIGEST-MD5 mechanism for 
> secure authentication and data protection. DIGEST-MD5 encrypts with DES, 
> 3DES or RC4, which is very slow, especially for Scan; this becomes the 
> bottleneck of RPC throughput.
> Apache Commons Crypto is a cryptographic library optimized with AES-NI. It 
> provides Java APIs at both the cipher level and the Java stream level, so 
> developers can implement high-performance AES encryption/decryption with 
> minimal code and effort. Compared with the current implementation of 
> org.apache.hadoop.hbase.io.crypto.aes.AES, Crypto supports both the JCE 
> cipher and the OpenSSL cipher, the latter offering better performance. Users 
> can configure the cipher type; the default is the JCE cipher.
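The cipher-level API described above mirrors javax.crypto, so the encrypt/decrypt flow can be sketched with the JDK alone. This is a hypothetical standalone demo of AES/CTR on an RPC payload, not the patch's code; the class and method names are made up for illustration:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class AesCtrDemo {
    // One-shot AES/CTR; mode is Cipher.ENCRYPT_MODE or Cipher.DECRYPT_MODE.
    static byte[] aesCtr(int mode, byte[] key, byte[] iv, byte[] data) throws Exception {
        Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
        c.init(mode, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        return c.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = new byte[16];  // demo key; real code derives it from the negotiated session
        byte[] iv  = new byte[16];  // demo IV; must be unique per stream in practice
        byte[] plain = "scan result payload".getBytes(StandardCharsets.UTF_8);
        byte[] ct = aesCtr(Cipher.ENCRYPT_MODE, key, iv, plain);
        byte[] pt = aesCtr(Cipher.DECRYPT_MODE, key, iv, ct);
        System.out.println(new String(pt, StandardCharsets.UTF_8));
    }
}
```

With Commons Crypto, the same transformation string would go to its cipher factory, which can back it with OpenSSL instead of the JCE provider.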



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16094) Procedure v2 - Improve cleaning up of proc wals

2016-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15422044#comment-15422044
 ] 

Hadoop QA commented on HBASE-16094:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
13s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} master passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
12s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
28s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} master passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 14s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 54s {color} 
| {color:red} hbase-procedure in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
9s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m 47s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.procedure2.store.wal.TestWALProcedureStore |
|   | hadoop.hbase.procedure2.TestProcedureRecovery |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:date2016-08-16 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12823798/HBASE-16094.master.002.patch
 |
| JIRA Issue | HBASE-16094 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux c6ba8356f7e0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HBASE-16418) Reduce duration of sleep waiting for region reopen in IntegrationTestBulkLoad#installSlowingCoproc()

2016-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15422075#comment-15422075
 ] 

Hudson commented on HBASE-16418:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #1422 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1422/])
HBASE-16418 Reduce duration of sleep waiting for region reopen in (tedyu: rev 
d5080e82fb47b5499b72fbafbbc52f4f432622d3)
* (edit) 
hbase-it/src/test/java/org/apache/hadoop/hbase/mapreduce/IntegrationTestBulkLoad.java


> Reduce duration of sleep waiting for region reopen in 
> IntegrationTestBulkLoad#installSlowingCoproc()
> 
>
> Key: HBASE-16418
> URL: https://issues.apache.org/jira/browse/HBASE-16418
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 16418.v1.txt
>
>
> Currently we have the following code:
> {code}
> desc.addCoprocessor(SlowMeCoproScanOperations.class.getName());
> HBaseTestingUtility.modifyTableSync(admin, desc);
> //sleep for sometime. Hope is that the regions are closed/opened before
> //the sleep returns. TODO: do this better
> Thread.sleep(3);
> {code}
> Instead of sleeping for fixed duration, we should detect when the regions 
> have reopened with custom Coprocessor.
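The idea of replacing a fixed sleep with bounded polling can be sketched as follows. This is a self-contained illustration of the pattern, not the committed patch; the predicate in main() is a stand-in for a real region-state check (e.g. via Admin):

```java
import java.util.function.BooleanSupplier;

public class WaitFor {
    // Poll until the condition holds or the timeout elapses; returns whether it held.
    static boolean waitFor(BooleanSupplier condition, long timeoutMs, long pollMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                return false;        // gave up; the caller decides how to fail
            }
            Thread.sleep(pollMs);    // short poll instead of one long blind sleep
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Stand-in for "all regions reopened": becomes true after ~50 ms.
        boolean ok = waitFor(() -> System.currentTimeMillis() - start >= 50, 5_000, 10);
        System.out.println(ok);
    }
}
```

The test returns as soon as the condition holds, instead of always paying the worst-case wait.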



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12721) Create Docker container cluster infrastructure to enable better testing

2016-08-15 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15422083#comment-15422083
 ] 

Sean Busbey commented on HBASE-12721:
-

I think it got missed because it isn't in "Patch Available" status.

I'm +1 and planning to push as soon as I have my local repo ready for patches.

> Create Docker container cluster infrastructure to enable better testing
> ---
>
> Key: HBASE-12721
> URL: https://issues.apache.org/jira/browse/HBASE-12721
> Project: HBase
>  Issue Type: New Feature
>  Components: build, community, documentation, test
>Reporter: Dima Spivak
>Assignee: Dima Spivak
>
> Some simple work on using HBase with Docker was committed into /dev-support 
> as "hbase_docker;" all this did was stand up a standalone cluster from source 
> and start a shell. Now seems like a good time to extend this to be useful for 
> applications that could actually benefit the community, especially around 
> testing. Some ideas:
> - Integration testing would be much more accessible if people could stand up 
> distributed HBase clusters on a single host machine in a couple minutes and 
> run our awesome hbase-it suite against it.
> - Binary compatibility testing of an HBase client is easiest when standing up 
> an HBase cluster can be done once and then different client source/binary 
> permutations run against it.
> - Upgrade testing, and especially rolling upgrade testing, doesn't have any 
> upstream automation on build.apache.org, in part because it's a pain to set 
> up x-node clusters on Apache infrastructure.
> This proposal, whether it stays under /dev-support or moves out into its own 
> top-level module ("hbase-docker" would conveniently fit the existing schema 
> :-)), strives to create a simple framework for deploying "distributed," 
> multi-container Apache HBase clusters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16418) Reduce duration of sleep waiting for region reopen in IntegrationTestBulkLoad#installSlowingCoproc()

2016-08-15 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16418:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> Reduce duration of sleep waiting for region reopen in 
> IntegrationTestBulkLoad#installSlowingCoproc()
> 
>
> Key: HBASE-16418
> URL: https://issues.apache.org/jira/browse/HBASE-16418
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 16418.v1.txt
>
>
> Currently we have the following code:
> {code}
> desc.addCoprocessor(SlowMeCoproScanOperations.class.getName());
> HBaseTestingUtility.modifyTableSync(admin, desc);
> //sleep for sometime. Hope is that the regions are closed/opened before
> //the sleep returns. TODO: do this better
> Thread.sleep(3);
> {code}
> Instead of sleeping for fixed duration, we should detect when the regions 
> have reopened with custom Coprocessor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16148) Hybrid Logical Clocks(placeholder for running tests)

2016-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15422104#comment-15422104
 ] 

Hadoop QA commented on HBASE-16148:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 15 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
7s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s 
{color} | {color:green} master passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
4s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
39s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
39s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s 
{color} | {color:green} master passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 34s {color} 
| {color:red} hbase-server-jdk1.7.0_101 with JDK v1.7.0_101 generated 2 new + 4 
unchanged - 2 fixed = 6 total (was 6) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 34s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 43s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 0s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 99m 31s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
50s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 157m 27s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 

[jira] [Updated] (HBASE-15984) Given failure to parse a given WAL that was closed cleanly, replay the WAL.

2016-08-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15984:
--
Fix Version/s: (was: 1.2.3)
   1.2.4

> Given failure to parse a given WAL that was closed cleanly, replay the WAL.
> ---
>
> Key: HBASE-15984
> URL: https://issues.apache.org/jira/browse/HBASE-15984
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0, 1.0.4, 1.4.0, 1.3.1, 0.98.22, 1.1.7, 1.2.4
>
> Attachments: HBASE-15984.1.patch
>
>
> Subtask for a general workaround for "underlying reader failed / is in a bad 
> state", just for the case where a WAL 1) was closed cleanly and 2) we can tell 
> that our current offset ought not to be the end of parseable entries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

