[jira] [Commented] (HBASE-15340) Partial row result of scan may return data violates the row-level transaction

2016-02-26 Thread Jianwei Cui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168630#comment-15168630
 ] 

Jianwei Cui commented on HBASE-15340:
-

A direct solution is to make ClientScanner record the readPoint when the 
scanner for a region is first opened; any scanner reopened for the same region 
after a RegionMovedException would then reuse that readPoint. Any suggestions? 
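
A rough sketch of that idea with hypothetical names (this is not the actual 
ClientScanner API), just to illustrate recording the readPoint on the first 
open and reusing it on reopen:
{code}
import java.util.HashMap;
import java.util.Map;

// Illustrative only: remember the readPoint used when a region's scanner is
// first opened, and reuse it when the scanner is reopened after a
// RegionMovedException, so both halves of the row are read at the same mvcc.
class RegionReadPointTracker {
  private final Map<String, Long> readPointByRegion = new HashMap<>();

  /** Returns the readPoint the (re)opened scanner for this region should use. */
  long readPointForOpen(String encodedRegionName, long serverAssignedReadPoint) {
    Long recorded = readPointByRegion.get(encodedRegionName);
    if (recorded == null) {
      // first open of this region: record the server-assigned readPoint
      readPointByRegion.put(encodedRegionName, serverAssignedReadPoint);
      return serverAssignedReadPoint;
    }
    // reopen after a region move: stick with the originally recorded readPoint
    return recorded;
  }
}
{code}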

> Partial row result of scan may return data violates the row-level transaction 
> --
>
> Key: HBASE-15340
> URL: https://issues.apache.org/jira/browse/HBASE-15340
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners, Transactions/MVCC
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
>
> There are cases where the region server will return a partial row result, such
> as when the client sets a batch for the scan or the configured size limit is
> reached. In these situations, the client may return data to the application
> that violates the row-level transaction. The following steps show the problem:
> {code}
> // assume there is a test table 'test_table' with one family 'F' and one
> // region 'region'.
> // meanwhile there are two region servers 'rsA' and 'rsB'.
> 1. Let 'region' first be located on 'rsA' and put one row with two columns
> 'c1' and 'c2' as:
> > put 'test_table', 'row', 'F:c1', 'value1', 'F:c2', 'value1'
> 2. Start a client to scan 'test_table' with scan.setBatch(1) and
> scan.setCaching(1). The client will get one column as {column='F:c1',
> value='value1'} in the first rpc call after the scanner is created, and the
> result will be returned to the application.
> 3. Before the client issues the next request, 'region' is moved to 'rsB',
> which accepts another mutation for the two columns 'c1' and 'c2' as:
> > put 'test_table', 'row', 'F:c1', 'value2', 'F:c2', 'value2'
> 4. Then, the client will receive a RegionMovedException when issuing the next
> request and will retry opening the scanner on 'rsB'. The newly opened scanner
> will have a higher mvcc than the old data, so it can read out the column
> {column='F:c2', value='value2'} and return the result to the application.
>    Therefore, the application will get data as:
> 'row'    column='F:c1'   value='value1'
> 'row'    column='F:c2'   value='value2'
>    The returned data is combined from two different mutations and violates
> the row-level transaction.
> {code}
> The reason is that the scanner newly opened after the region move will get a
> different mvcc. I am not sure whether this result is by design for scans when
> partial row results are allowed. However, a row result combined from different
> transactions may leave the application in an unexpected state.
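
For reference, a minimal client-side sketch of the scan in step 2 above (the 
table name and settings come from the scenario; the rest is the standard client 
API):
{code}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class PartialRowScanExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Table table = conn.getTable(TableName.valueOf("test_table"))) {
      Scan scan = new Scan();
      scan.setBatch(1);    // at most one cell per Result, so a row is split
      scan.setCaching(1);  // one Result per rpc, so the row spans several rpcs
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result result : scanner) {
          // each Result here may hold only part of the row 'row'
          System.out.println(result);
        }
      }
    }
  }
}
{code}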



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14798) NPE reporting server load causes regionserver abort; causes TestAcidGuarantee to fail

2016-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168632#comment-15168632
 ] 

Hudson commented on HBASE-14798:


SUCCESS: Integrated in HBase-1.1-JDK7 #1669 (See 
[https://builds.apache.org/job/HBase-1.1-JDK7/1669/])
HBASE-14798 NPE reporting server load causes regionserver abort; causes 
(jerryjch: rev 6cb16e93dd1b48ee80c8b15115055eefdc03e571)
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DefaultStoreFileManager.java


> NPE reporting server load causes regionserver abort; causes TestAcidGuarantee 
> to fail
> -
>
> Key: HBASE-14798
> URL: https://issues.apache.org/jira/browse/HBASE-14798
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.4
>
> Attachments: 14798.patch, 14798.patch
>
>
> The below crashed out a RS. It caused TestAcidGuarantees to fail because there
> was then no RS to assign to...
> {code}
> 2015-11-11 11:36:23,092 ERROR 
> [B.defaultRpcServer.handler=4,queue=0,port=58655] 
> master.MasterRpcServices(388): Region server 
> asf907.gq1.ygridcore.net,55184,1447241756717 reported a fatal error:
> ABORTING region server asf907.gq1.ygridcore.net,55184,1447241756717: 
> Unhandled: null
> Cause:
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getOldestHfileTs(HRegion.java:1643)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionLoad(HRegionServer.java:1503)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.buildServerLoad(HRegionServer.java:1210)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:1153)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:969)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:156)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:108)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:140)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1637)
>   at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:307)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:138)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> Here is the failure: 
> https://builds.apache.org/view/H-L/view/HBase/job/HBase-Trunk_matrix/457/jdk=latest1.8,label=Hadoop/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.TestAcidGuarantees-output.txt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15311) Prevent NPE in BlockCacheViewTmpl

2016-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168633#comment-15168633
 ] 

Hudson commented on HBASE-15311:


SUCCESS: Integrated in HBase-1.1-JDK7 #1669 (See 
[https://builds.apache.org/job/HBase-1.1-JDK7/1669/])
HBASE-15311 Prevent NPE in BlockCacheViewTmpl. (stack: rev 
4743fde0a08ee926ca74139c49fdfcc61cb0f81b)
* 
hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/regionserver/BlockCacheViewTmpl.jamon


> Prevent NPE in BlockCacheViewTmpl
> -
>
> Key: HBASE-15311
> URL: https://issues.apache.org/jira/browse/HBASE-15311
> Project: HBase
>  Issue Type: Sub-task
>  Components: UI
>Affects Versions: 2.0.0
>Reporter: Samir Ahmic
>Assignee: Samir Ahmic
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.4
>
> Attachments: HBASE-15311_v0.patch
>
>
> Currently we have this URL (rs-status?format=json&bcn=L1) for displaying
> block cache stats in json format. If an arbitrary parameter is supplied
> instead of L1, it will cause an NPE and we will display a 500 error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15338) Add a option to disable the data block cache for testing the performance of underlying file system

2016-02-26 Thread Liu Shaohui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Shaohui updated HBASE-15338:

Attachment: HBASE-15338-trunk-v3.diff

Making cache-on-read in CacheConfig configurable, per 
[~jingcheng...@intel.com]'s and [~anoop.hbase]'s suggestions. Thanks
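
For example, in a performance-test run the new switch could be used roughly as 
in the sketch below. CACHE_DATA_ON_READ_KEY and DEFAULT_CACHE_DATA_ON_READ are 
the constant names visible in the patch diff quoted later in this thread; 
whether they are public members of CacheConfig, and the key's string value, 
are assumptions here:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;

public class DisableDataBlockCacheExample {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // assumed public constant from the v3 patch: turn off caching of DATA
    // blocks on read; index/bloom (and meta) blocks should stay cached
    conf.setBoolean(CacheConfig.CACHE_DATA_ON_READ_KEY, false);
    CacheConfig cacheConf = new CacheConfig(conf);
    System.out.println("cache data on read: " + cacheConf.shouldCacheDataOnRead());
  }
}
{code}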

> Add a option to disable the data block cache for testing the performance of 
> underlying file system
> --
>
> Key: HBASE-15338
> URL: https://issues.apache.org/jira/browse/HBASE-15338
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15338-trunk-v1.diff, HBASE-15338-trunk-v2.diff, 
> HBASE-15338-trunk-v3.diff
>
>
> When testing and comparing the performance of different file systems (HDFS,
> Azure blob storage, AWS S3 and so on) for HBase, it's better to avoid the
> effect of the HBase BlockCache and measure the actual random read latency when
> a data block is read from the underlying file system. (Usually, the index
> block and meta block should be cached in memory during the testing.)
> So we add an option in CacheConfig to disable the data block cache.
> Suggestions are welcome~ Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15340) Partial row result of scan may return data violates the row-level transaction

2016-02-26 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168642#comment-15168642
 ] 

ramkrishna.s.vasudevan commented on HBASE-15340:


Is this the same as https://issues.apache.org/jira/browse/HBASE-15325?  That 
issue also talks about partial row results when the region moves.

> Partial row result of scan may return data violates the row-level transaction 
> --
>
> Key: HBASE-15340
> URL: https://issues.apache.org/jira/browse/HBASE-15340
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners, Transactions/MVCC
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
>
> There are cases where the region server will return a partial row result, such
> as when the client sets a batch for the scan or the configured size limit is
> reached. In these situations, the client may return data to the application
> that violates the row-level transaction. The following steps show the problem:
> {code}
> // assume there is a test table 'test_table' with one family 'F' and one
> // region 'region'.
> // meanwhile there are two region servers 'rsA' and 'rsB'.
> 1. Let 'region' first be located on 'rsA' and put one row with two columns
> 'c1' and 'c2' as:
> > put 'test_table', 'row', 'F:c1', 'value1', 'F:c2', 'value1'
> 2. Start a client to scan 'test_table' with scan.setBatch(1) and
> scan.setCaching(1). The client will get one column as {column='F:c1',
> value='value1'} in the first rpc call after the scanner is created, and the
> result will be returned to the application.
> 3. Before the client issues the next request, 'region' is moved to 'rsB',
> which accepts another mutation for the two columns 'c1' and 'c2' as:
> > put 'test_table', 'row', 'F:c1', 'value2', 'F:c2', 'value2'
> 4. Then, the client will receive a RegionMovedException when issuing the next
> request and will retry opening the scanner on 'rsB'. The newly opened scanner
> will have a higher mvcc than the old data, so it can read out the column
> {column='F:c2', value='value2'} and return the result to the application.
>    Therefore, the application will get data as:
> 'row'    column='F:c1'   value='value1'
> 'row'    column='F:c2'   value='value2'
>    The returned data is combined from two different mutations and violates
> the row-level transaction.
> {code}
> The reason is that the scanner newly opened after the region move will get a
> different mvcc. I am not sure whether this result is by design for scans when
> partial row results are allowed. However, a row result combined from different
> transactions may leave the application in an unexpected state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15327) Canary will always invoke admin.balancer() in each sniffing period when writeSniffing is enabled

2016-02-26 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168643#comment-15168643
 ] 

Liu Shaohui commented on HBASE-15327:
-

LGTM~ Thanks [~cuijianwei]

> Canary will always invoke admin.balancer() in each sniffing period when 
> writeSniffing is enabled
> 
>
> Key: HBASE-15327
> URL: https://issues.apache.org/jira/browse/HBASE-15327
> Project: HBase
>  Issue Type: Bug
>  Components: canary
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
>Priority: Minor
> Attachments: HBASE-15327-trunk.patch
>
>
> When Canary#writeSniffing is enabled, Canary#checkWriteTableDistribution will
> make sure the regions of the write table are distributed across all region
> servers, as:
> {code}
>   int numberOfServers = admin.getClusterStatus().getServers().size();
>   ..
>   int numberOfCoveredServers = serverSet.size();
>   if (numberOfCoveredServers < numberOfServers) {
> admin.balancer();
>   }
> {code}
> The master also works as a regionserver, so ClusterStatus#getServers will
> contain the master. On the other hand, the Canary write table will not be
> assigned to the master, making numberOfCoveredServers always smaller than
> numberOfServers, so admin.balancer is invoked in every sniffing period.
> This may cause frequent region moves. A simple fix is to exclude the master
> from numberOfServers.
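
For illustration, a minimal sketch of that fix (assuming ClusterStatus#getMaster() 
can be used to identify the master among the reported servers):
{code}
import java.io.IOException;
import org.apache.hadoop.hbase.ClusterStatus;
import org.apache.hadoop.hbase.client.Admin;

public final class CanaryServerCount {
  /** Number of region servers the write table should cover, not counting the master. */
  static int numberOfServersExcludingMaster(Admin admin) throws IOException {
    ClusterStatus status = admin.getClusterStatus();
    int numberOfServers = status.getServers().size();
    if (status.getServers().contains(status.getMaster())) {
      numberOfServers--; // the master also registers as a region server
    }
    return numberOfServers;
  }
}
{code}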



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15311) Prevent NPE in BlockCacheViewTmpl

2016-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168661#comment-15168661
 ] 

Hudson commented on HBASE-15311:


FAILURE: Integrated in HBase-1.1-JDK8 #1756 (See 
[https://builds.apache.org/job/HBase-1.1-JDK8/1756/])
HBASE-15311 Prevent NPE in BlockCacheViewTmpl. (stack: rev 
4743fde0a08ee926ca74139c49fdfcc61cb0f81b)
* 
hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/regionserver/BlockCacheViewTmpl.jamon


> Prevent NPE in BlockCacheViewTmpl
> -
>
> Key: HBASE-15311
> URL: https://issues.apache.org/jira/browse/HBASE-15311
> Project: HBase
>  Issue Type: Sub-task
>  Components: UI
>Affects Versions: 2.0.0
>Reporter: Samir Ahmic
>Assignee: Samir Ahmic
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.4
>
> Attachments: HBASE-15311_v0.patch
>
>
> Currently we have this URL (rs-status?format=json&bcn=L1) for displaying
> block cache stats in json format. If an arbitrary parameter is supplied
> instead of L1, it will cause an NPE and we will display a 500 error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14798) NPE reporting server load causes regionserver abort; causes TestAcidGuarantee to fail

2016-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168660#comment-15168660
 ] 

Hudson commented on HBASE-14798:


FAILURE: Integrated in HBase-1.1-JDK8 #1756 (See 
[https://builds.apache.org/job/HBase-1.1-JDK8/1756/])
HBASE-14798 NPE reporting server load causes regionserver abort; causes 
(jerryjch: rev 6cb16e93dd1b48ee80c8b15115055eefdc03e571)
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DefaultStoreFileManager.java


> NPE reporting server load causes regionserver abort; causes TestAcidGuarantee 
> to fail
> -
>
> Key: HBASE-14798
> URL: https://issues.apache.org/jira/browse/HBASE-14798
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.4
>
> Attachments: 14798.patch, 14798.patch
>
>
> The below crashed out a RS. It caused TestAcidGuarantees to fail because there
> was then no RS to assign to...
> {code}
> 2015-11-11 11:36:23,092 ERROR 
> [B.defaultRpcServer.handler=4,queue=0,port=58655] 
> master.MasterRpcServices(388): Region server 
> asf907.gq1.ygridcore.net,55184,1447241756717 reported a fatal error:
> ABORTING region server asf907.gq1.ygridcore.net,55184,1447241756717: 
> Unhandled: null
> Cause:
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getOldestHfileTs(HRegion.java:1643)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionLoad(HRegionServer.java:1503)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.buildServerLoad(HRegionServer.java:1210)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:1153)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:969)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:156)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:108)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:140)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1637)
>   at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:307)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:138)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> Here is the failure: 
> https://builds.apache.org/view/H-L/view/HBase/job/HBase-Trunk_matrix/457/jdk=latest1.8,label=Hadoop/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.TestAcidGuarantees-output.txt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15325) ResultScanner allowing partial result will miss the rest of the row if the region is moved between two rpc requests

2016-02-26 Thread Jianwei Cui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168664#comment-15168664
 ] 

Jianwei Cui commented on HBASE-15325:
-

When the user sets a batch for the scan, the client may also return a partial 
row result to the application and suffer this problem if the region moves. The 
reason is that the server judges whether the result is partial as:
{code}
  boolean partialResultFormed() {
return scannerState == NextState.SIZE_LIMIT_REACHED_MID_ROW
|| scannerState == NextState.TIME_LIMIT_REACHED_MID_ROW;
  }
{code}
The NextState.BATCH_LIMIT_REACHED state is not considered a partial result, so 
the ClientScanner won't get a partial result from the server and will go to the 
next row when retrying:
{code}
  if (!this.lastResult.isPartial()) {
    if (scan.isReversed()) {
      scan.setStartRow(createClosestRowBefore(lastResult.getRow()));
    } else {
      // <=== a partial result from the batch-limit-reached case will go to
      // the next row and miss the rest of the row's data
      scan.setStartRow(Bytes.add(lastResult.getRow(), new byte[1]));
    }
  } else {
    // we need to rescan this row because we only loaded a partial row before
    scan.setStartRow(lastResult.getRow());
  }
{code}
I think if the user sets a batch for the scan, it means the user allows partial 
results? We can set scan.allowPartialResults to true in this situation, and the 
server should also treat NextState.BATCH_LIMIT_REACHED as a partial result; 
then, after the patch is applied, the ClientScanner will receive a partial 
result and retry the same row if the region moved.
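
A minimal sketch of the server-side half of that suggestion, i.e. also treating 
the batch limit as a mid-row stop so the returned result is marked partial:
{code}
  boolean partialResultFormed() {
    return scannerState == NextState.SIZE_LIMIT_REACHED_MID_ROW
        || scannerState == NextState.TIME_LIMIT_REACHED_MID_ROW
        || scannerState == NextState.BATCH_LIMIT_REACHED;
  }
{code}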

> ResultScanner allowing partial result will miss the rest of the row if the 
> region is moved between two rpc requests
> ---
>
> Key: HBASE-15325
> URL: https://issues.apache.org/jira/browse/HBASE-15325
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
>Priority: Critical
> Attachments: 15325-test.txt, HBASE-15325-v1.txt
>
>
> HBASE-11544 allows a scan rpc to return part of a row, to reduce memory usage
> for one rpc request, and the client can setAllowPartial or setBatch to get
> several cells of a row instead of the whole row.
> However, the status of the scanner is saved on the server, and we need it to
> get the next part if there was a partial result before. If we move the region
> to another RS, the client will get a NotServingRegionException and open a new
> scanner on the new RS, which will be regarded as a new scan starting from the
> end of this row. So the remaining cells of the row from the last result will
> be missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14918) In-Memory MemStore Flush and Compaction

2016-02-26 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168669#comment-15168669
 ] 

ramkrishna.s.vasudevan commented on HBASE-14918:


Since BR is used heavily in the prefix-tree area, that is one reason why the 
Prefix-Tree read path still does not work completely with offheap. We have to 
rewrite the logic in prefix-tree, replacing BR with BBs.

> In-Memory MemStore Flush and Compaction
> ---
>
> Key: HBASE-14918
> URL: https://issues.apache.org/jira/browse/HBASE-14918
> Project: HBase
>  Issue Type: Umbrella
>Affects Versions: 2.0.0
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Fix For: 0.98.18
>
> Attachments: CellBlocksSegmentDesign.pdf, MSLABMove.patch
>
>
> A memstore serves as the in-memory component of a store unit, absorbing all 
> updates to the store. From time to time these updates are flushed to a file 
> on disk, where they are compacted (by eliminating redundancies) and 
> compressed (i.e., written in a compressed format to reduce their storage 
> size).
> We aim to speed up data access, and therefore suggest applying an in-memory
> memstore flush: flushing the active in-memory segment into an intermediate
> buffer where it can still be accessed by the application. Data in the buffer
> is subject to compaction and can be stored in any format that allows it to
> take up less space in RAM. The less space the buffer consumes, the longer it
> can reside in memory before data is flushed to disk, resulting in better
> performance.
> Specifically, the optimization is beneficial for workloads with
> medium-to-high key churn that incur many redundant cells, like persistent
> messaging.
> We suggest structuring the solution as 4 subtasks (respectively, patches):
> (1) Infrastructure - refactoring of the MemStore hierarchy, introducing the
> segment (StoreSegment) as a first-class citizen, and decoupling the memstore
> scanner from the memstore implementation;
> (2) Adding a StoreServices facility at the region level to allow memstores to
> update region counters and access the region-level synchronization mechanism;
> (3) Implementation of a new memstore (CompactingMemstore) with a non-optimized
> immutable segment representation; and
> (4) Memory optimization, including compressed format representation and
> off-heap allocations.
> This Jira continues the discussion in HBASE-13408.
> Design documents, evaluation results and previous patches can be found in 
> HBASE-13408. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15340) Partial row result of scan may return data violates the row-level transaction

2016-02-26 Thread Jianwei Cui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168678#comment-15168678
 ] 

Jianwei Cui commented on HBASE-15340:
-

[~ram_krish], this is a different problem caused by a region move during 
scanning, IMO. When 
[HBASE-15325|https://issues.apache.org/jira/browse/HBASE-15325] is resolved, no 
data will be missed; however, the returned data may be combined from different 
row-level transactions, which is unexpected for the application. I think we 
should also keep the READ_COMMITTED isolation level in this situation?

> Partial row result of scan may return data violates the row-level transaction 
> --
>
> Key: HBASE-15340
> URL: https://issues.apache.org/jira/browse/HBASE-15340
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners, Transactions/MVCC
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
>
> There are cases where the region server will return a partial row result, such
> as when the client sets a batch for the scan or the configured size limit is
> reached. In these situations, the client may return data to the application
> that violates the row-level transaction. The following steps show the problem:
> {code}
> // assume there is a test table 'test_table' with one family 'F' and one
> // region 'region'.
> // meanwhile there are two region servers 'rsA' and 'rsB'.
> 1. Let 'region' first be located on 'rsA' and put one row with two columns
> 'c1' and 'c2' as:
> > put 'test_table', 'row', 'F:c1', 'value1', 'F:c2', 'value1'
> 2. Start a client to scan 'test_table' with scan.setBatch(1) and
> scan.setCaching(1). The client will get one column as {column='F:c1',
> value='value1'} in the first rpc call after the scanner is created, and the
> result will be returned to the application.
> 3. Before the client issues the next request, 'region' is moved to 'rsB',
> which accepts another mutation for the two columns 'c1' and 'c2' as:
> > put 'test_table', 'row', 'F:c1', 'value2', 'F:c2', 'value2'
> 4. Then, the client will receive a RegionMovedException when issuing the next
> request and will retry opening the scanner on 'rsB'. The newly opened scanner
> will have a higher mvcc than the old data, so it can read out the column
> {column='F:c2', value='value2'} and return the result to the application.
>    Therefore, the application will get data as:
> 'row'    column='F:c1'   value='value1'
> 'row'    column='F:c2'   value='value2'
>    The returned data is combined from two different mutations and violates
> the row-level transaction.
> {code}
> The reason is that the scanner newly opened after the region move will get a
> different mvcc. I am not sure whether this result is by design for scans when
> partial row results are allowed. However, a row result combined from different
> transactions may leave the application in an unexpected state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15338) Add a option to disable the data block cache for testing the performance of underlying file system

2016-02-26 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168679#comment-15168679
 ] 

Jingcheng Du commented on HBASE-15338:
--

So the meta blocks don't need to be cached if cache-on-read is disabled? Is 
that done on purpose?
In the current implementation, the meta blocks are always cached.
I noticed you wanted the index and meta blocks to always be cached even if 
caching of the data blocks is disabled. Right? Maybe some changes are needed in 
CacheConfig.shouldCacheBlockOnRead(BlockCategory category) to allow the meta 
blocks to always be cached?
{code}
  public boolean shouldCacheBlockOnRead(BlockCategory category) {
    return isBlockCacheEnabled()
        && (cacheDataOnRead ||
            category == BlockCategory.INDEX ||
            category == BlockCategory.BLOOM ||
+           category == BlockCategory.META ||
            (prefetchOnOpen &&
             (category != BlockCategory.META &&
              category != BlockCategory.UNKNOWN)));
  }
{code}

> Add a option to disable the data block cache for testing the performance of 
> underlying file system
> --
>
> Key: HBASE-15338
> URL: https://issues.apache.org/jira/browse/HBASE-15338
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15338-trunk-v1.diff, HBASE-15338-trunk-v2.diff, 
> HBASE-15338-trunk-v3.diff
>
>
> When testing and comparing the performance of different file systems (HDFS,
> Azure blob storage, AWS S3 and so on) for HBase, it's better to avoid the
> effect of the HBase BlockCache and measure the actual random read latency when
> a data block is read from the underlying file system. (Usually, the index
> block and meta block should be cached in memory during the testing.)
> So we add an option in CacheConfig to disable the data block cache.
> Suggestions are welcome~ Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15338) Add a option to disable the data block cache for testing the performance of underlying file system

2016-02-26 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168682#comment-15168682
 ] 

Anoop Sam John commented on HBASE-15338:


Again, I may be missing something:
{code}
-family.isBlockCacheEnabled(),
+conf.getBoolean(CACHE_DATA_ON_READ_KEY, DEFAULT_CACHE_DATA_ON_READ)
+   && family.isBlockCacheEnabled(),
{code}
Why do we need this new config?  Why can we not rely on the HCD setting?
{code}
  /**
   * Returns whether the DATA blocks of this HFile should be cached on read or
   * not (we always cache the meta blocks, the INDEX and BLOOM blocks).
   * @return true if blocks should be cached on read, false if not
   */
  public boolean shouldCacheDataOnRead() {
    return isBlockCacheEnabled() && cacheDataOnRead;
  }
{code}
This may be the issue you are describing?  This is called from getMetaBlock().  
As per the comment, when we read meta blocks, we must cache them.  As we do not 
pass any type, it seems we may not do that..  That is a bug IMO.. So we had 
better correct that bug (any others?) and test your case with the HCD setting?

> Add a option to disable the data block cache for testing the performance of 
> underlying file system
> --
>
> Key: HBASE-15338
> URL: https://issues.apache.org/jira/browse/HBASE-15338
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15338-trunk-v1.diff, HBASE-15338-trunk-v2.diff, 
> HBASE-15338-trunk-v3.diff
>
>
> When testing and comparing the performance of different file systems (HDFS,
> Azure blob storage, AWS S3 and so on) for HBase, it's better to avoid the
> effect of the HBase BlockCache and measure the actual random read latency when
> a data block is read from the underlying file system. (Usually, the index
> block and meta block should be cached in memory during the testing.)
> So we add an option in CacheConfig to disable the data block cache.
> Suggestions are welcome~ Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-15338) Add a option to disable the data block cache for testing the performance of underlying file system

2016-02-26 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168682#comment-15168682
 ] 

Anoop Sam John edited comment on HBASE-15338 at 2/26/16 9:01 AM:
-

Again, I may be missing something:
{code}
-family.isBlockCacheEnabled(),
+conf.getBoolean(CACHE_DATA_ON_READ_KEY, DEFAULT_CACHE_DATA_ON_READ)
+   && family.isBlockCacheEnabled(),
{code}
Why do we need this new config?  Why can we not rely on the HCD setting?
{code}
  /**
   * Returns whether the DATA blocks of this HFile should be cached on read or
   * not (we always cache the meta blocks, the INDEX and BLOOM blocks).
   * @return true if blocks should be cached on read, false if not
   */
  public boolean shouldCacheDataOnRead() {
    return isBlockCacheEnabled() && cacheDataOnRead;
  }
{code}
This may be the issue you are describing?  This is called from getMetaBlock().  
As per the comment, when we read meta blocks, we must cache them.  As we do not 
pass any type, it seems we may not do that..  That is a bug IMO.. So we had 
better correct that bug (any others?) and test your case with the HCD setting?

And ya, as per Jingcheng's suggestion, do we need to consider the META block 
category as well? It is only considered with prefetch now.. Need to read the 
code more..


was (Author: anoop.hbase):
Again, I may be missing something:
{code}
-family.isBlockCacheEnabled(),
+conf.getBoolean(CACHE_DATA_ON_READ_KEY, DEFAULT_CACHE_DATA_ON_READ)
+   && family.isBlockCacheEnabled(),
{code}
Why do we need this new config?  Why can we not rely on the HCD setting?
{code}
  /**
   * Returns whether the DATA blocks of this HFile should be cached on read or
   * not (we always cache the meta blocks, the INDEX and BLOOM blocks).
   * @return true if blocks should be cached on read, false if not
   */
  public boolean shouldCacheDataOnRead() {
    return isBlockCacheEnabled() && cacheDataOnRead;
  }
{code}
This may be the issue you are describing?  This is called from getMetaBlock().  
As per the comment, when we read meta blocks, we must cache them.  As we do not 
pass any type, it seems we may not do that..  That is a bug IMO.. So we had 
better correct that bug (any others?) and test your case with the HCD setting?

> Add a option to disable the data block cache for testing the performance of 
> underlying file system
> --
>
> Key: HBASE-15338
> URL: https://issues.apache.org/jira/browse/HBASE-15338
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15338-trunk-v1.diff, HBASE-15338-trunk-v2.diff, 
> HBASE-15338-trunk-v3.diff
>
>
> When testing and comparing the performance of different file systems (HDFS,
> Azure blob storage, AWS S3 and so on) for HBase, it's better to avoid the
> effect of the HBase BlockCache and measure the actual random read latency when
> a data block is read from the underlying file system. (Usually, the index
> block and meta block should be cached in memory during the testing.)
> So we add an option in CacheConfig to disable the data block cache.
> Suggestions are welcome~ Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15338) Add a option to disable the data block cache for testing the performance of underlying file system

2016-02-26 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168687#comment-15168687
 ] 

Anoop Sam John commented on HBASE-15338:


When we set cache data on read to false in the HCD, with no other config 
changes from the default values, the requirement is that only DATA blocks 
should not be cached; INDEX, BLOOM etc. should still get cached..  If that is 
not happening, it may be a bug, and we need to address that rather than adding 
a new config again.

See the javadoc of HCD setter
{code}
  /**
   * @param blockCacheEnabled True if hfile DATA type blocks should be cached
   * (We always cache INDEX and BLOOM blocks; you cannot turn this off).
   * @return this (for chained invocation)
   */
  public HColumnDescriptor setBlockCacheEnabled(boolean blockCacheEnabled) {
    return setValue(BLOCKCACHE, Boolean.toString(blockCacheEnabled));
  }
{code}
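
For reference, a minimal sketch of using that per-family setting ('perf_test' 
and 'test_family' are just illustrative names):
{code}
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;

public class BlockCacheDisabledFamilyExample {
  public static void main(String[] args) {
    HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("perf_test"));
    HColumnDescriptor hcd = new HColumnDescriptor("test_family");
    // per the javadoc above: DATA blocks will not be cached on read,
    // while INDEX and BLOOM blocks are always cached
    hcd.setBlockCacheEnabled(false);
    htd.addFamily(hcd);
    System.out.println(htd);
  }
}
{code}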

> Add a option to disable the data block cache for testing the performance of 
> underlying file system
> --
>
> Key: HBASE-15338
> URL: https://issues.apache.org/jira/browse/HBASE-15338
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15338-trunk-v1.diff, HBASE-15338-trunk-v2.diff, 
> HBASE-15338-trunk-v3.diff
>
>
> When testing and comparing the performance of different file systems (HDFS,
> Azure blob storage, AWS S3 and so on) for HBase, it's better to avoid the
> effect of the HBase BlockCache and measure the actual random read latency when
> a data block is read from the underlying file system. (Usually, the index
> block and meta block should be cached in memory during the testing.)
> So we add an option in CacheConfig to disable the data block cache.
> Suggestions are welcome~ Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15341) 1.3 release umbrella

2016-02-26 Thread Mikhail Antonov (JIRA)
Mikhail Antonov created HBASE-15341:
---

 Summary: 1.3 release umbrella
 Key: HBASE-15341
 URL: https://issues.apache.org/jira/browse/HBASE-15341
 Project: HBase
  Issue Type: Umbrella
  Components: build
Affects Versions: 1.3.0
Reporter: Mikhail Antonov
Assignee: Mikhail Antonov
 Fix For: 1.3.0


Umbrella jira for 1.3 release.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15205) Do not find the replication scope for every WAL#append()

2016-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168696#comment-15168696
 ] 

Hadoop QA commented on HBASE-15205:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 29 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
9s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} master passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
16s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
20s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 6s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s 
{color} | {color:green} master passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 4m 42s 
{color} | {color:red} Patch generated 3 new checkstyle issues in hbase-server 
(total was 445, now 444). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
24m 6s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 91m 44s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m 10s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 239m 39s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | 
hadoop.hbase.regionserver.TestRegionServerMetrics |
|   | hadoop.hbase.TestStochasticBalancerJmxMetrics |
|   | hadoop.hbase.regionserver.TestHRegion |
|   | hadoop.hbase.client.TestBlockEvictionFromClient |
| JDK v1.8.0_72 Timed out junit tests | 
org.apache.hadoop.hbase.snapshot.TestMobFlushSnapshotFromClient |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.hbase.regionserver.TestRegionServerMetrics |
|   | hadoop.hbase.regionserver.Te

[jira] [Commented] (HBASE-15340) Partial row result of scan may return data violates the row-level transaction

2016-02-26 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168700#comment-15168700
 ] 

Anoop Sam John commented on HBASE-15340:


After seeing an issue around partial results during a region move yesterday, I 
was thinking about this.. And the solution you mentioned was the first to come 
to my mind as well :-)  Ya, in case the client recreates the scanner (because 
of an NSRE or a region move), the readPoint MVCC guarantee will get broken, as 
the new scanner will have a new readPoint.

> Partial row result of scan may return data violates the row-level transaction 
> --
>
> Key: HBASE-15340
> URL: https://issues.apache.org/jira/browse/HBASE-15340
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners, Transactions/MVCC
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
>
> There are cases where the region server will return a partial row result, such
> as when the client sets a batch for the scan or the configured size limit is
> reached. In these situations, the client may return data to the application
> that violates the row-level transaction. The following steps show the problem:
> {code}
> // assume there is a test table 'test_table' with one family 'F' and one
> // region 'region'.
> // meanwhile there are two region servers 'rsA' and 'rsB'.
> 1. Let 'region' first be located on 'rsA' and put one row with two columns
> 'c1' and 'c2' as:
> > put 'test_table', 'row', 'F:c1', 'value1', 'F:c2', 'value1'
> 2. Start a client to scan 'test_table' with scan.setBatch(1) and
> scan.setCaching(1). The client will get one column as {column='F:c1',
> value='value1'} in the first rpc call after the scanner is created, and the
> result will be returned to the application.
> 3. Before the client issues the next request, 'region' is moved to 'rsB',
> which accepts another mutation for the two columns 'c1' and 'c2' as:
> > put 'test_table', 'row', 'F:c1', 'value2', 'F:c2', 'value2'
> 4. Then, the client will receive a RegionMovedException when issuing the next
> request and will retry opening the scanner on 'rsB'. The newly opened scanner
> will have a higher mvcc than the old data, so it can read out the column
> {column='F:c2', value='value2'} and return the result to the application.
>    Therefore, the application will get data as:
> 'row'    column='F:c1'   value='value1'
> 'row'    column='F:c2'   value='value2'
>    The returned data is combined from two different mutations and violates
> the row-level transaction.
> {code}
> The reason is that the scanner newly opened after the region move will get a
> different mvcc. I am not sure whether this result is by design for scans when
> partial row results are allowed. However, a row result combined from different
> transactions may leave the application in an unexpected state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15340) Partial row result of scan may return data violates the row-level transaction

2016-02-26 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168702#comment-15168702
 ] 

Anoop Sam John commented on HBASE-15340:


And this is an issue in all versions of HBase, I think. We have had this issue 
from day one(?)

> Partial row result of scan may return data violates the row-level transaction 
> --
>
> Key: HBASE-15340
> URL: https://issues.apache.org/jira/browse/HBASE-15340
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners, Transactions/MVCC
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
>
> There are cases where the region server will return a partial row result, such
> as when the client sets a batch for the scan or the configured size limit is
> reached. In these situations, the client may return data to the application
> that violates the row-level transaction. The following steps show the problem:
> {code}
> // assume there is a test table 'test_table' with one family 'F' and one
> // region 'region'.
> // meanwhile there are two region servers 'rsA' and 'rsB'.
> 1. Let 'region' first be located on 'rsA' and put one row with two columns
> 'c1' and 'c2' as:
> > put 'test_table', 'row', 'F:c1', 'value1', 'F:c2', 'value1'
> 2. Start a client to scan 'test_table' with scan.setBatch(1) and
> scan.setCaching(1). The client will get one column as {column='F:c1',
> value='value1'} in the first rpc call after the scanner is created, and the
> result will be returned to the application.
> 3. Before the client issues the next request, 'region' is moved to 'rsB',
> which accepts another mutation for the two columns 'c1' and 'c2' as:
> > put 'test_table', 'row', 'F:c1', 'value2', 'F:c2', 'value2'
> 4. Then, the client will receive a RegionMovedException when issuing the next
> request and will retry opening the scanner on 'rsB'. The newly opened scanner
> will have a higher mvcc than the old data, so it can read out the column
> {column='F:c2', value='value2'} and return the result to the application.
>    Therefore, the application will get data as:
> 'row'    column='F:c1'   value='value1'
> 'row'    column='F:c2'   value='value2'
>    The returned data is combined from two different mutations and violates
> the row-level transaction.
> {code}
> The reason is that the scanner newly opened after the region move will get a
> different mvcc. I am not sure whether this result is by design for scans when
> partial row results are allowed. However, a row result combined from different
> transactions may leave the application in an unexpected state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15293) Handle TableNotFound and IllegalArgument exceptions in table.jsp.

2016-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168714#comment-15168714
 ] 

Hadoop QA commented on HBASE-15293:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s 
{color} | {color:green} master passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 7 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 45s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 104m 58s 
{color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_72. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 91m 59s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 229m 39s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | 
hadoop.hbase.regionserver.TestRegionServerMetrics |
| JDK v1.8.0_72 Timed out junit tests | 
org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.hbase.regionserver.TestRegionServerMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.9.1 Server=1.9.1 Image:yetus/hbase:date2016-02-26 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12790067/HBASE-15293_v1.patch |
| JIRA Issue | HBASE-15293 |
| Optional Tests |  asflicense  javac  javadoc  unit  |
| uname | Linux 2fad14b75fcf 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / bf4fcc3 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HBASE-Build/717/artifact/patchprocess/whitespace-tabs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/717/artifact/patchprocess/patch-unit-hbase-server-jdk1.8.0_72.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/717/artifact/patchprocess/patch-unit-hbase-server-jdk1.7.0_95.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/717/artifact/patchprocess/patch-unit-hbase-server-jdk1.8.0_72.txt
 
https://builds.apache.org/job/PreCommit-HBASE-Build/717/artifact/patchprocess/patch-unit-hbase-server-jdk1.7.0_95.txt
 |
| JDK v1.7.0_95  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/717/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Max memory used | 439MB |
| Powered by | Apache Yetus 0.1.0   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/717/console |


This message was automatically generated.



> Handle TableNotFound and IllegalArgume

[jira] [Commented] (HBASE-15340) Partial row result of scan may return data violates the row-level transaction

2016-02-26 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168727#comment-15168727
 ] 

ramkrishna.s.vasudevan commented on HBASE-15340:


bq. When HBASE-15325 is resolved, no data will be missed; however, the returned 
data may be combined from different row-level transactions, which is unexpected 
for the application.
Ya, got it now.

> Partial row result of scan may return data violates the row-level transaction 
> --
>
> Key: HBASE-15340
> URL: https://issues.apache.org/jira/browse/HBASE-15340
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners, Transactions/MVCC
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
>
> There are cases where the region server will return a partial row result, such
> as when the client sets a batch for the scan or the configured size limit is
> reached. In these situations, the client may return data to the application
> that violates the row-level transaction. The following steps show the problem:
> {code}
> // assume there is a test table 'test_table' with one family 'F' and one
> // region 'region'.
> // meanwhile there are two region servers 'rsA' and 'rsB'.
> 1. Let 'region' first be located on 'rsA' and put one row with two columns
> 'c1' and 'c2' as:
> > put 'test_table', 'row', 'F:c1', 'value1', 'F:c2', 'value1'
> 2. Start a client to scan 'test_table' with scan.setBatch(1) and
> scan.setCaching(1). The client will get one column as {column='F:c1',
> value='value1'} in the first rpc call after the scanner is created, and the
> result will be returned to the application.
> 3. Before the client issues the next request, 'region' is moved to 'rsB',
> which accepts another mutation for the two columns 'c1' and 'c2' as:
> > put 'test_table', 'row', 'F:c1', 'value2', 'F:c2', 'value2'
> 4. Then, the client will receive a RegionMovedException when issuing the next
> request and will retry opening the scanner on 'rsB'. The newly opened scanner
> will have a higher mvcc than the old data, so it can read out the column
> {column='F:c2', value='value2'} and return the result to the application.
>    Therefore, the application will get data as:
> 'row'    column='F:c1'   value='value1'
> 'row'    column='F:c2'   value='value2'
>    The returned data is combined from two different mutations and violates
> the row-level transaction.
> {code}
> The reason is that the scanner newly opened after the region move will get a
> different mvcc. I am not sure whether this result is by design for scans when
> partial row results are allowed. However, a row result combined from different
> transactions may leave the application in an unexpected state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15338) Add a option to disable the data block cache for testing the performance of underlying file system

2016-02-26 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168728#comment-15168728
 ] 

Liu Shaohui commented on HBASE-15338:
-

[~jingcheng...@intel.com]
{quote}
So the meta blocks don't need to be cached if cache-on-read is disabled? Is 
that done on purpose?
{quote}
No. I will update the patch later according to your advice.



> Add a option to disable the data block cache for testing the performance of 
> underlying file system
> --
>
> Key: HBASE-15338
> URL: https://issues.apache.org/jira/browse/HBASE-15338
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15338-trunk-v1.diff, HBASE-15338-trunk-v2.diff, 
> HBASE-15338-trunk-v3.diff
>
>
> When testing and comparing the performance of different file systems (HDFS,
> Azure blob storage, AWS S3 and so on) for HBase, it's better to avoid the
> effect of the HBase BlockCache and measure the actual random read latency when
> a data block is read from the underlying file system. (Usually, the index
> block and meta block should be cached in memory during the testing.)
> So we add an option in CacheConfig to disable the data block cache.
> Suggestions are welcome~ Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15342) create branch-1.3

2016-02-26 Thread Mikhail Antonov (JIRA)
Mikhail Antonov created HBASE-15342:
---

 Summary: create branch-1.3
 Key: HBASE-15342
 URL: https://issues.apache.org/jira/browse/HBASE-15342
 Project: HBase
  Issue Type: Sub-task
  Components: build
Affects Versions: 1.3.0
Reporter: Mikhail Antonov
Assignee: Mikhail Antonov
 Fix For: 1.3.0


create branch-1.3 and update branch-1 poms to 1.4.0-SNAPSHOT



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15338) Add a option to disable the data block cache for testing the performance of underlying file system

2016-02-26 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168733#comment-15168733
 ] 

Liu Shaohui commented on HBASE-15338:
-

[~anoop.hbase]
{quote}
Why do we need this new config? Why can we not rely on the HCD setting? 
{quote}
I think it's better to have a global switch.
{quote}
This may be the issue you are saying? This is called from getMetaBlock(). As 
per the comment, when we read meta blocks, we must cache them. As we do not pass 
any type, it seems we may not do that.. That is a bug IMO.. So we had better correct 
that bug (any others?) and test your case with the HCD setting?
And yeah, as per Jingcheng's suggestion, we need to consider the META block category as 
well?
{quote}
Thanks for pointing that out. I will update the patch according to Jingcheng's suggestion 
and add more tests.
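For context, a sketch of the existing non-global knobs the HCD question refers to: the per-family block cache flag and the per-scan cacheBlocks flag (the table name here is hypothetical). Whether these are sufficient versus a global switch is exactly the open question above:
{code}
// Sketch of the existing, non-global knobs: per-family block cache and per-scan cacheBlocks.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;

public class BlockCacheKnobs {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName tn = TableName.valueOf("cache_test_table"); // hypothetical table for the sketch
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Per-family (HCD): create the family with the block cache turned off.
      HTableDescriptor htd = new HTableDescriptor(tn);
      htd.addFamily(new HColumnDescriptor("F").setBlockCacheEnabled(false));
      admin.createTable(htd);

      // Per-scan: skip the block cache for this scan only.
      Scan scan = new Scan();
      scan.setCacheBlocks(false);
      try (Table table = conn.getTable(tn);
           ResultScanner scanner = table.getScanner(scan)) {
        scanner.next(); // run the scan once; data blocks read here are not cached
      }
    }
  }
}
{code}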





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15340) Partial row result of scan may return data violates the row-level transaction

2016-02-26 Thread Jianwei Cui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168736#comment-15168736
 ] 

Jianwei Cui commented on HBASE-15340:
-

[~anoop.hbase], the intra-row scanning seems to come from 
[HBASE-1537|https://issues.apache.org/jira/browse/HBASE-1537], so versions 
after 0.90.0 will have this issue. I will make a patch following the idea and 
check the result :)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15205) Do not find the replication scope for every WAL#append()

2016-02-26 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168735#comment-15168735
 ] 

ramkrishna.s.vasudevan commented on HBASE-15205:


Only TestHRegion is a failure, due to a mocking issue. The TestBlockEvictionFromClient 
failure happened because of an accidental change I had made while testing something 
else. 

> Do not find the replication scope for every WAL#append()
> 
>
> Key: HBASE-15205
> URL: https://issues.apache.org/jira/browse/HBASE-15205
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15204_6.patch, HBASE-15205.patch, 
> HBASE-15205_1.patch, HBASE-15205_10.patch, HBASE-15205_11.patch, 
> HBASE-15205_2.patch, HBASE-15205_3.patch, HBASE-15205_4.patch, 
> HBASE-15205_6.patch, HBASE-15205_6.patch, HBASE-15205_7.patch, 
> HBASE-15205_8.patch, HBASE-15205_9.patch, ScopeWALEdits.jpg, 
> ScopeWALEdits_afterpatch.jpg
>
>
> After byte[] and char[], the other top contributor to GC (though 
> it is only 2.86%) is UTF_8.newDecoder.
> This happens because for every WAL append we try to calculate the replication 
> scopes associated with the families of the TableDescriptor. I 
> think doing this per WAL append is very costly and creates a lot of garbage. 
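A rough sketch of the idea being described (an illustration only, not the actual patch): build the family-to-scope map once per table descriptor and reuse it on every append, instead of walking the descriptor per WAL#append():
{code}
// Illustration only: compute replication scopes once per HTableDescriptor and reuse them.
import java.util.NavigableMap;
import java.util.TreeMap;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.util.Bytes;

public class CachedReplicationScopes {
  private final NavigableMap<byte[], Integer> scopes =
      new TreeMap<byte[], Integer>(Bytes.BYTES_COMPARATOR);

  CachedReplicationScopes(HTableDescriptor htd) {
    // Done once, e.g. when the region opens or the descriptor changes,
    // rather than on every WAL append.
    for (HColumnDescriptor hcd : htd.getColumnFamilies()) {
      scopes.put(hcd.getName(), hcd.getScope());
    }
  }

  /** Cheap per-append lookup; no descriptor walk, no string decoding. */
  NavigableMap<byte[], Integer> getScopes() {
    return scopes;
  }
}
{code}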



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15205) Do not find the replication scope for every WAL#append()

2016-02-26 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-15205:
---
Status: Open  (was: Patch Available)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15205) Do not find the replication scope for every WAL#append()

2016-02-26 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-15205:
---
Attachment: HBASE-15205_12.patch

Patch for final QA run




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15205) Do not find the replication scope for every WAL#append()

2016-02-26 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-15205:
---
Status: Patch Available  (was: Open)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15338) Add a option to disable the data block cache for testing the performance of underlying file system

2016-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168741#comment-15168741
 ] 

Hadoop QA commented on HBASE-15338:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 9s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/latest/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
40s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} master passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
6s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
52s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} master passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 4m 16s 
{color} | {color:red} Patch generated 1 new checkstyle issues in hbase-server 
(total was 14, now 15). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
24m 2s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 46s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 32s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 215m 30s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | 
hadoop.hbase.regionserver.TestRegionServerMetrics |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.hbase.regionserver.TestRegionServerMetrics |
\\
\\
|| Subsystem || R

[jira] [Commented] (HBASE-15340) Partial row result of scan may return data violates the row-level transaction

2016-02-26 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168742#comment-15168742
 ] 

Anoop Sam John commented on HBASE-15340:


Not just intra-row, I would say. Even consider a normal Scan; we have writes 
going on in parallel. A row 'r5' (say with only one cell in it) is inserted after the 
beginning of the scan. If there is no region move in between, we won't see this row 
at all: the cell will get removed from the returned result by the seqId check 
against the readPnt. But if there is a region move in between, we may see it. So 
it is a question of consistency wrt results as well. Get my point? Just saying..
With intra-row results (by setting batch on the Scan / result chunking) this gets to 
be a more visible issue
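To make the seqId-vs-readPnt check concrete, a toy sketch (plain Java, not the server code) of the filtering being described; the numbers are made up:
{code}
// Toy illustration of the mvcc filtering described above; not HBase server code.
public class ReadPointFilterSketch {
  static boolean visible(long cellSeqId, long scannerReadPoint) {
    // A cell is visible to the scanner only if it was committed at or before the read point.
    return cellSeqId <= scannerReadPoint;
  }

  public static void main(String[] args) {
    long readPointAtScanStart = 10L;   // read point when the scanner was first opened
    long r5SeqId = 12L;                // row 'r5' written after the scan began

    // Same read point kept across the whole scan: 'r5' stays invisible.
    System.out.println(visible(r5SeqId, readPointAtScanStart));   // false

    // Scanner re-opened after a region move picks up a newer read point: 'r5' shows up.
    long readPointAfterReopen = 15L;
    System.out.println(visible(r5SeqId, readPointAfterReopen));   // true
  }
}
{code}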




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15338) Add a option to disable the data block cache for testing the performance of underlying file system

2016-02-26 Thread Liu Shaohui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Shaohui updated HBASE-15338:

Attachment: HBASE-15338-trunk-v4.diff

Updated the patch ~
Thanks to [~jingcheng...@intel.com] and [~anoop.hbase].




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15338) Add a option to disable the data block cache for testing the performance of underlying file system

2016-02-26 Thread Liu Shaohui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Shaohui updated HBASE-15338:

Attachment: HBASE-15338-trunk-v5.diff

Fix the checkstyle errors




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15181) A simple implementation of date based tiered compaction

2016-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168754#comment-15168754
 ] 

Hadoop QA commented on HBASE-15181:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} master passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
0s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
53s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} master passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
24m 24s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 7s 
{color} | {color:red} hbase-server introduced 1 new FindBugs issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 19s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 14s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 226m 48s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hbase-server |
|  |  storeFile must be non-null but is marked as nullable  At 
DateTieredCompactionPolicy.java:is marked as nullable  At 
DateTieredCompactionPolicy.java:[line 238] |
| JDK v1.8.0_72 Failed junit tests | 
hadoop.hbase.regionserver.TestRegionServerMetrics |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.hbase.regionserver.TestRegionServerMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.9.1 Server=1.9.1 Image:yetus/hbase:date2016-02-26 |
| JIRA Patch URL | 

[jira] [Created] (HBASE-15343) add branch-1.3 to precommit branches

2016-02-26 Thread Mikhail Antonov (JIRA)
Mikhail Antonov created HBASE-15343:
---

 Summary: add branch-1.3 to precommit branches
 Key: HBASE-15343
 URL: https://issues.apache.org/jira/browse/HBASE-15343
 Project: HBase
  Issue Type: Sub-task
  Components: build
Affects Versions: 1.3.0
Reporter: Mikhail Antonov
 Fix For: 1.3.0






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15344) add 1.3 to prereq tables in ref guide

2016-02-26 Thread Mikhail Antonov (JIRA)
Mikhail Antonov created HBASE-15344:
---

 Summary: add 1.3 to prereq tables in ref guide
 Key: HBASE-15344
 URL: https://issues.apache.org/jira/browse/HBASE-15344
 Project: HBase
  Issue Type: Sub-task
  Components: build
Affects Versions: 1.3.0
Reporter: Mikhail Antonov
Assignee: Mikhail Antonov






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15344) add 1.3 to prereq tables in ref guide

2016-02-26 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-15344:

Component/s: (was: build)
 documentation

> add 1.3 to prereq tables in ref guide
> -
>
> Key: HBASE-15344
> URL: https://issues.apache.org/jira/browse/HBASE-15344
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Fix For: 1.3.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15345) add branch-1.3 post-commit builds

2016-02-26 Thread Mikhail Antonov (JIRA)
Mikhail Antonov created HBASE-15345:
---

 Summary: add branch-1.3 post-commit builds
 Key: HBASE-15345
 URL: https://issues.apache.org/jira/browse/HBASE-15345
 Project: HBase
  Issue Type: Sub-task
  Components: build
Affects Versions: 1.3.0
Reporter: Mikhail Antonov
Assignee: Mikhail Antonov
 Fix For: 1.3.0






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15346) add 1.3 RM to docs

2016-02-26 Thread Mikhail Antonov (JIRA)
Mikhail Antonov created HBASE-15346:
---

 Summary: add 1.3 RM to docs
 Key: HBASE-15346
 URL: https://issues.apache.org/jira/browse/HBASE-15346
 Project: HBase
  Issue Type: Sub-task
  Components: documentation
Affects Versions: 1.3.0
Reporter: Mikhail Antonov
Assignee: Mikhail Antonov
 Fix For: 1.3.0






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15340) Partial row result of scan may return data violates the row-level transaction

2016-02-26 Thread Jianwei Cui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168771#comment-15168771
 ] 

Jianwei Cui commented on HBASE-15340:
-

[~anoop.hbase], thanks for your comment, I get your point :). Yes, the case you 
mentioned will happen. The page https://hbase.apache.org/acid-semantics.html 
explains the consistency guarantees for scans:
{code}
A scan is not a consistent view of a table. Scans do not exhibit snapshot 
isolation.

Rather, scans have the following properties:

1. Any row returned by the scan will be a consistent view (i.e. that version of 
the complete row existed at some point in time) [1]
2. A scan will always reflect a view of the data at least as new as the 
beginning of the scan. This satisfies the visibility guarantees enumerated 
below.
   1. For example, if client A writes data X and then communicates via a side 
channel to client B, any scans started by client B will contain data at least 
as new as X.
   2. A scan _must_ reflect all mutations committed prior to the construction 
of the scanner, and _may_ reflect some mutations committed subsequent to the 
construction of the scanner.
   3. Scans must include all data written prior to the scan (except in the 
case where data is subsequently mutated, in which case it _may_ reflect the 
mutation)
{code}
It seems the consistency guarantee for scans only ensures reading data at least as 
new as the beginning of the scan, but makes no guarantee about whether data 
written concurrently with, or after, the beginning of the scan will be read. 

At the end of the page:
{code}
[1] A consistent view is not guaranteed intra-row scanning -- i.e. fetching a 
portion of a row in one RPC then going back to fetch another portion of the row 
in a subsequent RPC. Intra-row scanning happens when you set a limit on how 
many values to return per Scan#next (See Scan#setBatch(int)).
{code}
It mentions the problem of this jira: a row-level consistent view is not 
guaranteed for intra-row scanning. So this is a known problem?




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15347) Update CHANGES.txt for 1.3

2016-02-26 Thread Mikhail Antonov (JIRA)
Mikhail Antonov created HBASE-15347:
---

 Summary: Update CHANGES.txt for 1.3
 Key: HBASE-15347
 URL: https://issues.apache.org/jira/browse/HBASE-15347
 Project: HBase
  Issue Type: Sub-task
  Components: documentation
Affects Versions: 1.3.0
Reporter: Mikhail Antonov
Assignee: Mikhail Antonov
 Fix For: 1.3.0






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15340) Partial row result of scan may return data violates the row-level transaction

2016-02-26 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168785#comment-15168785
 ] 

Anoop Sam John commented on HBASE-15340:


Yep. This is a known issue then.. The solution of having a client-aware 
readPnt will solve even that (?)  That work has to consider compatibility as 
well: old client -> new RS and the reverse.
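A very rough sketch of the client-aware readPnt idea, with all names invented for illustration (this is not the real scan RPC): the client remembers the read point from the first scanner open and sends it back when re-opening after a move, while an old client would simply not send one, which is where the compatibility concern comes in:
{code}
// Hypothetical sketch of a client-remembered read point; all names are invented for illustration.
public class ClientAwareReadPointSketch {
  static final long NO_READ_POINT = -1L;

  /** What the client would remember per region between scanner (re)opens. */
  static class RegionScanState {
    long readPoint = NO_READ_POINT;
  }

  /** Server side: honour a client-supplied read point if one is present. */
  static long chooseReadPoint(long clientReadPoint, long currentServerReadPoint) {
    return clientReadPoint != NO_READ_POINT ? clientReadPoint : currentServerReadPoint;
  }

  public static void main(String[] args) {
    RegionScanState state = new RegionScanState();

    // First open: no remembered read point, the server assigns one (say 10) and the client stores it.
    state.readPoint = chooseReadPoint(state.readPoint, 10L);

    // Re-open after RegionMovedException: the client sends 10 back and the server reuses it,
    // even though its current read point has advanced to 15.
    System.out.println(chooseReadPoint(state.readPoint, 15L)); // 10
  }
}
{code}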




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15338) Add a option to disable the data block cache for testing the performance of underlying file system

2016-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168805#comment-15168805
 ] 

Hadoop QA commented on HBASE-15338:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 12m 54s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/latest/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} master passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
19s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s 
{color} | {color:green} master passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 4m 31s 
{color} | {color:red} Patch generated 2 new checkstyle issues in hbase-server 
(total was 67, now 68). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 8s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 32s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 32s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 139m 24s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | 
hadoop.hbase.regionserver.TestRegionServerMetrics |
| JDK v1.8.0_72 Timed out junit tests | 
org.apache.hadoop.hbase.regionserver.TestRegionMergeTransactionOnCluste

[jira] [Commented] (HBASE-15295) MutateTableAccess.multiMutate() does not get high priority causing a deadlock

2016-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168807#comment-15168807
 ] 

Hadoop QA commented on HBASE-15295:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 18 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
56s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 2s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 44s 
{color} | {color:green} master passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 7m 
55s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
7s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
33s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s 
{color} | {color:green} master passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 47s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 47s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 2m 18s 
{color} | {color:red} Patch generated 5 new checkstyle issues in hbase-client 
(total was 532, now 470). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
37m 0s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 27s 
{color} | {color:green} hbase-client in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 29s 
{color} | {color:green} hbase-it in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 170m 18s 
{color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 27s 
{color} | {color:green} hbase-client in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 33s 
{color} | {color:green} hbase-it in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 145m 39s 
{color} | {color:red} hbase-server in the patch failed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 
5s {color} | {color:green} Patch does n

[jira] [Commented] (HBASE-15225) Connecting to HBase via newAPIHadoopRDD in PySpark gives org.apache.hadoop.hbase.client.RetriesExhaustedException

2016-02-26 Thread Sanjay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168809#comment-15168809
 ] 

Sanjay Kumar commented on HBASE-15225:
--

[~ted.m] - We recently upgraded to Hortonworks 2.3.4 (HBase 1.1.2 / Spark 
1.5.2). I built the code that you shared and tried the HBaseBulkGetExample. I 
see the error given below. Is it because we have missed some configuration? 
Have you seen something like this? 

org.apache.hadoop.ipc.RemoteException(java.io.IOException): Delegation Token 
can be issued only with kerberos or web authentication
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDelegationToken(FSNamesystem.java:6744)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getDelegationToken(NameNodeRpcServer.java:628)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getDelegationToken(ClientNamenodeProtocolServerSideTranslatorPB.java:987)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)

at org.apache.hadoop.ipc.Client.call(Client.java:1468)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy26.getDelegationToken(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getDelegationToken(ClientNamenodeProtocolTranslatorPB.java:909)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy27.getDelegationToken(Unknown Source)
at 
org.apache.hadoop.hdfs.DFSClient.getDelegationToken(DFSClient.java:1029)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.getDelegationToken(DistributedFileSystem.java:1355)
at 
org.apache.hadoop.fs.FileSystem.collectDelegationTokens(FileSystem.java:529)
at 
org.apache.hadoop.fs.FileSystem.addDelegationTokens(FileSystem.java:507)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.addDelegationTokens(DistributedFileSystem.java:2041)
at 
org.apache.spark.deploy.yarn.YarnSparkHadoopUtil$$anonfun$obtainTokensForNamenodes$1.apply(YarnSparkHadoopUtil.scala:126)
at 
org.apache.spark.deploy.yarn.YarnSparkHadoopUtil$$anonfun$obtainTokensForNamenodes$1.apply(YarnSparkHadoopUtil.scala:123)
at scala.collection.immutable.Set$Set1.foreach(Set.scala:74)
at 
org.apache.spark.deploy.yarn.YarnSparkHadoopUtil.obtainTokensForNamenodes(YarnSparkHadoopUtil.scala:123)
at 
org.apache.spark.deploy.yarn.Client.getTokenRenewalInterval(Client.scala:500)
at org.apache.spark.deploy.yarn.Client.setupLaunchEnv(Client.scala:533)
at 
org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:633)
at 
org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:123)
at 
org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
at 
org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:523)
at 
$line137.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$HBaseBulkGetExample$.main(<console>:94)
at 
$line139.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:83)
at 
$line139.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:88)
at 
$line139.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:90)
at 
$line139.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:92)
at 
$line139.$r

[jira] [Commented] (HBASE-15340) Partial row result of scan may return data violates the row-level transaction

2016-02-26 Thread Jianwei Cui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168829#comment-15168829
 ] 

Jianwei Cui commented on HBASE-15340:
-

After [HBASE-11544|https://issues.apache.org/jira/browse/HBASE-11544], the 
maxScannerResultSize of ClientScanner will be 2MB by default. This will make the 
server return partial results more easily when the size limit is reached, and this 
issue will happen even when the user has not set a batch for the scan.  
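For completeness, a small sketch of the client-side pieces involved, using Scan/Result methods that exist in 1.1+ (exact behaviour is version-dependent): limiting the max result size and checking whether a Result is a partial row:
{code}
// Sketch: size-limited scan plus detection of partial (intra-row) Results.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;

public class PartialResultExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("test_table"))) {
      Scan scan = new Scan();
      scan.setMaxResultSize(2 * 1024 * 1024); // 2MB, the default discussed above
      scan.setAllowPartialResults(true);      // opt in to receiving row fragments
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result r : scanner) {
          if (r.isPartial()) {
            // This Result holds only part of a row; more cells of the same row may follow.
            System.out.println("partial row fragment: " + r);
          }
        }
      }
    }
  }
}
{code}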




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15340) Partial row result of scan may return data violates the row-level transaction

2016-02-26 Thread Jianwei Cui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168832#comment-15168832
 ] 

Jianwei Cui commented on HBASE-15340:
-

{quote}
The solution of having a client aware readPnt will solve even that(?)
{quote}
It seems to work IMO. I will try to find whether there is any existing discussion about 
this issue.

> Partial row result of scan may return data violates the row-level transaction 
> --
>
> Key: HBASE-15340
> URL: https://issues.apache.org/jira/browse/HBASE-15340
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners, Transactions/MVCC
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
>
> There are cases where the region server will return a partial row result, such as when 
> the client sets a batch for the scan or the configured size limit is reached. In these 
> situations, the client may return data that violates the row-level 
> transaction to the application. The following steps show the problem:
> {code}
> // assume there is a test table 'test_table' with one family 'F' and one 
> region 'region'. 
> // meanwhile there are two region servers 'rsA' and 'rsB'.
> 1. Let 'region' firstly located in 'rsA' and put one row with two columns 
> 'c1' and 'c2' as:
> > put 'test_table', 'row', 'F:c1', 'value1', 'F:c2', 'value1'
> 2. Start a client to scan 'test_table', with scan.setBatch(1) and 
> scan.setCaching(1). The client will get one column as : {column='F:c1' and 
> value='value1'} in the first rpc call after scanner created, and the result 
> will be returned to application.
> 3. Before the client issues the next request, 'region' is moved to 'rsB' 
> and accepts another mutation for the two columns 'c1' and 'c2' as:
> > put 'test_table', 'row', 'F:c1', 'value2', 'F:c2', 'value2'
> 4. Then, the client will receive a RegionMovedException when issuing the next 
> request and will retry by opening a scanner on 'rsB'. The newly opened scanner 
> will have a higher mvcc than the old data, so it could read out the column as: { 
> column='F:c2' with value='value2'} and return the result to the application.
>Therefore, the application will get data as:
> 'row'column='F:c1'   value='value1'
> 'row'column='F:c2',  value='value2'
>The returned data is combined from two different mutations and violates 
> the row-level transaction.
> {code}
> The reason is that the newly opened scanner after region moved will get a 
> different mvcc. I am not sure whether this result is by design for scan if 
> partial row result is allowed. However, such row result combined from 
> different transactions may make the application have unexpected state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15215) TestBlockEvictionFromClient is flaky in jdk1.7 build

2016-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168858#comment-15168858
 ] 

Hudson commented on HBASE-15215:


FAILURE: Integrated in HBase-Trunk_matrix #740 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/740/])
HBASE-15215 TestBlockEvictionFromClient is flaky in jdk1.7 build (ramkrishna: 
rev 538815d82a62cbcc7aaccec0a3bc4e44cb925277)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestBlockEvictionFromClient.java


> TestBlockEvictionFromClient is flaky in jdk1.7 build
> 
>
> Key: HBASE-15215
> URL: https://issues.apache.org/jira/browse/HBASE-15215
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-15215_offheap.patch
>
>
> This is the 2nd time I am noticing this. 
> {code}
> Tests run: 13, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 76.187 sec 
> <<< FAILURE! - in org.apache.hadoop.hbase.client.TestBlockEvictionFromClient
> testReverseScanWithCompaction(org.apache.hadoop.hbase.client.TestBlockEvictionFromClient)
>   Time elapsed: 5.812 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<3> but was:<2>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> org.apache.hadoop.hbase.client.TestBlockEvictionFromClient.testScanWithCompactionInternals(TestBlockEvictionFromClient.java:922)
>   at 
> org.apache.hadoop.hbase.client.TestBlockEvictionFromClient.testReverseScanWithCompaction(TestBlockEvictionFromClient.java:857)
> {code}
> Generally the jdk1.8 build does not have this failure. Need to investigate 
> the failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15338) Add a option to disable the data block cache for testing the performance of underlying file system

2016-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168863#comment-15168863
 ] 

Hadoop QA commented on HBASE-15338:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 11s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/latest/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
21s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} master passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
23s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s 
{color} | {color:green} master passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 4m 19s 
{color} | {color:red} Patch generated 2 new checkstyle issues in hbase-server 
(total was 14, now 15). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 34s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 110m 35s 
{color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_72. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 57s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 258m 36s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | 
hadoop.hbase.replication.multiwal.TestReplicationEndpointWithMultipleWAL |
|   | hadoop.hbase.regionserver.TestRegionServerMetrics |
|   | hadoop.hbase.TestStoc

[jira] [Commented] (HBASE-15296) Break out writer and reader from StoreFile

2016-02-26 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168896#comment-15168896
 ] 

Jonathan Hsieh commented on HBASE-15296:


Did a quick scan  -- some things to address beyond the yetus issues:

New classes: StoreFileWriter, StoreFileReader
* Add InterfaceAudience to the new classes (likely IA.Private)
* Add apache license to the new classes.

Let's do another rev where we fix that and the checkstyles/docs issues.
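
For instance, the top of each new file might look roughly like this (a sketch only; the 
exact InterfaceAudience import path and class contents are assumptions, and the full 
Apache license header is elided):

{code}
/*
 * Apache License, Version 2.0 header goes here.
 */
package org.apache.hadoop.hbase.regionserver;

import org.apache.hadoop.hbase.classification.InterfaceAudience;

/**
 * Reader half broken out of StoreFile; the actual members come from the
 * existing StoreFile.Reader inner class.
 */
@InterfaceAudience.Private
public class StoreFileReader {
  // ... moved reader logic ...
}
{code}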





> Break out writer and reader from StoreFile
> --
>
> Key: HBASE-15296
> URL: https://issues.apache.org/jira/browse/HBASE-15296
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-15296-branch-1.1.patch, 
> HBASE-15296-branch-1.2.patch, HBASE-15296-branch-1.patch, 
> HBASE-15296-master.patch
>
>
> StoreFile.java is trending toward becoming a monolithic class; it's ~1800 lines. 
> Would it make sense to break out the reader and writer (~500 lines each) into 
> separate files?
> We are doing so many different things in a single class: comparators, reader, 
> writer, other stuff; and it hurts readability a lot, to the point that just 
> reading through a piece of code requires scrolling up and down to see which 
> level (reader/writer/base class level) it belongs to. These small 
> things really don't help when trying to understand the code. There are 
> good reasons we don't do these often (affects existing patches, needs to be 
> done for all branches, etc). But this and a few other classes can really use 
> a single iteration of refactoring to make things a lot better.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15205) Do not find the replication scope for every WAL#append()

2016-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168941#comment-15168941
 ] 

Hadoop QA commented on HBASE-15205:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 28 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
21s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} master passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
46s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s 
{color} | {color:green} master passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 4m 25s 
{color} | {color:red} Patch generated 3 new checkstyle issues in hbase-server 
(total was 445, now 444). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
1s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 47s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 125m 44s 
{color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_72. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 51s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
13s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 179m 14s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | 
hadoop.hbase.TestStochasticBalancerJmxMetrics |
|   | hadoop.hbase.util.TestRegionSplitter |
|   | hadoop.hbase.regionserver.TestRegionServerMetrics |
| JDK v1.8.0_72 Timed out junit tests | 
org.apache.hadoop.hbase.io.encoding.TestChangingEncoding |
|   | org.apache.hadoop.hbase.snapshot.TestExportSnapshot |
|   | org.apache.hadoop.hbase.io.encoding.TestLoadAndSwitchEncodeOnDisk |
|   | org.apache.hadoop.hbase.io.hfile.TestCacheO

[jira] [Updated] (HBASE-15265) Implement an asynchronous FSHLog

2016-02-26 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-15265:
--
Attachment: HBASE-15265-v1.patch

Addressing the create retry and shutdown issue and also the comments on rb.

> Implement an asynchronous FSHLog
> 
>
> Key: HBASE-15265
> URL: https://issues.apache.org/jira/browse/HBASE-15265
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15265-v1.patch, HBASE-15265.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15338) Add a option to disable the data block cache for testing the performance of underlying file system

2016-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169034#comment-15169034
 ] 

Hadoop QA commented on HBASE-15338:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 8s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/latest/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
39s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} master passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
21s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
52s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} master passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 4m 19s 
{color} | {color:red} Patch generated 1 new checkstyle issues in hbase-server 
(total was 67, now 68). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
1s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
23m 55s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 43s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 28s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 214m 34s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | 
hadoop.hbase.regionserver.TestRegionServerMetrics |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.hbase.regionserver.TestRegionServerMetrics |
\\
\\
|| Subsystem ||

[jira] [Updated] (HBASE-15338) Add a option to disable the data block cache for testing the performance of underlying file system

2016-02-26 Thread Liu Shaohui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Shaohui updated HBASE-15338:

Attachment: HBASE-15338-trunk-v6.diff

Fix the checkstyle error.

> Add a option to disable the data block cache for testing the performance of 
> underlying file system
> --
>
> Key: HBASE-15338
> URL: https://issues.apache.org/jira/browse/HBASE-15338
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15338-trunk-v1.diff, HBASE-15338-trunk-v2.diff, 
> HBASE-15338-trunk-v3.diff, HBASE-15338-trunk-v4.diff, 
> HBASE-15338-trunk-v5.diff, HBASE-15338-trunk-v6.diff
>
>
> When testing and comparing the performance of different file systems (HDFS, 
> Azure Blob Storage, AWS S3 and so on) for HBase, it's better to avoid the 
> effect of the HBase BlockCache and get the actual random read latency when 
> a data block is read from the underlying file system. (Usually, the index blocks and 
> meta blocks should still be cached in memory during the testing.)
> So we add an option in CacheConfig to disable the data block cache.
> Suggestions are welcome~ Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15322) HBase 1.1.3 crashing

2016-02-26 Thread Anant Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169060#comment-15169060
 ] 

Anant Sharma commented on HBASE-15322:
--

It's there in rt.jar and I have included this jar, but the exception is still 
there.

> HBase 1.1.3 crashing
> 
>
> Key: HBASE-15322
> URL: https://issues.apache.org/jira/browse/HBASE-15322
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.1.3
> Environment: OS: Ubuntu 14.04/Ubuntu 15.10  
> JDK: OpenJDK8/OpenJDK9
>Reporter: Anant Sharma
>
> HBase crashes in standalone mode with the following log:
> __
> 2016-02-24 22:38:37,578 ERROR [main] master.HMasterCommandLine: Master exiting
> java.lang.RuntimeException: Failed construction of Master: class 
> org.apache.hadoop.hbase.master.HMaster
> at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2341)
> at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:233)
> at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
> at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2355)
> Caused by: java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.hadoop.hbase.util.Bytes$LexicographicalComparerHolder$UnsafeComparer
> at org.apache.hadoop.hbase.util.Bytes.putInt(Bytes.java:899)
> at 
> org.apache.hadoop.hbase.KeyValue.createByteArray(KeyValue.java:1082)
> at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:652)
> at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:580)
> at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:483)
> at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:370)
> at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:267)
> at org.apache.hadoop.hbase.HConstants.(HConstants.java:978)
> at 
> org.apache.hadoop.hbase.HTableDescriptor.(HTableDescriptor.java:1488)
> at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.(FSTableDescriptors.java:124)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.(HRegionServer.java:570)
> at org.apache.hadoop.hbase.master.HMaster.(HMaster.java:365)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2336)
> __
> The class is in the hbase-common.jar and it's there in the classpath, as can be 
> seen from the log:
> _
> 2016-02-24 22:38:32,538 INFO  [main] util.ServerCommandLine: 
> env:CLASSPATH=/home/hduser/hbase/hbase-1.1.3:/home/hduser/hbase/hbase-1.1.3/lib/activation-1.1.jar:/home/hduser/hbase/hbase-1.1.3/lib/aopalliance-1.0.jar:/home/hduser/hbase/hbase-1.1.3/lib/apacheds-i18n-2.0.0-M15.jar:/home/hduser/hbase/hbase-1.1.3/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/home/hduser/hbase/hbase-1.1.3/lib/api-asn1-api-1.0.0-M20.jar:/home/hduser/hbase/hbase-1.1.3/lib/api-util-1.0.0-M20.jar:/home/hduser/hbase/hbase-1.1.3/lib/asm-3.1.jar:/home/hduser/hbase/hbase-1.1.3/lib/avro-1.7.4.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-beanutils-1.7.0.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-beanutils-core-1.8.0.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-cli-1.2.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-codec-1.9.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-collections-3.2.2.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-compress-1.4.1.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-configuration-1.6.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-daemon-1.0.13.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-digester-1.8.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-el-1.0.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-httpclient-3.1.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-io-2.4.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-lang-2.6.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-logging-1.2.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-math-2.2.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-math3-3.1.1.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-net-3.1.jar:/home/hduser/hbase/hbase-1.1.3/lib/disruptor-3.3.0.jar:/home/hduser/hbase/hbase-1

[jira] [Commented] (HBASE-15302) Reenable the other tests disabled by HBASE-14678

2016-02-26 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169125#comment-15169125
 ] 

Phil Yang commented on HBASE-15302:
---

This patch only adds tests and changes an interface, with no logic changes. So I 
think it will not break any other tests, right?

> Reenable the other tests disabled by HBASE-14678
> 
>
> Key: HBASE-15302
> URL: https://issues.apache.org/jira/browse/HBASE-15302
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 2.0.0, 1.3.0, 1.2.1
>
> Attachments: HBASE-15302-branch-1-v1.txt, HBASE-15302-v1.txt, 
> HBASE-15302-v1.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15338) Add a option to disable the data block cache for testing the performance of underlying file system

2016-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169141#comment-15169141
 ] 

Hadoop QA commented on HBASE-15338:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 24s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/latest/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
11s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} master passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
29s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
12s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s 
{color} | {color:green} master passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 4m 20s 
{color} | {color:red} Patch generated 2 new checkstyle issues in hbase-server 
(total was 67, now 68). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
28m 15s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 131m 26s 
{color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_72. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 111m 25s 
{color} | {color:red} hbase-server in the patch failed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 295m 26s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | 
hadoop.hbase.TestStochasticBalancerJmxMetrics |
|   | hadoop.hbase.regionserver.TestRegionServerMetrics |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.hbase.

[jira] [Updated] (HBASE-15315) Remove always set super user call as high priority

2016-02-26 Thread Yong Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yong Zhang updated HBASE-15315:
---
Status: Open  (was: Patch Available)

> Remove always set super user call as high priority
> --
>
> Key: HBASE-15315
> URL: https://issues.apache.org/jira/browse/HBASE-15315
> Project: HBase
>  Issue Type: Improvement
>Reporter: Yong Zhang
>Assignee: Yong Zhang
> Attachments: HBASE-15315.001.patch
>
>
> The current implementation sets superuser calls as ADMIN_QOS, but we have many 
> customers who use the superuser to do normal table operations such as putting/getting 
> data and so on. If a client puts a lot of data during region assignment, RPCs from the 
> HMaster may time out because no handler is available, so it is better not to always 
> set superuser calls as high priority. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15315) Remove always set super user call as high priority

2016-02-26 Thread Yong Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yong Zhang updated HBASE-15315:
---
Status: Patch Available  (was: Open)

re-trigger Hadoop QA

> Remove always set super user call as high priority
> --
>
> Key: HBASE-15315
> URL: https://issues.apache.org/jira/browse/HBASE-15315
> Project: HBase
>  Issue Type: Improvement
>Reporter: Yong Zhang
>Assignee: Yong Zhang
> Attachments: HBASE-15315.001.patch
>
>
> The current implementation sets superuser calls as ADMIN_QOS, but we have many 
> customers who use the superuser to do normal table operations such as putting/getting 
> data and so on. If a client puts a lot of data during region assignment, RPCs from the 
> HMaster may time out because no handler is available, so it is better not to always 
> set superuser calls as high priority. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15332) Document how to take advantage of HDFS-6133 in HBase

2016-02-26 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169153#comment-15169153
 ] 

Sean Busbey commented on HBASE-15332:
-

+1, looks good. None of those unit tests should be impacted by this change. 
Also, no need for tests.

It does occur to me that there's no test of the Maven site, as far as I can tell.

> Document how to take advantage of HDFS-6133 in HBase
> 
>
> Key: HBASE-15332
> URL: https://issues.apache.org/jira/browse/HBASE-15332
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Fix For: 2.0.0
>
> Attachments: HBASE-15332-v1.patch, HBASE-15332.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15243) Utilize the lowest seek value when all Filters in MUST_PASS_ONE FilterList return SEEK_NEXT_USING_HINT

2016-02-26 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169163#comment-15169163
 ] 

Ted Yu commented on HBASE-15243:


Ping [~stack]

> Utilize the lowest seek value when all Filters in MUST_PASS_ONE FilterList 
> return SEEK_NEXT_USING_HINT
> --
>
> Key: HBASE-15243
> URL: https://issues.apache.org/jira/browse/HBASE-15243
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15243-v1.txt, HBASE-15243-v2.txt, 
> HBASE-15243-v3.txt, HBASE-15243-v4.txt, HBASE-15243-v5.txt, 
> HBASE-15243-v6.txt, HBASE-15243-v7.txt
>
>
> As Preston Koprivica pointed out at the tail of HBASE-4394, when all filters 
> in a MUST_PASS_ONE FilterList return a SEEK_NEXT_USING_HINT code, we should 
> return SEEK_NEXT_USING_HINT from the FilterList to utilize the lowest seek 
> value.
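
A conceptual sketch of the intended behaviour, using plain longs in place of the real 
Filter/Cell hint API (so it only illustrates "take the lowest hint", it is not the 
actual FilterList change):

{code}
import java.util.Arrays;
import java.util.List;

public class LowestHintSketch {
  /**
   * If every filter in a MUST_PASS_ONE list asks to seek forward, the list as a
   * whole may only skip to the smallest requested position; otherwise it could
   * jump past data that one of the filters would have accepted.
   */
  static long combinedSeekHint(List<Long> perFilterHints) {
    return perFilterHints.stream().min(Long::compare)
        .orElseThrow(IllegalStateException::new);
  }

  public static void main(String[] args) {
    // Three filters asking to seek to positions 120, 80 and 200 respectively.
    System.out.println(combinedSeekHint(Arrays.asList(120L, 80L, 200L))); // prints 80
  }
}
{code}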



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15339) Add archive tiers for date based tiered compaction

2016-02-26 Thread Dave Latham (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169220#comment-15169220
 ] 

Dave Latham commented on HBASE-15339:
-

Duo, I'd love to understand this a little better. The tiered compaction in 
HBASE-15181 has a max tier, so once data reaches that tier it never needs to be 
compacted again unless you force a major compaction. The windows in that tier 
are fixed, based on epoch time, and their boundaries won't move.  They are not, 
however, aligned with the calendar, so if that is what you need, then you 
definitely need an enhancement.  I could imagine a config to use 
days/weeks/months/quarters/years for example instead of the simple epoch 
exponential tier schedule of HBASE-15181.  Can you elaborate on your needs and 
your proposal?
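
As an illustration of what calendar-aligned boundaries could mean for an "archive unit" 
of a year (a sketch of the idea only, not code from HBASE-15181 or a proposed patch):

{code}
import java.util.Calendar;
import java.util.TimeZone;

public final class ArchiveBoundarySketch {
  /** Snap a timestamp (ms) down to the start of its calendar year (UTC). */
  static long startOfYear(long ts) {
    Calendar cal = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
    cal.setTimeInMillis(ts);
    cal.set(cal.get(Calendar.YEAR), Calendar.JANUARY, 1, 0, 0, 0);
    cal.set(Calendar.MILLISECOND, 0);
    return cal.getTimeInMillis();
  }

  /** Exclusive end of the year window: the start of the following year. */
  static long endOfYear(long ts) {
    Calendar cal = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
    cal.setTimeInMillis(startOfYear(ts));
    cal.add(Calendar.YEAR, 1);
    return cal.getTimeInMillis();
  }

  public static void main(String[] args) {
    long now = System.currentTimeMillis();
    // Files whose timestamps fall entirely before startOfYear(now) would belong
    // to the frozen archive tier and never be rewritten by minor compactions.
    System.out.println(startOfYear(now) + " .. " + endOfYear(now));
  }
}
{code}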

> Add archive tiers for date based tiered compaction
> --
>
> Key: HBASE-15339
> URL: https://issues.apache.org/jira/browse/HBASE-15339
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Duo Zhang
>
> For our MiCloud service, the old data is rarely touched but we still need to 
> keep it, so we want to put the data on inexpensive devices and reduce 
> redundancy using EC to cut down the cost.
> With the date based tiered compaction introduced in HBASE-15181, new data and old 
> data can be placed in different tiers. But the tier boundary moves as time 
> passes, so it is still possible that we do a compaction on an old tier, which breaks 
> our block moving and EC work.
> So here we want to introduce an "archive tier" to better fit our scenario. 
> Add a configuration called "archive unit", for example, year. That means, if 
> we find that the tier boundary is already in the previous year, then we reset 
> the boundary to the start and end of that year, and if we want to do a 
> compaction in this tier, we just compact all files into one file. The file will 
> never be changed unless we force a major compaction, so it is safe to apply EC 
> and other cost reducing approaches to the file. And we make more tiers before 
> this tier, year by year. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15338) Add a option to disable the data block cache for testing the performance of underlying file system

2016-02-26 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169245#comment-15169245
 ] 

Anoop Sam John commented on HBASE-15338:


For data block cache on write, we have a global config. In the case of cache on 
read there is no global config. There is no issue in adding one. +1 on that.

+conf.getBoolean(CACHE_DATA_ON_READ_KEY, DEFAULT_CACHE_DATA_ON_READ)
+   && family.isBlockCacheEnabled(),

For other configs we have a || condition between the global one and the family-specific 
one. Why is this one different? There was one issue with this discussion; I forgot which 
issue and whether we have closed it or not.
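
A tiny self-contained sketch of the two combination styles being compared (the method 
names are illustrative; only the boolean logic mirrors the snippet above):

{code}
public class CacheOnReadCombinationSketch {
  /** Patch as posted: BOTH switches must allow caching, so the new global key can force it off. */
  static boolean andCombination(boolean globalCacheOnRead, boolean familyBlockCacheEnabled) {
    return globalCacheOnRead && familyBlockCacheEnabled;
  }

  /** Pattern used by the other configs: EITHER switch can turn the behaviour on. */
  static boolean orCombination(boolean globalFlag, boolean familyFlag) {
    return globalFlag || familyFlag;
  }

  public static void main(String[] args) {
    // Global off, family on: '&&' disables data block caching, '||' keeps it enabled.
    System.out.println(andCombination(false, true)); // false
    System.out.println(orCombination(false, true));  // true
  }
}
{code}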

> Add a option to disable the data block cache for testing the performance of 
> underlying file system
> --
>
> Key: HBASE-15338
> URL: https://issues.apache.org/jira/browse/HBASE-15338
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15338-trunk-v1.diff, HBASE-15338-trunk-v2.diff, 
> HBASE-15338-trunk-v3.diff, HBASE-15338-trunk-v4.diff, 
> HBASE-15338-trunk-v5.diff, HBASE-15338-trunk-v6.diff
>
>
> When testing and comparing the performance of different file systems (HDFS, 
> Azure Blob Storage, AWS S3 and so on) for HBase, it's better to avoid the 
> effect of the HBase BlockCache and get the actual random read latency when 
> a data block is read from the underlying file system. (Usually, the index blocks and 
> meta blocks should still be cached in memory during the testing.)
> So we add an option in CacheConfig to disable the data block cache.
> Suggestions are welcome~ Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15324) Jitter may cause desiredMaxFileSize overflow in ConstantSizeRegionSplitPolicy and trigger unexpected split

2016-02-26 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169263#comment-15169263
 ] 

Yu Li commented on HBASE-15324:
---

The latest HadoopQA report looks good.

[~eclark], mind taking a look here since this relates to HBASE-13412? Thanks.

> Jitter may cause desiredMaxFileSize overflow in ConstantSizeRegionSplitPolicy 
> and trigger unexpected split
> --
>
> Key: HBASE-15324
> URL: https://issues.apache.org/jira/browse/HBASE-15324
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.1.3
>Reporter: Yu Li
>Assignee: Yu Li
> Attachments: HBASE-15324.patch, HBASE-15324_v2.patch, 
> HBASE-15324_v3.patch
>
>
> We introduced jitter for the region split decision in HBASE-13412, but the 
> following line in {{ConstantSizeRegionSplitPolicy}} may cause a long value 
> overflow if MAX_FILESIZE is set to Long.MAX_VALUE:
> {code}
> this.desiredMaxFileSize += (long)(desiredMaxFileSize * (RANDOM.nextFloat() - 
> 0.5D) * jitter);
> {code}
> In our case we set MAX_FILESIZE to Long.MAX_VALUE to prevent the target 
> region from splitting.
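
A small standalone demonstration of the overflow, plus one possible guard (the guard is 
only a sketch, not the committed fix):

{code}
import java.util.Random;

public class JitterOverflowDemo {
  public static void main(String[] args) {
    Random random = new Random();
    long desiredMaxFileSize = Long.MAX_VALUE; // MAX_FILESIZE set to Long.MAX_VALUE
    double jitter = 0.5D;

    // The problematic pattern: when the jitter delta is positive, adding it to
    // Long.MAX_VALUE wraps around to a large negative value, so the region looks
    // far bigger than the threshold and splits unexpectedly.
    long delta = (long) (desiredMaxFileSize * (random.nextFloat() - 0.5D) * jitter);
    System.out.println("delta=" + delta + " sum=" + (desiredMaxFileSize + delta));

    // One possible guard: skip the jitter when adding it would overflow.
    long guarded = (delta > 0 && desiredMaxFileSize > Long.MAX_VALUE - delta)
        ? desiredMaxFileSize
        : desiredMaxFileSize + delta;
    System.out.println("guarded=" + guarded);
  }
}
{code}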



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15322) HBase 1.1.3 crashing

2016-02-26 Thread Anant Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169301#comment-15169301
 ] 

Anant Sharma commented on HBASE-15322:
--

When I tried to build with tests enabled, many of the unit tests failed because 
of this exception, like the following:

OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was 
removed in 8.0
Running org.apache.hadoop.hbase.codec.TestKeyValueCodec
Tests run: 3, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 1.288 sec <<< 
FAILURE! - in org.apache.hadoop.hbase.codec.TestKeyValueCodec
testOne(org.apache.hadoop.hbase.codec.TestKeyValueCodec)  Time elapsed: 0.291 
sec  <<< ERROR!
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.hadoop.hbase.util.Bytes$LexicographicalComparerHolder$UnsafeComparer
at org.apache.hadoop.hbase.util.Bytes.putInt(Bytes.java:899)
at org.apache.hadoop.hbase.KeyValue.createByteArray(KeyValue.java:1082)
at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:652)
at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:580)
at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:483)
at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:370)
at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:267)
at 
org.apache.hadoop.hbase.codec.TestKeyValueCodec.testOne(TestKeyValueCodec.java:68)


> HBase 1.1.3 crashing
> 
>
> Key: HBASE-15322
> URL: https://issues.apache.org/jira/browse/HBASE-15322
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.1.3
> Environment: OS: Ubuntu 14.04/Ubuntu 15.10  
> JDK: OpenJDK8/OpenJDK9
>Reporter: Anant Sharma
>
> HBase crashes in standalone mode with the following log:
> __
> 2016-02-24 22:38:37,578 ERROR [main] master.HMasterCommandLine: Master exiting
> java.lang.RuntimeException: Failed construction of Master: class 
> org.apache.hadoop.hbase.master.HMaster
> at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2341)
> at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:233)
> at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
> at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2355)
> Caused by: java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.hadoop.hbase.util.Bytes$LexicographicalComparerHolder$UnsafeComparer
> at org.apache.hadoop.hbase.util.Bytes.putInt(Bytes.java:899)
> at 
> org.apache.hadoop.hbase.KeyValue.createByteArray(KeyValue.java:1082)
> at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:652)
> at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:580)
> at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:483)
> at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:370)
> at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:267)
> at org.apache.hadoop.hbase.HConstants.(HConstants.java:978)
> at 
> org.apache.hadoop.hbase.HTableDescriptor.(HTableDescriptor.java:1488)
> at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.(FSTableDescriptors.java:124)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.(HRegionServer.java:570)
> at org.apache.hadoop.hbase.master.HMaster.(HMaster.java:365)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2336)
> __
> The class is in the hbase-common.jar and it's there in the classpath, as can be 
> seen from the log:
> _
> 2016-02-24 22:38:32,538 INFO  [main] util.ServerCommandLine: 
> env:CLASSPATH=/home/hduser/hbase/hbase-1.1.3:/home/hduser/hbase/hbase-1.1.3/lib/activation-1.1.jar:/home/hduser/hbase/hbase-1.1.3/lib/aopalliance-1.0.jar:/home/hduser/hbase/hbase-1.1.3/lib/apacheds-i18n-2.0.0-M15.jar:/home/hduser/hbase/hbase-1.1.3/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/home/hduser/hbase/hbase-1.1.3/lib/api-asn1-api-1.0.0-M20.jar:/home/hduser/hbase/hbase-1.1.3/lib/api-util-1.0.0-M20.jar:/home/hduser/hbase/hbase-1.1.3/lib/asm-3.1.jar:/home/hduse

[jira] [Created] (HBASE-15348) Fix tests broken by recent metrics re-work

2016-02-26 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-15348:
-

 Summary: Fix tests broken by recent metrics re-work
 Key: HBASE-15348
 URL: https://issues.apache.org/jira/browse/HBASE-15348
 Project: HBase
  Issue Type: Bug
  Components: metrics, test
Reporter: Elliott Clark


Counts are approximate and go away. We should re-work the tests or test utils to 
make them work now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15322) HBase 1.1.3 crashing

2016-02-26 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-15322:
---
Priority: Critical  (was: Major)

> HBase 1.1.3 crashing
> 
>
> Key: HBASE-15322
> URL: https://issues.apache.org/jira/browse/HBASE-15322
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.1.3
> Environment: OS: Ubuntu 14.04/Ubuntu 15.10  
> JDK: OpenJDK8/OpenJDK9
>Reporter: Anant Sharma
>Priority: Critical
>
> HBase crashes in standalone mode with the following log:
> __
> 2016-02-24 22:38:37,578 ERROR [main] master.HMasterCommandLine: Master exiting
> java.lang.RuntimeException: Failed construction of Master: class 
> org.apache.hadoop.hbase.master.HMaster
> at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2341)
> at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:233)
> at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
> at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2355)
> Caused by: java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.hadoop.hbase.util.Bytes$LexicographicalComparerHolder$UnsafeComparer
> at org.apache.hadoop.hbase.util.Bytes.putInt(Bytes.java:899)
> at 
> org.apache.hadoop.hbase.KeyValue.createByteArray(KeyValue.java:1082)
> at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:652)
> at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:580)
> at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:483)
> at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:370)
> at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:267)
> at org.apache.hadoop.hbase.HConstants.(HConstants.java:978)
> at 
> org.apache.hadoop.hbase.HTableDescriptor.(HTableDescriptor.java:1488)
> at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.(FSTableDescriptors.java:124)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.(HRegionServer.java:570)
> at org.apache.hadoop.hbase.master.HMaster.(HMaster.java:365)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2336)
> __
> The class is in the hbase-common.jar and it's there in the classpath, as can be 
> seen from the log:
> _
> 2016-02-24 22:38:32,538 INFO  [main] util.ServerCommandLine: 
> env:CLASSPATH=/home/hduser/hbase/hbase-1.1.3:/home/hduser/hbase/hbase-1.1.3/lib/activation-1.1.jar:/home/hduser/hbase/hbase-1.1.3/lib/aopalliance-1.0.jar:/home/hduser/hbase/hbase-1.1.3/lib/apacheds-i18n-2.0.0-M15.jar:/home/hduser/hbase/hbase-1.1.3/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/home/hduser/hbase/hbase-1.1.3/lib/api-asn1-api-1.0.0-M20.jar:/home/hduser/hbase/hbase-1.1.3/lib/api-util-1.0.0-M20.jar:/home/hduser/hbase/hbase-1.1.3/lib/asm-3.1.jar:/home/hduser/hbase/hbase-1.1.3/lib/avro-1.7.4.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-beanutils-1.7.0.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-beanutils-core-1.8.0.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-cli-1.2.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-codec-1.9.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-collections-3.2.2.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-compress-1.4.1.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-configuration-1.6.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-daemon-1.0.13.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-digester-1.8.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-el-1.0.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-httpclient-3.1.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-io-2.4.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-lang-2.6.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-logging-1.2.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-math-2.2.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-math3-3.1.1.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-net-3.1.jar:/home/hduser/hbase/hbase-1.1.3/lib/disruptor-3.3.0.jar:/home/hduser/hbase/hbase-1.1.3/lib/findbugs-annotations-1.3.9-1.jar:/home/hduser/hbase/hbase-1

[jira] [Commented] (HBASE-15322) HBase 1.1.3 crashing

2016-02-26 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169315#comment-15169315
 ] 

Anoop Sam John commented on HBASE-15322:


Even if you have the sun Unsafe class and it is in your classpath, I believe 
this class is what the system can not load. Even if Unsafe is not available 
we should not get these exceptions, but now we are. I think I know 
the reason. It has been some time since we broke this, I fear! It started with 
one jira ticket, and many following jiras added to that... 
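
For context, the defensive pattern alluded to here looks roughly like the sketch below 
(not the actual Bytes/UnsafeComparer code): the probe must never let an error escape the 
static initializer, because once class initialization fails, every later reference 
reports the "Could not initialize class ..." NoClassDefFoundError seen in this issue.

{code}
public final class UnsafeProbeSketch {
  /** True only if sun.misc.Unsafe could actually be loaded. */
  static final boolean UNSAFE_AVAILABLE;

  static {
    boolean available;
    try {
      Class.forName("sun.misc.Unsafe");
      available = true;
    } catch (Throwable t) {
      // Swallow everything: falling back to a safe comparer is better than
      // leaving the class permanently uninitializable.
      available = false;
    }
    UNSAFE_AVAILABLE = available;
  }

  public static void main(String[] args) {
    System.out.println("Unsafe available: " + UNSAFE_AVAILABLE);
  }
}
{code}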

> HBase 1.1.3 crashing
> 
>
> Key: HBASE-15322
> URL: https://issues.apache.org/jira/browse/HBASE-15322
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.1.3
> Environment: OS: Ubuntu 14.04/Ubuntu 15.10  
> JDK: OpenJDK8/OpenJDK9
>Reporter: Anant Sharma
>
> HBase crashes in standalone mode with the following log:
> __
> 2016-02-24 22:38:37,578 ERROR [main] master.HMasterCommandLine: Master exiting
> java.lang.RuntimeException: Failed construction of Master: class 
> org.apache.hadoop.hbase.master.HMaster
> at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2341)
> at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:233)
> at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
> at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2355)
> Caused by: java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.hadoop.hbase.util.Bytes$LexicographicalComparerHolder$UnsafeComparer
> at org.apache.hadoop.hbase.util.Bytes.putInt(Bytes.java:899)
> at 
> org.apache.hadoop.hbase.KeyValue.createByteArray(KeyValue.java:1082)
> at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:652)
> at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:580)
> at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:483)
> at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:370)
> at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:267)
> at org.apache.hadoop.hbase.HConstants.(HConstants.java:978)
> at 
> org.apache.hadoop.hbase.HTableDescriptor.(HTableDescriptor.java:1488)
> at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.(FSTableDescriptors.java:124)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.(HRegionServer.java:570)
> at org.apache.hadoop.hbase.master.HMaster.(HMaster.java:365)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2336)
> __
> The class is in the hbase-common.jar and it's there in the classpath, as can be 
> seen from the log:
> _
> 2016-02-24 22:38:32,538 INFO  [main] util.ServerCommandLine: 
> env:CLASSPATH=/home/hduser/hbase/hbase-1.1.3:/home/hduser/hbase/hbase-1.1.3/lib/activation-1.1.jar:/home/hduser/hbase/hbase-1.1.3/lib/aopalliance-1.0.jar:/home/hduser/hbase/hbase-1.1.3/lib/apacheds-i18n-2.0.0-M15.jar:/home/hduser/hbase/hbase-1.1.3/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/home/hduser/hbase/hbase-1.1.3/lib/api-asn1-api-1.0.0-M20.jar:/home/hduser/hbase/hbase-1.1.3/lib/api-util-1.0.0-M20.jar:/home/hduser/hbase/hbase-1.1.3/lib/asm-3.1.jar:/home/hduser/hbase/hbase-1.1.3/lib/avro-1.7.4.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-beanutils-1.7.0.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-beanutils-core-1.8.0.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-cli-1.2.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-codec-1.9.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-collections-3.2.2.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-compress-1.4.1.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-configuration-1.6.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-daemon-1.0.13.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-digester-1.8.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-el-1.0.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-httpclient-3.1.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-io-2.4.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-lang-2.6.jar:/home/hd

[jira] [Commented] (HBASE-15205) Do not find the replication scope for every WAL#append()

2016-02-26 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169318#comment-15169318
 ] 

ramkrishna.s.vasudevan commented on HBASE-15205:


The test failures are unrelated. Will commit this version of the patch, _12.

> Do not find the replication scope for every WAL#append()
> 
>
> Key: HBASE-15205
> URL: https://issues.apache.org/jira/browse/HBASE-15205
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15204_6.patch, HBASE-15205.patch, 
> HBASE-15205_1.patch, HBASE-15205_10.patch, HBASE-15205_11.patch, 
> HBASE-15205_12.patch, HBASE-15205_2.patch, HBASE-15205_3.patch, 
> HBASE-15205_4.patch, HBASE-15205_6.patch, HBASE-15205_6.patch, 
> HBASE-15205_7.patch, HBASE-15205_8.patch, HBASE-15205_9.patch, 
> ScopeWALEdits.jpg, ScopeWALEdits_afterpatch.jpg
>
>
> After byte[] and char[], the other top contributor to GC (though it is only 
> 2.86%) is UTF_8.newDecoder.
> This happens because for every WAL append we calculate the replication scope 
> associated with the families of the TableDescriptor. Doing this per WAL 
> append is very costly and creates a lot of garbage. 
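For context, a hedged sketch of the kind of caching this change is after (field and method names 
below are illustrative, not the committed patch): compute the per-family scope map once from the 
table descriptor and reuse it, instead of rebuilding it on every WAL#append().
{code}
// Hedged sketch (illustrative names): build the scope map once when the table
// descriptor is set, and reuse the cached map on every append.
private volatile NavigableMap<byte[], Integer> replicationScope;

private void rebuildReplicationScope(HTableDescriptor htd) {
  NavigableMap<byte[], Integer> scopes = new TreeMap<>(Bytes.BYTES_COMPARATOR);
  for (HColumnDescriptor family : htd.getFamilies()) {
    scopes.put(family.getName(), family.getScope());
  }
  this.replicationScope = scopes;   // no per-append allocation or UTF-8 decoding
}
{code}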



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15205) Do not find the replication scope for every WAL#append()

2016-02-26 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-15205:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks all for the reviews. Will keep watching the builds to be on the safe 
side. Committed to master.

> Do not find the replication scope for every WAL#append()
> 
>
> Key: HBASE-15205
> URL: https://issues.apache.org/jira/browse/HBASE-15205
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15204_6.patch, HBASE-15205.patch, 
> HBASE-15205_1.patch, HBASE-15205_10.patch, HBASE-15205_11.patch, 
> HBASE-15205_12.patch, HBASE-15205_2.patch, HBASE-15205_3.patch, 
> HBASE-15205_4.patch, HBASE-15205_6.patch, HBASE-15205_6.patch, 
> HBASE-15205_7.patch, HBASE-15205_8.patch, HBASE-15205_9.patch, 
> ScopeWALEdits.jpg, ScopeWALEdits_afterpatch.jpg
>
>
> After byte[] and char[], the other top contributor to GC (though it is only 
> 2.86%) is UTF_8.newDecoder.
> This happens because for every WAL append we calculate the replication scope 
> associated with the families of the TableDescriptor. Doing this per WAL 
> append is very costly and creates a lot of garbage. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15265) Implement an asynchronous FSHLog

2016-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169327#comment-15169327
 ] 

Hadoop QA commented on HBASE-15265:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 14 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
42s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s 
{color} | {color:green} master passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 5m 
27s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
35s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} master passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
3s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 7s 
{color} | {color:red} Patch generated 1 new checkstyle issues in hbase-common 
(total was 1, now 2). {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 4m 18s 
{color} | {color:red} Patch generated 25 new checkstyle issues in hbase-server 
(total was 111, now 113). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 5 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
23m 34s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
53s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 2m 43s 
{color} | {color:red} hbase-server-jdk1.8.0_72 with JDK v1.8.0_72 generated 3 
new issues (was 1, now 4). {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 3m 33s 
{color} | {color:red} hbase-server-jdk1.7.0_95 with JDK v1.7.0_95 generated 3 
new issues (was 1, now 4). {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 32s 
{color} | {color:green} hbase-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 0s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 52s 
{color} | {color:green} hbase-common in the patch passed with JDK v1.7.0_95. 

[jira] [Commented] (HBASE-15348) Fix tests broken by recent metrics re-work

2016-02-26 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169335#comment-15169335
 ] 

Elliott Clark commented on HBASE-15348:
---

Pushed a commit disabling the tests until they are fixed.

> Fix tests broken by recent metrics re-work
> --
>
> Key: HBASE-15348
> URL: https://issues.apache.org/jira/browse/HBASE-15348
> Project: HBase
>  Issue Type: Bug
>  Components: metrics, test
>Reporter: Elliott Clark
>
> Counts are approximate and can go away. We should re-work the tests or test 
> utils to make them work now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-15348) Fix tests broken by recent metrics re-work

2016-02-26 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark reassigned HBASE-15348:
-

Assignee: Elliott Clark

> Fix tests broken by recent metrics re-work
> --
>
> Key: HBASE-15348
> URL: https://issues.apache.org/jira/browse/HBASE-15348
> Project: HBase
>  Issue Type: Bug
>  Components: metrics, test
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>
> Counts are approximate and can go away. We should re-work the tests or test 
> utils to make them work now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15321) Ability to open a HRegion from hdfs snapshot.

2016-02-26 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HBASE-15321:
---
Status: Open  (was: Patch Available)

> Ability to open a HRegion from hdfs snapshot.
> -
>
> Key: HBASE-15321
> URL: https://issues.apache.org/jira/browse/HBASE-15321
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: churro morales
> Fix For: 2.0.0
>
> Attachments: HBASE-15321-v1.patch, HBASE-15321-v2.patch, 
> HBASE-15321.patch
>
>
> Now that hdfs snapshots are here, we started to run our mapreduce jobs over 
> hdfs snapshots.  The thing is, hdfs snapshots are read-only point-in-time 
> copies of the file system.  Thus we had to modify the section of code that 
> initializes the region internals in HRegion.  We have to skip cleanup of 
> certain directories if the HRegion is backed by an hdfs snapshot.  I have a 
> patch for trunk with some basic tests if folks are interested.  
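A hedged sketch of the idea (the read-only check and surrounding shape are hypothetical, not the 
patch itself): when the region's files live in a read-only hdfs snapshot, the open path skips the 
cleanup steps a writable open would run, since deletes fail on the snapshot filesystem.
{code}
boolean backedByHdfsSnapshot = isBackedByHdfsSnapshot(regionDir);  // hypothetical helper
if (!backedByHdfsSnapshot) {
  // normal writable-open path; these calls mirror HRegionFileSystem's cleanup helpers
  regionFileSystem.cleanupTempDir();
  regionFileSystem.cleanupAnySplitDetritus();
}
{code}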



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15321) Ability to open a HRegion from hdfs snapshot.

2016-02-26 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HBASE-15321:
---
Attachment: HBASE-15321-v2.patch

Fixed the checkstyle issues; those failing tests don't look related to this 
patch.  I'll kick off another Hadoop QA run.

> Ability to open a HRegion from hdfs snapshot.
> -
>
> Key: HBASE-15321
> URL: https://issues.apache.org/jira/browse/HBASE-15321
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: churro morales
> Fix For: 2.0.0
>
> Attachments: HBASE-15321-v1.patch, HBASE-15321-v2.patch, 
> HBASE-15321.patch
>
>
> Now that hdfs snapshots are here, we started to run our mapreduce jobs over 
> hdfs snapshots.  The thing is, hdfs snapshots are read-only point-in-time 
> copies of the file system.  Thus we had to modify the section of code that 
> initializes the region internals in HRegion.  We have to skip cleanup of 
> certain directories if the HRegion is backed by an hdfs snapshot.  I have a 
> patch for trunk with some basic tests if folks are interested.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15321) Ability to open a HRegion from hdfs snapshot.

2016-02-26 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HBASE-15321:
---
Status: Patch Available  (was: Open)

> Ability to open a HRegion from hdfs snapshot.
> -
>
> Key: HBASE-15321
> URL: https://issues.apache.org/jira/browse/HBASE-15321
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: churro morales
> Fix For: 2.0.0
>
> Attachments: HBASE-15321-v1.patch, HBASE-15321-v2.patch, 
> HBASE-15321.patch
>
>
> Now that hdfs snapshots are here, we started to run our mapreduce jobs over 
> hdfs snapshots.  The thing is, hdfs snapshots are read-only point-in-time 
> copies of the file system.  Thus we had to modify the section of code that 
> initializes the region internals in HRegion.  We have to skip cleanup of 
> certain directories if the HRegion is backed by an hdfs snapshot.  I have a 
> patch for trunk with some basic tests if folks are interested.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15338) Add a option to disable the data block cache for testing the performance of underlying file system

2016-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169371#comment-15169371
 ] 

Hadoop QA commented on HBASE-15338:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 9s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/latest/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
37s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} master passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
21s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
52s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} master passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} master passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
23m 42s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 7s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 48s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 210m 48s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | 
hadoop.hbase.regionserver.TestRegionServerMetrics |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.hbase.regionserver.TestRegionServerMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.9.1 Server=1.9.1 Imag

[jira] [Commented] (HBASE-15325) ResultScanner allowing partial result will miss the rest of the row if the region is moved between two rpc requests

2016-02-26 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169381#comment-15169381
 ] 

Phil Yang commented on HBASE-15325:
---

[~tedyu] Thank you for reviewing.
{quote}
What if allResultSkipped is true and the original condition, 
doneWithRegion(remainingResultSize, countdown, serverHasMoreResults)
&& (!partialResults.isEmpty() || possiblyNextScanner(countdown, values == 
null)), is false ?
{quote}
There is an assumption that if the cache is still empty after loadCache(), the 
scan is done. Before this patch, when we reach this do-while condition, there 
are two possible states: either this region is done and we need to continue the 
do-while loop and scan the next region to satisfy the caching number or 
maxScannerResultSize; or this region is not done, in which case we must have 
loaded some data into the cache and can exit the loop. However, in the second 
state, if we skip all cells because they have been seen before, the cache is 
still empty. So we cannot exit the loop and must scan this region one more 
time. That is why we need the allResultSkipped flag.

{quote}
Why is the above assignment needed ? rs and r reference the same Result, right ?
{quote}
My fault. I thought we should use the original size of the Result as 
estimatedHeapSizeOfResult. However, I think we should make sure loadCache() can 
finish only when the cache's size is larger than maxScannerResultSize, or we 
have enough results for caching, or the scan is done. So if we skip some cells 
because we have already seen them, we should set estimatedHeapSizeOfResult to 
the size of the filtered Result.
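A hedged sketch of the control flow being described (helper names are placeholders, not the exact 
ClientScanner code): loadCache() keeps issuing scan rpcs while everything that came back was 
skipped as already-seen, so it never returns with an empty cache unless the scan is really done.
{code}
Result[] values;
boolean allResultsSkipped;
do {
  values = nextScanRpc();                              // placeholder for one scanner call
  allResultsSkipped = cacheNonSkippedResults(values);  // true if every cell had been seen before
} while (allResultsSkipped                             // cache still empty: must retry this region
    || (regionIsDone(values) && !scanIsFinished()));   // region exhausted: move on to the next one
{code}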

> ResultScanner allowing partial result will miss the rest of the row if the 
> region is moved between two rpc requests
> ---
>
> Key: HBASE-15325
> URL: https://issues.apache.org/jira/browse/HBASE-15325
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
>Priority: Critical
> Attachments: 15325-test.txt, HBASE-15325-v1.txt
>
>
> HBASE-11544 allows a scan rpc to return part of a row, to reduce memory usage 
> for one rpc request. The client can setAllowPartial or setBatch to get several 
> cells of a row instead of the whole row.
> However, the scanner's state is saved on the server, and we need it to get the 
> next part when the previous result was partial. If the region is moved to 
> another RS, the client will get a NotServingRegionException and open a new 
> scanner on the new RS, which is treated as a new scan starting from the end of 
> this row. So the remaining cells of that row from the last result will be missing.
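For reference, client-side usage that exercises this partial-result path (standard 1.1+ client 
API; the Connection is assumed to already exist):
{code}
Scan scan = new Scan();
scan.setBatch(1);                          // at most one cell per returned Result
// alternatively: scan.setAllowPartialResults(true);
try (Table table = connection.getTable(TableName.valueOf("test_table"));
     ResultScanner scanner = table.getScanner(scan)) {
  for (Result piece : scanner) {
    // each Result is a piece of a row; the client library must not lose the
    // remaining pieces if the region moves between the rpcs that fetch them
    System.out.println(piece);
  }
}
{code}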



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15325) ResultScanner allowing partial result will miss the rest of the row if the region is moved between two rpc requests

2016-02-26 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169387#comment-15169387
 ] 

Ted Yu commented on HBASE-15325:


bq. if we skip some cells because we have seen them, we should set 
estimatedHeapSizeOfResult to the size of filtered Result.

I think so.

Looking forward to next patch.

> ResultScanner allowing partial result will miss the rest of the row if the 
> region is moved between two rpc requests
> ---
>
> Key: HBASE-15325
> URL: https://issues.apache.org/jira/browse/HBASE-15325
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
>Priority: Critical
> Attachments: 15325-test.txt, HBASE-15325-v1.txt
>
>
> HBASE-11544 allows a scan rpc to return part of a row, to reduce memory usage 
> for one rpc request. The client can setAllowPartial or setBatch to get several 
> cells of a row instead of the whole row.
> However, the scanner's state is saved on the server, and we need it to get the 
> next part when the previous result was partial. If the region is moved to 
> another RS, the client will get a NotServingRegionException and open a new 
> scanner on the new RS, which is treated as a new scan starting from the end of 
> this row. So the remaining cells of that row from the last result will be missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15332) Document how to take advantage of HDFS-6133 in HBase

2016-02-26 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-15332:

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to master.

> Document how to take advantage of HDFS-6133 in HBase
> 
>
> Key: HBASE-15332
> URL: https://issues.apache.org/jira/browse/HBASE-15332
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Fix For: 2.0.0
>
> Attachments: HBASE-15332-v1.patch, HBASE-15332.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15325) ResultScanner allowing partial result will miss the rest of the row if the region is moved between two rpc requests

2016-02-26 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-15325:
--
Attachment: HBASE-15325-v2.txt

upload v2 to fix bug and typo

> ResultScanner allowing partial result will miss the rest of the row if the 
> region is moved between two rpc requests
> ---
>
> Key: HBASE-15325
> URL: https://issues.apache.org/jira/browse/HBASE-15325
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
>Priority: Critical
> Attachments: 15325-test.txt, HBASE-15325-v1.txt, HBASE-15325-v2.txt
>
>
> HBASE-11544 allows a scan rpc to return part of a row, to reduce memory usage 
> for one rpc request. The client can setAllowPartial or setBatch to get several 
> cells of a row instead of the whole row.
> However, the scanner's state is saved on the server, and we need it to get the 
> next part when the previous result was partial. If the region is moved to 
> another RS, the client will get a NotServingRegionException and open a new 
> scanner on the new RS, which is treated as a new scan starting from the end of 
> this row. So the remaining cells of that row from the last result will be missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15338) Add a option to disable the data block cache for testing the performance of underlying file system

2016-02-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169393#comment-15169393
 ] 

stack commented on HBASE-15338:
---

Thank you all for persisting. There are too many configs in this area. You 
fellas seem to be doing a bit of weeding. That's great.

> Add a option to disable the data block cache for testing the performance of 
> underlying file system
> --
>
> Key: HBASE-15338
> URL: https://issues.apache.org/jira/browse/HBASE-15338
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15338-trunk-v1.diff, HBASE-15338-trunk-v2.diff, 
> HBASE-15338-trunk-v3.diff, HBASE-15338-trunk-v4.diff, 
> HBASE-15338-trunk-v5.diff, HBASE-15338-trunk-v6.diff
>
>
> When testing and comparing the performance of different file systems (HDFS, 
> Azure blob storage, AWS S3 and so on) for HBase, it's better to avoid the 
> effect of the HBase BlockCache and get the actual random-read latency when a 
> data block is read from the underlying file system. (Usually, the index block 
> and meta block should still be cached in memory during the testing.)
> So we add an option in CacheConfig to disable the data block cache.
> Suggestions are welcome~ Thanks
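For context, the narrower switches that already exist in the client API (real methods); the option 
proposed here would add a CacheConfig-level switch, whose exact property name is not reproduced in 
this sketch:
{code}
Scan scan = new Scan();
scan.setCacheBlocks(false);                // this scan does not populate the block cache

HColumnDescriptor family = new HColumnDescriptor("F");
family.setBlockCacheEnabled(false);        // disables the block cache for one column family
{code}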



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14918) In-Memory MemStore Flush and Compaction

2016-02-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169406#comment-15169406
 ] 

stack commented on HBASE-14918:
---

Move ByteRange into the prefix-tree module? If prefix-tree is enabled, 
offheaping will not work? Add warnings?

> In-Memory MemStore Flush and Compaction
> ---
>
> Key: HBASE-14918
> URL: https://issues.apache.org/jira/browse/HBASE-14918
> Project: HBase
>  Issue Type: Umbrella
>Affects Versions: 2.0.0
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Fix For: 0.98.18
>
> Attachments: CellBlocksSegmentDesign.pdf, MSLABMove.patch
>
>
> A memstore serves as the in-memory component of a store unit, absorbing all 
> updates to the store. From time to time these updates are flushed to a file 
> on disk, where they are compacted (by eliminating redundancies) and 
> compressed (i.e., written in a compressed format to reduce their storage 
> size).
> We aim to speed up data access, and therefore suggest applying an in-memory 
> memstore flush. That is, to flush the active in-memory segment into an 
> intermediate buffer where it can be accessed by the application. Data in the 
> buffer is subject to compaction and can be stored in any format that allows 
> it to take up less space in RAM. The less space the buffer consumes, the 
> longer it can reside in memory before data is flushed to disk, resulting in 
> better performance.
> Specifically, the optimization is beneficial for workloads with 
> medium-to-high key churn which incur many redundant cells, like persistent 
> messaging. 
> We suggest structuring the solution as 4 subtasks (respectively, patches): 
> (1) Infrastructure - refactoring of the MemStore hierarchy, introducing the 
> segment (StoreSegment) as a first-class citizen, and decoupling the memstore 
> scanner from the memstore implementation;
> (2) Adding a StoreServices facility at the region level to allow memstores to 
> update region counters and access the region-level synchronization mechanism;
> (3) Implementation of a new memstore (CompactingMemstore) with a non-optimized 
> immutable segment representation; and 
> (4) Memory optimization, including a compressed format representation and 
> off-heap allocations.
> This Jira continues the discussion in HBASE-13408.
> Design documents, evaluation results and previous patches can be found in 
> HBASE-13408. 
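A hedged conceptual sketch of the in-memory flush described above (all names are hypothetical, not 
the eventual CompactingMemstore API): the active segment is frozen into an immutable segment and 
appended to an in-memory pipeline, which a background task may compact before anything reaches disk.
{code}
void inMemoryFlush() {
  ImmutableSegment frozen = activeSegment.freeze();   // hypothetical: stop mutating, keep readable
  pipeline.add(frozen);                               // scanners read the pipeline plus the active segment
  if (pipeline.size() >= compactionThreshold) {
    pipeline.compactInMemory();                       // merge segments and drop redundant cells in RAM
  }
}
{code}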



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15312) Update the dependences of pom for mini cluster in HBase Book

2016-02-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169411#comment-15169411
 ] 

stack commented on HBASE-15312:
---

+1

> Update the dependences of pom for mini cluster in HBase Book
> 
>
> Key: HBASE-15312
> URL: https://issues.apache.org/jira/browse/HBASE-15312
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15312-trunk-v1.diff, HBASE-15312-trunk-v2.diff
>
>
> In the HBase book, the pom dependencies for the mini cluster are outdated 
> after version 0.96.
> See: 
> http://hbase.apache.org/book.html#_integration_testing_with_an_hbase_mini_cluster
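For reference, a hedged example of the single test-scoped dependency that post-0.96 module layouts 
use for mini-cluster tests (the version is a placeholder):
{code}
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-testing-util</artifactId>
  <version>${hbase.version}</version>
  <scope>test</scope>
</dependency>
{code}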



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15261) Make Throwable t in DaughterOpener volatile

2016-02-26 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-15261:

Status: Patch Available  (was: Open)

> Make Throwable t in DaughterOpener volatile
> ---
>
> Key: HBASE-15261
> URL: https://issues.apache.org/jira/browse/HBASE-15261
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Attachments: HBASE-15261-001.patch
>
>
> In the region split process, daughter regions are opened in different 
> threads; Throwable t is set in those threads and checked in the calling 
> thread. We need to make it volatile so the check will not miss any exceptions 
> from opening the daughter regions.
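A minimal sketch of the visibility pattern involved (illustrative shape, not the exact HRegion 
code): the opener thread writes the Throwable, the calling thread reads it after join(), so the 
field must be volatile.
{code}
class DaughterOpener extends Thread {
  private volatile Throwable t;        // written by this thread, read by the caller

  @Override
  public void run() {
    try {
      // ... open the daughter region (elided) ...
    } catch (Throwable e) {
      this.t = e;                      // volatile write: visible to the checking thread
    }
  }

  Throwable getThrowable() {
    return t;                          // volatile read in the calling thread
  }
}
{code}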



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15325) ResultScanner allowing partial result will miss the rest of the row if the region is moved between two rpc requests

2016-02-26 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169454#comment-15169454
 ] 

Phil Yang commented on HBASE-15325:
---

[~cuijianwei] Agree with you, there is another bug.

I am wondering what the correct definition of "isPartial" is.

In comment, 
{quote}
Partial results contain a subset of the cells for a row and should be combined 
with a result representing the remaining cells in that row to form a complete 
(non-partial) result.
{quote}

But in fact, if a Result contains the last part of a row's cells, isPartial will 
return false, right?

And if we fix the batching bug as you said, we'll have an interesting situation: 
if we set batch to 1, we will get cells one by one and they are all partial, 
even the last one, because we reach the batch limit before the end-of-row check. 
Is that acceptable?

> ResultScanner allowing partial result will miss the rest of the row if the 
> region is moved between two rpc requests
> ---
>
> Key: HBASE-15325
> URL: https://issues.apache.org/jira/browse/HBASE-15325
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
>Priority: Critical
> Attachments: 15325-test.txt, HBASE-15325-v1.txt, HBASE-15325-v2.txt
>
>
> HBASE-11544 allows a scan rpc to return part of a row, to reduce memory usage 
> for one rpc request. The client can setAllowPartial or setBatch to get several 
> cells of a row instead of the whole row.
> However, the scanner's state is saved on the server, and we need it to get the 
> next part when the previous result was partial. If the region is moved to 
> another RS, the client will get a NotServingRegionException and open a new 
> scanner on the new RS, which is treated as a new scan starting from the end of 
> this row. So the remaining cells of that row from the last result will be missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15181) A simple implementation of date based tiered compaction

2016-02-26 Thread Clara Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Clara Xiong updated HBASE-15181:

Attachment: HBASE-15181-master-v4.patch

> A simple implementation of date based tiered compaction
> ---
>
> Key: HBASE-15181
> URL: https://issues.apache.org/jira/browse/HBASE-15181
> Project: HBase
>  Issue Type: New Feature
>  Components: Compaction
>Reporter: Clara Xiong
>Assignee: Clara Xiong
> Fix For: 2.0.0, 1.3.0, 0.98.19
>
> Attachments: HBASE-15181-master-v1.patch, 
> HBASE-15181-master-v2.patch, HBASE-15181-master-v3.patch, 
> HBASE-15181-master-v4.patch, HBASE-15181-v1.patch, HBASE-15181-v2.patch
>
>
> This is a simple implementation of date-based tiered compaction similar to 
> Cassandra's for the following benefits:
> 1. Improve date-range-based scan by structuring store files in date-based 
> tiered layout.
> 2. Reduce compaction overhead.
> 3. Improve TTL efficiency.
> Perfect fit for use cases that:
> 1. have mostly date-based data writes and scans, with a focus on the most 
> recent data. 
> 2. never or rarely delete data.
> Out-of-order writes are handled gracefully. Time range overlapping among 
> store files is tolerated and the performance impact is minimized.
> Configuration can be set in hbase-site.xml or overridden at the per-table or 
> per-column-family level via the hbase shell.
> Design spec is at 
> https://docs.google.com/document/d/1_AmlNb2N8Us1xICsTeGDLKIqL6T-oHoRLZ323MG_uy8/edit?usp=sharing
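A hedged conceptual sketch of the tiering idea (hypothetical method, not the committed policy): a 
file's age, taken from its newest timestamp, decides which time window it falls into; windows grow 
by a fixed factor per tier, and only files in the same window are compacted together.
{code}
static long windowSizeFor(long maxFileTimestampMs, long baseWindowMs, int windowsPerTier) {
  long window = baseWindowMs;
  long age = System.currentTimeMillis() - maxFileTimestampMs;
  while (age > window * windowsPerTier) {
    window *= windowsPerTier;          // older data lands in exponentially coarser windows
  }
  return window;
}
{code}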



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15128) Disable region splits and merges switch in master

2016-02-26 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169495#comment-15169495
 ] 

Ted Yu commented on HBASE-15128:


I may have limited bandwidth in reviewing this issue.

Heng:
You can integrate.

> Disable region splits and merges switch in master
> -
>
> Key: HBASE-15128
> URL: https://issues.apache.org/jira/browse/HBASE-15128
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Heng Chen
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15128.patch, HBASE-15128_v1.patch, 
> HBASE-15128_v3.patch, HBASE-15128_v5.patch, HBASE-15128_v6.patch, 
> HBASE-15128_v7.patch, HBASE-15128_v8.patch
>
>
> In large clusters where region splits are frequent and HBCK runs take 
> longer, the concurrent splits cause further problems in HBCK, since HBCK 
> assumes a static state for the region partition map. We have just seen a case 
> where HBCK undoes a concurrently splitting region, causing the number of 
> inconsistencies to go up. 
> We can have a mode in master where splits and merges are disabled, like the 
> balancer and catalog janitor switches. Master will reject the split requests 
> if regionservers decide to split. This switch can be turned on / off by the 
> admins and also automatically by HBCK while it is running (similar to the 
> balancer switch being disabled by HBCK). 
> HBCK should also disable the Catalog Janitor just in case. 
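For context, the existing shell switches this proposal is modeled on (real commands); the new 
split/merge switch would follow the same pattern, though its exact command name is not asserted 
here:
{code}
hbase(main):001:0> balance_switch false
hbase(main):002:0> catalogjanitor_switch false
{code}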



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15187) Integrate CSRF prevention filter to REST gateway

2016-02-26 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15187:
---
Attachment: HBASE-15187-branch-1.v13.patch

> Integrate CSRF prevention filter to REST gateway
> 
>
> Key: HBASE-15187
> URL: https://issues.apache.org/jira/browse/HBASE-15187
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: HBASE-15187-branch-1.v13.patch, HBASE-15187.v1.patch, 
> HBASE-15187.v10.patch, HBASE-15187.v11.patch, HBASE-15187.v12.patch, 
> HBASE-15187.v13.patch, HBASE-15187.v2.patch, HBASE-15187.v3.patch, 
> HBASE-15187.v4.patch, HBASE-15187.v5.patch, HBASE-15187.v6.patch, 
> HBASE-15187.v7.patch, HBASE-15187.v8.patch, HBASE-15187.v9.patch
>
>
> HADOOP-12691 introduced a filter in Hadoop Common to help REST APIs guard 
> against cross-site request forgery attacks.
> This issue tracks the integration of that filter into HBase REST gateway.
> From the REST section of the refguide:
> To delete a table, use a DELETE request with the /schema endpoint:
> http://example.com:8000/schema
> Suppose an attacker hosts a malicious web form on a domain under his control. 
> The form uses the DELETE action targeting a REST URL. Through social 
> engineering, the attacker tricks an authenticated user into accessing the 
> form and submitting it.
> The browser sends the HTTP DELETE request to the REST gateway.
> At the REST gateway, the call is executed and the user's table is dropped.
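A hedged illustration of how such a filter is satisfied (plain java.net code; the header name below 
follows the Hadoop filter's default and is stated as an assumption): a legitimate client adds the 
custom header to state-changing calls, while a cross-site form submitted from a browser cannot set 
it, so the gateway can reject the forged request.
{code}
URL url = new URL("http://example.com:8000/schema");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("DELETE");
conn.setRequestProperty("X-XSRF-HEADER", "delete-table");   // assumed default header name
int status = conn.getResponseCode();   // without the header the filter would reject the call
{code}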



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15325) ResultScanner allowing partial result will miss the rest of the row if the region is moved between two rpc requests

2016-02-26 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169501#comment-15169501
 ] 

Phil Yang commented on HBASE-15325:
---

And I find that my tests may fail randomly, with low probability, on the "c4" 
assertion in every partial-result test. I used to think moving the region 
breaks MVCC because the new scanner will see edits made after the old scanner 
was created. But that is not always true; maybe because we may move the region 
from server A to B and then back to A, the 3rd scanner may reuse the 1st 
scanner?

> ResultScanner allowing partial result will miss the rest of the row if the 
> region is moved between two rpc requests
> ---
>
> Key: HBASE-15325
> URL: https://issues.apache.org/jira/browse/HBASE-15325
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
>Priority: Critical
> Attachments: 15325-test.txt, HBASE-15325-v1.txt, HBASE-15325-v2.txt
>
>
> HBASE-11544 allows a scan rpc to return part of a row, to reduce memory usage 
> for one rpc request. The client can setAllowPartial or setBatch to get several 
> cells of a row instead of the whole row.
> However, the scanner's state is saved on the server, and we need it to get the 
> next part when the previous result was partial. If the region is moved to 
> another RS, the client will get a NotServingRegionException and open a new 
> scanner on the new RS, which is treated as a new scan starting from the end of 
> this row. So the remaining cells of that row from the last result will be missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15322) HBase 1.1.3 crashing

2016-02-26 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169529#comment-15169529
 ] 

Nick Dimiduk commented on HBASE-15322:
--

You think you have a solution mighty [~anoop.hbase]?

> HBase 1.1.3 crashing
> 
>
> Key: HBASE-15322
> URL: https://issues.apache.org/jira/browse/HBASE-15322
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.1.3
> Environment: OS: Ubuntu 14.04/Ubuntu 15.10  
> JDK: OpenJDK8/OpenJDK9
>Reporter: Anant Sharma
>Priority: Critical
> Fix For: 1.1.4
>
>
> HBase crashes in standalone mode with the following log:
> __
> 2016-02-24 22:38:37,578 ERROR [main] master.HMasterCommandLine: Master exiting
> java.lang.RuntimeException: Failed construction of Master: class 
> org.apache.hadoop.hbase.master.HMaster
> at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2341)
> at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:233)
> at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
> at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2355)
> Caused by: java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.hadoop.hbase.util.Bytes$LexicographicalComparerHolder$UnsafeComparer
> at org.apache.hadoop.hbase.util.Bytes.putInt(Bytes.java:899)
> at 
> org.apache.hadoop.hbase.KeyValue.createByteArray(KeyValue.java:1082)
> at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:652)
> at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:580)
> at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:483)
> at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:370)
> at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:267)
> at org.apache.hadoop.hbase.HConstants.<clinit>(HConstants.java:978)
> at 
> org.apache.hadoop.hbase.HTableDescriptor.<clinit>(HTableDescriptor.java:1488)
> at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.<init>(FSTableDescriptors.java:124)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:570)
> at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:365)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2336)
> __
> The class is in the hbase-common.jar and it's there in the classpath, as can be 
> seen from the log:
> _
> 2016-02-24 22:38:32,538 INFO  [main] util.ServerCommandLine: 
> env:CLASSPATH=/home/hduser/hbase/hbase-1.1.3:/home/hduser/hbase/hbase-1.1.3/lib/activation-1.1.jar:/home/hduser/hbase/hbase-1.1.3/lib/aopalliance-1.0.jar:/home/hduser/hbase/hbase-1.1.3/lib/apacheds-i18n-2.0.0-M15.jar:/home/hduser/hbase/hbase-1.1.3/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/home/hduser/hbase/hbase-1.1.3/lib/api-asn1-api-1.0.0-M20.jar:/home/hduser/hbase/hbase-1.1.3/lib/api-util-1.0.0-M20.jar:/home/hduser/hbase/hbase-1.1.3/lib/asm-3.1.jar:/home/hduser/hbase/hbase-1.1.3/lib/avro-1.7.4.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-beanutils-1.7.0.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-beanutils-core-1.8.0.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-cli-1.2.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-codec-1.9.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-collections-3.2.2.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-compress-1.4.1.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-configuration-1.6.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-daemon-1.0.13.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-digester-1.8.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-el-1.0.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-httpclient-3.1.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-io-2.4.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-lang-2.6.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-logging-1.2.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-math-2.2.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-math3-3.1.1.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-net-3.1.jar:/home/hduser/hbase/hbase-1.1.3/lib/disruptor-3.3

[jira] [Updated] (HBASE-15322) HBase 1.1.3 crashing

2016-02-26 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-15322:
-
Fix Version/s: 1.1.4

> HBase 1.1.3 crashing
> 
>
> Key: HBASE-15322
> URL: https://issues.apache.org/jira/browse/HBASE-15322
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.1.3
> Environment: OS: Ubuntu 14.04/Ubuntu 15.10  
> JDK: OpenJDK8/OpenJDK9
>Reporter: Anant Sharma
>Priority: Critical
> Fix For: 1.1.4
>
>
> HBase crashes in standalone mode with the following log:
> __
> 2016-02-24 22:38:37,578 ERROR [main] master.HMasterCommandLine: Master exiting
> java.lang.RuntimeException: Failed construction of Master: class 
> org.apache.hadoop.hbase.master.HMaster
> at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2341)
> at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:233)
> at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
> at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2355)
> Caused by: java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.hadoop.hbase.util.Bytes$LexicographicalComparerHolder$UnsafeComparer
> at org.apache.hadoop.hbase.util.Bytes.putInt(Bytes.java:899)
> at 
> org.apache.hadoop.hbase.KeyValue.createByteArray(KeyValue.java:1082)
> at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:652)
> at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:580)
> at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:483)
> at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:370)
> at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:267)
> at org.apache.hadoop.hbase.HConstants.<clinit>(HConstants.java:978)
> at 
> org.apache.hadoop.hbase.HTableDescriptor.<clinit>(HTableDescriptor.java:1488)
> at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.<init>(FSTableDescriptors.java:124)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:570)
> at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:365)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2336)
> __
> The class is in the hbase-common.jar and it's there in the classpath, as can be 
> seen from the log:
> _
> 2016-02-24 22:38:32,538 INFO  [main] util.ServerCommandLine: 
> env:CLASSPATH=/home/hduser/hbase/hbase-1.1.3:/home/hduser/hbase/hbase-1.1.3/lib/activation-1.1.jar:/home/hduser/hbase/hbase-1.1.3/lib/aopalliance-1.0.jar:/home/hduser/hbase/hbase-1.1.3/lib/apacheds-i18n-2.0.0-M15.jar:/home/hduser/hbase/hbase-1.1.3/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/home/hduser/hbase/hbase-1.1.3/lib/api-asn1-api-1.0.0-M20.jar:/home/hduser/hbase/hbase-1.1.3/lib/api-util-1.0.0-M20.jar:/home/hduser/hbase/hbase-1.1.3/lib/asm-3.1.jar:/home/hduser/hbase/hbase-1.1.3/lib/avro-1.7.4.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-beanutils-1.7.0.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-beanutils-core-1.8.0.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-cli-1.2.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-codec-1.9.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-collections-3.2.2.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-compress-1.4.1.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-configuration-1.6.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-daemon-1.0.13.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-digester-1.8.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-el-1.0.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-httpclient-3.1.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-io-2.4.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-lang-2.6.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-logging-1.2.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-math-2.2.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-math3-3.1.1.jar:/home/hduser/hbase/hbase-1.1.3/lib/commons-net-3.1.jar:/home/hduser/hbase/hbase-1.1.3/lib/disruptor-3.3.0.jar:/home/hduser/hbase/hbase-1.1.3/lib/findbugs-annotations-1.3.9-1.jar:/home/hduse

[jira] [Updated] (HBASE-15325) ResultScanner allowing partial result will miss the rest of the row if the region is moved between two rpc requests

2016-02-26 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-15325:
--
Attachment: HBASE-15325-v3.txt

fix batching bug

> ResultScanner allowing partial result will miss the rest of the row if the 
> region is moved between two rpc requests
> ---
>
> Key: HBASE-15325
> URL: https://issues.apache.org/jira/browse/HBASE-15325
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
>Priority: Critical
> Attachments: 15325-test.txt, HBASE-15325-v1.txt, HBASE-15325-v2.txt, 
> HBASE-15325-v3.txt
>
>
> HBASE-11544 allows a scan rpc to return part of a row, to reduce memory usage 
> for one rpc request. The client can setAllowPartial or setBatch to get several 
> cells of a row instead of the whole row.
> However, the scanner's state is saved on the server, and we need it to get the 
> next part when the previous result was partial. If the region is moved to 
> another RS, the client will get a NotServingRegionException and open a new 
> scanner on the new RS, which is treated as a new scan starting from the end of 
> this row. So the remaining cells of that row from the last result will be missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15325) ResultScanner allowing partial result will miss the rest of the row if the region is moved between two rpc requests

2016-02-26 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169631#comment-15169631
 ] 

Ted Yu commented on HBASE-15325:


Can you create a review board request?

First upload patch v1, publish the request, and then upload patch v3.

That would make the difference between the two versions clearer.

> ResultScanner allowing partial result will miss the rest of the row if the 
> region is moved between two rpc requests
> ---
>
> Key: HBASE-15325
> URL: https://issues.apache.org/jira/browse/HBASE-15325
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
>Priority: Critical
> Attachments: 15325-test.txt, HBASE-15325-v1.txt, HBASE-15325-v2.txt, 
> HBASE-15325-v3.txt
>
>
> HBASE-11544 allows a scan rpc to return part of a row, to reduce memory usage 
> for one rpc request. The client can setAllowPartial or setBatch to get several 
> cells of a row instead of the whole row.
> However, the scanner's state is saved on the server, and we need it to get the 
> next part when the previous result was partial. If the region is moved to 
> another RS, the client will get a NotServingRegionException and open a new 
> scanner on the new RS, which is treated as a new scan starting from the end of 
> this row. So the remaining cells of that row from the last result will be missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14030) HBase Backup/Restore Phase 1

2016-02-26 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14030:
---
Status: Patch Available  (was: Open)

> HBase Backup/Restore Phase 1
> 
>
> Key: HBASE-14030
> URL: https://issues.apache.org/jira/browse/HBASE-14030
> Project: HBase
>  Issue Type: Umbrella
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-14030-v0.patch, HBASE-14030-v1.patch, 
> HBASE-14030-v10.patch, HBASE-14030-v11.patch, HBASE-14030-v12.patch, 
> HBASE-14030-v13.patch, HBASE-14030-v14.patch, HBASE-14030-v15.patch, 
> HBASE-14030-v17.patch, HBASE-14030-v18.patch, HBASE-14030-v2.patch, 
> HBASE-14030-v20.patch, HBASE-14030-v21.patch, HBASE-14030-v22.patch, 
> HBASE-14030-v23.patch, HBASE-14030-v24.patch, HBASE-14030-v25.patch, 
> HBASE-14030-v26.patch, HBASE-14030-v27.patch, HBASE-14030-v28.patch, 
> HBASE-14030-v3.patch, HBASE-14030-v30.patch, HBASE-14030-v35.patch, 
> HBASE-14030-v4.patch, HBASE-14030-v5.patch, HBASE-14030-v6.patch, 
> HBASE-14030-v7.patch, HBASE-14030-v8.patch
>
>
> This is the umbrella ticket for Backup/Restore Phase 1. See HBASE-7912 design 
> doc for the phase description.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15205) Do not find the replication scope for every WAL#append()

2016-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169679#comment-15169679
 ] 

Hudson commented on HBASE-15205:


FAILURE: Integrated in HBase-Trunk_matrix #741 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/741/])
HBASE-15205 Do not find the replication scope for every WAL#append() 
(ramkrishna: rev 8f2bd06019869a1738bcfd66066737cdb7802ca8)
* hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestSecureWAL.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALReaderOnSecureWAL.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/TestIOFencing.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestBulkLoad.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/wal/WALPerformanceEvaluation.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALActionsListener.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestDefaultWALProviderWithHLogKey.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplay.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestWALLockup.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALKey.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationBase.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALFactory.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionReplayEvents.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportExport.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestFSHLog.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/ScopeWALEntryFilter.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollingNoCluster.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationSmallTests.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/wal/FaultyFSLog.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALUtil.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDistributedLogSplitting.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALActionsListener.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestDefaultWALProvider.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/wal/DisabledWALProvider.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionServerBulkLoad.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationWALEntryFilters.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManager.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WAL.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollAbort.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestWALRecordReader.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationWALReaderManager.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogKey.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALSplitter.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSWALEntry.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/Replication.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHLogRecordReader.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/ReplicationProtbufUtil.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestWALObserver.java
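
From the title, the idea appears to be to stop deriving the per-family replication scope from the table descriptor on every WAL#append() and instead reuse a precomputed map. A rough, hypothetical sketch of that general approach (not the actual patch; the class and method names below are invented for illustration):

{code}
import java.util.Comparator;
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

// Hypothetical illustration only: build the family -> replication-scope map once
// per schema version instead of recomputing it from the table descriptor on every
// WAL append.
public class ReplicationScopeCache {

  // Unsigned lexicographic comparator for byte[] keys (column family names).
  private static final Comparator<byte[]> BYTES_CMP = (a, b) -> {
    int n = Math.min(a.length, b.length);
    for (int i = 0; i < n; i++) {
      int cmp = (a[i] & 0xff) - (b[i] & 0xff);
      if (cmp != 0) {
        return cmp;
      }
    }
    return a.length - b.length;
  };

  // family -> scope (0 = local only, 1 = replicate)
  private volatile NavigableMap<byte[], Integer> scopes = new TreeMap<>(BYTES_CMP);

  // Rebuild only when the schema changes (e.g. region open, alter table).
  public void refresh(Map<byte[], Integer> familyScopes) {
    NavigableMap<byte[], Integer> fresh = new TreeMap<>(BYTES_CMP);
    fresh.putAll(familyScopes);
    this.scopes = fresh;  // volatile write publishes the new snapshot
  }

  // Cheap per-append read: no per-call reconstruction of the map.
  public NavigableMap<byte[], Integer> scopes() {
    return scopes;
  }
}
{code}

The append path would then reuse the cached map when building the WAL key instead of walking the family descriptors on each call.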


> Do not find the replication scope for every WAL#append()
> --
>
> Key: HBASE-15205
> URL: https://issues.apache.org/jira/browse/HBASE-15205
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15204_6.patch, HBASE-15205.patch, 
> HBASE-15205_1.patch, HBASE-15205_10.patch, HBASE-15205_11.patch, 
> HBASE-15205_12.patch, HBASE-15205_2.patch, HBASE-15205_3.patch, 
> HBASE-15205_4.patch, HBASE-15205_6.patch, HBASE-15205_6.patch, 
> HBASE-15205_7.patch, HBASE-15205_8.patch, HBASE-15205_9.patch, 
> Scop

[jira] [Commented] (HBASE-15348) Fix tests broken by recent metrics re-work

2016-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15169680#comment-15169680
 ] 

Hudson commented on HBASE-15348:


FAILURE: Integrated in HBase-Trunk_matrix #741 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/741/])
HBASE-15348 Disable metrics tests until fixed. (eclark: rev 
e88d94318321d40993953180368d33d24602a2ae)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerMetrics.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/TestStochasticBalancerJmxMetrics.java


> Fix tests broken by recent metrics re-work
> --
>
> Key: HBASE-15348
> URL: https://issues.apache.org/jira/browse/HBASE-15348
> Project: HBase
>  Issue Type: Bug
>  Components: metrics, test
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>
> Counts are approximate and can go away. We should re-work the tests or test 
> utils to make them work now.
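
Since the counters are only approximate, one way such tests could be re-worked (a minimal, hypothetical sketch; the helper name is invented and this is not the actual fix) is to assert within a tolerance instead of on exact equality:

{code}
import static org.junit.Assert.assertTrue;

// Hypothetical helper for tests whose metric counters are only approximate.
public final class ApproxMetricAssert {

  private ApproxMetricAssert() {
  }

  // Passes when the observed counter is within +/- tolerance of the expected value.
  public static void assertCounterNear(String name, long expected, long actual, long tolerance) {
    assertTrue("Counter " + name + " expected ~" + expected + " (+/-" + tolerance
        + ") but was " + actual, Math.abs(actual - expected) <= tolerance);
  }
}
{code}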



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15349) Update surefire version to 2.19.1

2016-02-26 Thread Appy (JIRA)
Appy created HBASE-15349:


 Summary: Update surefire version to 2.19.1
 Key: HBASE-15349
 URL: https://issues.apache.org/jira/browse/HBASE-15349
 Project: HBase
  Issue Type: Improvement
Reporter: Appy
Assignee: Appy


So that new properties like surefire.excludesFile and includesFile can be used 
to easily exclude/include flaky tests.
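
For reference, a minimal usage sketch of those Surefire properties (the file name and patterns below are made up for illustration; how HBase actually wires up its flaky-test list is a separate question):

{code}
$ cat flaky-tests-to-exclude.txt     # hypothetical file, one test pattern per line
**/TestSomeFlakyThing.java
**/TestAnotherFlakyThing.java

$ mvn test -Dsurefire.excludesFile=flaky-tests-to-exclude.txt
{code}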



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15349) Update surefire version to 2.19.1

2016-02-26 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-15349:
-
Status: Patch Available  (was: Open)

> Update surefire version to 2.19.1
> -
>
> Key: HBASE-15349
> URL: https://issues.apache.org/jira/browse/HBASE-15349
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-15349.patch
>
>
> So that new properties like surefire.excludesFile and includesFile can be 
> used to easily exclude/include flaky tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15349) Update surefire version to 2.19.1

2016-02-26 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-15349:
-
Attachment: HBASE-15349.patch

> Update surefire version to 2.19.1
> -
>
> Key: HBASE-15349
> URL: https://issues.apache.org/jira/browse/HBASE-15349
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-15349.patch
>
>
> So that new properties like surefire.excludesFile and includesFile can be 
> used to easily exclude/include flaky tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

