[jira] [Commented] (HBASE-7663) [Per-KV security] Visibility labels

2013-11-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822253#comment-13822253
 ] 

Hadoop QA commented on HBASE-7663:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12613788/HBASE-7663_V8.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 10 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7857//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7857//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7857//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7857//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7857//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7857//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7857//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7857//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7857//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7857//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7857//console

This message is automatically generated.

> [Per-KV security] Visibility labels
> ---
>
> Key: HBASE-7663
> URL: https://issues.apache.org/jira/browse/HBASE-7663
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, security
>Affects Versions: 0.98.0
>Reporter: Andrew Purtell
>Assignee: Anoop Sam John
> Fix For: 0.98.0
>
> Attachments: HBASE-7663.patch, HBASE-7663_V2.patch, 
> HBASE-7663_V3.patch, HBASE-7663_V4.patch, HBASE-7663_V5.patch, 
> HBASE-7663_V6.patch, HBASE-7663_V7.patch, HBASE-7663_V8.patch
>
>
> Implement Accumulo-style visibility labels. Consider the following design 
> principles:
> - Coprocessor based implementation
> - Minimal to no changes to core code
> - Use KeyValue tags (HBASE-7448) to carry labels
> - Use OperationWithAttributes#{get,set}Attribute for handling visibility 
> labels in the API
> - Implement a new filter for evaluating visibility labels as KVs are streamed 
> through.
> This approach would be consistent in deployment and API details with other 
> per-KV security work, supporting environments where both might be employed, 
> even stacked on some tables.
> See the parent issue for more discussion.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HBASE-9969) Improve KeyValueHeap using loser tree

2013-11-14 Thread Chao Shi (JIRA)
Chao Shi created HBASE-9969:
---

 Summary: Improve KeyValueHeap using loser tree
 Key: HBASE-9969
 URL: https://issues.apache.org/jira/browse/HBASE-9969
 Project: HBase
  Issue Type: Improvement
Reporter: Chao Shi


LoserTree is a better data structure than a binary heap here: it saves half of 
the comparisons on each next(), though the time complexity is still O(log N).

Currently a scan or get goes through two KeyValueHeaps: one merges KVs read 
from multiple HFiles in a single store, the other merges results from multiple 
stores. This patch should improve both cases whenever CPU is the bottleneck 
(e.g. scan with filter over cached blocks, HBASE-9811).

All of the optimization work is done in KeyValueHeap and does not change its 
public interfaces. The new code is also cleaner and simpler to understand.
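As a rough illustration of the idea (a standalone sketch under assumed names, not the patch's actual code): a loser tree keeps only the losers of past comparisons in its internal nodes, so replacing the winner after each next() replays a single leaf-to-root path with one comparison per level, versus two child comparisons per level when sifting down a binary heap.

```java
import java.util.Arrays;

/** Standalone loser-tree k-way merge sketch; all names are illustrative. */
public class LoserTreeMerge {
    private final int[][] runs;  // sorted input runs (one per "scanner")
    private final int[] pos;     // read cursor per run
    private final int[] ls;      // ls[0] = current winner, ls[1..k-1] = losers
    private final int k;

    public LoserTreeMerge(int[][] runs) {
        this.runs = runs;
        this.k = runs.length;
        this.pos = new int[k];
        this.ls = new int[k];
        Arrays.fill(ls, -1);                 // -1 = sentinel that always wins
        for (int i = k - 1; i >= 0; i--) adjust(i);
    }

    // Current head key of run i; MAX_VALUE once the run is exhausted.
    private long key(int i) {
        if (i == -1) return Long.MIN_VALUE;  // build-time sentinel
        return pos[i] < runs[i].length ? runs[i][pos[i]] : Long.MAX_VALUE;
    }

    // Replay the path from leaf s to the root: one comparison per level.
    private void adjust(int s) {
        for (int t = (s + k) / 2; t > 0; t /= 2) {
            if (key(s) > key(ls[t])) {       // s loses: park it, carry winner up
                int tmp = s; s = ls[t]; ls[t] = tmp;
            }
        }
        ls[0] = s;
    }

    /** Pop the smallest head across all runs; Long.MAX_VALUE when drained. */
    public long next() {
        int w = ls[0];
        long v = key(w);
        if (v == Long.MAX_VALUE) return v;
        pos[w]++;                            // advance only the winning run
        adjust(w);                           // restore the tree along w's path
        return v;
    }
}
```

Feeding it the sorted runs {1,4,7}, {2,5,8}, {3,6,9} yields 1 through 9 in order while touching only one root-to-leaf path per pop.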






[jira] [Updated] (HBASE-9963) Remove the ReentrantReadWriteLock in the MemStore

2013-11-14 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9963:
---

Attachment: 9963.v3.patch

> Remove the ReentrantReadWriteLock in the MemStore
> -
>
> Key: HBASE-9963
> URL: https://issues.apache.org/jira/browse/HBASE-9963
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Minor
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9963.v1.patch, 9963.v2.patch, 9963.v3.patch
>
>
> If I'm not wrong, the MemStore is always used from the HStore. The code in 
> HStore takes a lock before calling into MemStore, so the lock in MemStore is 
> useless.
> For example, in HStore
> {code}
>   @Override
>   public long upsert(Iterable<Cell> cells, long readpoint) throws IOException {
>     this.lock.readLock().lock();
>     try {
>       return this.memstore.upsert(cells, readpoint);
>     } finally {
>       this.lock.readLock().unlock();
>     }
>   }
> {code}
> With this in MemStore
> {code}
>   public long upsert(Iterable<Cell> cells, long readpoint) {
>     this.lock.readLock().lock(); // <== Am I useful?
>     try {
>       long size = 0;
>       for (Cell cell : cells) {
>         size += upsert(cell, readpoint);
>       }
>       return size;
>     } finally {
>       this.lock.readLock().unlock();
>     }
>   }
> {code}
> I've checked: all the locks in MemStore are backed by a lock in HStore, the 
> only exception being
> {code}
>   void snapshot() {
>     this.memstore.snapshot();
>   }
> {code}
> And I would say it's a bug. If it's confirmed ([~lhofhansl], what do you 
> think?), I will add a lock there and remove all of the locks in MemStore. 
> They do show up in profiling.
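A minimal standalone sketch of the proposed arrangement (illustrative classes, not the actual HBase code): the outer store takes its lock around every call into the inner structure, including snapshot(), so the inner structure needs no lock of its own.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

/** Sketch only: Inner stands in for a lock-free MemStore, StoreLockSketch
 *  for an HStore that guards every entry point, including snapshot(). */
public class StoreLockSketch {
    public static class Inner {            // no internal lock anymore
        private long size = 0;
        public long upsert(long delta) { size += delta; return size; }
        public void snapshot() { size = 0; }  // previously the unguarded path
    }

    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final Inner inner = new Inner();

    public long upsert(long delta) {
        this.lock.readLock().lock();
        try {
            return inner.upsert(delta);
        } finally {
            this.lock.readLock().unlock();
        }
    }

    public void snapshot() {               // the proposed fix: guard this too
        this.lock.writeLock().lock();
        try {
            inner.snapshot();
        } finally {
            this.lock.writeLock().unlock();
        }
    }
}
```

With this shape there is exactly one lock acquisition per operation instead of two nested ones, which is what shows up as savings in profiling.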





[jira] [Updated] (HBASE-9963) Remove the ReentrantReadWriteLock in the MemStore

2013-11-14 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9963:
---

Status: Open  (was: Patch Available)

> Remove the ReentrantReadWriteLock in the MemStore
> -
>
> Key: HBASE-9963
> URL: https://issues.apache.org/jira/browse/HBASE-9963
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 0.96.0, 0.98.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Minor
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9963.v1.patch, 9963.v2.patch, 9963.v3.patch
>
>





[jira] [Updated] (HBASE-9963) Remove the ReentrantReadWriteLock in the MemStore

2013-11-14 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9963:
---

Status: Patch Available  (was: Open)

Tests work locally. v3 is a retry w/ the javadoc added.

> Remove the ReentrantReadWriteLock in the MemStore
> -
>
> Key: HBASE-9963
> URL: https://issues.apache.org/jira/browse/HBASE-9963
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 0.96.0, 0.98.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Minor
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9963.v1.patch, 9963.v2.patch, 9963.v3.patch
>
>





[jira] [Commented] (HBASE-3680) Publish more metrics about mslab

2013-11-14 Thread Asaf Mesika (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822261#comment-13822261
 ] 

Asaf Mesika commented on HBASE-3680:


I think you should reset the fix-version field, since it's not really fixed, 
and it's confusing.

> Publish more metrics about mslab
> 
>
> Key: HBASE-3680
> URL: https://issues.apache.org/jira/browse/HBASE-3680
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.90.1
>Reporter: Jean-Daniel Cryans
>Assignee: Todd Lipcon
> Fix For: 0.92.3
>
> Attachments: hbase-3680.txt, hbase-3680.txt
>
>
> We have been using mslab on all our clusters for a while now and it seems it 
> tends to OOME or send us into GC loops of death a lot more than it used to. 
> For example, one RS with mslab enabled and 7GB of heap died out of OOME this 
> afternoon; it had .55GB in the block cache and 2.03GB in the memstores which 
> doesn't account for much... but it could be that because of mslab a lot of 
> space was lost in those incomplete 2MB blocks and without metrics we can't 
> really tell. Compactions were running at the time of the OOME and I see block 
> cache activity. The average load on that cluster is 531.
> We should at least publish the total size of all those blocks and maybe even 
> take actions based on that (like force flushing).





[jira] [Updated] (HBASE-9969) Improve KeyValueHeap using loser tree

2013-11-14 Thread Chao Shi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Shi updated HBASE-9969:


Attachment: kvheap-benchmark.txt
kvheap-benchmark.png
hbase-9969.patch

I ran two benchmarks:

A) KeyValueHeapBenchmark class included in the patch

It simply constructs a KeyValueHeap from several 
CollectionBackedKeyValueScanners and measures how many next/reseek calls 
complete per second.

||scanners|| lt-next || lt-reseek || pq-next || pq-reseek ||
|1|17543859.6|3058104|18181818.2|1798561.2|
|2|11299435|5102040.8|11173184.4|3053435.1|
|3|8547008.5|4854368.9|7915567.3|2859866.5|
|4|7936507.9|4866180|5891016.2|2507837|
|5|6711409.4|4739336.5|4748338.1|2296738.6|

"lt-" denotes LoserTree based KeyValueHeap.
"pq-" denotes PriorityQueue based KeyValueHeap.
A complete result (with up to 19 scanners) is attached.

B) ColumnPaginationFilter with offset=1M
I ran a mini-cluster and put a huge number of columns on a single row. These 
columns are uniformly written to several HFiles. Then I query using 
ColumnPaginationFilter with offset = 1M. Blocks are cached, so the workload is 
CPU intensive. Qualifiers and values are 4-byte integers. The row key is 
"test_row". Blocks are not compressed.

The table below shows how long the scan takes.

|| hfiles || lt || pq ||
| 1 | 749.8 ms | 986.69 ms |
| 2 |1511.28 ms | 2190.97 ms |
| 3 |2392.8 ms | 4029.8 ms |
| 4 | 3318.8 ms | 5760.22 ms |



> Improve KeyValueHeap using loser tree
> -
>
> Key: HBASE-9969
> URL: https://issues.apache.org/jira/browse/HBASE-9969
> Project: HBase
>  Issue Type: Improvement
>Reporter: Chao Shi
> Attachments: hbase-9969.patch, kvheap-benchmark.png, 
> kvheap-benchmark.txt
>
>





[jira] [Commented] (HBASE-9811) ColumnPaginationFilter is slow when offset is large

2013-11-14 Thread Chao Shi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822264#comment-13822264
 ] 

Chao Shi commented on HBASE-9811:
-

HBASE-9969 is opened to improve performance of KeyValueHeap.

> ColumnPaginationFilter is slow when offset is large
> ---
>
> Key: HBASE-9811
> URL: https://issues.apache.org/jira/browse/HBASE-9811
> Project: HBase
>  Issue Type: Bug
>Reporter: Chao Shi
>
> Hi there, we are trying to migrate an app from MySQL to HBase. One kind of 
> query is pagination with a large offset and a small limit. We don't have too 
> many such queries, so both MySQL and HBase should survive. (MySQL has no 
> index for offset either.)
> When comparing the performance of both systems, we found something 
> interesting: write ~1M values in a single row, then query with offset = 1M, 
> so all values must be scanned on the RS side.
> When running the query on MySQL, the first query is pretty slow (more than 1 
> second), but repeating the same query becomes very low latency.
> HBase, on the other hand, does not benefit much from repeating the query 
> (~1s forever). I can confirm that all data is in the block cache and all the 
> time is spent on in-memory data processing. (We have flushed data to disk.)
> I found "reseek" is the hot spot. It is caused by ColumnPaginationFilter 
> returning NEXT_COL. If I replace this line by returning SKIP (which causes 
> next to be called rather than reseek), the latency is reduced to ~100ms.
> So I think there must be some room for optimization.
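The return-code distinction can be modeled with a standalone sketch (our own enum and class names, not HBase's Filter API): while consuming the offset, answering SKIP lets the scanner advance with a cheap sequential next(), whereas NEXT_COL forces a reseek through the KeyValueHeap for every skipped column.

```java
/** Toy pagination filter; the class and the Code enum are illustrative. */
public class ColumnPageSketch {
    public enum Code { SKIP, INCLUDE, NEXT_ROW }

    private final int offset;  // columns to skip before the page starts
    private final int limit;   // page size
    private int seen = 0;      // columns inspected so far in this row

    public ColumnPageSketch(int offset, int limit) {
        this.offset = offset;
        this.limit = limit;
    }

    /** Decide what to do with the next column of the current row. */
    public Code filterColumn() {
        int i = seen++;
        if (i < offset) return Code.SKIP;             // cheap sequential next()
        if (i < offset + limit) return Code.INCLUDE;  // part of the page
        return Code.NEXT_ROW;                         // page complete
    }
}
```

With offset = 1M, the difference between 1M cheap next() calls and 1M reseek() calls is exactly the hot spot described above.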





[jira] [Updated] (HBASE-9969) Improve KeyValueHeap using loser tree

2013-11-14 Thread Chao Shi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Shi updated HBASE-9969:


Status: Patch Available  (was: Open)

Submitted to Hadoop QA.

> Improve KeyValueHeap using loser tree
> -
>
> Key: HBASE-9969
> URL: https://issues.apache.org/jira/browse/HBASE-9969
> Project: HBase
>  Issue Type: Improvement
>Reporter: Chao Shi
> Attachments: hbase-9969.patch, kvheap-benchmark.png, 
> kvheap-benchmark.txt
>
>





[jira] [Commented] (HBASE-9969) Improve KeyValueHeap using loser tree

2013-11-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822285#comment-13822285
 ] 

Hadoop QA commented on HBASE-9969:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12613807/kvheap-benchmark.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7859//console

This message is automatically generated.

> Improve KeyValueHeap using loser tree
> -
>
> Key: HBASE-9969
> URL: https://issues.apache.org/jira/browse/HBASE-9969
> Project: HBase
>  Issue Type: Improvement
>Reporter: Chao Shi
> Attachments: hbase-9969.patch, kvheap-benchmark.png, 
> kvheap-benchmark.txt
>
>





[jira] [Updated] (HBASE-9869) Optimize HConnectionManager#getCachedLocation

2013-11-14 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9869:
---

Attachment: 9869.v2.patch

> Optimize HConnectionManager#getCachedLocation
> -
>
> Key: HBASE-9869
> URL: https://issues.apache.org/jira/browse/HBASE-9869
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9869.v1.patch, 9869.v1.patch, 9869.v2.patch
>
>
> Its javadoc says: "TODO: This method during writing consumes 15% of CPU 
> doing lookup". This is still true, according to YourKit. With 0.96, we also 
> spend more time in these methods: we retry more, and AsyncProcess calls it 
> in parallel.
> I don't have the patch for this yet, but I will spend some time on it.





[jira] [Updated] (HBASE-9869) Optimize HConnectionManager#getCachedLocation

2013-11-14 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9869:
---

Status: Open  (was: Patch Available)

> Optimize HConnectionManager#getCachedLocation
> -
>
> Key: HBASE-9869
> URL: https://issues.apache.org/jira/browse/HBASE-9869
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.96.0, 0.98.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9869.v1.patch, 9869.v1.patch, 9869.v2.patch
>
>





[jira] [Updated] (HBASE-9869) Optimize HConnectionManager#getCachedLocation

2013-11-14 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9869:
---

Status: Patch Available  (was: Open)

> Optimize HConnectionManager#getCachedLocation
> -
>
> Key: HBASE-9869
> URL: https://issues.apache.org/jira/browse/HBASE-9869
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.96.0, 0.98.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9869.v1.patch, 9869.v1.patch, 9869.v2.patch
>
>





[jira] [Updated] (HBASE-9959) Remove some array copy - server side

2013-11-14 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9959:
---

Status: Open  (was: Patch Available)

> Remove some array copy - server side
> 
>
> Key: HBASE-9959
> URL: https://issues.apache.org/jira/browse/HBASE-9959
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs, regionserver
>Affects Versions: 0.96.0, 0.98.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9959-trunk.v1.patch, 9959-trunk.v2.patch, 
> 9959-trunk.v2.patch, 9959-trunk.v2.patch, 9959.v1.patch
>
>






[jira] [Updated] (HBASE-9959) Remove some array copy - server side

2013-11-14 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9959:
---

Status: Patch Available  (was: Open)

> Remove some array copy - server side
> 
>
> Key: HBASE-9959
> URL: https://issues.apache.org/jira/browse/HBASE-9959
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs, regionserver
>Affects Versions: 0.96.0, 0.98.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9959-trunk.v1.patch, 9959-trunk.v2.patch, 
> 9959-trunk.v2.patch, 9959-trunk.v2.patch, 9959.v1.patch
>
>






[jira] [Updated] (HBASE-9959) Remove some array copy - server side

2013-11-14 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9959:
---

Attachment: 9959-trunk.v2.patch

> Remove some array copy - server side
> 
>
> Key: HBASE-9959
> URL: https://issues.apache.org/jira/browse/HBASE-9959
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs, regionserver
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9959-trunk.v1.patch, 9959-trunk.v2.patch, 
> 9959-trunk.v2.patch, 9959-trunk.v2.patch, 9959.v1.patch
>
>






[jira] [Commented] (HBASE-9963) Remove the ReentrantReadWriteLock in the MemStore

2013-11-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822307#comment-13822307
 ] 

Hadoop QA commented on HBASE-9963:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12613802/9963.v3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7858//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7858//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7858//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7858//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7858//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7858//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7858//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7858//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7858//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7858//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7858//console

This message is automatically generated.

> Remove the ReentrantReadWriteLock in the MemStore
> -
>
> Key: HBASE-9963
> URL: https://issues.apache.org/jira/browse/HBASE-9963
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Minor
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9963.v1.patch, 9963.v2.patch, 9963.v3.patch
>
>





[jira] [Commented] (HBASE-8163) MemStoreChunkPool: An improvement for JAVA GC when using MSLAB

2013-11-14 Thread Liang Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822308#comment-13822308
 ] 

Liang Xie commented on HBASE-8163:
--

I did an apples-to-apples test with this patch. Here is the test result, for 
reference:

Env: Xmx=Xms=8G, Xmn1G, memstore lower/upper limit = 0.2/0.3
Each test began with an empty table, then wrote 20 million records, each with 
three fields, and each field has 200 bytes:

1) original config (-XX:PretenureSizeThreshold=4m)
YGC   YGCT     FGC  FGCT   GCT
6970  318.592  8    0.884  319.476

2) set hbase.hregion.memstore.mslab.chunksize to 4194320 
(-XX:PretenureSizeThreshold=4m)
YGC   YGCT     FGC  FGCT   GCT
6973  253.891  8    0.522  254.413

3) set -XX:PretenureSizeThreshold=2097088 
(hbase.hregion.memstore.mslab.chunksize is 2M by default)
YGC   YGCT     FGC  FGCT   GCT
6960  260.642  8    1.427  262.069

4) set hbase.hregion.memstore.chunkpool.maxsize=0.6, i.e. enable the 
MemStoreChunkPool feature (the log said maxCount=706), with 
-XX:PretenureSizeThreshold=2097088 (hbase.hregion.memstore.mslab.chunksize is 
2M by default)
YGC   YGCT     FGC  FGCT   GCT
7028  258.598  2    0.401  258.999

To me, this MemStoreChunkPool feature is useful for heavy-FGC scenarios caused 
by write requests. If YGC is the bigger hurt from write requests, I personally 
recommend tuning "hbase.hregion.memstore.mslab.chunksize" or 
"-XX:PretenureSizeThreshold" instead, considering the risk :)

Hope this test result is helpful.

> MemStoreChunkPool: An improvement for JAVA GC when using MSLAB
> --
>
> Key: HBASE-8163
> URL: https://issues.apache.org/jira/browse/HBASE-8163
> Project: HBase
>  Issue Type: New Feature
>  Components: Performance, regionserver
>Reporter: chunhui shen
>Assignee: chunhui shen
> Fix For: 0.98.0, 0.95.0
>
> Attachments: hbase-0.95-8163v6.patch, hbase-8163v1.patch, 
> hbase-8163v2.patch, hbase-8163v3.patch, hbase-8163v4.patch, 
> hbase-8163v5.patch, hbase-8163v6.patch
>
>
> *Usage:*
> Disable the pool (default): configure "hbase.hregion.memstore.chunkpool.maxsize" 
> as 0
> Enable the pool: configure "hbase.hregion.memstore.chunkpool.maxsize" as a 
> percentage of the global memstore size (between 0.0 and 1.0; we recommend 
> setting it to the gap between the min and max memstore sizes, e.g. 0.5)
> *Background*:
> When we use MSLAB, we copy the KeyValues together into a structure called 
> *MemStoreLAB$Chunk*, thereby decreasing heap fragmentation. 
> *Problem*:
> When one chunk is full, we create a new chunk, and the old chunk is 
> reclaimed by the JVM once there is no reference to it.
> Usually the chunk object has already been promoted by the time of a young 
> GC, which increases the cost of YGC.
> When does a Chunk object have no references? It must meet the following two 
> conditions:
> 1. The memstore this chunk belongs to has been flushed
> 2. No scanner is open on the memstore this chunk belongs to
> *Solution:*
> 1. Create a chunk pool to manage the no-reference chunks, instead of having 
> them reclaimed by the JVM
> 2. When a Chunk has no reference, put it back into the pool
> 3. The pool has a max capacity; it skips returned chunks once it reaches 
> the max size
> 4. When we need a new Chunk to store KeyValues, get one from the pool if 
> available, otherwise the pool creates a new one; this way we can reuse old 
> chunks
> *Test results:*
> Environment:
> hbase-version:0.94
> -Xms4G -Xmx4G -Xmn2G
> Row size=50 bytes, Value size=1024 bytes
> 50 concurrent threads per client, insert 10,000,000 rows
> Before:
> Avg write request per second:12953
> After testing, final result of jstat -gcutil :
> YGC YGCT FGC FGCT GCT 
> 747 36.503 48 2.492 38.995
> After:
> Avg write request per second:14025
> After testing, final result of jstat -gcutil :
> YGC YGCT FGC FGCT GCT 
> 711 20.344 4 0.284 20.628
> *Improvement: YGC 40+%; WPS 5%+*
> review board :
> https://reviews.apache.org/r/10056/
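The pooling scheme above (steps 1-4) can be sketched as a toy Java class. This is an illustration under assumptions, not the actual MemStoreChunkPool code: the names `Chunk`, `ChunkPool`, and `putBack` are made up, and the real implementation adds statistics and more careful concurrency handling.

```java
import java.util.Queue;
import java.util.concurrent.LinkedBlockingQueue;

// Toy chunk: a fixed-size byte[] with a bump-pointer allocator.
final class Chunk {
    final byte[] data;
    private int nextFree = 0;

    Chunk(int size) { this.data = new byte[size]; }

    // Reset the write cursor so the chunk's memory can be reused.
    void reset() { nextFree = 0; }

    // Try to reserve len bytes; return the offset, or -1 if the chunk is full.
    int allocate(int len) {
        if (nextFree + len > data.length) return -1;
        int offset = nextFree;
        nextFree += len;
        return offset;
    }
}

// Toy pool: reuse no-longer-referenced chunks instead of letting the JVM
// reclaim (and re-allocate) them, which is what drives down FGC/YGC cost.
final class ChunkPool {
    private final Queue<Chunk> pool = new LinkedBlockingQueue<>();
    private final int maxCount;   // pool capacity: extra returns are dropped
    private final int chunkSize;

    ChunkPool(int maxCount, int chunkSize) {
        this.maxCount = maxCount;
        this.chunkSize = chunkSize;
    }

    // Get a reusable chunk if one exists, otherwise allocate a fresh one.
    Chunk getChunk() {
        Chunk c = pool.poll();
        return (c != null) ? c : new Chunk(chunkSize);
    }

    // Return a no-longer-referenced chunk; skip it if the pool is full.
    void putBack(Chunk c) {
        if (pool.size() < maxCount) {
            c.reset();
            pool.offer(c);
        } // else let the JVM reclaim it (step 3 above)
    }

    int pooledCount() { return pool.size(); }
}
```

A caller would `getChunk()` when the current chunk fills up and `putBack()` once the owning memstore is flushed and no scanner references it.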



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-8163) MemStoreChunkPool: An improvement for JAVA GC when using MSLAB

2013-11-14 Thread Liang Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822310#comment-13822310
 ] 

Liang Xie commented on HBASE-8163:
--

Of course, in theory the chunk pool solution should be a little better than my 
recommended config solution, since it reuses chunks, avoiding additional "new 
byte[]" allocations and the memset clearing work done by HotSpot.

> MemStoreChunkPool: An improvement for JAVA GC when using MSLAB
> --
>
> Key: HBASE-8163
> URL: https://issues.apache.org/jira/browse/HBASE-8163
> Project: HBase
>  Issue Type: New Feature
>  Components: Performance, regionserver
>Reporter: chunhui shen
>Assignee: chunhui shen
> Fix For: 0.98.0, 0.95.0
>
> Attachments: hbase-0.95-8163v6.patch, hbase-8163v1.patch, 
> hbase-8163v2.patch, hbase-8163v3.patch, hbase-8163v4.patch, 
> hbase-8163v5.patch, hbase-8163v6.patch
>
>
> *Usage:*
> Disable the pool (default): configure "hbase.hregion.memstore.chunkpool.maxsize" 
> as 0
> Enable the pool: configure "hbase.hregion.memstore.chunkpool.maxsize" as a 
> percentage of the global memstore size (between 0.0 and 1.0; we recommend 
> setting it to the gap between the min and max memstore sizes, e.g. 0.5)
> *Background*:
> When we use MSLAB, we copy the KeyValues together into a structure called 
> *MemStoreLAB$Chunk*, thereby decreasing heap fragmentation. 
> *Problem*:
> When one chunk is full, we create a new chunk, and the old chunk is 
> reclaimed by the JVM once there is no reference to it.
> Usually the chunk object has already been promoted by the time of a young 
> GC, which increases the cost of YGC.
> When does a Chunk object have no references? It must meet the following two 
> conditions:
> 1. The memstore this chunk belongs to has been flushed
> 2. No scanner is open on the memstore this chunk belongs to
> *Solution:*
> 1. Create a chunk pool to manage the no-reference chunks, instead of having 
> them reclaimed by the JVM
> 2. When a Chunk has no reference, put it back into the pool
> 3. The pool has a max capacity; it skips returned chunks once it reaches 
> the max size
> 4. When we need a new Chunk to store KeyValues, get one from the pool if 
> available, otherwise the pool creates a new one; this way we can reuse old 
> chunks
> *Test results:*
> Environment:
> hbase-version:0.94
> -Xms4G -Xmx4G -Xmn2G
> Row size=50 bytes, Value size=1024 bytes
> 50 concurrent threads per client, insert 10,000,000 rows
> Before:
> Avg write request per second:12953
> After testing, final result of jstat -gcutil :
> YGC YGCT FGC FGCT GCT 
> 747 36.503 48 2.492 38.995
> After:
> Avg write request per second:14025
> After testing, final result of jstat -gcutil :
> YGC YGCT FGC FGCT GCT 
> 711 20.344 4 0.284 20.628
> *Improvement: YGC 40+%; WPS 5%+*
> review board :
> https://reviews.apache.org/r/10056/



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9963) Remove the ReentrantReadWriteLock in the MemStore

2013-11-14 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9963:
---

Attachment: 9963.96.v3.patch

> Remove the ReentrantReadWriteLock in the MemStore
> -
>
> Key: HBASE-9963
> URL: https://issues.apache.org/jira/browse/HBASE-9963
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Minor
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9963.96.v3.patch, 9963.v1.patch, 9963.v2.patch, 
> 9963.v3.patch
>
>
> If I'm not wrong, the MemStore is always used from the HStore. The code in 
> HStore takes a lock before calling into MemStore, so the lock in MemStore is 
> useless. 
> For example, in HStore
> {code}
>   @Override
>   public long upsert(Iterable<Cell> cells, long readpoint) throws IOException 
> {
> this.lock.readLock().lock();
> try {
>   return this.memstore.upsert(cells, readpoint);
> } finally {
>   this.lock.readLock().unlock();
> }
>   }
> {code}
> With this in MemStore
> {code}
>   public long upsert(Iterable<Cell> cells, long readpoint) {
>     this.lock.readLock().lock(); // <== Am I useful?
> try {
>   long size = 0;
>   for (Cell cell : cells) {
> size += upsert(cell, readpoint);
>   }
>   return size;
> } finally {
>   this.lock.readLock().unlock();
> }
>   }
> {code}
> I've checked: all the locks in MemStore are backed by a lock in HStore, the 
> only exception being
> {code}
>   void snapshot() {
> this.memstore.snapshot();
>   }
> {code}
> And I would say it's a bug. If it's confirmed ([~lhofhansl], what do you 
> think?), I will add a lock there and remove all of them in MemStore. They do 
> appear in the profiling.
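The double-locking pattern described above can be sketched with two illustrative classes (made-up names, not the real HStore/MemStore): when every call into the inner store goes through the outer store's lock, a second lock inside the inner store adds per-call cost without adding safety.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Inner store deliberately has no lock: callers are expected to hold
// OuterStore's lock, mirroring the argument that MemStore's lock is redundant.
final class InnerStore {
    private long size = 0;

    long upsert(long delta) {
        size += delta;
        return size;
    }
}

final class OuterStore {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final InnerStore inner = new InnerStore();

    long upsert(long delta) {
        lock.readLock().lock();          // the single guarding lock
        try {
            return inner.upsert(delta);  // inner call is already protected
        } finally {
            lock.readLock().unlock();
        }
    }
}
```

Removing the inner lock saves one lock/unlock pair per operation, which is exactly the overhead that shows up in profiling on hot write paths.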



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9963) Remove the ReentrantReadWriteLock in the MemStore

2013-11-14 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9963:
---

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed, thanks for the review.
The javadoc warnings were not related to this patch, but I fixed them as well 
in the commit.

> Remove the ReentrantReadWriteLock in the MemStore
> -
>
> Key: HBASE-9963
> URL: https://issues.apache.org/jira/browse/HBASE-9963
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Minor
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9963.96.v3.patch, 9963.v1.patch, 9963.v2.patch, 
> 9963.v3.patch
>
>
> If I'm not wrong, the MemStore is always used from the HStore. The code in 
> HStore takes a lock before calling into MemStore, so the lock in MemStore is 
> useless. 
> For example, in HStore
> {code}
>   @Override
>   public long upsert(Iterable<Cell> cells, long readpoint) throws IOException 
> {
> this.lock.readLock().lock();
> try {
>   return this.memstore.upsert(cells, readpoint);
> } finally {
>   this.lock.readLock().unlock();
> }
>   }
> {code}
> With this in MemStore
> {code}
>   public long upsert(Iterable<Cell> cells, long readpoint) {
>     this.lock.readLock().lock(); // <== Am I useful?
> try {
>   long size = 0;
>   for (Cell cell : cells) {
> size += upsert(cell, readpoint);
>   }
>   return size;
> } finally {
>   this.lock.readLock().unlock();
> }
>   }
> {code}
> I've checked: all the locks in MemStore are backed by a lock in HStore, the 
> only exception being
> {code}
>   void snapshot() {
> this.memstore.snapshot();
>   }
> {code}
> And I would say it's a bug. If it's confirmed ([~lhofhansl], what do you 
> think?), I will add a lock there and remove all of them in MemStore. They do 
> appear in the profiling.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9869) Optimize HConnectionManager#getCachedLocation

2013-11-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822357#comment-13822357
 ] 

Hadoop QA commented on HBASE-9869:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12613812/9869.v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7861//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7861//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7861//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7861//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7861//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7861//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7861//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7861//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7861//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7861//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7861//console

This message is automatically generated.

> Optimize HConnectionManager#getCachedLocation
> -
>
> Key: HBASE-9869
> URL: https://issues.apache.org/jira/browse/HBASE-9869
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9869.v1.patch, 9869.v1.patch, 9869.v2.patch
>
>
> Its javadoc says: "TODO: This method during writing consumes 15% of CPU doing 
> lookup". This is still true, says Yourkit. With 0.96, we also spend more time 
> in these methods. We retry more, and the AsyncProcess calls it in parallel.
> I don't have the patch for this yet, but I will spend some time on it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-5945) Reduce buffer copies in IPC server response path

2013-11-14 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-5945:
-

Status: Patch Available  (was: Open)

> Reduce buffer copies in IPC server response path
> 
>
> Key: HBASE-5945
> URL: https://issues.apache.org/jira/browse/HBASE-5945
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC
>Affects Versions: 0.95.2
>Reporter: Todd Lipcon
>Assignee: stack
> Fix For: 0.96.1
>
> Attachments: 5945-in-progress.2.1.patch, 5945-in-progress.2.patch, 
> 5945-in-progress.patch, 5945v2.txt, buffer-copies.txt, even-fewer-copies.txt, 
> hbase-5495.txt
>
>
> The new PB code is sloppy with buffers and makes several needless copies. 
> This increases GC time a lot. A few simple changes can cut this back down.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9869) Optimize HConnectionManager#getCachedLocation

2013-11-14 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822368#comment-13822368
 ] 

Nicolas Liochon commented on HBASE-9869:


bq. See any difference? Whats the math like? How large is the Map if 1M regions 
in it? We remove when region is bad but ones we don't access could stick around 
for ever. I suppose that serves us right if we don't access them. Folks like 
Lars are sensitive about clients being well-behaved especially when embedded in 
an apps server. I looked around for a simple LRU – the guava one – but we need 
SortedMap.


My feeling is that if you have a table with 1 million regions, you need to pay 
the price for this: i.e. have enough memory on the client.
Let me do the math and test it, however.
We could also do something in the middle: use an LRU on the first map, the one 
keyed on the tableName.
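The "LRU on the first map" idea could be sketched with an access-ordered LinkedHashMap. This is a hypothetical illustration, not the HConnectionManager code: the class name `TableCacheLru` and the capacity are made up, and in practice the values would be the per-table SortedMaps of region locations.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Bound the number of per-table caches; the least-recently-used table's
// cache is evicted once the cap is exceeded. Each per-table value can stay
// a SortedMap for region lookup, untouched by this wrapper.
final class TableCacheLru<V> extends LinkedHashMap<String, V> {
    private final int maxTables;

    TableCacheLru(int maxTables) {
        super(16, 0.75f, true); // true = access order, i.e. LRU iteration
        this.maxTables = maxTables;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, V> eldest) {
        return size() > maxTables; // evict least-recently-used table cache
    }
}
```

This keeps misbehaving-client memory bounded per process while leaving the hot per-table lookups unchanged; note LinkedHashMap itself is not thread-safe, so real code would need external synchronization.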

> Optimize HConnectionManager#getCachedLocation
> -
>
> Key: HBASE-9869
> URL: https://issues.apache.org/jira/browse/HBASE-9869
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9869.v1.patch, 9869.v1.patch, 9869.v2.patch
>
>
> Its javadoc says: "TODO: This method during writing consumes 15% of CPU doing 
> lookup". This is still true, says Yourkit. With 0.96, we also spend more time 
> in these methods. We retry more, and the AsyncProcess calls it in parallel.
> I don't have the patch for this yet, but I will spend some time on it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9969) Improve KeyValueHeap using loser tree

2013-11-14 Thread Chao Shi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Shi updated HBASE-9969:


Attachment: hbase-9969.patch

Reposting the patch, as Hadoop QA treated the benchmark result file as the patch

> Improve KeyValueHeap using loser tree
> -
>
> Key: HBASE-9969
> URL: https://issues.apache.org/jira/browse/HBASE-9969
> Project: HBase
>  Issue Type: Improvement
>Reporter: Chao Shi
> Attachments: hbase-9969.patch, hbase-9969.patch, 
> kvheap-benchmark.png, kvheap-benchmark.txt
>
>
> A loser tree is a better data structure than a binary heap here. It saves half 
> of the comparisons on each next(), though the time complexity is still O(log N).
> Currently a scan or get goes through two KeyValueHeaps: one merges KVs read 
> from multiple HFiles in a single store, the other merges results from 
> multiple stores. This patch should improve both cases whenever CPU is the 
> bottleneck (e.g. scan with filter over cached blocks, HBASE-9811).
> All of the optimization work is done in KeyValueHeap and does not change its 
> public interfaces. The new code is also cleaner and simpler to understand.
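The loser-tree idea can be sketched for merging k sorted runs of ints. This is a toy illustration, not the patch's KeyValueHeap code: the class and method names are made up, and it assumes input values are below Integer.MAX_VALUE, which serves as the exhausted sentinel. Each next() replays only one leaf-to-root path of about log2(k) matches, whereas a binary-heap sift-down does up to two comparisons per level.

```java
import java.util.Arrays;

// Toy loser tree merging k sorted int runs. tree[0] holds the index of the
// overall winner run; tree[1..k-1] hold the losers of the internal matches.
final class LoserTree {
    private static final int EXHAUSTED = Integer.MAX_VALUE;
    private final int[][] runs; // k sorted input runs
    private final int[] pos;    // next unread index in each run
    private final int[] tree;   // internal match results (run indices)
    private final int k;

    LoserTree(int[][] runs) {
        this.runs = runs;
        this.k = runs.length;
        this.pos = new int[k];
        this.tree = new int[k];
        Arrays.fill(tree, -1);
        // Insert leaves from last to first; each pass parks at the first
        // empty node or carries the winner all the way to the root.
        for (int i = k - 1; i >= 0; i--) adjust(i);
    }

    private int key(int i) {
        return pos[i] < runs[i].length ? runs[i][pos[i]] : EXHAUSTED;
    }

    // Replay the matches on the path from leaf i to the root.
    private void adjust(int i) {
        int winner = i;
        for (int parent = (i + k) / 2; parent > 0; parent /= 2) {
            if (tree[parent] == -1) { // only happens during construction
                tree[parent] = winner;
                return;
            }
            if (key(tree[parent]) < key(winner)) {
                int loser = winner;          // new loser stays at this node,
                winner = tree[parent];       // old occupant moves up
                tree[parent] = loser;
            }
        }
        tree[0] = winner;
    }

    boolean hasNext() { return key(tree[0]) != EXHAUSTED; }

    // Pop the smallest head element using ~log2(k) comparisons.
    int next() {
        int w = tree[0];
        int val = key(w);
        pos[w]++;
        adjust(w);
        return val;
    }
}
```

A binary heap must compare both children at each level to pick the smaller one; the loser tree already caches the loser at each node, so the replay needs only one comparison per level.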



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-8143) HBase on Hadoop 2 with local short circuit reads (ssr) causes OOM

2013-11-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822373#comment-13822373
 ] 

stack commented on HBASE-8143:
--

[~enis] A review boss please.

[~xieliang007] Looks like Lars got you over in HDFS-5461

> HBase on Hadoop 2 with local short circuit reads (ssr) causes OOM 
> --
>
> Key: HBASE-8143
> URL: https://issues.apache.org/jira/browse/HBASE-8143
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop2
>Affects Versions: 0.98.0, 0.94.7, 0.95.0
>Reporter: Enis Soztutar
>Assignee: stack
>Priority: Critical
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 8143.hbase-default.xml.txt, 8143doc.txt, 8143v2.txt, 
> OpenFileTest.java
>
>
> We've run into an issue with HBase 0.94 on Hadoop 2, with SSR turned on, 
> where the memory usage of the HBase process grows to 7g on an -Xmx3g heap; 
> after some time, this causes OOM for the RSs. 
> Upon further investigation, I've found out that we end up with 200 regions, 
> each having 3-4 store files open. Under hadoop2 SSR, BlockReaderLocal 
> allocates DirectBuffers, unlike HDFS 1 where there is no direct buffer 
> allocation. 
> It seems that there are no guards against the memory used by local buffers 
> in hdfs 2, and having a large number of open files causes multiple GB of 
> memory to be consumed by the RS process. 
> This issue is to investigate further what is going on: whether we can limit 
> the memory usage in HDFS or HBase, and/or document the setup. 
> Possible mitigation scenarios are: 
>  - Turn off SSR for Hadoop 2
>  - Ensure that there is enough unallocated memory for the RS based on 
> expected # of store files
>  - Ensure that there is lower number of regions per region server (hence 
> number of open files)
> Stack trace:
> {code}
> org.apache.hadoop.hbase.DroppedSnapshotException: region: 
> IntegrationTestLoadAndVerify,yC^P\xD7\x945\xD4,1363388517630.24655343d8d356ef708732f34cfe8946.
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1560)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1439)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1380)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:449)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushOneForGlobalPressure(MemStoreFlusher.java:215)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$500(MemStoreFlusher.java:63)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:237)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:632)
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:97)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)
> at 
> org.apache.hadoop.hdfs.util.DirectBufferPool.getBuffer(DirectBufferPool.java:70)
> at 
> org.apache.hadoop.hdfs.BlockReaderLocal.<init>(BlockReaderLocal.java:315)
> at 
> org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:208)
> at 
> org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689)
> at java.io.DataInputStream.readFully(DataInputStream.java:178)
> at 
> org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:312)
> at 
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:543)
> at 
> org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:589)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1261)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:512)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:603)
> at 
> org.apache.hadoop.hbase.regionserver.Store.validateStoreFile(Store.java:1568)
> at 
> org.apache.hadoop.hbase.regionserver.Store.commitFile(Store.java:845)
> at 
> org.apache.hadoop.hbase.regionserver.Store.access$500(Store.java:109)
> at 
> org.apache.hadoop.hbase.regionserver.Store$StoreFlusherImpl.commit(Store.java:2209)
> at 
> org.apache.hadoop.hb

[jira] [Created] (HBASE-9970) HBase BulkLoad, table is creating with the timestamp key also as a column to the table.

2013-11-14 Thread Y. SREENIVASULU REDDY (JIRA)
Y. SREENIVASULU REDDY created HBASE-9970:


 Summary: HBase BulkLoad, table is creating with the timestamp key 
also as a column to the table. 
 Key: HBASE-9970
 URL: https://issues.apache.org/jira/browse/HBASE-9970
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.11
Reporter: Y. SREENIVASULU REDDY
Assignee: Y. SREENIVASULU REDDY
 Fix For: 0.98.0, 0.96.1, 0.94.14


If the BulkLoad job is run without creating the table first, the job itself 
will create the table when it is not found.
{code}
if (!doesTableExist(tableName)) {
  createTable(conf, tableName);
}
{code}
If the columns also contain the timestamp key, the table is created with the 
defined column families plus the timestamp key.
{quote}
eg: -Dimporttsv.columns=HBASE_ROW_KEY,HBASE_TS_KEY,d:num
{quote}

The table is created with the following column families:
'HBASE_TS_KEY' and 'd' 

While iterating, the timestamp key also needs to be skipped when building the 
column descriptors.
{code}
private static void createTable(HBaseAdmin admin, String tableName, String[] 
columns)
  throws IOException {
HTableDescriptor htd = new HTableDescriptor(TableName.valueOf(tableName));
Set<String> cfSet = new HashSet<String>();
for (String aColumn : columns) {
  if (TsvParser.ROWKEY_COLUMN_SPEC.equals(aColumn)) continue;
  // we are only concerned with the first one (in case this is a cf:cq)
  cfSet.add(aColumn.split(":", 2)[0]);
}
for (String cf : cfSet) {
  HColumnDescriptor hcd = new HColumnDescriptor(Bytes.toBytes(cf));
  htd.addFamily(hcd);
}
LOG.warn(format("Creating table '%s' with '%s' columns and default 
descriptors.",
  tableName, cfSet));
admin.createTable(htd);
  }
{code}

{quote}
Index: 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/ImportTsv.java
===
--- hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/ImportTsv.java 
(revision 1539967)
+++ hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/ImportTsv.java 
(working copy)
@@ -413,7 +413,8 @@
 HTableDescriptor htd = new HTableDescriptor(TableName.valueOf(tableName));
 Set<String> cfSet = new HashSet<String>();
 for (String aColumn : columns) {
-  if (TsvParser.ROWKEY_COLUMN_SPEC.equals(aColumn)) continue;
+  if (TsvParser.ROWKEY_COLUMN_SPEC.equals(aColumn)
+  || TsvParser.TIMESTAMPKEY_COLUMN_SPEC.equals(aColumn)) continue;
   // we are only concerned with the first one (in case this is a cf:cq)
   cfSet.add(aColumn.split(":", 2)[0]);
 }

{quote}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-5945) Reduce buffer copies in IPC server response path

2013-11-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822472#comment-13822472
 ] 

Hadoop QA commented on HBASE-5945:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12613551/5945v2.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:red}-1 hadoop2.0{color}.  The patch failed to compile against the 
hadoop 2.0 profile.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7862//console

This message is automatically generated.

> Reduce buffer copies in IPC server response path
> 
>
> Key: HBASE-5945
> URL: https://issues.apache.org/jira/browse/HBASE-5945
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC
>Affects Versions: 0.95.2
>Reporter: Todd Lipcon
>Assignee: stack
> Fix For: 0.96.1
>
> Attachments: 5945-in-progress.2.1.patch, 5945-in-progress.2.patch, 
> 5945-in-progress.patch, 5945v2.txt, buffer-copies.txt, even-fewer-copies.txt, 
> hbase-5495.txt
>
>
> The new PB code is sloppy with buffers and makes several needless copies. 
> This increases GC time a lot. A few simple changes can cut this back down.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9970) HBase BulkLoad, table is creating with the timestamp key also as a column to the table.

2013-11-14 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822484#comment-13822484
 ] 

Anoop Sam John commented on HBASE-9970:
---

Please create patches for 0.94 and trunk and attach them here, Sreenivas. Good catch.

> HBase BulkLoad, table is creating with the timestamp key also as a column to 
> the table. 
> 
>
> Key: HBASE-9970
> URL: https://issues.apache.org/jira/browse/HBASE-9970
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.11
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Y. SREENIVASULU REDDY
> Fix For: 0.98.0, 0.96.1, 0.94.14
>
>
> If the BulkLoad job is run without creating the table first, the job itself 
> will create the table when it is not found.
> {code}
> if (!doesTableExist(tableName)) {
>   createTable(conf, tableName);
> }
> {code}
> If the columns also contain the timestamp key, the table is created with the 
> defined column families plus the timestamp key.
> {quote}
> eg: -Dimporttsv.columns=HBASE_ROW_KEY,HBASE_TS_KEY,d:num
> {quote}
> The table is created with the following column families:
> 'HBASE_TS_KEY' and 'd' 
> While iterating, the timestamp key also needs to be skipped when building 
> the column descriptors.
> {code}
> private static void createTable(HBaseAdmin admin, String tableName, String[] 
> columns)
>   throws IOException {
> HTableDescriptor htd = new HTableDescriptor(TableName.valueOf(tableName));
> Set<String> cfSet = new HashSet<String>();
> for (String aColumn : columns) {
>   if (TsvParser.ROWKEY_COLUMN_SPEC.equals(aColumn)) continue;
>   // we are only concerned with the first one (in case this is a cf:cq)
>   cfSet.add(aColumn.split(":", 2)[0]);
> }
> for (String cf : cfSet) {
>   HColumnDescriptor hcd = new HColumnDescriptor(Bytes.toBytes(cf));
>   htd.addFamily(hcd);
> }
> LOG.warn(format("Creating table '%s' with '%s' columns and default 
> descriptors.",
>   tableName, cfSet));
> admin.createTable(htd);
>   }
> {code}
> {quote}
> Index: 
> hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/ImportTsv.java
> ===
> --- 
> hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/ImportTsv.java   
> (revision 1539967)
> +++ 
> hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/ImportTsv.java   
> (working copy)
> @@ -413,7 +413,8 @@
>  HTableDescriptor htd = new 
> HTableDescriptor(TableName.valueOf(tableName));
>  Set<String> cfSet = new HashSet<String>();
>  for (String aColumn : columns) {
> -  if (TsvParser.ROWKEY_COLUMN_SPEC.equals(aColumn)) continue;
> +  if (TsvParser.ROWKEY_COLUMN_SPEC.equals(aColumn)
> +  || TsvParser.TIMESTAMPKEY_COLUMN_SPEC.equals(aColumn)) continue;
>// we are only concerned with the first one (in case this is a cf:cq)
>cfSet.add(aColumn.split(":", 2)[0]);
>  }
> {quote}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9849) [REST] Forbidden schema delete in read only mode

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822500#comment-13822500
 ] 

Hudson commented on HBASE-9849:
---

SUCCESS: Integrated in HBase-0.94-security #335 (See 
[https://builds.apache.org/job/HBase-0.94-security/335/])
HBASE-9849 [REST] Forbidden schema delete in read only mode (Julian Zhou) 
(larsh: rev 1541642)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/rest/SchemaResource.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/rest/TestSchemaResource.java


> [REST] Forbidden schema delete in read only mode
> 
>
> Key: HBASE-9849
> URL: https://issues.apache.org/jira/browse/HBASE-9849
> Project: HBase
>  Issue Type: Bug
>  Components: REST
>Affects Versions: 0.98.0, 0.94.14
>Reporter: Julian Zhou
>Assignee: Julian Zhou
> Fix For: 0.98.0, 0.96.1, 0.94.14
>
> Attachments: 9849-0.94-v0.patch, 9849-trunk-v0.patch
>
>
> If "hbase.rest.readonly" was set, all write operations should be forbidden 
> via REST, right? So table schema deletion should also be forbidden in 
> readonly mode?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-4654) [replication] Add a check to make sure we don't replicate to ourselves

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822501#comment-13822501
 ] 

Hudson commented on HBASE-4654:
---

SUCCESS: Integrated in HBase-0.94-security #335 (See 
[https://builds.apache.org/job/HBase-0.94-security/335/])
HBASE-4654 [replication] Add a check to make sure we don't replicate to 
ourselves (Demai Ni) (larsh: rev 1541806)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java


> [replication] Add a check to make sure we don't replicate to ourselves
> --
>
> Key: HBASE-4654
> URL: https://issues.apache.org/jira/browse/HBASE-4654
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.90.4
>Reporter: Jean-Daniel Cryans
>Assignee: Demai Ni
> Fix For: 0.98.0, 0.96.1, 0.94.14
>
> Attachments: 4654-trunk.txt, HBASE-4654-0.94-v0.patch, 
> HBASE-4654-0.96-v0.patch, HBASE-4654-trunk-v0.patch, 
> HBASE-4654-trunk-v0.patch, HBASE-4654-trunk-v0.patch
>
>
> It's currently possible to add a peer for replication and point it to the 
> local cluster, which I believe could very well happen for those like us that 
> use only one ZK ensemble per DC so that only the root znode changes when you 
> want to set up replication intra-DC.
> I don't think comparing just the cluster ID would be enough because you would 
> normally use a different one for another cluster and nothing will block you 
> from pointing elsewhere.
> Comparing the ZK ensemble address doesn't work either when you have multiple 
> DNS entries that point at the same place.
> I think this could be resolved by looking up the master address in the 
> relevant znode as it should be exactly the same thing in the case where you 
> have the same cluster.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-4654) [replication] Add a check to make sure we don't replicate to ourselves

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822507#comment-13822507
 ] 

Hudson commented on HBASE-4654:
---

SUCCESS: Integrated in HBase-0.94 #1201 (See 
[https://builds.apache.org/job/HBase-0.94/1201/])
HBASE-4654 [replication] Add a check to make sure we don't replicate to 
ourselves (Demai Ni) (larsh: rev 1541806)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java


> [replication] Add a check to make sure we don't replicate to ourselves
> --
>
> Key: HBASE-4654
> URL: https://issues.apache.org/jira/browse/HBASE-4654
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.90.4
>Reporter: Jean-Daniel Cryans
>Assignee: Demai Ni
> Fix For: 0.98.0, 0.96.1, 0.94.14
>
> Attachments: 4654-trunk.txt, HBASE-4654-0.94-v0.patch, 
> HBASE-4654-0.96-v0.patch, HBASE-4654-trunk-v0.patch, 
> HBASE-4654-trunk-v0.patch, HBASE-4654-trunk-v0.patch
>
>
> It's currently possible to add a peer for replication and point it to the 
> local cluster, which I believe could very well happen for those like us that 
> use only one ZK ensemble per DC so that only the root znode changes when you 
> want to set up replication intra-DC.
> I don't think comparing just the cluster ID would be enough because you would 
> normally use a different one for another cluster and nothing will block you 
> from pointing elsewhere.
> Comparing the ZK ensemble address doesn't work either when you have multiple 
> DNS entries that point at the same place.
> I think this could be resolved by looking up the master address in the 
> relevant znode as it should be exactly the same thing in the case where you 
> have the same cluster.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9849) [REST] Forbidden schema delete in read only mode

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822506#comment-13822506
 ] 

Hudson commented on HBASE-9849:
---

SUCCESS: Integrated in HBase-0.94 #1201 (See 
[https://builds.apache.org/job/HBase-0.94/1201/])
HBASE-9849 [REST] Forbidden schema delete in read only mode (Julian Zhou) 
(larsh: rev 1541642)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/rest/SchemaResource.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/rest/TestSchemaResource.java


> [REST] Forbidden schema delete in read only mode
> 
>
> Key: HBASE-9849
> URL: https://issues.apache.org/jira/browse/HBASE-9849
> Project: HBase
>  Issue Type: Bug
>  Components: REST
>Affects Versions: 0.98.0, 0.94.14
>Reporter: Julian Zhou
>Assignee: Julian Zhou
> Fix For: 0.98.0, 0.96.1, 0.94.14
>
> Attachments: 9849-0.94-v0.patch, 9849-trunk-v0.patch
>
>
> If "hbase.rest.readonly" is set, all write operations should be forbidden 
> via REST, right? So table schema deletion should also be forbidden in 
> read-only mode?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9969) Improve KeyValueHeap using loser tree

2013-11-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822529#comment-13822529
 ] 

Hadoop QA commented on HBASE-9969:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12613822/hbase-9969.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 2 release 
audit warnings (more than the trunk's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7863//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7863//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7863//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7863//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7863//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7863//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7863//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7863//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7863//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7863//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7863//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7863//console

This message is automatically generated.

> Improve KeyValueHeap using loser tree
> -
>
> Key: HBASE-9969
> URL: https://issues.apache.org/jira/browse/HBASE-9969
> Project: HBase
>  Issue Type: Improvement
>Reporter: Chao Shi
> Attachments: hbase-9969.patch, hbase-9969.patch, 
> kvheap-benchmark.png, kvheap-benchmark.txt
>
>
> LoserTree is a better data structure than a binary heap: it saves half of 
> the comparisons on each next(), though the time complexity remains O(logN).
> Currently a scan or get goes through two KeyValueHeaps: one merges KVs read 
> from multiple HFiles in a single store, the other merges results from 
> multiple stores. This patch should improve both cases whenever CPU is the 
> bottleneck (e.g. scan with filter over cached blocks, HBASE-9811).
> All of the optimization work is done in KeyValueHeap and does not change its 
> public interfaces. The new code is cleaner and simpler to understand.
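The claim above (one comparison per tree level on each next(), versus a binary heap's two per level during sift-down) can be sketched with a minimal loser-tree k-way merge. This is an illustrative sketch, not code from the attached patch: the class and method names are invented, inputs are plain sorted long arrays, and exhausted inputs are modeled with a Long.MAX_VALUE sentinel.

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative loser-tree k-way merge (not the patch code). Internal node n
 *  remembers the index of the losing input; each next() replays one
 *  leaf-to-root path, costing a single comparison per level. */
public class LoserTreeMerge {
    public static List<Long> merge(long[][] inputs) {
        int k = inputs.length;
        int[] pos = new int[k];                  // cursor into each input
        long[] values = new long[k];             // current head; MAX_VALUE = exhausted
        long total = 0;
        for (int i = 0; i < k; i++) {
            values[i] = inputs[i].length > 0 ? inputs[i][0] : Long.MAX_VALUE;
            total += inputs[i].length;
        }
        // Build: leaves live at positions k..2k-1, internal nodes at 1..k-1.
        int[] loser = new int[Math.max(k, 1)];
        int[] winner = new int[2 * k];
        for (int j = 0; j < k; j++) winner[k + j] = j;
        for (int n = k - 1; n >= 1; n--) {
            int a = winner[2 * n], b = winner[2 * n + 1];
            int w = values[a] <= values[b] ? a : b;
            loser[n] = (w == a) ? b : a;         // internal nodes store the loser
            winner[n] = w;
        }
        int top = winner[1];                     // overall winner
        List<Long> out = new ArrayList<>();
        for (long produced = 0; produced < total; produced++) {
            out.add(values[top]);
            pos[top]++;                          // advance the winning input
            values[top] = pos[top] < inputs[top].length
                    ? inputs[top][pos[top]] : Long.MAX_VALUE;
            int s = top;                         // replay: one comparison per level
            for (int n = (k + s) / 2; n >= 1; n /= 2) {
                if (values[loser[n]] < values[s]) {
                    int t = s; s = loser[n]; loser[n] = t;
                }
            }
            top = s;
        }
        return out;
    }
}
```

In HBase's case the merged elements would be KeyValues ordered by a KVComparator rather than longs, but the structure and the per-next() replay cost are the same.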



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9962) Improve tag iteration

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822548#comment-13822548
 ] 

Hudson commented on HBASE-9962:
---

SUCCESS: Integrated in HBase-TRUNK #4680 (See 
[https://builds.apache.org/job/HBase-TRUNK/4680/])
HBASE-9962. Improve tag iteration (apurtell: rev 1541772)
* /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
* /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/Tag.java


> Improve tag iteration
> -
>
> Key: HBASE-9962
> URL: https://issues.apache.org/jira/browse/HBASE-9962
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 0.98.0
>
> Attachments: 9962.patch
>
>
> The tag iterator was moved out of KeyValue into CellUtil and marked as for 
> tests only. HBASE-7662 and HBASE-7663 will use it. The 'length' parameter was 
> made into a short, which is inconvenient for most callers. The methods on Tag 
> for getting tag data offset and length in the tag buffer were made default 
> scope so it's impossible outside of the package to find the tag data in the 
> backing buffer without calling Tag#asList, which might do some unwanted 
> object allocations. Tags#asList also inconveniently uses short for 'length'.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9963) Remove the ReentrantReadWriteLock in the MemStore

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822551#comment-13822551
 ] 

Hudson commented on HBASE-9963:
---

SUCCESS: Integrated in HBase-TRUNK #4680 (See 
[https://builds.apache.org/job/HBase-TRUNK/4680/])
HBASE-9963 Remove the ReentrantReadWriteLock in the MemStore (nkeywal: rev 
1541880)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTableInterface.java
* /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStore.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHeapSize.java


> Remove the ReentrantReadWriteLock in the MemStore
> -
>
> Key: HBASE-9963
> URL: https://issues.apache.org/jira/browse/HBASE-9963
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Minor
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9963.96.v3.patch, 9963.v1.patch, 9963.v2.patch, 
> 9963.v3.patch
>
>
> If I'm not wrong, the MemStore is always used from the HStore. The code in 
> HStore puts a lock before calling MemStore. So the lock in Memstore is 
> useless. 
> For example, in HStore
> {code}
>   @Override
>   public long upsert(Iterable<Cell> cells, long readpoint) throws IOException {
> this.lock.readLock().lock();
> try {
>   return this.memstore.upsert(cells, readpoint);
> } finally {
>   this.lock.readLock().unlock();
> }
>   }
> {code}
> With this in MemStore
> {code}
>   public long upsert(Iterable<Cell> cells, long readpoint) {
>     this.lock.readLock().lock(); // <== Am I useful?
> try {
>   long size = 0;
>   for (Cell cell : cells) {
> size += upsert(cell, readpoint);
>   }
>   return size;
> } finally {
>   this.lock.readLock().unlock();
> }
>   }
> {code}
> I've checked: all the locks in MemStore are backed by a lock in HStore, the 
> only exception being
> {code}
>   void snapshot() {
> this.memstore.snapshot();
>   }
> {code}
> And I would say it's a bug. If it's confirmed ([~lhofhansl], what do you 
> think?), I will add a lock there and remove all of the locks in MemStore. 
> They do appear in profiling.
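The pattern being questioned above — a read lock taken at the HStore level, then a second, redundant read lock taken again inside MemStore — can be sketched as follows. This is an illustrative sketch (class, field, and method names are invented, and the payload is simplified to longs): it only shows that the inner lock adds acquire/release work without adding protection when every MemStore call is already made under the HStore lock.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

/** Sketch of the double-locking pattern discussed above (names invented). */
public class DoubleLockSketch {
    private final ReentrantReadWriteLock storeLock = new ReentrantReadWriteLock();    // HStore level
    private final ReentrantReadWriteLock memstoreLock = new ReentrantReadWriteLock(); // MemStore level

    /** MemStore-side method: its own lock is redundant if callers always hold storeLock. */
    private long memstoreUpsert(long[] cells) {
        memstoreLock.readLock().lock(); // <== pure overhead when nested under storeLock
        try {
            long size = 0;
            for (long cell : cells) {
                size += cell; // stand-in for per-cell upsert work
            }
            return size;
        } finally {
            memstoreLock.readLock().unlock();
        }
    }

    /** HStore-side method: this outer lock is where the protection actually comes from. */
    public long upsert(long[] cells) {
        storeLock.readLock().lock();
        try {
            return memstoreUpsert(cells);
        } finally {
            storeLock.readLock().unlock();
        }
    }
}
```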



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9849) [REST] Forbidden schema delete in read only mode

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822547#comment-13822547
 ] 

Hudson commented on HBASE-9849:
---

SUCCESS: Integrated in HBase-TRUNK #4680 (See 
[https://builds.apache.org/job/HBase-TRUNK/4680/])
HBASE-9849 [REST] Forbidden schema delete in read only mode (Julian Zhou) 
(larsh: rev 1541644)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/SchemaResource.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/rest/TestSchemaResource.java


> [REST] Forbidden schema delete in read only mode
> 
>
> Key: HBASE-9849
> URL: https://issues.apache.org/jira/browse/HBASE-9849
> Project: HBase
>  Issue Type: Bug
>  Components: REST
>Affects Versions: 0.98.0, 0.94.14
>Reporter: Julian Zhou
>Assignee: Julian Zhou
> Fix For: 0.98.0, 0.96.1, 0.94.14
>
> Attachments: 9849-0.94-v0.patch, 9849-trunk-v0.patch
>
>
> If "hbase.rest.readonly" is set, all write operations should be forbidden 
> via REST, right? So table schema deletion should also be forbidden in 
> read-only mode?
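The guard the description asks for reduces to a simple pattern: every mutating REST handler checks the read-only flag before doing any work. The sketch below is hypothetical — the class name, constructor, and integer status returns are invented for illustration and are not the actual SchemaResource change.

```java
/** Hypothetical sketch of a read-only guard for a mutating REST handler. */
public class ReadOnlyGuardSketch {
    private final boolean readonly; // would come from "hbase.rest.readonly"

    public ReadOnlyGuardSketch(boolean readonly) {
        this.readonly = readonly;
    }

    /** Returns an HTTP status code: 403 when the service is read-only. */
    public int deleteSchema() {
        if (readonly) {
            return 403; // Forbidden: writes, including schema deletes, are blocked
        }
        // ... perform the actual table schema deletion here ...
        return 200;
    }
}
```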



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9710) Use the region name, not the encoded name, when region is not on current server

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822550#comment-13822550
 ] 

Hudson commented on HBASE-9710:
---

SUCCESS: Integrated in HBase-TRUNK #4680 (See 
[https://builds.apache.org/job/HBase-TRUNK/4680/])
HBASE-9710 Use the region name, not the encoded name, when region is not on 
current server (stack: rev 1541820)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java


> Use the region name, not the encoded name, when region is not on current 
> server
> ---
>
> Key: HBASE-9710
> URL: https://issues.apache.org/jira/browse/HBASE-9710
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.95.2, 0.96.0
>Reporter: Benoit Sigoure
>Assignee: Benoit Sigoure
>Priority: Minor
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 
> 0001-Log-the-region-name-instead-of-the-encoded-region-na.patch, 9710v2.txt, 
> 9710v2.txt, 9710v3.txt, 9710v4.txt
>
>
> When we throw a {{RegionOpeningException}} or a {{NotServingRegionException}} 
> we put the encoded region name in the exception, which isn't super useful.  I 
> propose putting the region name instead.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9907) Rig to fake a cluster so can profile client behaviors

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822549#comment-13822549
 ] 

Hudson commented on HBASE-9907:
---

SUCCESS: Integrated in HBase-TRUNK #4680 (See 
[https://builds.apache.org/job/HBase-TRUNK/4680/])
HBASE-9907 Rig to fake a cluster so can profile client behaviors (stack: rev 
1541703)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnection.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTableInterface.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MultiServerCallable.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java
* 
/hbase/trunk/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestClientNoCluster.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestIPC.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/protobuf/TestProtobufUtil.java


> Rig to fake a cluster so can profile client behaviors
> -
>
> Key: HBASE-9907
> URL: https://issues.apache.org/jira/browse/HBASE-9907
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 0.96.0
>Reporter: stack
>Assignee: stack
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9907.txt, 9907.txt, 9907v2.txt, 9907v3.txt, 9907v3.txt, 
> 9907v3.txt.1, 9907v4.096.txt, 9907v4.txt, 9907v4.txt
>
>
> Patch carried over from HBASE-9775 parent issue.  Adds to the 
> TestClientNoCluster#main a rig that allows faking many clients against a few 
> servers and the opposite.  Useful for studying client operation.
> Includes a few changes to pb makings to try and save on a few creations.
> Also has an edit of javadoc on how to create an HConnection and HTable trying 
> to be more forceful about pointing you in right direction ([~lhofhansl] -- 
> mind reviewing these javadoc changes?)
> I have a +1 already on this patch up in parent issue.  Will run by hadoopqa 
> to make sure all good before commit.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9810) Global memstore size will be calculated wrongly if replaying recovered edits throws exception

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822554#comment-13822554
 ] 

Hudson commented on HBASE-9810:
---

SUCCESS: Integrated in HBase-TRUNK #4680 (See 
[https://builds.apache.org/job/HBase-TRUNK/4680/])
HBASE-9810 Global memstore size will be calculated wrongly if replaying 
recovered edits throws exception (zjushch: rev 1541783)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> Global memstore size will be calculated wrongly if replaying recovered edits 
> throws exception
> -
>
> Key: HBASE-9810
> URL: https://issues.apache.org/jira/browse/HBASE-9810
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0, 0.96.1
>Reporter: chunhui shen
>Assignee: chunhui shen
>Priority: Critical
> Fix For: 0.98.0, 0.96.1
>
> Attachments: hbase-9810-trunk.patch
>
>
> Recently we encountered such a case on a 0.94 version:
> Flush is triggered frequently because:
> {noformat}DEBUG org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Flush 
> thread woke up because memory above low water=14.4g
> {noformat}
> But, the real global memstore size is about 1g.
> It seems the global memstore size has been calculated wrongly.
> Through the logs, I find the following root cause log:
> {noformat}
> ERROR org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed 
> open of region=notifysub2_index,\x83\xDC^\xCD\xA3\x8A<\x
> E2\x8E\xE6\xAD!\xDC\xE8t\xED,1379148697072.46be7c2d71c555379278a7494df3015e., 
> starting to roll back the global memstore size.
> java.lang.NegativeArraySizeException
> at org.apache.hadoop.hbase.KeyValue.getFamily(KeyValue.java:1096)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:2933)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:2811)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:583)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:499)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3939)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3887)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:332)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> {noformat}
> Browsing the code for this part, it seems to be a critical bug in the global 
> memstore size accounting when replaying recovered edits.
> (RegionServerAccounting#clearRegionReplayEditsSize is called for each edit 
> file, which means the rolled-back size is smaller than the actual size when 
> calling RegionServerAccounting#rollbackRegionReplayEditsSize.)
> Anyway, the solution is as easy as the patch.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9757) Reenable fast region move in SlowDeterministicMonkey

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822553#comment-13822553
 ] 

Hudson commented on HBASE-9757:
---

SUCCESS: Integrated in HBase-TRUNK #4680 (See 
[https://builds.apache.org/job/HBase-TRUNK/4680/])
HBASE-9757 Reenable fast region move in SlowDeterministicMonkey (jxiang: rev 
1541811)
* 
/hbase/trunk/hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/actions/ChangeEncodingAction.java
* 
/hbase/trunk/hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/factories/SlowDeterministicMonkeyFactory.java
* 
/hbase/trunk/hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestBigLinkedList.java


> Reenable fast region move in SlowDeterministicMonkey
> 
>
> Key: HBASE-9757
> URL: https://issues.apache.org/jira/browse/HBASE-9757
> Project: HBase
>  Issue Type: Task
>  Components: test
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 0.96-9757.patch, trunk-9757.patch, trunk-9757_v2.patch
>
>
> HBASE-9338 slows down the region move CM a little so that ITBLL is green for 
> 0.96.0 RC. We should revert the change and make sure the test is still green.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9960) Fix javadoc for CellUtil#tagsIterator()

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822555#comment-13822555
 ] 

Hudson commented on HBASE-9960:
---

SUCCESS: Integrated in HBase-TRUNK #4680 (See 
[https://builds.apache.org/job/HBase-TRUNK/4680/])
HBASE-9960. Fix javadoc for CellUtil#tagsIterator() (apurtell: rev 1541771)
* /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java


> Fix javadoc for CellUtil#tagsIterator()
> ---
>
> Key: HBASE-9960
> URL: https://issues.apache.org/jira/browse/HBASE-9960
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Trivial
> Fix For: 0.98.0
>
> Attachments: 9960.txt
>
>
> The @return tag has no arguments.
> {code}
>* @return
>*/
>   public static Iterator<Tag> tagsIterator(final byte[] tags, final int 
> offset, final short length) {
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9958) Remove some array copy, change lock scope in locateRegion

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822546#comment-13822546
 ] 

Hudson commented on HBASE-9958:
---

SUCCESS: Integrated in HBase-TRUNK #4680 (See 
[https://builds.apache.org/job/HBase-TRUNK/4680/])
HBASE-9958 Remove some array copy, change lock scope in locateRegion (nkeywal: 
rev 1541688)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionInfo.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MetaScanner.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/TableName.java


> Remove some array copy, change lock scope in locateRegion
> -
>
> Key: HBASE-9958
> URL: https://issues.apache.org/jira/browse/HBASE-9958
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9958.v1.patch, 9958.v2.patch, 9958.v2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9870) HFileDataBlockEncoderImpl#diskToCacheFormat uses wrong format

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822552#comment-13822552
 ] 

Hudson commented on HBASE-9870:
---

SUCCESS: Integrated in HBase-TRUNK #4680 (See 
[https://builds.apache.org/job/HBase-TRUNK/4680/])
HBASE-9870 HFileDataBlockEncoderImpl#diskToCacheFormat uses wrong format 
(jxiang: rev 1541629)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoding.java
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContext.java
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseRegionObserver.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/HalfStoreFileReader.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileReader.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileWriter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheKey.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoder.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoderImpl.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV3.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV3.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/NoOpDataBlockEncoder.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileInfo.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/SimpleRegionObserver.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHalfStoreFileReader.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestChangingEncoding.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestEncodedSeekers.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestLoadAndSwitchEncodeOnDisk.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockCompatibility.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileDataBlockEncoder.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/DataBlockEncodingTool.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/EncodedSeekPerformanceTest.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/HFileReadWriteTest.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/MockStoreFile.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCacheOnWriteInSchema.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompoundBloomFilter.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestFSErrorsExposed.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStore.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFile.java
* 
/hbase/trunk/hbase-server/src

[jira] [Commented] (HBASE-6642) enable_all,disable_all,drop_all can call "list" command with regex directly.

2013-11-14 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822594#comment-13822594
 ] 

Jean-Marc Spaggiari commented on HBASE-6642:


Faced the same "issue" recently. Might be good to fix that.

Now, since Java and Ruby do not return the same result for a given regex, I 
think we should use the same API for both filterings, to make sure the pattern 
applies the same way to the list. Doing that would let us keep the current 
feature while fixing it. Just a suggestion; not sure if it's doable...
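As one concrete illustration of the Java/Ruby mismatch: a bare '*' is not even a valid pattern for java.util.regex (it is a dangling metacharacter), while '.*' matches everything — which lines up with the shell transcript in the issue description. The class name and the sample table name below are made up for illustration.

```java
import java.util.regex.Pattern;
import java.util.regex.PatternSyntaxException;

/** Illustration (made-up names): why '*' and '.*' behave differently in Java. */
public class RegexMismatchDemo {
    /** True if java.util.regex accepts the pattern at all. */
    public static boolean isValidJavaRegex(String pattern) {
        try {
            Pattern.compile(pattern);
            return true;
        } catch (PatternSyntaxException e) {
            return false; // a bare '*' fails with "Dangling meta character"
        }
    }

    public static void main(String[] args) {
        System.out.println(isValidJavaRegex("*"));   // not a valid Java regex
        System.out.println("zk0113".matches(".*")); // '.*' matches any table name
    }
}
```

Ruby's glob-style matching treats '*' as "anything", so a shared, single matching path (one engine, one pattern dialect) is what keeps the listed tables and the acted-on tables consistent.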

> enable_all,disable_all,drop_all can call "list" command with regex directly.
> 
>
> Key: HBASE-6642
> URL: https://issues.apache.org/jira/browse/HBASE-6642
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 0.94.6, 0.95.0
>Reporter: Y. SREENIVASULU REDDY
>Assignee: rajeshbabu
> Attachments: HBASE-6642_trunk.patch
>
>
> Created a few tables, then performed the disable_all operation in the shell 
> prompt, but it did not perform the operation successfully.
> {noformat}
> hbase(main):043:0> disable_all '*'
> table12
> zk0113
> zk0114
> Disable the above 3 tables (y/n)?
> y/
> 3 tables successfully disabled
> just it is showing the message but operation is not success.
> but the following way only performing successfully
> hbase(main):043:0> disable_all '*.*'
> table12
> zk0113
> zk0114
> Disable the above 3 tables (y/n)?
> y
> 3 tables successfully disabled
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9969) Improve KeyValueHeap using loser tree

2013-11-14 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822603#comment-13822603
 ] 

Ted Yu commented on HBASE-9969:
---

Putting patch on review board would make reviewing easier.

For benchmark B, can you try ColumnPaginationFilter with smaller offset ?
Giving percentage improvement in the tables is desirable.
{code}
+ */
+public class LoserTree {
{code}
Please add annotation for audience.
{code}
+   * {@code tree[i]} where i > 0 stores the index to greater value between 
{@code value[tree[2*i]]}
+   * and {@code value[tree[2*i + 1]]}.
{code}
'value[' should be 'values[', right ?
{code}
+   * @return the index to the minimal elements.
{code}
elements -> element
{code}
+   * Pushes next value from the stream that we previously taken the minimal 
element from.
{code}
taken -> took
{code}
+   * Passes {@code NULL} to value if the stream has been reached EOF.
{code}
Remove 'been'
{code}
+if (index != topIndex()) {
+  throw new IllegalArgumentException("Only the top index can be updated");
{code}
Consider including index and topIndex in the exception message.
{code}
+if (value == null && values.get(index) != null) {
+  numOpenStreams--;
+  if (numOpenStreams < 0) {
{code}
In what condition would numOpenStreams become negative ?
{code}
+throw new AssertionError("numOpenStreams is negative: " + 
numOpenStreams);
{code}
Throw IllegalStateException.
{code}
+  public List getOpenStreamsForTesting() {
{code}
The above is used in KeyValueHeap, consider renaming.
{code}
+   * from bottom up to the root. Once it "loses", it stops there and the 
winner continues to fight to up.
+   */
+  private void update(int i) {
{code}
'fight to up' -> 'fight to top'
Please add @param for i.

KeyValueHeapBenchmark.java and TestLoserTree.java need license.
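For context, a minimal self-contained sketch of the loser-tree idea (illustrative only; the class and method names here are invented, not the patch's): tree[0] holds the index of the overall winner, and each internal node stores the loser of its match, so refilling a stream replays a single root path of about log2(k) comparisons, versus the roughly 2*log2(k) a binary heap's sift-down costs.

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

/** Illustrative loser-tree k-way merge. Not the patch's code; names invented. */
public class LoserTreeSketch {
  private final int k;
  private final int[] tree;        // tree[0] = winner; tree[1..k-1] = losers
  private final Integer[] values;  // current head of each stream; null == EOF
  private final List<Iterator<Integer>> streams;

  LoserTreeSketch(List<Iterator<Integer>> streams) {
    this.streams = streams;
    this.k = streams.size();
    this.tree = new int[k];
    this.values = new Integer[k];
    Arrays.fill(tree, -1);
    for (int i = 0; i < k; i++) {
      values[i] = streams.get(i).hasNext() ? streams.get(i).next() : null;
    }
    for (int i = 0; i < k; i++) build(i);  // play each leaf up the tree once
  }

  // a beats b if its value is smaller; an exhausted stream (null) always loses
  private boolean beats(int a, int b) {
    if (values[a] == null) return false;
    if (values[b] == null) return true;
    return values[a] < values[b];
  }

  private void build(int leaf) {
    int winner = leaf;
    for (int node = (leaf + k) / 2; node > 0; node /= 2) {
      if (tree[node] < 0) { tree[node] = winner; return; }  // wait for opponent
      if (beats(tree[node], winner)) {
        int t = winner; winner = tree[node]; tree[node] = t; // loser stays put
      }
    }
    tree[0] = winner;
  }

  /** Pops the smallest head element, refills that stream, replays one path. */
  Integer next() {
    int w = tree[0];
    Integer result = values[w];
    if (result == null) return null;  // every stream is exhausted
    values[w] = streams.get(w).hasNext() ? streams.get(w).next() : null;
    int winner = w;
    for (int node = (w + k) / 2; node > 0; node /= 2) {  // ~log2(k) compares
      if (beats(tree[node], winner)) {
        int t = winner; winner = tree[node]; tree[node] = t;
      }
    }
    tree[0] = winner;
    return result;
  }

  public static void main(String[] args) {
    List<Iterator<Integer>> s = Arrays.asList(
        Arrays.asList(1, 4, 7).iterator(),
        Arrays.asList(2, 5, 8).iterator(),
        Arrays.asList(3, 6, 9).iterator());
    LoserTreeSketch lt = new LoserTreeSketch(s);
    StringBuilder out = new StringBuilder();
    for (Integer v = lt.next(); v != null; v = lt.next()) out.append(v);
    System.out.println(out);  // prints 123456789
  }
}
```

Running main merges the three sorted streams into 123456789; each next() touches only the single leaf-to-root path of the refilled stream.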


> Improve KeyValueHeap using loser tree
> -
>
> Key: HBASE-9969
> URL: https://issues.apache.org/jira/browse/HBASE-9969
> Project: HBase
>  Issue Type: Improvement
>Reporter: Chao Shi
> Attachments: hbase-9969.patch, hbase-9969.patch, 
> kvheap-benchmark.png, kvheap-benchmark.txt
>
>
> A loser tree is a better data structure than a binary heap for this: it saves half 
> of the comparisons on each next(), though the time complexity remains O(log N).
> Currently a scan or get goes through two KeyValueHeaps: one merges the KVs 
> read from multiple HFiles in a single store, the other merges results 
> from multiple stores. This patch should improve both cases whenever CPU 
> is the bottleneck (e.g. scan with filter over cached blocks, HBASE-9811).
> All of the optimization work is done in KeyValueHeap and does not change its 
> public interfaces. The new code is cleaner and simpler to understand.





[jira] [Updated] (HBASE-9959) Remove some array copy - server side

2013-11-14 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9959:
---

Status: Open  (was: Patch Available)

> Remove some array copy - server side
> 
>
> Key: HBASE-9959
> URL: https://issues.apache.org/jira/browse/HBASE-9959
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs, regionserver
>Affects Versions: 0.96.0, 0.98.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9959-trunk.v1.patch, 9959-trunk.v2.patch, 
> 9959-trunk.v2.patch, 9959-trunk.v2.patch, 9959.v1.patch
>
>






[jira] [Updated] (HBASE-9959) Remove some array copy - server side

2013-11-14 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9959:
---

Status: Patch Available  (was: Open)

> Remove some array copy - server side
> 
>
> Key: HBASE-9959
> URL: https://issues.apache.org/jira/browse/HBASE-9959
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs, regionserver
>Affects Versions: 0.96.0, 0.98.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9959-trunk.v1.patch, 9959-trunk.v2.patch, 
> 9959-trunk.v2.patch, 9959-trunk.v2.patch, 9959.v1.patch, 9959.v3.patch
>
>






[jira] [Updated] (HBASE-9959) Remove some array copy - server side

2013-11-14 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9959:
---

Attachment: 9959.v3.patch

> Remove some array copy - server side
> 
>
> Key: HBASE-9959
> URL: https://issues.apache.org/jira/browse/HBASE-9959
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs, regionserver
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9959-trunk.v1.patch, 9959-trunk.v2.patch, 
> 9959-trunk.v2.patch, 9959-trunk.v2.patch, 9959.v1.patch, 9959.v3.patch
>
>






[jira] [Commented] (HBASE-9959) Remove some array copy - server side

2013-11-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822643#comment-13822643
 ] 

Hadoop QA commented on HBASE-9959:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12613876/9959.v3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.regionserver.TestSplitTransaction
  org.apache.hadoop.hbase.io.hfile.TestHFileWriterV3
  org.apache.hadoop.hbase.io.hfile.TestHFileWriterV2
  
org.apache.hadoop.hbase.regionserver.TestRegionMergeTransaction

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7864//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7864//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7864//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7864//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7864//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7864//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7864//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7864//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7864//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7864//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7864//console

This message is automatically generated.

> Remove some array copy - server side
> 
>
> Key: HBASE-9959
> URL: https://issues.apache.org/jira/browse/HBASE-9959
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs, regionserver
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9959-trunk.v1.patch, 9959-trunk.v2.patch, 
> 9959-trunk.v2.patch, 9959-trunk.v2.patch, 9959.v1.patch, 9959.v3.patch
>
>






[jira] [Commented] (HBASE-4654) [replication] Add a check to make sure we don't replicate to ourselves

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822669#comment-13822669
 ] 

Hudson commented on HBASE-4654:
---

SUCCESS: Integrated in hbase-0.96 #189 (See 
[https://builds.apache.org/job/hbase-0.96/189/])
HBASE-4654 [replication] Add a check to make sure we don't replicate to 
ourselves (Demai Ni) (larsh: rev 1541805)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java


> [replication] Add a check to make sure we don't replicate to ourselves
> --
>
> Key: HBASE-4654
> URL: https://issues.apache.org/jira/browse/HBASE-4654
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.90.4
>Reporter: Jean-Daniel Cryans
>Assignee: Demai Ni
> Fix For: 0.98.0, 0.96.1, 0.94.14
>
> Attachments: 4654-trunk.txt, HBASE-4654-0.94-v0.patch, 
> HBASE-4654-0.96-v0.patch, HBASE-4654-trunk-v0.patch, 
> HBASE-4654-trunk-v0.patch, HBASE-4654-trunk-v0.patch
>
>
> It's currently possible to add a peer for replication and point it to the 
> local cluster, which I believe could very well happen for those like us that 
> use only one ZK ensemble per DC so that only the root znode changes when you 
> want to set up replication intra-DC.
> I don't think comparing just the cluster ID would be enough because you would 
> normally use a different one for another cluster and nothing will block you 
> from pointing elsewhere.
> Comparing the ZK ensemble address doesn't work either when you have multiple 
> DNS entries that point at the same place.
> I think this could be resolved by looking up the master address in the 
> relevant znode as it should be exactly the same thing in the case where you 
> have the same cluster.





[jira] [Commented] (HBASE-9810) Global memstore size will be calculated wrongly if replaying recovered edits throws exception

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822670#comment-13822670
 ] 

Hudson commented on HBASE-9810:
---

SUCCESS: Integrated in hbase-0.96 #189 (See 
[https://builds.apache.org/job/hbase-0.96/189/])
HBASE-9810 Global memstore size will be calculated wrongly if replaying 
recovered edits throws exception (zjushch: rev 1541784)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> Global memstore size will be calculated wrongly if replaying recovered edits 
> throws exception
> -
>
> Key: HBASE-9810
> URL: https://issues.apache.org/jira/browse/HBASE-9810
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0, 0.96.1
>Reporter: chunhui shen
>Assignee: chunhui shen
>Priority: Critical
> Fix For: 0.98.0, 0.96.1
>
> Attachments: hbase-9810-trunk.patch
>
>
> Recently we encountered such a case in 0.94-version:
> Flush is triggered frequently because:
> {noformat}DEBUG org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Flush 
> thread woke up because memory above low water=14.4g
> {noformat}
> But, the real global memstore size is about 1g.
> It seems the global memstore size has been calculated wrongly.
> Through the logs, I find the following root cause log:
> {noformat}
> ERROR org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed 
> open of region=notifysub2_index,\x83\xDC^\xCD\xA3\x8A<\x
> E2\x8E\xE6\xAD!\xDC\xE8t\xED,1379148697072.46be7c2d71c555379278a7494df3015e., 
> starting to roll back the global memstore size.
> java.lang.NegativeArraySizeException
> at org.apache.hadoop.hbase.KeyValue.getFamily(KeyValue.java:1096)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:2933)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:2811)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:583)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:499)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3939)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3887)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:332)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> {noformat}
> Browsing the code of this part, there seems to be a critical bug in the global 
> memstore size accounting when replaying recovered edits.
> (RegionServerAccounting#clearRegionReplayEditsSize is called for each edit 
> file, which means the rolled-back size is smaller than the actual size when 
> RegionServerAccounting#rollbackRegionReplayEditsSize is called.)
> Anyway, the solution is as easy as the patch.





[jira] [Commented] (HBASE-9958) Remove some array copy, change lock scope in locateRegion

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822662#comment-13822662
 ] 

Hudson commented on HBASE-9958:
---

SUCCESS: Integrated in hbase-0.96 #189 (See 
[https://builds.apache.org/job/hbase-0.96/189/])
HBASE-9958 Remove some array copy, change lock scope in locateRegion (nkeywal: 
rev 1541691)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionInfo.java
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MetaScanner.java
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* 
/hbase/branches/0.96/hbase-common/src/main/java/org/apache/hadoop/hbase/TableName.java


> Remove some array copy, change lock scope in locateRegion
> -
>
> Key: HBASE-9958
> URL: https://issues.apache.org/jira/browse/HBASE-9958
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9958.v1.patch, 9958.v2.patch, 9958.v2.patch
>
>






[jira] [Commented] (HBASE-9849) [REST] Forbidden schema delete in read only mode

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822663#comment-13822663
 ] 

Hudson commented on HBASE-9849:
---

SUCCESS: Integrated in hbase-0.96 #189 (See 
[https://builds.apache.org/job/hbase-0.96/189/])
HBASE-9849 [REST] Forbidden schema delete in read only mode (Julian Zhou) 
(larsh: rev 1541643)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/SchemaResource.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/rest/TestSchemaResource.java


> [REST] Forbidden schema delete in read only mode
> 
>
> Key: HBASE-9849
> URL: https://issues.apache.org/jira/browse/HBASE-9849
> Project: HBase
>  Issue Type: Bug
>  Components: REST
>Affects Versions: 0.98.0, 0.94.14
>Reporter: Julian Zhou
>Assignee: Julian Zhou
> Fix For: 0.98.0, 0.96.1, 0.94.14
>
> Attachments: 9849-0.94-v0.patch, 9849-trunk-v0.patch
>
>
> If "hbase.rest.readonly" was set, all write operations should be forbidden 
> via REST, right? So table schema deletion should also be forbidden in 
> readonly mode?





[jira] [Commented] (HBASE-9870) HFileDataBlockEncoderImpl#diskToCacheFormat uses wrong format

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822667#comment-13822667
 ] 

Hudson commented on HBASE-9870:
---

SUCCESS: Integrated in hbase-0.96 #189 (See 
[https://builds.apache.org/job/hbase-0.96/189/])
HBASE-9870 HFileDataBlockEncoderImpl#diskToCacheFormat uses wrong format 
(jxiang: rev 1541627)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
* 
/hbase/branches/0.96/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoding.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/io/HalfStoreFileReader.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileReader.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheKey.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoder.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoderImpl.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/NoOpDataBlockEncoder.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileInfo.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHalfStoreFileReader.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestChangingEncoding.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestEncodedSeekers.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestLoadAndSwitchEncodeOnDisk.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockCompatibility.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileDataBlockEncoder.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/DataBlockEncodingTool.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/EncodedSeekPerformanceTest.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/HFileReadWriteTest.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCacheOnWriteInSchema.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompoundBloomFilter.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDefaultCompactSelection.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestFSErrorsExposed.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStore.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFile.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/util/LoadTestTool.java
* /hbase/branches/0.96/hbase-shell/src/main/ruby/hbase/admin.rb
* /hbase/branches/0.96/src/main/docbkx/shell.xml


> HFileDataBlockEncoderImpl#diskToCacheFormat uses wrong format
> -
>
> Key: HBASE-9870
> URL: https://issues.apache.org/jira/browse/HBASE-9870
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.98.0, 0.96.1
>
> Attachments: trunk-9870.patch, trunk-9870_v2.patch, 
> trunk-9870_v3.patch
>
>
> In this method, we have
> {code}
>  

[jira] [Commented] (HBASE-9710) Use the region name, not the encoded name, when region is not on current server

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822665#comment-13822665
 ] 

Hudson commented on HBASE-9710:
---

SUCCESS: Integrated in hbase-0.96 #189 (See 
[https://builds.apache.org/job/hbase-0.96/189/])
HBASE-9710 Use the region name, not the encoded name, when region is not on 
current server (stack: rev 1541821)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java


> Use the region name, not the encoded name, when region is not on current 
> server
> ---
>
> Key: HBASE-9710
> URL: https://issues.apache.org/jira/browse/HBASE-9710
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.95.2, 0.96.0
>Reporter: Benoit Sigoure
>Assignee: Benoit Sigoure
>Priority: Minor
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 
> 0001-Log-the-region-name-instead-of-the-encoded-region-na.patch, 9710v2.txt, 
> 9710v2.txt, 9710v3.txt, 9710v4.txt
>
>
> When we throw a {{RegionOpeningException}} or a {{NotServingRegionException}} 
> we put the encoded region name in the exception, which isn't super useful.  I 
> propose putting the region name instead.





[jira] [Commented] (HBASE-9757) Reenable fast region move in SlowDeterministicMonkey

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822668#comment-13822668
 ] 

Hudson commented on HBASE-9757:
---

SUCCESS: Integrated in hbase-0.96 #189 (See 
[https://builds.apache.org/job/hbase-0.96/189/])
HBASE-9757 Reenable fast region move in SlowDeterministicMonkey (jxiang: rev 
1541812)
* /hbase/branches/0.96/hbase-common/src/main/resources/hbase-default.xml
* 
/hbase/branches/0.96/hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/actions/ChangeEncodingAction.java
* 
/hbase/branches/0.96/hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/factories/SlowDeterministicMonkeyFactory.java
* 
/hbase/branches/0.96/hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestBigLinkedList.java


> Reenable fast region move in SlowDeterministicMonkey
> 
>
> Key: HBASE-9757
> URL: https://issues.apache.org/jira/browse/HBASE-9757
> Project: HBase
>  Issue Type: Task
>  Components: test
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 0.96-9757.patch, trunk-9757.patch, trunk-9757_v2.patch
>
>
> HBASE-9338 slows down the region move CM a little so that ITBLL is green for 
> 0.96.0 RC. We should revert the change and make sure the test is still green.





[jira] [Commented] (HBASE-9907) Rig to fake a cluster so can profile client behaviors

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822664#comment-13822664
 ] 

Hudson commented on HBASE-9907:
---

SUCCESS: Integrated in hbase-0.96 #189 (See 
[https://builds.apache.org/job/hbase-0.96/189/])
HBASE-9907 Rig to fake a cluster so can profile client behaviors (stack: rev 
1541708)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnection.java
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTableInterface.java
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MultiServerCallable.java
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java
* 
/hbase/branches/0.96/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestClientNoCluster.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestIPC.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/protobuf/TestProtobufUtil.java


> Rig to fake a cluster so can profile client behaviors
> -
>
> Key: HBASE-9907
> URL: https://issues.apache.org/jira/browse/HBASE-9907
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 0.96.0
>Reporter: stack
>Assignee: stack
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9907.txt, 9907.txt, 9907v2.txt, 9907v3.txt, 9907v3.txt, 
> 9907v3.txt.1, 9907v4.096.txt, 9907v4.txt, 9907v4.txt
>
>
> Patch carried over from HBASE-9775 parent issue.  Adds to the 
> TestClientNoCluster#main a rig that allows faking many clients against a few 
> servers and the opposite.  Useful for studying client operation.
> Includes a few changes to pb makings to try and save on a few creations.
> Also has an edit of javadoc on how to create an HConnection and HTable trying 
> to be more forceful about pointing you in right direction ([~lhofhansl] -- 
> mind reviewing these javadoc changes?)
> I have a +1 already on this patch up in parent issue.  Will run by hadoopqa 
> to make sure all good before commit.





[jira] [Commented] (HBASE-9963) Remove the ReentrantReadWriteLock in the MemStore

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822666#comment-13822666
 ] 

Hudson commented on HBASE-9963:
---

SUCCESS: Integrated in hbase-0.96 #189 (See 
[https://builds.apache.org/job/hbase-0.96/189/])
HBASE-9963 Remove the ReentrantReadWriteLock in the MemStore (nkeywal: rev 
1541882)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTableInterface.java
* 
/hbase/branches/0.96/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStore.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHeapSize.java


> Remove the ReentrantReadWriteLock in the MemStore
> -
>
> Key: HBASE-9963
> URL: https://issues.apache.org/jira/browse/HBASE-9963
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Minor
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9963.96.v3.patch, 9963.v1.patch, 9963.v2.patch, 
> 9963.v3.patch
>
>
> If I'm not wrong, the MemStore is always used from the HStore. The code in 
> HStore takes a lock before calling into MemStore, so the lock in MemStore is 
> useless. 
> For example, in HStore
> {code}
>   @Override
>   public long upsert(Iterable<Cell> cells, long readpoint) throws IOException 
> {
> this.lock.readLock().lock();
> try {
>   return this.memstore.upsert(cells, readpoint);
> } finally {
>   this.lock.readLock().unlock();
> }
>   }
> {code}
> With this in MemStore
> {code}
>   public long upsert(Iterable<Cell> cells, long readpoint) {
>this.lock.readLock().lock(); // <==Am I useful?
> try {
>   long size = 0;
>   for (Cell cell : cells) {
> size += upsert(cell, readpoint);
>   }
>   return size;
> } finally {
>   this.lock.readLock().unlock();
> }
>   }
> {code}
> I've checked: all the locks in MemStore are backed by a lock in HStore, the 
> only exception being
> {code}
>   void snapshot() {
> this.memstore.snapshot();
>   }
> {code}
> And I would say it's a bug. If it's confirmed ([~lhofhansl], what do you 
> think?), I will add a lock there and remove all of the locks in MemStore. They 
> do show up in the profiling.
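A stand-in sketch of the direction being proposed (hypothetical names, not HBase's real classes): keep one ReentrantReadWriteLock at the store level, guard snapshot() with it as well, and drop the memstore's own lock entirely.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative stand-in only. The store-level lock guards every memstore call,
// including snapshot(), so the memstore no longer needs its own lock. (In the
// real code, concurrent read-locked mutators are safe because the memstore is
// backed by a concurrent skip list; this sketch is single-threaded.)
public class StoreLockSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private long memstoreSize = 0;  // stands in for the real memstore contents

  /** Mutators run under the store's read lock, as HStore#upsert already does. */
  public long upsert(long delta) {
    lock.readLock().lock();
    try {
      memstoreSize += delta;
      return memstoreSize;
    } finally {
      lock.readLock().unlock();
    }
  }

  /** The proposed fix: snapshot takes the store-level (write) lock too. */
  public long snapshot() {
    lock.writeLock().lock();
    try {
      long snapshotted = memstoreSize;
      memstoreSize = 0;
      return snapshotted;
    } finally {
      lock.writeLock().unlock();
    }
  }
}
```

With the guard moved up this way, every MemStore entry point is protected by exactly one lock, and the per-call ReentrantReadWriteLock overhead seen in profiles disappears.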





[jira] [Commented] (HBASE-8438) Extend bin/hbase to print a "mapreduce classpath"

2013-11-14 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822674#comment-13822674
 ] 

Nick Dimiduk commented on HBASE-8438:
-

I noticed the DEBUG output as well. It goes to stderr so I planned to pipe that 
to /dev/null from consuming shell scripts. I can look at changing the log level 
programmatically if you'd prefer.

hadoop-mapreduce-client-core.jar comes from the default Job values 
(mapOutputKeyClass, mapOutputValueClass, etc). I introduce a new method in 
HBASE-9165 that excludes these classes. We can switch this patch over to use 
that method if you prefer. Mind giving that one a review as well?

> Extend bin/hbase to print a "mapreduce classpath"
> -
>
> Key: HBASE-8438
> URL: https://issues.apache.org/jira/browse/HBASE-8438
> Project: HBase
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.94.6.1, 0.95.0
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 
> 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
> 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
> 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
> 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
> 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
> 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
> 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch, 
> 0001-HBASE-8438-Extend-bin-hbase-to-print-a-minimal-class.patch
>
>
> For tools like pig and hive, blindly appending the full output of `bin/hbase 
> classpath` to their own CLASSPATH is excessive. They already build CLASSPATH 
> entries for hadoop. All they need from us is the delta entries, the 
> dependencies we require w/o hadoop and all of it's transitive deps. This is 
> also a kindness for Windows, where there's a shorter limit on the length of 
> commandline arguments.
> See also HIVE-2055 for additional discussion.





[jira] [Created] (HBASE-9971) Port HBASE-9963 and part of HBASE-9958 to 0.94

2013-11-14 Thread Lars Hofhansl (JIRA)
Lars Hofhansl created HBASE-9971:


 Summary: Port HBASE-9963 and part of HBASE-9958 to 0.94
 Key: HBASE-9971
 URL: https://issues.apache.org/jira/browse/HBASE-9971
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
 Fix For: 0.94.14


Both of these have nice simple fixes that we should have in 0.94 as well.





[jira] [Commented] (HBASE-9834) Minimize byte[] copies for 'smart' clients

2013-11-14 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822691#comment-13822691
 ] 

Jesse Yates commented on HBASE-9834:


Cool. Committing this afternoon (PST) unless there are any objections

> Minimize byte[] copies for 'smart' clients
> --
>
> Key: HBASE-9834
> URL: https://issues.apache.org/jira/browse/HBASE-9834
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 0.94.14
>
> Attachments: hbase-9834-0.94-v0.patch, hbase-9834-0.94-v1.patch, 
> hbase-9834-0.94-v2.patch, hbase-9834-0.94-v3.patch
>
>
> 'Smart' clients (e.g. phoenix) that have in-depth knowledge of HBase often 
> bemoan the extra byte[] copies that must be done when building multiple 
> puts/deletes. We should provide a mechanism by which they can minimize these 
> copies, but still remain wire compatible. 





[jira] [Commented] (HBASE-9869) Optimize HConnectionManager#getCachedLocation

2013-11-14 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822692#comment-13822692
 ] 

Nicolas Liochon commented on HBASE-9869:


I've done some tests with TestClientNoCluster with 1 region:

||#clients||#puts||time without the patch||time with the patch||
|1 client|100 million|94 seconds|65 seconds|
|2 clients|50 million each|82 seconds|56 seconds|
|5 clients|20 million each|105 seconds|66 seconds|

With 5 clients, we have 10 threads trying to insert as much as possible, so 
more clients means more context switches and more memory pressure (it's 
different if they have to wait for an answer from a server, of course).
I need to do more tests with more regions. But so far, so good, I would say.

> Optimize HConnectionManager#getCachedLocation
> -
>
> Key: HBASE-9869
> URL: https://issues.apache.org/jira/browse/HBASE-9869
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9869.v1.patch, 9869.v1.patch, 9869.v2.patch
>
>
> Its javadoc says: "TODO: This method during writing consumes 15% of CPU doing 
> lookup". This is still true, says Yourkit. With 0.96, we also spend more time 
> in these methods. We retry more, and the AsyncProcess calls it in parallel.
> I don't have the patch for this yet, but I will spend some time on it.





[jira] [Created] (HBASE-9972) Make HBase export metrics to JMX by default, instead of to NullContext

2013-11-14 Thread Gaurav Menghani (JIRA)
Gaurav Menghani created HBASE-9972:
--

 Summary: Make HBase export metrics to JMX by default, instead of 
to NullContext
 Key: HBASE-9972
 URL: https://issues.apache.org/jira/browse/HBASE-9972
 Project: HBase
  Issue Type: Improvement
  Components: metrics
Affects Versions: 0.89-fb
Reporter: Gaurav Menghani
 Fix For: 0.89-fb


I was debugging something in the swift branch, and found that HBase doesn't 
export to JMX by default. The JMX server is spun up anyway in a single-node 
setup, so we might as well export the metrics to it.





[jira] [Assigned] (HBASE-9972) Make HBase export metrics to JMX by default, instead of to NullContext

2013-11-14 Thread Gaurav Menghani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Menghani reassigned HBASE-9972:
--

Assignee: Gaurav Menghani

> Make HBase export metrics to JMX by default, instead of to NullContext
> --
>
> Key: HBASE-9972
> URL: https://issues.apache.org/jira/browse/HBASE-9972
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 0.89-fb
>Reporter: Gaurav Menghani
>Assignee: Gaurav Menghani
> Fix For: 0.89-fb
>
>
> I was debugging something in the swift branch, and found that HBase doesn't 
> export to JMX by default. The JMX server is spun up anyway in a single-node 
> setup, so we might as well export the metrics to it.





[jira] [Commented] (HBASE-9969) Improve KeyValueHeap using loser tree

2013-11-14 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822760#comment-13822760
 ] 

Ted Yu commented on HBASE-9969:
---

{code}
* @return true if there are more keys, false if all scanners are done
*/
   public boolean next(List<Cell> result, int limit) throws IOException {
...
+return loserTree.isEmpty();
   }
{code}
Should the return value from loserTree.isEmpty() be negated?
{code}
+  if (isLazy && loserTree.getNumOfOpenStreams() > 1) {
 // If there is only one scanner left, we don't do lazy seek.
{code}
Please update comment above to match the condition.

> Improve KeyValueHeap using loser tree
> -
>
> Key: HBASE-9969
> URL: https://issues.apache.org/jira/browse/HBASE-9969
> Project: HBase
>  Issue Type: Improvement
>Reporter: Chao Shi
> Attachments: hbase-9969.patch, hbase-9969.patch, 
> kvheap-benchmark.png, kvheap-benchmark.txt
>
>
> LoserTree is a better data structure than a binary heap. It saves half of the 
> comparisons on each next(), though the time complexity is still O(log N).
> Currently a scan or get will go through two KeyValueHeaps: one merges KVs 
> read from multiple HFiles in a single store, the other merges results 
> from multiple stores. This patch should improve both cases whenever CPU 
> is the bottleneck (e.g. scan with filter over cached blocks, HBASE-9811).
> All of the optimization work is done in KeyValueHeap and does not change its 
> public interfaces. The new code is cleaner and simpler to understand.
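For readers unfamiliar with the structure, the merge described above can be sketched with a minimal loser tree over sorted int runs. This is an illustrative toy, not the KeyValueHeap patch itself; all class and method names here are invented. After each next(), only the winning run's leaf-to-root path is replayed (~log2(k) comparisons), which is where the savings over a binary heap's sift operations come from.

```java
import java.util.Arrays;

// Toy loser tree merging k sorted int runs. tree[0] holds the overall winner;
// tree[1..k-1] hold the losers of the internal matches.
public class LoserTreeDemo {
    static final int EXHAUSTED = Integer.MAX_VALUE;
    final int k;
    final int[] tree;   // tree[i] = index of the run parked (as loser) at node i
    final int[][] runs; // the sorted input runs
    final int[] pos;    // next unread index per run

    LoserTreeDemo(int[][] runs) {
        this.runs = runs;
        this.k = runs.length;
        this.tree = new int[k];
        this.pos = new int[k];
        Arrays.fill(tree, -1);
        for (int i = k - 1; i >= 0; i--) adjust(i); // play the initial matches
    }

    int key(int run) {
        return pos[run] < runs[run].length ? runs[run][pos[run]] : EXHAUSTED;
    }

    // Replay matches from run's leaf up to the root; losers stay in the tree.
    void adjust(int run) {
        int winner = run;
        for (int node = (run + k) / 2; node > 0; node /= 2) {
            if (tree[node] == -1) { tree[node] = winner; return; } // init-time park
            if (key(tree[node]) < key(winner)) {                   // winner loses here
                int t = winner; winner = tree[node]; tree[node] = t;
            }
        }
        tree[0] = winner;
    }

    int next() {
        int winner = tree[0];
        int value = key(winner);
        if (value == EXHAUSTED) return EXHAUSTED;
        pos[winner]++;      // consume from the winning run...
        adjust(winner);     // ...and replay only that run's path
        return value;
    }

    public static void main(String[] args) {
        LoserTreeDemo lt = new LoserTreeDemo(new int[][] {{1, 4, 7}, {2, 5, 8}, {3, 6, 9}});
        StringBuilder sb = new StringBuilder();
        int v = lt.next();
        while (v != EXHAUSTED) { sb.append(v).append(' '); v = lt.next(); }
        System.out.println(sb.toString().trim()); // prints the merged, sorted stream
    }
}
```

A binary heap would pay for both a sift-down of the replacement element and the comparisons against siblings; the loser tree only re-runs the matches along one path.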





[jira] [Commented] (HBASE-8465) Auto-drop rollback snapshot for snapshot restore

2013-11-14 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822777#comment-13822777
 ] 

Ted Yu commented on HBASE-8465:
---

{code}
+"hbase-failsafe-{snapshot.name}-{restore.timestamp}");
+  failSafeSnapshotSnapshotName = failSafeSnapshotSnapshotName
+.replace("{snapshot.name}", snapshotName)
+.replace("{table.name}", tableName.toString().replace(TableName.NAMESPACE_DELIM, '.'))
{code}
{table.name} doesn't appear in the default template. Please add a comment in 
hbase-default.xml.
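A hedged sketch of how that template expands: the template literal and placeholder tokens come from the quoted diff, but the helper and variable names here are illustrative. It also shows why the {table.name} replace is a no-op against the default template, which is the point about documenting it.

```java
// Illustrative expansion of the failsafe-snapshot name template quoted above.
// The default template lacks {table.name}, so that replace() is a no-op.
public class FailsafeNameDemo {
    static String expand(String template, String snapshotName, String tableName, long ts) {
        return template
            .replace("{snapshot.name}", snapshotName)
            .replace("{table.name}", tableName)   // no-op for the default template
            .replace("{restore.timestamp}", String.valueOf(ts));
    }

    public static void main(String[] args) {
        String def = "hbase-failsafe-{snapshot.name}-{restore.timestamp}";
        // prints "hbase-failsafe-snap1-1384454830701"
        System.out.println(expand(def, "snap1", "ns.table1", 1384454830701L));
    }
}
```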

> Auto-drop rollback snapshot for snapshot restore
> 
>
> Key: HBASE-8465
> URL: https://issues.apache.org/jira/browse/HBASE-8465
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Matteo Bertozzi
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 8465-trunk-v1.txt, 8465-trunk-v2.txt, 
> HBASE-8465-v3.patch, HBASE-8465-v4.patch
>
>
> Below is an excerpt from snapshot restore javadoc:
> {code}
>* Restore the specified snapshot on the original table. (The table must be 
> disabled)
>* Before restoring the table, a new snapshot with the current table state 
> is created.
>* In case of failure, the table will be rolled back to its original 
> state.
> {code}
> We can improve the handling of rollbackSnapshot in two ways:
> 1. give better name to the rollbackSnapshot (adding 
> {code}'-for-rollback-'{code}). Currently the name is of the form:
> String rollbackSnapshot = snapshotName + "-" + 
> EnvironmentEdgeManager.currentTimeMillis();
> 2. drop rollbackSnapshot at the end of restoreSnapshot() if the restore is 
> successful. We can introduce new config param, named 
> 'hbase.snapshot.restore.drop.rollback', to keep compatibility with current 
> behavior.





[jira] [Updated] (HBASE-9973) [ACL]: Users with 'Admin' ACL permission will lose permissions after upgrade to 0.96.x from 0.94.x or 0.92.x

2013-11-14 Thread Aleksandr Shulman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Shulman updated HBASE-9973:
-

Labels: acl  (was: )

> [ACL]: Users with 'Admin' ACL permission will lose permissions after upgrade 
> to 0.96.x from 0.94.x or 0.92.x
> 
>
> Key: HBASE-9973
> URL: https://issues.apache.org/jira/browse/HBASE-9973
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.96.0, 0.96.1
>Reporter: Aleksandr Shulman
>  Labels: acl
> Fix For: 0.96.1
>
>
> In our testing, we have uncovered that the ACL permissions for users with the 
> 'A' credential do not hold after the upgrade to 0.96.x.
> This is because in the ACL table, the entry for the admin user is a 
> permission on the '_acl_' table with permission 'A'. However, because of the 
> namespace transition, there is no longer an '_acl_' table. Therefore, that 
> entry in the hbase:acl table is no longer valid.
> Example:
> {code}hbase(main):002:0> scan 'hbase:acl'
> ROW   COLUMN+CELL
>  TestTable column=l:hdfs, timestamp=1384454830701, value=RW
>  TestTable column=l:root, timestamp=1384455875586, value=RWCA
>  _acl_ column=l:root, timestamp=1384454767568, value=C
>  _acl_ column=l:tableAdmin, timestamp=1384454788035, value=A
>  hbase:acl column=l:root, timestamp=1384455875786, value=C
> {code}
> In this case, the following entry becomes meaningless:
> {code} _acl_ column=l:tableAdmin, timestamp=1384454788035, value=A {code}
> As a result, 
> Proposed fix:
> I see the fix being relatively straightforward. As part of the migration, 
> rewrite any entry in the ACL table whose row key is '_acl_' into a new row 
> with key 'hbase:acl', all else being the same, and delete the old entry.
> This can go into the standard migration script that we expect users to run.





[jira] [Created] (HBASE-9973) [ACL]: Users with 'Admin' ACL permission will lose permissions after upgrade to 0.96.x from 0.94.x or 0.92.x

2013-11-14 Thread Aleksandr Shulman (JIRA)
Aleksandr Shulman created HBASE-9973:


 Summary: [ACL]: Users with 'Admin' ACL permission will lose 
permissions after upgrade to 0.96.x from 0.94.x or 0.92.x
 Key: HBASE-9973
 URL: https://issues.apache.org/jira/browse/HBASE-9973
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.96.0, 0.96.1
Reporter: Aleksandr Shulman
 Fix For: 0.96.1


In our testing, we have uncovered that the ACL permissions for users with the 
'A' credential do not hold after the upgrade to 0.96.x.

This is because in the ACL table, the entry for the admin user is a permission 
on the '_acl_' table with permission 'A'. However, because of the namespace 
transition, there is no longer an '_acl_' table. Therefore, that entry in the 
hbase:acl table is no longer valid.

Example:

{code}hbase(main):002:0> scan 'hbase:acl'
ROW   COLUMN+CELL
 TestTable column=l:hdfs, timestamp=1384454830701, value=RW
 TestTable column=l:root, timestamp=1384455875586, value=RWCA
 _acl_ column=l:root, timestamp=1384454767568, value=C
 _acl_ column=l:tableAdmin, timestamp=1384454788035, value=A
 hbase:acl column=l:root, timestamp=1384455875786, value=C
{code}

In this case, the following entry becomes meaningless:
{code} _acl_ column=l:tableAdmin, timestamp=1384454788035, value=A {code}

As a result, 

Proposed fix:
I see the fix being relatively straightforward. As part of the migration, 
rewrite any entry in the ACL table whose row key is '_acl_' into a new row 
with key 'hbase:acl', all else being the same, and delete the old entry.

This can go into the standard migration script that we expect users to run.
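A minimal sketch of the proposed key rewrite, assuming only what the description states. This models the pure row-key logic; the real migration step would scan the ACL table, Put the renamed row, and Delete the old one. The class and method names are invented for illustration.

```java
// Sketch of the proposed ACL migration: the pre-namespace '_acl_' row key
// becomes the post-namespace 'hbase:acl' key; all other entries pass through.
public class AclMigrationDemo {
    static String migrateRowKey(String rowKey) {
        return "_acl_".equals(rowKey) ? "hbase:acl" : rowKey;
    }

    public static void main(String[] args) {
        System.out.println(migrateRowKey("_acl_"));     // old admin entry -> new key
        System.out.println(migrateRowKey("TestTable")); // per-table entries unchanged
    }
}
```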





[jira] [Updated] (HBASE-9973) [ACL]: Users with 'Admin' ACL permission will lose permissions after upgrade to 0.96.x from 0.94.x or 0.92.x

2013-11-14 Thread Aleksandr Shulman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Shulman updated HBASE-9973:
-

Assignee: Himanshu Vashishtha

> [ACL]: Users with 'Admin' ACL permission will lose permissions after upgrade 
> to 0.96.x from 0.94.x or 0.92.x
> 
>
> Key: HBASE-9973
> URL: https://issues.apache.org/jira/browse/HBASE-9973
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.96.0, 0.96.1
>Reporter: Aleksandr Shulman
>Assignee: Himanshu Vashishtha
>  Labels: acl
> Fix For: 0.96.1
>
>
> In our testing, we have uncovered that the ACL permissions for users with the 
> 'A' credential do not hold after the upgrade to 0.96.x.
> This is because in the ACL table, the entry for the admin user is a 
> permission on the '_acl_' table with permission 'A'. However, because of the 
> namespace transition, there is no longer an '_acl_' table. Therefore, that 
> entry in the hbase:acl table is no longer valid.
> Example:
> {code}hbase(main):002:0> scan 'hbase:acl'
> ROW   COLUMN+CELL
>  TestTable column=l:hdfs, timestamp=1384454830701, value=RW
>  TestTable column=l:root, timestamp=1384455875586, value=RWCA
>  _acl_ column=l:root, timestamp=1384454767568, value=C
>  _acl_ column=l:tableAdmin, timestamp=1384454788035, value=A
>  hbase:acl column=l:root, timestamp=1384455875786, value=C
> {code}
> In this case, the following entry becomes meaningless:
> {code} _acl_ column=l:tableAdmin, timestamp=1384454788035, value=A {code}
> As a result, 
> Proposed fix:
> I see the fix being relatively straightforward. As part of the migration, 
> rewrite any entry in the ACL table whose row key is '_acl_' into a new row 
> with key 'hbase:acl', all else being the same, and delete the old entry.
> This can go into the standard migration script that we expect users to run.





[jira] [Commented] (HBASE-9908) [WINDOWS] Fix filesystem / classloader related unit tests

2013-11-14 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822817#comment-13822817
 ] 

Nick Dimiduk commented on HBASE-9908:
-

Cool. +1

> [WINDOWS] Fix filesystem / classloader related unit tests
> -
>
> Key: HBASE-9908
> URL: https://issues.apache.org/jira/browse/HBASE-9908
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 0.98.0, 0.96.1
>
> Attachments: hbase-9908_v1-addendum.patch, hbase-9908_v1.patch
>
>
> Some of the unit tests related to class loading and the filesystem are failing 
> on Windows.
> {code}
> org.apache.hadoop.hbase.coprocessor.TestClassLoading.testHBase3810
> org.apache.hadoop.hbase.coprocessor.TestClassLoading.testClassLoadingFromLocalFS
> org.apache.hadoop.hbase.coprocessor.TestClassLoading.testPrivateClassLoader
> org.apache.hadoop.hbase.coprocessor.TestClassLoading.testClassLoadingFromRelativeLibDirInJar
> org.apache.hadoop.hbase.coprocessor.TestClassLoading.testClassLoadingFromLibDirInJar
> org.apache.hadoop.hbase.coprocessor.TestClassLoading.testClassLoadingFromHDFS
> org.apache.hadoop.hbase.backup.TestHFileArchiving.testCleaningRace
> org.apache.hadoop.hbase.regionserver.wal.TestDurability.testDurability
> org.apache.hadoop.hbase.regionserver.wal.TestHLog.testMaintainOrderWithConcurrentWrites
> org.apache.hadoop.hbase.security.access.TestAccessController.testBulkLoad
> org.apache.hadoop.hbase.regionserver.TestHRegion.testRecoveredEditsReplayCompaction
> org.apache.hadoop.hbase.regionserver.TestHRegionBusyWait.testRecoveredEditsReplayCompaction
> org.apache.hadoop.hbase.util.TestFSUtils.testRenameAndSetModifyTime
> {code}
> The root causes are: 
>  - Using local file name for referring to hdfs paths (HBASE-6830)
>  - Classloader using the wrong file system 
>  - StoreFile readers not being closed (for unfinished compaction)





[jira] [Commented] (HBASE-9961) [WINDOWS] Multicast should bind to local address

2013-11-14 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822827#comment-13822827
 ] 

Nicolas Liochon commented on HBASE-9961:


+1

> [WINDOWS] Multicast should bind to local address
> 
>
> Key: HBASE-9961
> URL: https://issues.apache.org/jira/browse/HBASE-9961
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 0.98.0, 0.96.1
>
> Attachments: hbase-9961_v1.patch, hbase-9961_v2.patch
>
>
> Binding to a multicast address (such as "hbase.status.multicast.address.ip") 
> seems to be the preferred method on most unix systems and Linux (2, 3). At 
> least on RedHat, binding to the multicast address might not filter out other 
> traffic coming to the same port but for different multicast groups (2). 
> However, on Windows, you cannot bind to a non-local (class D) address (1), 
> which seems to be correct according to the spec.
> # http://msdn.microsoft.com/en-us/library/ms737550%28v=vs.85%29.aspx
> # https://bugzilla.redhat.com/show_bug.cgi?id=231899
> # 
> http://stackoverflow.com/questions/10692956/what-does-it-mean-to-bind-a-multicast-udp-socket
> # https://issues.jboss.org/browse/JGRP-515
> The solution is to bind to the mcast address on Linux, but to a local address 
> on Windows. 
> TestHCM is also failing because of this. 
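The platform split described above can be sketched as follows; the helper, group address, and port are illustrative assumptions, not the patch's actual code. On Linux the socket binds to the multicast group address, on Windows to the wildcard local address; either way it would then join the group.

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;

// Choose the UDP bind address per OS: the multicast group address where the
// platform allows it, a wildcard local address on Windows (which rejects
// binding to a class D address).
public class McastBindDemo {
    static InetSocketAddress bindAddress(String osName, InetAddress group, int port) {
        return osName.toLowerCase().contains("win")
            ? new InetSocketAddress(port)            // wildcard bind on Windows
            : new InetSocketAddress(group, port);    // group bind elsewhere
    }

    public static void main(String[] args) throws UnknownHostException {
        InetAddress group = InetAddress.getByName("226.1.1.3"); // numeric, no DNS lookup
        System.out.println(bindAddress("Linux", group, 16100));
        System.out.println(bindAddress("Windows Server 2012", group, 16100));
    }
}
```

The actual socket setup would then be `new MulticastSocket(bindAddress(...))` followed by `joinGroup(group)` on both platforms.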





[jira] [Resolved] (HBASE-9834) Minimize byte[] copies for 'smart' clients

2013-11-14 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates resolved HBASE-9834.


Resolution: Fixed

Committed to 0.94. Thanks for the reviews, all.

> Minimize byte[] copies for 'smart' clients
> --
>
> Key: HBASE-9834
> URL: https://issues.apache.org/jira/browse/HBASE-9834
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 0.94.14
>
> Attachments: hbase-9834-0.94-v0.patch, hbase-9834-0.94-v1.patch, 
> hbase-9834-0.94-v2.patch, hbase-9834-0.94-v3.patch
>
>
> 'Smart' clients (e.g. phoenix) that have in-depth knowledge of HBase often 
> bemoan the extra byte[] copies that must be done when building multiple 
> puts/deletes. We should provide a mechanism by which they can minimize these 
> copies, but still remain wire compatible. 





[jira] [Commented] (HBASE-9834) Minimize byte[] copies for 'smart' clients

2013-11-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822866#comment-13822866
 ] 

stack commented on HBASE-9834:
--

+1  Thanks for removing dup'd code.

> Minimize byte[] copies for 'smart' clients
> --
>
> Key: HBASE-9834
> URL: https://issues.apache.org/jira/browse/HBASE-9834
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 0.94.14
>
> Attachments: hbase-9834-0.94-v0.patch, hbase-9834-0.94-v1.patch, 
> hbase-9834-0.94-v2.patch, hbase-9834-0.94-v3.patch
>
>
> 'Smart' clients (e.g. phoenix) that have in-depth knowledge of HBase often 
> bemoan the extra byte[] copies that must be done when building multiple 
> puts/deletes. We should provide a mechanism by which they can minimize these 
> copies, but still remain wire compatible. 





[jira] [Updated] (HBASE-9969) Improve KeyValueHeap using loser tree

2013-11-14 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-9969:


Assignee: Chao Shi

> Improve KeyValueHeap using loser tree
> -
>
> Key: HBASE-9969
> URL: https://issues.apache.org/jira/browse/HBASE-9969
> Project: HBase
>  Issue Type: Improvement
>Reporter: Chao Shi
>Assignee: Chao Shi
> Attachments: hbase-9969.patch, hbase-9969.patch, 
> kvheap-benchmark.png, kvheap-benchmark.txt
>
>
> LoserTree is a better data structure than a binary heap. It saves half of the 
> comparisons on each next(), though the time complexity is still O(log N).
> Currently a scan or get will go through two KeyValueHeaps: one merges KVs 
> read from multiple HFiles in a single store, the other merges results 
> from multiple stores. This patch should improve both cases whenever CPU 
> is the bottleneck (e.g. scan with filter over cached blocks, HBASE-9811).
> All of the optimization work is done in KeyValueHeap and does not change its 
> public interfaces. The new code is cleaner and simpler to understand.





[jira] [Commented] (HBASE-9958) Remove some array copy, change lock scope in locateRegion

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822882#comment-13822882
 ] 

Hudson commented on HBASE-9958:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #837 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/837/])
HBASE-9958 Remove some array copy, change lock scope in locateRegion (nkeywal: 
rev 1541688)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionInfo.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MetaScanner.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/TableName.java


> Remove some array copy, change lock scope in locateRegion
> -
>
> Key: HBASE-9958
> URL: https://issues.apache.org/jira/browse/HBASE-9958
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9958.v1.patch, 9958.v2.patch, 9958.v2.patch
>
>






[jira] [Commented] (HBASE-9849) [REST] Forbidden schema delete in read only mode

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822883#comment-13822883
 ] 

Hudson commented on HBASE-9849:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #837 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/837/])
HBASE-9849 [REST] Forbidden schema delete in read only mode (Julian Zhou) 
(larsh: rev 1541644)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/SchemaResource.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/rest/TestSchemaResource.java


> [REST] Forbidden schema delete in read only mode
> 
>
> Key: HBASE-9849
> URL: https://issues.apache.org/jira/browse/HBASE-9849
> Project: HBase
>  Issue Type: Bug
>  Components: REST
>Affects Versions: 0.98.0, 0.94.14
>Reporter: Julian Zhou
>Assignee: Julian Zhou
> Fix For: 0.98.0, 0.96.1, 0.94.14
>
> Attachments: 9849-0.94-v0.patch, 9849-trunk-v0.patch
>
>
> If "hbase.rest.readonly" was set, all write operations should be forbidden 
> via REST, right? So table schema deletion should also be forbidden in 
> readonly mode?





[jira] [Commented] (HBASE-9710) Use the region name, not the encoded name, when region is not on current server

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822886#comment-13822886
 ] 

Hudson commented on HBASE-9710:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #837 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/837/])
HBASE-9710 Use the region name, not the encoded name, when region is not on 
current server (stack: rev 1541820)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java


> Use the region name, not the encoded name, when region is not on current 
> server
> ---
>
> Key: HBASE-9710
> URL: https://issues.apache.org/jira/browse/HBASE-9710
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.95.2, 0.96.0
>Reporter: Benoit Sigoure
>Assignee: Benoit Sigoure
>Priority: Minor
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 
> 0001-Log-the-region-name-instead-of-the-encoded-region-na.patch, 9710v2.txt, 
> 9710v2.txt, 9710v3.txt, 9710v4.txt
>
>
> When we throw a {{RegionOpeningException}} or a {{NotServingRegionException}} 
> we put the encoded region name in the exception, which isn't super useful.  I 
> propose putting the region name instead.





[jira] [Commented] (HBASE-9810) Global memstore size will be calculated wrongly if replaying recovered edits throws exception

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822890#comment-13822890
 ] 

Hudson commented on HBASE-9810:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #837 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/837/])
HBASE-9810 Global memstore size will be calculated wrongly if replaying 
recovered edits throws exception (zjushch: rev 1541783)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> Global memstore size will be calculated wrongly if replaying recovered edits 
> throws exception
> -
>
> Key: HBASE-9810
> URL: https://issues.apache.org/jira/browse/HBASE-9810
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0, 0.96.1
>Reporter: chunhui shen
>Assignee: chunhui shen
>Priority: Critical
> Fix For: 0.98.0, 0.96.1
>
> Attachments: hbase-9810-trunk.patch
>
>
> Recently we encountered such a case on 0.94:
> Flush is triggered frequently because:
> {noformat}DEBUG org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Flush 
> thread woke up because memory above low water=14.4g
> {noformat}
> But, the real global memstore size is about 1g.
> It seems the global memstore size has been calculated wrongly.
> Through the logs, I find the following root cause log:
> {noformat}
> ERROR org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed 
> open of 
> region=notifysub2_index,\x83\xDC^\xCD\xA3\x8A<\xE2\x8E\xE6\xAD!\xDC\xE8t\xED,1379148697072.46be7c2d71c555379278a7494df3015e., 
> starting to roll back the global memstore size.
> java.lang.NegativeArraySizeException
> at org.apache.hadoop.hbase.KeyValue.getFamily(KeyValue.java:1096)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:2933)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:2811)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:583)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:499)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3939)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3887)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:332)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> {noformat}
> Browsing the code of this part, there seems to be a critical bug in global 
> memstore size accounting when replaying recovered edits: 
> RegionServerAccounting#clearRegionReplayEditsSize is called for each edit 
> file, which means the rollback size is smaller than the actual size when 
> RegionServerAccounting#rollbackRegionReplayEditsSize is called.
> Anyway, the solution is as easy as the patch shows.
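The accounting problem described above can be modeled with a toy counter; the numbers and names are invented, and this is only a sketch of the failure mode, not the HRegion code. Clearing the per-replay counter on each edit file means a failure after several files rolls back only the last file's bytes.

```java
// Toy model of the rollback-size bug: resetting the replay-edits counter per
// edit file forgets earlier files, so the rollback on a failed region open
// leaves phantom bytes in the global memstore size.
public class ReplayAccountingDemo {
    static long leftoverAfterFailedOpen(long[] editFileSizes) {
        long global = 0;
        long replayEditsSize = 0;
        for (long size : editFileSizes) {
            global += size;
            replayEditsSize = size; // buggy: clear + set per file
        }
        global -= replayEditsSize;  // rollback undoes only the last file
        return global;              // should be 0; anything else is phantom size
    }

    public static void main(String[] args) {
        System.out.println(leftoverAfterFailedOpen(new long[] {100, 200, 300}));
    }
}
```

The fix implied by the description is to accumulate (not reset) the replay counter, so the rollback subtracts everything that was added.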





[jira] [Commented] (HBASE-9960) Fix javadoc for CellUtil#tagsIterator()

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822891#comment-13822891
 ] 

Hudson commented on HBASE-9960:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #837 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/837/])
HBASE-9960. Fix javadoc for CellUtil#tagsIterator() (apurtell: rev 1541771)
* /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java


> Fix javadoc for CellUtil#tagsIterator()
> ---
>
> Key: HBASE-9960
> URL: https://issues.apache.org/jira/browse/HBASE-9960
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Trivial
> Fix For: 0.98.0
>
> Attachments: 9960.txt
>
>
> The @return tag has no arguments.
> {code}
>* @return
>*/
>   public static Iterator<Tag> tagsIterator(final byte[] tags, final int 
> offset, final short length) {
> {code}





[jira] [Commented] (HBASE-9907) Rig to fake a cluster so can profile client behaviors

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822885#comment-13822885
 ] 

Hudson commented on HBASE-9907:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #837 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/837/])
HBASE-9907 Rig to fake a cluster so can profile client behaviors (stack: rev 
1541703)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnection.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTableInterface.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MultiServerCallable.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java
* 
/hbase/trunk/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestClientNoCluster.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestIPC.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/protobuf/TestProtobufUtil.java


> Rig to fake a cluster so can profile client behaviors
> -
>
> Key: HBASE-9907
> URL: https://issues.apache.org/jira/browse/HBASE-9907
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 0.96.0
>Reporter: stack
>Assignee: stack
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9907.txt, 9907.txt, 9907v2.txt, 9907v3.txt, 9907v3.txt, 
> 9907v3.txt.1, 9907v4.096.txt, 9907v4.txt, 9907v4.txt
>
>
> Patch carried over from HBASE-9775 parent issue.  Adds to the 
> TestClientNoCluster#main a rig that allows faking many clients against a few 
> servers and the opposite.  Useful for studying client operation.
> Includes a few changes to pb makings to try and save on a few creations.
> Also has an edit of javadoc on how to create an HConnection and HTable, trying 
> to be more forceful about pointing you in the right direction ([~lhofhansl] -- 
> mind reviewing these javadoc changes?)
> I have a +1 already on this patch up in parent issue.  Will run by hadoopqa 
> to make sure all good before commit.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9757) Reenable fast region move in SlowDeterministicMonkey

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822889#comment-13822889
 ] 

Hudson commented on HBASE-9757:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #837 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/837/])
HBASE-9757 Reenable fast region move in SlowDeterministicMonkey (jxiang: rev 
1541811)
* 
/hbase/trunk/hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/actions/ChangeEncodingAction.java
* 
/hbase/trunk/hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/factories/SlowDeterministicMonkeyFactory.java
* 
/hbase/trunk/hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestBigLinkedList.java


> Reenable fast region move in SlowDeterministicMonkey
> 
>
> Key: HBASE-9757
> URL: https://issues.apache.org/jira/browse/HBASE-9757
> Project: HBase
>  Issue Type: Task
>  Components: test
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 0.96-9757.patch, trunk-9757.patch, trunk-9757_v2.patch
>
>
> HBASE-9338 slows down the region move CM a little so that ITBLL is green for 
> 0.96.0 RC. We should revert the change and make sure the test is still green.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9870) HFileDataBlockEncoderImpl#diskToCacheFormat uses wrong format

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822888#comment-13822888
 ] 

Hudson commented on HBASE-9870:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #837 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/837/])
HBASE-9870 HFileDataBlockEncoderImpl#diskToCacheFormat uses wrong format 
(jxiang: rev 1541629)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoding.java
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContext.java
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseRegionObserver.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/HalfStoreFileReader.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileReader.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileWriter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheKey.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoder.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoderImpl.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV3.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV3.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/NoOpDataBlockEncoder.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileInfo.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/SimpleRegionObserver.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHalfStoreFileReader.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestChangingEncoding.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestEncodedSeekers.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestLoadAndSwitchEncodeOnDisk.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockCompatibility.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileDataBlockEncoder.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/DataBlockEncodingTool.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/EncodedSeekPerformanceTest.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/HFileReadWriteTest.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/MockStoreFile.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCacheOnWriteInSchema.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompoundBloomFilter.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestFSErrorsExposed.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStore.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFile.java
* 

[jira] [Commented] (HBASE-9963) Remove the ReentrantReadWriteLock in the MemStore

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822887#comment-13822887
 ] 

Hudson commented on HBASE-9963:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #837 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/837/])
HBASE-9963 Remove the ReentrantReadWriteLock in the MemStore (nkeywal: rev 
1541880)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTableInterface.java
* /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStore.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHeapSize.java


> Remove the ReentrantReadWriteLock in the MemStore
> -
>
> Key: HBASE-9963
> URL: https://issues.apache.org/jira/browse/HBASE-9963
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Minor
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9963.96.v3.patch, 9963.v1.patch, 9963.v2.patch, 
> 9963.v3.patch
>
>
> If I'm not wrong, the MemStore is always used from the HStore. The code in 
> HStore puts a lock before calling MemStore. So the lock in Memstore is 
> useless. 
> For example, in HStore
> {code}
>   @Override
>   public long upsert(Iterable<Cell> cells, long readpoint) throws IOException 
> {
> this.lock.readLock().lock();
> try {
>   return this.memstore.upsert(cells, readpoint);
> } finally {
>   this.lock.readLock().unlock();
> }
>   }
> {code}
> With this in MemStore
> {code}
>   public long upsert(Iterable<Cell> cells, long readpoint) {
>     this.lock.readLock().lock(); // <== Am I useful?
> try {
>   long size = 0;
>   for (Cell cell : cells) {
> size += upsert(cell, readpoint);
>   }
>   return size;
> } finally {
>   this.lock.readLock().unlock();
> }
>   }
> {code}
> I've checked: all the locks in MemStore are backed by a lock in HStore, the 
> only exception being
> {code}
>   void snapshot() {
> this.memstore.snapshot();
>   }
> {code}
> And I would say it's a bug. If it's confirmed ([~lhofhansl], what do you 
> think?), I will add a lock there and remove all of them in MemStore. They do 
> appear in the profiling.
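The layering the issue proposes can be sketched as follows. This is an illustration with made-up class names (not the real HBase `HStore`/`MemStore` code): the outer store owns the only lock, the inner memstore does no locking of its own, and the one formerly unlocked path, snapshot(), now takes the lock in the outer store. A read lock is shown for symmetry with upsert(); the real choice of read vs. write lock would follow the surrounding HStore code.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Inner structure: no lock of its own; callers must hold the store lock.
final class InnerMemStore {
    private long size;
    long upsert(long delta) { return size += delta; }  // caller holds the store lock
    long snapshot() { return size; }                   // ditto
}

// Outer store: the single place where the ReentrantReadWriteLock is taken.
final class OuterStore {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final InnerMemStore memstore = new InnerMemStore();

    long upsert(long delta) {
        lock.readLock().lock();
        try { return memstore.upsert(delta); } finally { lock.readLock().unlock(); }
    }

    long snapshot() {
        // previously reached the memstore with no lock at all;
        // the proposed fix is to take the lock here instead
        lock.readLock().lock();
        try { return memstore.snapshot(); } finally { lock.readLock().unlock(); }
    }
}

class StoreLockDemo {
    public static void main(String[] args) {
        OuterStore store = new OuterStore();
        store.upsert(3);
        store.upsert(4);
        System.out.println(store.snapshot()); // prints: 7
    }
}
```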



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9962) Improve tag iteration

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822884#comment-13822884
 ] 

Hudson commented on HBASE-9962:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #837 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/837/])
HBASE-9962. Improve tag iteration (apurtell: rev 1541772)
* /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
* /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/Tag.java


> Improve tag iteration
> -
>
> Key: HBASE-9962
> URL: https://issues.apache.org/jira/browse/HBASE-9962
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 0.98.0
>
> Attachments: 9962.patch
>
>
> The tag iterator was moved out of KeyValue into CellUtil and marked as for 
> tests only. HBASE-7662 and HBASE-7663 will use it. The 'length' parameter was 
> made into a short, which is inconvenient for most callers. The methods on Tag 
> for getting tag data offset and length in the tag buffer were made default 
> scope so it's impossible outside of the package to find the tag data in the 
> backing buffer without calling Tag#asList, which might do some unwanted 
> object allocations. Tag#asList also inconveniently uses short for 'length'.
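For illustration, an int-based tags iterator over a packed buffer might look like the sketch below. The wire layout assumed here -- a 2-byte big-endian length (counting a 1-byte type plus the tag data) followed by the type byte and the data -- is an assumption for the example, not the exact HBase Tag encoding, and the `TagView`/`Tags` names are made up. It uses int offset/length throughout, per the issue's complaint that the short-typed 'length' parameter is inconvenient for callers.

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.NoSuchElementException;

// Minimal view of one decoded tag (hypothetical helper, not an HBase class).
final class TagView {
    final byte type;
    final byte[] data;
    TagView(byte type, byte[] data) { this.type = type; this.data = data; }
}

final class Tags {
    // Iterate tags packed as <2-byte length><1-byte type><data>, where the
    // length counts the type byte plus the data (assumed layout).
    static Iterator<TagView> tagsIterator(final byte[] buf, final int offset, final int length) {
        return new Iterator<TagView>() {
            int pos = offset;
            public boolean hasNext() { return pos < offset + length; }
            public TagView next() {
                if (!hasNext()) throw new NoSuchElementException();
                int tagLen = ((buf[pos] & 0xff) << 8) | (buf[pos + 1] & 0xff);
                byte type = buf[pos + 2];
                byte[] data = Arrays.copyOfRange(buf, pos + 3, pos + 2 + tagLen);
                pos += 2 + tagLen;  // skip length prefix + tag body
                return new TagView(type, data);
            }
        };
    }

    public static void main(String[] args) {
        // two tags: (type 1, data {7,8}) and (type 2, data {9})
        byte[] buf = {0, 3, 1, 7, 8, 0, 2, 2, 9};
        Iterator<TagView> it = tagsIterator(buf, 0, buf.length);
        while (it.hasNext()) {
            TagView t = it.next();
            System.out.println("type=" + t.type + " len=" + t.data.length);
        }
        // prints: type=1 len=2
        //         type=2 len=1
    }
}
```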



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9969) Improve KeyValueHeap using loser tree

2013-11-14 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-9969:


Component/s: regionserver

> Improve KeyValueHeap using loser tree
> -
>
> Key: HBASE-9969
> URL: https://issues.apache.org/jira/browse/HBASE-9969
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Chao Shi
>Assignee: Chao Shi
> Attachments: hbase-9969.patch, hbase-9969.patch, 
> kvheap-benchmark.png, kvheap-benchmark.txt
>
>
> LoserTree is a better data structure than a binary heap. It saves half of the 
> comparisons on each next(), though the time complexity is still O(logN).
> Currently a scan or get goes through two KeyValueHeaps: one merges KVs 
> read from multiple HFiles in a single store, the other merges results 
> from multiple stores. This patch should improve both cases whenever CPU 
> is the bottleneck (e.g. scan with filter over cached blocks, HBASE-9811).
> All of the optimization work is done in KeyValueHeap and does not change its 
> public interfaces. The new code looks cleaner and is simpler to understand.
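A minimal loser-tree k-way merge, to show where the comparison savings come from: after the initial build, each next() replays a single leaf-to-root path of about log2(k) matches (one stored loser per level), where a binary heap's sift-down compares two children per level. This is a sketch of the idea over sorted int runs, not the HBase KeyValueHeap code.

```java
import java.util.Arrays;

final class LoserTree {
    private final int[][] runs;  // sorted input runs
    private final int[] pos;     // next unread index in each run
    private final int[] tree;    // tree[i] = run id of the loser stored at internal node i
    private final int k;
    private int winner;          // run id of the current overall winner

    LoserTree(int[][] runs) {
        this.runs = runs;
        this.k = runs.length;
        this.pos = new int[k];
        this.tree = new int[k];
        Arrays.fill(tree, -1);   // -1 = empty slot; it wins every build match
        for (int r = 0; r < k; r++) adjust(r);
    }

    // Key of run r; empty slots always win, exhausted runs always lose.
    private long key(int r) {
        if (r < 0) return Long.MIN_VALUE;
        return pos[r] < runs[r].length ? runs[r][pos[r]] : Long.MAX_VALUE;
    }

    // Replay matches from run r's leaf up to the root, keeping losers in the tree.
    private void adjust(int r) {
        for (int t = (r + k) / 2; t > 0; t /= 2) {
            if (key(tree[t]) < key(r)) {       // r loses: it stays here, old loser moves up
                int tmp = r; r = tree[t]; tree[t] = tmp;
            }
        }
        winner = r;
    }

    // Next merged value, or null when every run is exhausted.
    Integer next() {
        if (key(winner) == Long.MAX_VALUE) return null;
        int v = runs[winner][pos[winner]++];
        adjust(winner);
        return v;
    }
}

class LoserTreeDemo {
    public static void main(String[] args) {
        LoserTree lt = new LoserTree(new int[][] {{1, 4, 7}, {2, 5, 8}, {3, 6, 9}});
        for (Integer v = lt.next(); v != null; v = lt.next()) System.out.print(v + " ");
        // prints: 1 2 3 4 5 6 7 8 9
    }
}
```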



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9459) Backport 8534 "Fix coverage for org.apache.hadoop.hbase.mapreduce" to 0.94

2013-11-14 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822896#comment-13822896
 ] 

Nick Dimiduk commented on HBASE-9459:
-

We have HBASE-9484 for backporting to 0.96. Sorry, I haven't gotten to it 
either.

> Backport 8534 "Fix coverage for org.apache.hadoop.hbase.mapreduce" to 0.94
> --
>
> Key: HBASE-9459
> URL: https://issues.apache.org/jira/browse/HBASE-9459
> Project: HBase
>  Issue Type: Test
>  Components: mapreduce, test
>Reporter: Nick Dimiduk
>Assignee: Ivan A. Veselovsky
> Fix For: 0.94.15
>
> Attachments: HBASE-9459-0.94--n3.patch
>
>
> Do you want this test update backported? See HBASE-8534 for a 0.94 patch.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9907) Rig to fake a cluster so can profile client behaviors

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822909#comment-13822909
 ] 

Hudson commented on HBASE-9907:
---

SUCCESS: Integrated in hbase-0.96-hadoop2 #119 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/119/])
HBASE-9907 Rig to fake a cluster so can profile client behaviors (stack: rev 
1541708)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnection.java
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTableInterface.java
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MultiServerCallable.java
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java
* 
/hbase/branches/0.96/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestClientNoCluster.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestIPC.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/protobuf/TestProtobufUtil.java


> Rig to fake a cluster so can profile client behaviors
> -
>
> Key: HBASE-9907
> URL: https://issues.apache.org/jira/browse/HBASE-9907
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 0.96.0
>Reporter: stack
>Assignee: stack
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9907.txt, 9907.txt, 9907v2.txt, 9907v3.txt, 9907v3.txt, 
> 9907v3.txt.1, 9907v4.096.txt, 9907v4.txt, 9907v4.txt
>
>
> Patch carried over from HBASE-9775 parent issue.  Adds to the 
> TestClientNoCluster#main a rig that allows faking many clients against a few 
> servers and the opposite.  Useful for studying client operation.
> Includes a few changes to pb makings to try and save on a few creations.
> Also has an edit of javadoc on how to create an HConnection and HTable, trying 
> to be more forceful about pointing you in the right direction ([~lhofhansl] -- 
> mind reviewing these javadoc changes?)
> I have a +1 already on this patch up in parent issue.  Will run by hadoopqa 
> to make sure all good before commit.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9112) Custom TableInputFormat in initTableMapperJob throws ClassNoFoundException on TableMapper

2013-11-14 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-9112:


Fix Version/s: 0.96.1
   0.98.0

> Custom TableInputFormat in initTableMapperJob throws ClassNoFoundException on 
> TableMapper
> -
>
> Key: HBASE-9112
> URL: https://issues.apache.org/jira/browse/HBASE-9112
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop2, mapreduce
>Affects Versions: 0.94.6.1
> Environment: CDH-4.3.0-1.cdh4.3.0.p0.22
>Reporter: Debanjan Bhattacharyya
>Assignee: Nick Dimiduk
> Fix For: 0.98.0, 0.96.1
>
>
> When using custom TableInputFormat in TableMapReduceUtil.initTableMapperJob 
> in the following way
> {code}
> TableMapReduceUtil.initTableMapperJob("mytable",
>   MyScan,
>   MyMapper.class,
>   MyKey.class,
>   MyValue.class,
>   myJob, true,
>   MyTableInputFormat.class);
> {code}
> I get error: java.lang.ClassNotFoundException: 
> org.apache.hadoop.hbase.mapreduce.TableMapper
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
>   at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
> If I do not use the last two parameters, there is no error.
> What is going wrong here?
> Thanks
> Regards



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9870) HFileDataBlockEncoderImpl#diskToCacheFormat uses wrong format

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822912#comment-13822912
 ] 

Hudson commented on HBASE-9870:
---

SUCCESS: Integrated in hbase-0.96-hadoop2 #119 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/119/])
HBASE-9870 HFileDataBlockEncoderImpl#diskToCacheFormat uses wrong format 
(jxiang: rev 1541627)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
* 
/hbase/branches/0.96/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoding.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/io/HalfStoreFileReader.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileReader.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheKey.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoder.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoderImpl.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/NoOpDataBlockEncoder.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileInfo.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHalfStoreFileReader.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestChangingEncoding.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestEncodedSeekers.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestLoadAndSwitchEncodeOnDisk.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockCompatibility.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileDataBlockEncoder.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/DataBlockEncodingTool.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/EncodedSeekPerformanceTest.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/HFileReadWriteTest.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCacheOnWriteInSchema.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompoundBloomFilter.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDefaultCompactSelection.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestFSErrorsExposed.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStore.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFile.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/util/LoadTestTool.java
* /hbase/branches/0.96/hbase-shell/src/main/ruby/hbase/admin.rb
* /hbase/branches/0.96/src/main/docbkx/shell.xml


> HFileDataBlockEncoderImpl#diskToCacheFormat uses wrong format
> -
>
> Key: HBASE-9870
> URL: https://issues.apache.org/jira/browse/HBASE-9870
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.98.0, 0.96.1
>
> Attachments: trunk-9870.patch, trunk-9870_v2.patch, 
> trunk-9870_v3.patch
>
>
> In this method, we h

[jira] [Commented] (HBASE-9958) Remove some array copy, change lock scope in locateRegion

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822907#comment-13822907
 ] 

Hudson commented on HBASE-9958:
---

SUCCESS: Integrated in hbase-0.96-hadoop2 #119 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/119/])
HBASE-9958 Remove some array copy, change lock scope in locateRegion (nkeywal: 
rev 1541691)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionInfo.java
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MetaScanner.java
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* 
/hbase/branches/0.96/hbase-common/src/main/java/org/apache/hadoop/hbase/TableName.java


> Remove some array copy, change lock scope in locateRegion
> -
>
> Key: HBASE-9958
> URL: https://issues.apache.org/jira/browse/HBASE-9958
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9958.v1.patch, 9958.v2.patch, 9958.v2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9963) Remove the ReentrantReadWriteLock in the MemStore

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822911#comment-13822911
 ] 

Hudson commented on HBASE-9963:
---

SUCCESS: Integrated in hbase-0.96-hadoop2 #119 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/119/])
HBASE-9963 Remove the ReentrantReadWriteLock in the MemStore (nkeywal: rev 
1541882)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTableInterface.java
* 
/hbase/branches/0.96/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStore.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHeapSize.java


> Remove the ReentrantReadWriteLock in the MemStore
> -
>
> Key: HBASE-9963
> URL: https://issues.apache.org/jira/browse/HBASE-9963
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Minor
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9963.96.v3.patch, 9963.v1.patch, 9963.v2.patch, 
> 9963.v3.patch
>
>
> If I'm not wrong, the MemStore is always used from the HStore. The code in 
> HStore puts a lock before calling MemStore. So the lock in Memstore is 
> useless. 
> For example, in HStore
> {code}
>   @Override
>   public long upsert(Iterable<Cell> cells, long readpoint) throws IOException 
> {
> this.lock.readLock().lock();
> try {
>   return this.memstore.upsert(cells, readpoint);
> } finally {
>   this.lock.readLock().unlock();
> }
>   }
> {code}
> With this in MemStore
> {code}
>   public long upsert(Iterable<Cell> cells, long readpoint) {
>     this.lock.readLock().lock(); // <== Am I useful?
> try {
>   long size = 0;
>   for (Cell cell : cells) {
> size += upsert(cell, readpoint);
>   }
>   return size;
> } finally {
>   this.lock.readLock().unlock();
> }
>   }
> {code}
> I've checked: all the locks in MemStore are backed by a lock in HStore, the 
> only exception being
> {code}
>   void snapshot() {
> this.memstore.snapshot();
>   }
> {code}
> And I would say it's a bug. If it's confirmed ([~lhofhansl], what do you 
> think?), I will add a lock there and remove all of them in MemStore. They do 
> appear in the profiling.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9710) Use the region name, not the encoded name, when region is not on current server

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822910#comment-13822910
 ] 

Hudson commented on HBASE-9710:
---

SUCCESS: Integrated in hbase-0.96-hadoop2 #119 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/119/])
HBASE-9710 Use the region name, not the encoded name, when region is not on 
current server (stack: rev 1541821)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java


> Use the region name, not the encoded name, when region is not on current 
> server
> ---
>
> Key: HBASE-9710
> URL: https://issues.apache.org/jira/browse/HBASE-9710
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.95.2, 0.96.0
>Reporter: Benoit Sigoure
>Assignee: Benoit Sigoure
>Priority: Minor
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 
> 0001-Log-the-region-name-instead-of-the-encoded-region-na.patch, 9710v2.txt, 
> 9710v2.txt, 9710v3.txt, 9710v4.txt
>
>
> When we throw a {{RegionOpeningException}} or a {{NotServingRegionException}} 
> we put the encoded region name in the exception, which isn't super useful.  I 
> propose putting the region name instead.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9810) Global memstore size will be calculated wrongly if replaying recovered edits throws exception

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822915#comment-13822915
 ] 

Hudson commented on HBASE-9810:
---

SUCCESS: Integrated in hbase-0.96-hadoop2 #119 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/119/])
HBASE-9810 Global memstore size will be calculated wrongly if replaying 
recovered edits throws exception (zjushch: rev 1541784)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> Global memstore size will be calculated wrongly if replaying recovered edits 
> throws exception
> -
>
> Key: HBASE-9810
> URL: https://issues.apache.org/jira/browse/HBASE-9810
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0, 0.96.1
>Reporter: chunhui shen
>Assignee: chunhui shen
>Priority: Critical
> Fix For: 0.98.0, 0.96.1
>
> Attachments: hbase-9810-trunk.patch
>
>
> Recently we encountered such a case in 0.94-version:
> Flush is triggered frequently because:
> {noformat}DEBUG org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Flush 
> thread woke up because memory above low water=14.4g
> {noformat}
> But, the real global memstore size is about 1g.
> It seems the global memstore size has been calculated wrongly.
> Through the logs, I find the following root cause log:
> {noformat}
> ERROR org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed 
> open of region=notifysub2_index,\x83\xDC^\xCD\xA3\x8A<\x
> E2\x8E\xE6\xAD!\xDC\xE8t\xED,1379148697072.46be7c2d71c555379278a7494df3015e., 
> starting to roll back the global memstore size.
> java.lang.NegativeArraySizeException
> at org.apache.hadoop.hbase.KeyValue.getFamily(KeyValue.java:1096)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:2933)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:2811)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:583)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:499)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3939)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3887)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:332)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> {noformat}
> Browsing the code for this part, there seems to be a critical bug in the 
> global memstore size accounting when replaying recovered edits: 
> RegionServerAccounting#clearRegionReplayEditsSize is called for each edits 
> file, which means the rolled-back size is smaller than the actual size when 
> RegionServerAccounting#rollbackRegionReplayEditsSize is later called.
> Anyway, the fix is straightforward, as in the attached patch.
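The accounting flaw described above can be illustrated with a minimal sketch. This is not the real HBase code: the method names mirror the RegionServerAccounting methods mentioned in the report, but the fields and bodies are assumptions made purely to show why clearing the per-file counter between edits files leaves the global counter under-rolled-back on a failed region open.

```java
// Minimal illustration (not the actual HBase classes) of the accounting bug:
// clearRegionReplayEditsSize is invoked once per recovered-edits file, so a
// later rollback only sees the last file's size, not the running total.
public class ReplayAccountingSketch {
    static long globalMemstoreSize = 0;    // cluster-wide accounting (assumed field)
    static long regionReplayEditsSize = 0; // per-region replay accounting (assumed field)

    static void addRegionReplayEditsSize(long delta) {
        globalMemstoreSize += delta;
        regionReplayEditsSize += delta;
    }

    // Called after each edits file in the buggy flow, resetting the running total.
    static void clearRegionReplayEditsSize() {
        regionReplayEditsSize = 0;
    }

    // On a failed open, roll back whatever replay accounting remains.
    static void rollbackRegionReplayEditsSize() {
        globalMemstoreSize -= regionReplayEditsSize;
        regionReplayEditsSize = 0;
    }

    public static void main(String[] args) {
        // Replay two edits files of 100 bytes each, clearing between files.
        addRegionReplayEditsSize(100);
        clearRegionReplayEditsSize();   // file 1 done: running total is lost here
        addRegionReplayEditsSize(100);
        // Region open fails; rollback subtracts only the second file's size.
        rollbackRegionReplayEditsSize();
        // 100 bytes are stuck in the global counter instead of 0.
        System.out.println(globalMemstoreSize); // prints 100
    }
}
```

With many regions and edits files, this leftover accumulates, which would explain a reported global memstore size (14.4g) far above the real one (~1g).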



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9849) [REST] Forbidden schema delete in read only mode

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822908#comment-13822908
 ] 

Hudson commented on HBASE-9849:
---

SUCCESS: Integrated in hbase-0.96-hadoop2 #119 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/119/])
HBASE-9849 [REST] Forbidden schema delete in read only mode (Julian Zhou) 
(larsh: rev 1541643)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/SchemaResource.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/rest/TestSchemaResource.java


> [REST] Forbidden schema delete in read only mode
> 
>
> Key: HBASE-9849
> URL: https://issues.apache.org/jira/browse/HBASE-9849
> Project: HBase
>  Issue Type: Bug
>  Components: REST
>Affects Versions: 0.98.0, 0.94.14
>Reporter: Julian Zhou
>Assignee: Julian Zhou
> Fix For: 0.98.0, 0.96.1, 0.94.14
>
> Attachments: 9849-0.94-v0.patch, 9849-trunk-v0.patch
>
>
> If "hbase.rest.readonly" is set, all write operations via REST should be 
> forbidden, right? So table schema deletion should also be forbidden in 
> read-only mode?
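The guard the fix implies can be sketched as below. The names here are illustrative only, not the actual SchemaResource code: the point is simply that a schema DELETE must be answered with 403 Forbidden, like any other write, when the read-only flag is on.

```java
// Hedged sketch of a read-only guard for a REST schema-delete handler.
// handleSchemaDelete and its parameter are assumed names for illustration.
public class ReadOnlyGuardSketch {
    static final int SC_FORBIDDEN = 403;
    static final int SC_OK = 200;

    static int handleSchemaDelete(boolean readOnly) {
        if (readOnly) {
            return SC_FORBIDDEN; // reject the write in read-only mode
        }
        // ... perform the actual table schema deletion here ...
        return SC_OK;
    }

    public static void main(String[] args) {
        System.out.println(handleSchemaDelete(true));  // 403
        System.out.println(handleSchemaDelete(false)); // 200
    }
}
```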





[jira] [Commented] (HBASE-4654) [replication] Add a check to make sure we don't replicate to ourselves

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822914#comment-13822914
 ] 

Hudson commented on HBASE-4654:
---

SUCCESS: Integrated in hbase-0.96-hadoop2 #119 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/119/])
HBASE-4654 [replication] Add a check to make sure we don't replicate to 
ourselves (Demai Ni) (larsh: rev 1541805)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java


> [replication] Add a check to make sure we don't replicate to ourselves
> --
>
> Key: HBASE-4654
> URL: https://issues.apache.org/jira/browse/HBASE-4654
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.90.4
>Reporter: Jean-Daniel Cryans
>Assignee: Demai Ni
> Fix For: 0.98.0, 0.96.1, 0.94.14
>
> Attachments: 4654-trunk.txt, HBASE-4654-0.94-v0.patch, 
> HBASE-4654-0.96-v0.patch, HBASE-4654-trunk-v0.patch, 
> HBASE-4654-trunk-v0.patch, HBASE-4654-trunk-v0.patch
>
>
> It's currently possible to add a replication peer that points back at the 
> local cluster. I believe this could very well happen for those like us who 
> use only one ZK ensemble per DC, so that only the root znode changes when 
> setting up replication intra-DC.
> I don't think comparing just the cluster ID would be enough, because you 
> would normally use a different one for another cluster, and nothing blocks 
> you from pointing elsewhere.
> Comparing the ZK ensemble address doesn't work either when multiple DNS 
> entries point at the same place.
> I think this could be resolved by looking up the master address in the 
> relevant znode, as it should be exactly the same in the case where the peer 
> is the same cluster.
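The comparison suggested above can be sketched as follows. This is a hedged illustration, not the patch's ReplicationSource code: `isSelfPeer` and its parameters are assumed names, and the master addresses would in practice be read from the local and peer master znodes.

```java
// Hedged sketch: treat a peer as "ourselves" when the master address read
// from the peer's znode equals our own master address. This sidesteps both
// cluster-ID mismatches and DNS aliases for the same ZK ensemble.
public class SelfReplicationCheck {
    static boolean isSelfPeer(String localMasterAddress, String peerMasterAddress) {
        // Same master address => same cluster, however the peer's ZK ensemble
        // was spelled (DNS aliases, different root znodes, etc.).
        return localMasterAddress != null
            && localMasterAddress.equals(peerMasterAddress);
    }

    public static void main(String[] args) {
        System.out.println(isSelfPeer("master1.example.com:60000",
                                      "master1.example.com:60000")); // true
        System.out.println(isSelfPeer("master1.example.com:60000",
                                      "master2.example.com:60000")); // false
    }
}
```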





[jira] [Commented] (HBASE-9757) Reenable fast region move in SlowDeterministicMonkey

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822913#comment-13822913
 ] 

Hudson commented on HBASE-9757:
---

SUCCESS: Integrated in hbase-0.96-hadoop2 #119 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/119/])
HBASE-9757 Reenable fast region move in SlowDeterministicMonkey (jxiang: rev 
1541812)
* /hbase/branches/0.96/hbase-common/src/main/resources/hbase-default.xml
* 
/hbase/branches/0.96/hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/actions/ChangeEncodingAction.java
* 
/hbase/branches/0.96/hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/factories/SlowDeterministicMonkeyFactory.java
* 
/hbase/branches/0.96/hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestBigLinkedList.java


> Reenable fast region move in SlowDeterministicMonkey
> 
>
> Key: HBASE-9757
> URL: https://issues.apache.org/jira/browse/HBASE-9757
> Project: HBase
>  Issue Type: Task
>  Components: test
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 0.96-9757.patch, trunk-9757.patch, trunk-9757_v2.patch
>
>
> HBASE-9338 slows down the region move CM a little so that ITBLL is green for 
> 0.96.0 RC. We should revert the change and make sure the test is still green.





[jira] [Commented] (HBASE-9854) initial documentation for stripe compactions

2013-11-14 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822928#comment-13822928
 ] 

Sergey Shelukhin commented on HBASE-9854:
-

Hmm... which part of the doc should this go to?

> initial documentation for stripe compactions
> 
>
> Key: HBASE-9854
> URL: https://issues.apache.org/jira/browse/HBASE-9854
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>
> Initial documentation for stripe compactions (distill from attached docs, 
> make up to date, put somewhere like book)





[jira] [Commented] (HBASE-9854) initial documentation for stripe compactions

2013-11-14 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822936#comment-13822936
 ] 

Nick Dimiduk commented on HBASE-9854:
-

How about http://hbase.apache.org/book.html#compaction ?

> initial documentation for stripe compactions
> 
>
> Key: HBASE-9854
> URL: https://issues.apache.org/jira/browse/HBASE-9854
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>
> Initial documentation for stripe compactions (distill from attached docs, 
> make up to date, put somewhere like book)





[jira] [Commented] (HBASE-9969) Improve KeyValueHeap using loser tree

2013-11-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822944#comment-13822944
 ] 

stack commented on HBASE-9969:
--

[~stepinto] Very nice work.  Thank you for digging in on this thorny area.  
That is a nice provable improvement (numbers look great).  Thanks for putting 
up the graphic and the benchmarking tool.  I agree the code is now cleaner.



> Improve KeyValueHeap using loser tree
> -
>
> Key: HBASE-9969
> URL: https://issues.apache.org/jira/browse/HBASE-9969
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Chao Shi
>Assignee: Chao Shi
> Attachments: hbase-9969.patch, hbase-9969.patch, 
> kvheap-benchmark.png, kvheap-benchmark.txt
>
>
> A loser tree is a better data structure than a binary heap here: it saves 
> half of the comparisons on each next(), though the time complexity remains 
> O(logN).
> Currently, a scan or get goes through two KeyValueHeaps: one merges KVs read 
> from multiple HFiles in a single store, the other merges results from 
> multiple stores. This patch should improve both cases whenever CPU is the 
> bottleneck (e.g. scan with filter over cached blocks, HBASE-9811).
> All of the optimization work is done in KeyValueHeap and does not change its 
> public interfaces. The new code is also cleaner and simpler to understand.
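For readers unfamiliar with the structure, here is a self-contained loser-tree k-way merge over sorted int arrays. This is a textbook sketch, not the patch's KeyValueHeap code: internal nodes store the loser of each match and only the winner replays upward, so advancing a run costs a single comparison per tree level instead of the up-to-two a binary heap sift-down pays.

```java
import java.util.Arrays;

// Minimal loser-tree k-way merge: loser[0] holds the overall winner,
// loser[1..k-1] hold match losers; key[k] is a virtual -infinity run
// used only while building the tree.
public class LoserTreeMerge {
    public static int[] merge(int[][] runs) {
        int k = runs.length;
        int[] pos = new int[k];        // cursor into each run
        long[] key = new long[k + 1];  // current head of each run; MAX_VALUE = exhausted
        int[] loser = new int[k];
        int total = 0;
        for (int i = 0; i < k; i++) {
            total += runs[i].length;
            key[i] = runs[i].length > 0 ? runs[i][0] : Long.MAX_VALUE;
        }
        key[k] = Long.MIN_VALUE;       // virtual run that wins every build match
        Arrays.fill(loser, k);
        for (int i = k - 1; i >= 0; i--) adjust(loser, key, k, i);

        int[] out = new int[total];
        for (int n = 0; n < total; n++) {
            int w = loser[0];          // run holding the smallest head
            out[n] = (int) key[w];
            pos[w]++;
            key[w] = pos[w] < runs[w].length ? runs[w][pos[w]] : Long.MAX_VALUE;
            adjust(loser, key, k, w);  // one leaf-to-root pass: one compare per level
        }
        return out;
    }

    // Replay matches from leaf s to the root: the loser stays in the node,
    // the winner moves up to play the next match.
    private static void adjust(int[] loser, long[] key, int k, int s) {
        for (int t = (s + k) / 2; t > 0; t /= 2) {
            if (key[s] > key[loser[t]]) {         // s loses this match
                int tmp = s; s = loser[t]; loser[t] = tmp;
            }
        }
        loser[0] = s;
    }

    public static void main(String[] args) {
        int[][] runs = { {1, 4, 9}, {2, 3}, {5, 8}, {} };
        System.out.println(Arrays.toString(merge(runs)));
        // [1, 2, 3, 4, 5, 8, 9]
    }
}
```

A binary heap's sift-down compares both children at each level (up to 2 log k comparisons per next()); the loser tree's replay path does one comparison per level, which is where the claimed halving comes from.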




