[jira] [Updated] (HBASE-17957) Custom metrics of replicate endpoints don't prepend "source." to global metrics

2017-04-25 Thread Shinya Yoshida (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinya Yoshida updated HBASE-17957:
---
Attachment: HBASE-17957.master.001.patch

>  Custom metrics of replicate endpoints don't prepend "source." to global 
> metrics
> 
>
> Key: HBASE-17957
> URL: https://issues.apache.org/jira/browse/HBASE-17957
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.4.0, 0.98.22
>Reporter: Shinya Yoshida
>Priority: Minor
> Attachments: HBASE-17957.master.001.patch
>
>
> Custom metrics for custom replication endpoints were introduced by 
> [HBASE-16448].
> The name of a local custom metric follows the "source.id.metricsName" format, 
> but the name of a global custom metric doesn't follow the "source.metricsName" 
> format.
> Ex:
> {code}
> // default metrics
> "source.2.shippedOps" : 1234, // peer local
> "source.shippedOps" : 12345, // global
> // custom metrics
> "source.1.failed.start" : 1, // peer local
> "failed.start" : 1, // global
> {code}
> Since the default metrics follow this convention, the global custom metrics 
> should use the "source.metricsName" format as well, like:
> {code}
> "source.1.failed.start" : 1,
> "source.failed.start" : 1,
> {code}
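A minimal sketch of the naming rule being asked for (a hypothetical helper, not
the HBASE-16448 API itself): peer-local names carry the peer id, global names
only get the "source." prefix, matching the default replication source metrics.
{code}
public final class ReplicationMetricNames {
  private ReplicationMetricNames() {}

  /** Peer-local name, e.g. localName("1", "failed.start") -> "source.1.failed.start". */
  static String localName(String peerId, String metricName) {
    return "source." + peerId + "." + metricName;
  }

  /** Global name, e.g. globalName("failed.start") -> "source.failed.start". */
  static String globalName(String metricName) {
    return "source." + metricName;
  }
}
{code}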



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17957) Custom metrics of replicate endpoints don't prepend "source." to global metrics

2017-04-25 Thread Shinya Yoshida (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinya Yoshida updated HBASE-17957:
---
Status: Patch Available  (was: Open)

>  Custom metrics of replicate endpoints don't prepend "source." to global 
> metrics
> 
>
> Key: HBASE-17957
> URL: https://issues.apache.org/jira/browse/HBASE-17957
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 0.98.22, 2.0.0, 1.4.0
>Reporter: Shinya Yoshida
>Priority: Minor
> Attachments: HBASE-17957.master.001.patch
>
>
> Custom metrics for custom replication endpoints were introduced by 
> [HBASE-16448].
> The name of a local custom metric follows the "source.id.metricsName" format, 
> but the name of a global custom metric doesn't follow the "source.metricsName" 
> format.
> Ex:
> {code}
> // default metrics
> "source.2.shippedOps" : 1234, // peer local
> "source.shippedOps" : 12345, // global
> // custom metrics
> "source.1.failed.start" : 1, // peer local
> "failed.start" : 1, // global
> {code}
> Since the default metrics follow this convention, the global custom metrics 
> should use the "source.metricsName" format as well, like:
> {code}
> "source.1.failed.start" : 1,
> "source.failed.start" : 1,
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17263) Netty based rpc server impl

2017-04-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15982478#comment-15982478
 ] 

stack commented on HBASE-17263:
---

Agree on commit. What do we have to do to make this the default?

>   Netty based rpc server impl
> -
>
> Key: HBASE-17263
> URL: https://issues.apache.org/jira/browse/HBASE-17263
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance, rpc
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0
>
> Attachments: HBASE-17263.patch, HBASE-17263_v2.patch, 
> HBASE-17263_v3.patch, HBASE-17263_v4.patch, HBASE-17263_v5.patch, 
> HBASE-17263_v6.patch, HBASE-17263_v7.patch, HBASE-17263_v8.patch, 
> HBASE-17263_v8.patch
>
>
> An RPC server with a Netty4 implementation, which provides better performance.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-15583) Any HTableDescriptor we give out should be immutable

2017-04-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15982484#comment-15982484
 ] 

stack commented on HBASE-15583:
---

Skimmed. This is excellent. +1.

> Any HTableDescriptor we give out should be immutable
> 
>
> Key: HBASE-15583
> URL: https://issues.apache.org/jira/browse/HBASE-15583
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Gabor Liptak
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15583.v0.patch, HBASE-15583.v1.patch, 
> HBASE-15583.v2.patch, HBASE-15583.v3.patch, HBASE-15583.v4.patch, 
> HBASE-15583.v5.patch, HBASE-15583.v6.patch
>
>
> From [~enis] in https://issues.apache.org/jira/browse/HBASE-15505:
> PS Should UnmodifyableHTableDescriptor be renamed to 
> UnmodifiableHTableDescriptor?
> It should be named ImmutableHTableDescriptor to be consistent with 
> collections naming. Let's do this as a subtask of the parent jira, not here. 
> Thinking about it though, why would we return an Immutable HTD in 
> HTable.getTableDescriptor() versus a mutable HTD in 
> Admin.getTableDescriptor(). It does not make sense. Should we just get rid of 
> the Immutable ones?
> We also have UnmodifyableHRegionInfo which is not used at the moment it 
> seems. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17953) VerifyReplication should read all versions other than the latest version by default.

2017-04-25 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-17953:
-
Attachment: HBASE-17953.v1.patch

> VerifyReplication should read all versions other than the latest version by 
> default.
> 
>
> Key: HBASE-17953
> URL: https://issues.apache.org/jira/browse/HBASE-17953
> Project: HBase
>  Issue Type: Bug
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Attachments: HBASE-17953.v1.patch
>
>
> Two problems in VerifyReplication.java: 
> 1. With the default settings (the --versions option not set), we read only 
> the latest version rather than all versions, because Scan.maxVersions is 1. 
> Actually, I think we should compare all versions by default.
> 2. In the logFailRowAndIncreaseCounter() method, we should set the 
> family/TimeRange/maxVersions attributes on the Get. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-7404) Bucket Cache:A solution about CMS,Heap Fragment and Big Cache on HBASE

2017-04-25 Thread Hanjie Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15982530#comment-15982530
 ] 

Hanjie Gu commented on HBASE-7404:
--

I have a question:
Why does each size in BucketSizeInfo have an extra 1KB?
See the code below (5KB, ..., 17KB, ..., 65KB, ...):
```
  // Default block size is 64K, so we choose more sizes near 64K, you'd better
  // reset it according to your cluster's block size distribution
  // TODO Support the view of block size distribution statistics
  private static final int DEFAULT_BUCKET_SIZES[] = { 4 * 1024 + 1024, 8 * 1024 + 1024,
      16 * 1024 + 1024, 32 * 1024 + 1024, 40 * 1024 + 1024, 48 * 1024 + 1024,
      56 * 1024 + 1024, 64 * 1024 + 1024, 96 * 1024 + 1024, 128 * 1024 + 1024,
      192 * 1024 + 1024, 256 * 1024 + 1024, 384 * 1024 + 1024,
      512 * 1024 + 1024 };
```

> Bucket Cache:A solution about CMS,Heap Fragment and Big Cache on HBASE
> --
>
> Key: HBASE-7404
> URL: https://issues.apache.org/jira/browse/HBASE-7404
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 0.94.3
>Reporter: chunhui shen
>Assignee: chunhui shen
> Fix For: 0.95.0
>
> Attachments: 7404-0.94-fixed-lines.txt, 7404-trunk-v10.patch, 
> 7404-trunk-v11.patch, 7404-trunk-v12.patch, 7404-trunk-v13.patch, 
> 7404-trunk-v13.txt, 7404-trunk-v14.patch, BucketCache.pdf, 
> hbase-7404-94v2.patch, HBASE-7404-backport-0.94.patch, 
> hbase-7404-trunkv2.patch, hbase-7404-trunkv9.patch, Introduction of Bucket 
> Cache.pdf
>
>
> First, thanks to @neil from Fusion-IO for sharing the source code.
> Usage:
> 1. Use bucket cache as the main memory cache, configured as follows:
> – "hbase.bucketcache.ioengine" "heap" (or "offheap" if using off-heap memory 
> to cache blocks)
> – "hbase.bucketcache.size" 0.4 (size for the bucket cache; 0.4 is a percentage 
> of the max heap size)
> 2. Use bucket cache as a secondary cache, configured as follows:
> – "hbase.bucketcache.ioengine" "file:/disk1/hbase/cache.data" (the file path 
> where the block data is stored)
> – "hbase.bucketcache.size" 1024 (size for the bucket cache; the unit is MB, so 
> 1024 means 1GB)
> – "hbase.bucketcache.combinedcache.enabled" false (default value being true)
> See more configurations in org.apache.hadoop.hbase.io.hfile.CacheConfig and 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache
> What's Bucket Cache? 
> It can greatly decrease CMS pauses and heap fragmentation caused by GC.
> It supports a large cache space for high read performance by using high-speed 
> disks like Fusion-io.
> 1. An implementation of block cache like LruBlockCache
> 2. Self-manages blocks' storage positions through the Bucket Allocator
> 3. The cached blocks can be stored in memory or in the file system
> 4. Bucket Cache can be used as the main block cache (see CombinedBlockCache), 
> combined with LruBlockCache, to decrease CMS pauses and fragmentation caused 
> by GC
> 5. BucketCache can also be used as a secondary cache (e.g. using Fusion-io to 
> store blocks) to enlarge the cache space
> How about SlabCache?
> We studied and tested SlabCache first, but the results were bad, because:
> 1. SlabCache uses SingleSizeCache, so its memory utilization is low due to the 
> variety of block sizes, especially when using DataBlockEncoding
> 2. SlabCache is used in DoubleBlockCache: a block is cached both in SlabCache 
> and LruBlockCache, and is put into LruBlockCache again on a SlabCache hit, so 
> CMS and heap fragmentation don't get any better
> 3. Direct (off-heap) memory performance is not as good as heap and may cause 
> OOM, so we recommend using the "heap" engine 
> See more in the attachment and in the patch
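For illustration, the two setups above could be expressed through the standard
Hadoop/HBase Configuration API as follows (a sketch only; the property names are
the ones quoted in the description):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BucketCacheConfigExample {
  public static void main(String[] args) {
    // 1. Bucket cache as the main memory cache, sized to 40% of the max heap.
    Configuration mainCache = HBaseConfiguration.create();
    mainCache.set("hbase.bucketcache.ioengine", "heap");   // or "offheap"
    mainCache.setFloat("hbase.bucketcache.size", 0.4f);

    // 2. Bucket cache as a secondary, file-backed cache of 1GB.
    Configuration secondaryCache = HBaseConfiguration.create();
    secondaryCache.set("hbase.bucketcache.ioengine", "file:/disk1/hbase/cache.data");
    secondaryCache.setFloat("hbase.bucketcache.size", 1024f);
    secondaryCache.setBoolean("hbase.bucketcache.combinedcache.enabled", false);
  }
}
{code}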



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HBASE-7404) Bucket Cache:A solution about CMS,Heap Fragment and Big Cache on HBASE

2017-04-25 Thread Hanjie Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15982530#comment-15982530
 ] 

Hanjie Gu edited comment on HBASE-7404 at 4/25/17 8:00 AM:
---

I have a question:
Why does each size in BucketSizeInfo have an extra 1KB?
See the code below from BucketAllocator.java (5KB, ..., 17KB, ..., 65KB, ...):
```
  // Default block size is 64K, so we choose more sizes near 64K, you'd better
  // reset it according to your cluster's block size distribution
  // TODO Support the view of block size distribution statistics
  private static final int DEFAULT_BUCKET_SIZES[] = { 4 * 1024 + 1024, 8 * 1024 + 1024,
      16 * 1024 + 1024, 32 * 1024 + 1024, 40 * 1024 + 1024, 48 * 1024 + 1024,
      56 * 1024 + 1024, 64 * 1024 + 1024, 96 * 1024 + 1024, 128 * 1024 + 1024,
      192 * 1024 + 1024, 256 * 1024 + 1024, 384 * 1024 + 1024,
      512 * 1024 + 1024 };
```


was (Author: jackgu):
I have a question:
Why does each size in BucketSizeInfo have an extra 1KB?
See the code below (5KB, ..., 17KB, ..., 65KB, ...):
```
  // Default block size is 64K, so we choose more sizes near 64K, you'd better
  // reset it according to your cluster's block size distribution
  // TODO Support the view of block size distribution statistics
  private static final int DEFAULT_BUCKET_SIZES[] = { 4 * 1024 + 1024, 8 * 1024 + 1024,
      16 * 1024 + 1024, 32 * 1024 + 1024, 40 * 1024 + 1024, 48 * 1024 + 1024,
      56 * 1024 + 1024, 64 * 1024 + 1024, 96 * 1024 + 1024, 128 * 1024 + 1024,
      192 * 1024 + 1024, 256 * 1024 + 1024, 384 * 1024 + 1024,
      512 * 1024 + 1024 };
```

> Bucket Cache:A solution about CMS,Heap Fragment and Big Cache on HBASE
> --
>
> Key: HBASE-7404
> URL: https://issues.apache.org/jira/browse/HBASE-7404
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 0.94.3
>Reporter: chunhui shen
>Assignee: chunhui shen
> Fix For: 0.95.0
>
> Attachments: 7404-0.94-fixed-lines.txt, 7404-trunk-v10.patch, 
> 7404-trunk-v11.patch, 7404-trunk-v12.patch, 7404-trunk-v13.patch, 
> 7404-trunk-v13.txt, 7404-trunk-v14.patch, BucketCache.pdf, 
> hbase-7404-94v2.patch, HBASE-7404-backport-0.94.patch, 
> hbase-7404-trunkv2.patch, hbase-7404-trunkv9.patch, Introduction of Bucket 
> Cache.pdf
>
>
> First, thanks to @neil from Fusion-IO for sharing the source code.
> Usage:
> 1. Use bucket cache as the main memory cache, configured as follows:
> – "hbase.bucketcache.ioengine" "heap" (or "offheap" if using off-heap memory 
> to cache blocks)
> – "hbase.bucketcache.size" 0.4 (size for the bucket cache; 0.4 is a percentage 
> of the max heap size)
> 2. Use bucket cache as a secondary cache, configured as follows:
> – "hbase.bucketcache.ioengine" "file:/disk1/hbase/cache.data" (the file path 
> where the block data is stored)
> – "hbase.bucketcache.size" 1024 (size for the bucket cache; the unit is MB, so 
> 1024 means 1GB)
> – "hbase.bucketcache.combinedcache.enabled" false (default value being true)
> See more configurations in org.apache.hadoop.hbase.io.hfile.CacheConfig and 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache
> What's Bucket Cache? 
> It can greatly decrease CMS pauses and heap fragmentation caused by GC.
> It supports a large cache space for high read performance by using high-speed 
> disks like Fusion-io.
> 1. An implementation of block cache like LruBlockCache
> 2. Self-manages blocks' storage positions through the Bucket Allocator
> 3. The cached blocks can be stored in memory or in the file system
> 4. Bucket Cache can be used as the main block cache (see CombinedBlockCache), 
> combined with LruBlockCache, to decrease CMS pauses and fragmentation caused 
> by GC
> 5. BucketCache can also be used as a secondary cache (e.g. using Fusion-io to 
> store blocks) to enlarge the cache space
> How about SlabCache?
> We studied and tested SlabCache first, but the results were bad, because:
> 1. SlabCache uses SingleSizeCache, so its memory utilization is low due to the 
> variety of block sizes, especially when using DataBlockEncoding
> 2. SlabCache is used in DoubleBlockCache: a block is cached both in SlabCache 
> and LruBlockCache, and is put into LruBlockCache again on a SlabCache hit, so 
> CMS and heap fragmentation don't get any better
> 3. Direct (off-heap) memory performance is not as good as heap and may cause 
> OOM, so we recommend using the "heap" engine 
> See more in the attachment and in the patch



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17957) Custom metrics of replicate endpoints don't prepend "source." to global metrics

2017-04-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15982537#comment-15982537
 ] 

Ted Yu commented on HBASE-17957:


lgtm, pending QA.

>  Custom metrics of replicate endpoints don't prepend "source." to global 
> metrics
> 
>
> Key: HBASE-17957
> URL: https://issues.apache.org/jira/browse/HBASE-17957
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.4.0, 0.98.22
>Reporter: Shinya Yoshida
>Priority: Minor
> Attachments: HBASE-17957.master.001.patch
>
>
> Custom metrics for custom replication endpoints were introduced by 
> [HBASE-16448].
> The name of a local custom metric follows the "source.id.metricsName" format, 
> but the name of a global custom metric doesn't follow the "source.metricsName" 
> format.
> Ex:
> {code}
> // default metrics
> "source.2.shippedOps" : 1234, // peer local
> "source.shippedOps" : 12345, // global
> // custom metrics
> "source.1.failed.start" : 1, // peer local
> "failed.start" : 1, // global
> {code}
> Since the default metrics follow this convention, the global custom metrics 
> should use the "source.metricsName" format as well, like:
> {code}
> "source.1.failed.start" : 1,
> "source.failed.start" : 1,
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17958) Avoid passing cell to ScanQueryMatcher when optimize SEEK to SKIP

2017-04-25 Thread Guanghao Zhang (JIRA)
Guanghao Zhang created HBASE-17958:
--

 Summary: Avoid passing cell to ScanQueryMatcher when optimize SEEK 
to SKIP
 Key: HBASE-17958
 URL: https://issues.apache.org/jira/browse/HBASE-17958
 Project: HBase
  Issue Type: Bug
Reporter: Guanghao Zhang


{code}
ScanQueryMatcher.MatchCode qcode = matcher.match(cell);
qcode = optimize(qcode, cell);
{code}
The optimize method may change the MatchCode from SEEK_NEXT_COL/SEEK_NEXT_ROW 
to SKIP, but the next cell is still passed to ScanQueryMatcher. This gives a 
wrong result with some filters, e.g. ColumnCountGetFilter, which just counts 
the number of columns; if the same column is passed to this filter again, the 
count will be wrong. So we should avoid passing such a cell to ScanQueryMatcher 
when we optimize SEEK to SKIP.
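As a self-contained illustration of the failure mode (a simplified stand-in, not
the real ColumnCountGetFilter code): a filter that counts every cell it is shown
over-counts as soon as the scanner hands it the same column twice.
{code}
// Simplified column-counting filter: it increments its count for every cell it
// sees. If a SEEK_NEXT_COL/SEEK_NEXT_ROW is optimized into SKIP and the matcher
// is still fed a cell from the same column, the limit is reached too early.
class CountingFilter {
  private final int limit;
  private int count = 0;

  CountingFilter(int limit) {
    this.limit = limit;
  }

  /** Returns true while the cell should still be included. */
  boolean filterCell() {
    return ++count <= limit;
  }
}
{code}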



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17958) Avoid passing unexpected cell to ScanQueryMatcher when optimize SEEK to SKIP

2017-04-25 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17958:
--
Summary: Avoid passing unexpected cell to ScanQueryMatcher when optimize 
SEEK to SKIP  (was: Avoid passing cell to ScanQueryMatcher when optimize SEEK 
to SKIP)

> Avoid passing unexpected cell to ScanQueryMatcher when optimize SEEK to SKIP
> 
>
> Key: HBASE-17958
> URL: https://issues.apache.org/jira/browse/HBASE-17958
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>
> {code}
> ScanQueryMatcher.MatchCode qcode = matcher.match(cell);
> qcode = optimize(qcode, cell);
> {code}
> The optimize method may change the MatchCode from SEEK_NEXT_COL/SEEK_NEXT_ROW 
> to SKIP, but the next cell is still passed to ScanQueryMatcher. This gives a 
> wrong result with some filters, e.g. ColumnCountGetFilter, which just counts 
> the number of columns; if the same column is passed to this filter again, the 
> count will be wrong. So we should avoid passing such a cell to ScanQueryMatcher 
> when we optimize SEEK to SKIP.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17957) Custom metrics of replicate endpoints don't prepend "source." to global metrics

2017-04-25 Thread Shinya Yoshida (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15982560#comment-15982560
 ] 

Shinya Yoshida commented on HBASE-17957:


Hi Ted,
Thank you for your review.

I'd like to assign this issue to myself, but I couldn't see the "Assign to me" 
link under the Assignee field, which I can usually see in other JIRAs.
Could you give me permission to change the assignee, since I want to keep 
contributing?

Best Regards,
Shinya Yoshida

>  Custom metrics of replicate endpoints don't prepend "source." to global 
> metrics
> 
>
> Key: HBASE-17957
> URL: https://issues.apache.org/jira/browse/HBASE-17957
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.4.0, 0.98.22
>Reporter: Shinya Yoshida
>Priority: Minor
> Attachments: HBASE-17957.master.001.patch
>
>
> Custom metrics for custom replication endpoints were introduced by 
> [HBASE-16448].
> The name of a local custom metric follows the "source.id.metricsName" format, 
> but the name of a global custom metric doesn't follow the "source.metricsName" 
> format.
> Ex:
> {code}
> // default metrics
> "source.2.shippedOps" : 1234, // peer local
> "source.shippedOps" : 12345, // global
> // custom metrics
> "source.1.failed.start" : 1, // peer local
> "failed.start" : 1, // global
> {code}
> Since the default metrics follow this convention, the global custom metrics 
> should use the "source.metricsName" format as well, like:
> {code}
> "source.1.failed.start" : 1,
> "source.failed.start" : 1,
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17958) Avoid passing unexpected cell to ScanQueryMatcher when optimize SEEK to SKIP

2017-04-25 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15982567#comment-15982567
 ] 

Duo Zhang commented on HBASE-17958:
---

Yeah, this is really a bad practice. The filter returns SEEK_NEXT_ROW or 
SEEK_NEXT_COL, but we may still pass a cell of the same row or same column to 
SQM. We have a strange optimization in SQM called stickyNextRow (which was 
really confusing to me when refactoring SQM...), so SEEK_NEXT_ROW usually 
works, but for SEEK_NEXT_COL there is no such optimization, so it is broken...

In fact, if we decide that a skip is better than seek, then we should call 
heap.next() continuously until we reach the next row or next column, and then 
start to call SQM.match again. It is really confusing that SQM returns 
SEEK_NEXT_ROW or SEEK_NEXT_COL but it could still receive the cell from the 
same row or same column, right?

Thanks.
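A rough, hypothetical sketch of that approach (illustrative names only, not the
actual StoreScanner/SQM code): after deciding to SKIP instead of SEEK, drain
cells from the heap until the current row/column has been left behind, and only
then hand a cell to the matcher again.
{code}
import java.util.function.BiPredicate;

final class SkipUntilNextColumn {
  /** Minimal stand-in for the KeyValueHeap: peek the current cell, advance to the next. */
  interface CellStream<C> {
    C peek();
    void next();
  }

  /** Returns the first cell past 'last' (by row/column), i.e. the next cell SQM should see. */
  static <C> C advance(CellStream<C> heap, C last, BiPredicate<C, C> sameRowAndColumn) {
    C current = heap.peek();
    while (current != null && sameRowAndColumn.test(current, last)) {
      heap.next();            // cheap skip; typically stays on the same block
      current = heap.peek();
    }
    return current;
  }
}
{code}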

> Avoid passing unexpected cell to ScanQueryMatcher when optimize SEEK to SKIP
> 
>
> Key: HBASE-17958
> URL: https://issues.apache.org/jira/browse/HBASE-17958
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>
> {code}
> ScanQueryMatcher.MatchCode qcode = matcher.match(cell);
> qcode = optimize(qcode, cell);
> {code}
> The optimize method may change the MatchCode from SEEK_NEXT_COL/SEEK_NEXT_ROW 
> to SKIP, but the next cell is still passed to ScanQueryMatcher. This gives a 
> wrong result with some filters, e.g. ColumnCountGetFilter, which just counts 
> the number of columns; if the same column is passed to this filter again, the 
> count will be wrong. So we should avoid passing such a cell to ScanQueryMatcher 
> when we optimize SEEK to SKIP.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HBASE-17957) Custom metrics of replicate endpoints don't prepend "source." to global metrics

2017-04-25 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HBASE-17957:
--

Assignee: Shinya Yoshida

>  Custom metrics of replicate endpoints don't prepend "source." to global 
> metrics
> 
>
> Key: HBASE-17957
> URL: https://issues.apache.org/jira/browse/HBASE-17957
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.4.0, 0.98.22
>Reporter: Shinya Yoshida
>Assignee: Shinya Yoshida
>Priority: Minor
> Attachments: HBASE-17957.master.001.patch
>
>
> Custom metrics for custom replication endpoints were introduced by 
> [HBASE-16448].
> The name of a local custom metric follows the "source.id.metricsName" format, 
> but the name of a global custom metric doesn't follow the "source.metricsName" 
> format.
> Ex:
> {code}
> // default metrics
> "source.2.shippedOps" : 1234, // peer local
> "source.shippedOps" : 12345, // global
> // custom metrics
> "source.1.failed.start" : 1, // peer local
> "failed.start" : 1, // global
> {code}
> Since the default metrics follow this convention, the global custom metrics 
> should use the "source.metricsName" format as well, like:
> {code}
> "source.1.failed.start" : 1,
> "source.failed.start" : 1,
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17947) Location of Examples.proto is wrong in comment of RowCountEndPoint.java

2017-04-25 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17947:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> Location of Examples.proto is wrong in comment of RowCountEndPoint.java
> ---
>
> Key: HBASE-17947
> URL: https://issues.apache.org/jira/browse/HBASE-17947
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Trivial
>  Labels: easyfix
> Fix For: 2.0.0
>
> Attachments: HBASE-17947.master.000.patch, 
> HBASE-17947.master.001.patch
>
>
> In the comment of RowCountEndpoint.java
> {code}
> /**
>  * Sample coprocessor endpoint exposing a Service interface for counting rows 
> and key values.
>  *
>  * 
>  * For the protocol buffer definition of the RowCountService, see the source 
> file located under
>  * hbase-server/src/main/protobuf/Examples.proto.
>  * 
>  */
> {code}
> Examples.proto is not under hbase-server/src/main/protobuf/, but under 
> hbase-examples/src/main/protobuf/



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17951) Log the region's information in HRegion#checkResources

2017-04-25 Thread Guangxu Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15982634#comment-15982634
 ] 

Guangxu Cheng commented on HBASE-17951:
---

Thank you for your review, [~stack] [~anoop.hbase].

It is necessary to limit the logs. I will work on this.
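A minimal sketch of one way to throttle such logging (hypothetical helper and
names, not the committed patch): log the busy region at most once per interval
so a hot region cannot flood the server log.
{code}
import java.util.concurrent.atomic.AtomicLong;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class ThrottledBusyRegionLogger {
  private static final Logger LOG = LoggerFactory.getLogger(ThrottledBusyRegionLogger.class);
  private final long intervalMs;
  private final AtomicLong lastLogTime = new AtomicLong();

  ThrottledBusyRegionLogger(long intervalMs) {
    this.intervalMs = intervalMs;
  }

  /** Logs the region name and reason at most once per interval. */
  void logIfDue(String regionName, String reason) {
    long now = System.currentTimeMillis();
    long last = lastLogTime.get();
    if (now - last > intervalMs && lastLogTime.compareAndSet(last, now)) {
      LOG.warn("Region is too busy: " + regionName + ", reason: " + reason);
    }
  }
}
{code}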

> Log the region's information in HRegion#checkResources
> --
>
> Key: HBASE-17951
> URL: https://issues.apache.org/jira/browse/HBASE-17951
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.3.0, 1.2.5
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
> Attachments: HBASE-17951-master-v1.patch
>
>
> We throw RegionTooBusyException in HRegion#checkResources if we are above the 
> memstore limit. On the server side, we just increment the value of 
> blockedRequestsCount and do not print the region's information.
> I think we should also print the region's information on the server side, so 
> that we can easily find the busy regions.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17928) Shell tool to clear compact queues

2017-04-25 Thread Guangxu Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15982639#comment-15982639
 ] 

Guangxu Cheng commented on HBASE-17928:
---

[~stack] Thank you for your review.
I looked at HBASE-12446. The main purpose of HBASE-12446 is to abort running 
compactions.
This issue is to clear the compaction queues regardless of the running 
compactions.
I am also interested in HBASE-12446.
Maybe we also need to implement the following commands:

1. list compactions for a given regionserver 
2. abort a compaction given a regionserver and compaction id 
3. abort all compactions on a given regionserver 
4. abort all compactions and clear both compaction queues on a given regionserver 
5. clear the long/short/both compaction queues on a given regionserver 

> Shell tool to clear compact queues
> --
>
> Key: HBASE-17928
> URL: https://issues.apache.org/jira/browse/HBASE-17928
> Project: HBase
>  Issue Type: New Feature
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
> Fix For: 2.0.0
>
> Attachments: HBASE-17928-v1.patch, HBASE-17928-v2.patch, 
> HBASE-17928-v3.patch, HBASE-17928-v4.patch
>
>
> Scenario:
> 1. A table is compacted by mistake
> 2. The compaction does not complete within the specified time period
> In this case, clearing the queue is a better choice, so as not to affect the 
> stability of the cluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-15583) Any HTableDescriptor we give out should be immutable

2017-04-25 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15982692#comment-15982692
 ] 

Chia-Ping Tsai commented on HBASE-15583:


Thanks. [~stack]

> Any HTableDescriptor we give out should be immutable
> 
>
> Key: HBASE-15583
> URL: https://issues.apache.org/jira/browse/HBASE-15583
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Gabor Liptak
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15583.v0.patch, HBASE-15583.v1.patch, 
> HBASE-15583.v2.patch, HBASE-15583.v3.patch, HBASE-15583.v4.patch, 
> HBASE-15583.v5.patch, HBASE-15583.v6.patch
>
>
> From [~enis] in https://issues.apache.org/jira/browse/HBASE-15505:
> PS Should UnmodifyableHTableDescriptor be renamed to 
> UnmodifiableHTableDescriptor?
> It should be named ImmutableHTableDescriptor to be consistent with 
> collections naming. Let's do this as a subtask of the parent jira, not here. 
> Thinking about it though, why would we return an Immutable HTD in 
> HTable.getTableDescriptor() versus a mutable HTD in 
> Admin.getTableDescriptor(). It does not make sense. Should we just get rid of 
> the Immutable ones?
> We also have UnmodifyableHRegionInfo which is not used at the moment it 
> seems. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15143) Procedure v2 - Web UI displaying queues

2017-04-25 Thread Balazs Meszaros (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balazs Meszaros updated HBASE-15143:

Attachment: HBASE-15143-BM-0012.patch

> Procedure v2 - Web UI displaying queues
> ---
>
> Key: HBASE-15143
> URL: https://issues.apache.org/jira/browse/HBASE-15143
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, UI
>Reporter: Matteo Bertozzi
>Assignee: Balazs Meszaros
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15143-BM-0001.patch, HBASE-15143-BM-0002.patch, 
> HBASE-15143-BM-0003.patch, HBASE-15143-BM-0004.patch, 
> HBASE-15143-BM-0005.patch, HBASE-15143-BM-0006.patch, 
> HBASE-15143-BM-0006.patch, HBASE-15143-BM-0006.patch, 
> HBASE-15143-BM-0007.patch, HBASE-15143-BM-0008.patch, 
> HBASE-15143-BM-0009.patch, HBASE-15143-BM-0010.patch, 
> HBASE-15143-BM-0011.patch, HBASE-15143-BM-0011.patch, 
> HBASE-15143-BM-0012.patch, screenshot.png
>
>
> We can query MasterProcedureScheduler to display the various procedures and 
> who is holding table/region locks.
> Each procedure is in a TableQueue or ServerQueue, so it is easy to display 
> the procedures in its own group.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15143) Procedure v2 - Web UI displaying queues

2017-04-25 Thread Balazs Meszaros (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balazs Meszaros updated HBASE-15143:

Status: Open  (was: Patch Available)

> Procedure v2 - Web UI displaying queues
> ---
>
> Key: HBASE-15143
> URL: https://issues.apache.org/jira/browse/HBASE-15143
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, UI
>Reporter: Matteo Bertozzi
>Assignee: Balazs Meszaros
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15143-BM-0001.patch, HBASE-15143-BM-0002.patch, 
> HBASE-15143-BM-0003.patch, HBASE-15143-BM-0004.patch, 
> HBASE-15143-BM-0005.patch, HBASE-15143-BM-0006.patch, 
> HBASE-15143-BM-0006.patch, HBASE-15143-BM-0006.patch, 
> HBASE-15143-BM-0007.patch, HBASE-15143-BM-0008.patch, 
> HBASE-15143-BM-0009.patch, HBASE-15143-BM-0010.patch, 
> HBASE-15143-BM-0011.patch, HBASE-15143-BM-0011.patch, 
> HBASE-15143-BM-0012.patch, screenshot.png
>
>
> We can query MasterProcedureScheduler to display the various procedures and 
> who is holding table/region locks.
> Each procedure is in a TableQueue or ServerQueue, so it is easy to display 
> the procedures in its own group.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15143) Procedure v2 - Web UI displaying queues

2017-04-25 Thread Balazs Meszaros (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balazs Meszaros updated HBASE-15143:

Status: Patch Available  (was: Open)

> Procedure v2 - Web UI displaying queues
> ---
>
> Key: HBASE-15143
> URL: https://issues.apache.org/jira/browse/HBASE-15143
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, UI
>Reporter: Matteo Bertozzi
>Assignee: Balazs Meszaros
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15143-BM-0001.patch, HBASE-15143-BM-0002.patch, 
> HBASE-15143-BM-0003.patch, HBASE-15143-BM-0004.patch, 
> HBASE-15143-BM-0005.patch, HBASE-15143-BM-0006.patch, 
> HBASE-15143-BM-0006.patch, HBASE-15143-BM-0006.patch, 
> HBASE-15143-BM-0007.patch, HBASE-15143-BM-0008.patch, 
> HBASE-15143-BM-0009.patch, HBASE-15143-BM-0010.patch, 
> HBASE-15143-BM-0011.patch, HBASE-15143-BM-0011.patch, 
> HBASE-15143-BM-0012.patch, screenshot.png
>
>
> We can query MasterProcedureScheduler to display the various procedures and 
> who is holding table/region locks.
> Each procedure is in a TableQueue or ServerQueue, so it is easy to display 
> the procedures in its own group.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-15143) Procedure v2 - Web UI displaying queues

2017-04-25 Thread Balazs Meszaros (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15982709#comment-15982709
 ] 

Balazs Meszaros commented on HBASE-15143:
-

I did a rebase; now it compiles.

> Procedure v2 - Web UI displaying queues
> ---
>
> Key: HBASE-15143
> URL: https://issues.apache.org/jira/browse/HBASE-15143
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, UI
>Reporter: Matteo Bertozzi
>Assignee: Balazs Meszaros
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15143-BM-0001.patch, HBASE-15143-BM-0002.patch, 
> HBASE-15143-BM-0003.patch, HBASE-15143-BM-0004.patch, 
> HBASE-15143-BM-0005.patch, HBASE-15143-BM-0006.patch, 
> HBASE-15143-BM-0006.patch, HBASE-15143-BM-0006.patch, 
> HBASE-15143-BM-0007.patch, HBASE-15143-BM-0008.patch, 
> HBASE-15143-BM-0009.patch, HBASE-15143-BM-0010.patch, 
> HBASE-15143-BM-0011.patch, HBASE-15143-BM-0011.patch, 
> HBASE-15143-BM-0012.patch, screenshot.png
>
>
> We can query MasterProcedureScheduler to display the various procedures and 
> who is holding table/region locks.
> Each procedure is in a TableQueue or ServerQueue, so it is easy to display 
> the procedures in its own group.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17958) Avoid passing unexpected cell to ScanQueryMatcher when optimize SEEK to SKIP

2017-04-25 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15982738#comment-15982738
 ] 

Anoop Sam John commented on HBASE-17958:


bq.In fact, if we decide that a skip is better than seek, then we should call 
heap.next() continuously until we reach the next row or next column, and then 
start to call SQM.match again.
So the problem you describe occurs only when the optimize logic changes a seek 
to a skip. Yes, you have a point then. Can you add a test case here for the bug 
so it will be very clear?

> Avoid passing unexpected cell to ScanQueryMatcher when optimize SEEK to SKIP
> 
>
> Key: HBASE-17958
> URL: https://issues.apache.org/jira/browse/HBASE-17958
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>
> {code}
> ScanQueryMatcher.MatchCode qcode = matcher.match(cell);
> qcode = optimize(qcode, cell);
> {code}
> The optimize method may change the MatchCode from SEEK_NEXT_COL/SEEK_NEXT_ROW 
> to SKIP, but the next cell is still passed to ScanQueryMatcher. This gives a 
> wrong result with some filters, e.g. ColumnCountGetFilter, which just counts 
> the number of columns; if the same column is passed to this filter again, the 
> count will be wrong. So we should avoid passing such a cell to ScanQueryMatcher 
> when we optimize SEEK to SKIP.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16436) Add the CellChunkMap variant

2017-04-25 Thread Anastasia Braginsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anastasia Braginsky updated HBASE-16436:

Attachment: HBASE-16436-V07.patch

> Add the CellChunkMap variant
> 
>
> Key: HBASE-16436
> URL: https://issues.apache.org/jira/browse/HBASE-16436
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
> Attachments: HBASE-16436-V07.patch, HBASE-16436-V43.patch
>
>
> This sub-task is specifically to add the CellChunkMap created by [~anastas] 
> and [~eshcar] with specific tests and integrate it with the in memory 
> flush/compaction flow. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16436) Add the CellChunkMap variant

2017-04-25 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15982749#comment-15982749
 ] 

Anastasia Braginsky commented on HBASE-16436:
-

Hey,

I have corrected the patch; the new one is uploaded and can be seen in the RB: 
https://reviews.apache.org/r/58698/
[~anoop.hbase], [~ram_krish], and everybody, please take a look.

Thanks,
Anastasia


> Add the CellChunkMap variant
> 
>
> Key: HBASE-16436
> URL: https://issues.apache.org/jira/browse/HBASE-16436
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
> Attachments: HBASE-16436-V07.patch, HBASE-16436-V43.patch
>
>
> This sub-task is specifically to add the CellChunkMap created by [~anastas] 
> and [~eshcar] with specific tests and integrate it with the in memory 
> flush/compaction flow. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17958) Avoid passing unexpected cell to ScanQueryMatcher when optimize SEEK to SKIP

2017-04-25 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15982801#comment-15982801
 ] 

Duo Zhang commented on HBASE-17958:
---

[~zghaobac] hit the bug when implementing ColumnCountGetFilter. He will prepare 
a patch soon.

> Avoid passing unexpected cell to ScanQueryMatcher when optimize SEEK to SKIP
> 
>
> Key: HBASE-17958
> URL: https://issues.apache.org/jira/browse/HBASE-17958
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>
> {code}
> ScanQueryMatcher.MatchCode qcode = matcher.match(cell);
> qcode = optimize(qcode, cell);
> {code}
> The optimize method may change the MatchCode from SEEK_NEXT_COL/SEEK_NEXT_ROW 
> to SKIP, but the next cell is still passed to ScanQueryMatcher. This gives a 
> wrong result with some filters, e.g. ColumnCountGetFilter, which just counts 
> the number of columns; if the same column is passed to this filter again, the 
> count will be wrong. So we should avoid passing such a cell to ScanQueryMatcher 
> when we optimize SEEK to SKIP.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HBASE-17958) Avoid passing unexpected cell to ScanQueryMatcher when optimize SEEK to SKIP

2017-04-25 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang reassigned HBASE-17958:
--

Assignee: Guanghao Zhang

> Avoid passing unexpected cell to ScanQueryMatcher when optimize SEEK to SKIP
> 
>
> Key: HBASE-17958
> URL: https://issues.apache.org/jira/browse/HBASE-17958
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>
> {code}
> ScanQueryMatcher.MatchCode qcode = matcher.match(cell);
> qcode = optimize(qcode, cell);
> {code}
> The optimize method may change the MatchCode from SEEK_NEXT_COL/SEEK_NEXT_ROW 
> to SKIP, but the next cell is still passed to ScanQueryMatcher. This gives a 
> wrong result with some filters, e.g. ColumnCountGetFilter, which just counts 
> the number of columns; if the same column is passed to this filter again, the 
> count will be wrong. So we should avoid passing such a cell to ScanQueryMatcher 
> when we optimize SEEK to SKIP.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17958) Avoid passing unexpected cell to ScanQueryMatcher when optimize SEEK to SKIP

2017-04-25 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15982809#comment-15982809
 ] 

Guanghao Zhang commented on HBASE-17958:


[~anoop.hbase] I found this problem when implement filter for HBASE-17125. I 
will upload a patch later. Thanks.

> Avoid passing unexpected cell to ScanQueryMatcher when optimize SEEK to SKIP
> 
>
> Key: HBASE-17958
> URL: https://issues.apache.org/jira/browse/HBASE-17958
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>
> {code}
> ScanQueryMatcher.MatchCode qcode = matcher.match(cell);
> qcode = optimize(qcode, cell);
> {code}
> The optimize method may change the MatchCode from SEEK_NEXT_COL/SEEK_NEXT_ROW 
> to SKIP, but the next cell is still passed to ScanQueryMatcher. This gives a 
> wrong result with some filters, e.g. ColumnCountGetFilter, which just counts 
> the number of columns; if the same column is passed to this filter again, the 
> count will be wrong. So we should avoid passing such a cell to ScanQueryMatcher 
> when we optimize SEEK to SKIP.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17947) Location of Examples.proto is wrong in comment of RowCountEndPoint.java

2017-04-25 Thread Xiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15982816#comment-15982816
 ] 

Xiang Li commented on HBASE-17947:
--

Thanks! Ted

> Location of Examples.proto is wrong in comment of RowCountEndPoint.java
> ---
>
> Key: HBASE-17947
> URL: https://issues.apache.org/jira/browse/HBASE-17947
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Trivial
>  Labels: easyfix
> Fix For: 2.0.0
>
> Attachments: HBASE-17947.master.000.patch, 
> HBASE-17947.master.001.patch
>
>
> In the comment of RowCountEndpoint.java
> {code}
> /**
>  * Sample coprocessor endpoint exposing a Service interface for counting rows 
> and key values.
>  *
>  * 
>  * For the protocol buffer definition of the RowCountService, see the source 
> file located under
>  * hbase-server/src/main/protobuf/Examples.proto.
>  * 
>  */
> {code}
> Examples.proto is not under hbase-server/src/main/protobuf/, but under 
> hbase-examples/src/main/protobuf/



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17343) Make Compacting Memstore default in 2.0 with BASIC as the default type

2017-04-25 Thread Anastasia Braginsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anastasia Braginsky updated HBASE-17343:

Attachment: HBASE-17343-V04.patch

> Make Compacting Memstore default in 2.0 with BASIC as the default type
> --
>
> Key: HBASE-17343
> URL: https://issues.apache.org/jira/browse/HBASE-17343
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-17343-V01.patch, HBASE-17343-V02.patch, 
> HBASE-17343-V04.patch
>
>
> FYI [~anastas], [~eshcar] and [~ebortnik].



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17343) Make Compacting Memstore default in 2.0 with BASIC as the default type

2017-04-25 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15982967#comment-15982967
 ] 

Anastasia Braginsky commented on HBASE-17343:
-

Hey,

I am putting the patch here with BASIC CompactingMemStore turned on as the 
default, just to pass it through QA again. I am planning to commit it later.
[~stack], [~anoop.hbase], [~ram_krish], [~ebortnik], [~eshcar] any comments?

> Make Compacting Memstore default in 2.0 with BASIC as the default type
> --
>
> Key: HBASE-17343
> URL: https://issues.apache.org/jira/browse/HBASE-17343
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-17343-V01.patch, HBASE-17343-V02.patch, 
> HBASE-17343-V04.patch
>
>
> FYI [~anastas], [~eshcar] and [~ebortnik].



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17343) Make Compacting Memstore default in 2.0 with BASIC as the default type

2017-04-25 Thread Edward Bortnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15982979#comment-15982979
 ] 

Edward Bortnikov commented on HBASE-17343:
--

+1 on my side. 

> Make Compacting Memstore default in 2.0 with BASIC as the default type
> --
>
> Key: HBASE-17343
> URL: https://issues.apache.org/jira/browse/HBASE-17343
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-17343-V01.patch, HBASE-17343-V02.patch, 
> HBASE-17343-V04.patch
>
>
> FYI [~anastas], [~eshcar] and [~ebortnik].



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17343) Make Compacting Memstore default in 2.0 with BASIC as the default type

2017-04-25 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983227#comment-15983227
 ] 

Anoop Sam John commented on HBASE-17343:


Did some PE perf testing with different value sizes (1K, which is the default 
in PE, 500 bytes, and 100 bytes).
When the value size is smaller, the gain from compaction is larger. Even in 
the bigger-cell case, there is no significant degradation.
I also tested along with the off-heap memstore feature. The gain is larger 
when off-heap is also in place (this was my thinking from day one, and the 
tests bear it out). Off-heap + compacting memstore gives the best performance.
All tests use G1GC. I also tested with a global memstore size of 42% as well 
as 60%. In the 42% case the Initial Heap Occupancy Percentage (IHOP) is 50%, 
whereas in the other case it is 65%. In the second case I sometimes got a GC 
pause long enough that the master thought the RS had died; I hit this 1 out of 
3 times. With 50% IHOP this does not seem to happen. Remember that in all 
these tests MSLAB is ON (which is the default behavior).
I also tested with MSLAB off. This case seems to perform better than the MSLAB 
ON cases, especially when the cell size is smaller. In those runs I could not 
reproduce the long GC pause.

I have a code-level comment (will say more about that in another comment), and 
a fix there avoided the long GC issue in any case.

I tested MSLAB ON vs OFF specifically, and it seems that with a higher global 
memstore size % and IHOP the impact is larger (MSLAB ON performs worse). When 
IHOP is set to 50% (it defaults to 45%), MSLAB ON has slightly better 
performance (though not much of a difference). With G1GC, we have to check 
whether we should advise turning MSLAB off. In any case, the off-heap memstore 
is implemented using an off-heap MSLAB pool, so MSLAB is still useful there. 
As I said, the off-heap memstore performs better than its on-heap counterpart, 
and in-memory compaction makes things even better there.

Did not make perf graphs yet. Will be coming up with those soon (in a day or 2).

> Make Compacting Memstore default in 2.0 with BASIC as the default type
> --
>
> Key: HBASE-17343
> URL: https://issues.apache.org/jira/browse/HBASE-17343
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-17343-V01.patch, HBASE-17343-V02.patch, 
> HBASE-17343-V04.patch
>
>
> FYI [~anastas], [~eshcar] and [~ebortnik].



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15143) Procedure v2 - Web UI displaying queues

2017-04-25 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15143:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
Release Note: Adds a new Admin#listLocks, a panel on the procedures page to 
list procedure locks, and a list_locks command to the shell. Use it to see 
current state of procedure locking in Master process.
  Status: Resolved  (was: Patch Available)

Ran the failing tests locally and they pass for me. Need to figure out why they 
fail so much on the cluster.

Pushing since we need this for debugging.

Thanks [~balazs.meszaros] for the persistence. Also, thanks for the rebase. I 
was mangling it last night.

> Procedure v2 - Web UI displaying queues
> ---
>
> Key: HBASE-15143
> URL: https://issues.apache.org/jira/browse/HBASE-15143
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, UI
>Reporter: Matteo Bertozzi
>Assignee: Balazs Meszaros
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15143-BM-0001.patch, HBASE-15143-BM-0002.patch, 
> HBASE-15143-BM-0003.patch, HBASE-15143-BM-0004.patch, 
> HBASE-15143-BM-0005.patch, HBASE-15143-BM-0006.patch, 
> HBASE-15143-BM-0006.patch, HBASE-15143-BM-0006.patch, 
> HBASE-15143-BM-0007.patch, HBASE-15143-BM-0008.patch, 
> HBASE-15143-BM-0009.patch, HBASE-15143-BM-0010.patch, 
> HBASE-15143-BM-0011.patch, HBASE-15143-BM-0011.patch, 
> HBASE-15143-BM-0012.patch, screenshot.png
>
>
> We can query MasterProcedureScheduler to display the various procedures and 
> who is holding table/region locks.
> Each procedure is in a TableQueue or ServerQueue, so it is easy to display 
> the procedures in its own group.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17958) Avoid passing unexpected cell to ScanQueryMatcher when optimize SEEK to SKIP

2017-04-25 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983308#comment-15983308
 ] 

Lars Hofhansl commented on HBASE-17958:
---

Do you guys have a unit test to show this?

> Avoid passing unexpected cell to ScanQueryMatcher when optimize SEEK to SKIP
> 
>
> Key: HBASE-17958
> URL: https://issues.apache.org/jira/browse/HBASE-17958
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>
> {code}
> ScanQueryMatcher.MatchCode qcode = matcher.match(cell);
> qcode = optimize(qcode, cell);
> {code}
> The optimize method may change the MatchCode from SEEK_NEXT_COL/SEEK_NEXT_ROW 
> to SKIP, but the next cell is still passed to ScanQueryMatcher. This gives a 
> wrong result with some filters, e.g. ColumnCountGetFilter, which just counts 
> the number of columns; if the same column is passed to this filter again, the 
> count will be wrong. So we should avoid passing such a cell to ScanQueryMatcher 
> when we optimize SEEK to SKIP.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17959) Canary timeout should be configurable on a per-table basis

2017-04-25 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-17959:
--

 Summary: Canary timeout should be configurable on a per-table basis
 Key: HBASE-17959
 URL: https://issues.apache.org/jira/browse/HBASE-17959
 Project: HBase
  Issue Type: Improvement
  Components: canary
Reporter: Andrew Purtell
Priority: Minor


The Canary timeout should be configurable on a per-table basis, for cases where 
different tables have different latency SLAs. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17959) Canary timeout should be configurable on a per-table basis

2017-04-25 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-17959:
---
Description: The Canary read and write timeouts should be configurable on a 
per-table basis, for cases where different tables have different latency SLAs.  
 (was: The Canary timeout should be configurable on a per-table basis, for 
cases where different tables have different latency SLAs. )

> Canary timeout should be configurable on a per-table basis
> --
>
> Key: HBASE-17959
> URL: https://issues.apache.org/jira/browse/HBASE-17959
> Project: HBase
>  Issue Type: Improvement
>  Components: canary
>Reporter: Andrew Purtell
>Priority: Minor
>
> The Canary read and write timeouts should be configurable on a per-table 
> basis, for cases where different tables have different latency SLAs. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17958) Avoid passing unexpected cell to ScanQueryMatcher when optimize SEEK to SKIP

2017-04-25 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983320#comment-15983320
 ] 

Lars Hofhansl commented on HBASE-17958:
---

It is expected that the column trackers do the right thing. Filters that count 
the columns might indeed be a problem, though. We should look at the column 
trackers in this case.

(I do object slightly to "this is really a bad practice", though. Seeking is 
unduly expensive in HBase, especially when the seek just lands on the same 
block anyway.)

> Avoid passing unexpected cell to ScanQueryMatcher when optimize SEEK to SKIP
> 
>
> Key: HBASE-17958
> URL: https://issues.apache.org/jira/browse/HBASE-17958
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>
> {code}
> ScanQueryMatcher.MatchCode qcode = matcher.match(cell);
> qcode = optimize(qcode, cell);
> {code}
> The optimize method may change the MatchCode from SEEK_NEXT_COL/SEEK_NEXT_ROW 
> to SKIP, but the next cell is still passed to ScanQueryMatcher. This gives a 
> wrong result with some filters, e.g. ColumnCountGetFilter, which just counts 
> the number of columns; if the same column is passed to this filter again, the 
> count will be wrong. So we should avoid passing such a cell to ScanQueryMatcher 
> when we optimize SEEK to SKIP.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16841) Data loss in MOB files after cloning a snapshot and deleting that snapshot

2017-04-25 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-16841:

Priority: Blocker  (was: Major)

> Data loss in MOB files after cloning a snapshot and deleting that snapshot
> --
>
> Key: HBASE-16841
> URL: https://issues.apache.org/jira/browse/HBASE-16841
> Project: HBase
>  Issue Type: Bug
>  Components: mob, snapshots
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-16841.patch, HBASE-16841-V2.patch, 
> HBASE-16841-V3.patch, HBASE-16841-V4.patch, HBASE-16841-V5.patch, 
> HBASE-16841-V6.patch
>
>
> Running the following steps will probably lose MOB data when working with 
> snapshots.
> 1. Create a mob-enabled table by running create 't1', {NAME => 'f1', IS_MOB 
> => true, MOB_THRESHOLD => 0}.
> 2. Put millions of data.
> 3. Run {{snapshot 't1','t1_snapshot'}} to take a snapshot for this table t1.
> 4. Run {{clone_snapshot 't1_snapshot','t1_cloned'}} to clone this snapshot.
> 5. Run {{delete_snapshot 't1_snapshot'}} to delete this snapshot.
> 6. Run {{disable 't1'}} and {{delete 't1'}} to delete the table.
> 7. Now go to the archive directory of t1, the number of .link directories is 
> different from the number of hfiles which means some data will be lost after 
> the hfile cleaner runs.
> This is because, when taking a snapshot on an enabled mob table, each region 
> flushes itself and takes a snapshot, and the mob snapshot is taken only if 
> the current region is the first region of the table. At that time, the 
> flushing of some regions might not be finished, and some mob files are not 
> flushed to disk yet. Eventually some mob files are not recorded in the 
> snapshot manifest.
> To solve this, we need to take the mob snapshot last, after the snapshots on 
> all the online and offline regions are finished, in 
> {{EnabledTableSnapshotHandler}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16942) Add FavoredStochasticLoadBalancer and FN Candidate generators

2017-04-25 Thread Thiruvel Thirumoolan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thiruvel Thirumoolan updated HBASE-16942:
-
Attachment: HBASE-16942.master.011.patch

> Add FavoredStochasticLoadBalancer and FN Candidate generators
> -
>
> Key: HBASE-16942
> URL: https://issues.apache.org/jira/browse/HBASE-16942
> Project: HBase
>  Issue Type: Sub-task
>  Components: FavoredNodes
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
> Fix For: 2.0.0
>
> Attachments: HBASE-16942.master.001.patch, 
> HBASE-16942.master.002.patch, HBASE-16942.master.003.patch, 
> HBASE-16942.master.004.patch, HBASE-16942.master.005.patch, 
> HBASE-16942.master.006.patch, HBASE-16942.master.007.patch, 
> HBASE-16942.master.008.patch, HBASE-16942.master.009.patch, 
> HBASE-16942.master.010.patch, HBASE-16942.master.011.patch, 
> HBASE_16942_rough_draft.patch
>
>
> This deals with the balancer-based enhancements to the favored nodes patch, 
> as discussed in HBASE-15532.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17448) Export metrics from RecoverableZooKeeper

2017-04-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983525#comment-15983525
 ] 

Andrew Purtell commented on HBASE-17448:


Great job [~ckulkarni]. 

We need to fix this:
{noformat}
Tests run: 6, Failures: 1, Errors: 0, Skipped: 1, Time elapsed: 11.094 sec <<< 
FAILURE! - in org.apache.hadoop.hbase.TestInterfaceAudienceAnnotations
testInterfaceAudienceAnnotation(org.apache.hadoop.hbase.TestInterfaceAudienceAnnotations)
  Time elapsed: 0.761 sec  <<< FAILURE!
java.lang.AssertionError: All classes should have @InterfaceAudience annotation 
expected:<0> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at 
org.apache.hadoop.hbase.TestInterfaceAudienceAnnotations.testInterfaceAudienceAnnotation(TestInterfaceAudienceAnnotations.java:322)
{noformat}

I'm not sure why this test is failing. Let me look into it. If it is a quick 
fix I will fix it upon commit later today. 

> Export metrics from RecoverableZooKeeper
> 
>
> Key: HBASE-17448
> URL: https://issues.apache.org/jira/browse/HBASE-17448
> Project: HBase
>  Issue Type: Improvement
>  Components: Zookeeper
>Affects Versions: 1.3.1
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>  Labels: patch
> Fix For: 1.3.2
>
> Attachments: HBASE-17448.patch
>
>
> Consider adding instrumentation to RecoverableZooKeeper that exposes metrics 
> on the performance and health of the embedded ZooKeeper client: latency 
> histograms for each op type, number of reconnections, number of ops where a 
> reconnection was necessary to proceed, number of failed ops due to 
> CONNECTIONLOSS, number of failed ops due to SESSIONEXPIRED, number of failed 
> ops due to OPERATIONTIMEOUT. 
> RecoverableZooKeeper is a class in hbase-client so we can hook up the new 
> metrics to both client- and server-side metrics reporters. Probably this 
> metrics source should be a process singleton. 
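
A minimal sketch of the kind of instrumentation described above. The 
MetricsZooKeeper interface and its method names are invented here for 
illustration; an actual patch would wire into the HBase metrics framework 
instead of a hand-rolled sink.

{code}
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

// Hypothetical metrics sink; names are illustrative only.
interface MetricsZooKeeper {
  void registerGetDataLatency(long nanos);
  void registerConnectionLossException();
  void registerSessionExpiredException();
  void registerOperationTimeoutException();
}

class InstrumentedZooKeeper {
  private final ZooKeeper zk;
  private final MetricsZooKeeper metrics;

  InstrumentedZooKeeper(ZooKeeper zk, MetricsZooKeeper metrics) {
    this.zk = zk;
    this.metrics = metrics;
  }

  // One instrumented op; the same pattern would repeat for exists/setData/etc.
  byte[] getData(String path, boolean watch, Stat stat)
      throws KeeperException, InterruptedException {
    long start = System.nanoTime();
    try {
      byte[] data = zk.getData(path, watch, stat);
      metrics.registerGetDataLatency(System.nanoTime() - start); // per-op-type latency
      return data;
    } catch (KeeperException.ConnectionLossException e) {
      metrics.registerConnectionLossException();                 // failed op: CONNECTIONLOSS
      throw e;
    } catch (KeeperException.SessionExpiredException e) {
      metrics.registerSessionExpiredException();                 // failed op: SESSIONEXPIRED
      throw e;
    } catch (KeeperException.OperationTimeoutException e) {
      metrics.registerOperationTimeoutException();               // failed op: OPERATIONTIMEOUT
      throw e;
    }
  }
}
{code}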



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17448) Export metrics from RecoverableZooKeeper

2017-04-25 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-17448:
---
Attachment: HBASE-17448.patch

Interesting

{noformat}
2017-04-25 14:22:34,477 INFO  [main] 
hbase.TestInterfaceAudienceAnnotations(316): These are the classes that DO NOT 
have @InterfaceAudience annotation:
2017-04-25 14:22:34,477 INFO  [main] 
hbase.TestInterfaceAudienceAnnotations(318): interface 
org.apache.hadoop.hbase.metrics.PackageMarker
{noformat}

It's because this patch imports hadoop metrics dependencies into hbase-client 
for the first time. Attaching an amended patch that gets the test to pass. 

> Export metrics from RecoverableZooKeeper
> 
>
> Key: HBASE-17448
> URL: https://issues.apache.org/jira/browse/HBASE-17448
> Project: HBase
>  Issue Type: Improvement
>  Components: Zookeeper
>Affects Versions: 1.3.1
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>  Labels: patch
> Fix For: 1.3.2
>
> Attachments: HBASE-17448.patch, HBASE-17448.patch
>
>
> Consider adding instrumentation to RecoverableZooKeeper that exposes metrics 
> on the performance and health of the embedded ZooKeeper client: latency 
> histograms for each op type, number of reconnections, number of ops where a 
> reconnection was necessary to proceed, number of failed ops due to 
> CONNECTIONLOSS, number of failed ops due to SESSIONEXPIRED, number of failed 
> ops due to OPERATIONTIMEOUT. 
> RecoverableZooKeeper is a class in hbase-client so we can hook up the new 
> metrics to both client- and server-side metrics reporters. Probably this 
> metrics source should be a process singleton. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17448) Export metrics from RecoverableZooKeeper

2017-04-25 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-17448:
---
Fix Version/s: (was: 1.3.2)
   2.0.0

> Export metrics from RecoverableZooKeeper
> 
>
> Key: HBASE-17448
> URL: https://issues.apache.org/jira/browse/HBASE-17448
> Project: HBase
>  Issue Type: Improvement
>  Components: Zookeeper
>Affects Versions: 1.3.1
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>  Labels: patch
> Fix For: 2.0.0
>
> Attachments: HBASE-17448.patch, HBASE-17448.patch
>
>
> Consider adding instrumentation to RecoverableZooKeeper that exposes metrics 
> on the performance and health of the embedded ZooKeeper client: latency 
> histograms for each op type, number of reconnections, number of ops where a 
> reconnection was necessary to proceed, number of failed ops due to 
> CONNECTIONLOSS, number of failed ops due to SESSIONEXPIRED, number of failed 
> ops due to OPERATIONTIMEOUT. 
> RecoverableZooKeeper is a class in hbase-client so we can hook up the new 
> metrics to both client- and server-side metrics reporters. Probably this 
> metrics source should be a process singleton. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17448) Export metrics from RecoverableZooKeeper

2017-04-25 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-17448:
---
Attachment: HBASE-17448-branch-1.patch

> Export metrics from RecoverableZooKeeper
> 
>
> Key: HBASE-17448
> URL: https://issues.apache.org/jira/browse/HBASE-17448
> Project: HBase
>  Issue Type: Improvement
>  Components: Zookeeper
>Affects Versions: 1.3.1
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>  Labels: patch
> Fix For: 2.0.0
>
> Attachments: HBASE-17448-branch-1.patch, HBASE-17448.patch, 
> HBASE-17448.patch
>
>
> Consider adding instrumentation to RecoverableZooKeeper that exposes metrics 
> on the performance and health of the embedded ZooKeeper client: latency 
> histograms for each op type, number of reconnections, number of ops where a 
> reconnection was necessary to proceed, number of failed ops due to 
> CONNECTIONLOSS, number of failed ops due to SESSIONEXPIRED, number of failed 
> ops due to OPERATIONTIMEOUT. 
> RecoverableZooKeeper is a class in hbase-client so we can hook up the new 
> metrics to both client- and server-side metrics reporters. Probably this 
> metrics source should be a process singleton. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17938) General fault - tolerance framework for backup/restore operations

2017-04-25 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-17938:
--
Attachment: HBASE-17938-v1.patch

patch v1. cc: [~tedyu]

> General fault - tolerance framework for backup/restore operations
> -
>
> Key: HBASE-17938
> URL: https://issues.apache.org/jira/browse/HBASE-17938
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-17938-v1.patch
>
>
> The framework must take care of all general types of failures during 
> backup/restore and restore the system to its original state in case of a failure.
> That won't solve all the possible issues, but we have separate JIRAs for 
> them as sub-tasks of HBASE-15277.
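
Since the description above is terse, here is a rough, self-contained sketch of 
the general shape such a framework usually takes (checkpoint, run, roll back on 
failure). The interfaces and names below are purely illustrative and are not 
the API of this patch.

{code}
import java.io.IOException;

final class FaultTolerantRunner {
  interface Checkpoint {
    void restore() throws IOException;   // bring the system back to its original state
    void discard() throws IOException;   // drop the checkpoint once it is no longer needed
  }
  interface Operation {
    void execute() throws IOException;   // the backup or restore work itself
  }

  // Conceptual shape only: checkpoint first (e.g. snapshot the backup system
  // table), run the operation, and roll back to the checkpoint on any failure.
  static void runWithRollback(Checkpoint checkpoint, Operation op) throws IOException {
    try {
      op.execute();
      checkpoint.discard();
    } catch (IOException e) {
      checkpoint.restore();
      throw e;
    }
  }
}
{code}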



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17938) General fault - tolerance framework for backup/restore operations

2017-04-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983725#comment-15983725
 ] 

Ted Yu commented on HBASE-17938:


{code}
+snapshotBackupSystemTable();
{code}
The backup table is not in the hbase namespace anymore, right? Rename the above 
method and the other new methods accordingly.
{code}
-LOG.debug("when deleting snapshot " + snapshotName, ioe);
+LOG.error("when deleting snapshot " + snapshotName, ioe);
{code}
What's the action for the above error?



> General fault - tolerance framework for backup/restore operations
> -
>
> Key: HBASE-17938
> URL: https://issues.apache.org/jira/browse/HBASE-17938
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-17938-v1.patch
>
>
> The framework must take care of all general types of failures during 
> backup/restore and restore the system to its original state in case of a failure.
> That won't solve all the possible issues, but we have separate JIRAs for 
> them as sub-tasks of HBASE-15277.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17960) IntegrationTestReplication fails in successive runs due to lack of appropriate cleanup

2017-04-25 Thread Ashu Pachauri (JIRA)
Ashu Pachauri created HBASE-17960:
-

 Summary: IntegrationTestReplication fails in successive runs due 
to lack of appropriate cleanup
 Key: HBASE-17960
 URL: https://issues.apache.org/jira/browse/HBASE-17960
 Project: HBase
  Issue Type: Bug
  Components: integration tests
Reporter: Ashu Pachauri


The way ITR works right now is that it adds a peer named 'TestPeer' for the 
replication destination cluster. The name of the peer is the same across runs.

Also, it removes the peer at the beginning of each run. However, it does not 
wait for the queues corresponding to the peer to get cleaned up (which is an 
asynchronous operation and can take tens of seconds). This causes the next run 
to fail, and so on.

The test setup should wait for a non-trivial amount of time for the queues 
corresponding to the peer to be cleaned up.
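
A minimal, self-contained sketch of the bounded wait being suggested. The 
condition itself (for example "the peer's replication queue znodes are gone") 
would be supplied by the test; the one-second poll interval is illustrative.

{code}
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

final class ReplicationTestUtil {
  // Wait, with a deadline, for an asynchronous condition such as the peer's
  // replication queues having been cleaned up.
  static void waitFor(BooleanSupplier condition, long timeoutMs) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!condition.getAsBoolean()) {
      if (System.currentTimeMillis() > deadline) {
        throw new IllegalStateException("Condition not met within " + timeoutMs + " ms");
      }
      Thread.sleep(TimeUnit.SECONDS.toMillis(1));
    }
  }
}
{code}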





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17961) Allow passing cluster specific configs to the integration test replication

2017-04-25 Thread Ashu Pachauri (JIRA)
Ashu Pachauri created HBASE-17961:
-

 Summary: Allow passing cluster specific configs to the integration 
test replication
 Key: HBASE-17961
 URL: https://issues.apache.org/jira/browse/HBASE-17961
 Project: HBase
  Issue Type: Improvement
Reporter: Ashu Pachauri


Currently, ITR takes only the cluster key as input, which is insufficient to 
set it up for certain scenarios. 
One common scenario is when one of the clusters has Kerberos security enabled, 
in which case the Kerberos credentials needed to authenticate on the source 
differ from those on the destination. 
We should be able to pass cluster-specific configs to ITR.
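
One possible shape for this, sketched under the assumption of a simple 
per-cluster key-suffix convention. The suffix scheme is an assumption for 
illustration, not an actual ITR option.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

final class ClusterConfs {
  // Build a per-cluster Configuration by applying "<key>.<suffix>" overrides on
  // top of the shared conf, e.g. Kerberos principals that differ between the
  // source and the destination cluster.
  static Configuration forCluster(Configuration shared, String suffix) {
    Configuration c = HBaseConfiguration.create(shared);
    for (java.util.Map.Entry<String, String> e : shared) {
      String v = shared.get(e.getKey() + "." + suffix);
      if (v != null) {
        c.set(e.getKey(), v);
      }
    }
    return c;
  }
}
{code}

A test could then build, say, forCluster(conf, "source") and 
forCluster(conf, "dest"), each carrying its own security settings.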



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17938) General fault - tolerance framework for backup/restore operations

2017-04-25 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-17938:
--
Attachment: HBASE-17938-v2.patch

v2

> General fault - tolerance framework for backup/restore operations
> -
>
> Key: HBASE-17938
> URL: https://issues.apache.org/jira/browse/HBASE-17938
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-17938-v1.patch, HBASE-17938-v2.patch
>
>
> The framework must take care of all general types of failures during 
> backup/restore and restore the system to its original state in case of a failure.
> That won't solve all the possible issues, but we have separate JIRAs for 
> them as sub-tasks of HBASE-15277.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17448) Export metrics from RecoverableZooKeeper

2017-04-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983890#comment-15983890
 ] 

Hadoop QA commented on HBASE-17448:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 0m 43s 
{color} | {color:red} Docker failed to build yetus/hbase:e01ee2f. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12865012/HBASE-17448-branch-1.patch
 |
| JIRA Issue | HBASE-17448 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6563/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Export metrics from RecoverableZooKeeper
> 
>
> Key: HBASE-17448
> URL: https://issues.apache.org/jira/browse/HBASE-17448
> Project: HBase
>  Issue Type: Improvement
>  Components: Zookeeper
>Affects Versions: 1.3.1
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>  Labels: patch
> Fix For: 2.0.0
>
> Attachments: HBASE-17448-branch-1.patch, HBASE-17448.patch, 
> HBASE-17448.patch
>
>
> Consider adding instrumentation to RecoverableZooKeeper that exposes metrics 
> on the performance and health of the embedded ZooKeeper client: latency 
> histograms for each op type, number of reconnections, number of ops where a 
> reconnection was necessary to proceed, number of failed ops due to 
> CONNECTIONLOSS, number of failed ops due to SESSIONEXPIRED, number of failed 
> ops due to OPERATIONTIMEOUT. 
> RecoverableZooKeeper is a class in hbase-client so we can hook up the new 
> metrics to both client- and server-side metrics reporters. Probably this 
> metrics source should be a process singleton. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17958) Avoid passing unexpected cell to ScanQueryMatcher when optimize SEEK to SKIP

2017-04-25 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983917#comment-15983917
 ] 

Duo Zhang commented on HBASE-17958:
---

The idea is good, and converting seek to skip is a common optimization.

The 'bad practice' here is the way we implement it. If the MatchCode is 
SEEK_NEXT_ROW or SEEK_NEXT_COL, then no matter how you do the seek, whether by a 
real seek or a sequence of skips, you should stop passing cells from the same 
row or same column to the SQM; otherwise you should change the MatchCode to 
something like TRY_SEEK_NEXT_ROW and TRY_SEEK_NEXT_COL. It is really confusing.

That's my point.

Thanks.

> Avoid passing unexpected cell to ScanQueryMatcher when optimize SEEK to SKIP
> 
>
> Key: HBASE-17958
> URL: https://issues.apache.org/jira/browse/HBASE-17958
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>
> {code}
> ScanQueryMatcher.MatchCode qcode = matcher.match(cell);
> qcode = optimize(qcode, cell);
> {code}
> The optimize method may change the MatchCode from SEEK_NEXT_COL/SEEK_NEXT_ROW 
> to SKIP, but it still passes the next cell to ScanQueryMatcher. This produces 
> a wrong result with some filters, e.g. ColumnCountGetFilter, which simply 
> counts the number of columns: if the same column is passed to this filter 
> again, the count will be wrong. So we should avoid passing the cell to 
> ScanQueryMatcher when we optimize SEEK to SKIP.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17956) Raw scan should ignore TTL

2017-04-25 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17956:
--
Attachment: (was: HBASE-17956.patch)

> Raw scan should ignore TTL
> --
>
> Key: HBASE-17956
> URL: https://issues.apache.org/jira/browse/HBASE-17956
> Project: HBase
>  Issue Type: Improvement
>  Components: scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0, 1.4.0
>
>
> For now we also check the TTL to see whether a cell is expired, even for a raw 
> scan. Since we even return delete markers for a raw scan, I think it is also 
> reasonable to eliminate the TTL check there.
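
For context, this is what a raw scan looks like from the client side (standard 
client API; the comments restate the issue description above).

{code}
import org.apache.hadoop.hbase.client.Scan;

public class RawScanExample {
  public static void main(String[] args) {
    // A raw scan returns delete markers and all cell versions, which is why
    // also dropping TTL-expired cells is seen as inconsistent with the rest of
    // raw-scan semantics.
    Scan scan = new Scan();
    scan.setRaw(true);        // include delete markers
    scan.setMaxVersions();    // include every version
    System.out.println("isRaw=" + scan.isRaw());
  }
}
{code}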



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17956) Raw scan should ignore TTL

2017-04-25 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17956:
--
Attachment: HBASE-17956.patch

Seems the QA bot is back. Let's try again.

> Raw scan should ignore TTL
> --
>
> Key: HBASE-17956
> URL: https://issues.apache.org/jira/browse/HBASE-17956
> Project: HBase
>  Issue Type: Improvement
>  Components: scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17956.patch
>
>
> For now we also check the TTL to see whether a cell is expired, even for a raw 
> scan. Since we even return delete markers for a raw scan, I think it is also 
> reasonable to eliminate the TTL check there.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17263) Netty based rpc server impl

2017-04-25 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17263:
--
Attachment: HBASE-17263_v8.patch

Retry.

>   Netty based rpc server impl
> -
>
> Key: HBASE-17263
> URL: https://issues.apache.org/jira/browse/HBASE-17263
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance, rpc
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0
>
> Attachments: HBASE-17263.patch, HBASE-17263_v2.patch, 
> HBASE-17263_v3.patch, HBASE-17263_v4.patch, HBASE-17263_v5.patch, 
> HBASE-17263_v6.patch, HBASE-17263_v7.patch, HBASE-17263_v8.patch, 
> HBASE-17263_v8.patch, HBASE-17263_v8.patch
>
>
> An RPC server with a Netty4 implementation, which provides better performance.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16942) Add FavoredStochasticLoadBalancer and FN Candidate generators

2017-04-25 Thread Thiruvel Thirumoolan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thiruvel Thirumoolan updated HBASE-16942:
-
Attachment: HBASE-16942.master.012.patch

> Add FavoredStochasticLoadBalancer and FN Candidate generators
> -
>
> Key: HBASE-16942
> URL: https://issues.apache.org/jira/browse/HBASE-16942
> Project: HBase
>  Issue Type: Sub-task
>  Components: FavoredNodes
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
> Fix For: 2.0.0
>
> Attachments: HBASE-16942.master.001.patch, 
> HBASE-16942.master.002.patch, HBASE-16942.master.003.patch, 
> HBASE-16942.master.004.patch, HBASE-16942.master.005.patch, 
> HBASE-16942.master.006.patch, HBASE-16942.master.007.patch, 
> HBASE-16942.master.008.patch, HBASE-16942.master.009.patch, 
> HBASE-16942.master.010.patch, HBASE-16942.master.011.patch, 
> HBASE-16942.master.012.patch, HBASE_16942_rough_draft.patch
>
>
> This deals with the balancer-based enhancements to the favored nodes patch, 
> as discussed in HBASE-15532.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17263) Netty based rpc server impl

2017-04-25 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983965#comment-15983965
 ] 

Duo Zhang commented on HBASE-17263:
---

[~stack] For me, I'd like to do some refactoring to split the big NettyRpcServer 
into several smaller pieces. And we'd better have our own way to share the 
event loop group, instead of just using the GLOBAL_EVENT_EXECUTOR in netty.

And also ITBLL?

>   Netty based rpc server impl
> -
>
> Key: HBASE-17263
> URL: https://issues.apache.org/jira/browse/HBASE-17263
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance, rpc
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0
>
> Attachments: HBASE-17263.patch, HBASE-17263_v2.patch, 
> HBASE-17263_v3.patch, HBASE-17263_v4.patch, HBASE-17263_v5.patch, 
> HBASE-17263_v6.patch, HBASE-17263_v7.patch, HBASE-17263_v8.patch, 
> HBASE-17263_v8.patch, HBASE-17263_v8.patch
>
>
> An RPC server with a Netty4 implementation, which provides better performance.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17958) Avoid passing unexpected cell to ScanQueryMatcher when optimize SEEK to SKIP

2017-04-25 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983966#comment-15983966
 ] 

Lars Hofhansl commented on HBASE-17958:
---

The problem is the order in which the filters and column trackers are executed.
The column trackers do their own comparisons and do the right thing if a SKIP 
bubbles up (checked that carefully) - unless a filter interferes.

On the other hand there _is no_ right order between the trackers and the 
filters. Sometimes you want one order, sometimes the other.

If in the SKIP case we call next until we hit the next row or column, we're 
doing the trackers' job.

Hmm... I guess there has to be some acceptance that filters are a fairly low 
level concept and expose you to some of the HBase internals.

Counting filters especially are problematic, since rows may have different 
numbers of columns and columns have varying numbers of versions. 

> Avoid passing unexpected cell to ScanQueryMatcher when optimize SEEK to SKIP
> 
>
> Key: HBASE-17958
> URL: https://issues.apache.org/jira/browse/HBASE-17958
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>
> {code}
> ScanQueryMatcher.MatchCode qcode = matcher.match(cell);
> qcode = optimize(qcode, cell);
> {code}
> The optimize method may change the MatchCode from SEEK_NEXT_COL/SEEK_NEXT_ROW 
> to SKIP, but it still passes the next cell to ScanQueryMatcher. This produces 
> a wrong result with some filters, e.g. ColumnCountGetFilter, which simply 
> counts the number of columns: if the same column is passed to this filter 
> again, the count will be wrong. So we should avoid passing the cell to 
> ScanQueryMatcher when we optimize SEEK to SKIP.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17263) Netty based rpc server impl

2017-04-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983978#comment-15983978
 ] 

Ted Yu commented on HBASE-17263:


Does patch v8 work on a secure cluster?

Thanks

>   Netty based rpc server impl
> -
>
> Key: HBASE-17263
> URL: https://issues.apache.org/jira/browse/HBASE-17263
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance, rpc
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0
>
> Attachments: HBASE-17263.patch, HBASE-17263_v2.patch, 
> HBASE-17263_v3.patch, HBASE-17263_v4.patch, HBASE-17263_v5.patch, 
> HBASE-17263_v6.patch, HBASE-17263_v7.patch, HBASE-17263_v8.patch, 
> HBASE-17263_v8.patch, HBASE-17263_v8.patch
>
>
> An RPC server with a Netty4 implementation, which provides better performance.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17958) Avoid passing unexpected cell to ScanQueryMatcher when optimize SEEK to SKIP

2017-04-25 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983994#comment-15983994
 ] 

Duo Zhang commented on HBASE-17958:
---

I do not think so.

Can you imagine calling seek on an InputStream where the InputStream may do 
nothing and still return the next byte to you, so that you have to guess 
whether it really did the seek? That's not natural. If you really want to do 
this, please do not call it seek.

And I do not get your point on why this is the job of the tracker. It is also 
strange that a ColumnTracker tells you "I have enough versions for this column, 
so please seek to the next column" or "I am done with this row, please seek to 
the next row", but you still pass it cells from the same column or same row and 
let it do the skip job. I can imagine that this question will be asked again 
and again by different people when they try to change the code in this area...

> Avoid passing unexpected cell to ScanQueryMatcher when optimize SEEK to SKIP
> 
>
> Key: HBASE-17958
> URL: https://issues.apache.org/jira/browse/HBASE-17958
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>
> {code}
> ScanQueryMatcher.MatchCode qcode = matcher.match(cell);
> qcode = optimize(qcode, cell);
> {code}
> The optimize method may change the MatchCode from SEEK_NEXT_COL/SEEK_NEXT_ROW 
> to SKIP, but it still passes the next cell to ScanQueryMatcher. This produces 
> a wrong result with some filters, e.g. ColumnCountGetFilter, which simply 
> counts the number of columns: if the same column is passed to this filter 
> again, the count will be wrong. So we should avoid passing the cell to 
> ScanQueryMatcher when we optimize SEEK to SKIP.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17958) Avoid passing unexpected cell to ScanQueryMatcher when optimize SEEK to SKIP

2017-04-25 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-17958:
---
Attachment: 0001-add-one-ut-testWithColumnCountGetFilter.patch

> Avoid passing unexpected cell to ScanQueryMatcher when optimize SEEK to SKIP
> 
>
> Key: HBASE-17958
> URL: https://issues.apache.org/jira/browse/HBASE-17958
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: 0001-add-one-ut-testWithColumnCountGetFilter.patch
>
>
> {code}
> ScanQueryMatcher.MatchCode qcode = matcher.match(cell);
> qcode = optimize(qcode, cell);
> {code}
> The optimize method may change the MatchCode from SEEK_NEXT_COL/SEEK_NEXT_ROW 
> to SKIP, but it still passes the next cell to ScanQueryMatcher. This produces 
> a wrong result with some filters, e.g. ColumnCountGetFilter, which simply 
> counts the number of columns: if the same column is passed to this filter 
> again, the count will be wrong. So we should avoid passing the cell to 
> ScanQueryMatcher when we optimize SEEK to SKIP.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17958) Avoid passing unexpected cell to ScanQueryMatcher when optimize SEEK to SKIP

2017-04-25 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984003#comment-15984003
 ] 

Guanghao Zhang commented on HBASE-17958:


Added a unit test, testWithColumnCountGetFilter, to show this problem. When the 
scanner doesn't optimize SEEK to SKIP, the result is right. But when the scanner 
optimizes SEEK to SKIP, the result is wrong.

> Avoid passing unexpected cell to ScanQueryMatcher when optimize SEEK to SKIP
> 
>
> Key: HBASE-17958
> URL: https://issues.apache.org/jira/browse/HBASE-17958
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: 0001-add-one-ut-testWithColumnCountGetFilter.patch
>
>
> {code}
> ScanQueryMatcher.MatchCode qcode = matcher.match(cell);
> qcode = optimize(qcode, cell);
> {code}
> The optimize method may change the MatchCode from SEEK_NEXT_COL/SEEK_NEXT_ROW 
> to SKIP, but it still passes the next cell to ScanQueryMatcher. This produces 
> a wrong result with some filters, e.g. ColumnCountGetFilter, which simply 
> counts the number of columns: if the same column is passed to this filter 
> again, the count will be wrong. So we should avoid passing the cell to 
> ScanQueryMatcher when we optimize SEEK to SKIP.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HBASE-17959) Canary timeout should be configurable on a per-table basis

2017-04-25 Thread Chinmay Kulkarni (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni reassigned HBASE-17959:


Assignee: Chinmay Kulkarni

> Canary timeout should be configurable on a per-table basis
> --
>
> Key: HBASE-17959
> URL: https://issues.apache.org/jira/browse/HBASE-17959
> Project: HBase
>  Issue Type: Improvement
>  Components: canary
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Minor
>
> The Canary read and write timeouts should be configurable on a per-table 
> basis, for cases where different tables have different latency SLAs. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17958) Avoid passing unexpected cell to ScanQueryMatcher when optimize SEEK to SKIP

2017-04-25 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984014#comment-15984014
 ] 

Guanghao Zhang commented on HBASE-17958:


Code in ColumnCountGetFilter:
{code}
  public ReturnCode filterKeyValue(Cell v) {
    this.count++;
    return filterAllRemaining() ? ReturnCode.NEXT_COL : ReturnCode.INCLUDE_AND_NEXT_COL;
  }
{code}
When it returns ReturnCode.NEXT_COL or ReturnCode.INCLUDE_AND_NEXT_COL, it means 
the filter expects the next cell to have a different column. But the StoreScanner 
optimized SEEK to SKIP and still passes the same column to the filter, so the 
filter's result will be wrong.
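
To make the over-counting concrete, here is a small stand-alone demo that 
drives the filter directly with two versions of the same column, the way a 
SKIP-optimized scanner would. It uses the pre-2.0 filterKeyValue API shown 
above and runs the filter outside any scanner, purely to show the counting 
behaviour.

{code}
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.filter.ColumnCountGetFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class ColumnCountSkipDemo {
  public static void main(String[] args) throws Exception {
    ColumnCountGetFilter filter = new ColumnCountGetFilter(2); // at most 2 columns
    byte[] row = Bytes.toBytes("r1");
    byte[] cf = Bytes.toBytes("f");
    Cell c1v2 = new KeyValue(row, cf, Bytes.toBytes("c1"), 2L, Bytes.toBytes("x"));
    Cell c1v1 = new KeyValue(row, cf, Bytes.toBytes("c1"), 1L, Bytes.toBytes("y"));
    Cell c2   = new KeyValue(row, cf, Bytes.toBytes("c2"), 2L, Bytes.toBytes("z"));

    // The counter advances per cell, not per distinct column:
    System.out.println(filter.filterKeyValue(c1v2)); // INCLUDE_AND_NEXT_COL (count = 1)
    System.out.println(filter.filterKeyValue(c1v1)); // same column again, yet count = 2
    System.out.println(filter.filterKeyValue(c2));   // NEXT_COL: a genuinely new column is dropped
    System.out.println(filter.filterAllRemaining()); // true, though only one distinct column was kept
  }
}
{code}

Driven by a scanner that honours NEXT_COL by actually seeking, the filter is 
called once per column and the count stays correct.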



> Avoid passing unexpected cell to ScanQueryMatcher when optimize SEEK to SKIP
> 
>
> Key: HBASE-17958
> URL: https://issues.apache.org/jira/browse/HBASE-17958
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: 0001-add-one-ut-testWithColumnCountGetFilter.patch
>
>
> {code}
> ScanQueryMatcher.MatchCode qcode = matcher.match(cell);
> qcode = optimize(qcode, cell);
> {code}
> The optimize method may change the MatchCode from SEEK_NEXT_COL/SEEK_NEXT_ROW 
> to SKIP, but it still passes the next cell to ScanQueryMatcher. This produces 
> a wrong result with some filters, e.g. ColumnCountGetFilter, which simply 
> counts the number of columns: if the same column is passed to this filter 
> again, the count will be wrong. So we should avoid passing the cell to 
> ScanQueryMatcher when we optimize SEEK to SKIP.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17951) Log the region's information in HRegion#checkResources

2017-04-25 Thread Guangxu Cheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guangxu Cheng updated HBASE-17951:
--
Attachment: HBASE-17951-master-v2.patch

> Log the region's information in HRegion#checkResources
> --
>
> Key: HBASE-17951
> URL: https://issues.apache.org/jira/browse/HBASE-17951
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.3.0, 1.2.5
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
> Attachments: HBASE-17951-master-v1.patch, HBASE-17951-master-v2.patch
>
>
> We throw RegionTooBusyException in HRegion#checkResources if we are above the 
> memstore limit. On the server side, we just increment the value of 
> blockedRequestsCount and do not print the region's information.
> I think we should also print the region's information on the server side, so 
> that we can easily find the busy regions.
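
For illustration, a server-side fragment along the lines being proposed. This 
is only a sketch written against HRegion-style fields and is not the attached 
patch.

{code}
// Sketch only -- not the actual HBASE-17951 patch.
if (this.memstoreSize.get() > this.blockingMemStoreSize) {
  blockedRequestsCount.increment();
  requestFlush();
  final String msg = "Region " + getRegionInfo().getRegionNameAsString()
      + " is above memstore limit: memstoreSize=" + memstoreSize.get()
      + ", blockingMemStoreSize=" + blockingMemStoreSize;
  LOG.warn(msg);                          // now also visible in the region server log
  throw new RegionTooBusyException(msg);  // client-visible behaviour unchanged
}
{code}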



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17917) Use pread by default for all user scan

2017-04-25 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984083#comment-15984083
 ] 

Duo Zhang commented on HBASE-17917:
---

I misunderstood the meaning of the checkReseek method; it will not reopen the old 
StoreFileScanners... So I need to implement my own method.

And I did find some problems with the implementation of checkReseek. Let me open 
new issues to track them.

Thanks.

> Use pread by default for all user scan
> --
>
> Key: HBASE-17917
> URL: https://issues.apache.org/jira/browse/HBASE-17917
> Project: HBase
>  Issue Type: Sub-task
>  Components: scan
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17917.patch
>
>
> As said in the parent issue, we need some benchmarks here first.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16942) Add FavoredStochasticLoadBalancer and FN Candidate generators

2017-04-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984100#comment-15984100
 ] 

Hadoop QA commented on HBASE-16942:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
32s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
51s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
10s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
28m 53s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 109m 35s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 152m 46s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.snapshot.TestExportSnapshot |
|   | hadoop.hbase.snapshot.TestMobExportSnapshot |
|   | hadoop.hbase.master.TestMasterOperationsForRegionReplicas |
|   | hadoop.hbase.snapshot.TestMobSecureExportSnapshot |
|   | hadoop.hbase.snapshot.TestSecureExportSnapshot |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12864998/HBASE-16942.master.011.patch
 |
| JIRA Issue | HBASE-16942 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux f90d59a5ab02 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 2557506 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6561/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/6561/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6561/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6561/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |

[jira] [Assigned] (HBASE-15199) Move jruby jar so only on hbase-shell module classpath; currently globally available

2017-04-25 Thread Xiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li reassigned HBASE-15199:


Assignee: Xiang Li

> Move jruby jar so only on hbase-shell module classpath; currently globally 
> available
> 
>
> Key: HBASE-15199
> URL: https://issues.apache.org/jira/browse/HBASE-15199
> Project: HBase
>  Issue Type: Task
>  Components: dependencies, jruby, shell
>Reporter: stack
>Assignee: Xiang Li
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 15199.txt
>
>
> A suggestion that came up out of an internal issue (filed by Mr Jan Van Besien) 
> was to move the scope of the jruby include down so it is only a dependency 
> for hbase-shell. The jruby jar brings in a bunch of dependencies (joda-time, 
> for example) which can clash with the includes of others. Our Sean suggests 
> that it could be good to shut down exploit possibilities if jruby were not 
> globally available. The only downside I can think of is that it may no longer 
> be available to our bin/*rb scripts if we move the jar, but perhaps these can 
> be changed so they can find the jruby jar in its new location.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17343) Make Compacting Memstore default in 2.0 with BASIC as the default type

2017-04-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984110#comment-15984110
 ] 

Hadoop QA commented on HBASE-17343:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 43s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
46s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
51s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
31m 27s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 118m 3s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 164m 24s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.snapshot.TestExportSnapshot |
|   | hadoop.hbase.snapshot.TestMobSecureExportSnapshot |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12864963/HBASE-17343-V04.patch 
|
| JIRA Issue | HBASE-17343 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 71b2a85b8a46 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 2557506 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6562/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/6562/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6562/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6562/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |

[jira] [Commented] (HBASE-17951) Log the region's information in HRegion#checkResources

2017-04-25 Thread Guangxu Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984127#comment-15984127
 ] 

Guangxu Cheng commented on HBASE-17951:
---

v2 takes the suggestion from [~anoop.hbase] 

> Log the region's information in HRegion#checkResources
> --
>
> Key: HBASE-17951
> URL: https://issues.apache.org/jira/browse/HBASE-17951
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.3.0, 1.2.5
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
> Attachments: HBASE-17951-master-v1.patch, HBASE-17951-master-v2.patch
>
>
> We throw RegionTooBusyException in HRegion#checkResources if we are above the 
> memstore limit. On the server side, we just increment the value of 
> blockedRequestsCount and do not print the region's information.
> I think we should also print the region's information on the server side, so 
> that we can easily find the busy regions.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17928) Shell tool to clear compact queues

2017-04-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984141#comment-15984141
 ] 

Hadoop QA commented on HBASE-17928:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 35s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} rubocop {color} | {color:blue} 0m 0s 
{color} | {color:blue} rubocop was not available. {color} |
| {color:blue}0{color} | {color:blue} ruby-lint {color} | {color:blue} 0m 0s 
{color} | {color:blue} Ruby-lint was not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 35s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
6s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 8s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 26m 
35s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
25s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 4m 46s 
{color} | {color:red} hbase-protocol-shaded in master has 24 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 59s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
1s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 3m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 27m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
21s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
62m 3s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 2m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 13m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 5s 
{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 28s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 114m 38s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 47s 
{color} | {color:green} hbase-shell in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 2m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 300m 18s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestRegionReplicaFailover |
|   | hadoop.hbase.regionserver.TestPerColumnFamilyFlush |
| 

[jira] [Commented] (HBASE-17928) Shell tool to clear compact queues

2017-04-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984151#comment-15984151
 ] 

Ted Yu commented on HBASE-17928:


Guangxu:
See if the test failures were related or not.

> Shell tool to clear compact queues
> --
>
> Key: HBASE-17928
> URL: https://issues.apache.org/jira/browse/HBASE-17928
> Project: HBase
>  Issue Type: New Feature
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
> Fix For: 2.0.0
>
> Attachments: HBASE-17928-v1.patch, HBASE-17928-v2.patch, 
> HBASE-17928-v3.patch, HBASE-17928-v4.patch
>
>
> Scenario:
> 1. A table is compacted by mistake
> 2. The compaction does not complete within the specified time period
> In this case, clearing the queue is a better choice, so as not to affect the 
> stability of the cluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16942) Add FavoredStochasticLoadBalancer and FN Candidate generators

2017-04-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984158#comment-15984158
 ] 

Hadoop QA commented on HBASE-16942:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
7s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
42s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
43s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 50s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 104m 53s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 143m 2s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.snapshot.TestSecureExportSnapshot |
|   | hadoop.hbase.snapshot.TestMobExportSnapshot |
|   | hadoop.hbase.snapshot.TestExportSnapshot |
|   | hadoop.hbase.snapshot.TestMobSecureExportSnapshot |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12865048/HBASE-16942.master.012.patch
 |
| JIRA Issue | HBASE-16942 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux a62cc61e08c1 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 2557506 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6566/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/6566/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6566/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6566/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

[jira] [Commented] (HBASE-17928) Shell tool to clear compact queues

2017-04-25 Thread Guangxu Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984162#comment-15984162
 ] 

Guangxu Cheng commented on HBASE-17928:
---

[~yuzhih...@gmail.com] All tests pass locally. The failed tests are unrelated 
to this JIRA.

> Shell tool to clear compact queues
> --
>
> Key: HBASE-17928
> URL: https://issues.apache.org/jira/browse/HBASE-17928
> Project: HBase
>  Issue Type: New Feature
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
> Fix For: 2.0.0
>
> Attachments: HBASE-17928-v1.patch, HBASE-17928-v2.patch, 
> HBASE-17928-v3.patch, HBASE-17928-v4.patch
>
>
> Scenario:
> 1. A table is compacted by mistake
> 2. The compaction does not complete within the specified time period
> In this case, clearing the compaction queue is the better choice, so as not 
> to affect the stability of the cluster
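
For readers following the discussion, here is a minimal usage sketch of the 
Java Admin side of this feature. It is not part of the patch: it assumes the 
method introduced by this issue is Admin#clearCompactionQueues(ServerName, 
Set<String>) and that the accepted queue names are "long" and "short"; check 
the committed patch for the exact signature.

{code}
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ClearCompactionQueuesExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Hypothetical region server name in "host,port,startcode" form; in
      // practice use a server name reported by the running cluster.
      ServerName server = ServerName.valueOf("rs-host.example.com,16020,1493100000000");
      // Drop whatever is still pending in both compaction queues of that server.
      Set<String> queues = new HashSet<>();
      queues.add("long");
      queues.add("short");
      admin.clearCompactionQueues(server, queues);
    }
  }
}
{code}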



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17950) Write the chunkId also as Int instead of long into the first byte of the chunk

2017-04-25 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-17950:
---
Status: Open  (was: Patch Available)

> Write the chunkId also as Int instead of long into the first byte of the chunk
> --
>
> Key: HBASE-17950
> URL: https://issues.apache.org/jira/browse/HBASE-17950
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-17950_1.patch, HBASE-17950_1.patch, 
> HBASE-17950.patch
>
>
> I think some issue happened while updating the patch. The chunkId was 
> converted to an int everywhere, but it was still being written as a long. 
> Will push a small patch to change it.
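
To make the mismatch concrete, here is a minimal stand-alone sketch (not the 
actual chunk-creation code) of why writing the id as a long breaks a reader 
that expects an int at offset 0 of the chunk:

{code}
import java.nio.ByteBuffer;

public class ChunkIdWidthExample {
  public static void main(String[] args) {
    int chunkId = 42;

    // Buggy variant: the id is an int, but 8 bytes are written.
    ByteBuffer buggyChunk = ByteBuffer.allocate(16);
    buggyChunk.putLong(0, chunkId);
    // A reader expecting an int at offset 0 sees the high 4 bytes: 0, not 42.
    System.out.println("read as int after putLong: " + buggyChunk.getInt(0));

    // Fixed variant: write the id with the same width it is read with.
    ByteBuffer fixedChunk = ByteBuffer.allocate(16);
    fixedChunk.putInt(0, chunkId);
    System.out.println("read as int after putInt: " + fixedChunk.getInt(0)); // 42
  }
}
{code}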



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17950) Write the chunkId also as Int instead of long into the first byte of the chunk

2017-04-25 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-17950:
---
Attachment: HBASE-17950_1.patch

Retrying QA as I think it is back again.

> Write the chunkId also as Int instead of long into the first byte of the chunk
> --
>
> Key: HBASE-17950
> URL: https://issues.apache.org/jira/browse/HBASE-17950
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-17950_1.patch, HBASE-17950_1.patch, 
> HBASE-17950_1.patch, HBASE-17950.patch
>
>
> I think some issue happened while updating the patch. The chunkId was 
> converted to an int everywhere, but it was still being written as a long. 
> Will push a small patch to change it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17950) Write the chunkId also as Int instead of long into the first byte of the chunk

2017-04-25 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-17950:
---
Status: Patch Available  (was: Open)

> Write the chunkId also as Int instead of long into the first byte of the chunk
> --
>
> Key: HBASE-17950
> URL: https://issues.apache.org/jira/browse/HBASE-17950
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-17950_1.patch, HBASE-17950_1.patch, 
> HBASE-17950_1.patch, HBASE-17950.patch
>
>
> I think some issue happened while updating the patch. The chunkId was 
> converted to an int everywhere, but it was still being written as a long. 
> Will push a small patch to change it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17955) Commit Reviewboard comments from Vlad

2017-04-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984177#comment-15984177
 ] 

Hadoop QA commented on HBASE-17955:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 14s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
31s {color} | {color:green} HBASE-16961 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 8s 
{color} | {color:green} HBASE-16961 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 16m 
32s {color} | {color:green} HBASE-16961 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
24s {color} | {color:green} HBASE-16961 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 5m 25s 
{color} | {color:red} hbase-protocol-shaded in HBASE-16961 has 24 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 22s 
{color} | {color:green} HBASE-16961 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 16m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
43s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 2m 2s 
{color} | {color:red} The patch causes 16 errors with Hadoop v2.4.0. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 4m 18s 
{color} | {color:red} The patch causes 16 errors with Hadoop v2.4.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 6m 23s 
{color} | {color:red} The patch causes 16 errors with Hadoop v2.5.0. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 8m 27s 
{color} | {color:red} The patch causes 16 errors with Hadoop v2.5.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 10m 30s 
{color} | {color:red} The patch causes 16 errors with Hadoop v2.5.2. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 3m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 18m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 36s 
{color} | {color:green} hbase-protocol in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 2s 
{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 54s {color} 
| {color:red} hbase-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 27s 
{color} | {color:green} hbase-hadoop-compat in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 39s 
{color} | {color:green} hbase-hadoop2-com

[jira] [Commented] (HBASE-15143) Procedure v2 - Web UI displaying queues

2017-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984179#comment-15984179
 ] 

Hudson commented on HBASE-15143:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #2905 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2905/])
HBASE-15143 Procedure v2 - Web UI displaying queues (stack: rev 
25575064154fe1cc7ff8970e8f15a3cff648f37a)
* (edit) 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/ProcedureScheduler.java
* (edit) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/LockServiceProtos.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java
* (edit) hbase-server/src/main/resources/hbase-webapps/master/zk.jsp
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/protobuf/TestProtobufUtil.java
* (edit) 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/ProcedureUtil.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/MockNoopMasterServices.java
* (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestMasterObserver.java
* (edit) hbase-shell/src/main/ruby/shell/formatter.rb
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterServices.java
* (edit) hbase-shell/src/main/ruby/hbase/admin.rb
* (edit) hbase-server/src/main/resources/hbase-webapps/master/table.jsp
* (edit) 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/LockAndQueue.java
* (edit) hbase-protocol-shaded/src/main/protobuf/LockService.proto
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterCoprocessorHost.java
* (edit) hbase-server/src/main/resources/hbase-webapps/master/procedures.jsp
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
* (edit) hbase-server/src/main/resources/hbase-webapps/master/snapshotsStats.jsp
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionImplementation.java
* (edit) hbase-shell/src/main/ruby/shell/commands.rb
* (add) 
hbase-common/src/main/java/org/apache/hadoop/hbase/procedure2/LockInfo.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestMasterProcedureScheduler.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
* (add) hbase-shell/src/main/ruby/shell/commands/list_locks.rb
* (edit) hbase-server/src/main/resources/hbase-webapps/master/snapshot.jsp
* (edit) 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/SimpleProcedureScheduler.java
* (edit) 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/LockStatus.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/locking/LockProcedure.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/MasterProcedureScheduler.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ShortCircuitMasterConnection.java
* (edit) hbase-protocol-shaded/src/main/protobuf/Master.proto
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/shaded/protobuf/TestProtobufUtil.java
* (edit) 
hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/master/MasterStatusTmpl.jamon
* (edit) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/MasterProtos.java
* (add) hbase-shell/src/test/ruby/shell/list_locks_test.rb
* (edit) hbase-shell/src/main/ruby/shell.rb
* (edit) hbase-server/src/main/resources/hbase-webapps/master/tablesDetailed.jsp


> Procedure v2 - Web UI displaying queues
> ---
>
> Key: HBASE-15143
> URL: https://issues.apache.org/jira/browse/HBASE-15143
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, UI
>Reporter: Matteo Bertozzi
>Assignee: Balazs Meszaros
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15143-BM-0001.patch, HBASE-15143-BM-0002.patch, 
> HBASE-15143-BM-0003.patch, HBASE-15143-BM-0004.patch, 
> HBASE-15143-BM-0005.patch, HBASE-15143-BM-0006.patch, 
> HBASE-15143-BM-0006.patch, HBASE-15143-BM-0006.patch, 
> HBASE-15143-BM-0007.patch, HBASE-15143-BM-0008.patch, 
> HBASE-15143-BM-0009.patch, HBASE-15143-BM-0010.patch, 
> HBASE-15143-BM-0011.patch, HBASE-15143-BM-0011.patch, 
> HBASE-15143-BM-0012.patch, screenshot.png
>
>
> We can query MasterProcedureScheduler to display the various procedures and 
> who is holding table/region locks.
> Each procedure is in a TableQueue or ServerQueue, so it is easy to display 
> the procedures in its o

[jira] [Commented] (HBASE-17947) Location of Examples.proto is wrong in comment of RowCountEndPoint.java

2017-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984178#comment-15984178
 ] 

Hudson commented on HBASE-17947:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #2905 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2905/])
HBASE-17947 Location of Examples.proto is wrong in comment of (tedyu: rev 
1367519cd0545c2854108cffab03ae7c79b6ef2c)
* (edit) 
hbase-examples/src/main/java/org/apache/hadoop/hbase/coprocessor/example/RowCountEndpoint.java


> Location of Examples.proto is wrong in comment of RowCountEndPoint.java
> ---
>
> Key: HBASE-17947
> URL: https://issues.apache.org/jira/browse/HBASE-17947
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Trivial
>  Labels: easyfix
> Fix For: 2.0.0
>
> Attachments: HBASE-17947.master.000.patch, 
> HBASE-17947.master.001.patch
>
>
> In the comment of RowCountEndpoint.java
> {code}
> /**
>  * Sample coprocessor endpoint exposing a Service interface for counting rows 
> and key values.
>  *
>  * 
>  * For the protocol buffer definition of the RowCountService, see the source 
> file located under
>  * hbase-server/src/main/protobuf/Examples.proto.
>  * 
>  */
> {code}
> Examples.proto is not under hbase-server/src/main/protobuf/, but under 
> hbase-examples/src/main/protobuf/
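
Based on the description, the corrected comment would presumably read as 
follows (a sketch only; the exact wording in the committed patch may differ):

{code}
/**
 * Sample coprocessor endpoint exposing a Service interface for counting rows 
 * and key values.
 *
 * For the protocol buffer definition of the RowCountService, see the source 
 * file located under hbase-examples/src/main/protobuf/Examples.proto.
 */
{code}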



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17956) Raw scan should ignore TTL

2017-04-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984186#comment-15984186
 ] 

Hadoop QA commented on HBASE-17956:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
36s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
52s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
59s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
31m 5s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 130m 10s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 175m 55s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.snapshot.TestExportSnapshot |
|   | hadoop.hbase.snapshot.TestMobExportSnapshot |
|   | hadoop.hbase.snapshot.TestSecureExportSnapshot |
| Timed out junit tests | 
org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancer2 |
|   | org.apache.hadoop.hbase.master.TestSplitLogManager |
|   | org.apache.hadoop.hbase.master.TestAssignmentManagerOnCluster |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12865046/HBASE-17956.patch |
| JIRA Issue | HBASE-17956 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 7866a0fc2ed1 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 2557506 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6564/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/6564/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6564/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Con

[jira] [Commented] (HBASE-17263) Netty based rpc server impl

2017-04-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984188#comment-15984188
 ] 

Hadoop QA commented on HBASE-17263:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
4s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
53s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
30m 36s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 131m 23s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 177m 2s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.snapshot.TestMobSecureExportSnapshot |
|   | hadoop.hbase.snapshot.TestExportSnapshot |
|   | hadoop.hbase.snapshot.TestSecureExportSnapshot |
| Timed out junit tests | 
org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancer2 |
|   | org.apache.hadoop.hbase.master.TestSplitLogManager |
|   | org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancer |
|   | org.apache.hadoop.hbase.master.TestAssignmentManagerOnCluster |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12865047/HBASE-17263_v8.patch |
| JIRA Issue | HBASE-17263 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 3a5bb5cf2cb7 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 2557506 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6565/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/6565/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommi

[jira] [Commented] (HBASE-17928) Shell tool to clear compact queues

2017-04-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984193#comment-15984193
 ] 

Ted Yu commented on HBASE-17928:


Please attach a patch for branch-1.

> Shell tool to clear compact queues
> --
>
> Key: HBASE-17928
> URL: https://issues.apache.org/jira/browse/HBASE-17928
> Project: HBase
>  Issue Type: New Feature
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
> Fix For: 2.0.0
>
> Attachments: HBASE-17928-v1.patch, HBASE-17928-v2.patch, 
> HBASE-17928-v3.patch, HBASE-17928-v4.patch
>
>
> Scenario:
> 1. A table is compacted by mistake
> 2. The compaction does not complete within the specified time period
> In this case, clearing the compaction queue is the better choice, so as not 
> to affect the stability of the cluster



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)