[jira] [Commented] (HBASE-22324) Massive data loss when the sequenceId of cells exceeds Integer.MAX_VALUE, because MemStoreMergerSegmentsIterator cannot merge segments

2019-05-08 Thread chenyang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835372#comment-16835372
 ] 

chenyang commented on HBASE-22324:
--

Submitted 0006.patch to close the store, region, and WAL in TestMemStoreSegmentsIterator.

> Massive data loss when the sequenceId of cells exceeds Integer.MAX_VALUE, 
> because MemStoreMergerSegmentsIterator cannot merge segments 
> --
>
> Key: HBASE-22324
> URL: https://issues.apache.org/jira/browse/HBASE-22324
> Project: HBase
>  Issue Type: Bug
>  Components: in-memory-compaction
>Affects Versions: 2.1.0, 2.2.0
>Reporter: chenyang
>Priority: Blocker
>  Labels: patch
> Fix For: 2.1.0
>
> Attachments: HBASE-22324.branch-2.1.0005.patch, 
> HBASE-22324.branch-2.1.0006.patch
>
>
> If your memstore type is CompactingMemStore, MemStoreMergerSegmentsIterator 
> cannot merge memstore segments when the seqId of cells is greater than 
> Integer.MAX_VALUE; as a result, a mass of data is lost. The reason is that 
> MemStoreMergerSegmentsIterator uses Integer.MAX_VALUE as the readPt when 
> creating a Scanner, but the seqId of a cell is a long and may be greater 
> than Integer.MAX_VALUE. Code as below:
> {code:java}
> public MemStoreMergerSegmentsIterator(List<ImmutableSegment> segments,
>     CellComparator comparator, int compactionKVMax) throws IOException {
>   super(compactionKVMax);
>   // create the list of scanners to traverse over all the data
>   // no dirty reads here as these are immutable segments
>   // bug: the readPt must be Long.MAX_VALUE, not Integer.MAX_VALUE
>   AbstractMemStore.addToScanners(segments, Integer.MAX_VALUE, scanners);
>   heap = new KeyValueHeap(scanners, comparator);
> }
> SegmentScanner.java code as below:
> protected void updateCurrent() {
>   Cell startKV = current;
>   Cell next = null;
>   try {
> while (iter.hasNext()) {
>   next = iter.next();
>   // here, if seqId > readPoint (Integer.MAX_VALUE), the cell is never read;
>   // as a result, lots of cells are lost
>   if (next.getSequenceId() <= this.readPoint) {
> current = next;
> return;// skip irrelevant versions
>   }
>   if (stopSkippingKVsIfNextRow &&   // for backwardSeek() stay in the
>   startKV != null &&// boundaries of a single row
>   segment.compareRows(next, startKV) > 0) {
> current = null;
> return;
>   }
> } // end of while
> current = null; // nothing found
>   } finally {
> if (next != null) {
>   // in all cases, remember the last KV we iterated to, needed for 
> reseek()
>   last = next;
> }
>   }
> }
> MemStoreCompactorSegmentsIterator has the same bug:
> public MemStoreCompactorSegmentsIterator(List<ImmutableSegment> segments,
>     CellComparator comparator, int compactionKVMax, HStore store) throws IOException {
>   super(compactionKVMax);
>   List<KeyValueScanner> scanners = new ArrayList<>();
>   // bug: the readPt must be Long.MAX_VALUE, not Integer.MAX_VALUE
>   AbstractMemStore.addToScanners(segments, Integer.MAX_VALUE, scanners);
>   // build the scanner based on Query Matcher
>   // reinitialize the compacting scanner for each instance of iterator
>   compactingScanner = createScanner(store, scanners);
>   refillKVS();
> }{code}
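To make the failure mode above concrete: any cell whose sequenceId exceeds the scanner's read point is skipped, so once sequence ids pass 2^31 - 1 a readPt of Integer.MAX_VALUE silently drops every newer cell during the merge. A minimal, self-contained sketch (plain Java with a hypothetical Cell stand-in, not the HBase types):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for an HBase Cell: only the sequenceId matters here.
record Cell(long sequenceId) {}

public class ReadPointDemo {
    // Mirrors the filter in SegmentScanner#updateCurrent: a cell is visible
    // only if its sequenceId is at or below the scanner's read point.
    static List<Cell> visibleCells(List<Cell> cells, long readPoint) {
        List<Cell> visible = new ArrayList<>();
        for (Cell c : cells) {
            if (c.sequenceId() <= readPoint) {
                visible.add(c);
            }
        }
        return visible;
    }

    public static void main(String[] args) {
        List<Cell> cells = List.of(new Cell(5L), new Cell((long) Integer.MAX_VALUE + 1));
        // Buggy read point: the high-seqId cell is invisible and silently dropped.
        System.out.println(visibleCells(cells, Integer.MAX_VALUE).size()); // prints 1
        // Fixed read point: every cell survives the merge.
        System.out.println(visibleCells(cells, Long.MAX_VALUE).size());    // prints 2
    }
}
```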



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Issue Comment Deleted] (HBASE-22324) Massive data loss when the sequenceId of cells exceeds Integer.MAX_VALUE, because MemStoreMergerSegmentsIterator cannot merge segments

2019-05-08 Thread chenyang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenyang updated HBASE-22324:
-
Comment: was deleted

(was: submit 0006.patch to close store, region and wal in 
TestMemStoreSegmentsIterator)

> Massive data loss when the sequenceId of cells exceeds Integer.MAX_VALUE, 
> because MemStoreMergerSegmentsIterator cannot merge segments 
> --
>
> Key: HBASE-22324
> URL: https://issues.apache.org/jira/browse/HBASE-22324
> Project: HBase
>  Issue Type: Bug
>  Components: in-memory-compaction
>Affects Versions: 2.1.0, 2.2.0
>Reporter: chenyang
>Priority: Blocker
>  Labels: patch
> Fix For: 2.1.0
>
> Attachments: HBASE-22324.branch-2.1.0005.patch, 
> HBASE-22324.branch-2.1.0006.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22324) Massive data loss when the sequenceId of cells exceeds Integer.MAX_VALUE, because MemStoreMergerSegmentsIterator cannot merge segments

2019-05-08 Thread chenyang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835374#comment-16835374
 ] 

chenyang commented on HBASE-22324:
--

Submitted 0006.patch to close the store, region, and WAL in TestMemStoreSegmentsIterator.

> Massive data loss when the sequenceId of cells exceeds Integer.MAX_VALUE, 
> because MemStoreMergerSegmentsIterator cannot merge segments 
> --
>
> Key: HBASE-22324
> URL: https://issues.apache.org/jira/browse/HBASE-22324
> Project: HBase
>  Issue Type: Bug
>  Components: in-memory-compaction
>Affects Versions: 2.1.0, 2.2.0
>Reporter: chenyang
>Priority: Blocker
>  Labels: patch
> Fix For: 2.1.0
>
> Attachments: HBASE-22324.branch-2.1.0005.patch, 
> HBASE-22324.branch-2.1.0006.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Issue Comment Deleted] (HBASE-22324) Massive data loss when the sequenceId of cells exceeds Integer.MAX_VALUE, because MemStoreMergerSegmentsIterator cannot merge segments

2019-05-08 Thread chenyang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenyang updated HBASE-22324:
-
Comment: was deleted

(was: submit 0006.patch to close store, region and wal in 
TestMemStoreSegmentsIterator)

> Massive data loss when the sequenceId of cells exceeds Integer.MAX_VALUE, 
> because MemStoreMergerSegmentsIterator cannot merge segments 
> --
>
> Key: HBASE-22324
> URL: https://issues.apache.org/jira/browse/HBASE-22324
> Project: HBase
>  Issue Type: Bug
>  Components: in-memory-compaction
>Affects Versions: 2.1.0, 2.2.0
>Reporter: chenyang
>Priority: Blocker
>  Labels: patch
> Fix For: 2.1.0
>
> Attachments: HBASE-22324.branch-2.1.0005.patch, 
> HBASE-22324.branch-2.1.0006.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22269) Consider simplifying the logic of BucketCache eviction.

2019-05-08 Thread Zheng Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835385#comment-16835385
 ] 

Zheng Hu commented on HBASE-22269:
--

Closing this issue now. If any problem comes up, we can reopen it later.

> Consider simplifying the logic of BucketCache eviction.
> ---
>
> Key: HBASE-22269
> URL: https://issues.apache.org/jira/browse/HBASE-22269
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Priority: Major
>
> As discussed on the review board (https://reviews.apache.org/r/70465), [~Apache9] 
> had a comment: 
> bq. I think with the new reference counted framework, we do not need to treat 
> rpc reference specially? Just release the bucket from oldest to newest, until 
> we can find enough free space? We could know if the space has been freed from 
> the return value of release ? Can be a follow on issue, maybe.
> Currently we choose non-RPC-referred blocks to mark as evicted; maybe we can 
> simplify the logic here, just as [~Apache9] said.
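The scheme [~Apache9] sketches (release blocks from oldest to newest until enough space is freed, using the return value of release to know when space actually came back) can be outlined as follows. All types here are hypothetical stand-ins for illustration, not the real BucketCache API:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical reference-counted block: release() returns true only when the
// underlying space is actually freed (the reference count reached zero).
class RefCountedBlock {
    private int refCount;
    final long size;
    RefCountedBlock(int refCount, long size) { this.refCount = refCount; this.size = size; }
    boolean release() { return --refCount == 0; }
}

public class EvictionSketch {
    // Release blocks from oldest to newest until `needed` bytes are freed or
    // no blocks remain; returns the number of bytes actually freed. Blocks
    // still referenced (e.g. by in-flight RPCs) contribute no space yet.
    static long freeSpace(Deque<RefCountedBlock> oldestFirst, long needed) {
        long freed = 0;
        while (freed < needed && !oldestFirst.isEmpty()) {
            RefCountedBlock b = oldestFirst.pollFirst();
            if (b.release()) {
                freed += b.size;
            }
        }
        return freed;
    }

    public static void main(String[] args) {
        Deque<RefCountedBlock> lru = new ArrayDeque<>();
        lru.add(new RefCountedBlock(1, 100)); // unreferenced: frees immediately
        lru.add(new RefCountedBlock(2, 100)); // still referenced: no space yet
        lru.add(new RefCountedBlock(1, 100));
        System.out.println(freeSpace(lru, 200)); // prints 200
    }
}
```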



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-22269) Consider simplifying the logic of BucketCache eviction.

2019-05-08 Thread Zheng Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu resolved HBASE-22269.
--
Resolution: Won't Fix

> Consider simplifying the logic of BucketCache eviction.
> ---
>
> Key: HBASE-22269
> URL: https://issues.apache.org/jira/browse/HBASE-22269
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Priority: Major
>
> As discussed on the review board (https://reviews.apache.org/r/70465), [~Apache9] 
> had a comment: 
> bq. I think with the new reference counted framework, we do not need to treat 
> rpc reference specially? Just release the bucket from oldest to newest, until 
> we can find enough free space? We could know if the space has been freed from 
> the return value of release ? Can be a follow on issue, maybe.
> Currently we choose non-RPC-referred blocks to mark as evicted; maybe we can 
> simplify the logic here, just as [~Apache9] said.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20851) Change rubocop config for max line length of 100

2019-05-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835401#comment-16835401
 ] 

Hudson commented on HBASE-20851:


Results for branch branch-2.2
[build #243 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/243/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/243//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/243//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/243//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Change rubocop config for max line length of 100
> 
>
> Key: HBASE-20851
> URL: https://issues.apache.org/jira/browse/HBASE-20851
> Project: HBase
>  Issue Type: Bug
>  Components: community, shell
>Affects Versions: 2.0.1
>Reporter: Umesh Agashe
>Assignee: Murtaza Hassan
>Priority: Minor
>  Labels: beginner, beginners
> Fix For: 3.0.0, 1.5.0, 2.3.0, 2.0.6, 2.1.5, 2.2.1, 1.3.5, 1.4.11
>
>
> Existing Ruby and Java code uses a max line length of 100 characters. Change 
> the rubocop config with:
> {code:java}
> Metrics/LineLength:
>   Max: 100
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HBASE-22379) Fix Markdown for "Voting on Release Candidates" in book

2019-05-08 Thread Jan Hentschel (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-22379 started by Jan Hentschel.
-
> Fix Markdown for "Voting on Release Candidates" in book
> ---
>
> Key: HBASE-22379
> URL: https://issues.apache.org/jira/browse/HBASE-22379
> Project: HBase
>  Issue Type: Improvement
>  Components: community, documentation
>Reporter: Jan Hentschel
>Assignee: Jan Hentschel
>Priority: Minor
>
> The Markdown in the section "Voting on Release Candidates" of the HBase book 
> seems to be broken. It looks like there should be a quote, which isn't 
> displayed correctly. The same is true for the formatting of the Maven RAT command.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-22379) Fix Markdown for "Voting on Release Candidates" in book

2019-05-08 Thread Jan Hentschel (JIRA)
Jan Hentschel created HBASE-22379:
-

 Summary: Fix Markdown for "Voting on Release Candidates" in book
 Key: HBASE-22379
 URL: https://issues.apache.org/jira/browse/HBASE-22379
 Project: HBase
  Issue Type: Improvement
  Components: community, documentation
Reporter: Jan Hentschel
Assignee: Jan Hentschel


The Markdown in the section "Voting on Release Candidates" of the HBase book 
seems to be broken. It looks like there should be a quote, which isn't 
displayed correctly. The same is true for the formatting of the Maven RAT command.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-22380) break circle replication when doing bulkload

2019-05-08 Thread chenxu (JIRA)
chenxu created HBASE-22380:
--

 Summary: break circle replication when doing bulkload
 Key: HBASE-22380
 URL: https://issues.apache.org/jira/browse/HBASE-22380
 Project: HBase
  Issue Type: Bug
  Components: Replication
Reporter: chenxu


When master-master bulkload replication is enabled, HFiles will be replicated 
circularly between the two clusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22380) break circle replication when doing bulkload

2019-05-08 Thread Zheng Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835445#comment-16835445
 ] 

Zheng Hu commented on HBASE-22380:
--

The WALEntry has no circular replication problem, because we attach a clusterId 
to it: if the source finds that the target clusterId is the same as the clusterId 
in the WALEntry, it won't replicate that entry to the sink. For HFiles, it seems 
we have no such clusterId, hence this bug.
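The clusterId guard described above can be modeled minimally as follows; these are hypothetical stand-in types for illustration, not the actual HBase replication API:

```java
import java.util.List;
import java.util.UUID;

// Hypothetical minimal model of a WAL entry carrying the ids of the clusters
// that have already consumed it.
record WalEntry(List<UUID> consumedClusterIds) {}

public class ReplicationFilter {
    // Ship an entry to a peer only if that peer's cluster has not already
    // consumed it. Bulk-loaded HFiles carry no such id, which is why they
    // can bounce between master-master peers indefinitely.
    static boolean shouldReplicate(WalEntry entry, UUID peerClusterId) {
        return !entry.consumedClusterIds().contains(peerClusterId);
    }

    public static void main(String[] args) {
        UUID clusterA = UUID.randomUUID();
        UUID clusterB = UUID.randomUUID();
        WalEntry entry = new WalEntry(List.of(clusterA)); // written on cluster A
        System.out.println(shouldReplicate(entry, clusterB)); // true: B has not seen it
        System.out.println(shouldReplicate(entry, clusterA)); // false: loop broken
    }
}
```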

> break circle replication when doing bulkload
> 
>
> Key: HBASE-22380
> URL: https://issues.apache.org/jira/browse/HBASE-22380
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: chenxu
>Priority: Critical
>
> When master-master bulkload replication is enabled, HFiles will be replicated 
> circularly between the two clusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase] HorizonNet opened a new pull request #226: HBASE-22379 Fixed Markdown in 'Voting on Release Candidates' section

2019-05-08 Thread GitBox
HorizonNet opened a new pull request #226: HBASE-22379 Fixed Markdown in 
'Voting on Release Candidates' section
URL: https://github.com/apache/hbase/pull/226
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-22380) break circle replication when doing bulkload

2019-05-08 Thread chenxu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835452#comment-16835452
 ] 

chenxu commented on HBASE-22380:


Makes sense. IMO we can skip writing the BulkLoadMarker when doing the bulkload 
on the peer cluster, so the WALEntry will not come back again; this can break 
the circular replication.

> break circle replication when doing bulkload
> 
>
> Key: HBASE-22380
> URL: https://issues.apache.org/jira/browse/HBASE-22380
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: chenxu
>Priority: Critical
>
> When master-master bulkload replication is enabled, HFiles will be replicated 
> circularly between the two clusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22380) break circle replication when doing bulkload

2019-05-08 Thread Zheng Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835464#comment-16835464
 ] 

Zheng Hu commented on HBASE-22380:
--

bq. IMO we can skip write BulkLoadMarker when doing bulkload on peer cluster, 
so the WALEntry will not come back again, and this can break the circle 
replication.

What happens if we have the replication flow clusterA -> (replicate) -> 
clusterB -> (replicate) -> clusterC? Won't the bulkloaded HFile then never be 
replicated to clusterC?

> break circle replication when doing bulkload
> 
>
> Key: HBASE-22380
> URL: https://issues.apache.org/jira/browse/HBASE-22380
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: chenxu
>Priority: Critical
>
> When master-master bulkload replication is enabled, HFiles will be replicated 
> circularly between the two clusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22324) Massive data loss when the sequenceId of cells exceeds Integer.MAX_VALUE, because MemStoreMergerSegmentsIterator cannot merge segments

2019-05-08 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835470#comment-16835470
 ] 

HBase QA commented on HBASE-22324:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
57s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
50s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} branch-2.1 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
57s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 49s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}127m 
44s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}161m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/270/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-22324 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12968162/HBASE-22324.branch-2.1.0006.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux d02c3f53b311 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2.1 / c0b58a33c7 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.11 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/270/testReport/ |
| Max. process+thread count | 5237 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/270/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HBASE-22380) break circle replication when doing bulkload

2019-05-08 Thread chenxu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835476#comment-16835476
 ] 

chenxu commented on HBASE-22380:


Hmm... that's a problem.

> break circle replication when doing bulkload
> 
>
> Key: HBASE-22380
> URL: https://issues.apache.org/jira/browse/HBASE-22380
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: chenxu
>Priority: Critical
>
> When master-master bulkload replication is enabled, HFiles will be replicated 
> circularly between the two clusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase] Apache-HBase commented on issue #226: HBASE-22379 Fixed Markdown in 'Voting on Release Candidates' section

2019-05-08 Thread GitBox
Apache-HBase commented on issue #226: HBASE-22379 Fixed Markdown in 'Voting on 
Release Candidates' section
URL: https://github.com/apache/hbase/pull/226#issuecomment-490424814
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 40 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   ||| _ master Compile Tests _ |
   | +1 | mvninstall | 290 | master passed |
   | 0 | refguide | 482 | branch has no errors when building the reference 
guide. See footer for rendered docs, which you should manually inspect. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 299 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | 0 | refguide | 508 | patch has no errors when building the reference 
guide. See footer for rendered docs, which you should manually inspect. |
   ||| _ Other Tests _ |
   | +1 | asflicense | 18 | The patch does not generate ASF License warnings. |
   | | | 2000 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-226/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/226 |
   | Optional Tests |  dupname  asflicense  refguide  |
   | uname | Linux de5a1f0ab105 4.4.0-143-generic #169-Ubuntu SMP Thu Feb 7 
07:56:38 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /testptch/patchprocess/precommit/personality/provided.sh |
   | git revision | master / d9491c0b65 |
   | maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
   | refguide | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-226/1/artifact/out/branch-site/book.html
 |
   | refguide | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-226/1/artifact/out/patch-site/book.html
 |
   | Max. process+thread count | 96 (vs. ulimit of 1) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-226/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] ramkrish86 merged pull request #214: HBASE-22072 High read/write intensive regions may cause long crash

2019-05-08 Thread GitBox
ramkrish86 merged pull request #214: HBASE-22072 High read/write intensive 
regions may cause long crash
URL: https://github.com/apache/hbase/pull/214
 
 
   




[jira] [Commented] (HBASE-22324) Massive data loss when the sequenceId of cells is greater than Integer.MAX_VALUE, because MemStoreMergerSegmentsIterator cannot merge segments

2019-05-08 Thread Zheng Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835482#comment-16835482
 ] 

Zheng Hu commented on HBASE-22324:
--

Let me commit. Thanks [~HB-CY] for contributing. 

>  Massive data loss when the sequenceId of cells is greater than Integer.MAX_VALUE, 
> because MemStoreMergerSegmentsIterator cannot merge segments 
> --
>
> Key: HBASE-22324
> URL: https://issues.apache.org/jira/browse/HBASE-22324
> Project: HBase
>  Issue Type: Bug
>  Components: in-memory-compaction
>Affects Versions: 2.1.0, 2.2.0
>Reporter: chenyang
>Priority: Blocker
>  Labels: patch
> Fix For: 2.1.0
>
> Attachments: HBASE-22324.branch-2.1.0005.patch, 
> HBASE-22324.branch-2.1.0006.patch
>
>
> If your memstore type is CompactingMemStore, MemStoreMergerSegmentsIterator 
> cannot merge memstore segments when the seqId of cells is greater than 
> Integer.MAX_VALUE; as a result, a large amount of data is lost. The reason is 
> that MemStoreMergerSegmentsIterator uses Integer.MAX_VALUE as the readPt when 
> creating a Scanner, but the seqId of a cell is of type long and may be greater 
> than Integer.MAX_VALUE. Code as below:
> {code:java}
> public MemStoreMergerSegmentsIterator(List<ImmutableSegment> segments, 
> CellComparator comparator,
> int compactionKVMax) throws IOException {
>   super(compactionKVMax);
>   // create the list of scanners to traverse over all the data
>   // no dirty reads here as these are immutable segments
>   AbstractMemStore.addToScanners(segments, Integer.MAX_VALUE, scanners); 
> //bug, should use Long.MAX_VALUE
>   heap = new KeyValueHeap(scanners, comparator);
> }
> SegmentScanner.java code as below
> protected void updateCurrent() {
>   Cell startKV = current;
>   Cell next = null;
>   try {
> while (iter.hasNext()) {
>   next = iter.next();
>   // here, if seqId > readPoint (Integer.MAX_VALUE), the cell is never read; 
> as a result, lots of cells are lost
>   if (next.getSequenceId() <= this.readPoint) {
> current = next;
> return;// skip irrelevant versions
>   }
>   if (stopSkippingKVsIfNextRow &&   // for backwardSeek() stay in the
>   startKV != null &&// boundaries of a single row
>   segment.compareRows(next, startKV) > 0) {
> current = null;
> return;
>   }
> } // end of while
> current = null; // nothing found
>   } finally {
> if (next != null) {
>   // in all cases, remember the last KV we iterated to, needed for 
> reseek()
>   last = next;
> }
>   }
> }
> MemStoreCompactorSegmentsIterator has the same bug
> public MemStoreCompactorSegmentsIterator(List<ImmutableSegment> segments,
> CellComparator comparator, int compactionKVMax, HStore store) throws 
> IOException {
>   super(compactionKVMax);
>   List<KeyValueScanner> scanners = new ArrayList<KeyValueScanner>();
>   AbstractMemStore.addToScanners(segments, Integer.MAX_VALUE, scanners);   
> //bug, should use Long.MAX_VALUE
>   // build the scanner based on Query Matcher
>   // reinitialize the compacting scanner for each instance of iterator
>   compactingScanner = createScanner(store, scanners);
>   refillKVS();
> }{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22311) Update community docs to recommend use of "Co-authored-by" in git commits

2019-05-08 Thread Norbert Kalmar (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835497#comment-16835497
 ] 

Norbert Kalmar commented on HBASE-22311:


Hi Sean, I can take this jira up if you don't mind.

> Update community docs to recommend use of "Co-authored-by" in git commits
> -
>
> Key: HBASE-22311
> URL: https://issues.apache.org/jira/browse/HBASE-22311
> Project: HBase
>  Issue Type: Task
>  Components: community, documentation
>Reporter: Sean Busbey
>Priority: Minor
>
> discussion on [\[DISCUSS\] switch from "Ammending-Author" to "Co-authored-by" 
> in commit messages|https://s.apache.org/ISs4] seems to have come out in favor.
>  
> Updated section should include a brief explanation (that includes "multiple 
> authors" expressly instead of just the "fixed up this thing" that's there for 
> Amending-Author). It should also have pointers to the github feature 
> explanation. So long as those docs exist they're pretty good.
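For context, the GitHub convention being referenced is one "Co-authored-by" trailer per additional author at the end of the commit message; a hypothetical example (names and addresses are placeholders):

```
Improve region assignment retry logic

Original patch by one author, with fixes and tests from a second.

Co-authored-by: Jane Doe <jane.doe@example.com>
Co-authored-by: John Roe <john.roe@example.com>
```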





[jira] [Work started] (HBASE-22358) Change rubocop configuration for method length

2019-05-08 Thread Murtaza Hassan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-22358 started by Murtaza Hassan.
--
> Change rubocop configuration for method length
> --
>
> Key: HBASE-22358
> URL: https://issues.apache.org/jira/browse/HBASE-22358
> Project: HBase
>  Issue Type: Improvement
>  Components: community, shell
>Reporter: Jan Hentschel
>Assignee: Murtaza Hassan
>Priority: Minor
>  Labels: beginner, beginners
>
> rubocop currently uses a maximum method length for the Ruby code of 10, which 
> is way too restrictive. In Checkstyle we're using 150 lines per method. Don't 
> know if it needs to be that much, but something between 50 and 75 seems to be 
> more realistic, especially for test cases.
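A possible change along those lines (the exact limit is still up for discussion; 75 is just the upper bound suggested above, mirroring the existing rubocop config style):

```yaml
# .rubocop.yml -- hypothetical setting, pending agreement on the limit
Metrics/MethodLength:
  Max: 75
```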





[jira] [Comment Edited] (HBASE-22311) Update community docs to recommend use of "Co-authored-by" in git commits

2019-05-08 Thread Norbert Kalmar (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835497#comment-16835497
 ] 

Norbert Kalmar edited comment on HBASE-22311 at 5/8/19 10:26 AM:
-

Hi [~busbey], I can take this jira up if you don't mind.


was (Author: nkalmar):
Hi Sean, I can take this jira up if you don't mind.

> Update community docs to recommend use of "Co-authored-by" in git commits
> -
>
> Key: HBASE-22311
> URL: https://issues.apache.org/jira/browse/HBASE-22311
> Project: HBase
>  Issue Type: Task
>  Components: community, documentation
>Reporter: Sean Busbey
>Priority: Minor
>
> discussion on [\[DISCUSS\] switch from "Ammending-Author" to "Co-authored-by" 
> in commit messages|https://s.apache.org/ISs4] seems to have come out in favor.
>  
> Updated section should include a brief explanation (that includes "multiple 
> authors" expressly instead of just the "fixed up this thing" that's there for 
> Amending-Author). It should also have pointers to the github feature 
> explanation. So long as those docs exist they're pretty good.





[jira] [Commented] (HBASE-20821) Re-creating a dropped namespace and contained table inherits previously set space quota settings

2019-05-08 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835509#comment-16835509
 ] 

HBase QA commented on HBASE-20821:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
21s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m  
6s{color} | {color:blue} hbase-server in master has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
18s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m  6s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}278m  5s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}318m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestFromClientSide3 |
|   | hadoop.hbase.client.TestSnapshotTemporaryDirectoryWithRegionReplicas |
|   | hadoop.hbase.client.TestSnapshotDFSTemporaryDirectory |
|   | hadoop.hbase.master.TestSplitWALManager |
|   | hadoop.hbase.client.TestAdmin1 |
|   | hadoop.hbase.client.TestFromClientSide |
|   | hadoop.hbase.client.TestFromClientSideWithCoprocessor |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/269/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-20821 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12968153/HBASE-20821.master.v002.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 86d41b67851b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / d9491c0b65 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| D

[jira] [Commented] (HBASE-22360) Abort timer doesn't set when abort is called during graceful shutdown process

2019-05-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835524#comment-16835524
 ] 

Hudson commented on HBASE-22360:


Results for branch master
[build #990 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/990/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/990//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/990//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/990//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/master/990//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> Abort timer doesn't set when abort is called during graceful shutdown process
> -
>
> Key: HBASE-22360
> URL: https://issues.apache.org/jira/browse/HBASE-22360
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Bahram Chehrazy
>Assignee: Bahram Chehrazy
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
> Attachments: Set-the-abortMonitor-timer-in-the-abort-function-01.patch
>
>
> The abort timer only gets set when the server is aborted. But if the server is 
> being gracefully stopped and something goes wrong, causing an abort, the timer 
> may not get set, and the shutdown process could take a very long time or leave 
> the server completely stuck.
>  





[jira] [Commented] (HBASE-20851) Change rubocop config for max line length of 100

2019-05-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835522#comment-16835522
 ] 

Hudson commented on HBASE-20851:


Results for branch master
[build #990 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/990/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/990//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/990//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/990//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/master/990//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> Change rubocop config for max line length of 100
> 
>
> Key: HBASE-20851
> URL: https://issues.apache.org/jira/browse/HBASE-20851
> Project: HBase
>  Issue Type: Bug
>  Components: community, shell
>Affects Versions: 2.0.1
>Reporter: Umesh Agashe
>Assignee: Murtaza Hassan
>Priority: Minor
>  Labels: beginner, beginners
> Fix For: 3.0.0, 1.5.0, 2.3.0, 2.0.6, 2.1.5, 2.2.1, 1.3.5, 1.4.11
>
>
> Existing ruby and Java code uses max line length of 100 characters. Change 
> rubocop config with:
> {code:java}
> Metrics/LineLength:
>   Max: 100
> {code}





[jira] [Commented] (HBASE-21777) "Tune compaction throughput" debug messages even when nothing has changed

2019-05-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835523#comment-16835523
 ] 

Hudson commented on HBASE-21777:


Results for branch master
[build #990 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/990/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/990//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/990//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/990//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/master/990//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> "Tune compaction throughput" debug messages even when nothing has changed 
> --
>
> Key: HBASE-21777
> URL: https://issues.apache.org/jira/browse/HBASE-21777
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Affects Versions: 1.5.0
>Reporter: Andrew Purtell
>Assignee: Tak Lon (Stephen) Wu
>Priority: Trivial
>  Labels: branch-1
> Fix For: 3.0.0, 1.5.0, 2.2.1
>
>
> PressureAwareCompactionThroughputController will log "tune compaction 
> throughput" debug messages even when, after consideration, the re-tuning makes 
> no change to the current settings. In that case it would be better not to log 
> anything.
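One way to silence the no-op case is to compare the newly computed limit with the current one before logging. A hedged sketch with a hypothetical TuneLogDemo class (not the actual controller API; the real controller may prefer a tolerance rather than exact equality):

```java
public class TuneLogDemo {
    private double maxThroughput;

    TuneLogDemo(double initial) {
        this.maxThroughput = initial;
    }

    // Returns the debug line to emit, or null when re-tuning leaves the
    // current setting unchanged -- the case the report says should stay silent.
    String tune(double newThroughput) {
        if (Double.compare(newThroughput, maxThroughput) == 0) {
            return null; // nothing changed: skip the "tune" debug message
        }
        String msg = "tune compaction throughput from " + maxThroughput
            + " to " + newThroughput;
        maxThroughput = newThroughput;
        return msg;
    }
}
```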





[jira] [Commented] (HBASE-22360) Abort timer doesn't set when abort is called during graceful shutdown process

2019-05-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835521#comment-16835521
 ] 

Hudson commented on HBASE-22360:


Results for branch branch-2
[build #1875 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1875/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1875//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1875//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1875//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Abort timer doesn't set when abort is called during graceful shutdown process
> -
>
> Key: HBASE-22360
> URL: https://issues.apache.org/jira/browse/HBASE-22360
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Bahram Chehrazy
>Assignee: Bahram Chehrazy
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
> Attachments: Set-the-abortMonitor-timer-in-the-abort-function-01.patch
>
>
> The abort timer only gets set when the server is aborted. But if the server is 
> being gracefully stopped and something goes wrong, causing an abort, the timer 
> may not get set, and the shutdown process could take a very long time or leave 
> the server completely stuck.
>  





[jira] [Updated] (HBASE-20821) Re-creating a dropped namespace and contained table inherits previously set space quota settings

2019-05-08 Thread Shardul Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shardul Singh updated HBASE-20821:
--
Attachment: HBASE-20821.master.v003.patch

> Re-creating a dropped namespace and contained table inherits previously set 
> space quota settings
> 
>
> Key: HBASE-20821
> URL: https://issues.apache.org/jira/browse/HBASE-20821
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.1.0
>Reporter: Nihal Jain
>Assignee: Shardul Singh
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20821.master.v001patch, 
> HBASE-20821.master.v002.patch, HBASE-20821.master.v003.patch
>
>
> As demonstrated in 
> [HBASE-20662.master.002.patch|https://issues.apache.org/jira/secure/attachment/12927187/HBASE-20662.master.002.patch]
>  re-creating a dropped namespace and contained table inherits previously set 
> space quota settings.
> *Steps:*
>  * Create a namespace and a table in it
>  * Set space quota on namespace
>  * Violate namespace quota
>  * Drop table and then namespace
>  * Re-create the same namespace and same table
>  * Put data into the table (more than the previously set namespace quota 
> limit)
> {code:java}
> private void setQuotaAndThenDropNamespace(final String namespace, 
> SpaceViolationPolicy policy)
> throws Exception {
>   Put put = new Put(Bytes.toBytes("to_reject"));
>   put.addColumn(Bytes.toBytes(SpaceQuotaHelperForTests.F1), 
> Bytes.toBytes("to"),
> Bytes.toBytes("reject"));
>   createNamespaceIfNotExist(TEST_UTIL.getAdmin(), namespace);
>   // Do puts until we violate space policy
>   final TableName tn = 
> writeUntilNSSpaceViolationAndVerifyViolation(namespace, policy, put);
>   // Now, drop the table
>   TEST_UTIL.deleteTable(tn);
>   LOG.debug("Successfully deleted table {}", tn);
>   // Now, drop the namespace
>   TEST_UTIL.getAdmin().deleteNamespace(namespace);
>   LOG.debug("Successfully deleted the namespace {}", namespace);
>   // Now re-create the namespace
>   createNamespaceIfNotExist(TEST_UTIL.getAdmin(), namespace);
>   LOG.debug("Successfully re-created the namespace {}", namespace);
>   TEST_UTIL.createTable(tn, Bytes.toBytes(SpaceQuotaHelperForTests.F1));
>   LOG.debug("Successfully re-created table {}", tn);
>   // Put some rows now: should not violate as namespace/quota was dropped
>   verifyNoViolation(policy, tn, put);
> }
> {code}
> *Expected*: SpaceQuota settings should not exist on the newly re-created 
> table, and we should be able to put unlimited data into the table.
> *Actual:* We fail to put data into the newly created table because SpaceQuota 
> settings (automatically created due to the previously set namespace space 
> quota) still exist on the table.





[jira] [Commented] (HBASE-20821) Re-creating a dropped namespace and contained table inherits previously set space quota settings

2019-05-08 Thread Shardul Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835526#comment-16835526
 ] 

Shardul Singh commented on HBASE-20821:
---

The failing test cases pass locally... Submitting the patch again.

> Re-creating a dropped namespace and contained table inherits previously set 
> space quota settings
> 
>
> Key: HBASE-20821
> URL: https://issues.apache.org/jira/browse/HBASE-20821
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.1.0
>Reporter: Nihal Jain
>Assignee: Shardul Singh
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20821.master.v001patch, 
> HBASE-20821.master.v002.patch, HBASE-20821.master.v003.patch
>
>
> As demonstrated in 
> [HBASE-20662.master.002.patch|https://issues.apache.org/jira/secure/attachment/12927187/HBASE-20662.master.002.patch]
>  re-creating a dropped namespace and contained table inherits previously set 
> space quota settings.
> *Steps:*
>  * Create a namespace and a table in it
>  * Set space quota on namespace
>  * Violate namespace quota
>  * Drop table and then namespace
>  * Re-create the same namespace and same table
>  * Put data into the table (more than the previously set namespace quota 
> limit)
> {code:java}
> private void setQuotaAndThenDropNamespace(final String namespace, 
> SpaceViolationPolicy policy)
> throws Exception {
>   Put put = new Put(Bytes.toBytes("to_reject"));
>   put.addColumn(Bytes.toBytes(SpaceQuotaHelperForTests.F1), 
> Bytes.toBytes("to"),
> Bytes.toBytes("reject"));
>   createNamespaceIfNotExist(TEST_UTIL.getAdmin(), namespace);
>   // Do puts until we violate space policy
>   final TableName tn = 
> writeUntilNSSpaceViolationAndVerifyViolation(namespace, policy, put);
>   // Now, drop the table
>   TEST_UTIL.deleteTable(tn);
>   LOG.debug("Successfully deleted table {}", tn);
>   // Now, drop the namespace
>   TEST_UTIL.getAdmin().deleteNamespace(namespace);
>   LOG.debug("Successfully deleted the namespace {}", namespace);
>   // Now re-create the namespace
>   createNamespaceIfNotExist(TEST_UTIL.getAdmin(), namespace);
>   LOG.debug("Successfully re-created the namespace {}", namespace);
>   TEST_UTIL.createTable(tn, Bytes.toBytes(SpaceQuotaHelperForTests.F1));
>   LOG.debug("Successfully re-created table {}", tn);
>   // Put some rows now: should not violate as namespace/quota was dropped
>   verifyNoViolation(policy, tn, put);
> }
> {code}
> *Expected*: SpaceQuota settings should not exist on the newly re-created 
> table, and we should be able to put unlimited data into the table.
> *Actual:* We fail to put data into the newly created table because SpaceQuota 
> settings (automatically created due to the previously set namespace space 
> quota) still exist on the table.





[jira] [Created] (HBASE-22381) The write request won't refresh its HConnection's local meta cache once a RegionServer gets stuck

2019-05-08 Thread Zheng Hu (JIRA)
Zheng Hu created HBASE-22381:


 Summary: The write request won't refresh its HConnection's local 
meta cache once a RegionServer gets stuck
 Key: HBASE-22381
 URL: https://issues.apache.org/jira/browse/HBASE-22381
 Project: HBase
  Issue Type: Bug
Reporter: Zheng Hu
Assignee: Zheng Hu


In a production environment (provided by [~xinxin fan] from Netease, HBase 
version: 1.2.6), we found a case: 
1. a RegionServer got stuck;
2. all requests were write requests, and they threw an exception like this: 
{code}
Caused by: java.net.SocketTimeoutException: 15000 millis timeout while waiting 
for channel to be ready for read. ch : 
java.nio.channels.SocketChannel[connected local=/10.130.88.181:59049 
remote=hbase699.hz.163.org/10.120.192.76:60020] at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164) at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at 
java.io.FilterInputStream.read(FilterInputStream.java:133) at 
java.io.FilterInputStream.read(FilterInputStream.java:133) at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection$PingInputStream.read(RpcClient.java:558)
 at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at 
java.io.BufferedInputStream.read(BufferedInputStream.java:265) at 
java.io.DataInputStream.readInt(DataInputStream.java:387) at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.readResponse(RpcClient.java:1076)
 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:727)
{code}
3. all write requests to the stuck region server never cleared their client's 
local meta cache and were sent to the stuck server endlessly, which kept 
availability below 100% for a long time.

I checked the code, and found that in our 
AsyncRequestFutureImpl#receiveGlobalFailure: 

{code}
  private void receiveGlobalFailure(
 //
  updateCachedLocations(server, regionName, row,
ClientExceptionsUtil.isMetaClearingException(t) ? null : t);
 //
   }
{code}

isMetaClearingException does not consider SocketTimeoutException.

{code}
  public static boolean isMetaClearingException(Throwable cur) {
cur = findException(cur);

if (cur == null) {
  return true;
}
return !isSpecialException(cur) || (cur instanceof RegionMovedException)
|| cur instanceof NotServingRegionException;
  }

  public static boolean isSpecialException(Throwable cur) {
return (cur instanceof RegionMovedException || cur instanceof 
RegionOpeningException
|| cur instanceof RegionTooBusyException || cur instanceof 
RpcThrottlingException
|| cur instanceof MultiActionResultTooLarge || cur instanceof 
RetryImmediatelyException
|| cur instanceof CallQueueTooBigException || cur instanceof 
CallDroppedException
|| cur instanceof NotServingRegionException || cur instanceof 
RequestTooBigException);
  }
{code}





[jira] [Updated] (HBASE-22381) The write request won't refresh its HConnection's local meta cache once a RegionServer gets stuck

2019-05-08 Thread Zheng Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-22381:
-
Description: 
In a production environment (provided by [~xinxin fan] from Netease, HBase 
version: 1.2.6), we found a case: 
1. a RegionServer got stuck;
2. all requests were write requests, and they threw an exception like this: 
{code}
Caused by: java.net.SocketTimeoutException: 15000 millis timeout while waiting 
for channel to be ready for read. ch : 
java.nio.channels.SocketChannel[connected local=/10.130.88.181:59049 
remote=hbase699.hz.163.org/10.120.192.76:60020] at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164) at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at 
java.io.FilterInputStream.read(FilterInputStream.java:133) at 
java.io.FilterInputStream.read(FilterInputStream.java:133) at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection$PingInputStream.read(RpcClient.java:558)
 at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at 
java.io.BufferedInputStream.read(BufferedInputStream.java:265) at 
java.io.DataInputStream.readInt(DataInputStream.java:387) at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.readResponse(RpcClient.java:1076)
 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:727)
{code}
3. all write requests to the stuck region server never cleared their client's 
local meta cache and were sent to the stuck server endlessly, which kept 
availability below 100% for a long time.

I checked the code, and found that in our 
AsyncRequestFutureImpl#receiveGlobalFailure: 

{code}
  private void receiveGlobalFailure(
 //
  updateCachedLocations(server, regionName, row,
ClientExceptionsUtil.isMetaClearingException(t) ? null : t);
 //
   }
{code}

isMetaClearingException does not consider SocketTimeoutException, so the 
client will keep sending requests to the stuck server. 

{code}
  public static boolean isMetaClearingException(Throwable cur) {
cur = findException(cur);

if (cur == null) {
  return true;
}
return !isSpecialException(cur) || (cur instanceof RegionMovedException)
|| cur instanceof NotServingRegionException;
  }

  public static boolean isSpecialException(Throwable cur) {
return (cur instanceof RegionMovedException || cur instanceof 
RegionOpeningException
|| cur instanceof RegionTooBusyException || cur instanceof 
RpcThrottlingException
|| cur instanceof MultiActionResultTooLarge || cur instanceof 
RetryImmediatelyException
|| cur instanceof CallQueueTooBigException || cur instanceof 
CallDroppedException
|| cur instanceof NotServingRegionException || cur instanceof 
RequestTooBigException);
  }
{code}

But I'm afraid that  if we put the SocketTimeoutException into 
isSpecialException set,  we will increase the pressure of meta table, because 
there're other case we may encounter an SocketTimeoutException without any 
reigon moving,  if we clear cache , more request will be directed to meta 
table. 


  was:
In production environment (Provided by [~xinxin fan] from Netease, HBase 
version: 1.2.6), we found a case: 
1. an RegionServer got stuck;
2. all requests are write requests, and  thrown an exception like this: 
{code}
Caused by: java.net.SocketTimeoutException: 15000 millis timeout while waiting 
for channel to be ready for read. ch : 
java.nio.channels.SocketChannel[connected local=/10.130.88.181:59049 
remote=hbase699.hz.163.org/10.120.192.76:60020] at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164) at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at 
java.io.FilterInputStream.read(FilterInputStream.java:133) at 
java.io.FilterInputStream.read(FilterInputStream.java:133) at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection$PingInputStream.read(RpcClient.java:558)
 at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at 
java.io.BufferedInputStream.read(BufferedInputStream.java:265) at 
java.io.DataInputStream.readInt(DataInputStream.java:387) at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.readResponse(RpcClient.java:1076)
 at org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:727)
{code}
3.  all write request to the stuck region server never clear their client's 
local meta cache, and requested to the stuck server endlessly,   which lead to 
the availability < 100% in a long time.

I checked the code, and found that in our 
AsyncRequestFutureImpl#receiveGlobalFailure: 

{code}
  private void receiveGlobalFailure(
 //
  updateCachedLocations(server, regionName, row,
ClientExceptionsUtil.isMetaClearingException(t) ? null : t);
 //

[jira] [Updated] (HBASE-22381) The write request won't refresh its HConnection's local meta cache once an RegionServer got stuck

2019-05-08 Thread Zheng Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-22381:
-
Description: 
In a production environment (provided by [~xinxin fan] from Netease, HBase 
version: 1.2.6), we found the following case:
1. a RegionServer got stuck;
2. all requests were write requests, and they threw an exception like this:
{code}
Caused by: java.net.SocketTimeoutException: 15000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.130.88.181:59049 remote=hbase699.hz.163.org/10.120.192.76:60020]
  at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
  at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
  at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
  at java.io.FilterInputStream.read(FilterInputStream.java:133)
  at java.io.FilterInputStream.read(FilterInputStream.java:133)
  at org.apache.hadoop.hbase.ipc.RpcClient$Connection$PingInputStream.read(RpcClient.java:558)
  at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
  at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
  at java.io.DataInputStream.readInt(DataInputStream.java:387)
  at org.apache.hadoop.hbase.ipc.RpcClient$Connection.readResponse(RpcClient.java:1076)
  at org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:727)
{code}
3. all write requests to the stuck region server never cleared their clients' 
local meta cache and kept being sent to the stuck server endlessly, which kept 
availability below 100% for a long time.

I checked the code and found the following in 
AsyncRequestFutureImpl#receiveGlobalFailure:

{code}
  private void receiveGlobalFailure(...) {
    // ...
    updateCachedLocations(server, regionName, row,
        ClientExceptionsUtil.isMetaClearingException(t) ? null : t);
    // ...
  }
{code}

isMetaClearingException does not consider SocketTimeoutException, so the 
client keeps requesting the stuck server.

{code}
  public static boolean isMetaClearingException(Throwable cur) {
    cur = findException(cur);

    if (cur == null) {
      return true;
    }
    return !isSpecialException(cur) || (cur instanceof RegionMovedException)
        || cur instanceof NotServingRegionException;
  }

  public static boolean isSpecialException(Throwable cur) {
    return (cur instanceof RegionMovedException || cur instanceof RegionOpeningException
        || cur instanceof RegionTooBusyException || cur instanceof RpcThrottlingException
        || cur instanceof MultiActionResultTooLarge || cur instanceof RetryImmediatelyException
        || cur instanceof CallQueueTooBigException || cur instanceof CallDroppedException
        || cur instanceof NotServingRegionException || cur instanceof RequestTooBigException);
  }
{code}

The way to fix this would be to add SocketTimeoutException to 
isSpecialException. But I'm afraid that if we put SocketTimeoutException into 
the isSpecialException set, we will increase the pressure on the meta table: 
there are other cases where we may encounter a SocketTimeoutException without 
any region moving, and if we clear the cache, more requests will be directed 
to the meta table.
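The predicate change being weighed here can be modeled in isolation. The following is a standalone sketch, not HBase source: `findException` is elided (the throwable is assumed to be passed through unchanged) and only two of the special exception types are modeled; the `includeSocketTimeout` flag stands for the proposed addition.

```java
import java.net.SocketTimeoutException;

// Standalone model of the predicate shape quoted above, showing how adding
// SocketTimeoutException to the "special" set flips isMetaClearingException
// for that exception.
public class MetaClearingPredicateSketch {

    // Stand-ins for the real exception classes referenced in the snippet.
    static class RegionMovedException extends Exception {}
    static class NotServingRegionException extends Exception {}

    static boolean isSpecialException(Throwable cur, boolean includeSocketTimeout) {
        return cur instanceof RegionMovedException
            || cur instanceof NotServingRegionException
            // ... the other special types from the real method are elided ...
            || (includeSocketTimeout && cur instanceof SocketTimeoutException);
    }

    static boolean isMetaClearingException(Throwable cur, boolean includeSocketTimeout) {
        if (cur == null) {
            return true;
        }
        // Mirrors the real predicate: anything not "special" is meta-clearing,
        // plus the two region-location exceptions are meta-clearing explicitly.
        return !isSpecialException(cur, includeSocketTimeout)
            || cur instanceof RegionMovedException
            || cur instanceof NotServingRegionException;
    }

    public static void main(String[] args) {
        Throwable ste = new SocketTimeoutException("15000 millis timeout");
        System.out.println(isMetaClearingException(ste, false)); // prints true
        System.out.println(isMetaClearingException(ste, true));  // prints false
    }
}
```

The flip from true to false is the entire behavioral surface of the proposed one-line change; the trade-off discussed above is about which of the two values is safer under load.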



[jira] [Created] (HBASE-22382) Refactor tests in TestFromClientSide

2019-05-08 Thread Andor Molnar (JIRA)
Andor Molnar created HBASE-22382:


 Summary: Refactor tests in TestFromClientSide
 Key: HBASE-22382
 URL: https://issues.apache.org/jira/browse/HBASE-22382
 Project: HBase
  Issue Type: Task
  Components: test
Reporter: Andor Molnar
Assignee: Andor Molnar


The following tests in {{TestFromClientSide}} need to be refactored:

- {{testNull}} - should be several tests instead of one,

- {{testVersionLimits}} - is too long, should be split into multiple tests,

- {{testDeletesWithReverseScan}} - is too long, should be split into multiple tests



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase] anmolnar commented on issue #193: HBASE-13798 TestFromClientSide* don't close the Table

2019-05-08 Thread GitBox
anmolnar commented on issue #193: HBASE-13798 TestFromClientSide* don't close 
the Table
URL: https://github.com/apache/hbase/pull/193#issuecomment-490468314
 
 
   @busbey @petersomogyi 
   I believe the remaining `findbugs` and `checkstyle` warnings are unrelated 
to my patch.
   A new Jira has been created for further refactorings: 
https://issues.apache.org/jira/browse/HBASE-22382


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] HorizonNet commented on issue #193: HBASE-13798 TestFromClientSide* don't close the Table

2019-05-08 Thread GitBox
HorizonNet commented on issue #193: HBASE-13798 TestFromClientSide* don't close 
the Table
URL: https://github.com/apache/hbase/pull/193#issuecomment-490469733
 
 
   @anmolnar The build only shows newly introduced violations. Let's wait for 
the latest build to see whether they got resolved.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-21879) Read HFile's block to ByteBuffer directly instead of to byte for reducing young gc purpose

2019-05-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835589#comment-16835589
 ] 

Hudson commented on HBASE-21879:


Results for branch HBASE-21879
[build #92 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/92/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/92//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/92//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/92//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Read HFile's block to ByteBuffer directly instead of to byte for reducing 
> young gc purpose
> --
>
> Key: HBASE-21879
> URL: https://issues.apache.org/jira/browse/HBASE-21879
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-21879.v1.patch, HBASE-21879.v1.patch, 
> QPS-latencies-before-HBASE-21879.png, gc-data-before-HBASE-21879.png
>
>
> In HFileBlock#readBlockDataInternal,  we have the following: 
> {code}
> @VisibleForTesting
> protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset,
>     long onDiskSizeWithHeaderL, boolean pread, boolean verifyChecksum,
>     boolean updateMetrics) throws IOException {
>   // ...
>   // TODO: Make this ByteBuffer-based. Will make it easier to go to HDFS
>   // with BBPool (offheap).
>   byte[] onDiskBlock = new byte[onDiskSizeWithHeader + hdrSize];
>   int nextBlockOnDiskSize = readAtOffset(is, onDiskBlock, preReadHeaderSize,
>       onDiskSizeWithHeader - preReadHeaderSize, true,
>       offset + preReadHeaderSize, pread);
>   if (headerBuf != null) {
>     // ...
>   }
>   // ...
> }
> {code}
> In the read path, we still read the block from the HFile into an on-heap 
> byte[], then copy that byte[] into the off-heap bucket cache asynchronously. 
> In my 100% get performance test, I also observed frequent young GCs; the 
> largest memory footprint in the young gen should be the on-heap block byte[].
> In fact, we can read an HFile's block into a ByteBuffer directly instead of 
> a byte[] to reduce young GC. We did not implement this before because the 
> older HDFS client had no ByteBuffer read interface, but 2.7+ supports it, 
> so I think we can fix this now. 
> Will provide a patch and a perf comparison for this. 
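The idea described in the quoted issue can be sketched with plain NIO, independent of Hadoop. This is not HBase/HDFS code: the `ReadableByteChannel` merely stands in for the HDFS client's 2.7+ ByteBuffer read path, and the point is that bytes land directly in a (possibly direct/off-heap) buffer instead of staging through an on-heap byte[].

```java
import java.io.ByteArrayInputStream;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;

// Plain-NIO sketch: read a block's bytes straight into a ByteBuffer, which
// may be a direct (off-heap) buffer, instead of into a byte[] plus a copy.
public class ByteBufferReadSketch {

    // Fill dst from ch until it is full or the stream ends; returns bytes read.
    static int readFully(ReadableByteChannel ch, ByteBuffer dst) throws Exception {
        int total = 0;
        while (dst.hasRemaining()) {
            int n = ch.read(dst);   // copies straight into dst, no byte[] staging
            if (n < 0) {
                break;              // EOF
            }
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws Exception {
        byte[] onDisk = "header+block-payload".getBytes("UTF-8");
        ReadableByteChannel ch = Channels.newChannel(new ByteArrayInputStream(onDisk));

        // A direct buffer never appears in the young generation as a byte[].
        ByteBuffer block = ByteBuffer.allocateDirect(onDisk.length);
        int n = readFully(ch, block);
        block.flip();

        System.out.println(n);                 // prints 20
        System.out.println(block.remaining()); // prints 20
    }
}
```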



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-22383) github PR integration for qabot needs spotbugs/findbugs

2019-05-08 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-22383:
---

 Summary: github PR integration for qabot needs spotbugs/findbugs
 Key: HBASE-22383
 URL: https://issues.apache.org/jira/browse/HBASE-22383
 Project: HBase
  Issue Type: Task
  Components: community, test
Reporter: Sean Busbey


PRs on github aren't getting findbugs run, which means we don't catch things 
until nightly.

e.g. https://github.com/apache/hbase/pull/216



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (HBASE-21777) "Tune compaction throughput" debug messages even when nothing has changed

2019-05-08 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reopened HBASE-21777:
-

This caused a findbugs regression in nightly:

https://builds.apache.org/job/HBase%20Nightly/job/master/990/artifact/output-jdk8-hadoop2/branch-findbugs-hbase-server-warnings.html

Please fix or revert.

It looks like github PR qabot doesn't have findbugs set up, I've filed 
HBASE-22383 for the gap.

> "Tune compaction throughput" debug messages even when nothing has changed 
> --
>
> Key: HBASE-21777
> URL: https://issues.apache.org/jira/browse/HBASE-21777
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Affects Versions: 1.5.0
>Reporter: Andrew Purtell
>Assignee: Tak Lon (Stephen) Wu
>Priority: Trivial
>  Labels: branch-1
> Fix For: 3.0.0, 1.5.0, 2.2.1
>
>
> PressureAwareCompactionThroughputController will log "tune compaction 
> throughput" debug messages even when, after consideration, the re-tuning 
> makes no change to the current settings. In that case it would be better 
> not to log anything.
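A minimal sketch of the fix direction described above. This is not the actual controller code, and the tuning formula is invented for illustration: compute the new limit first, and emit the DEBUG line only when it differs from the current one.

```java
// Sketch: suppress the "tune compaction throughput" message when the
// re-computed limit equals the one already in effect.
public class TuneLogSketch {
    static double current = 50.0;
    static StringBuilder log = new StringBuilder();

    static void tune(double pressure) {
        double next = 50.0 + 50.0 * pressure;   // hypothetical tuning formula
        if (next == current) {
            return;                             // nothing changed: stay silent
        }
        log.append("tune compaction throughput from ")
           .append(current).append(" to ").append(next).append('\n');
        current = next;
    }

    public static void main(String[] args) {
        tune(0.0);  // no change -> no log line
        tune(0.5);  // change    -> one log line
        tune(0.5);  // same target again -> no new line
        System.out.print(log); // prints exactly one "tune compaction throughput" line
    }
}
```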



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase] Reidddddd commented on a change in pull request #225: HBASE-22377 Provide API to check the existence of a namespace which does not require ADMIN permissions

2019-05-08 Thread GitBox
Reidddddd commented on a change in pull request #225: HBASE-22377 Provide API 
to check the existence of a namespace which does not require ADMIN permissions
URL: https://github.com/apache/hbase/pull/225#discussion_r282079017
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ShortCircuitMasterConnection.java
 ##
 @@ -31,140 +31,7 @@
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AccessControlProtos.RevokeResponse;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.CoprocessorServiceRequest;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.CoprocessorServiceResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AbortProcedureRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AbortProcedureResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AddColumnRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AddColumnResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AssignRegionRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AssignRegionResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.BalanceRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.BalanceResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.ClearDeadServersRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.ClearDeadServersResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateNamespaceRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateNamespaceResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateTableRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateTableResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DecommissionRegionServersRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DecommissionRegionServersResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteColumnRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteColumnResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteNamespaceRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteNamespaceResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteSnapshotRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteSnapshotResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteTableRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteTableResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DisableTableRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DisableTableResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.EnableCatalogJanitorRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.EnableCatalogJanitorResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.EnableTableRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.EnableTableResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.ExecProcedureRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.ExecProcedureResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.GetClusterStatusRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.GetClusterStatusResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.GetCompletedSnapshotsRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.GetCompletedSnapshotsResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.GetLocksRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.GetLocksResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.GetNamespaceDescriptorRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.GetNamespaceDescriptorResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.GetProcedureResultRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.GetProcedureResultResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.GetProceduresRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.GetProceduresResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.GetSchemaAlterStatusRequest;
-import

[GitHub] [hbase] Reidddddd commented on a change in pull request #225: HBASE-22377 Provide API to check the existence of a namespace which does not require ADMIN permissions

2019-05-08 Thread GitBox
Reidddddd commented on a change in pull request #225: HBASE-22377 Provide API 
to check the existence of a namespace which does not require ADMIN permissions
URL: https://github.com/apache/hbase/pull/225#discussion_r282079732
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
 ##
 @@ -158,140 +158,7 @@
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.LockServiceProtos.LockResponse;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.LockServiceProtos.LockService;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AbortProcedureRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AbortProcedureResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AddColumnRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AddColumnResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AssignRegionRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.AssignRegionResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.BalanceRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.BalanceResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.ClearDeadServersRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.ClearDeadServersResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateNamespaceRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateNamespaceResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateTableRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.CreateTableResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DecommissionRegionServersRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DecommissionRegionServersResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteColumnRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteColumnResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteNamespaceRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteNamespaceResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteSnapshotRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteSnapshotResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteTableRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DeleteTableResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DisableTableRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.DisableTableResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.EnableCatalogJanitorRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.EnableCatalogJanitorResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.EnableTableRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.EnableTableResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.ExecProcedureRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.ExecProcedureResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.GetClusterStatusRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.GetClusterStatusResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.GetCompletedSnapshotsRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.GetCompletedSnapshotsResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.GetLocksRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.GetLocksResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.GetNamespaceDescriptorRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.GetNamespaceDescriptorResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.GetProcedureResultRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.GetProcedureResultResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.GetProceduresRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.GetProceduresResponse;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.GetSchemaAlterStatusRequest;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generate

[GitHub] [hbase] Reidddddd commented on issue #225: HBASE-22377 Provide API to check the existence of a namespace which does not require ADMIN permissions

2019-05-08 Thread GitBox
Reidddddd commented on issue #225: HBASE-22377 Provide API to check the 
existence of a namespace which does not require ADMIN permissions
URL: https://github.com/apache/hbase/pull/225#issuecomment-490499341
 
 
   LGTM overall, I just left two comments about import check-style.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] syedmurtazahassan opened a new pull request #227: HBASE-22358 Change rubocop configuration for method length

2019-05-08 Thread GitBox
syedmurtazahassan opened a new pull request #227: HBASE-22358 Change rubocop 
configuration for method length
URL: https://github.com/apache/hbase/pull/227
 
 
   Changed the configuration and analyzed the code; the change in configuration 
works. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (HBASE-22384) Formatting issues in administration section of book

2019-05-08 Thread Jan Hentschel (JIRA)
Jan Hentschel created HBASE-22384:
-

 Summary: Formatting issues in administration section of book
 Key: HBASE-22384
 URL: https://issues.apache.org/jira/browse/HBASE-22384
 Project: HBase
  Issue Type: Improvement
  Components: community, documentation
Reporter: Jan Hentschel
Assignee: Jan Hentschel


The administration section in the book (64.3.2. Administration) has some 
formatting issues. Due to these issues, the list count is not accurate, and 
neither is the indentation of some code snippets.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HBASE-22384) Formatting issues in administration section of book

2019-05-08 Thread Jan Hentschel (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-22384 started by Jan Hentschel.
-
> Formatting issues in administration section of book
> ---
>
> Key: HBASE-22384
> URL: https://issues.apache.org/jira/browse/HBASE-22384
> Project: HBase
>  Issue Type: Improvement
>  Components: community, documentation
>Reporter: Jan Hentschel
>Assignee: Jan Hentschel
>Priority: Minor
>
> The administration section in the book (64.3.2. Administration) has some 
> formatting issues. Due to these issues, the list count is not accurate, and 
> neither is the indentation of some code snippets.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase] HorizonNet opened a new pull request #228: HBASE-22384 Fixed formatting issues in administration section of book

2019-05-08 Thread GitBox
HorizonNet opened a new pull request #228: HBASE-22384 Fixed formatting issues 
in administration section of book
URL: https://github.com/apache/hbase/pull/228
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-20821) Re-creating a dropped namespace and contained table inherits previously set space quota settings

2019-05-08 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835658#comment-16835658
 ] 

HBase QA commented on HBASE-20821:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
17s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
35s{color} | {color:blue} hbase-server in master has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
22s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 12s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}145m 
16s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}183m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/271/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-20821 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12968184/HBASE-20821.master.v003.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 6097dd1ae87d 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 46fe9833a9 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.11 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/271/testReport/ |
| Max. process+thread count | 5199 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/271/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |


This message was automatica

[GitHub] [hbase] HorizonNet merged pull request #226: HBASE-22379 Fixed Markdown in 'Voting on Release Candidates' section

2019-05-08 Thread GitBox
HorizonNet merged pull request #226: HBASE-22379 Fixed Markdown in 'Voting on 
Release Candidates' section
URL: https://github.com/apache/hbase/pull/226
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] Apache-HBase commented on issue #227: HBASE-22358 Change rubocop configuration for method length

2019-05-08 Thread GitBox
Apache-HBase commented on issue #227: HBASE-22358 Change rubocop configuration 
for method length
URL: https://github.com/apache/hbase/pull/227#issuecomment-490522333
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 57 | Docker mode activated. |
   ||| _ Prechecks _ |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -0 | test4tests | 1 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ master Compile Tests _ |
   | +1 | mvninstall | 408 | master passed |
   | +1 | javadoc | 49 | master passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 412 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | javadoc | 40 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 486 | hbase-shell in the patch passed. |
   | +1 | asflicense | 11 | The patch does not generate ASF License warnings. |
   | | | 2075 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-227/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/227 |
   | Optional Tests |  dupname  asflicense  javac  javadoc  unit  yamllint  |
   | uname | Linux d6a6abbd4f30 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed 
Feb 13 15:00:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /testptch/patchprocess/precommit/personality/provided.sh |
   | git revision | master / 46fe9833a9 |
   | maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
   | Default Java | 1.8.0_181 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-227/1/testReport/
 |
   | Max. process+thread count | 2611 (vs. ulimit of 1) |
   | modules | C: hbase-shell U: hbase-shell |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-227/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-21512) Introduce an AsyncClusterConnection and replace the usage of ClusterConnection

2019-05-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835669#comment-16835669
 ] 

Hudson commented on HBASE-21512:


Results for branch HBASE-21512
[build #216 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/216/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/216//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/216//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/216//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Introduce an AsyncClusterConnection and replace the usage of ClusterConnection
> --
>
> Key: HBASE-21512
> URL: https://issues.apache.org/jira/browse/HBASE-21512
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Duo Zhang
>Priority: Major
> Fix For: 3.0.0
>
>
> At least for the RSProcedureDispatcher, with CompletableFuture we do not need 
> to set a delay and use a thread pool any more, which could reduce the 
> resource usage and also the latency.
> Once this is done, I think we can remove the ClusterConnection completely, 
> and start to rewrite the old sync client based on the async client, which 
> could reduce the code base a lot for our client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22072) High read/write intensive regions may cause long crash recovery

2019-05-08 Thread ramkrishna.s.vasudevan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835679#comment-16835679
 ] 

ramkrishna.s.vasudevan commented on HBASE-22072:


Pushed to all the branch-2 lines. Need to rebase the patch for branch-1 series. 
Will resolve it once I push it there. Thanks for all the reviews.

> High read/write intensive regions may cause long crash recovery
> ---
>
> Key: HBASE-22072
> URL: https://issues.apache.org/jira/browse/HBASE-22072
> Project: HBase
>  Issue Type: Bug
>  Components: Performance, Recovery
>Affects Versions: 2.1.2
>Reporter: Pavel
>Assignee: ramkrishna.s.vasudevan
>Priority: Major
>  Labels: compaction
> Attachments: HBASE-22072.HBASE-21879-v1.patch
>
>
> Compaction of a high-read-loaded region may leave compacted files undeleted 
> because of existing scan references:
> INFO org.apache.hadoop.hbase.regionserver.HStore - Can't archive compacted 
> file hdfs://hdfs-ha/hbase... because of either isCompactedAway=true or file 
> has reference, isReferencedInReads=true, refCount=1, skipping for now
> If the region is also under high write load this happens quite often, and the 
> region may end up with few storefiles but tons of undeleted compacted hdfs 
> files.
> The region keeps all those files (in my case thousands) until the graceful 
> region closing procedure, which ignores existing references and drops 
> obsolete files. This works fine, apart from consuming some extra hdfs space, 
> but only in the case of normal region closing. If the region server crashes, 
> the new region server responsible for that overfilled region reads the hdfs 
> folder and tries to deal with all the undeleted files, producing tons of 
> storefiles and compaction tasks and consuming an abnormal amount of memory, 
> which may lead to an OutOfMemory exception and further region server crashes. 
> Writes to the region stop because the number of storefiles reaches the 
> *hbase.hstore.blockingStoreFiles* limit, which forces high GC duty and may 
> take hours to compact all files into a working set of files.
> A workaround is to periodically check the file count of hdfs folders and 
> force region assignment for the ones with too many files.
> It would be nice if the regionserver had a setting similar to 
> *hbase.hstore.blockingStoreFiles* that invoked an attempt to drop undeleted 
> compacted files once the number of files reaches this setting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
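The workaround described in the report above can be sketched as a small script. This is a hypothetical illustration only: the function name `flag_overfull_dirs` and the local-directory layout are stand-ins, and a real check would run `hdfs dfs -count` against the region directories under the HBase root dir and then reassign flagged regions (e.g. with the `move` command in `hbase shell`).

```shell
# Print every directory directly under $1 that holds more than $2 regular
# files; such a directory stands in for a region store dir with too many
# undeleted compacted files.
flag_overfull_dirs() {
  local root="$1" threshold="$2" dir count
  for dir in "$root"/*/; do
    [ -d "$dir" ] || continue
    # Count only regular files at the top level of the store dir.
    count=$(find "$dir" -maxdepth 1 -type f | wc -l)
    if [ "$count" -gt "$threshold" ]; then
      echo "${dir%/}"
    fi
  done
}
```

Run periodically (e.g. from cron), this surfaces the overfilled regions so an operator can force a reassignment before a crash turns them into a long recovery.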


[jira] [Assigned] (HBASE-22311) Update community docs to recommend use of "Co-authored-by" in git commits

2019-05-08 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reassigned HBASE-22311:
---

Assignee: Norbert Kalmar

that'd be great!

> Update community docs to recommend use of "Co-authored-by" in git commits
> -
>
> Key: HBASE-22311
> URL: https://issues.apache.org/jira/browse/HBASE-22311
> Project: HBase
>  Issue Type: Task
>  Components: community, documentation
>Reporter: Sean Busbey
>Assignee: Norbert Kalmar
>Priority: Minor
>
> discussion on [\[DISCUSS\] switch from "Ammending-Author" to "Co-authored-by" 
> in commit messages|https://s.apache.org/ISs4] seems to have come out in favor.
>  
> The updated section should include a brief explanation (one that expressly 
> covers "multiple authors" instead of just the "fixed up this thing" wording 
> that's there for Amending-Author). It should also have pointers to the GitHub 
> feature explanation. So long as those docs exist, they're pretty good.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21777) "Tune compaction throughput" debug messages even when nothing has changed

2019-05-08 Thread Yu Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835759#comment-16835759
 ] 

Yu Li commented on HBASE-21777:
---

Pushed hotfix of the findbugs warning to all relevant branches. Thanks for the 
reminder [~busbey]

> "Tune compaction throughput" debug messages even when nothing has changed 
> --
>
> Key: HBASE-21777
> URL: https://issues.apache.org/jira/browse/HBASE-21777
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Affects Versions: 1.5.0
>Reporter: Andrew Purtell
>Assignee: Tak Lon (Stephen) Wu
>Priority: Trivial
>  Labels: branch-1
> Fix For: 3.0.0, 1.5.0, 2.2.1
>
>
> PressureAwareCompactionThroughputController will log "tune compaction 
> throughput" debug messages even when after consideration the re-tuning makes 
> no change to current settings. In that case it would be better not to log 
> anything.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22072) High read/write intensive regions may cause long crash recovery

2019-05-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835760#comment-16835760
 ] 

Hudson commented on HBASE-22072:


Results for branch branch-2.1
[build #1122 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1122/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1122//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1122//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1122//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> High read/write intensive regions may cause long crash recovery
> ---
>
> Key: HBASE-22072
> URL: https://issues.apache.org/jira/browse/HBASE-22072
> Project: HBase
>  Issue Type: Bug
>  Components: Performance, Recovery
>Affects Versions: 2.1.2
>Reporter: Pavel
>Assignee: ramkrishna.s.vasudevan
>Priority: Major
>  Labels: compaction
> Attachments: HBASE-22072.HBASE-21879-v1.patch
>
>
> Compaction of a high-read-loaded region may leave compacted files undeleted 
> because of existing scan references:
> INFO org.apache.hadoop.hbase.regionserver.HStore - Can't archive compacted 
> file hdfs://hdfs-ha/hbase... because of either isCompactedAway=true or file 
> has reference, isReferencedInReads=true, refCount=1, skipping for now
> If the region is also under high write load this happens quite often, and the 
> region may end up with few storefiles but tons of undeleted compacted hdfs 
> files.
> The region keeps all those files (in my case thousands) until the graceful 
> region closing procedure, which ignores existing references and drops 
> obsolete files. This works fine, apart from consuming some extra hdfs space, 
> but only in the case of normal region closing. If the region server crashes, 
> the new region server responsible for that overfilled region reads the hdfs 
> folder and tries to deal with all the undeleted files, producing tons of 
> storefiles and compaction tasks and consuming an abnormal amount of memory, 
> which may lead to an OutOfMemory exception and further region server crashes. 
> Writes to the region stop because the number of storefiles reaches the 
> *hbase.hstore.blockingStoreFiles* limit, which forces high GC duty and may 
> take hours to compact all files into a working set of files.
> A workaround is to periodically check the file count of hdfs folders and 
> force region assignment for the ones with too many files.
> It would be nice if the regionserver had a setting similar to 
> *hbase.hstore.blockingStoreFiles* that invoked an attempt to drop undeleted 
> compacted files once the number of files reaches this setting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-21777) "Tune compaction throughput" debug messages even when nothing has changed

2019-05-08 Thread Yu Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li resolved HBASE-21777.
---
Resolution: Fixed

> "Tune compaction throughput" debug messages even when nothing has changed 
> --
>
> Key: HBASE-21777
> URL: https://issues.apache.org/jira/browse/HBASE-21777
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Affects Versions: 1.5.0
>Reporter: Andrew Purtell
>Assignee: Tak Lon (Stephen) Wu
>Priority: Trivial
>  Labels: branch-1
> Fix For: 3.0.0, 1.5.0, 2.2.1
>
>
> PressureAwareCompactionThroughputController will log "tune compaction 
> throughput" debug messages even when after consideration the re-tuning makes 
> no change to current settings. In that case it would be better not to log 
> anything.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-22385) Consider "programmatic" HFiles

2019-05-08 Thread Lars Hofhansl (JIRA)
Lars Hofhansl created HBASE-22385:
-

 Summary: Consider "programmatic" HFiles
 Key: HBASE-22385
 URL: https://issues.apache.org/jira/browse/HBASE-22385
 Project: HBase
  Issue Type: Brainstorming
Reporter: Lars Hofhansl


For various use case (among other there is mass deletes) it would be great if 
HBase had a mechanism for programmatic HFiles. I.e. HFiles (with HFileScanner 
and Reader) that produce KeyValue just like any other old HFile, but the key 
values produced are generated or produced by some other means rather than being 
physically read from some storage medium.

In fact this could be a generalization for the various HFiles we have: (Normal) 
HFiles, HFileLinks, HalfStoreFiles, etc.

A simple way could be to allow for storing a classname into the HFile. Upon 
reading the HFile HBase would instantiate an instance of that class and that 
instance is responsible for all further interaction with that HFile. For normal 
HFiles it would just be the normal HFileReader.

(Remember this is Brainstorming)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
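The proposal above — record a class name with the HFile and instantiate that class to serve all further reads — can be sketched in a few lines of plain Java. Everything here is hypothetical: `ProgrammaticHFileReader`, `GeneratedDeletesReader`, and the `String`-valued cells are illustrative stand-ins, not HBase types (a real version would return `Cell`s through an `HFileScanner`).

```java
import java.util.Iterator;
import java.util.List;

// Stand-in for a reader/scanner over an HFile's cells.
interface ProgrammaticHFileReader {
    Iterator<String> scan();
}

// A "file" whose contents are generated on the fly rather than read from
// storage, e.g. a synthetic run of delete markers for a mass delete.
class GeneratedDeletesReader implements ProgrammaticHFileReader {
    public Iterator<String> scan() {
        return List.of("row1/DeleteFamily", "row2/DeleteFamily").iterator();
    }
}

public class Demo {
    // Instantiate the reader class named in the (hypothetical) file trailer;
    // normal HFiles would name the ordinary HFileReader here.
    static ProgrammaticHFileReader load(String className) throws Exception {
        return (ProgrammaticHFileReader)
            Class.forName(className).getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        ProgrammaticHFileReader reader = load("GeneratedDeletesReader");
        reader.scan().forEachRemaining(System.out::println);
    }
}
```

The reflective `load` step is the generalization point the proposal mentions: HFileLinks, HalfStoreFiles, and generated files would each just name a different reader class.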


[jira] [Commented] (HBASE-22385) Consider "programmatic" HFiles

2019-05-08 Thread Lars Hofhansl (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835763#comment-16835763
 ] 

Lars Hofhansl commented on HBASE-22385:
---

[~jisaac], what we chatted about.

> Consider "programmatic" HFiles
> --
>
> Key: HBASE-22385
> URL: https://issues.apache.org/jira/browse/HBASE-22385
> Project: HBase
>  Issue Type: Brainstorming
>Reporter: Lars Hofhansl
>Priority: Major
>
> For various use case (among other there is mass deletes) it would be great if 
> HBase had a mechanism for programmatic HFiles. I.e. HFiles (with HFileScanner 
> and Reader) that produce KeyValue just like any other old HFile, but the key 
> values produced are generated or produced by some other means rather than 
> being physically read from some storage medium.
> In fact this could be a generalization for the various HFiles we have: 
> (Normal) HFiles, HFileLinks, HalfStoreFiles, etc.
> A simple way could be to allow for storing a classname into the HFile. Upon 
> reading the HFile HBase would instantiate an instance of that class and that 
> instance is responsible for all further interaction with that HFile. For 
> normal HFiles it would just be the normal HFileReader.
> (Remember this is Brainstorming)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22385) Consider "programmatic" HFiles

2019-05-08 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-22385:
--
Description: 
For various use cases (among others there is mass deletes) it would be great if 
HBase had a mechanism for programmatic HFiles. I.e. HFiles (with HFileScanner 
and Reader) that produce KeyValues just like any other old HFile, but the key 
values produced are generated or produced by some other means rather than being 
physically read from some storage medium.

In fact this could be a generalization for the various HFiles we have: (Normal) 
HFiles, HFileLinks, HalfStoreFiles, etc.

A simple way could be to allow for storing a classname into the HFile. Upon 
reading the HFile HBase would instantiate an instance of that class and that 
instance is responsible for all further interaction with that HFile. For normal 
HFiles it would just be the normal HFileReaderVx.

(Remember this is Brainstorming :) )

  was:
For various use case (among other there is mass deletes) it would be great if 
HBase had a mechanism for programmatic HFiles. I.e. HFiles (with HFileScanner 
and Reader) that produce KeyValue just like any other old HFile, but the key 
values produced are generated or produced by some other means rather than being 
physically read from some storage medium.

In fact this could be a generalization for the various HFiles we have: (Normal) 
HFiles, HFileLinks, HalfStoreFiles, etc.

A simple way could be to allow for storing a classname into the HFile. Upon 
reading the HFile HBase would instantiate an instance of that class and that 
instance is responsible for all further interaction with that HFile. For normal 
HFiles it would just be the normal HFileReader.

(Remember this is Brainstorming)


> Consider "programmatic" HFiles
> --
>
> Key: HBASE-22385
> URL: https://issues.apache.org/jira/browse/HBASE-22385
> Project: HBase
>  Issue Type: Brainstorming
>Reporter: Lars Hofhansl
>Priority: Major
>
> For various use cases (among others there is mass deletes) it would be great 
> if HBase had a mechanism for programmatic HFiles. I.e. HFiles (with 
> HFileScanner and Reader) that produce KeyValues just like any other old 
> HFile, but the key values produced are generated or produced by some other 
> means rather than being physically read from some storage medium.
> In fact this could be a generalization for the various HFiles we have: 
> (Normal) HFiles, HFileLinks, HalfStoreFiles, etc.
> A simple way could be to allow for storing a classname into the HFile. Upon 
> reading the HFile HBase would instantiate an instance of that class and that 
> instance is responsible for all further interaction with that HFile. For 
> normal HFiles it would just be the normal HFileReaderVx.
> (Remember this is Brainstorming :) )



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22072) High read/write intensive regions may cause long crash recovery

2019-05-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835776#comment-16835776
 ] 

Hudson commented on HBASE-22072:


Results for branch branch-2.0
[build #1572 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1572/]: (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1572//General_Nightly_Build_Report/]

(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1572//JDK8_Nightly_Build_Report_(Hadoop2)/]

(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- Something went wrong running this stage, please [check relevant console output|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1572//console].

(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> High read/write intensive regions may cause long crash recovery
> ---
>
> Key: HBASE-22072
> URL: https://issues.apache.org/jira/browse/HBASE-22072
> Project: HBase
>  Issue Type: Bug
>  Components: Performance, Recovery
>Affects Versions: 2.1.2
>Reporter: Pavel
>Assignee: ramkrishna.s.vasudevan
>Priority: Major
>  Labels: compaction
> Attachments: HBASE-22072.HBASE-21879-v1.patch
>
>
> Compaction of a region under high read load may leave compacted files undeleted 
> because of existing scan references:
> INFO org.apache.hadoop.hbase.regionserver.HStore - Can't archive compacted 
> file hdfs://hdfs-ha/hbase... because of either isCompactedAway=true or file 
> has reference, isReferencedInReads=true, refCount=1, skipping for now
> If the region is also under high write load this happens quite often, and the 
> region may have only a few storefiles but tons of undeleted compacted hdfs files.
> The region keeps all those files (in my case thousands) until the graceful region 
> closing procedure, which ignores existing references and drops obsolete files. 
> This works fine apart from consuming some extra hdfs space, but only in the case 
> of normal region closing. If the region server crashes, then the new region 
> server responsible for that overfilled region reads the hdfs folder and tries to 
> deal with all the undeleted files, producing tons of storefiles and compaction 
> tasks and consuming an abnormal amount of memory, which may lead to an 
> OutOfMemory exception and further region server crashes. This stops writes to 
> the region because the number of storefiles reaches the 
> *hbase.hstore.blockingStoreFiles* limit, forces high GC duty, and may take hours 
> to compact all files into a working set of files.
> A workaround is to periodically check the file count in hdfs folders and force 
> region assignment for the ones with too many files.
> It would be nice if the regionserver had a setting similar to 
> hbase.hstore.blockingStoreFiles that triggers an attempt to drop undeleted 
> compacted files when the number of files reaches this setting.
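The proposed guard — compare a store's file count against a blocking threshold and, when exceeded, attempt to drop already-compacted files — can be sketched against a local filesystem. `CompactedFileGuard` is entirely hypothetical; only the threshold name mirrors the real `hbase.hstore.blockingStoreFiles` setting, and real HBase would archive the files rather than just report:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

// Hypothetical sketch of the proposed check: if a store directory accumulates
// more files than a blocking threshold, force an attempt to clean up the
// already-compacted ones instead of waiting for graceful region close.
public class CompactedFileGuard {
    static final int BLOCKING_FILES = 16; // cf. hbase.hstore.blockingStoreFiles

    static boolean overThreshold(Path storeDir) throws IOException {
        try (Stream<Path> files = Files.list(storeDir)) {
            return files.count() > BLOCKING_FILES;
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("store");
        for (int i = 0; i < 20; i++) {
            Files.createFile(dir.resolve("compacted-" + i));
        }
        if (overThreshold(dir)) {
            System.out.println("store over threshold: schedule compacted-file cleanup");
        }
    }
}
```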





[GitHub] [hbase] jatsakthi commented on a change in pull request #225: HBASE-22377 Provide API to check the existence of a namespace which does not require ADMIN permissions

2019-05-08 Thread GitBox
jatsakthi commented on a change in pull request #225: HBASE-22377 Provide API 
to check the existence of a namespace which does not require ADMIN permissions
URL: https://github.com/apache/hbase/pull/225#discussion_r282182371
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterCoprocessorHost.java
 ##
 @@ -297,6 +297,24 @@ public void call(MasterObserver observer) throws IOException {
     });
   }
 
+  public void preListNamespaces(final List<String> namespaces) throws IOException {
+    execOperation(coprocEnvironments.isEmpty() ? null : new MasterObserverOperation() {
+      @Override
+      public void call(MasterObserver oserver) throws IOException {
+        oserver.preListNamespaces(this, namespaces);
 
 Review comment:
   ```suggestion
   observer.preListNamespaces(this, namespaces);
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] jatsakthi commented on a change in pull request #225: HBASE-22377 Provide API to check the existence of a namespace which does not require ADMIN permissions

2019-05-08 Thread GitBox
jatsakthi commented on a change in pull request #225: HBASE-22377 Provide API 
to check the existence of a namespace which does not require ADMIN permissions
URL: https://github.com/apache/hbase/pull/225#discussion_r282182598
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterCoprocessorHost.java
 ##
 @@ -297,6 +297,24 @@ public void call(MasterObserver observer) throws IOException {
     });
   }
 
+  public void preListNamespaces(final List<String> namespaces) throws IOException {
+    execOperation(coprocEnvironments.isEmpty() ? null : new MasterObserverOperation() {
+      @Override
+      public void call(MasterObserver oserver) throws IOException {
+        oserver.preListNamespaces(this, namespaces);
+      }
+    });
+  }
+
+  public void postListNamespaces(final List<String> namespaces) throws IOException {
+    execOperation(coprocEnvironments.isEmpty() ? null : new MasterObserverOperation() {
+      @Override
+      public void call(MasterObserver oserver) throws IOException {
+        oserver.postListNamespaces(this, namespaces);
 
 Review comment:
   ```suggestion
   observer.postListNamespaces(this, namespaces);
   ```




[GitHub] [hbase] jatsakthi commented on a change in pull request #225: HBASE-22377 Provide API to check the existence of a namespace which does not require ADMIN permissions

2019-05-08 Thread GitBox
jatsakthi commented on a change in pull request #225: HBASE-22377 Provide API 
to check the existence of a namespace which does not require ADMIN permissions
URL: https://github.com/apache/hbase/pull/225#discussion_r282182515
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterCoprocessorHost.java
 ##
 @@ -297,6 +297,24 @@ public void call(MasterObserver observer) throws IOException {
     });
   }
 
+  public void preListNamespaces(final List<String> namespaces) throws IOException {
+    execOperation(coprocEnvironments.isEmpty() ? null : new MasterObserverOperation() {
+      @Override
+      public void call(MasterObserver oserver) throws IOException {
+        oserver.preListNamespaces(this, namespaces);
+      }
+    });
+  }
+
+  public void postListNamespaces(final List<String> namespaces) throws IOException {
+    execOperation(coprocEnvironments.isEmpty() ? null : new MasterObserverOperation() {
+      @Override
+      public void call(MasterObserver oserver) throws IOException {
 
 Review comment:
   ```suggestion
 public void call(MasterObserver observer) throws IOException {
   ```




[GitHub] [hbase] jatsakthi commented on a change in pull request #225: HBASE-22377 Provide API to check the existence of a namespace which does not require ADMIN permissions

2019-05-08 Thread GitBox
jatsakthi commented on a change in pull request #225: HBASE-22377 Provide API 
to check the existence of a namespace which does not require ADMIN permissions
URL: https://github.com/apache/hbase/pull/225#discussion_r282180866
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterCoprocessorHost.java
 ##
 @@ -297,6 +297,24 @@ public void call(MasterObserver observer) throws IOException {
     });
   }
 
+  public void preListNamespaces(final List<String> namespaces) throws IOException {
+    execOperation(coprocEnvironments.isEmpty() ? null : new MasterObserverOperation() {
+      @Override
+      public void call(MasterObserver oserver) throws IOException {
 
 Review comment:
   Just a minor spelling nit: 
   ```suggestion
 public void call(MasterObserver observer) throws IOException {
   ```




[GitHub] [hbase] Apache-HBase commented on issue #193: HBASE-13798 TestFromClientSide* don't close the Table

2019-05-08 Thread GitBox
Apache-HBase commented on issue #193: HBASE-13798 TestFromClientSide* don't 
close the Table
URL: https://github.com/apache/hbase/pull/193#issuecomment-490598338
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 50 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | hbaseanti | 0 |  Patch does not have any anti-patterns. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test files. |
   ||| _ master Compile Tests _ |
   | +1 | mvninstall | 236 | master passed |
   | +1 | compile | 52 | master passed |
   | +1 | checkstyle | 69 | master passed |
   | +1 | shadedjars | 258 | branch has no errors when building our shaded downstream artifacts. |
   | -1 | findbugs | 178 | hbase-server in master has 1 extant Findbugs warnings. |
   | +1 | javadoc | 32 | master passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 236 | the patch passed |
   | +1 | compile | 52 | the patch passed |
   | +1 | javac | 52 | the patch passed |
   | -1 | checkstyle | 69 | hbase-server: The patch generated 5 new + 54 unchanged - 61 fixed = 59 total (was 115) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedjars | 262 | patch has no errors when building our shaded downstream artifacts. |
   | +1 | hadoopcheck | 479 | Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. |
   | +1 | findbugs | 190 | the patch passed |
   | +1 | javadoc | 33 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 17432 | hbase-server in the patch failed. |
   | +1 | asflicense | 36 | The patch does not generate ASF License warnings. |
   | | | 19739 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hbase.client.TestSnapshotTemporaryDirectoryWithRegionReplicas |
   |   | hadoop.hbase.client.TestFromClientSideWithCoprocessor |
   |   | hadoop.hbase.util.TestFromClientSide3WoUnsafe |
   |   | hadoop.hbase.client.TestFromClientSide3 |
   |   | hadoop.hbase.client.TestAdmin1 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-193/8/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/193 |
   | Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
   | uname | Linux 11d43652cb7e 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /testptch/patchprocess/precommit/personality/provided.sh |
   | git revision | master / 46fe9833a9 |
   | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
   | Default Java | 1.8.0_181 |
   | findbugs | v3.1.11 |
   | findbugs | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-193/8/artifact/out/branch-findbugs-hbase-server-warnings.html |
   | checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-193/8/artifact/out/diff-checkstyle-hbase-server.txt |
   | unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-193/8/artifact/out/patch-unit-hbase-server.txt |
   |  Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-193/8/testReport/ |
   | Max. process+thread count | 5053 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-193/8/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hbase] jatsakthi commented on issue #225: HBASE-22377 Provide API to check the existence of a namespace which does not require ADMIN permissions

2019-05-08 Thread GitBox
jatsakthi commented on issue #225: HBASE-22377 Provide API to check the 
existence of a namespace which does not require ADMIN permissions
URL: https://github.com/apache/hbase/pull/225#issuecomment-490600821
 
 
   Also, would it make sense to add a comment somewhere in master or 
preListNamespaces, or even on the client side, about making sure that no 
accesscontroller check is skipped on purpose, so that folks who try to introduce 
such a change are aware of the requirement of this issue? And how about adding a 
few test cases around admin.listNamespaces() that check this as well, since 
there aren't very many tests hitting the admin.listNamespaces codepath 
currently?




[jira] [Updated] (HBASE-22385) Consider "programmatic" HFiles

2019-05-08 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-22385:
--
Description: 
For various use cases (among others, mass deletes) it would be great if 
HBase had a mechanism for programmatic HFiles. I.e. HFiles (Reader) that 
produce KeyValues just like any other old HFile, but the key values produced 
are generated or produced by some other means rather than being physically read 
from some storage medium.

In fact this could be a generalization for the various HFiles we have: (Normal) 
HFiles, HFileLinks, HalfStoreFiles, etc.

A simple way could be to allow for storing a classname into the HFile. Upon 
reading the HFile HBase would instantiate an instance of that class and that 
instance is responsible for all further interaction with that HFile. For normal 
HFiles it would just be the normal HFileReaderVx. For that we'd also need to turn 
StoreFile.Reader into an interface (or a more basic base class) that can be 
properly implemented.

(Remember this is Brainstorming :) )

  was:
For various use cases (among others there is mass deletes) it would be great if 
HBase had a mechanism for programmatic HFiles. I.e. HFiles (with HFileScanner 
and Reader) that produce KeyValues just like any other old HFile, but the key 
values produced are generated or produced by some other means rather than being 
physically read from some storage medium.

In fact this could be a generalization for the various HFiles we have: (Normal) 
HFiles, HFileLinks, HalfStoreFiles, etc.

A simple way could be to allow for storing a classname into the HFile. Upon 
reading the HFile HBase would instantiate an instance of that class and that 
instance is responsible for all further interaction with that HFile. For normal 
HFiles it would just be the normal HFileReaderVx.

(Remember this is Brainstorming :) )


> Consider "programmatic" HFiles
> --
>
> Key: HBASE-22385
> URL: https://issues.apache.org/jira/browse/HBASE-22385
> Project: HBase
>  Issue Type: Brainstorming
>Reporter: Lars Hofhansl
>Priority: Major
>
> For various use cases (among others, mass deletes) it would be great 
> if HBase had a mechanism for programmatic HFiles. I.e. HFiles (Reader) that 
> produce KeyValues just like any other old HFile, but the key values produced 
> are generated or produced by some other means rather than being physically 
> read from some storage medium.
> In fact this could be a generalization for the various HFiles we have: 
> (Normal) HFiles, HFileLinks, HalfStoreFiles, etc.
> A simple way could be to allow for storing a classname into the HFile. Upon 
> reading the HFile HBase would instantiate an instance of that class and that 
> instance is responsible for all further interaction with that HFile. For 
> normal HFiles it would just be the normal HFileReaderVx. For that we'd also 
> need to turn StoreFile.Reader into an interface (or a more basic base class) that 
> can be properly implemented.
> (Remember this is Brainstorming :) )





[jira] [Commented] (HBASE-22072) High read/write intensive regions may cause long crash recovery

2019-05-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835807#comment-16835807
 ] 

Hudson commented on HBASE-22072:


Results for branch branch-2.2
[build #244 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/244/]: (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/244//General_Nightly_Build_Report/]

(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/244//JDK8_Nightly_Build_Report_(Hadoop2)/]

(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/244//JDK8_Nightly_Build_Report_(Hadoop3)/]

(/) {color:green}+1 source release artifact{color}
-- See build output for details.

(/) {color:green}+1 client integration test{color}


> High read/write intensive regions may cause long crash recovery
> ---
>
> Key: HBASE-22072
> URL: https://issues.apache.org/jira/browse/HBASE-22072
> Project: HBase
>  Issue Type: Bug
>  Components: Performance, Recovery
>Affects Versions: 2.1.2
>Reporter: Pavel
>Assignee: ramkrishna.s.vasudevan
>Priority: Major
>  Labels: compaction
> Attachments: HBASE-22072.HBASE-21879-v1.patch
>
>
> Compaction of a region under high read load may leave compacted files undeleted 
> because of existing scan references:
> INFO org.apache.hadoop.hbase.regionserver.HStore - Can't archive compacted 
> file hdfs://hdfs-ha/hbase... because of either isCompactedAway=true or file 
> has reference, isReferencedInReads=true, refCount=1, skipping for now
> If the region is also under high write load this happens quite often, and the 
> region may have only a few storefiles but tons of undeleted compacted hdfs files.
> The region keeps all those files (in my case thousands) until the graceful region 
> closing procedure, which ignores existing references and drops obsolete files. 
> This works fine apart from consuming some extra hdfs space, but only in the case 
> of normal region closing. If the region server crashes, then the new region 
> server responsible for that overfilled region reads the hdfs folder and tries to 
> deal with all the undeleted files, producing tons of storefiles and compaction 
> tasks and consuming an abnormal amount of memory, which may lead to an 
> OutOfMemory exception and further region server crashes. This stops writes to 
> the region because the number of storefiles reaches the 
> *hbase.hstore.blockingStoreFiles* limit, forces high GC duty, and may take hours 
> to compact all files into a working set of files.
> A workaround is to periodically check the file count in hdfs folders and force 
> region assignment for the ones with too many files.
> It would be nice if the regionserver had a setting similar to 
> hbase.hstore.blockingStoreFiles that triggers an attempt to drop undeleted 
> compacted files when the number of files reaches this setting.





[jira] [Commented] (HBASE-22264) Separate out jars related to JDK 11 into a folder in /lib

2019-05-08 Thread Sakthi (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835820#comment-16835820
 ] 

Sakthi commented on HBASE-22264:


Ping [~busbey]. I think this one looks okay?

> Separate out jars related to JDK 11 into a folder in /lib
> -
>
> Key: HBASE-22264
> URL: https://issues.apache.org/jira/browse/HBASE-22264
> Project: HBase
>  Issue Type: Task
>  Components: java
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Minor
>  Labels: jdk11
> Attachments: hbase-22264.master.001.patch, 
> hbase-22264.master.002.patch, hbase-22264.master.003.patch, 
> hbase-22264.master.004.patch, hbase-22264.master.005.patch, 
> hbase-22264_jdks.txt
>
>
> UPDATE:
> Separate out the jars related to JDK 11 and control their addition to 
> the classpath using an environment variable or auto-detection of the 
> installed JDK version.
> OLD:
> This is in continuation with HBASE-22249. When compiled with jdk 8 and run on 
> jdk 11, the master branch throws the following exception during an attempt to 
> start the hbase rest server:
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> javax/annotation/Priority
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.modelFor(ComponentBag.java:483)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.access$100(ComponentBag.java:89)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:408)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:398)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:228)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.registerModel(ComponentBag.java:398)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.register(ComponentBag.java:235)
>   at 
> org.glassfish.jersey.model.internal.CommonConfig.register(CommonConfig.java:420)
>   at 
> org.glassfish.jersey.server.ResourceConfig.register(ResourceConfig.java:425)
>   at org.apache.hadoop.hbase.rest.RESTServer.run(RESTServer.java:245)
>   at org.apache.hadoop.hbase.rest.RESTServer.main(RESTServer.java:421)
> {code}
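The auto-detection half of the proposal can be sketched with the standard `java.specification.version` system property. `JdkLibSelector` and the `lib/jdk11` folder name are illustrative assumptions, not the actual HBase scripts:

```java
// Sketch: pick an extra lib directory based on the running JDK's spec
// version, mirroring the auto-detection idea discussed here.
// The class and folder names are hypothetical.
public class JdkLibSelector {
    // "java.specification.version" is "1.8" on JDK 8 and "9", "11", ...
    // on later releases, so normalize to the major number.
    static int majorJavaVersion() {
        String spec = System.getProperty("java.specification.version");
        return spec.startsWith("1.") ? Integer.parseInt(spec.substring(2))
                                     : Integer.parseInt(spec);
    }

    public static void main(String[] args) {
        int v = majorJavaVersion();
        // Only JDK 11+ would pull in the javax.annotation etc. shims.
        String extraLib = (v >= 11) ? "lib/jdk11" : "";
        System.out.println("major=" + v + " extraLib=" + extraLib);
    }
}
```

In the shipped tarball this decision would live in the launch scripts, with an environment variable as an override for the auto-detected value.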





[jira] [Commented] (HBASE-22264) Separate out jars related to JDK 11 into a folder in /lib

2019-05-08 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835825#comment-16835825
 ] 

Sean Busbey commented on HBASE-22264:
-

I'm waiting on an assembly so I can poke around at the results.

> Separate out jars related to JDK 11 into a folder in /lib
> -
>
> Key: HBASE-22264
> URL: https://issues.apache.org/jira/browse/HBASE-22264
> Project: HBase
>  Issue Type: Task
>  Components: java
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Minor
>  Labels: jdk11
> Attachments: hbase-22264.master.001.patch, 
> hbase-22264.master.002.patch, hbase-22264.master.003.patch, 
> hbase-22264.master.004.patch, hbase-22264.master.005.patch, 
> hbase-22264_jdks.txt
>
>
> UPDATE:
> Separate out the jars related to JDK 11 and control their addition to 
> the classpath using an environment variable or auto-detection of the 
> installed JDK version.
> OLD:
> This is in continuation with HBASE-22249. When compiled with jdk 8 and run on 
> jdk 11, the master branch throws the following exception during an attempt to 
> start the hbase rest server:
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> javax/annotation/Priority
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.modelFor(ComponentBag.java:483)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.access$100(ComponentBag.java:89)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:408)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:398)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:228)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.registerModel(ComponentBag.java:398)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.register(ComponentBag.java:235)
>   at 
> org.glassfish.jersey.model.internal.CommonConfig.register(CommonConfig.java:420)
>   at 
> org.glassfish.jersey.server.ResourceConfig.register(ResourceConfig.java:425)
>   at org.apache.hadoop.hbase.rest.RESTServer.run(RESTServer.java:245)
>   at org.apache.hadoop.hbase.rest.RESTServer.main(RESTServer.java:421)
> {code}





[jira] [Commented] (HBASE-22369) CoprocessorClassLoader.init() throws a RuntimeException when it fails to create a directory

2019-05-08 Thread Xu Cang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835824#comment-16835824
 ] 

Xu Cang commented on HBASE-22369:
-

This is the common way HBase code deals with situations where we don't want to 
proceed.

Such as

[https://github.com/apache/hbase/blob/025ddce868eb06b4072b5152c5ffae5a01e7ae30/hbase-server/src/test/java/org/apache/hadoop/hbase/util/HBaseHomePath.java#L39]

> CoprocessorClassLoader.init() throws a RuntimeException when it fails to 
> create a directory
> ---
>
> Key: HBASE-22369
> URL: https://issues.apache.org/jira/browse/HBASE-22369
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.4
>Reporter: eBugs
>Priority: Minor
>
> Dear HBase developers, we are developing a tool to detect exception-related 
> bugs in Java. Our prototype has spotted the following {{throw}} statement 
> whose exception class and error message seem to indicate different error 
> conditions.
>  
> Version: HBase-2.1.4 
> File: 
> HBASE-ROOT/hbase-common/src/java/org/apache/hbase/utils/CoprocessorClassLoader.java
> Line: 160-161
> {code:java}
> throw new RuntimeException("Failed to create local dir " + parentDirStr
>   + ", CoprocessorClassLoader failed to init");{code}
>  
> {{RuntimeException}} is usually used to represent errors in the program logic 
> (think of one of its subclasses, {{NullPointerException}}), while the error 
> message indicates that {{init()}} failed to create a directory. This mismatch 
> could be a problem. For example, the callers may miss the case where 
> {{init()}} fails to create a directory. Or, the callers trying to handle 
> other {{RuntimeException}} may accidentally (and incorrectly) handle the 
> directory creation failure.
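One way to resolve the mismatch the report describes is to surface the directory-creation failure as a checked IOException rather than a bare RuntimeException. The method below is an illustrative stand-in, not the actual CoprocessorClassLoader code:

```java
import java.io.File;
import java.io.IOException;

// Illustrative stand-in for the init() failure path: a directory-creation
// failure is an environment problem, so a checked IOException communicates
// it more precisely than RuntimeException and forces callers to handle it.
public class LocalDirInit {
    static void ensureLocalDir(File parentDir) throws IOException {
        if (!parentDir.exists() && !parentDir.mkdirs()) {
            throw new IOException("Failed to create local dir " + parentDir
                + ", CoprocessorClassLoader failed to init");
        }
    }

    public static void main(String[] args) throws IOException {
        File tmp = new File(System.getProperty("java.io.tmpdir"), "cp-classes");
        ensureLocalDir(tmp);  // succeeds, or throws IOException naming the dir
        System.out.println("ok: " + tmp);
    }
}
```

With a checked exception, a caller can no longer accidentally swallow the directory failure while handling an unrelated RuntimeException.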





[jira] [Assigned] (HBASE-22378) HBase Canary fails with TableNotFoundException when table deleted during Canary run

2019-05-08 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang reassigned HBASE-22378:
---

Assignee: Caroline

> HBase Canary fails with TableNotFoundException when table deleted during 
> Canary run
> ---
>
> Key: HBASE-22378
> URL: https://issues.apache.org/jira/browse/HBASE-22378
> Project: HBase
>  Issue Type: Bug
>  Components: canary
>Affects Versions: 1.3.0, 1.4.0, 1.5.0
>Reporter: Caroline
>Assignee: Caroline
>Priority: Minor
>  Labels: Canary
>
> In 1.3.2 branch-1, we saw a drastic increase in TableNotFoundExceptions 
> thrown by HBase Canary. We traced the issue back to Canary trying to call 
> isTableEnabled() on temporary tables that were deleted in the middle of the 
> Canary run.
> In this version of HBase Canary, Canary throws TableNotFoundException (and 
> then fails) if a table is deleted between admin.listTables() and 
> admin.tableEnabled() function calls in RegionMonitor's sniff() method. 
> Following the goal of RegionMonitor.sniff(), which is to query all existing 
> tables, in order to reduce noise we should skip over a table (i.e. don't 
> check if it was enabled, or do anything else with it at all) if it was 
> returned in listTables() but deleted before Canary can query it. Temporary 
> tables which are not meant to be kept should not throw 
> TableNotFoundExceptions which fail the Canary.
> Patch in progress:
> Add a call to admin.tableExists() before tableEnabled() on line 1244 in 
> RegionMonitor.sniff().
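The patch idea — check tableExists() before the enabled-check so a mid-run deletion is skipped rather than fatal — can be sketched with a stub. `MiniAdmin` is a hypothetical stand-in for HBase's Admin API, not the real interface:

```java
import java.util.Set;

// Hypothetical stub standing in for the HBase Admin API.
interface MiniAdmin {
    boolean tableExists(String table);
    boolean isTableEnabled(String table);
}

public class CanarySniffGuard {
    // Sketch of the proposed Canary guard: a table that disappeared between
    // listTables() and the enabled-check is skipped quietly instead of
    // surfacing a TableNotFoundException that fails the whole run.
    static boolean shouldSniff(MiniAdmin admin, String table) {
        return admin.tableExists(table) && admin.isTableEnabled(table);
    }

    public static void main(String[] args) {
        Set<String> live = Set.of("usertable");
        MiniAdmin admin = new MiniAdmin() {
            public boolean tableExists(String t) { return live.contains(t); }
            public boolean isTableEnabled(String t) { return live.contains(t); }
        };
        System.out.println(shouldSniff(admin, "usertable"));  // true
        System.out.println(shouldSniff(admin, "tmp_table"));  // false: deleted mid-run
    }
}
```

Note the remaining race: the table can still vanish between the two calls, so the real fix should also tolerate TableNotFoundException from the enabled-check itself.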





[jira] [Commented] (HBASE-22385) Consider "programmatic" HFiles

2019-05-08 Thread Sergey Shelukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835830#comment-16835830
 ] 

Sergey Shelukhin commented on HBASE-22385:
--

Could the refactoring be used to allow multi-level splits by splitting 
references (preferably via a new modified reference, not multi-level 
references)?

> Consider "programmatic" HFiles
> --
>
> Key: HBASE-22385
> URL: https://issues.apache.org/jira/browse/HBASE-22385
> Project: HBase
>  Issue Type: Brainstorming
>Reporter: Lars Hofhansl
>Priority: Major
>
> For various use cases (among others, mass deletes) it would be great 
> if HBase had a mechanism for programmatic HFiles. I.e. HFiles (Reader) that 
> produce KeyValues just like any other old HFile, but the key values produced 
> are generated or produced by some other means rather than being physically 
> read from some storage medium.
> In fact this could be a generalization for the various HFiles we have: 
> (Normal) HFiles, HFileLinks, HalfStoreFiles, etc.
> A simple way could be to allow for storing a classname into the HFile. Upon 
> reading the HFile HBase would instantiate an instance of that class and that 
> instance is responsible for all further interaction with that HFile. For 
> normal HFiles it would just be the normal HFileReaderVx. For that we'd also 
> need to turn StoreFile.Reader into an interface (or a more basic base class) that 
> can be properly implemented.
> (Remember this is Brainstorming :) )





[jira] [Commented] (HBASE-22072) High read/write intensive regions may cause long crash recovery

2019-05-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835834#comment-16835834
 ] 

Hudson commented on HBASE-22072:


Results for branch branch-2
[build #1876 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1876/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1876//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1876//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1876//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> High read/write intensive regions may cause long crash recovery
> ---
>
> Key: HBASE-22072
> URL: https://issues.apache.org/jira/browse/HBASE-22072
> Project: HBase
>  Issue Type: Bug
>  Components: Performance, Recovery
>Affects Versions: 2.1.2
>Reporter: Pavel
>Assignee: ramkrishna.s.vasudevan
>Priority: Major
>  Labels: compaction
> Attachments: HBASE-22072.HBASE-21879-v1.patch
>
>
> Compaction of a region under high read load may leave compacted files 
> undeleted because of existing scan references:
> INFO org.apache.hadoop.hbase.regionserver.HStore - Can't archive compacted 
> file hdfs://hdfs-ha/hbase... because of either isCompactedAway=true or file 
> has reference, isReferencedInReads=true, refCount=1, skipping for now
> If the region is also under high write load this happens quite often, and 
> the region may have only a few store files but tons of undeleted compacted 
> HDFS files.
> The region keeps all those files (in my case thousands) until the graceful 
> region closing procedure, which ignores existing references and drops 
> obsolete files. This works fine aside from consuming some extra HDFS space, 
> but only in the case of normal region closing. If the region server crashes, 
> the new region server responsible for that overfilled region reads the HDFS 
> folder and tries to deal with all the undeleted files, producing tons of 
> store files and compaction tasks and consuming an abnormal amount of memory, 
> which may lead to an OutOfMemoryError and further region server crashes. This 
> stops writes to the region because the number of store files reaches the 
> *hbase.hstore.blockingStoreFiles* limit, forces high GC duty, and may take 
> hours to compact all files into a working set of files.
> A workaround is to periodically check the file count in HDFS folders and 
> force region assignment for regions with too many files.
> It would be nice if the region server had a setting similar to 
> hbase.hstore.blockingStoreFiles that triggered an attempt to drop undeleted 
> compacted files once the file count reaches that setting.
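The threshold suggested in the last paragraph could behave like the guard below. The setting name hbase.hstore.blockingCompactedFiles and the class are hypothetical, shown only to make the proposal concrete.

```java
// Hypothetical guard for the proposed setting: once the number of
// compacted-but-undeleted files crosses the configured limit, the store
// would force an archive attempt instead of waiting for region close.
public class CompactedFileGuard {
  private final int blockingCompactedFiles;

  public CompactedFileGuard(int blockingCompactedFiles) {
    this.blockingCompactedFiles = blockingCompactedFiles;
  }

  /** True when the store should try to drop compacted-away files now. */
  public boolean shouldForceArchive(int compactedFileCount) {
    return compactedFileCount >= blockingCompactedFiles;
  }

  public static void main(String[] args) {
    // e.g. hbase.hstore.blockingCompactedFiles = 16 (made-up config key)
    CompactedFileGuard guard = new CompactedFileGuard(16);
    System.out.println(guard.shouldForceArchive(3));   // few files: keep waiting
    System.out.println(guard.shouldForceArchive(20));  // pile-up: force cleanup
  }
}
```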



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22330) Backport HBASE-20724 (Sometimes some compacted storefiles are still opened after region failover) to branch-1

2019-05-08 Thread Abhishek Singh Chouhan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835836#comment-16835836
 ] 

Abhishek Singh Chouhan commented on HBASE-22330:


Tests came up fine for 1.3. Planning to commit later today if no objections. 
The 1.3 patch differs only in the test class due to API differences. [~apurtell]

> Backport HBASE-20724 (Sometimes some compacted storefiles are still opened 
> after region failover) to branch-1
> -
>
> Key: HBASE-22330
> URL: https://issues.apache.org/jira/browse/HBASE-22330
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction, regionserver
>Affects Versions: 1.5.0, 1.4.9, 1.3.4
>Reporter: Andrew Purtell
>Assignee: Abhishek Singh Chouhan
>Priority: Major
> Fix For: 1.5.0, 1.3.5, 1.4.11
>
> Attachments: HBASE-22330.branch-1.001.patch, 
> HBASE-22330.branch-1.002.patch, HBASE-22330.branch-1.3.001.patch
>
>
> There appears to be a race condition between close and split which, when 
> combined with a side effect of HBASE-20704, leads to the parent region store 
> files getting archived and cleared while daughter regions still have 
> references to those parent region store files.
> Here is the timeline of events observed for an affected region:
>  # RS1 faces a ZooKeeper connectivity issue for the master node and starts 
> shutting itself down. As part of this it starts to close the store and clean 
> up the compacted files (File A)
>  # Master starts bulk assigning regions and assigns the parent region to RS2
>  # The region opens on RS2 and ends up opening compacted store file(s) 
> (suspect this is due to HBASE-20724)
>  # Now the split happens, and the daughter regions open on RS2 and try to 
> run a compaction as part of post open
>  # The split request at this point is complete. However, archiving now 
> proceeds on RS1 and ends up archiving the store file that is referenced by 
> the daughter. Compaction fails due to FileNotFoundException, and all 
> subsequent attempts to open the region will fail until manual resolution.
> We think having HBASE-20724 would help in such situations since we won't end 
> up loading compacted store files in the first place. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20821) Re-creating a dropped namespace and contained table inherits previously set space quota settings

2019-05-08 Thread Shardul Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835859#comment-16835859
 ] 

Shardul Singh commented on HBASE-20821:
---

hi [~elserj]..can you please review this patch...thanks in advance? :-)

> Re-creating a dropped namespace and contained table inherits previously set 
> space quota settings
> 
>
> Key: HBASE-20821
> URL: https://issues.apache.org/jira/browse/HBASE-20821
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.1.0
>Reporter: Nihal Jain
>Assignee: Shardul Singh
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20821.master.v001patch, 
> HBASE-20821.master.v002.patch, HBASE-20821.master.v003.patch
>
>
> As demonstrated in 
> [HBASE-20662.master.002.patch|https://issues.apache.org/jira/secure/attachment/12927187/HBASE-20662.master.002.patch]
>  re-creating a dropped namespace and contained table inherits previously set 
> space quota settings.
> *Steps:*
>  * Create a namespace and a table in it
>  * Set space quota on namespace
>  * Violate namespace quota
>  * Drop table and then namespace
>  * Re-create the same namespace and same table
>  * Put data into the table (more than the previously set namespace quota 
> limit)
> {code:java}
> private void setQuotaAndThenDropNamespace(final String namespace, 
> SpaceViolationPolicy policy)
> throws Exception {
>   Put put = new Put(Bytes.toBytes("to_reject"));
>   put.addColumn(Bytes.toBytes(SpaceQuotaHelperForTests.F1), 
> Bytes.toBytes("to"),
> Bytes.toBytes("reject"));
>   createNamespaceIfNotExist(TEST_UTIL.getAdmin(), namespace);
>   // Do puts until we violate the space policy
>   final TableName tn = 
> writeUntilNSSpaceViolationAndVerifyViolation(namespace, policy, put);
>   // Now, drop the table
>   TEST_UTIL.deleteTable(tn);
>   LOG.debug("Successfully deleted table {}", tn);
>   // Now, drop the namespace
>   TEST_UTIL.getAdmin().deleteNamespace(namespace);
>   LOG.debug("Successfully deleted the namespace {}", namespace);
>   // Now re-create the namespace
>   createNamespaceIfNotExist(TEST_UTIL.getAdmin(), namespace);
>   LOG.debug("Successfully re-created the namespace {}", namespace);
>   TEST_UTIL.createTable(tn, Bytes.toBytes(SpaceQuotaHelperForTests.F1));
>   LOG.debug("Successfully re-created table {}", tn);
>   // Put some rows now: should not violate, as the namespace quota was dropped
>   verifyNoViolation(policy, tn, put);
> }
> {code}
> *Expected*: SpaceQuota settings should not exist on the newly re-created 
> table and we should be able to put unlimited data into the table
> *Actual:* We fail to put data into the newly created table because SpaceQuota 
> settings (automatically created due to the previously added namespace space 
> quota) still exist on the table



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20821) Re-creating a dropped namespace and contained table inherits previously set space quota settings

2019-05-08 Thread Shardul Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835859#comment-16835859
 ] 

Shardul Singh edited comment on HBASE-20821 at 5/8/19 7:52 PM:
---

hi [~elserj]..can you please review this patch?...thanks in advance. :-)


was (Author: shardulsingh):
hi [~elserj]..can you please review this patch...thanks in advance? :-)

> Re-creating a dropped namespace and contained table inherits previously set 
> space quota settings
> 
>
> Key: HBASE-20821
> URL: https://issues.apache.org/jira/browse/HBASE-20821
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.1.0
>Reporter: Nihal Jain
>Assignee: Shardul Singh
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20821.master.v001patch, 
> HBASE-20821.master.v002.patch, HBASE-20821.master.v003.patch
>
>
> As demonstrated in 
> [HBASE-20662.master.002.patch|https://issues.apache.org/jira/secure/attachment/12927187/HBASE-20662.master.002.patch]
>  re-creating a dropped namespace and contained table inherits previously set 
> space quota settings.
> *Steps:*
>  * Create a namespace and a table in it
>  * Set space quota on namespace
>  * Violate namespace quota
>  * Drop table and then namespace
>  * Re-create the same namespace and same table
>  * Put data into the table (more than the previously set namespace quota 
> limit)
> {code:java}
> private void setQuotaAndThenDropNamespace(final String namespace, 
> SpaceViolationPolicy policy)
> throws Exception {
>   Put put = new Put(Bytes.toBytes("to_reject"));
>   put.addColumn(Bytes.toBytes(SpaceQuotaHelperForTests.F1), 
> Bytes.toBytes("to"),
> Bytes.toBytes("reject"));
>   createNamespaceIfNotExist(TEST_UTIL.getAdmin(), namespace);
>   // Do puts until we violate the space policy
>   final TableName tn = 
> writeUntilNSSpaceViolationAndVerifyViolation(namespace, policy, put);
>   // Now, drop the table
>   TEST_UTIL.deleteTable(tn);
>   LOG.debug("Successfully deleted table {}", tn);
>   // Now, drop the namespace
>   TEST_UTIL.getAdmin().deleteNamespace(namespace);
>   LOG.debug("Successfully deleted the namespace {}", namespace);
>   // Now re-create the namespace
>   createNamespaceIfNotExist(TEST_UTIL.getAdmin(), namespace);
>   LOG.debug("Successfully re-created the namespace {}", namespace);
>   TEST_UTIL.createTable(tn, Bytes.toBytes(SpaceQuotaHelperForTests.F1));
>   LOG.debug("Successfully re-created table {}", tn);
>   // Put some rows now: should not violate, as the namespace quota was dropped
>   verifyNoViolation(policy, tn, put);
> }
> {code}
> *Expected*: SpaceQuota settings should not exist on the newly re-created 
> table and we should be able to put unlimited data into the table
> *Actual:* We fail to put data into the newly created table because SpaceQuota 
> settings (automatically created due to the previously added namespace space 
> quota) still exist on the table



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22072) High read/write intensive regions may cause long crash recovery

2019-05-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835878#comment-16835878
 ] 

Hudson commented on HBASE-22072:


Results for branch master
[build #992 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/992/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/992//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/992//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/992//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> High read/write intensive regions may cause long crash recovery
> ---
>
> Key: HBASE-22072
> URL: https://issues.apache.org/jira/browse/HBASE-22072
> Project: HBase
>  Issue Type: Bug
>  Components: Performance, Recovery
>Affects Versions: 2.1.2
>Reporter: Pavel
>Assignee: ramkrishna.s.vasudevan
>Priority: Major
>  Labels: compaction
> Attachments: HBASE-22072.HBASE-21879-v1.patch
>
>
> Compaction of a region under high read load may leave compacted files 
> undeleted because of existing scan references:
> INFO org.apache.hadoop.hbase.regionserver.HStore - Can't archive compacted 
> file hdfs://hdfs-ha/hbase... because of either isCompactedAway=true or file 
> has reference, isReferencedInReads=true, refCount=1, skipping for now
> If the region is also under high write load this happens quite often, and 
> the region may have only a few store files but tons of undeleted compacted 
> HDFS files.
> The region keeps all those files (in my case thousands) until the graceful 
> region closing procedure, which ignores existing references and drops 
> obsolete files. This works fine aside from consuming some extra HDFS space, 
> but only in the case of normal region closing. If the region server crashes, 
> the new region server responsible for that overfilled region reads the HDFS 
> folder and tries to deal with all the undeleted files, producing tons of 
> store files and compaction tasks and consuming an abnormal amount of memory, 
> which may lead to an OutOfMemoryError and further region server crashes. This 
> stops writes to the region because the number of store files reaches the 
> *hbase.hstore.blockingStoreFiles* limit, forces high GC duty, and may take 
> hours to compact all files into a working set of files.
> A workaround is to periodically check the file count in HDFS folders and 
> force region assignment for regions with too many files.
> It would be nice if the region server had a setting similar to 
> hbase.hstore.blockingStoreFiles that triggered an attempt to drop undeleted 
> compacted files once the file count reaches that setting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase-filesystem] busbey commented on issue #1: HBASE-22149. HBOSS: A FileSystem implementation to provide HBase's re…

2019-05-08 Thread GitBox
busbey commented on issue #1: HBASE-22149. HBOSS: A FileSystem implementation 
to provide HBase's re…
URL: https://github.com/apache/hbase-filesystem/pull/1#issuecomment-490646415
 
 
   I'd like to merge this and then work on changes it needs via follow-ons. 
Preferably anything needed before we start cutting releases will be 
appropriately labeled as a blocker.
   
   I'd expect the list of blockers to be very short, considering that I'm 
talking about releases that carry an alpha label. It looks like the lockListing 
discussion above would qualify.
   
   @wchevreuil and @mackrorysd what do y'all think? If you have more you'd like 
to do here on the PR that's fine by me as well. caveat that I'll need to circle 
back for the same checks I did before for a future merge-with-follow-ons call. 
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (HBASE-22378) HBase Canary fails with TableNotFoundException when table deleted during Canary run

2019-05-08 Thread Caroline (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caroline updated HBASE-22378:
-
Attachment: HBASE-22378.000.patch
HBASE-22378.branch-1.001.patch
HBASE-22378.branch-2.001.patch
Status: Patch Available  (was: Open)

> HBase Canary fails with TableNotFoundException when table deleted during 
> Canary run
> ---
>
> Key: HBASE-22378
> URL: https://issues.apache.org/jira/browse/HBASE-22378
> Project: HBase
>  Issue Type: Bug
>  Components: canary
>Affects Versions: 1.4.0, 1.3.0, 1.5.0
>Reporter: Caroline
>Assignee: Caroline
>Priority: Minor
>  Labels: Canary
> Attachments: HBASE-22378.000.patch, HBASE-22378.branch-1.001.patch, 
> HBASE-22378.branch-2.001.patch
>
>
> In 1.3.2 branch-1, we saw a drastic increase in TableNotFoundExceptions 
> thrown by the HBase Canary. We traced the issue back to the Canary trying to 
> call isTableEnabled() on temporary tables that were deleted in the middle of 
> the Canary run.
> In this version of the HBase Canary, the Canary throws TableNotFoundException 
> (and then fails) if a table is deleted between the admin.listTables() and 
> admin.tableEnabled() calls in RegionMonitor's sniff() method. 
> Following the goal of RegionMonitor.sniff(), which is to query all existing 
> tables, we should, in order to reduce noise, skip over a table (i.e. not 
> check whether it is enabled, or do anything else with it at all) if it was 
> returned by listTables() but deleted before the Canary can query it. 
> Temporary tables which are not meant to be kept should not throw 
> TableNotFoundExceptions that fail the Canary.
> Patch in progress:
> Add a call to admin.tableExists() before tableEnabled() on line 1244 in 
> RegionMonitor.sniff().
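The skip-if-deleted check can be sketched with a stubbed Admin so the logic is runnable here. StubAdmin and sniffableCount are stand-ins invented for this sketch; the real patch targets org.apache.hadoop.hbase.client.Admin inside RegionMonitor.sniff().

```java
import java.util.List;

// Minimal stand-in for the HBase Admin API used by the Canary.
interface StubAdmin {
  boolean tableExists(String table);
  boolean isTableEnabled(String table);
}

public class CanarySniffSketch {
  /** Counts the listed tables that would actually be sniffed. */
  static long sniffableCount(StubAdmin admin, List<String> listed) {
    return listed.stream()
        // Skip tables deleted between listTables() and now, instead of
        // letting isTableEnabled() throw TableNotFoundException.
        .filter(admin::tableExists)
        .filter(admin::isTableEnabled)
        .count();
  }

  public static void main(String[] args) {
    // "tmp" was deleted mid-run; the Canary should just skip it.
    StubAdmin admin = new StubAdmin() {
      public boolean tableExists(String t) { return !t.equals("tmp"); }
      public boolean isTableEnabled(String t) { return true; }
    };
    System.out.println(sniffableCount(admin, List.of("t1", "tmp", "t2")));
  }
}
```

Note that a table can still disappear between the tableExists() and isTableEnabled() calls, so the real fix would likely also need to catch TableNotFoundException; the existence check only removes the common case.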



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22264) Separate out jars related to JDK 11 into a folder in /lib

2019-05-08 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-22264:

Attachment: run_ITD_with_REST_ClusterManager.log

> Separate out jars related to JDK 11 into a folder in /lib
> -
>
> Key: HBASE-22264
> URL: https://issues.apache.org/jira/browse/HBASE-22264
> Project: HBase
>  Issue Type: Task
>  Components: java
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Minor
>  Labels: jdk11
> Attachments: hbase-22264.master.001.patch, 
> hbase-22264.master.002.patch, hbase-22264.master.003.patch, 
> hbase-22264.master.004.patch, hbase-22264.master.005.patch, 
> hbase-22264_jdks.txt, run_ITD_with_REST_ClusterManager.log
>
>
> UPDATE:
> Separate out the jars related to JDK 11 and control their addition to 
> the classpath using an environment variable or auto-detection of the 
> installed JDK version.
> OLD:
> This is in continuation with HBASE-22249. When compiled with jdk 8 and run on 
> jdk 11, the master branch throws the following exception during an attempt to 
> start the hbase rest server:
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> javax/annotation/Priority
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.modelFor(ComponentBag.java:483)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.access$100(ComponentBag.java:89)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:408)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:398)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:228)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.registerModel(ComponentBag.java:398)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.register(ComponentBag.java:235)
>   at 
> org.glassfish.jersey.model.internal.CommonConfig.register(CommonConfig.java:420)
>   at 
> org.glassfish.jersey.server.ResourceConfig.register(ResourceConfig.java:425)
>   at org.apache.hadoop.hbase.rest.RESTServer.run(RESTServer.java:245)
>   at org.apache.hadoop.hbase.rest.RESTServer.main(RESTServer.java:421)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22264) Separate out jars related to JDK 11 into a folder in /lib

2019-05-08 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835916#comment-16835916
 ] 

Sean Busbey commented on HBASE-22264:
-

Okay, I ran into a problem with missing classes on JDK 11. Log attached at 

https://issues.apache.org/jira/secure/attachment/12968228/run_ITD_with_REST_ClusterManager.log

It passes fine when I run under JDK 8. Specifically, I can run the cluster 
under JDK 11 and run the IntegrationTestDriver under JDK 8 and have it pass. 
I don't think the JDK used by the cluster matters.

I think it's because of this bit here:

{code}
diff --git a/hbase-assembly/src/main/assembly/hadoop-two-compat.xml 
b/hbase-assembly/src/main/assembly/hadoop-two-compat.xml
index 
05e2fc9565266e4b8f08167e8d58a026d1cbb77e..27492a7133b96f6b65e55ba4ba307cb6dc132005
 100644
--- a/hbase-assembly/src/main/assembly/hadoop-two-compat.xml
+++ b/hbase-assembly/src/main/assembly/hadoop-two-compat.xml
@@ -68,6 +68,7 @@
 
 
   com.sun.xml.ws:jaxws-ri
{code}

The CNFE we get is from part of the JAX-WS reference implementation that's 
present in JDK 8 and no longer present in JDK 11.

> Separate out jars related to JDK 11 into a folder in /lib
> -
>
> Key: HBASE-22264
> URL: https://issues.apache.org/jira/browse/HBASE-22264
> Project: HBase
>  Issue Type: Task
>  Components: java
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Minor
>  Labels: jdk11
> Attachments: hbase-22264.master.001.patch, 
> hbase-22264.master.002.patch, hbase-22264.master.003.patch, 
> hbase-22264.master.004.patch, hbase-22264.master.005.patch, 
> hbase-22264_jdks.txt, run_ITD_with_REST_ClusterManager.log
>
>
> UPDATE:
> Separate out the jars related to JDK 11 and control their addition to 
> the classpath using an environment variable or auto-detection of the 
> installed JDK version.
> OLD:
> This is in continuation with HBASE-22249. When compiled with jdk 8 and run on 
> jdk 11, the master branch throws the following exception during an attempt to 
> start the hbase rest server:
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> javax/annotation/Priority
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.modelFor(ComponentBag.java:483)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.access$100(ComponentBag.java:89)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:408)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:398)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:228)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.registerModel(ComponentBag.java:398)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.register(ComponentBag.java:235)
>   at 
> org.glassfish.jersey.model.internal.CommonConfig.register(CommonConfig.java:420)
>   at 
> org.glassfish.jersey.server.ResourceConfig.register(ResourceConfig.java:425)
>   at org.apache.hadoop.hbase.rest.RESTServer.run(RESTServer.java:245)
>   at org.apache.hadoop.hbase.rest.RESTServer.main(RESTServer.java:421)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-22386) HBOSS: Limit depth that listing locks check for other locks

2019-05-08 Thread Sean Mackrory (JIRA)
Sean Mackrory created HBASE-22386:
-

 Summary: HBOSS: Limit depth that listing locks check for other 
locks
 Key: HBASE-22386
 URL: https://issues.apache.org/jira/browse/HBASE-22386
 Project: HBase
  Issue Type: Bug
Reporter: Sean Mackrory
Assignee: Sean Mackrory


treeWriteLock will check all the way up and down the tree for locks. This is 
more aggressive than it needs to be, and integration testing has shown that 
there's significant contention when listing tables; listing is one of numerous 
operations that doesn't need to recursively lock the whole subtree. There are 
actually a number of operations that only need to lock one level up or down, 
so let's start with listing: non-recursive listings don't need to care about 
what's going on more than one level below them.
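The one-level scope can be illustrated with plain path strings. The path model and method names below are invented for illustration; HBOSS's real lock manager works on ZooKeeper znodes.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the depth-limited idea: a non-recursive listing of a directory
// only needs to consider locks on its ancestors and on paths exactly one
// level below it, not the whole subtree.
public class ListingLockScopeSketch {
  /** Paths whose locks a non-recursive listing of {@code dir} must check. */
  static List<String> lockCheckScope(String dir, List<String> knownPaths) {
    List<String> scope = new ArrayList<>();
    // All ancestors: a write lock on any of them covers the listing.
    for (String p = dir; p.lastIndexOf('/') > 0; ) {
      p = p.substring(0, p.lastIndexOf('/'));
      scope.add(p);
    }
    // Exactly one level below dir; deeper locks cannot affect a
    // non-recursive listing's result.
    String prefix = dir + "/";
    for (String p : knownPaths) {
      if (p.startsWith(prefix) && p.indexOf('/', prefix.length()) < 0) {
        scope.add(p);
      }
    }
    return scope;
  }

  public static void main(String[] args) {
    System.out.println(lockCheckScope("/data/tbl",
        List.of("/data/tbl/r1", "/data/tbl/r1/cf", "/data/other")));
  }
}
```

The grandchild /data/tbl/r1/cf is deliberately outside the scope: whatever happens there cannot change what a one-level listing of /data/tbl returns.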



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22330) Backport HBASE-20724 (Sometimes some compacted storefiles are still opened after region failover) to branch-1

2019-05-08 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835920#comment-16835920
 ] 

Andrew Purtell commented on HBASE-22330:


+1
Please commit

> Backport HBASE-20724 (Sometimes some compacted storefiles are still opened 
> after region failover) to branch-1
> -
>
> Key: HBASE-22330
> URL: https://issues.apache.org/jira/browse/HBASE-22330
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction, regionserver
>Affects Versions: 1.5.0, 1.4.9, 1.3.4
>Reporter: Andrew Purtell
>Assignee: Abhishek Singh Chouhan
>Priority: Major
> Fix For: 1.5.0, 1.3.5, 1.4.11
>
> Attachments: HBASE-22330.branch-1.001.patch, 
> HBASE-22330.branch-1.002.patch, HBASE-22330.branch-1.3.001.patch
>
>
> There appears to be a race condition between close and split which when 
> combined with a side effect of HBASE-20704, leads to the parent region store 
> files getting archived and cleared while daughter regions still have 
> references to those parent region store files.
> Here is the timeline of events observed for an affected region:
>  # RS1 faces ZooKeeper connectivity issue for master node and starts shutting 
> itself down. As part of this it starts to close the store and clean up the 
> compacted files (File A)
>  # Master starts bulk assigning regions and assign parent region to RS2
>  # Region opens on RS2 and ends up opening compacted store file(s) (suspect 
> this is due to HBASE-20724)
>  # Now split happens and daughter regions open on RS2 and try to run a 
> compaction as part of post open
>  # Split request at this point is complete. However now archiving proceeds on 
> RS1 and ends up archiving the store file that is referenced by the daughter. 
> Compaction fails due to FileNotFoundException and all subsequent attempts to 
> open the region will fail until manual resolution.
> We think having HBASE-20724 would help in such situations since we won't end 
> up loading compacted store files in the first place. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase-filesystem] mackrorysd commented on issue #1: HBASE-22149. HBOSS: A FileSystem implementation to provide HBase's re…

2019-05-08 Thread GitBox
mackrorysd commented on issue #1: HBASE-22149. HBOSS: A FileSystem 
implementation to provide HBase's re…
URL: https://github.com/apache/hbase-filesystem/pull/1#issuecomment-490659715
 
 
   Sounds good to me, @busbey, thank you. I've filed 
https://issues.apache.org/jira/browse/HBASE-22386 for the listing depth issue, 
and will continue to file JIRAs for any follow-up work, unless anyone objects 
and wants to see something fixed in this PR specifically.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Work started] (HBASE-22184) [security] Support get|set LogLevel in HTTPS mode

2019-05-08 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-22184 started by Wei-Chiu Chuang.
---
> [security] Support get|set LogLevel in HTTPS mode
> -
>
> Key: HBASE-22184
> URL: https://issues.apache.org/jira/browse/HBASE-22184
> Project: HBase
>  Issue Type: Improvement
>  Components: logging, website
>Reporter: Reid Chan
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: security
>
> As title read.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22264) Separate out jars related to JDK 11 into a folder in /lib

2019-05-08 Thread Sakthi (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835924#comment-16835924
 ] 

Sakthi commented on HBASE-22264:


The build-with-jdk11 profile brings in com.sun.xml.ws:jaxws-ri. 

If we want to build with JDK 8 and run on JDK 11, then we might end up not 
having this artifact, as the build-with-jdk11 profile wouldn't be invoked. 
Hence I think we need to move it out of the profile, but then exclude it from 
hbase-assembly and add it to lib/jdk11 so that whenever the runtime 
environment is JDK 11 it's present in the classpath. What do you think, Sean?
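The auto-detection half of that idea might look like the sketch below. Parsing java.version this way and the lib/jdk11 layout are assumptions for illustration, not the actual bin/hbase logic (which is a shell script).

```java
// Sketch: decide whether to append lib/jdk11 jars to the classpath based
// on the running JVM's version string. Pre-JDK 9 versions look like
// "1.8.0_212"; JDK 9+ versions look like "11.0.2".
public class Jdk11ClasspathSketch {
  static int majorVersion(String javaVersion) {
    // "1.8.0_212" -> 8, "11.0.2" -> 11
    String v = javaVersion.startsWith("1.") ? javaVersion.substring(2) : javaVersion;
    int end = v.length();
    int dot = v.indexOf('.');
    int us = v.indexOf('_');
    if (dot >= 0) end = Math.min(end, dot);
    if (us >= 0) end = Math.min(end, us);
    return Integer.parseInt(v.substring(0, end));
  }

  /** Appends the (hypothetical) lib/jdk11 entry only when running on 11+. */
  static String classpath(String base, String hbaseHome, String javaVersion) {
    return majorVersion(javaVersion) >= 11
        ? base + ":" + hbaseHome + "/lib/jdk11/*"
        : base;
  }

  public static void main(String[] args) {
    System.out.println(
        classpath("lib/*", "/opt/hbase", System.getProperty("java.version")));
  }
}
```

An environment variable override could force the extra entry on or off, matching the other option mentioned in the issue description.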

> Separate out jars related to JDK 11 into a folder in /lib
> -
>
> Key: HBASE-22264
> URL: https://issues.apache.org/jira/browse/HBASE-22264
> Project: HBase
>  Issue Type: Task
>  Components: java
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Minor
>  Labels: jdk11
> Attachments: hbase-22264.master.001.patch, 
> hbase-22264.master.002.patch, hbase-22264.master.003.patch, 
> hbase-22264.master.004.patch, hbase-22264.master.005.patch, 
> hbase-22264_jdks.txt, run_ITD_with_REST_ClusterManager.log
>
>
> UPDATE:
> Separate out the jars related to JDK 11 and control their addition to 
> the classpath using an environment variable or auto-detection of the 
> installed JDK version.
> OLD:
> This is in continuation with HBASE-22249. When compiled with jdk 8 and run on 
> jdk 11, the master branch throws the following exception during an attempt to 
> start the hbase rest server:
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> javax/annotation/Priority
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.modelFor(ComponentBag.java:483)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.access$100(ComponentBag.java:89)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:408)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:398)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:228)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.registerModel(ComponentBag.java:398)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.register(ComponentBag.java:235)
>   at 
> org.glassfish.jersey.model.internal.CommonConfig.register(CommonConfig.java:420)
>   at 
> org.glassfish.jersey.server.ResourceConfig.register(ResourceConfig.java:425)
>   at org.apache.hadoop.hbase.rest.RESTServer.run(RESTServer.java:245)
>   at org.apache.hadoop.hbase.rest.RESTServer.main(RESTServer.java:421)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22184) [security] Support get|set LogLevel in HTTPS mode

2019-05-08 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835925#comment-16835925
 ] 

Wei-Chiu Chuang commented on HBASE-22184:
-

Got side-tracked the past few weeks. Coming back to this.

> [security] Support get|set LogLevel in HTTPS mode
> -
>
> Key: HBASE-22184
> URL: https://issues.apache.org/jira/browse/HBASE-22184
> Project: HBase
>  Issue Type: Improvement
>  Components: logging, website
>Reporter: Reid Chan
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: security
>
> As title read.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22264) Separate out jars related to JDK 11 into a folder in /lib

2019-05-08 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835928#comment-16835928
 ] 

Sean Busbey commented on HBASE-22264:
-

Under JDK 8 we should add it as a dependency specifically for hbase-assembly, 
since we only need it in the packaging at runtime; it'll already show up 
transitively when building with JDK 11.

You're correct that under either build profile we should continue excluding it 
from the main {{lib}} directory and only add it to {{lib/jdk11}}.
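A dependency scoped to hbase-assembly under a JDK 8 build could look roughly like the following Maven profile. This is a sketch only: the profile id is invented, and the artifact version is an assumption rather than something taken from the attached patches.

```xml
<!-- Hypothetical sketch for hbase-assembly/pom.xml: pull in the
     javax.annotation API jar at runtime scope only when building on JDK 8,
     since JDK 11 builds pick it up transitively. -->
<profile>
  <id>jdk8-packaging</id>
  <activation>
    <jdk>1.8</jdk>
  </activation>
  <dependencies>
    <dependency>
      <groupId>javax.annotation</groupId>
      <artifactId>javax.annotation-api</artifactId>
      <version>1.3.2</version>
      <scope>runtime</scope>
    </dependency>
  </dependencies>
</profile>
```

The assembly descriptor would still route the jar into {{lib/jdk11}} rather than the main {{lib}} directory, as discussed above.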

> Separate out jars related to JDK 11 into a folder in /lib
> -
>
> Key: HBASE-22264
> URL: https://issues.apache.org/jira/browse/HBASE-22264
> Project: HBase
>  Issue Type: Task
>  Components: java
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Minor
>  Labels: jdk11
> Attachments: hbase-22264.master.001.patch, 
> hbase-22264.master.002.patch, hbase-22264.master.003.patch, 
> hbase-22264.master.004.patch, hbase-22264.master.005.patch, 
> hbase-22264_jdks.txt, run_ITD_with_REST_ClusterManager.log
>
>
> UPDATE:
> Separate out the jars related to JDK 11 and control their addition to 
> the classpath using an environment variable or auto-detection of the JDK 
> version installed.
> OLD:
> This is a continuation of HBASE-22249. When compiled with JDK 8 and run on 
> JDK 11, the master branch throws the following exception during an attempt to 
> start the HBase REST server:
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> javax/annotation/Priority
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.modelFor(ComponentBag.java:483)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.access$100(ComponentBag.java:89)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:408)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:398)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:228)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.registerModel(ComponentBag.java:398)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.register(ComponentBag.java:235)
>   at 
> org.glassfish.jersey.model.internal.CommonConfig.register(CommonConfig.java:420)
>   at 
> org.glassfish.jersey.server.ResourceConfig.register(ResourceConfig.java:425)
>   at org.apache.hadoop.hbase.rest.RESTServer.run(RESTServer.java:245)
>   at org.apache.hadoop.hbase.rest.RESTServer.main(RESTServer.java:421)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22361) RegionServer could get stuck during shutdown process

2019-05-08 Thread Bahram Chehrazy (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bahram Chehrazy updated HBASE-22361:

Attachment: Assign-and-close-event-handlers-should-remove-region-2.patch

> RegionServer could get stuck during shutdown process
> 
>
> Key: HBASE-22361
> URL: https://issues.apache.org/jira/browse/HBASE-22361
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Bahram Chehrazy
>Assignee: Bahram Chehrazy
>Priority: Major
> Attachments: 
> Assign-and-close-event-handlers-should-remove-region-2.patch, 
> Assign-and-close-event-handlers-should-remove-region.patch
>
>
> When the server is being aborted or stopped, it waits for all online 
> regions to flush and close. If a region is at the end of the opening process, 
> the openEventHandler throws an exception; however, it fails to remove that 
> region from the online-regions list and fails to notify the master. This 
> prevents the server shutdown loop from exiting.
> Similarly, if closing fails for any region, the closeEventHandler throws, 
> but again it fails to remove that region from the online list.
>  
>  
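The shape of the fix described above can be sketched as a try/finally around the close path, so the region always leaves the online map even when the handler throws. This is an illustrative sketch, not HBase's actual handler code; the names are hypothetical.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch: the shutdown loop waits for this map to drain, so the handler must
// remove the region on both the success and the failure path.
public class CloseHandlerSketch {
    static ConcurrentMap<String, Object> onlineRegions = new ConcurrentHashMap<>();

    static void closeRegion(String name) {
        try {
            // ... flush and close the region; may throw ...
            if (name.contains("bad")) {
                throw new RuntimeException("close failed");
            }
        } finally {
            // always drop the region from the online list, even on failure
            onlineRegions.remove(name);
        }
    }

    public static void main(String[] args) {
        onlineRegions.put("r1", new Object());
        onlineRegions.put("bad-r2", new Object());
        try { closeRegion("bad-r2"); } catch (RuntimeException ignored) { }
        closeRegion("r1");
        System.out.println(onlineRegions.isEmpty()); // true: shutdown loop can exit
    }
}
```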



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22386) HBOSS: Limit depth that listing locks check for other locks

2019-05-08 Thread Sean Mackrory (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HBASE-22386:
--
Attachment: HBASE-22386.001.patch

> HBOSS: Limit depth that listing locks check for other locks
> ---
>
> Key: HBASE-22386
> URL: https://issues.apache.org/jira/browse/HBASE-22386
> Project: HBase
>  Issue Type: Bug
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Attachments: HBASE-22386.001.patch
>
>
> treeWriteLock checks all the way up and down the tree for locks. This is 
> more aggressive than it needs to be: integration testing has shown 
> significant contention when listing tables, and listing is one of 
> numerous operations that doesn't need to recursively lock the whole subtree. 
> A number of operations only need to lock one level up or down, so 
> let's start with listing: non-recursive listings don't need to 
> care about what's going on more than one level below them.
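The depth limit described above can be illustrated with a small sketch. This is not the HBOSS implementation; the path-set registry and method names are hypothetical, and real HBOSS locks live in ZooKeeper rather than an in-memory set.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Sketch: check for locks only up to maxDepth levels below a path, so a
// non-recursive listing (maxDepth = 1) ignores locks deeper in the subtree.
public class LockDepthSketch {
    // hypothetical lock registry keyed by path
    static Set<String> lockedPaths = new HashSet<>(Arrays.asList("/a/b/c"));

    static boolean hasLockWithin(String path, int maxDepth) {
        for (String locked : lockedPaths) {
            if (!locked.startsWith(path + "/")) continue;
            String rest = locked.substring(path.length() + 1);
            int depth = rest.split("/").length; // levels below `path`
            if (depth <= maxDepth) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // "/a" only has a lock two levels down, so a depth-1 check ignores it
        System.out.println(hasLockWithin("/a", 1));   // false
        System.out.println(hasLockWithin("/a/b", 1)); // true
    }
}
```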



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22386) HBOSS: Limit depth that listing locks check for other locks

2019-05-08 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835940#comment-16835940
 ] 

Sean Mackrory commented on HBASE-22386:
---

Attaching a patch built on https://github.com/apache/hbase-filesystem/pull/1. 
I'll try the whole dev-support/submit-patch.py once that's merged.

> HBOSS: Limit depth that listing locks check for other locks
> ---
>
> Key: HBASE-22386
> URL: https://issues.apache.org/jira/browse/HBASE-22386
> Project: HBase
>  Issue Type: Bug
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Attachments: HBASE-22386.001.patch
>
>
> treeWriteLock checks all the way up and down the tree for locks. This is 
> more aggressive than it needs to be: integration testing has shown 
> significant contention when listing tables, and listing is one of 
> numerous operations that doesn't need to recursively lock the whole subtree. 
> A number of operations only need to lock one level up or down, so 
> let's start with listing: non-recursive listings don't need to 
> care about what's going on more than one level below them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-22379) Fix Markdown for "Voting on Release Candidates" in book

2019-05-08 Thread Jan Hentschel (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Hentschel resolved HBASE-22379.
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.4.11
   1.3.5
   2.2.1
   2.1.5
   2.0.6
   2.3.0
   1.5.0
   3.0.0
 Release Note: Fixes the formatting of the "Voting on Release Candidates" 
section to actually show the quote and code formatting of the RAT check.

> Fix Markdown for "Voting on Release Candidates" in book
> ---
>
> Key: HBASE-22379
> URL: https://issues.apache.org/jira/browse/HBASE-22379
> Project: HBase
>  Issue Type: Improvement
>  Components: community, documentation
>Reporter: Jan Hentschel
>Assignee: Jan Hentschel
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.3.0, 2.0.6, 2.1.5, 2.2.1, 1.3.5, 1.4.11
>
>
> The Markdown in the section "Voting on Release Candidates" of the HBase book 
> seems to be broken. It looks like there should be a quote, which isn't 
> displayed correctly. The same is true for the formatting of the Maven RAT command.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22376) master can fail to start w/NPE if lastflushedseqids file is empty

2019-05-08 Thread Sergey Shelukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835951#comment-16835951
 ] 

Sergey Shelukhin commented on HBASE-22376:
--

[~psomogyi] [~Apache9] can you take a look? Tiny fix. The caller of this code 
already catches and ignores IOException, but in the case of an empty file, protobuf returns null.
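The null-return case described above can be sketched with a stand-in parser. This is illustrative only: the real code reads the lastflushedseqids file through protobuf, whose delimited readers return null at end-of-stream, and the names here are hypothetical.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

// Sketch: guard against the null that protobuf-style delimited readers
// return when the input stream is already at EOF (an empty file).
public class EmptyFileGuard {
    // stand-in for parsing a PB message from the lastflushedseqids file
    static Object parse(InputStream in) {
        try {
            return in.read() == -1 ? null : new Object(); // null on empty input
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        Object msg = parse(new ByteArrayInputStream(new byte[0]));
        if (msg == null) {
            // the fix: treat an empty file like a missing one instead of NPE-ing
            System.out.println("empty lastflushedseqids file, skipping");
        }
    }
}
```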

> master can fail to start w/NPE if lastflushedseqids file is empty
> -
>
> Key: HBASE-22376
> URL: https://issues.apache.org/jira/browse/HBASE-22376
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-22376.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22184) [security] Support get|set LogLevel in HTTPS mode

2019-05-08 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-22184:

Attachment: HBASE-22184.master.001.patch

> [security] Support get|set LogLevel in HTTPS mode
> -
>
> Key: HBASE-22184
> URL: https://issues.apache.org/jira/browse/HBASE-22184
> Project: HBase
>  Issue Type: Improvement
>  Components: logging, website
>Reporter: Reid Chan
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: security
> Attachments: HBASE-22184.master.001.patch
>
>
> As title read.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22379) Fix Markdown for "Voting on Release Candidates" in book

2019-05-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835953#comment-16835953
 ] 

Hudson commented on HBASE-22379:


Results for branch branch-2
[build #1877 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1877/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1877//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1877//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1877//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Fix Markdown for "Voting on Release Candidates" in book
> ---
>
> Key: HBASE-22379
> URL: https://issues.apache.org/jira/browse/HBASE-22379
> Project: HBase
>  Issue Type: Improvement
>  Components: community, documentation
>Reporter: Jan Hentschel
>Assignee: Jan Hentschel
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.3.0, 2.0.6, 2.1.5, 2.2.1, 1.3.5, 1.4.11
>
>
> The Markdown in the section "Voting on Release Candidates" of the HBase book 
> seems to be broken. It looks like there should be a quote, which isn't 
> displayed correctly. The same is true for the formatting of the Maven RAT command.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21777) "Tune compaction throughput" debug messages even when nothing has changed

2019-05-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835954#comment-16835954
 ] 

Hudson commented on HBASE-21777:


Results for branch branch-2
[build #1877 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1877/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1877//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1877//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1877//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> "Tune compaction throughput" debug messages even when nothing has changed 
> --
>
> Key: HBASE-21777
> URL: https://issues.apache.org/jira/browse/HBASE-21777
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Affects Versions: 1.5.0
>Reporter: Andrew Purtell
>Assignee: Tak Lon (Stephen) Wu
>Priority: Trivial
>  Labels: branch-1
> Fix For: 3.0.0, 1.5.0, 2.2.1
>
>
> PressureAwareCompactionThroughputController will log "tune compaction 
> throughput" debug messages even when, after consideration, the re-tuning makes 
> no change to the current settings. In that case it would be better not to log 
> anything.
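The suggested behavior above amounts to comparing the candidate throughput against the current value before logging. A minimal sketch, not the actual PressureAwareCompactionThroughputController code:

```java
// Sketch: only emit the "tune compaction throughput" debug line when the
// re-tuned bound actually differs from the current one.
public class TuneLogSketch {
    static double current = 50.0;

    static String maybeTune(double candidate) {
        if (Math.abs(candidate - current) < 1e-9) {
            return null; // unchanged: stay silent instead of logging
        }
        String msg = "tune compaction throughput from " + current + " to " + candidate;
        current = candidate;
        return msg;
    }

    public static void main(String[] args) {
        System.out.println(maybeTune(50.0)); // null: no log line needed
        System.out.println(maybeTune(75.0)); // a real change, worth logging
    }
}
```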



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22379) Fix Markdown for "Voting on Release Candidates" in book

2019-05-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835955#comment-16835955
 ] 

Hudson commented on HBASE-22379:


SUCCESS: Integrated in Jenkins build HBase-1.3-IT #551 (See 
[https://builds.apache.org/job/HBase-1.3-IT/551/])
HBASE-22379 Fixed Markdown in 'Voting on Release Candidates' section 
(jan.hentschel: 
[https://github.com/apache/hbase/commit/bb79656a545183d33c366428a4746f1ca2960c38])
* (edit) src/main/asciidoc/_chapters/developer.adoc


> Fix Markdown for "Voting on Release Candidates" in book
> ---
>
> Key: HBASE-22379
> URL: https://issues.apache.org/jira/browse/HBASE-22379
> Project: HBase
>  Issue Type: Improvement
>  Components: community, documentation
>Reporter: Jan Hentschel
>Assignee: Jan Hentschel
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.3.0, 2.0.6, 2.1.5, 2.2.1, 1.3.5, 1.4.11
>
>
> The Markdown in the section "Voting on Release Candidates" of the HBase book 
> seems to be broken. It looks like there should be a quote, which isn't 
> displayed correctly. The same is true for the formatting of the Maven RAT command.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22379) Fix Markdown for "Voting on Release Candidates" in book

2019-05-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835956#comment-16835956
 ] 

Hudson commented on HBASE-22379:


Results for branch branch-2.2
[build #245 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/245/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/245//console].




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/245//console].


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/245//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Fix Markdown for "Voting on Release Candidates" in book
> ---
>
> Key: HBASE-22379
> URL: https://issues.apache.org/jira/browse/HBASE-22379
> Project: HBase
>  Issue Type: Improvement
>  Components: community, documentation
>Reporter: Jan Hentschel
>Assignee: Jan Hentschel
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.3.0, 2.0.6, 2.1.5, 2.2.1, 1.3.5, 1.4.11
>
>
> The Markdown in the section "Voting on Release Candidates" of the HBase book 
> seems to be broken. It looks like there should be a quote, which isn't 
> displayed correctly. The same is true for the formatting of the Maven RAT command.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22184) [security] Support get|set LogLevel in HTTPS mode

2019-05-08 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-22184:

Status: Patch Available  (was: In Progress)

> [security] Support get|set LogLevel in HTTPS mode
> -
>
> Key: HBASE-22184
> URL: https://issues.apache.org/jira/browse/HBASE-22184
> Project: HBase
>  Issue Type: Improvement
>  Components: logging, website
>Reporter: Reid Chan
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: security
> Attachments: HBASE-22184.master.001.patch
>
>
> As title read.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

