[jira] [Updated] (HBASE-19472) Remove ArrayUtil Class

2017-12-12 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-19472:
---
Fix Version/s: 2.0.0-beta-1

> Remove ArrayUtil Class
> --
>
> Key: HBASE-19472
> URL: https://issues.apache.org/jira/browse/HBASE-19472
> Project: HBase
>  Issue Type: Improvement
>  Components: hbase
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19472.1.patch, HBASE-19472.2.patch, 
> HBASE-19472.3.patch, HBASE-19472.v4.patch
>
>
> Remove the class {{ArrayUtils}} from the project. Most of it is not used, and 
> what little is used already exists in the Apache Commons library.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19472) Remove ArrayUtil Class

2017-12-12 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-19472:
---
Attachment: HBASE-19472.v4.patch

Attaching the v4 patch with a trivial change in the hbase-server module.

> Remove ArrayUtil Class
> --
>
> Key: HBASE-19472
> URL: https://issues.apache.org/jira/browse/HBASE-19472
> Project: HBase
>  Issue Type: Improvement
>  Components: hbase
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19472.1.patch, HBASE-19472.2.patch, 
> HBASE-19472.3.patch, HBASE-19472.v4.patch
>
>
> Remove the class {{ArrayUtils}} from the project. Most of it is not used, and 
> what little is used already exists in the Apache Commons library.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19427) Add TimeRange support into Append to optimize for counters

2017-12-12 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-19427:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks for all the reviews.

> Add TimeRange support into Append to optimize for counters
> --
>
> Key: HBASE-19427
> URL: https://issues.apache.org/jira/browse/HBASE-19427
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19427.v0.patch, HBASE-19427.v1.patch
>
>
> The time range in Increment is used to optimize the Get operation. This issue 
> tries to port the feature from Increment to Append.
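
For readers new to the optimization: a minimal sketch of how the time range is supplied on Increment today, and the analogous call this issue adds to Append. Method names are per the 2.0 client API as I recall them, and the Append setter is assumed to mirror the Increment one; verify against the committed patch.
{code:java}
import java.io.IOException;
import org.apache.hadoop.hbase.client.Append;
import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.util.Bytes;

public class TimeRangeOnMutations {
  public static void main(String[] args) throws IOException {
    byte[] row = Bytes.toBytes("row");
    byte[] cf = Bytes.toBytes("cf");
    byte[] q = Bytes.toBytes("q");

    // Existing behaviour: the time range narrows the internal Get that the
    // region server performs before applying the increment, so older cells
    // (and files that only contain older cells) can be skipped.
    Increment inc = new Increment(row);
    inc.addColumn(cf, q, 1L);
    inc.setTimeRange(0L, System.currentTimeMillis());

    // What this issue ports over to Append (setter assumed to mirror Increment's):
    Append app = new Append(row);
    app.addColumn(cf, q, Bytes.toBytes("-suffix"));
    app.setTimeRange(0L, System.currentTimeMillis());
  }
}
{code}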



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-15536) Make AsyncFSWAL as our default WAL

2017-12-12 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288833#comment-16288833
 ] 

Duo Zhang commented on HBASE-15536:
---

No, I haven't started digging into this one yet. Thanks [~ram_krish]. Just go ahead.

> Make AsyncFSWAL as our default WAL
> --
>
> Key: HBASE-15536
> URL: https://issues.apache.org/jira/browse/HBASE-15536
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: 15536.addendum2.enable.asyncfswal.by.default.2.txt, 
> 15536.addendum2.enable.asyncfswal.by.default.txt, 
> 15536.addendum2.enable.asyncfswal.by.default.txt, 15536.minor.addendum.patch, 
> HBASE-15536-v1.patch, HBASE-15536-v2.patch, HBASE-15536-v3.patch, 
> HBASE-15536-v4.patch, HBASE-15536-v5.patch, HBASE-15536.patch, 
> latesttrunk_asyncWAL_50threads_10cols.jfr, 
> latesttrunk_defaultWAL_50threads_10cols.jfr
>
>
> As it should be predicated on passing basic cluster ITBLL
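
For context on what making AsyncFSWAL "the default" means operationally: today the async provider has to be selected explicitly through configuration, and this issue flips that default. A minimal sketch; the key/value shown ("hbase.wal.provider" = "asyncfs") reflect my understanding of the provider knob, so verify them against hbase-default.xml for your release.
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class SelectAsyncWal {
  public static void main(String[] args) {
    // Opt a cluster into AsyncFSWAL explicitly; making "asyncfs" the default
    // value of this property is what this issue proposes.
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.wal.provider", "asyncfs");
    System.out.println("WAL provider: " + conf.get("hbase.wal.provider"));
  }
}
{code}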



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19504) Add TimeRange support into checkAndMutate

2017-12-12 Thread Chia-Ping Tsai (JIRA)
Chia-Ping Tsai created HBASE-19504:
--

 Summary: Add TimeRange support into checkAndMutate
 Key: HBASE-19504
 URL: https://issues.apache.org/jira/browse/HBASE-19504
 Project: HBase
  Issue Type: New Feature
Reporter: Chia-Ping Tsai
 Fix For: 2.0.0






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19489) Check against only the latest maintenance release in pre-commit hadoopcheck.

2017-12-12 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288830#comment-16288830
 ] 

Appy commented on HBASE-19489:
--

And although this went in, if anyone is skeptical despite the supporting case 
made in the description, we can always discuss it and pull it out if required.

> Check against only the latest maintenance release in pre-commit hadoopcheck.
> 
>
> Key: HBASE-19489
> URL: https://issues.apache.org/jira/browse/HBASE-19489
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: HBASE-19489.master.001.patch, 
> HBASE-19489.master.002.patch, HBASE-19489.master.003.patch, 
> HBASE-19489.master.004.patch, HBASE-19489.master.005.patch, 
> HBASE-19489.master.006.patch, HBASE-19489.master.007.patch, 
> test_change_in_hbase_common.master.patch
>
>
> (copied from dev thread)
> {color:green}
> | +1  | hadoopcheck | 52m 1s |Patch does not cause any errors with 
> Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. 
> |
> {color}
> Almost 1 hr to check against 10 versions. And it's only going to increase as 
> more 2.6.x, 2.7.x and 3.0.x releases come out.
> Suggestion here is simple, let's check against only the latest maintenance 
> release for each minor version i.e. 2.6.5, 2.7.4 and 3.0.0-alpha4.
> Advantage: Save ~40 min on pre-commit time.
> Justification:
> - We only do compile checks. Maintenance releases are not supposed to be 
> doing API breaking changes. So checking against maintenance release for each 
> minor version should be enough.
> - We rarely see any hadoop check -1, and most recent ones have been due to 
> 3.0. These will still be caught.
> - Nightly can still check against all hadoop versions (since nightlies are 
> supposed to do holistic testing)
> - Analyzing 201 precommits from 10100 (11/29) - 10300 (12/8) (10 days):
>   138 had +1 hadoopcheck
>15 had -1 hadoopcheck
>   (others probably failed even before that - merge issue, etc)
> Spot checking some 
> failures:(10241,10246,10225,10269,10151,10156,10184,10250,10298,10227,10294,10223,10251,10119,10230)
> 10241: All 2.6.x failed. Others didn't run
> 10246: All 10 versions failed.
> 10184: All 2.6.x and 2.7.x failed. Others didn't run
> 10223: All 10 versions failed 
> 10230: All 2.6.x failed. Others didn't run
>   
> Common pattern being, all maintenance versions fail together.
> (idk, why sometimes 2.7.* are not reported if 2.6.* fail, but that's 
> irrelevant to this discussion).
> What do you say - only check latest maintenance releases in precommit (and 
> let nightlies do holistic testing against all versions)?
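
A quick sanity check of the "~40 min" figure, using the 52-minute hadoopcheck runtime quoted above:
\[
\frac{52\ \text{min}}{10\ \text{versions}} \approx 5.2\ \text{min/version},\qquad
3 \times 5.2 \approx 16\ \text{min} \;\Rightarrow\; \text{saving} \approx 52 - 16 = 36\ \text{min} \approx 40\ \text{min}.
\]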



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-15536) Make AsyncFSWAL as our default WAL

2017-12-12 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288827#comment-16288827
 ] 

ramkrishna.s.vasudevan commented on HBASE-15536:


bq.TestBlockEvictionFromClient, it also fails for me locally but do not know 
the reason yet. Will dig later.
I can check this, [~Apache9]. Let me know if you have not already started 
checking it.

> Make AsyncFSWAL as our default WAL
> --
>
> Key: HBASE-15536
> URL: https://issues.apache.org/jira/browse/HBASE-15536
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: 15536.addendum2.enable.asyncfswal.by.default.2.txt, 
> 15536.addendum2.enable.asyncfswal.by.default.txt, 
> 15536.addendum2.enable.asyncfswal.by.default.txt, 15536.minor.addendum.patch, 
> HBASE-15536-v1.patch, HBASE-15536-v2.patch, HBASE-15536-v3.patch, 
> HBASE-15536-v4.patch, HBASE-15536-v5.patch, HBASE-15536.patch, 
> latesttrunk_asyncWAL_50threads_10cols.jfr, 
> latesttrunk_defaultWAL_50threads_10cols.jfr
>
>
> As it should be predicated on passing basic cluster ITBLL



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19489) Check against only the latest maintenance release in pre-commit hadoopcheck.

2017-12-12 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-19489:
-
Attachment: test_change_in_hbase_common.master.patch

> Check against only the latest maintenance release in pre-commit hadoopcheck.
> 
>
> Key: HBASE-19489
> URL: https://issues.apache.org/jira/browse/HBASE-19489
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: HBASE-19489.master.001.patch, 
> HBASE-19489.master.002.patch, HBASE-19489.master.003.patch, 
> HBASE-19489.master.004.patch, HBASE-19489.master.005.patch, 
> HBASE-19489.master.006.patch, HBASE-19489.master.007.patch, 
> test_change_in_hbase_common.master.patch
>
>
> (copied from dev thread)
> {color:green}
> | +1  | hadoopcheck | 52m 1s |Patch does not cause any errors with 
> Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. 
> |
> {color}
> Almost 1 hr to check against 10 versions. And it's only going to increase as 
> more 2.6.x, 2.7.x and 3.0.x releases come out.
> Suggestion here is simple, let's check against only the latest maintenance 
> release for each minor version i.e. 2.6.5, 2.7.4 and 3.0.0-alpha4.
> Advantage: Save ~40 min on pre-commit time.
> Justification:
> - We only do compile checks. Maintenance releases are not supposed to be 
> doing API breaking changes. So checking against maintenance release for each 
> minor version should be enough.
> - We rarely see any hadoop check -1, and most recent ones have been due to 
> 3.0. These will still be caught.
> - Nightly can still check against all hadoop versions (since nightlies are 
> supposed to do holistic testing)
> - Analyzing 201 precommits from 10100 (11/29) - 10300 (12/8) (10 days):
>   138 had +1 hadoopcheck
>15 had -1 hadoopcheck
>   (others probably failed even before that - merge issue, etc)
> Spot checking some 
> failures:(10241,10246,10225,10269,10151,10156,10184,10250,10298,10227,10294,10223,10251,10119,10230)
> 10241: All 2.6.x failed. Others didn't run
> 10246: All 10 versions failed.
> 10184: All 2.6.x and 2.7.x failed. Others didn't run
> 10223: All 10 versions failed 
> 10230: All 2.6.x failed. Others didn't run
>   
> Common pattern being, all maintenance versions fail together.
> (idk, why sometimes 2.7.* are not reported if 2.6.* fail, but that's 
> irrelevant to this discussion).
> What do you say - only check latest maintenance releases in precommit (and 
> let nightlies do holistic testing against all versions)?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19489) Check against only the latest maintenance release in pre-commit hadoopcheck.

2017-12-12 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288820#comment-16288820
 ] 

Appy commented on HBASE-19489:
--

Phew... playing with yetus is not trivial, especially when it runs inside 
docker.
But I finally found a way to get an env variable inside the docker container. 
Here's the run: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10407/console
Actually, that run is using a hack where setting DOCKER_EXTRAARGS inside 
hadoopcheck_parse_args makes it work, instead of the setter in 
hadoopcheck_docker_support (the right way). That's because yetus does a 
re-exec on detecting a change in the testing environment, which skips calling 
plugins' X_docker_support functions (ref: 
https://github.com/apache/yetus/blob/2b91d243f7afbe89ec558fe09b7e33f90e065ac4/precommit/test-patch.sh#L1690).
The only way to test the real thing is by checking it in, so I'm pushing the 
change. Will revert quickly if it breaks stuff. Bear with me.

> Check against only the latest maintenance release in pre-commit hadoopcheck.
> 
>
> Key: HBASE-19489
> URL: https://issues.apache.org/jira/browse/HBASE-19489
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: HBASE-19489.master.001.patch, 
> HBASE-19489.master.002.patch, HBASE-19489.master.003.patch, 
> HBASE-19489.master.004.patch, HBASE-19489.master.005.patch, 
> HBASE-19489.master.006.patch, HBASE-19489.master.007.patch
>
>
> (copied from dev thread)
> {color:green}
> | +1  | hadoopcheck | 52m 1s |Patch does not cause any errors with 
> Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. 
> |
> {color}
> Almost 1 hr to check against 10 versions. And it's only going to increase as 
> more 2.6.x, 2.7.x and 3.0.x releases come out.
> Suggestion here is simple, let's check against only the latest maintenance 
> release for each minor version i.e. 2.6.5, 2.7.4 and 3.0.0-alpha4.
> Advantage: Save ~40 min on pre-commit time.
> Justification:
> - We only do compile checks. Maintenance releases are not supposed to be 
> doing API breaking changes. So checking against maintenance release for each 
> minor version should be enough.
> - We rarely see any hadoop check -1, and most recent ones have been due to 
> 3.0. These will still be caught.
> - Nightly can still check against all hadoop versions (since nightlies are 
> supposed to do holistic testing)
> - Analyzing 201 precommits from 10100 (11/29) - 10300 (12/8) (10 days):
>   138 had +1 hadoopcheck
>15 had -1 hadoopcheck
>   (others probably failed even before that - merge issue, etc)
> Spot checking some 
> failures:(10241,10246,10225,10269,10151,10156,10184,10250,10298,10227,10294,10223,10251,10119,10230)
> 10241: All 2.6.x failed. Others didn't run
> 10246: All 10 versions failed.
> 10184: All 2.6.x and 2.7.x failed. Others didn't run
> 10223: All 10 versions failed 
> 10230: All 2.6.x failed. Others didn't run
>   
> Common pattern being, all maintenance versions fail together.
> (idk, why sometimes 2.7.* are not reported if 2.6.* fail, but that's 
> irrelevant to this discussion).
> What do you say - only check latest maintenance releases in precommit (and 
> let nightlies do holistic testing against all versions)?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19468) FNFE during scans and flushes

2017-12-12 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288815#comment-16288815
 ] 

Chia-Ping Tsai commented on HBASE-19468:


bq.So there is no extra resource that is getting held up.
It seems the flusher has opened the related resources before notifying the 
scanner. I skimmed the construction of the encoded seeker, and no such slow 
resource operation exists.

> FNFE during scans and flushes
> -
>
> Key: HBASE-19468
> URL: https://issues.apache.org/jira/browse/HBASE-19468
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Affects Versions: 1.3.1
>Reporter: Thiruvel Thirumoolan
>Priority: Critical
> Fix For: 2.0.0, 1.4.1, 1.5.0, 1.3.3
>
> Attachments: HBASE-19468-poc.patch, HBASE-19468_1.4.patch
>
>
> We see FNFE exceptions on our 1.3 clusters when scans and flushes happen at 
> the same time. This causes the regionserver to throw an UnknownScannerException 
> and the client retries.
> This happens during the following sequence:
> 1. Scanner open, client fetched some rows from regionserver and working on it
> 2. Flush happens and storeScanner is updated with flushed files 
> (StoreScanner.updateReaders())
> 3. Compaction happens on the region while scanner is still open
> 4. compaction discharger runs and cleans up the newly flushed file as we 
> don't have new scanners on it yet.
> 5. Client issues scan.next() and, during StoreScanner.resetScannerStack(), we 
> get an FNFE. The RegionServer throws an UnknownScannerException; the client 
> retries in 1.3. With branch-1.4, the scan fails with a DoNotRetryIOException.
> [~ram_krish], My proposal is to increment the reader count during 
> updateReaders() and decrement it during resetScannerStack(), so discharger 
> doesn't clean it up. Scan lease expiries also have to be taken care of. Am I 
> missing anything? Is there a better approach?
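
A minimal sketch of the ref-counting proposal above, with illustrative class and method names (not the actual HBase internals): pin the flushed readers in updateReaders() and release the pin once the scanner stack has been rebuilt.
{code:java}
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative names only; the real code lives in StoreScanner and the store file readers.
class RefCountedReader {
  final AtomicInteger refCount = new AtomicInteger();
  boolean isReferenced() { return refCount.get() > 0; } // discharger skips referenced files
}

class ScannerSketch {
  private List<RefCountedReader> flushedReaders;

  // Flush path: pin the newly flushed files so the compacted-files discharger
  // cannot archive them before this scanner has switched over to them.
  void updateReaders(List<RefCountedReader> newReaders) {
    for (RefCountedReader r : newReaders) {
      r.refCount.incrementAndGet();
    }
    this.flushedReaders = newReaders;
  }

  // Next scan RPC: the scanner stack is rebuilt on the new files, so the pin
  // taken in updateReaders() can be released. As noted above, a lease expiry
  // would have to release it as well.
  void resetScannerStack() {
    if (flushedReaders != null) {
      for (RefCountedReader r : flushedReaders) {
        r.refCount.decrementAndGet();
      }
      flushedReaders = null;
    }
  }
}
{code}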



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19489) Check against only the latest maintenance release in pre-commit hadoopcheck.

2017-12-12 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-19489:
-
Attachment: (was: HBASE-19489.master.008.patch)

> Check against only the latest maintenance release in pre-commit hadoopcheck.
> 
>
> Key: HBASE-19489
> URL: https://issues.apache.org/jira/browse/HBASE-19489
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: HBASE-19489.master.001.patch, 
> HBASE-19489.master.002.patch, HBASE-19489.master.003.patch, 
> HBASE-19489.master.004.patch, HBASE-19489.master.005.patch, 
> HBASE-19489.master.006.patch, HBASE-19489.master.007.patch
>
>
> (copied from dev thread)
> {color:green}
> | +1  | hadoopcheck | 52m 1s |Patch does not cause any errors with 
> Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. 
> |
> {color}
> Almost 1 hr to check against 10 versions. And it's only going to increase as 
> more 2.6.x, 2.7.x and 3.0.x releases come out.
> Suggestion here is simple, let's check against only the latest maintenance 
> release for each minor version i.e. 2.6.5, 2.7.4 and 3.0.0-alpha4.
> Advantage: Save ~40 min on pre-commit time.
> Justification:
> - We only do compile checks. Maintenance releases are not supposed to be 
> doing API breaking changes. So checking against maintenance release for each 
> minor version should be enough.
> - We rarely see any hadoop check -1, and most recent ones have been due to 
> 3.0. These will still be caught.
> - Nightly can still check against all hadoop versions (since nightlies are 
> supposed to do holistic testing)
> - Analyzing 201 precommits from 10100 (11/29) - 10300 (12/8) (10 days):
>   138 had +1 hadoopcheck
>15 had -1 hadoopcheck
>   (others probably failed even before that - merge issue, etc)
> Spot checking some 
> failures:(10241,10246,10225,10269,10151,10156,10184,10250,10298,10227,10294,10223,10251,10119,10230)
> 10241: All 2.6.x failed. Others didn't run
> 10246: All 10 versions failed.
> 10184: All 2.6.x and 2.7.x failed. Others didn't run
> 10223: All 10 versions failed 
> 10230: All 2.6.x failed. Others didn't run
>   
> Common pattern being, all maintenance versions fail together.
> (idk, why sometimes 2.7.* are not reported if 2.6.* fail, but that's 
> irrelevant to this discussion).
> What do you say - only check latest maintenance releases in precommit (and 
> let nightlies do holistic testing against all versions)?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19503) Fix TestWALOpenAfterDNRollingStart for AsyncFSWAL

2017-12-12 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-19503:
-

 Summary: Fix TestWALOpenAfterDNRollingStart for AsyncFSWAL
 Key: HBASE-19503
 URL: https://issues.apache.org/jira/browse/HBASE-19503
 Project: HBase
  Issue Type: Bug
  Components: Replication, wal
Reporter: Duo Zhang
Assignee: Duo Zhang






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-15536) Make AsyncFSWAL as our default WAL

2017-12-12 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288809#comment-16288809
 ] 

Duo Zhang commented on HBASE-15536:
---

TestWALOpenAfterDNRollingStart is a problem. Let me open an issue to address it 
first.

And for TestBlockEvictionFromClient, it also fails for me locally, but I do not 
know the reason yet. Will dig into it later.

> Make AsyncFSWAL as our default WAL
> --
>
> Key: HBASE-15536
> URL: https://issues.apache.org/jira/browse/HBASE-15536
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: 15536.addendum2.enable.asyncfswal.by.default.2.txt, 
> 15536.addendum2.enable.asyncfswal.by.default.txt, 
> 15536.addendum2.enable.asyncfswal.by.default.txt, 15536.minor.addendum.patch, 
> HBASE-15536-v1.patch, HBASE-15536-v2.patch, HBASE-15536-v3.patch, 
> HBASE-15536-v4.patch, HBASE-15536-v5.patch, HBASE-15536.patch, 
> latesttrunk_asyncWAL_50threads_10cols.jfr, 
> latesttrunk_defaultWAL_50threads_10cols.jfr
>
>
> As it should be predicated on passing basic cluster ITBLL



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19483) Add proper privilege check for rsgroup commands

2017-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288801#comment-16288801
 ] 

Hadoop QA commented on HBASE-19483:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  7m 
55s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
13s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
4s{color} | {color:red} hbase-server: The patch generated 1 new + 76 unchanged 
- 0 fixed = 77 total (was 76) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  2m 
18s{color} | {color:red} root: The patch generated 1 new + 91 unchanged - 0 
fixed = 92 total (was 91) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
45s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
52m 31s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}160m 
14s{color} | {color:green} root in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}251m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19483 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901802/HBASE-19483.master.003.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux cd565d6a2918 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 
15:49:21 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master 

[jira] [Commented] (HBASE-19489) Check against only the latest maintenance release in pre-commit hadoopcheck.

2017-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288800#comment-16288800
 ] 

Hadoop QA commented on HBASE-19489:
---

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HBASE-Build/10410/console in case of 
problems.


> Check against only the latest maintenance release in pre-commit hadoopcheck.
> 
>
> Key: HBASE-19489
> URL: https://issues.apache.org/jira/browse/HBASE-19489
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: HBASE-19489.master.001.patch, 
> HBASE-19489.master.002.patch, HBASE-19489.master.003.patch, 
> HBASE-19489.master.004.patch, HBASE-19489.master.005.patch, 
> HBASE-19489.master.006.patch, HBASE-19489.master.007.patch, 
> HBASE-19489.master.008.patch
>
>
> (copied from dev thread)
> {color:green}
> | +1  | hadoopcheck | 52m 1s |Patch does not cause any errors with 
> Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. 
> |
> {color}
> Almost 1 hr to check against 10 versions. And it's only going to increase as 
> more 2.6.x, 2.7.x and 3.0.x releases come out.
> Suggestion here is simple, let's check against only the latest maintenance 
> release for each minor version i.e. 2.6.5, 2.7.4 and 3.0.0-alpha4.
> Advantage: Save ~40 min on pre-commit time.
> Justification:
> - We only do compile checks. Maintenance releases are not supposed to be 
> doing API breaking changes. So checking against maintenance release for each 
> minor version should be enough.
> - We rarely see any hadoop check -1, and most recent ones have been due to 
> 3.0. These will still be caught.
> - Nightly can still check against all hadoop versions (since nightlies are 
> supposed to do holistic testing)
> - Analyzing 201 precommits from 10100 (11/29) - 10300 (12/8) (10 days):
>   138 had +1 hadoopcheck
>15 had -1 hadoopcheck
>   (others probably failed even before that - merge issue, etc)
> Spot checking some 
> failures:(10241,10246,10225,10269,10151,10156,10184,10250,10298,10227,10294,10223,10251,10119,10230)
> 10241: All 2.6.x failed. Others didn't run
> 10246: All 10 versions failed.
> 10184: All 2.6.x and 2.7.x failed. Others didn't run
> 10223: All 10 versions failed 
> 10230: All 2.6.x failed. Others didn't run
>   
> Common pattern being, all maintenance versions fail together.
> (idk, why sometimes 2.7.* are not reported if 2.6.* fail, but that's 
> irrelevant to this discussion).
> What do you say - only check latest maintenance releases in precommit (and 
> let nightlies do holistic testing against all versions)?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19502) Make sure we have closed all StoreFileScanner if we fail to open any StoreFileScanners

2017-12-12 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-19502:
---
Attachment: HBASE-19502.branch-1.4.patch

> Make sure we have closed all StoreFileScanner if we fail to open any 
> StoreFileScanners
> --
>
> Key: HBASE-19502
> URL: https://issues.apache.org/jira/browse/HBASE-19502
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0, 1.4.1, 1.2.7, 1.3.3
>
> Attachments: HBASE-19502.branch-1.4.patch
>
>
> {code:title=StoreFileScanner.java}
>   public static List<StoreFileScanner> getScannersForStoreFiles(Collection<StoreFile> files,
>       boolean cacheBlocks, boolean usePread, boolean isCompaction, boolean canUseDrop,
>       ScanQueryMatcher matcher, long readPt, boolean isPrimaryReplica) throws IOException {
>     List<StoreFileScanner> scanners = new ArrayList<StoreFileScanner>(files.size());
>     List<StoreFile> sorted_files = new ArrayList<>(files);
>     Collections.sort(sorted_files, StoreFile.Comparators.SEQ_ID);
>     for (int i = 0; i < sorted_files.size(); i++) {
>       StoreFile.Reader r = sorted_files.get(i).createReader(canUseDrop);
>       r.setReplicaStoreFile(isPrimaryReplica);
>       StoreFileScanner scanner = r.getStoreFileScanner(cacheBlocks, usePread, isCompaction, readPt,
>           i, matcher != null ? !matcher.hasNullColumnInQuery() : false);
>       scanners.add(scanner);
>     }
>     return scanners;
>   }
> {code}
> The missed decrement of ref count will obstruct the cleanup of compacted 
> files.
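
A sketch of the fix direction the description implies: if opening a reader or scanner throws part-way through the loop, close the scanners already opened so their readers' references are released. Names mirror the snippet above (imports as in the original StoreFileScanner.java); this is not the committed patch.
{code:java}
public static List<StoreFileScanner> getScannersForStoreFiles(Collection<StoreFile> files,
    boolean cacheBlocks, boolean usePread, boolean isCompaction, boolean canUseDrop,
    ScanQueryMatcher matcher, long readPt, boolean isPrimaryReplica) throws IOException {
  List<StoreFile> sortedFiles = new ArrayList<>(files);
  Collections.sort(sortedFiles, StoreFile.Comparators.SEQ_ID);
  List<StoreFileScanner> scanners = new ArrayList<>(sortedFiles.size());
  boolean succeeded = false;
  try {
    for (int i = 0; i < sortedFiles.size(); i++) {
      StoreFile.Reader r = sortedFiles.get(i).createReader(canUseDrop);
      r.setReplicaStoreFile(isPrimaryReplica);
      scanners.add(r.getStoreFileScanner(cacheBlocks, usePread, isCompaction, readPt, i,
          matcher != null && !matcher.hasNullColumnInQuery()));
    }
    succeeded = true;
  } finally {
    if (!succeeded) {
      // Undo the partial work: closing a scanner releases its reader reference,
      // so the compacted-files discharger is not blocked by a leaked ref count.
      for (StoreFileScanner scanner : scanners) {
        scanner.close();
      }
    }
  }
  return scanners;
}
{code}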



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19502) Make sure we have closed all StoreFileScanner if we fail to open any StoreFileScanners

2017-12-12 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-19502:
---
Status: Patch Available  (was: Open)

> Make sure we have closed all StoreFileScanner if we fail to open any 
> StoreFileScanners
> --
>
> Key: HBASE-19502
> URL: https://issues.apache.org/jira/browse/HBASE-19502
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0, 1.4.1, 1.2.7, 1.3.3
>
> Attachments: HBASE-19502.branch-1.4.patch
>
>
> {code:title=StoreFileScanner.java}
>   public static List<StoreFileScanner> getScannersForStoreFiles(Collection<StoreFile> files,
>       boolean cacheBlocks, boolean usePread, boolean isCompaction, boolean canUseDrop,
>       ScanQueryMatcher matcher, long readPt, boolean isPrimaryReplica) throws IOException {
>     List<StoreFileScanner> scanners = new ArrayList<StoreFileScanner>(files.size());
>     List<StoreFile> sorted_files = new ArrayList<>(files);
>     Collections.sort(sorted_files, StoreFile.Comparators.SEQ_ID);
>     for (int i = 0; i < sorted_files.size(); i++) {
>       StoreFile.Reader r = sorted_files.get(i).createReader(canUseDrop);
>       r.setReplicaStoreFile(isPrimaryReplica);
>       StoreFileScanner scanner = r.getStoreFileScanner(cacheBlocks, usePread, isCompaction, readPt,
>           i, matcher != null ? !matcher.hasNullColumnInQuery() : false);
>       scanners.add(scanner);
>     }
>     return scanners;
>   }
> {code}
> The missed decrement of ref count will obstruct the cleanup of compacted 
> files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19483) Add proper privilege check for rsgroup commands

2017-12-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288792#comment-16288792
 ] 

stack commented on HBASE-19483:
---

Ain't this just wrong, having hbase-server know about rsgroup? The perm check 
should be baked into RSGroup and not done as CP pre/post? (It's fine adding 
rsgroup strings to the AC table that has rsgroup in it...). What do ye think?

> Add proper privilege check for rsgroup commands
> ---
>
> Key: HBASE-19483
> URL: https://issues.apache.org/jira/browse/HBASE-19483
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Guangxu Cheng
> Attachments: HBASE-19483.master.001.patch, 
> HBASE-19483.master.002.patch, HBASE-19483.master.003.patch
>
>
> Currently the list_rsgroups command can be executed by any user.
> This is inconsistent with other list commands such as list_peers and 
> list_peer_configs.
> We should add a proper privilege check for the list_rsgroups command.
> A privilege check should also be added for the get_table_rsgroup / 
> get_server_rsgroup / get_rsgroup commands.
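
For concreteness, the shape of the guard being asked for, wherever it ends up living (coprocessor pre-hook vs. the rsgroup admin endpoint itself, per the comment above). This is a hypothetical sketch: requireAdmin()/isAdmin() are stand-ins, not existing HBase APIs.
{code:java}
import java.io.IOException;
import org.apache.hadoop.hbase.security.AccessDeniedException;
import org.apache.hadoop.hbase.security.User;

// Hypothetical sketch only: every rsgroup read command checks for ADMIN
// before returning data, matching the behaviour of other admin-only commands.
class RSGroupPrivilegeSketch {
  void preListRSGroups(User caller) throws IOException {
    requireAdmin(caller, "listRSGroups");
  }

  void preGetTableRSGroup(User caller) throws IOException {
    requireAdmin(caller, "getTableRSGroup");
  }

  private void requireAdmin(User caller, String request) throws IOException {
    if (!isAdmin(caller)) { // isAdmin() stands in for the real ACL lookup
      throw new AccessDeniedException("Insufficient permissions for " + request
          + " (global ADMIN required), user=" + caller.getShortName());
    }
  }

  private boolean isAdmin(User caller) {
    return false; // placeholder; the real check would consult the hbase:acl table
  }
}
{code}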



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19468) FNFE during scans and flushes

2017-12-12 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288788#comment-16288788
 ] 

ramkrishna.s.vasudevan commented on HBASE-19468:


Thanks for the review.
bq. Opening the reader in updateReaders will make the flusher open all the 
storefiles for all scanners. Does it impact the performance?
Even before I thought of this patch, I first checked the code. One thing is 
sure: on a flush, the reader is opened only when we commit that file
{code}
private HStoreFile commitFile(Path path, long logCacheFlushId, MonitoredTask status)
    throws IOException {
{code}
In HStore#commitFile we open the reader. So the actual Store#getScanner() 
internally just creates the lightweight StoreFileScanner, and the reader is not 
opened by this method. So there is no extra resource that is getting held up. 
Please correct me if I am missing something here. [~chia7712]?

Coming to this:
bq. May be it was a get op and no other next() calls might happen? Even on Scan.
In the case of gets() it is generally a single RPC; there are no multi-RPC 
calls, I believe. So even if the memstore scanner got flushed to a file scanner, 
it will get closed by the close call.
For scans(), even if next() is not called - say there was a lease expiry - we 
will still close this scanner as part of close().

bq. That is why I was wondering whether we can have a similar way of updating 
readers after compaction (like the flush) and clear these new files from the 
list.. Oh ya we should have ways of notify
I also thought of this first, but felt it may be tricky to implement. Again, for 
that, compaction would have to call updateReaders like it did before. Hence I 
went with this simple way.
So on every compaction we would again have to instruct the scanner to update its 
scanner list, and avoiding that was the actual aim of this ref counting feature. 
Correct me if I am wrong here. Anyway, I will still see if there is a better way 
(if any).



> FNFE during scans and flushes
> -
>
> Key: HBASE-19468
> URL: https://issues.apache.org/jira/browse/HBASE-19468
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Affects Versions: 1.3.1
>Reporter: Thiruvel Thirumoolan
>Priority: Critical
> Fix For: 2.0.0, 1.4.1, 1.5.0, 1.3.3
>
> Attachments: HBASE-19468-poc.patch, HBASE-19468_1.4.patch
>
>
> We see FNFE exceptions on our 1.3 clusters when scans and flushes happen at 
> the same time. This causes the regionserver to throw an UnknownScannerException 
> and the client retries.
> This happens during the following sequence:
> 1. Scanner open, client fetched some rows from regionserver and working on it
> 2. Flush happens and storeScanner is updated with flushed files 
> (StoreScanner.updateReaders())
> 3. Compaction happens on the region while scanner is still open
> 4. compaction discharger runs and cleans up the newly flushed file as we 
> don't have new scanners on it yet.
> 5. Client issues scan.next() and, during StoreScanner.resetScannerStack(), we 
> get an FNFE. The RegionServer throws an UnknownScannerException; the client 
> retries in 1.3. With branch-1.4, the scan fails with a DoNotRetryIOException.
> [~ram_krish], My proposal is to increment the reader count during 
> updateReaders() and decrement it during resetScannerStack(), so discharger 
> doesn't clean it up. Scan lease expiries also have to be taken care of. Am I 
> missing anything? Is there a better approach?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19489) Check against only the latest maintenance release in pre-commit hadoopcheck.

2017-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288790#comment-16288790
 ] 

Hadoop QA commented on HBASE-19489:
---

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HBASE-Build/10409/console in case of 
problems.


> Check against only the latest maintenance release in pre-commit hadoopcheck.
> 
>
> Key: HBASE-19489
> URL: https://issues.apache.org/jira/browse/HBASE-19489
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: HBASE-19489.master.001.patch, 
> HBASE-19489.master.002.patch, HBASE-19489.master.003.patch, 
> HBASE-19489.master.004.patch, HBASE-19489.master.005.patch, 
> HBASE-19489.master.006.patch, HBASE-19489.master.007.patch, 
> HBASE-19489.master.008.patch
>
>
> (copied from dev thread)
> {color:green}
> | +1  | hadoopcheck | 52m 1s |Patch does not cause any errors with 
> Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. 
> |
> {color}
> Almost 1 hr to check against 10 versions. And it's only going to increase as 
> more 2.6.x, 2.7.x and 3.0.x releases come out.
> Suggestion here is simple, let's check against only the latest maintenance 
> release for each minor version i.e. 2.6.5, 2.7.4 and 3.0.0-alpha4.
> Advantage: Save ~40 min on pre-commit time.
> Justification:
> - We only do compile checks. Maintenance releases are not supposed to be 
> doing API breaking changes. So checking against maintenance release for each 
> minor version should be enough.
> - We rarely see any hadoop check -1, and most recent ones have been due to 
> 3.0. These will still be caught.
> - Nightly can still check against all hadoop versions (since nightlies are 
> supposed to do holistic testing)
> - Analyzing 201 precommits from 10100 (11/29) - 10300 (12/8) (10 days):
>   138 had +1 hadoopcheck
>15 had -1 hadoopcheck
>   (others probably failed even before that - merge issue, etc)
> Spot checking some 
> failures:(10241,10246,10225,10269,10151,10156,10184,10250,10298,10227,10294,10223,10251,10119,10230)
> 10241: All 2.6.x failed. Others didn't run
> 10246: All 10 versions failed.
> 10184: All 2.6.x and 2.7.x failed. Others didn't run
> 10223: All 10 versions failed 
> 10230: All 2.6.x failed. Others didn't run
>   
> Common pattern being, all maintenance versions fail together.
> (idk, why sometimes 2.7.* are not reported if 2.6.* fail, but that's 
> irrelevant to this discussion).
> What do you say - only check latest maintenance releases in precommit (and 
> let nightlies do holistic testing against all versions)?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19489) Check against only the latest maintenance release in pre-commit hadoopcheck.

2017-12-12 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-19489:
-
Attachment: HBASE-19489.master.008.patch

> Check against only the latest maintenance release in pre-commit hadoopcheck.
> 
>
> Key: HBASE-19489
> URL: https://issues.apache.org/jira/browse/HBASE-19489
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: HBASE-19489.master.001.patch, 
> HBASE-19489.master.002.patch, HBASE-19489.master.003.patch, 
> HBASE-19489.master.004.patch, HBASE-19489.master.005.patch, 
> HBASE-19489.master.006.patch, HBASE-19489.master.007.patch, 
> HBASE-19489.master.008.patch
>
>
> (copied from dev thread)
> {color:green}
> | +1  | hadoopcheck | 52m 1s |Patch does not cause any errors with 
> Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. 
> |
> {color}
> Almost 1 hr to check against 10 versions. And it's only going to increase as 
> more 2.6.x, 2.7.x and 3.0.x releases come out.
> Suggestion here is simple, let's check against only the latest maintenance 
> release for each minor version i.e. 2.6.5, 2.7.4 and 3.0.0-alpha4.
> Advantage: Save ~40 min on pre-commit time.
> Justification:
> - We only do compile checks. Maintenance releases are not supposed to be 
> doing API breaking changes. So checking against maintenance release for each 
> minor version should be enough.
> - We rarely see any hadoop check -1, and most recent ones have been due to 
> 3.0. These will still be caught.
> - Nightly can still check against all hadoop versions (since nightlies are 
> supposed to do holistic testing)
> - Analyzing 201 precommits from 10100 (11/29) - 10300 (12/8) (10 days):
>   138 had +1 hadoopcheck
>15 had -1 hadoopcheck
>   (others probably failed even before that - merge issue, etc)
> Spot checking some 
> failures:(10241,10246,10225,10269,10151,10156,10184,10250,10298,10227,10294,10223,10251,10119,10230)
> 10241: All 2.6.x failed. Others didn't run
> 10246: All 10 versions failed.
> 10184: All 2.6.x and 2.7.x failed. Others didn't run
> 10223: All 10 versions failed 
> 10230: All 2.6.x failed. Others didn't run
>   
> Common pattern being, all maintenance versions fail together.
> (idk, why sometimes 2.7.* are not reported if 2.6.* fail, but that's 
> irrelevant to this discussion).
> What do you say - only check latest maintenance releases in precommit (and 
> let nightlies do holistic testing against all versions)?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19502) Make sure we have closed all StoreFileScanner if we fail to open any StoreFileScanners

2017-12-12 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-19502:
---
Fix Version/s: 1.2.7
   1.4.1

> Make sure we have closed all StoreFileScanner if we fail to open any 
> StoreFileScanners
> --
>
> Key: HBASE-19502
> URL: https://issues.apache.org/jira/browse/HBASE-19502
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0, 1.4.1, 1.2.7, 1.3.3
>
>
> {code:title=StoreFileScanner.java}
>   public static List<StoreFileScanner> getScannersForStoreFiles(Collection<StoreFile> files,
>       boolean cacheBlocks, boolean usePread, boolean isCompaction, boolean canUseDrop,
>       ScanQueryMatcher matcher, long readPt, boolean isPrimaryReplica) throws IOException {
>     List<StoreFileScanner> scanners = new ArrayList<StoreFileScanner>(files.size());
>     List<StoreFile> sorted_files = new ArrayList<>(files);
>     Collections.sort(sorted_files, StoreFile.Comparators.SEQ_ID);
>     for (int i = 0; i < sorted_files.size(); i++) {
>       StoreFile.Reader r = sorted_files.get(i).createReader(canUseDrop);
>       r.setReplicaStoreFile(isPrimaryReplica);
>       StoreFileScanner scanner = r.getStoreFileScanner(cacheBlocks, usePread, isCompaction, readPt,
>           i, matcher != null ? !matcher.hasNullColumnInQuery() : false);
>       scanners.add(scanner);
>     }
>     return scanners;
>   }
> {code}
> The missed decrement of ref count will obstruct the cleanup of compacted 
> files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19502) Make sure we have closed all StoreFileScanner if we fail to open any StoreFileScanners

2017-12-12 Thread Chia-Ping Tsai (JIRA)
Chia-Ping Tsai created HBASE-19502:
--

 Summary: Make sure we have closed all StoreFileScanner if we fail 
to open any StoreFileScanners
 Key: HBASE-19502
 URL: https://issues.apache.org/jira/browse/HBASE-19502
 Project: HBase
  Issue Type: Sub-task
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


{code:title=StoreFileScanner.java}
  public static List<StoreFileScanner> getScannersForStoreFiles(Collection<StoreFile> files,
      boolean cacheBlocks, boolean usePread, boolean isCompaction, boolean canUseDrop,
      ScanQueryMatcher matcher, long readPt, boolean isPrimaryReplica) throws IOException {
    List<StoreFileScanner> scanners = new ArrayList<StoreFileScanner>(files.size());
    List<StoreFile> sorted_files = new ArrayList<>(files);
    Collections.sort(sorted_files, StoreFile.Comparators.SEQ_ID);
    for (int i = 0; i < sorted_files.size(); i++) {
      StoreFile.Reader r = sorted_files.get(i).createReader(canUseDrop);
      r.setReplicaStoreFile(isPrimaryReplica);
      StoreFileScanner scanner = r.getStoreFileScanner(cacheBlocks, usePread, isCompaction, readPt,
          i, matcher != null ? !matcher.hasNullColumnInQuery() : false);
      scanners.add(scanner);
    }
    return scanners;
  }
{code}
The missed decrement of ref count will obstruct the cleanup of compacted files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19489) Check against only the latest maintenance release in pre-commit hadoopcheck.

2017-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288784#comment-16288784
 ] 

Hadoop QA commented on HBASE-19489:
---

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HBASE-Build/10407/console in case of 
problems.


> Check against only the latest maintenance release in pre-commit hadoopcheck.
> 
>
> Key: HBASE-19489
> URL: https://issues.apache.org/jira/browse/HBASE-19489
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: HBASE-19489.master.001.patch, 
> HBASE-19489.master.002.patch, HBASE-19489.master.003.patch, 
> HBASE-19489.master.004.patch, HBASE-19489.master.005.patch, 
> HBASE-19489.master.006.patch, HBASE-19489.master.007.patch
>
>
> (copied from dev thread)
> {color:green}
> | +1  | hadoopcheck | 52m 1s |Patch does not cause any errors with 
> Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. 
> |
> {color}
> Almost 1 hr to check against 10 versions. And it's only going to increase as 
> more 2.6.x, 2.7.x and 3.0.x releases come out.
> Suggestion here is simple, let's check against only the latest maintenance 
> release for each minor version i.e. 2.6.5, 2.7.4 and 3.0.0-alpha4.
> Advantage: Save ~40 min on pre-commit time.
> Justification:
> - We only do compile checks. Maintenance releases are not supposed to be 
> doing API breaking changes. So checking against maintenance release for each 
> minor version should be enough.
> - We rarely see any hadoop check -1, and most recent ones have been due to 
> 3.0. These will still be caught.
> - Nightly can still check against all hadoop versions (since nightlies are 
> supposed to do holistic testing)
> - Analyzing 201 precommits from 10100 (11/29) - 10300 (12/8) (10 days):
>   138 had +1 hadoopcheck
>15 had -1 hadoopcheck
>   (others probably failed even before that - merge issue, etc)
> Spot checking some 
> failures:(10241,10246,10225,10269,10151,10156,10184,10250,10298,10227,10294,10223,10251,10119,10230)
> 10241: All 2.6.x failed. Others didn't run
> 10246: All 10 versions failed.
> 10184: All 2.6.x and 2.7.x failed. Others didn't run
> 10223: All 10 versions failed 
> 10230: All 2.6.x failed. Others didn't run
>   
> Common pattern being, all maintenance versions fail together.
> (idk, why sometimes 2.7.* are not reported if 2.6.* fail, but that's 
> irrelevant to this discussion).
> What do you say - only check latest maintenance releases in precommit (and 
> let nightlies do holistic testing against all versions)?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19489) Check against only the latest maintenance release in pre-commit hadoopcheck.

2017-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288782#comment-16288782
 ] 

Hadoop QA commented on HBASE-19489:
---

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HBASE-Build/10406/console in case of 
problems.


> Check against only the latest maintenance release in pre-commit hadoopcheck.
> 
>
> Key: HBASE-19489
> URL: https://issues.apache.org/jira/browse/HBASE-19489
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: HBASE-19489.master.001.patch, 
> HBASE-19489.master.002.patch, HBASE-19489.master.003.patch, 
> HBASE-19489.master.004.patch, HBASE-19489.master.005.patch, 
> HBASE-19489.master.006.patch, HBASE-19489.master.007.patch
>
>
> (copied from dev thread)
> {color:green}
> | +1  | hadoopcheck | 52m 1s |Patch does not cause any errors with 
> Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. 
> |
> {color}
> Almost 1 hr to check against 10 versions. And it's only going to increase as 
> more 2.6.x, 2.7.x and 3.0.x releases come out.
> Suggestion here is simple, let's check against only the latest maintenance 
> release for each minor version i.e. 2.6.5, 2.7.4 and 3.0.0-alpha4.
> Advantage: Save ~40 min on pre-commit time.
> Justification:
> - We only do compile checks. Maintenance releases are not supposed to be 
> doing API breaking changes. So checking against maintenance release for each 
> minor version should be enough.
> - We rarely see any hadoop check -1, and most recent ones have been due to 
> 3.0. These will still be caught.
> - Nightly can still check against all hadoop versions (since nightlies are 
> supposed to do holistic testing)
> - Analyzing 201 precommits from 10100 (11/29) - 10300 (12/8) (10 days):
>   138 had +1 hadoopcheck
>15 had -1 hadoopcheck
>   (others probably failed even before that - merge issue, etc)
> Spot checking some 
> failures:(10241,10246,10225,10269,10151,10156,10184,10250,10298,10227,10294,10223,10251,10119,10230)
> 10241: All 2.6.x failed. Others didn't run
> 10246: All 10 versions failed.
> 10184: All 2.6.x and 2.7.x failed. Others didn't run
> 10223: All 10 versions failed 
> 10230: All 2.6.x failed. Others didn't run
>   
> Common pattern being, all maintenance versions fail together.
> (idk, why sometimes 2.7.* are not reported if 2.6.* fail, but that's 
> irrelevant to this discussion).
> What do you say - only check latest maintenance releases in precommit (and 
> let nightlies do holistic testing against all versions)?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19489) Check against only the latest maintenance release in pre-commit hadoopcheck.

2017-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288777#comment-16288777
 ] 

Hadoop QA commented on HBASE-19489:
---

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HBASE-Build/10405/console in case of 
problems.


> Check against only the latest maintenance release in pre-commit hadoopcheck.
> 
>
> Key: HBASE-19489
> URL: https://issues.apache.org/jira/browse/HBASE-19489
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: HBASE-19489.master.001.patch, 
> HBASE-19489.master.002.patch, HBASE-19489.master.003.patch, 
> HBASE-19489.master.004.patch, HBASE-19489.master.005.patch, 
> HBASE-19489.master.006.patch, HBASE-19489.master.007.patch
>
>
> (copied from dev thread)
> {color:green}
> | +1  | hadoopcheck | 52m 1s |Patch does not cause any errors with 
> Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. 
> |
> {color}
> Almost 1 hr to check against 10 versions. And it's only going to increase as 
> more 2.6.x, 2.7.x and 3.0.x releases come out.
> Suggestion here is simple, let's check against only the latest maintenance 
> release for each minor version i.e. 2.6.5, 2.7.4 and 3.0.0-alpha4.
> Advantage: Save ~40 min on pre-commit time.
> Justification:
> - We only do compile checks. Maintenance releases are not supposed to be 
> doing API breaking changes. So checking against maintenance release for each 
> minor version should be enough.
> - We rarely see any hadoop check -1, and most recent ones have been due to 
> 3.0. These will still be caught.
> - Nightly can still check against all hadoop versions (since nightlies are 
> supposed to do holistic testing)
> - Analyzing 201 precommits from 10100 (11/29) - 10300 (12/8) (10 days):
>   138 had +1 hadoopcheck
>15 had -1 hadoopcheck
>   (others probably failed even before that - merge issue, etc)
> Spot checking some 
> failures:(10241,10246,10225,10269,10151,10156,10184,10250,10298,10227,10294,10223,10251,10119,10230)
> 10241: All 2.6.x failed. Others didn't run
> 10246: All 10 versions failed.
> 10184: All 2.6.x and 2.7.x failed. Others didn't run
> 10223: All 10 versions failed 
> 10230: All 2.6.x failed. Others didn't run
>   
> Common pattern being, all maintenance versions fail together.
> (idk, why sometimes 2.7.* are not reported if 2.6.* fail, but that's 
> irrelevant to this discussion).
> What do you say - only check latest maintenance releases in precommit (and 
> let nightlies do holistic testing against all versions)?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19489) Check against only the latest maintenance release in pre-commit hadoopcheck.

2017-12-12 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-19489:
-
Attachment: HBASE-19489.master.007.patch

> Check against only the latest maintenance release in pre-commit hadoopcheck.
> 
>
> Key: HBASE-19489
> URL: https://issues.apache.org/jira/browse/HBASE-19489
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: HBASE-19489.master.001.patch, 
> HBASE-19489.master.002.patch, HBASE-19489.master.003.patch, 
> HBASE-19489.master.004.patch, HBASE-19489.master.005.patch, 
> HBASE-19489.master.006.patch, HBASE-19489.master.007.patch
>
>
> (copied from dev thread)
> {color:green}
> | +1  | hadoopcheck | 52m 1s |Patch does not cause any errors with 
> Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. 
> |
> {color}
> Almost 1 hr to check against 10 versions. And it's only going to increase as 
> more 2.6.x, 2.7.x and 3.0.x releases come out.
> Suggestion here is simple, let's check against only the latest maintenance 
> release for each minor version i.e. 2.6.5, 2.7.4 and 3.0.0-alpha4.
> Advantage: Save ~40 min on pre-commit time.
> Justification:
> - We only do compile checks. Maintenance releases are not supposed to be 
> doing API breaking changes. So checking against maintenance release for each 
> minor version should be enough.
> - We rarely see any hadoop check -1, and most recent ones have been due to 
> 3.0. These will still be caught.
> - Nightly can still check against all hadoop versions (since nightlies are 
> supposed to do holistic testing)
> - Analyzing 201 precommits from 10100 (11/29) - 10300 (12/8) (10 days):
>   138 had +1 hadoopcheck
>    15 had -1 hadoopcheck
>   (others probably failed even before that - merge issue, etc)
> Spot checking some 
> failures:(10241,10246,10225,10269,10151,10156,10184,10250,10298,10227,10294,10223,10251,10119,10230)
> 10241: All 2.6.x failed. Others didn't run
> 10246: All 10 versions failed.
> 10184: All 2.6.x and 2.7.x failed. Others didn't run
> 10223: All 10 versions failed 
> 10230: All 2.6.x failed. Others didn't run
>   
> Common pattern being, all maintenance versions fail together.
> (idk, why sometimes 2.7.* are not reported if 2.6.* fail, but that's 
> irrelevant to this discussion).
> What do you say - only check latest maintenance releases in precommit (and 
> let nightlies do holistic testing against all versions)?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18945) Make a IA.LimitedPrivate interface for CellComparator

2017-12-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288763#comment-16288763
 ] 

stack commented on HBASE-18945:
---

Thanks Anoop.

> Make a IA.LimitedPrivate interface for CellComparator
> -
>
> Key: HBASE-18945
> URL: https://issues.apache.org/jira/browse/HBASE-18945
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0-alpha-4
>
> Attachments: 18945-addendum-branch-2.txt, HBASE-18495.patch, 
> HBASE-18945_2.patch, HBASE-18945_3.patch, HBASE-18945_4.patch, 
> HBASE-18945_5.patch, HBASE-18945_6.patch, HBASE-18945_6.patch, 
> HBASE-18945_7.patch
>
>
> Based on discussions over in HBASE-18826 and HBASE-18183 it is better we 
> expose CellComparator as a public interface so that it could be used in 
> Region/Store interfaces to be exposed to CPs.
> Currently the Comparator is exposed in Region, Store and StoreFile. There is 
> another discussion whether to expose it at all layers or only at Region. 
> However since we are exposing this to CPs, CellComparator being @Private is 
> not the ideal way to do it. We have to change it to LimitedPrivate. But 
> CellComparator has a lot of additional methods which are internal (like where a 
> Cell is compared with an incoming byte[] used in index comparisons etc).
> One way to expose is that as being done now in HBASE-18826 - by exposing the 
> return type as Comparator. But this is not powerful. It only allows to 
> compare cells. So we try to expose an IA.LimitedPrivate interface that is 
> more powerful and allows comparing individual cell components also. 
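
For illustration, a minimal sketch of the shape such a LimitedPrivate comparator could take. Method names are indicative of the idea (whole-cell plus per-component comparison), not necessarily the exact interface that was committed, and the real interface would carry an @InterfaceAudience.LimitedPrivate annotation.

{code}
// Sketch only: a comparator that can compare whole cells as well as
// individual cell components, as discussed in the description above.
import java.util.Comparator;
import org.apache.hadoop.hbase.Cell;

public interface CellComparatorSketch extends Comparator<Cell> {

  /** Full comparison: row, then family, qualifier, timestamp, type. */
  @Override
  int compare(Cell left, Cell right);

  /** Compare only the row components of the two cells. */
  int compareRows(Cell left, Cell right);

  /** Compare only the column family components. */
  int compareFamilies(Cell left, Cell right);

  /** Compare only the qualifier components. */
  int compareQualifiers(Cell left, Cell right);

  /** Compare only the timestamps (newer sorts first in HBase). */
  int compareTimestamps(Cell left, Cell right);
}
{code}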



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18945) Make a IA.LimitedPrivate interface for CellComparator

2017-12-12 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18945:
--
Attachment: (was: HBASE-18945.master.001.patch)

> Make a IA.LimitedPrivate interface for CellComparator
> -
>
> Key: HBASE-18945
> URL: https://issues.apache.org/jira/browse/HBASE-18945
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0-alpha-4
>
> Attachments: 18945-addendum-branch-2.txt, HBASE-18495.patch, 
> HBASE-18945_2.patch, HBASE-18945_3.patch, HBASE-18945_4.patch, 
> HBASE-18945_5.patch, HBASE-18945_6.patch, HBASE-18945_6.patch, 
> HBASE-18945_7.patch
>
>
> Based on discussions over in HBASE-18826 and HBASE-18183 it is better we 
> expose CellComparator as a public interface so that it could be used in 
> Region/Store interfaces to be exposed to CPs.
> Currently the Comparator is exposed in Region, Store and StoreFile. There is 
> another discussion whether to expose it at all layers or only at Region. 
> However since we are exposing this to CPs, CellComparator being @Private is 
> not the ideal way to do it. We have to change it to LimitedPrivate. But 
> CellComparator has a lot of additional methods which are internal (like where a 
> Cell is compared with an incoming byte[] used in index comparisons etc).
> One way to expose is that as being done now in HBASE-18826 - by exposing the 
> return type as Comparator. But this is not powerful. It only allows to 
> compare cells. So we try to expose an IA.LimitedPrivate interface that is 
> more powerful and allows comparing individual cell components also. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18946) Stochastic load balancer assigns replica regions to the same RS

2017-12-12 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18946:
--
Attachment: HBASE-18946.master.005.patch

> Stochastic load balancer assigns replica regions to the same RS
> ---
>
> Key: HBASE-18946
> URL: https://issues.apache.org/jira/browse/HBASE-18946
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-3
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18946.master.001.patch, 
> HBASE-18946.master.002.patch, HBASE-18946.master.003.patch, 
> HBASE-18946.master.004.patch, HBASE-18946.master.005.patch, 
> HBASE-18946.patch, HBASE-18946.patch, HBASE-18946_2.patch, 
> HBASE-18946_2.patch, HBASE-18946_simple_7.patch, HBASE-18946_simple_8.patch, 
> TestRegionReplicasWithRestartScenarios.java
>
>
> Trying out region replica and its assignment I can see that sometimes the 
> default LB Stochastic load balancer assigns replica regions to the same RS. 
> This happens when we have 3 RS checked in and we have a table with 3 
> replicas. When a RS goes down then the replicas being assigned to same RS is 
> acceptable but the case when we have enough RS to assign this behaviour is 
> undesirable and does not solve the purpose of replicas. 
> [~huaxiang] and [~enis]. 
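
As a rough illustration of what the balancer should guarantee, a test-style check that flags a server hosting more than one replica of the same region. This is a sketch under the assumption that a Map of current assignments is available; the helper class name is hypothetical, though RegionReplicaUtil.getRegionInfoForDefaultReplica is the usual way to map a replica back to its primary.

{code}
// Sketch only: verify no RegionServer holds two replicas of the same region.
// Replicas of one region map to the same primary RegionInfo, so they share a key.
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.client.RegionReplicaUtil;

public final class ReplicaPlacementCheck {
  public static boolean hasColocatedReplicas(Map<ServerName, List<RegionInfo>> assignments) {
    for (Map.Entry<ServerName, List<RegionInfo>> entry : assignments.entrySet()) {
      Set<String> primariesOnThisServer = new HashSet<>();
      for (RegionInfo region : entry.getValue()) {
        RegionInfo primary = RegionReplicaUtil.getRegionInfoForDefaultReplica(region);
        // Two replicas of the same region on one server -> bad placement.
        if (!primariesOnThisServer.add(primary.getEncodedName())) {
          return true;
        }
      }
    }
    return false;
  }
}
{code}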



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HBASE-16890) Analyze the performance of AsyncWAL and fix the same

2017-12-12 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-16890.
---
  Resolution: Fixed
Hadoop Flags: Reviewed

Resolving. I think you all -- [~Apache9] in particular -- fixed the perf diff.

> Analyze the performance of AsyncWAL and fix the same
> 
>
> Key: HBASE-16890
> URL: https://issues.apache.org/jira/browse/HBASE-16890
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
> Attachments: AsyncWAL_disruptor.patch, AsyncWAL_disruptor_1 
> (2).patch, AsyncWAL_disruptor_3.patch, AsyncWAL_disruptor_3.patch, 
> AsyncWAL_disruptor_4.patch, AsyncWAL_disruptor_6.patch, 
> HBASE-16890-rc-v2.patch, HBASE-16890-rc-v3.patch, 
> HBASE-16890-remove-contention-v1.patch, HBASE-16890-remove-contention.patch, 
> Screen Shot 2016-10-25 at 7.34.47 PM.png, Screen Shot 2016-10-25 at 7.39.07 
> PM.png, Screen Shot 2016-10-25 at 7.39.48 PM.png, Screen Shot 2016-11-04 at 
> 5.21.27 PM.png, Screen Shot 2016-11-04 at 5.30.18 PM.png, async.svg, 
> classic.svg, contention.png, contention_defaultWAL.png, 
> ycsb_FSHlog.vs.Async.png
>
>
> Tests reveal that AsyncWAL under load in a single-node cluster performs slower 
> than the Default WAL. This task is to analyze and see if we could fix it.
> See some discussions in the tail of JIRA HBASE-15536.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19468) FNFE during scans and flushes

2017-12-12 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288755#comment-16288755
 ] 

Anoop Sam John commented on HBASE-19468:


I think the assumption here is that eventually next() or so will be called and 
at that time the scanner will be opened. So this is an eager open. But I do have a 
concern whether this is correct. Maybe it was a get op and no other next() 
calls will happen? Even on Scan. It would be best to go by the old way of opening a 
scanner on the new files only when really needed. The eager open is just to get the ref 
count incremented. The patch is simple but not really in line with that approach. 
That is why I was wondering whether we can have a similar way of updating readers 
after compaction (like the flush) and clearing these new files from the list.. Oh ya, we 
should have ways of notifying...

> FNFE during scans and flushes
> -
>
> Key: HBASE-19468
> URL: https://issues.apache.org/jira/browse/HBASE-19468
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Affects Versions: 1.3.1
>Reporter: Thiruvel Thirumoolan
>Priority: Critical
> Fix For: 2.0.0, 1.4.1, 1.5.0, 1.3.3
>
> Attachments: HBASE-19468-poc.patch, HBASE-19468_1.4.patch
>
>
> We see FNFE exceptions on our 1.3 clusters when scans and flushes happen at 
> the same time. This causes the regionserver to throw an UnknownScannerException 
> and client retries.
> This happens during the following sequence:
> 1. Scanner open, client fetched some rows from regionserver and working on it
> 2. Flush happens and storeScanner is updated with flushed files 
> (StoreScanner.updateReaders())
> 3. Compaction happens on the region while scanner is still open
> 4. compaction discharger runs and cleans up the newly flushed file as we 
> don't have new scanners on it yet.
> 5. Client issues scan.next and during StoreScanner.resetScannerStack(), we 
> get an FNFE. The RegionServer throws an UnknownScannerException. The client retries in 1.3. 
> With branch-1.4, the scan fails with a DoNotRetryIOException.
> [~ram_krish], My proposal is to increment the reader count during 
> updateReaders() and decrement it during resetScannerStack(), so discharger 
> doesn't clean it up. Scan lease expiries also have to be taken care of. Am I 
> missing anything? Is there a better approach?
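
A minimal sketch of the ref-count idea proposed above. Field and method names are hypothetical; the real StoreScanner/StoreFile code paths are more involved, but the intent is the same: pin flushed files when the scanner learns about them and unpin them once real scanners have been opened on them.

{code}
// Sketch only: pin files in updateReaders(), unpin in resetScannerStack(),
// so the compacted-files discharger cannot archive them in between.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

class RefCountSketch {
  static class PinnedFile {
    final AtomicInteger refCount = new AtomicInteger();
    void pin() { refCount.incrementAndGet(); }
    void unpin() { refCount.decrementAndGet(); }
    boolean canBeArchived() { return refCount.get() == 0; }
  }

  private final List<PinnedFile> flushedButNotYetScanned = new ArrayList<>();

  // Called from StoreScanner.updateReaders() when a flush adds new files.
  void updateReaders(List<PinnedFile> newFiles) {
    for (PinnedFile f : newFiles) {
      f.pin();                         // discharger must skip pinned files
      flushedButNotYetScanned.add(f);
    }
  }

  // Called from StoreScanner.resetScannerStack() on the next next()/seek.
  void resetScannerStack() {
    for (PinnedFile f : flushedButNotYetScanned) {
      // ... open a real scanner on f here ...
      f.unpin();                       // safe: the scanner now holds its own reference
    }
    flushedButNotYetScanned.clear();
  }
}
{code}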



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18946) Stochastic load balancer assigns replica regions to the same RS

2017-12-12 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18946:
--
Attachment: HBASE-18946.master.004.patch

> Stochastic load balancer assigns replica regions to the same RS
> ---
>
> Key: HBASE-18946
> URL: https://issues.apache.org/jira/browse/HBASE-18946
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-3
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18946.master.001.patch, 
> HBASE-18946.master.002.patch, HBASE-18946.master.003.patch, 
> HBASE-18946.master.004.patch, HBASE-18946.patch, HBASE-18946.patch, 
> HBASE-18946_2.patch, HBASE-18946_2.patch, HBASE-18946_simple_7.patch, 
> HBASE-18946_simple_8.patch, TestRegionReplicasWithRestartScenarios.java
>
>
> Trying out region replica and its assignment I can see that sometimes the 
> default LB Stochastic load balancer assigns replica regions to the same RS. 
> This happens when we have 3 RS checked in and we have a table with 3 
> replicas. When a RS goes down then the replicas being assigned to same RS is 
> acceptable but the case when we have enough RS to assign this behaviour is 
> undesirable and does not solve the purpose of replicas. 
> [~huaxiang] and [~enis]. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18946) Stochastic load balancer assigns replica regions to the same RS

2017-12-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288754#comment-16288754
 ] 

stack commented on HBASE-18946:
---

.004 fixes tests and adds in [~ram_krish] 's test.

> Stochastic load balancer assigns replica regions to the same RS
> ---
>
> Key: HBASE-18946
> URL: https://issues.apache.org/jira/browse/HBASE-18946
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-3
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18946.master.001.patch, 
> HBASE-18946.master.002.patch, HBASE-18946.master.003.patch, 
> HBASE-18946.master.004.patch, HBASE-18946.patch, HBASE-18946.patch, 
> HBASE-18946_2.patch, HBASE-18946_2.patch, HBASE-18946_simple_7.patch, 
> HBASE-18946_simple_8.patch, TestRegionReplicasWithRestartScenarios.java
>
>
> Trying out region replica and its assignment I can see that sometimes the 
> default LB Stochastic load balancer assigns replica regions to the same RS. 
> This happens when we have 3 RS checked in and we have a table with 3 
> replicas. When a RS goes down then the replicas being assigned to same RS is 
> acceptable but the case when we have enough RS to assign this behaviour is 
> undesirable and does not solve the purpose of replicas. 
> [~huaxiang] and [~enis]. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18946) Stochastic load balancer assigns replica regions to the same RS

2017-12-12 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18946:
--
Attachment: HBASE-18946.master.003.patch

> Stochastic load balancer assigns replica regions to the same RS
> ---
>
> Key: HBASE-18946
> URL: https://issues.apache.org/jira/browse/HBASE-18946
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-3
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18946.master.001.patch, 
> HBASE-18946.master.002.patch, HBASE-18946.master.003.patch, 
> HBASE-18946.patch, HBASE-18946.patch, HBASE-18946_2.patch, 
> HBASE-18946_2.patch, HBASE-18946_simple_7.patch, HBASE-18946_simple_8.patch, 
> TestRegionReplicasWithRestartScenarios.java
>
>
> Trying out region replica and its assignment I can see that sometimes the 
> default LB Stochastic load balancer assigns replica regions to the same RS. 
> This happens when we have 3 RS checked in and we have a table with 3 
> replicas. When a RS goes down then the replicas being assigned to same RS is 
> acceptable but the case when we have enough RS to assign this behaviour is 
> undesirable and does not solve the purpose of replicas. 
> [~huaxiang] and [~enis]. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19468) FNFE during scans and flushes

2017-12-12 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288749#comment-16288749
 ] 

Chia-Ping Tsai commented on HBASE-19468:


Opening the reader in {{updateReaders}} will make the flusher open all the 
storefiles for all scanners. Does it impact performance? 

> FNFE during scans and flushes
> -
>
> Key: HBASE-19468
> URL: https://issues.apache.org/jira/browse/HBASE-19468
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Affects Versions: 1.3.1
>Reporter: Thiruvel Thirumoolan
>Priority: Critical
> Fix For: 2.0.0, 1.4.1, 1.5.0, 1.3.3
>
> Attachments: HBASE-19468-poc.patch, HBASE-19468_1.4.patch
>
>
> We see FNFE exceptions on our 1.3 clusters when scans and flushes happen at 
> the same time. This causes the regionserver to throw an UnknownScannerException 
> and client retries.
> This happens during the following sequence:
> 1. Scanner open, client fetched some rows from regionserver and working on it
> 2. Flush happens and storeScanner is updated with flushed files 
> (StoreScanner.updateReaders())
> 3. Compaction happens on the region while scanner is still open
> 4. compaction discharger runs and cleans up the newly flushed file as we 
> don't have new scanners on it yet.
> 5. Client issues scan.next and during StoreScanner.resetScannerStack(), we 
> get an FNFE. The RegionServer throws an UnknownScannerException. The client retries in 1.3. 
> With branch-1.4, the scan fails with a DoNotRetryIOException.
> [~ram_krish], My proposal is to increment the reader count during 
> updateReaders() and decrement it during resetScannerStack(), so discharger 
> doesn't clean it up. Scan lease expiries also have to be taken care of. Am I 
> missing anything? Is there a better approach?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19500) Make RSGroupInfo immutable

2017-12-12 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288748#comment-16288748
 ] 

Duo Zhang commented on HBASE-19500:
---

Need some time to look at the parent issue first... If I can not reply in time 
please go ahead [~appy].

Thanks.

> Make RSGroupInfo immutable
> --
>
> Key: HBASE-19500
> URL: https://issues.apache.org/jira/browse/HBASE-19500
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>
> HBASE-19483 added CP hooks to expose RSGroupInfo.
> First, we should at least change [hbase-client] RSGroupInfo to immutable + 
> builder pattern like we have done for so many other things.
> What say [~Apache9]
> Then, few questions need figuring out:
> - Should hooks be allowed to change RSGroupInfo.
> Probably not? Then making it immutable would be necessary and sufficient
> - Can we remove {{if(((MasterEnvironment)getEnvironment()).supportGroupCPs) 
> }} in so many places since CP in 2.0 are already broken left and right (and 
> we'll have to solve legacy issue more holistically) What say [~anoop.hbase]?
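
For reference, a minimal sketch of the immutable-plus-builder shape being suggested. The class and method names here are illustrative only; the real RSGroupInfo tracks servers as Address and tables as TableName rather than plain strings.

{code}
// Sketch only: an immutable RSGroupInfo-like value with a builder, in the
// style used for other hbase-client types.
import java.util.Collections;
import java.util.SortedSet;
import java.util.TreeSet;

public final class ImmutableRSGroupInfo {
  private final String name;
  private final SortedSet<String> servers;   // host:port strings, simplified
  private final SortedSet<String> tables;

  private ImmutableRSGroupInfo(String name, SortedSet<String> servers, SortedSet<String> tables) {
    this.name = name;
    this.servers = Collections.unmodifiableSortedSet(new TreeSet<>(servers));
    this.tables = Collections.unmodifiableSortedSet(new TreeSet<>(tables));
  }

  public String getName() { return name; }
  public SortedSet<String> getServers() { return servers; }
  public SortedSet<String> getTables() { return tables; }

  public static Builder newBuilder(String name) { return new Builder(name); }

  public static final class Builder {
    private final String name;
    private final SortedSet<String> servers = new TreeSet<>();
    private final SortedSet<String> tables = new TreeSet<>();

    private Builder(String name) { this.name = name; }
    public Builder addServer(String hostPort) { servers.add(hostPort); return this; }
    public Builder addTable(String table) { tables.add(table); return this; }
    public ImmutableRSGroupInfo build() { return new ImmutableRSGroupInfo(name, servers, tables); }
  }
}
{code}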



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18945) Make a IA.LimitedPrivate interface for CellComparator

2017-12-12 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288745#comment-16288745
 ] 

Anoop Sam John commented on HBASE-18945:


Is the patch in the wrong place, Stack?

> Make a IA.LimitedPrivate interface for CellComparator
> -
>
> Key: HBASE-18945
> URL: https://issues.apache.org/jira/browse/HBASE-18945
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0-alpha-4
>
> Attachments: 18945-addendum-branch-2.txt, HBASE-18495.patch, 
> HBASE-18945.master.001.patch, HBASE-18945_2.patch, HBASE-18945_3.patch, 
> HBASE-18945_4.patch, HBASE-18945_5.patch, HBASE-18945_6.patch, 
> HBASE-18945_6.patch, HBASE-18945_7.patch
>
>
> Based on discussions over in HBASE-18826 and HBASE-18183 it is better we 
> expose CellComparator as a public interface so that it could be used in 
> Region/Store interfaces to be exposed to CPs.
> Currently the Comparator is exposed in Region, Store and StoreFile. There is 
> another discussion whether to expose it at all layers or only at Region. 
> However since we are exposing this to CPs, CellComparator being @Private is 
> not the ideal way to do it. We have to change it to LimitedPrivate. But 
> CellComparator has a lot of additional methods which are internal (like where a 
> Cell is compared with an incoming byte[] used in index comparisons etc).
> One way to expose is that as being done now in HBASE-18826 - by exposing the 
> return type as Comparator. But this is not powerful. It only allows to 
> compare cells. So we try to expose an IA.LimitedPrivate interface that is 
> more powerful and allows comparing individual cell components also. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-15536) Make AsyncFSWAL as our default WAL

2017-12-12 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288746#comment-16288746
 ] 

Duo Zhang commented on HBASE-15536:
---

Let me check the failed UTs.

> Make AsyncFSWAL as our default WAL
> --
>
> Key: HBASE-15536
> URL: https://issues.apache.org/jira/browse/HBASE-15536
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: 15536.addendum2.enable.asyncfswal.by.default.2.txt, 
> 15536.addendum2.enable.asyncfswal.by.default.txt, 
> 15536.addendum2.enable.asyncfswal.by.default.txt, 15536.minor.addendum.patch, 
> HBASE-15536-v1.patch, HBASE-15536-v2.patch, HBASE-15536-v3.patch, 
> HBASE-15536-v4.patch, HBASE-15536-v5.patch, HBASE-15536.patch, 
> latesttrunk_asyncWAL_50threads_10cols.jfr, 
> latesttrunk_defaultWAL_50threads_10cols.jfr
>
>
> As it should be predicated on passing basic cluster ITBLL



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18945) Make a IA.LimitedPrivate interface for CellComparator

2017-12-12 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18945:
--
Attachment: HBASE-18945.master.001.patch

> Make a IA.LimitedPrivate interface for CellComparator
> -
>
> Key: HBASE-18945
> URL: https://issues.apache.org/jira/browse/HBASE-18945
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0-alpha-4
>
> Attachments: 18945-addendum-branch-2.txt, HBASE-18495.patch, 
> HBASE-18945.master.001.patch, HBASE-18945_2.patch, HBASE-18945_3.patch, 
> HBASE-18945_4.patch, HBASE-18945_5.patch, HBASE-18945_6.patch, 
> HBASE-18945_6.patch, HBASE-18945_7.patch
>
>
> Based on discussions over in HBASE-18826 and HBASE-18183 it is better we 
> expose CellComparator as a public interface so that it could be used in 
> Region/Store interfaces to be exposed to CPs.
> Currently the Comparator is exposed in Region, Store and StoreFile. There is 
> another discussion whether to expose it at all layers or only at Region. 
> However since we are exposing this to CPs, CellComparator being @Private is 
> not the ideal way to do it. We have to change it to LimitedPrivate. But 
> CellComparator has a lot of additional methods which are internal (like where a 
> Cell is compared with an incoming byte[] used in index comparisons etc).
> One way to expose is that as being done now in HBASE-18826 - by exposing the 
> return type as Comparator. But this is not powerful. It only allows to 
> compare cells. So we try to expose an IA.LimitedPrivate interface that is 
> more powerful and allows comparing individual cell components also. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19489) Check against only the latest maintenance release in pre-commit hadoopcheck.

2017-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288726#comment-16288726
 ] 

Hadoop QA commented on HBASE-19489:
---

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HBASE-Build/10401/console in case of 
problems.


> Check against only the latest maintenance release in pre-commit hadoopcheck.
> 
>
> Key: HBASE-19489
> URL: https://issues.apache.org/jira/browse/HBASE-19489
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: HBASE-19489.master.001.patch, 
> HBASE-19489.master.002.patch, HBASE-19489.master.003.patch, 
> HBASE-19489.master.004.patch, HBASE-19489.master.005.patch, 
> HBASE-19489.master.006.patch
>
>
> (copied from dev thread)
> {color:green}
> | +1  | hadoopcheck | 52m 1s |Patch does not cause any errors with 
> Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. 
> |
> {color}
> Almost 1 hr to check against 10 versions. And it's only going to increase as 
> more 2.6.x, 2.7.x and 3.0.x releases come out.
> Suggestion here is simple, let's check against only the latest maintenance 
> release for each minor version i.e. 2.6.5, 2.7.4 and 3.0.0-alpha4.
> Advantage: Save ~40 min on pre-commit time.
> Justification:
> - We only do compile checks. Maintenance releases are not supposed to be 
> doing API breaking changes. So checking against maintenance release for each 
> minor version should be enough.
> - We rarely see any hadoop check -1, and most recent ones have been due to 
> 3.0. These will still be caught.
> - Nightly can still check against all hadoop versions (since nightlies are 
> supposed to do holistic testing)
> - Analyzing 201 precommits from 10100 (11/29) - 10300 (12/8) (10 days):
>   138 had +1 hadoopcheck
>    15 had -1 hadoopcheck
>   (others probably failed even before that - merge issue, etc)
> Spot checking some 
> failures:(10241,10246,10225,10269,10151,10156,10184,10250,10298,10227,10294,10223,10251,10119,10230)
> 10241: All 2.6.x failed. Others didn't run
> 10246: All 10 versions failed.
> 10184: All 2.6.x and 2.7.x failed. Others didn't run
> 10223: All 10 versions failed 
> 10230: All 2.6.x failed. Others didn't run
>   
> Common pattern being, all maintenance versions fail together.
> (idk, why sometimes 2.7.* are not reported if 2.6.* fail, but that's 
> irrelevant to this discussion).
> What do you say - only check latest maintenance releases in precommit (and 
> let nightlies do holistic testing against all versions)?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19501) [AMv2] Retain assignment across restarts

2017-12-12 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19501:
--
Attachment: HBASE-19501.patch

Parking patch here. Can't go in until HBASE-18946 is done.

> [AMv2] Retain assignment across restarts
> 
>
> Key: HBASE-19501
> URL: https://issues.apache.org/jira/browse/HBASE-19501
> Project: HBase
>  Issue Type: Sub-task
>  Components: Region Assignment
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19501.patch
>
>
> Working with replicas and the parent test in particular, I learned a few 
> interesting things:
>  # It is hard to test if we retain assignments because our little minicluster 
> gives RegionServers new ports on restart foiling our means of recognizing new 
> instance of a server by checking hostname+port (and ensuring the startcode is 
> larger).
>  # Some of our tests like the parent test depended on retaining assignment 
> across restarts.
>  # As said in parent issue, master used to be last to go down when we did a 
> controlled cluster shutdown. We lost that when we moved to AMv2.
>  # When we do a cluster shutdown, the RegionServers close down the Regions, 
> not the Master as is usual in AMv2 (Master wants to do all assign ops in 
> AMv2). This means that the Master is surprised when it gets notification of 
> CLOSE ops that it did not initiate. Usually on CLOSE, Master updates meta 
> with the CLOSE state. On cluster shutdown we are not doing this.
>  # So, on restart, we read meta and we see all regions still in OPEN state so 
> we think the cluster crashed down so we go and do ServerCrashProcedure. Which 
> hoses our ability to retain assign.
> Some experiments:
>  # I can make the Master stay up so it is last to go down
>  # This makes it so we no longer spew the logs with failed transition 
> messages because Master is not up to receive the CLOSE transitions.
>  # I hacked in means of telling minicluster ports it should use on start; 
> helps fake case of new RS instances
>  # It is hard to tell the difference between a clean shutdown and a crash 
> down. It is dangerous if we get the call wrong. Currently, given that we just 
> let ServerCrashProcedure deal with it -- the safest option -- one experiment 
> is that when it goes to assign the regions that were on the crashed server, 
> rather than round robin, instead we should look and see if new instance of 
> old location and if so, just give it al lthe regions. That'd retain locality. 
> This seems to work. Problem is that SCP is doing assignment. Ideally balancer 
> would do it.
> Let me put up a patch that retains assignment across restart (somehow).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19489) Check against only the latest maintenance release in pre-commit hadoopcheck.

2017-12-12 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-19489:
-
Attachment: HBASE-19489.master.006.patch

> Check against only the latest maintenance release in pre-commit hadoopcheck.
> 
>
> Key: HBASE-19489
> URL: https://issues.apache.org/jira/browse/HBASE-19489
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: HBASE-19489.master.001.patch, 
> HBASE-19489.master.002.patch, HBASE-19489.master.003.patch, 
> HBASE-19489.master.004.patch, HBASE-19489.master.005.patch, 
> HBASE-19489.master.006.patch
>
>
> (copied from dev thread)
> {color:green}
> | +1  | hadoopcheck | 52m 1s |Patch does not cause any errors with 
> Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. 
> |
> {color}
> Almost 1 hr to check against 10 versions. And it's only going to increase as 
> more 2.6.x, 2.7.x and 3.0.x releases come out.
> Suggestion here is simple, let's check against only the latest maintenance 
> release for each minor version i.e. 2.6.5, 2.7.4 and 3.0.0-alpha4.
> Advantage: Save ~40 min on pre-commit time.
> Justification:
> - We only do compile checks. Maintenance releases are not supposed to be 
> doing API breaking changes. So checking against maintenance release for each 
> minor version should be enough.
> - We rarely see any hadoop check -1, and most recent ones have been due to 
> 3.0. These will still be caught.
> - Nightly can still check against all hadoop versions (since nightlies are 
> supposed to do holistic testing)
> - Analyzing 201 precommits from 10100 (11/29) - 10300 (12/8) (10 days):
>   138 had +1 hadoopcheck
>    15 had -1 hadoopcheck
>   (others probably failed even before that - merge issue, etc)
> Spot checking some 
> failures:(10241,10246,10225,10269,10151,10156,10184,10250,10298,10227,10294,10223,10251,10119,10230)
> 10241: All 2.6.x failed. Others didn't run
> 10246: All 10 versions failed.
> 10184: All 2.6.x and 2.7.x failed. Others didn't run
> 10223: All 10 versions failed 
> 10230: All 2.6.x failed. Others didn't run
>   
> Common pattern being, all maintenance versions fail together.
> (idk, why sometimes 2.7.* are not reported if 2.6.* fail, but that's 
> irrelevant to this discussion).
> What do you say - only check latest maintenance releases in precommit (and 
> let nightlies do holistic testing against all versions)?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-19213) Align check and mutate operations in Table and AsyncTable

2017-12-12 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288721#comment-16288721
 ] 

Appy edited comment on HBASE-19213 at 12/13/17 5:01 AM:


[~psomogyi] Committed. Please add a release note (can take stuff from commit 
message) and resolve.
Thanks for trying the designs suggested. :)


was (Author: appy):
[~psomogyi] Committed. Please add a release note (can take stuff from commit 
message) and resolve.

> Align check and mutate operations in Table and AsyncTable
> -
>
> Key: HBASE-19213
> URL: https://issues.apache.org/jira/browse/HBASE-19213
> Project: HBase
>  Issue Type: Sub-task
>  Components: API
>Affects Versions: 2.0.0-alpha-4
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Minor
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19213.branch-2.002.patch, 
> HBASE-19213.master.001.patch, HBASE-19213.master.001.patch, 
> HBASE-19213.master.002.patch
>
>
> Check and mutate methods are way different. Table has checkAndx methods (some 
> of them are deprecated), but AsyncTable has an interface called 
> CheckAndMutateBuilder and these kinds of operations are handled through that.
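
For context, the builder style in question looks roughly like the sketch below, mirroring the AsyncTable CheckAndMutateBuilder on Table. Exact signatures and return types should be checked against the committed patch; treat this as illustrative rather than the final API.

{code}
// Sketch only: builder-style check-and-mutate on Table, replacing the older
// checkAndPut/checkAndDelete overloads. The Put is applied only if the
// current value of (row, family, qualifier) equals 'expected'.
import java.io.IOException;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;

public final class CheckAndMutateExample {
  public static boolean putIfExpected(Table table, byte[] row, byte[] family,
      byte[] qualifier, byte[] expected, byte[] newValue) throws IOException {
    Put put = new Put(row).addColumn(family, qualifier, newValue);
    return table.checkAndMutate(row, family)
        .qualifier(qualifier)
        .ifEquals(expected)
        .thenPut(put);
  }
}
{code}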



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19213) Align check and mutate operations in Table and AsyncTable

2017-12-12 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288721#comment-16288721
 ] 

Appy commented on HBASE-19213:
--

[~psomogyi] Committed. Please add a release note (can take stuff from commit 
message) and resolve.

> Align check and mutate operations in Table and AsyncTable
> -
>
> Key: HBASE-19213
> URL: https://issues.apache.org/jira/browse/HBASE-19213
> Project: HBase
>  Issue Type: Sub-task
>  Components: API
>Affects Versions: 2.0.0-alpha-4
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Minor
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19213.branch-2.002.patch, 
> HBASE-19213.master.001.patch, HBASE-19213.master.001.patch, 
> HBASE-19213.master.002.patch
>
>
> Check and mutate methods are way different. Table has checkAndx methods (some 
> of them are deprecated), but AsyncTable has an interface called 
> CheckAndMutateBuilder and these kinds of operations are handled through that.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19483) Add proper privilege check for rsgroup commands

2017-12-12 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288715#comment-16288715
 ] 

Appy commented on HBASE-19483:
--

These are new ones, and not exposing any private class - RSGroupInfo is in 
hbase-client and public.

> Add proper privilege check for rsgroup commands
> ---
>
> Key: HBASE-19483
> URL: https://issues.apache.org/jira/browse/HBASE-19483
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Guangxu Cheng
> Attachments: HBASE-19483.master.001.patch, 
> HBASE-19483.master.002.patch, HBASE-19483.master.003.patch
>
>
> Currently list_rsgroups command can be executed by any user.
> This is inconsistent with other list commands such as list_peers and 
> list_peer_configs.
> We should add proper privilege check for list_rsgroups command.
> privilege check should be added for get_table_rsgroup / get_server_rsgroup / 
> get_rsgroup commands.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19501) [AMv2] Retain assignment across restarts

2017-12-12 Thread stack (JIRA)
stack created HBASE-19501:
-

 Summary: [AMv2] Retain assignment across restarts
 Key: HBASE-19501
 URL: https://issues.apache.org/jira/browse/HBASE-19501
 Project: HBase
  Issue Type: Sub-task
  Components: Region Assignment
Reporter: stack
Assignee: stack
 Fix For: 2.0.0-beta-1


Working with replicas and the parent test in particular, I learned a few 
interesting things:

 # It is hard to test if we retain assignments because our little minicluster 
gives RegionServers new ports on restart foiling our means of recognizing new 
instance of a server by checking hostname+port (and ensuring the startcode is 
larger).
 # Some of our tests like the parent test depended on retaining assignment 
across restarts.
 # As said in parent issue, master used to be last to go down when we did a 
controlled cluster shutdown. We lost that when we moved to AMv2.
 # When we do a cluster shutdown, the RegionServers close down the Regions, not 
the Master as is usual in AMv2 (Master wants to do all assign ops in AMv2). 
This means that the Master is surprised when it gets notification of CLOSE ops 
that it did not initiate. Usually on CLOSE, Master updates meta with the CLOSE 
state. On cluster shutdown we are not doing this.
 # So, on restart, we read meta and we see all regions still in OPEN state so 
we think the cluster crashed down so we go and do ServerCrashProcedure. Which 
hoses our ability to retain assign.

Some experiments:

 # I can make the Master stay up so it is last to go down
 # This makes it so we no longer spew the logs with failed transition messages 
because Master is not up to receive the CLOSE transitions.
 # I hacked in means of telling minicluster ports it should use on start; helps 
fake case of new RS instances
 # It is hard to tell the difference between a clean shutdown and a crash down. 
It is dangerous if we get the call wrong. Currently, given that we just let 
ServerCrashProcedure deal with it -- the safest option -- one experiment is 
that when it goes to assign the regions that were on the crashed server, rather 
than round robin, instead we should look and see if new instance of old 
location and if so, just give it all the regions. That'd retain locality. This 
seems to work. Problem is that SCP is doing assignment. Ideally balancer would 
do it.

Let me put up a patch that retains assignment across restart (somehow).
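
A rough sketch of the "give the regions back to the new instance of the old location" experiment described above. The helper class is hypothetical; the real logic would live in ServerCrashProcedure or the balancer.

{code}
// Sketch only: when processing a crashed server's regions, prefer a live server
// with the same host:port (i.e. a restarted instance with a newer start code)
// over round-robin assignment, to retain locality.
import java.util.List;
import java.util.Optional;
import org.apache.hadoop.hbase.ServerName;

public final class RetainAssignmentSketch {
  /**
   * Returns the live server that looks like a restart of {@code deadServer}:
   * same hostname and port, strictly larger start code.
   */
  public static Optional<ServerName> findRestartedInstance(ServerName deadServer,
      List<ServerName> liveServers) {
    return liveServers.stream()
        .filter(s -> s.getHostname().equals(deadServer.getHostname())
            && s.getPort() == deadServer.getPort()
            && s.getStartcode() > deadServer.getStartcode())
        .findFirst();
  }
}
{code}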



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-19497) Fix findbugs and error-prone warnings in hbase-common (branch-2)

2017-12-12 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288567#comment-16288567
 ] 

Appy edited comment on HBASE-19497 at 12/13/17 4:49 AM:


Yes, need RB.
For now, the one comment I have is: instead of adding casts everywhere, I 
think we can change most variables to one type or another (depending on use).
Looking at a few FIXED_OVERHEAD fields, they don't seem like they need to be long.


was (Author: appy):
Yes, need RB.
For now, instead of adding casting everywhere, I think we can change most 
variables to one type or another (depending on use).
Looking at few FIXED_OVERHEAD, don't seem like they need to be long.

> Fix findbugs and error-prone warnings in hbase-common (branch-2)
> 
>
> Key: HBASE-19497
> URL: https://issues.apache.org/jira/browse/HBASE-19497
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-alpha-4
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19497.master.001.patch
>
>
> In hbase-common fix important findbugs and error-prone warnings on branch-2 / 
> master. Start with a forward port pass from HBASE-19239. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19452) Turn ON off heap Bucket Cache by default

2017-12-12 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288711#comment-16288711
 ] 

Anoop Sam John commented on HBASE-19452:


Test fail seems not related.
bq.TEST-org.apache.hadoop.hbase.master.assignment.TestSplitTableRegionProcedure.xml.[failed-to-read]
Checkstyle can be fixed on commit.
[~stack], [~ram_krish], [~zyork]  Ping for reviews

> Turn ON off heap Bucket Cache by default
> 
>
> Key: HBASE-19452
> URL: https://issues.apache.org/jira/browse/HBASE-19452
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19452.patch, HBASE-19452_V2.patch, 
> HBASE-19452_V3.patch
>
>
> BC's hbase.bucketcache.ioengine is empty by default now, which means no BC.
> Make this default to be 'offheap'.  And the default off heap size for the BC 
> also to be provided. This can be 8 GB?  Better to make it also a % of the 
> Xmx. Let's continue with 40% of Xmx as the LRU cache default size.
> When user has to disable BC, configure size as 0. An empty value of this 
> config will be treated as default size.
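
For illustration, the proposal expressed as explicit configuration. The key names are the existing BucketCache/LRU ones; the concrete size and percentage are still under discussion above, so treat the values as placeholders.

{code}
// Sketch only: what enabling the off-heap BucketCache explicitly looks like
// today; the proposal is to make something along these lines the default.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public final class BucketCacheConfigExample {
  public static Configuration withOffheapBucketCache() {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.bucketcache.ioengine", "offheap"); // empty today means no BucketCache
    conf.set("hbase.bucketcache.size", "8192");        // MB when >= 1, fraction of heap when < 1
    conf.setFloat("hfile.block.cache.size", 0.4f);     // on-heap LRU cache stays at 40% of Xmx
    return conf;
  }
}
{code}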



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-19213) Align check and mutate operations in Table and AsyncTable

2017-12-12 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288705#comment-16288705
 ] 

Appy edited comment on HBASE-19213 at 12/13/17 4:42 AM:


Committing to master and branch-2.

Btw, fyi [~psomogyi] - 
https://stackoverflow.com/questions/15905127/overridden-methods-in-javadoc.
I'll remove them on commit.
{noformat}
+  /**
+   * {@inheritDoc}
+   */
{noformat}


was (Author: appy):
Committing to master and branch-2.

Btw, fyi [~psomogyi] - 
https://stackoverflow.com/questions/15905127/overridden-methods-in-javadoc.
I'll remove then on commit.
{noformat}
+  /**
+   * {@inheritDoc}
+   */
{noformat}

> Align check and mutate operations in Table and AsyncTable
> -
>
> Key: HBASE-19213
> URL: https://issues.apache.org/jira/browse/HBASE-19213
> Project: HBase
>  Issue Type: Sub-task
>  Components: API
>Affects Versions: 2.0.0-alpha-4
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Minor
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19213.branch-2.002.patch, 
> HBASE-19213.master.001.patch, HBASE-19213.master.001.patch, 
> HBASE-19213.master.002.patch
>
>
> Check and mutate methods are way different. Table has checkAndx methods (some 
> of them are deprecated), but AsyncTable has an interface called 
> CheckAndMutateBuilder and these kinds of operations are handled through that.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19213) Align check and mutate operations in Table and AsyncTable

2017-12-12 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288705#comment-16288705
 ] 

Appy commented on HBASE-19213:
--

Committing to master and branch-2.

Btw, fyi [~psomogyi] - 
https://stackoverflow.com/questions/15905127/overridden-methods-in-javadoc.
I'll remove then on commit.
{noformat}
+  /**
+   * {@inheritDoc}
+   */
{noformat}

> Align check and mutate operations in Table and AsyncTable
> -
>
> Key: HBASE-19213
> URL: https://issues.apache.org/jira/browse/HBASE-19213
> Project: HBase
>  Issue Type: Sub-task
>  Components: API
>Affects Versions: 2.0.0-alpha-4
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Minor
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19213.branch-2.002.patch, 
> HBASE-19213.master.001.patch, HBASE-19213.master.001.patch, 
> HBASE-19213.master.002.patch
>
>
> Check and mutate methods are way different. Table has checkAndx methods (some 
> of them are deprecated), but AsyncTable has an interface called 
> CheckAndMutateBuilder and these kinds of operations are handled through that.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19483) Add proper privilege check for rsgroup commands

2017-12-12 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288701#comment-16288701
 ] 

Anoop Sam John commented on HBASE-19483:


Seems we have to add so many pre/post hooks here. Maybe there is no other way! I 
believe when we did the CP cleanup, considering not to expose private classes, 
some of the hooks related to listing were removed. I am sure some were around 
Procedure, but I don't really remember about RS group. cc [~stack]

> Add proper privilege check for rsgroup commands
> ---
>
> Key: HBASE-19483
> URL: https://issues.apache.org/jira/browse/HBASE-19483
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Guangxu Cheng
> Attachments: HBASE-19483.master.001.patch, 
> HBASE-19483.master.002.patch, HBASE-19483.master.003.patch
>
>
> Currently list_rsgroups command can be executed by any user.
> This is inconsistent with other list commands such as list_peers and 
> list_peer_configs.
> We should add proper privilege check for list_rsgroups command.
> privilege check should be added for get_table_rsgroup / get_server_rsgroup / 
> get_rsgroup commands.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19495) Fix failed ut TestShell

2017-12-12 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288700#comment-16288700
 ] 

Anoop Sam John commented on HBASE-19495:


Oh ya, it is fine to remove it in the RB file.. I saw that but did not really 
check TestShell..
Ya, many doc-related changes are needed. Will work on all of those in one go.

> Fix failed ut TestShell
> ---
>
> Key: HBASE-19495
> URL: https://issues.apache.org/jira/browse/HBASE-19495
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19495.master.001.patch
>
>
> Failed on master branch. Need debug.
> [INFO] Running org.apache.hadoop.hbase.client.TestShell
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 722.737 s <<< FAILURE! - in org.apache.hadoop.hbase.client.TestShell
> [ERROR] testRunShellTests(org.apache.hadoop.hbase.client.TestShell)  Time 
> elapsed: 699.473 s  <<< ERROR!
> org.jruby.embed.EvalFailedException: (RuntimeError) Shell unit tests failed. 
> Check output file for details.
>   at 
> org.apache.hadoop.hbase.client.TestShell.testRunShellTests(TestShell.java:36)
> Caused by: org.jruby.exceptions.RaiseException: (RuntimeError) Shell unit 
> tests failed. Check output file for details.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18625) Splitting of region with replica, doesn't update region list in serverHolding. A server crash leads to overlap.

2017-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288687#comment-16288687
 ] 

Hadoop QA commented on HBASE-18625:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 19m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
46s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} branch-1 passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} branch-1 passed with JDK v1.7.0_161 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
46s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} branch-1 passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} branch-1 passed with JDK v1.7.0_161 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed with JDK v1.7.0_161 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
17s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 37s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 
2.7.4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed with JDK v1.7.0_161 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 89m 
22s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}154m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:36a7029 |
| JIRA Issue | HBASE-18625 |
| JIRA Patch URL | 

[jira] [Updated] (HBASE-19373) Fix Checkstyle error in hbase-annotations

2017-12-12 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-19373:
-
Fix Version/s: (was: 1.1.13)

> Fix Checkstyle error in hbase-annotations
> -
>
> Key: HBASE-19373
> URL: https://issues.apache.org/jira/browse/HBASE-19373
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jan Hentschel
>Assignee: Jan Hentschel
>Priority: Trivial
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-beta-1
>
> Attachments: HBASE-19373.master.001.patch
>
>
> Fix the remaining Checkstyle error regarding line length in the 
> *hbase-annotations* module.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19422) using hadoop-profile property leads to confusing failures

2017-12-12 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-19422:
-
Fix Version/s: (was: 1.1.13)

> using hadoop-profile property leads to confusing failures
> -
>
> Key: HBASE-19422
> URL: https://issues.apache.org/jira/browse/HBASE-19422
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Mike Drob
> Fix For: 1.4.0, 1.3.2, 1.2.7, 2.0.0-beta-1
>
> Attachments: 19422.v1.txt, HBASE-19422.patch
>
>
> When building master branch against hadoop 3 beta1,
> {code}
> mvn clean install -Dhadoop-profile=3.0 -Dhadoop-three.version=3.0.0-beta1 
> -Dhadoop-two.version=3.0.0-beta1 -DskipTests
> {code}
> I got:
> {code}
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.BannedDependencies failed 
> with message:
> We don't allow the JSR305 jar from the Findbugs project, see HBASE-16321.
> Found Banned Dependency: com.google.code.findbugs:jsr305:jar:1.3.9
> {code}
> Here is part of the dependency tree showing the dependency:
> {code}
> [INFO] org.apache.hbase:hbase-client:jar:3.0.0-SNAPSHOT
> ...
> [INFO] +- org.apache.hadoop:hadoop-auth:jar:3.0.0-beta1:compile
> ...
> [INFO] |  \- com.google.guava:guava:jar:11.0.2:compile
> [INFO] | \- com.google.code.findbugs:jsr305:jar:1.3.9:compile
> {code}
> We need to exclude jsr305 so that build succeed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19489) Check against only the latest maintenance release in pre-commit hadoopcheck.

2017-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288677#comment-16288677
 ] 

Hadoop QA commented on HBASE-19489:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
4s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
36s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
57s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
36s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
57m 45s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-beta1. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}168m 36s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}257m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestShell |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19489 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901779/HBASE-19489.master.005.patch
 |
| Optional Tests |  asflicense  shellcheck  shelldocs  javac  javadoc  unit  
xml  shadedjars  hadoopcheck  compile  |
| uname | Linux 4a392b0729cb 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 11467ef111 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| shellcheck | v0.4.4 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10397/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 

[jira] [Commented] (HBASE-19468) FNFE during scans and flushes

2017-12-12 Thread Thiruvel Thirumoolan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288664#comment-16288664
 ] 

Thiruvel Thirumoolan commented on HBASE-19468:
--

I didn't like the ref count approach to start with, but needed something simple 
to show the problem and demonstrate a fix, and I wanted to rework it. I prefer 
Ram's approach that doesn't touch the counters directly; it looks like both of us 
uploaded patches at more or less the same time and I missed his. Give me a couple 
of days, if that's ok, just to cross-check whether anything else needs consideration.
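
For context, here is a minimal sketch of the ref-count idea under discussion. It is not the committed fix and does not use HBase's actual classes; the names are hypothetical. The idea: newly flushed files are pinned with a counter when updateReaders() hands them to open scanners, unpinned in resetScannerStack(), and the discharger only archives files whose count is zero.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for a store file; only the pin counter matters here.
class PinnedFile {
  final String name;
  final AtomicInteger scannerRefs = new AtomicInteger();
  PinnedFile(String name) { this.name = name; }
}

class RefCountSketch {
  private final List<PinnedFile> pinnedByThisScanner = new ArrayList<>();

  // Analogue of StoreScanner.updateReaders(): pin the newly flushed files for this scanner.
  void updateReaders(List<PinnedFile> newFiles) {
    for (PinnedFile f : newFiles) {
      f.scannerRefs.incrementAndGet();
    }
    pinnedByThisScanner.addAll(newFiles);
  }

  // Analogue of StoreScanner.resetScannerStack(): new scanners are open on the files, release the pins.
  void resetScannerStack() {
    for (PinnedFile f : pinnedByThisScanner) {
      f.scannerRefs.decrementAndGet();
    }
    pinnedByThisScanner.clear();
  }

  // Analogue of the compacted-files discharger: skip anything still pinned.
  static boolean safeToArchive(PinnedFile f) {
    return f.scannerRefs.get() == 0;
  }
}
{code}

Scan lease expiry would also need to release the pins, which is the part flagged above as still needing thought.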

> FNFE during scans and flushes
> -
>
> Key: HBASE-19468
> URL: https://issues.apache.org/jira/browse/HBASE-19468
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Affects Versions: 1.3.1
>Reporter: Thiruvel Thirumoolan
>Priority: Critical
> Fix For: 2.0.0, 1.4.1, 1.5.0, 1.3.3
>
> Attachments: HBASE-19468-poc.patch, HBASE-19468_1.4.patch
>
>
> We see FNFE exceptions on our 1.3 clusters when scans and flushes happen at 
> the same time. This causes regionserver to throw a UnknownScannerException 
> and client retries.
> This happens during the following sequence:
> 1. Scanner open, client fetched some rows from regionserver and working on it
> 2. Flush happens and storeScanner is updated with flushed files 
> (StoreScanner.updateReaders())
> 3. Compaction happens on the region while scanner is still open
> 4. compaction discharger runs and cleans up the newly flushed file as we 
> don't have new scanners on it yet.
> 5. Client issues scan.next and during StoreScanner.resetScannerStack(), we 
> get a FNFE. RegionServer throws an UnknownScannerException. The client retries in 1.3. 
> With branch-1.4, the scan fails with a DoNotRetryIOException.
> [~ram_krish], My proposal is to increment the reader count during 
> updateReaders() and decrement it during resetScannerStack(), so discharger 
> doesn't clean it up. Scan lease expiries also have to be taken care of. Am I 
> missing anything? Is there a better approach?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19394) Support multi-homing env for the publication of RS status with multicast (hbase.status.published)

2017-12-12 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288662#comment-16288662
 ] 

Toshihiro Suzuki commented on HBASE-19394:
--

[~stack] I don't really know their use case, but I guess they wanted to use this 
facility to reduce downtime when an RS goes down. Thanks.

> Support multi-homing env for the publication of RS status with multicast 
> (hbase.status.published) 
> --
>
> Key: HBASE-19394
> URL: https://issues.apache.org/jira/browse/HBASE-19394
> Project: HBase
>  Issue Type: Bug
>  Components: Client, master
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19394.patch
>
>
> Currently, when the publication feature is enabled 
> (hbase.status.published=true), it uses the interface which is found first:
> https://github.com/apache/hbase/blob/2e8bd0036dbdf3a99786e5531495d8d4cb51b86c/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ClusterStatusPublisher.java#L268-L275
> This won't work when the host has multiple network interfaces and one that is 
> unreachable from the other nodes is selected. The interface used for 
> communication between cluster nodes should be configurable.
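
A minimal sketch of the configurable selection described above. The config key name is an assumption for illustration and is not necessarily the property added by the attached patch:

{code:java}
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;

public class StatusPublisherInterfaceSelector {
  // Hypothetical key; the real patch may use a different name.
  static final String PUBLISH_NI_NAME = "hbase.status.multicast.ni.name";

  /**
   * Pick the interface named in the configuration if present, otherwise fall
   * back to the first usable interface found (the old behavior).
   */
  static NetworkInterface select(Configuration conf) throws SocketException {
    String niName = conf.get(PUBLISH_NI_NAME);
    if (niName != null && !niName.isEmpty()) {
      NetworkInterface ni = NetworkInterface.getByName(niName);
      if (ni != null) {
        return ni;
      }
    }
    for (NetworkInterface ni : Collections.list(NetworkInterface.getNetworkInterfaces())) {
      if (ni.isUp() && !ni.isLoopback()) {
        return ni;
      }
    }
    return null;
  }
}
{code}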



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19492) Add EXCLUDE_NAMESPACE and EXCLUDE_TABLECFS support to replication peer config

2017-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288655#comment-16288655
 ] 

Hadoop QA commented on HBASE-19492:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
40s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
2s{color} | {color:red} hbase-server: The patch generated 4 new + 8 unchanged - 
5 fixed = 12 total (was 13) {color} |
| {color:red}-1{color} | {color:red} rubocop {color} | {color:red}  0m 
22s{color} | {color:red} The patch generated 46 new + 305 unchanged - 6 fixed = 
351 total (was 311) {color} |
| {color:red}-1{color} | {color:red} ruby-lint {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 39 new + 361 unchanged - 1 fixed = 
400 total (was 362) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
24s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
51m 49s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green}  
1m 40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
34s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
12s{color} | {color:green} hbase-replication in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}106m 
47s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  3s{color} 
| {color:red} hbase-shell in the patch failed. {color} |
| 

[jira] [Assigned] (HBASE-19468) FNFE during scans and flushes

2017-12-12 Thread Thiruvel Thirumoolan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thiruvel Thirumoolan reassigned HBASE-19468:


Assignee: Thiruvel Thirumoolan

> FNFE during scans and flushes
> -
>
> Key: HBASE-19468
> URL: https://issues.apache.org/jira/browse/HBASE-19468
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Affects Versions: 1.3.1
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
>Priority: Critical
> Fix For: 2.0.0, 1.4.1, 1.5.0, 1.3.3
>
> Attachments: HBASE-19468-poc.patch, HBASE-19468_1.4.patch
>
>
> We see FNFE exceptions on our 1.3 clusters when scans and flushes happen at 
> the same time. This causes regionserver to throw a UnknownScannerException 
> and client retries.
> This happens during the following sequence:
> 1. Scanner open, client fetched some rows from regionserver and working on it
> 2. Flush happens and storeScanner is updated with flushed files 
> (StoreScanner.updateReaders())
> 3. Compaction happens on the region while scanner is still open
> 4. compaction discharger runs and cleans up the newly flushed file as we 
> don't have new scanners on it yet.
> 5. Client issues scan.next and during StoreScanner.resetScannerStack(), we 
> get a FNFE. RegionServer throws an UnknownScannerException. The client retries in 1.3. 
> With branch-1.4, the scan fails with a DoNotRetryIOException.
> [~ram_krish], My proposal is to increment the reader count during 
> updateReaders() and decrement it during resetScannerStack(), so discharger 
> doesn't clean it up. Scan lease expiries also have to be taken care of. Am I 
> missing anything? Is there a better approach?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HBASE-19468) FNFE during scans and flushes

2017-12-12 Thread Thiruvel Thirumoolan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thiruvel Thirumoolan reassigned HBASE-19468:


Assignee: (was: Thiruvel Thirumoolan)

> FNFE during scans and flushes
> -
>
> Key: HBASE-19468
> URL: https://issues.apache.org/jira/browse/HBASE-19468
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Affects Versions: 1.3.1
>Reporter: Thiruvel Thirumoolan
>Priority: Critical
> Fix For: 2.0.0, 1.4.1, 1.5.0, 1.3.3
>
> Attachments: HBASE-19468-poc.patch, HBASE-19468_1.4.patch
>
>
> We see FNFE exceptions on our 1.3 clusters when scans and flushes happen at 
> the same time. This causes regionserver to throw a UnknownScannerException 
> and client retries.
> This happens during the following sequence:
> 1. Scanner open, client fetched some rows from regionserver and working on it
> 2. Flush happens and storeScanner is updated with flushed files 
> (StoreScanner.updateReaders())
> 3. Compaction happens on the region while scanner is still open
> 4. compaction discharger runs and cleans up the newly flushed file as we 
> don't have new scanners on it yet.
> 5. Client issues scan.next and during StoreScanner.resetScannerStack(), we 
> get a FNFE. RegionServer throws an UnknownScannerException. The client retries in 1.3. 
> With branch-1.4, the scan fails with a DoNotRetryIOException.
> [~ram_krish], My proposal is to increment the reader count during 
> updateReaders() and decrement it during resetScannerStack(), so discharger 
> doesn't clean it up. Scan lease expiries also have to be taken care of. Am I 
> missing anything? Is there a better approach?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19489) Check against only the latest maintenance release in pre-commit hadoopcheck.

2017-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288637#comment-16288637
 ] 

Hadoop QA commented on HBASE-19489:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
3s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
52s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
32s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
55s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
46m  7s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-beta1. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}154m 40s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}227m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestShell |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19489 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901779/HBASE-19489.master.005.patch
 |
| Optional Tests |  asflicense  shellcheck  shelldocs  javac  javadoc  unit  
xml  shadedjars  hadoopcheck  compile  |
| uname | Linux 7c92cd4a414d 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 11467ef111 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| shellcheck | v0.4.4 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10396/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 

[jira] [Commented] (HBASE-19489) Check against only the latest maintenance release in pre-commit hadoopcheck.

2017-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288634#comment-16288634
 ] 

Hadoop QA commented on HBASE-19489:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  4m  
7s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
4s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
35s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
52s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
27s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
52m 25s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-beta1. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}171m 15s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}257m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestShell |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19489 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901773/HBASE-19489.master.004.patch
 |
| Optional Tests |  asflicense  shellcheck  shelldocs  javac  javadoc  unit  
xml  shadedjars  hadoopcheck  compile  |
| uname | Linux 7778003a97a3 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 11467ef111 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| shellcheck | v0.4.4 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10393/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 

[jira] [Commented] (HBASE-19489) Check against only the latest maintenance release in pre-commit hadoopcheck.

2017-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288628#comment-16288628
 ] 

Hadoop QA commented on HBASE-19489:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
3s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
36s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
55s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
40s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
53m 38s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-beta1. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}161m 28s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}245m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestShell |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19489 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901773/HBASE-19489.master.004.patch
 |
| Optional Tests |  asflicense  shellcheck  shelldocs  javac  javadoc  unit  
xml  shadedjars  hadoopcheck  compile  |
| uname | Linux 583e37318786 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 
15:49:21 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 11467ef111 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| shellcheck | v0.4.4 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10394/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 

[jira] [Updated] (HBASE-19483) Add proper privilege check for rsgroup commands

2017-12-12 Thread Guangxu Cheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guangxu Cheng updated HBASE-19483:
--
Attachment: HBASE-19483.master.003.patch

Attached the 003 patch per [~appy]'s suggestions. Thanks.
1. Moved the ACL documentation to [http://hbase.apache.org/book.html#_permissions]
2. Added more information to the javadocs

> Add proper privilege check for rsgroup commands
> ---
>
> Key: HBASE-19483
> URL: https://issues.apache.org/jira/browse/HBASE-19483
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Guangxu Cheng
> Attachments: HBASE-19483.master.001.patch, 
> HBASE-19483.master.002.patch, HBASE-19483.master.003.patch
>
>
> Currently list_rsgroups command can be executed by any user.
> This is inconsistent with other list commands such as list_peers and 
> list_peer_configs.
> We should add proper privilege check for list_rsgroups command.
> privilege check should be added for get_table_rsgroup / get_server_rsgroup / 
> get_rsgroup commands.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19495) Fix failed ut TestShell

2017-12-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288618#comment-16288618
 ] 

stack commented on HBASE-19495:
---

Sorry about that [~zghaobac] Done now.

> Fix failed ut TestShell
> ---
>
> Key: HBASE-19495
> URL: https://issues.apache.org/jira/browse/HBASE-19495
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19495.master.001.patch
>
>
> Failed on master branch. Need debug.
> [INFO] Running org.apache.hadoop.hbase.client.TestShell
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 722.737 s <<< FAILURE! - in org.apache.hadoop.hbase.client.TestShell
> [ERROR] testRunShellTests(org.apache.hadoop.hbase.client.TestShell)  Time 
> elapsed: 699.473 s  <<< ERROR!
> org.jruby.embed.EvalFailedException: (RuntimeError) Shell unit tests failed. 
> Check output file for details.
>   at 
> org.apache.hadoop.hbase.client.TestShell.testRunShellTests(TestShell.java:36)
> Caused by: org.jruby.exceptions.RaiseException: (RuntimeError) Shell unit 
> tests failed. Check output file for details.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18625) Splitting of region with replica, doesn't update region list in serverHolding. A server crash leads to overlap.

2017-12-12 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-18625:
-
Status: Patch Available  (was: Open)

Submitting the patch for branch-1 first.

> Splitting of region with replica, doesn't update region list in 
> serverHolding. A server crash leads to overlap.
> ---
>
> Key: HBASE-18625
> URL: https://issues.apache.org/jira/browse/HBASE-18625
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.2.6
>Reporter: Igloo
>Assignee: huaxiang sun
> Fix For: 1.2.8
>
> Attachments: HBASE-18625-branch-1-v001.patch
>
>
> The situation can appear in the following steps in release hbase 1.2.6:
> 1. create 'testtable', 'info', {REGION_REPLICATION=>2}
> 2. write some records into 'testtable'
> 3. split the table 'testtable'
> 4. after the splitting, the serverHoldings in RegionStates still holds the 
> regioninfo for the replica of parent region
> 5. restart the regionserver where the parent replica-region located
> 6. the offlined replica of parent region will be assigned in 
> ServerCrashProcedure. 
> hbase hbck 'testtable'
> ERROR: Region { meta => null, hdfs => null, deployed => 
> qabb-qa-hdp-hbase1,16020,1503022958093;testtable,,1503022907686_0001.42d11cfe195b3cc4d08b2c078a687f6d
> ., replicaId => 1 } not in META, but deployed on 
> qabb-qa-hdp-hbase1,16020,1503022958093
>  18 ERROR: No regioninfo in Meta or HDFS. { meta => null, hdfs => null, 
> deployed => 
> qabb-qa-hdp-hbase1,16020,1503022958093;testtable,,1503022907686_0001.42d11cfe 
>195b3cc4d08b2c078a687f6d., replicaId => 1 }



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18625) Splitting of region with replica, doesn't update region list in serverHolding. A server crash leads to overlap.

2017-12-12 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-18625:
-
Attachment: HBASE-18625-branch-1-v001.patch

Attaching the patch for branch-1 first. Will update with a unit test and a patch for 
the master branch later.
The root cause is that when the replica of the parent region is offlined, its state is 
not SPLIT, so removeFromServerHoldings never removes it from serverHoldings.
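
A minimal sketch of the bookkeeping being described, with hypothetical names and simplified types rather than the actual RegionStates code: regions are only dropped from the per-server map when their state says so, which is why a replica of a split parent that never reaches SPLIT state is left behind.

{code:java}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Simplified stand-ins for illustration only.
enum State { OPEN, OFFLINE, SPLIT }

class RegionStatesSketch {
  // serverName -> regions that server is thought to be holding
  private final Map<String, Set<String>> serverHoldings = new HashMap<>();
  private final Map<String, State> regionStates = new HashMap<>();

  void offline(String server, String region, boolean parentWasSplit) {
    // The bug pattern: the replica of a split parent is offlined with state OFFLINE,
    // not SPLIT, so it stays in serverHoldings and a later ServerCrashProcedure
    // re-assigns it, producing the overlap seen by hbck.
    State newState = parentWasSplit ? State.SPLIT : State.OFFLINE;
    regionStates.put(region, newState);
    if (newState == State.SPLIT) {
      removeFromServerHoldings(server, region);
    }
  }

  private void removeFromServerHoldings(String server, String region) {
    Set<String> regions = serverHoldings.getOrDefault(server, new HashSet<>());
    regions.remove(region);
    if (regions.isEmpty()) {
      serverHoldings.remove(server);
    }
  }
}
{code}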

> Splitting of region with replica, doesn't update region list in 
> serverHolding. A server crash leads to overlap.
> ---
>
> Key: HBASE-18625
> URL: https://issues.apache.org/jira/browse/HBASE-18625
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.2.6
>Reporter: Igloo
>Assignee: huaxiang sun
> Fix For: 1.2.8
>
> Attachments: HBASE-18625-branch-1-v001.patch
>
>
> The situation can appear in the following steps in release hbase 1.2.6:
> 1. create 'testtable', 'info', {REGION_REPLICATION=>2}
> 2. write some records into 'testtable'
> 3. split the table 'testtable'
> 4. after the splitting, the serverHoldings in RegionStates still holds the 
> regioninfo for the replica of parent region
> 5. restart the regionserver where the parent replica-region located
> 6. the offlined replica of parent region will be assigned in 
> ServerCrashProcedure. 
> hbase hbck 'testtable'
> ERROR: Region { meta => null, hdfs => null, deployed => 
> qabb-qa-hdp-hbase1,16020,1503022958093;testtable,,1503022907686_0001.42d11cfe195b3cc4d08b2c078a687f6d
> ., replicaId => 1 } not in META, but deployed on 
> qabb-qa-hdp-hbase1,16020,1503022958093
>  18 ERROR: No regioninfo in Meta or HDFS. { meta => null, hdfs => null, 
> deployed => 
> qabb-qa-hdp-hbase1,16020,1503022958093;testtable,,1503022907686_0001.42d11cfe 
>195b3cc4d08b2c078a687f6d., replicaId => 1 }



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19287) master hangs forever if RecoverMeta send assign meta region request to target server fail

2017-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288581#comment-16288581
 ] 

Hadoop QA commented on HBASE-19287:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
56s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
55s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
45m 51s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 94m 
35s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}157m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19287 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901778/HBASE-19287-master-v3.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 129132157a88 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 11467ef111 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10395/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10395/console |
| Powered by | Apache Yetus 0.6.0   http://yetus.apache.org |


This message was automatically generated.



> master hangs forever if RecoverMeta send assign meta region request to target 
> server fail
> 

[jira] [Commented] (HBASE-19497) Fix findbugs and error-prone warnings in hbase-common (branch-2)

2017-12-12 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288567#comment-16288567
 ] 

Appy commented on HBASE-19497:
--

Yes, need RB.
For now, instead of adding casts everywhere, I think we can change most 
variables to one type or the other (depending on use).
Looking at a few of the FIXED_OVERHEAD fields, they don't seem like they need to be long.
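
As an illustration of the point (generic Java, not any specific HBase class): declaring the constant in the type it is actually used with removes the casts at every call site.

{code:java}
public class OverheadExample {
  // If the overhead is only ever added to int-sized quantities, keep it an int...
  public static final int FIXED_OVERHEAD_INT = 64;

  // ...but if it always feeds heap-size math done in long, declare it long up front
  // so callers don't need (long) casts scattered around.
  public static final long FIXED_OVERHEAD_LONG = 64L;

  static long heapSize(long dataSize) {
    return FIXED_OVERHEAD_LONG + dataSize;   // no cast needed
  }

  static int smallBufferSize(int payload) {
    return FIXED_OVERHEAD_INT + payload;     // no cast needed either
  }
}
{code}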

> Fix findbugs and error-prone warnings in hbase-common (branch-2)
> 
>
> Key: HBASE-19497
> URL: https://issues.apache.org/jira/browse/HBASE-19497
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-alpha-4
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19497.master.001.patch
>
>
> In hbase-common fix important findbugs and error-prone warnings on branch-2 / 
> master. Start with a forward port pass from HBASE-19239. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19497) Fix findbugs and error-prone warnings in hbase-common (branch-2)

2017-12-12 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288532#comment-16288532
 ] 

Appy commented on HBASE-19497:
--

Taking a look. Please upload to a RB too in case there are iterations (easy to 
see incremental diff)

> Fix findbugs and error-prone warnings in hbase-common (branch-2)
> 
>
> Key: HBASE-19497
> URL: https://issues.apache.org/jira/browse/HBASE-19497
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-alpha-4
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19497.master.001.patch
>
>
> In hbase-common fix important findbugs and error-prone warnings on branch-2 / 
> master. Start with a forward port pass from HBASE-19239. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19481) Enable Checkstyle in hbase-error-prone

2017-12-12 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288523#comment-16288523
 ] 

Appy commented on HBASE-19481:
--

Let's make a small change in the only java file of that module and see if it 
works?

> Enable Checkstyle in hbase-error-prone
> --
>
> Key: HBASE-19481
> URL: https://issues.apache.org/jira/browse/HBASE-19481
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jan Hentschel
>Assignee: Jan Hentschel
>Priority: Trivial
> Attachments: HBASE-19481.master.001.patch
>
>
> *hbase-error-prone* doesn't contain any Checkstyle errors. With that 
> Checkstyle can now be configured to fail on violations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19495) Fix failed ut TestShell

2017-12-12 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288508#comment-16288508
 ] 

Guanghao Zhang commented on HBASE-19495:


bq. Pushed.
Didn't push this to master branch, sir? [~stack]

> Fix failed ut TestShell
> ---
>
> Key: HBASE-19495
> URL: https://issues.apache.org/jira/browse/HBASE-19495
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19495.master.001.patch
>
>
> Failed on master branch. Need debug.
> [INFO] Running org.apache.hadoop.hbase.client.TestShell
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 722.737 s <<< FAILURE! - in org.apache.hadoop.hbase.client.TestShell
> [ERROR] testRunShellTests(org.apache.hadoop.hbase.client.TestShell)  Time 
> elapsed: 699.473 s  <<< ERROR!
> org.jruby.embed.EvalFailedException: (RuntimeError) Shell unit tests failed. 
> Check output file for details.
>   at 
> org.apache.hadoop.hbase.client.TestShell.testRunShellTests(TestShell.java:36)
> Caused by: org.jruby.exceptions.RaiseException: (RuntimeError) Shell unit 
> tests failed. Check output file for details.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19495) Fix failed ut TestShell

2017-12-12 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288507#comment-16288507
 ] 

Guanghao Zhang commented on HBASE-19495:


bq. I saw family.setCacheDataInL1 in RB but that wont make an issue as the call 
is still there but a noop.
If I am not wrong, setCacheDataInL1 was marked as deprecated and does not work 
anymore, so I removed it directly...
BTW: I grepped for CACHE_DATA_IN_L1 and still see it in the documentation. It seems 
the documentation change will be handled in HBASE-19438? So I didn't remove 
CACHE_DATA_IN_L1 from the documentation.



> Fix failed ut TestShell
> ---
>
> Key: HBASE-19495
> URL: https://issues.apache.org/jira/browse/HBASE-19495
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19495.master.001.patch
>
>
> Failed on master branch. Need debug.
> [INFO] Running org.apache.hadoop.hbase.client.TestShell
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 722.737 s <<< FAILURE! - in org.apache.hadoop.hbase.client.TestShell
> [ERROR] testRunShellTests(org.apache.hadoop.hbase.client.TestShell)  Time 
> elapsed: 699.473 s  <<< ERROR!
> org.jruby.embed.EvalFailedException: (RuntimeError) Shell unit tests failed. 
> Check output file for details.
>   at 
> org.apache.hadoop.hbase.client.TestShell.testRunShellTests(TestShell.java:36)
> Caused by: org.jruby.exceptions.RaiseException: (RuntimeError) Shell unit 
> tests failed. Check output file for details.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18352) Enable TestMasterOperationsForRegionReplicas#testCreateTableWithMultipleReplicas disabled by Proc-V2 AM in HBASE-14614

2017-12-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288505#comment-16288505
 ] 

stack commented on HBASE-18352:
---

This is an interesting issue. It depends on HBASE-18946, but apart from that 
this test turned up some interesting items:

 # On shutdown, we used to try and have the Master go down last. We lost that 
facility adding in AMv2.
 # On shutdown, the RS runs close of all regions it is carrying. Part of close 
handling is updating the Master about the close. The Master didn't initiate the 
close as is usually the case, so it doesn't know what to do w/ the incoming 
unaccounted-for close. I could notice the cluster is going down and then go update 
meta... but that could be a pain given the cluster is going down, especially with 1M 
regions on the cluster.
 # On startup, if meta has region states as OPEN -- as would be the case here 
when the RS did the close and not the Master -- then we presumed a crash 
when there wasn't one. That meant we lost old assignments across a restart.

So, let me put up a patch here that addresses above (though depends on 
HBASE-18946 going in first).

> Enable 
> TestMasterOperationsForRegionReplicas#testCreateTableWithMultipleReplicas 
> disabled by Proc-V2 AM in HBASE-14614
> --
>
> Key: HBASE-18352
> URL: https://issues.apache.org/jira/browse/HBASE-18352
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0-alpha-1
>Reporter: Stephen Yuan Jiang
>Assignee: huaxiang sun
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18946_1.patch
>
>
> The following replica tests were disabled by Core Proc-V2 AM in HBASE-14614:
> - Disabled parts of...testCreateTableWithMultipleReplicas in 
> TestMasterOperationsForRegionReplicas There is an issue w/ assigning more 
> replicas if number of replicas is changed on us. See '/* DISABLED! FOR 
> NOW'.
> ** NOTE We moved fixing of the below two tests out to HBASE-19268
> - Disabled testRegionReplicasOnMidClusterHighReplication in 
> TestStochasticLoadBalancer2
> - Disabled testFlushAndCompactionsInPrimary in TestRegionReplicas
> This JIRA tracks the work to enable them (or modify/remove if not applicable).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19481) Enable Checkstyle in hbase-error-prone

2017-12-12 Thread Jan Hentschel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288493#comment-16288493
 ] 

Jan Hentschel commented on HBASE-19481:
---

I think the reporting is already included through the parent POM. What was 
missing is making Checkstyle fail on an error. Currently most of the modules 
don't contain this configuration.

> Enable Checkstyle in hbase-error-prone
> --
>
> Key: HBASE-19481
> URL: https://issues.apache.org/jira/browse/HBASE-19481
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jan Hentschel
>Assignee: Jan Hentschel
>Priority: Trivial
> Attachments: HBASE-19481.master.001.patch
>
>
> *hbase-error-prone* doesn't contain any Checkstyle errors. With that 
> Checkstyle can now be configured to fail on violations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HBASE-17425) Fix calls to deprecated APIs in TestUpdateConfiguration

2017-12-12 Thread Jan Hentschel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Hentschel resolved HBASE-17425.
---
Resolution: Fixed

Closing this one. All changes to 1.x branches are reverted.

> Fix calls to deprecated APIs in TestUpdateConfiguration
> ---
>
> Key: HBASE-17425
> URL: https://issues.apache.org/jira/browse/HBASE-17425
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Jan Hentschel
>Assignee: Jan Hentschel
>Priority: Trivial
> Fix For: 3.0.0, 2.0.0-beta-1
>
> Attachments: HBASE-17425.master.001.patch
>
>
> Currently there are two calls to the deprecated method 
> {code:java}HBaseTestingUtil.getHBaseAdmin(){code} in 
> *TestUpdateConfiguration*. These calls should be changed to 
> {code:java}HBaseTestingUtil.getAdmin(){code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19492) Add EXCLUDE_NAMESPACE and EXCLUDE_TABLECFS support to replication peer config

2017-12-12 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-19492:
---
Attachment: HBASE-19492.master.002.patch

Retry.

> Add EXCLUDE_NAMESPACE and EXCLUDE_TABLECFS support to replication peer config
> -
>
> Key: HBASE-19492
> URL: https://issues.apache.org/jira/browse/HBASE-19492
> Project: HBase
>  Issue Type: Improvement
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-19492.master.001.patch, 
> HBASE-19492.master.002.patch, HBASE-19492.master.002.patch, 
> HBASE-19492.master.002.patch
>
>
> This is a follow-up issue after HBASE-16868. Copied the comments in 
> HBASE-16868.
> This replicate_all flag is useful to avoid misuse of replication peer config. 
> On our clusters we have more config: EXCLUDE_NAMESPACE and 
> EXCLUDE_TABLECFS for replication peers. Let me tell you more about our use case. 
> We have two online serving clusters and one offline cluster for MR/Spark jobs. 
> For the online clusters, all tables replicate to each other. But not all 
> tables replicate to the offline cluster, because not all tables need OLAP 
> jobs. We have hundreds of tables, and if only one table doesn't need to replicate 
> to the offline cluster, you would have to config a lot of tables in the replication 
> peer config. So we added a new config option, EXCLUDE_TABLECFS. Then you only need 
> to config the one table (which doesn't need to replicate) in EXCLUDE_TABLECFS.
> So when the replicate_all flag is false, you can config NAMESPACE or 
> TABLECFS to say which namespaces/tables should replicate to the peer cluster. When 
> the replicate_all flag is true, you can config EXCLUDE_NAMESPACE or 
> EXCLUDE_TABLECFS to say which namespaces/tables must not replicate to the peer cluster.
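
A minimal sketch of how such a peer might be configured from the client API once this lands. The replicate-all and exclude setter names are assumptions based on the description above, not the committed API:

{code:java}
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;

public class ExcludePeerExample {
  static void addOfflinePeer(Admin admin) throws Exception {
    ReplicationPeerConfig peerConfig = new ReplicationPeerConfig();
    peerConfig.setClusterKey("offline-zk1,offline-zk2,offline-zk3:2181:/hbase");
    // Replicate everything by default... (setter name assumed)
    peerConfig.setReplicateAllUserTables(true);
    // ...except the one table that doesn't need OLAP jobs. (setter name assumed)
    Map<TableName, List<String>> exclude = new HashMap<>();
    exclude.put(TableName.valueOf("no_olap_table"), null); // null -> all column families
    peerConfig.setExcludeTableCFsMap(exclude);
    admin.addReplicationPeer("offline", peerConfig);
  }
}
{code}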



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19483) Add proper privilege check for rsgroup commands

2017-12-12 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288477#comment-16288477
 ] 

Appy commented on HBASE-19483:
--

{quote}
bq. Is there already a jira to discuss other questions
I don't think there is. Feel free to raise JIRA.
{quote}
HBASE-19500

> Add proper privilege check for rsgroup commands
> ---
>
> Key: HBASE-19483
> URL: https://issues.apache.org/jira/browse/HBASE-19483
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Guangxu Cheng
> Attachments: HBASE-19483.master.001.patch, 
> HBASE-19483.master.002.patch
>
>
> Currently list_rsgroups command can be executed by any user.
> This is inconsistent with other list commands such as list_peers and 
> list_peer_configs.
> We should add proper privilege check for list_rsgroups command.
> privilege check should be added for get_table_rsgroup / get_server_rsgroup / 
> get_rsgroup commands.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19500) Make RSGroupInfo immutable

2017-12-12 Thread Appy (JIRA)
Appy created HBASE-19500:


 Summary: Make RSGroupInfo immutable
 Key: HBASE-19500
 URL: https://issues.apache.org/jira/browse/HBASE-19500
 Project: HBase
  Issue Type: Bug
Reporter: Appy


HBASE-19483 added CP hooks to expose RSGroupInfo.
First, we should at least change [hbase-client] RSGroupInfo to be immutable with a 
builder pattern, like we have done for so many other things.
What say [~Apache9]?

Then, a few questions need figuring out:
- Should hooks be allowed to change RSGroupInfo? 
Probably not? Then making it immutable would be necessary and sufficient.
- Can we remove {{if(((MasterEnvironment)getEnvironment()).supportGroupCPs) }} 
in so many places, since CPs in 2.0 are already broken left and right (and we'll 
have to solve the legacy issue more holistically)? What say [~anoop.hbase]?
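
A minimal sketch of the immutable + builder shape being proposed; the field set and types are illustrative only, not the actual RSGroupInfo definition:

{code:java}
import java.util.Collections;
import java.util.SortedSet;
import java.util.TreeSet;

public final class RSGroupInfoSketch {
  private final String name;
  private final SortedSet<String> servers;  // host:port strings, for illustration
  private final SortedSet<String> tables;

  private RSGroupInfoSketch(Builder b) {
    this.name = b.name;
    this.servers = Collections.unmodifiableSortedSet(new TreeSet<>(b.servers));
    this.tables = Collections.unmodifiableSortedSet(new TreeSet<>(b.tables));
  }

  public String getName() { return name; }
  public SortedSet<String> getServers() { return servers; }
  public SortedSet<String> getTables() { return tables; }

  public static Builder newBuilder(String name) { return new Builder(name); }

  public static final class Builder {
    private final String name;
    private final SortedSet<String> servers = new TreeSet<>();
    private final SortedSet<String> tables = new TreeSet<>();

    private Builder(String name) { this.name = name; }
    public Builder addServer(String hostPort) { servers.add(hostPort); return this; }
    public Builder addTable(String table) { tables.add(table); return this; }
    public RSGroupInfoSketch build() { return new RSGroupInfoSketch(this); }
  }
}
{code}

With this shape, coprocessor hooks would receive a snapshot they cannot mutate, which answers the first question above by construction.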




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19489) Check against only the latest maintenance release in pre-commit hadoopcheck.

2017-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288472#comment-16288472
 ] 

Hadoop QA commented on HBASE-19489:
---

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HBASE-Build/10397/console in case of 
problems.


> Check against only the latest maintenance release in pre-commit hadoopcheck.
> 
>
> Key: HBASE-19489
> URL: https://issues.apache.org/jira/browse/HBASE-19489
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: HBASE-19489.master.001.patch, 
> HBASE-19489.master.002.patch, HBASE-19489.master.003.patch, 
> HBASE-19489.master.004.patch, HBASE-19489.master.005.patch
>
>
> (copied from dev thread)
> {color:green}
> | +1  | hadoopcheck | 52m 1s |Patch does not cause any errors with 
> Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. 
> |
> {color}
> Almost 1 hr to check against 10 versions. And it's only going to increase as 
> more 2.6.x, 2.7.x and 3.0.x releases come out.
> Suggestion here is simple, let's check against only the latest maintenance 
> release for each minor version i.e. 2.6.5, 2.7.4 and 3.0.0-alpha4.
> Advantage: Save ~40 min on pre-commit time.
> Justification:
> - We only do compile checks. Maintenance releases are not supposed to be 
> doing API breaking changes. So checking against maintenance release for each 
> minor version should be enough.
> - We rarely see any hadoop check -1, and most recent ones have been due to 
> 3.0. These will still be caught.
> - Nightly can still check against all hadoop versions (since nightlies are 
> supposed to do holistic testing)
> - Analyzing 201 precommits from 10100 (11/29) - 10300 (12/8) (10 days):
>   138 had +1 hadoopcheck
>15 had -1 hadoopcheck
>   (others probably failed even before that - merge issue, etc)
> Spot checking some 
> failures:(10241,10246,10225,10269,10151,10156,10184,10250,10298,10227,10294,10223,10251,10119,10230)
> 10241: All 2.6.x failed. Others didn't run
> 10246: All 10 versions failed.
> 10184: All 2.6.x and 2.7.x failed. Others didn't run
> 10223: All 10 versions failed 
> 10230: All 2.6.x failed. Others didn't run
>   
> Common pattern being, all maintenance versions fail together.
> (idk, why sometimes 2.7.* are not reported if 2.6.* fail, but that's 
> irrelevant to this discussion).
> What do you say - only check latest maintenance releases in precommit (and 
> let nightlies do holistic testing against all versions)?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19483) Add proper privilege check for rsgroup commands

2017-12-12 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288461#comment-16288461
 ] 

Ted Yu commented on HBASE-19483:


bq. Is there already a jira to discuss other questions

I don't think there is. Feel free to raise JIRA.

> Add proper privilege check for rsgroup commands
> ---
>
> Key: HBASE-19483
> URL: https://issues.apache.org/jira/browse/HBASE-19483
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Guangxu Cheng
> Attachments: HBASE-19483.master.001.patch, 
> HBASE-19483.master.002.patch
>
>
> Currently the list_rsgroups command can be executed by any user.
> This is inconsistent with other list commands such as list_peers and 
> list_peer_configs.
> We should add a proper privilege check for the list_rsgroups command.
> A privilege check should also be added for the get_table_rsgroup / 
> get_server_rsgroup / get_rsgroup commands.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19499) RegionMover#stripMaster in RegionMover needs to handle HBASE-18511 gracefully

2017-12-12 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288459#comment-16288459
 ] 

Esteban Gutierrez commented on HBASE-19499:
---

In fact, what causes the RegionMover to fail is this:
{code}
17/12/12 11:00:28 ERROR util.RegionMover: Error while unloading regions
java.lang.Exception: Server host1.example.com:22001 is not in list of online 
servers(Offline/Incorrect)
at 
org.apache.hadoop.hbase.util.RegionMover.stripServer(RegionMover.java:818)
at 
org.apache.hadoop.hbase.util.RegionMover.access$1500(RegionMover.java:78)
at 
org.apache.hadoop.hbase.util.RegionMover$Unload.call(RegionMover.java:336)
at 
org.apache.hadoop.hbase.util.RegionMover$Unload.call(RegionMover.java:314)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
{code}


> RegionMover#stripMaster in RegionMover needs to handle HBASE-18511 gracefully
> -
>
> Key: HBASE-19499
> URL: https://issues.apache.org/jira/browse/HBASE-19499
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Esteban Gutierrez
>
> This is probably the first of a few issues found during some tests with 
> RegionMover. After HBASE-13014 we ship the new RegionMover tool, but it 
> currently assumes that the master will be hosting regions, so it attempts to 
> remove the master from the list and that causes an issue similar to this:
> {code}
> 17/12/12 11:01:06 WARN util.RegionMover: Could not remove master from list of 
> RS
> java.lang.Exception: Server host1.example.com:22001 is not in list of online 
> servers(Offline/Incorrect)
>   at 
> org.apache.hadoop.hbase.util.RegionMover.stripServer(RegionMover.java:818)
>   at 
> org.apache.hadoop.hbase.util.RegionMover.stripMaster(RegionMover.java:757)
>   at 
> org.apache.hadoop.hbase.util.RegionMover.access$1800(RegionMover.java:78)
>   at 
> org.apache.hadoop.hbase.util.RegionMover$Unload.call(RegionMover.java:339)
>   at 
> org.apache.hadoop.hbase.util.RegionMover$Unload.call(RegionMover.java:314)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> Basicaly



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19483) Add proper privilege check for rsgroup commands

2017-12-12 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288454#comment-16288454
 ] 

Appy commented on HBASE-19483:
--

bq. @param rsGroupInfo the group information
It'd be nice to have javadocs give more information than what the parameter 
name already conveys; "rsGroupInfo" already says that it's group information.
The right javadoc would be something like "RSGroupInfo to which the given table 
belongs.", and additionally whether it can be null, etc.
(We all need to get better at javadocs as a community, just trying to promote 
the change. Please look for the same in your reviews :-))
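For illustration, the kind of javadoc being asked for, shown on a hypothetical 
hook signature (the interface and method below are made up for the example; 
only the documentation style is the point).

{code}
import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

// Hypothetical observer used only to illustrate the javadoc style; it is not
// the real MasterObserver hook added by the patch.
interface ExampleGroupObserver {
  /**
   * Called after the region server group of a table has been looked up.
   *
   * @param tableName the table that was queried
   * @param rsGroupInfo the RSGroupInfo to which the given table belongs, or
   *                    {@code null} if the table is not assigned to any group
   * @throws IOException if the coprocessor wants to abort the operation
   */
  void postGetTableRSGroup(TableName tableName, RSGroupInfo rsGroupInfo)
      throws IOException;
}
{code}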




> Add proper privilege check for rsgroup commands
> ---
>
> Key: HBASE-19483
> URL: https://issues.apache.org/jira/browse/HBASE-19483
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Guangxu Cheng
> Attachments: HBASE-19483.master.001.patch, 
> HBASE-19483.master.002.patch
>
>
> Currently the list_rsgroups command can be executed by any user.
> This is inconsistent with other list commands such as list_peers and 
> list_peer_configs.
> We should add a proper privilege check for the list_rsgroups command.
> A privilege check should also be added for the get_table_rsgroup / 
> get_server_rsgroup / get_rsgroup commands.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19289) CommonFSUtils$StreamLacksCapabilityException: hflush when running test against hadoop3 beta1

2017-12-12 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288447#comment-16288447
 ] 

Josh Elser commented on HBASE-19289:


LGTM. I assume you're running these tests successfully using H3 on your local 
machine (as I don't think PreCommit is, right?) -- if that's the case, push it!

> CommonFSUtils$StreamLacksCapabilityException: hflush when running test 
> against hadoop3 beta1
> 
>
> Key: HBASE-19289
> URL: https://issues.apache.org/jira/browse/HBASE-19289
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Mike Drob
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: 19289.v1.txt, 19289.v2.txt, HBASE-19289.patch, 
> HBASE-19289.v2.patch, HBASE-19289.v3.patch, HBASE-19289.v4.patch, 
> HBASE-19289.v5.patch
>
>
> As of commit d8fb10c8329b19223c91d3cda6ef149382ad4ea0 , I encountered the 
> following exception when running unit test against hadoop3 beta1:
> {code}
> testRefreshStoreFiles(org.apache.hadoop.hbase.regionserver.TestHStore)  Time 
> elapsed: 0.061 sec  <<< ERROR!
> java.io.IOException: cannot get log writer
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.initHRegion(TestHStore.java:215)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:220)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:195)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:190)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:185)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:179)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:173)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.testRefreshStoreFiles(TestHStore.java:962)
> Caused by: 
> org.apache.hadoop.hbase.util.CommonFSUtils$StreamLacksCapabilityException: 
> hflush
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.initHRegion(TestHStore.java:215)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:220)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:195)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:190)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:185)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:179)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:173)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.testRefreshStoreFiles(TestHStore.java:962)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19289) CommonFSUtils$StreamLacksCapabilityException: hflush when running test against hadoop3 beta1

2017-12-12 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288443#comment-16288443
 ] 

Josh Elser commented on HBASE-19289:


bq. Do you mind if we handle the bit about the standalone cluster in a 
follow-on issue? Currently we're blocking tests, and while that should 
definitely be a release blocker, I'm not sure it needs to be addressed with 
equal urgency.

Not at all. Just wanted to make sure we didn't miss this aspect :)

> CommonFSUtils$StreamLacksCapabilityException: hflush when running test 
> against hadoop3 beta1
> 
>
> Key: HBASE-19289
> URL: https://issues.apache.org/jira/browse/HBASE-19289
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Mike Drob
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: 19289.v1.txt, 19289.v2.txt, HBASE-19289.patch, 
> HBASE-19289.v2.patch, HBASE-19289.v3.patch, HBASE-19289.v4.patch, 
> HBASE-19289.v5.patch
>
>
> As of commit d8fb10c8329b19223c91d3cda6ef149382ad4ea0 , I encountered the 
> following exception when running unit test against hadoop3 beta1:
> {code}
> testRefreshStoreFiles(org.apache.hadoop.hbase.regionserver.TestHStore)  Time 
> elapsed: 0.061 sec  <<< ERROR!
> java.io.IOException: cannot get log writer
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.initHRegion(TestHStore.java:215)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:220)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:195)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:190)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:185)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:179)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:173)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.testRefreshStoreFiles(TestHStore.java:962)
> Caused by: 
> org.apache.hadoop.hbase.util.CommonFSUtils$StreamLacksCapabilityException: 
> hflush
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.initHRegion(TestHStore.java:215)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:220)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:195)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:190)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:185)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:179)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:173)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.testRefreshStoreFiles(TestHStore.java:962)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19499) RegionMover#stripMaster in RegionMover needs to handle HBASE-18511 gracefully

2017-12-12 Thread Esteban Gutierrez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esteban Gutierrez updated HBASE-19499:
--
Summary: RegionMover#stripMaster in RegionMover needs to handle HBASE-18511 
gracefully  (was: RegionMover#stripMaster is not longer necessary in 
RegionMover)

> RegionMover#stripMaster in RegionMover needs to handle HBASE-18511 gracefully
> -
>
> Key: HBASE-19499
> URL: https://issues.apache.org/jira/browse/HBASE-19499
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Esteban Gutierrez
>
> This is probably the first of a few issues found during some tests with 
> RegionMover. After HBASE-13014 we ship the new RegionMover tool, but it 
> currently assumes that the master will be hosting regions, so it attempts to 
> remove the master from the list and that causes an issue similar to this:
> {code}
> 17/12/12 11:01:06 WARN util.RegionMover: Could not remove master from list of 
> RS
> java.lang.Exception: Server host1.example.com:22001 is not in list of online 
> servers(Offline/Incorrect)
>   at 
> org.apache.hadoop.hbase.util.RegionMover.stripServer(RegionMover.java:818)
>   at 
> org.apache.hadoop.hbase.util.RegionMover.stripMaster(RegionMover.java:757)
>   at 
> org.apache.hadoop.hbase.util.RegionMover.access$1800(RegionMover.java:78)
>   at 
> org.apache.hadoop.hbase.util.RegionMover$Unload.call(RegionMover.java:339)
>   at 
> org.apache.hadoop.hbase.util.RegionMover$Unload.call(RegionMover.java:314)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> Basicaly



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19499) RegionMover#stripMaster is not longer necessary in RegionMover

2017-12-12 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288438#comment-16288438
 ] 

Esteban Gutierrez commented on HBASE-19499:
---

[~stack] pointed out HBASE-18511. We cannot just remove that test, but if 
{{hbase.balancer.tablesOnMaster.systemTablesOnly}} and 
{{hbase.balancer.tablesOnMaster}} are enabled we can be more flexible in order 
to avoid that failure.
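A minimal sketch of the kind of graceful handling being discussed, with a 
hypothetical helper name; the real RegionMover#stripMaster/#stripServer logic 
differs, this only illustrates skipping a master that is not in the online-RS 
list instead of failing the whole unload.

{code}
import java.util.List;

import org.apache.hadoop.hbase.ServerName;

// Illustration only: a defensive variant of the strip-master step.
final class StripMasterSketch {
  static void stripMasterIfPresent(List<ServerName> onlineServers, ServerName master) {
    boolean removed = onlineServers.removeIf(sn -> sn.equals(master));
    if (!removed) {
      // Master is not hosting regions (e.g. tablesOnMaster is disabled), so
      // there is nothing to strip; log and continue instead of throwing.
      System.out.println("Master " + master + " not in list of online RS; skipping");
    }
  }
}
{code}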

> RegionMover#stripMaster is not longer necessary in RegionMover
> --
>
> Key: HBASE-19499
> URL: https://issues.apache.org/jira/browse/HBASE-19499
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Esteban Gutierrez
>
> This is probably the first of a few issues found during some tests with 
> RegionMover. After HBASE-13014 we ship the new RegionMover tool, but it 
> currently assumes that the master will be hosting regions, so it attempts to 
> remove the master from the list and that causes an issue similar to this:
> {code}
> 17/12/12 11:01:06 WARN util.RegionMover: Could not remove master from list of 
> RS
> java.lang.Exception: Server host1.example.com:22001 is not in list of online 
> servers(Offline/Incorrect)
>   at 
> org.apache.hadoop.hbase.util.RegionMover.stripServer(RegionMover.java:818)
>   at 
> org.apache.hadoop.hbase.util.RegionMover.stripMaster(RegionMover.java:757)
>   at 
> org.apache.hadoop.hbase.util.RegionMover.access$1800(RegionMover.java:78)
>   at 
> org.apache.hadoop.hbase.util.RegionMover$Unload.call(RegionMover.java:339)
>   at 
> org.apache.hadoop.hbase.util.RegionMover$Unload.call(RegionMover.java:314)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> Basicaly



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-19499) RegionMover#stripMaster is not longer necessary in RegionMover

2017-12-12 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288438#comment-16288438
 ] 

Esteban Gutierrez edited comment on HBASE-19499 at 12/12/17 11:12 PM:
--

[~stack] pointed out HBASE-18511. We cannot just remove that condition, but if 
{{hbase.balancer.tablesOnMaster.systemTablesOnly}} and 
{{hbase.balancer.tablesOnMaster}} are enabled we can be more flexible in order 
to avoid that failure.


was (Author: esteban):
[~stack] pointed out to HBASE-18511. We cannot just remove that test, but we if 
{{hbase.balancer.tablesOnMaster.systemTablesOnly}} and 
{{hbase.balancer.tablesOnMaster}} are enabled we can be more flexible in order 
to avoid that failure.

> RegionMover#stripMaster is not longer necessary in RegionMover
> --
>
> Key: HBASE-19499
> URL: https://issues.apache.org/jira/browse/HBASE-19499
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Esteban Gutierrez
>
> This is probably the first of a few issues found during some tests with 
> RegionMover. After HBASE-13014 we ship the new RegionMover tool, but it 
> currently assumes that the master will be hosting regions, so it attempts to 
> remove the master from the list and that causes an issue similar to this:
> {code}
> 17/12/12 11:01:06 WARN util.RegionMover: Could not remove master from list of 
> RS
> java.lang.Exception: Server host1.example.com:22001 is not in list of online 
> servers(Offline/Incorrect)
>   at 
> org.apache.hadoop.hbase.util.RegionMover.stripServer(RegionMover.java:818)
>   at 
> org.apache.hadoop.hbase.util.RegionMover.stripMaster(RegionMover.java:757)
>   at 
> org.apache.hadoop.hbase.util.RegionMover.access$1800(RegionMover.java:78)
>   at 
> org.apache.hadoop.hbase.util.RegionMover$Unload.call(RegionMover.java:339)
>   at 
> org.apache.hadoop.hbase.util.RegionMover$Unload.call(RegionMover.java:314)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> Basicaly



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19483) Add proper privilege check for rsgroup commands

2017-12-12 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288429#comment-16288429
 ] 

Appy commented on HBASE-19483:
--

ACL is more important than other topics there, like dead nodes, 
troubleshooting, etc. Please move it before the 'Best Practice' section.
In fact, since we have a single place to list ACLs, I'd rather have them just 
here (http://hbase.apache.org/book.html#_permissions) to avoid redundancy (and 
possible stale copies in the future).

Is there already a jira to discuss the other questions? If not, I can create 
one. Let me know.

> Add proper privilege check for rsgroup commands
> ---
>
> Key: HBASE-19483
> URL: https://issues.apache.org/jira/browse/HBASE-19483
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Guangxu Cheng
> Attachments: HBASE-19483.master.001.patch, 
> HBASE-19483.master.002.patch
>
>
> Currently the list_rsgroups command can be executed by any user.
> This is inconsistent with other list commands such as list_peers and 
> list_peer_configs.
> We should add a proper privilege check for the list_rsgroups command.
> A privilege check should also be added for the get_table_rsgroup / 
> get_server_rsgroup / get_rsgroup commands.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19489) Check against only the latest maintenance release in pre-commit hadoopcheck.

2017-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288421#comment-16288421
 ] 

Hadoop QA commented on HBASE-19489:
---

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HBASE-Build/10396/console in case of 
problems.


> Check against only the latest maintenance release in pre-commit hadoopcheck.
> 
>
> Key: HBASE-19489
> URL: https://issues.apache.org/jira/browse/HBASE-19489
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: HBASE-19489.master.001.patch, 
> HBASE-19489.master.002.patch, HBASE-19489.master.003.patch, 
> HBASE-19489.master.004.patch, HBASE-19489.master.005.patch
>
>
> (copied from dev thread)
> {color:green}
> | +1  | hadoopcheck | 52m 1s |Patch does not cause any errors with 
> Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. 
> |
> {color}
> Almost 1 hr to check against 10 versions. And it's only going to increase as 
> more 2.6.x, 2.7.x and 3.0.x releases come out.
> Suggestion here is simple, let's check against only the latest maintenance 
> release for each minor version i.e. 2.6.5, 2.7.4 and 3.0.0-alpha4.
> Advantage: Save ~40 min on pre-commit time.
> Justification:
> - We only do compile checks. Maintenance releases are not supposed to be 
> doing API breaking changes. So checking against maintenance release for each 
> minor version should be enough.
> - We rarely see any hadoop check -1, and most recent ones have been due to 
> 3.0. These will still be caught.
> - Nightly can still check against all hadoop versions (since nightlies are 
> supposed to do holistic testing)
> - Analyzing 201 precommits from 10100 (11/29) - 10300 (12/8) (10 days):
>   138 had +1 hadoopcheck
>15 had -1 hadoopcheck
>   (others probably failed even before that - merge issue, etc)
> Spot checking some 
> failures:(10241,10246,10225,10269,10151,10156,10184,10250,10298,10227,10294,10223,10251,10119,10230)
> 10241: All 2.6.x failed. Others didn't run
> 10246: All 10 versions failed.
> 10184: All 2.6.x and 2.7.x failed. Others didn't run
> 10223: All 10 versions failed 
> 10230: All 2.6.x failed. Others didn't run
>   
> Common pattern being, all maintenance versions fail together.
> (idk, why sometimes 2.7.* are not reported if 2.6.* fail, but that's 
> irrelevant to this discussion).
> What do you say - only check latest maintenance releases in precommit (and 
> let nightlies do holistic testing against all versions)?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19289) CommonFSUtils$StreamLacksCapabilityException: hflush when running test against hadoop3 beta1

2017-12-12 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-19289:
--
Attachment: HBASE-19289.v5.patch

> CommonFSUtils$StreamLacksCapabilityException: hflush when running test 
> against hadoop3 beta1
> 
>
> Key: HBASE-19289
> URL: https://issues.apache.org/jira/browse/HBASE-19289
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Mike Drob
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: 19289.v1.txt, 19289.v2.txt, HBASE-19289.patch, 
> HBASE-19289.v2.patch, HBASE-19289.v3.patch, HBASE-19289.v4.patch, 
> HBASE-19289.v5.patch
>
>
> As of commit d8fb10c8329b19223c91d3cda6ef149382ad4ea0 , I encountered the 
> following exception when running unit test against hadoop3 beta1:
> {code}
> testRefreshStoreFiles(org.apache.hadoop.hbase.regionserver.TestHStore)  Time 
> elapsed: 0.061 sec  <<< ERROR!
> java.io.IOException: cannot get log writer
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.initHRegion(TestHStore.java:215)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:220)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:195)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:190)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:185)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:179)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:173)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.testRefreshStoreFiles(TestHStore.java:962)
> Caused by: 
> org.apache.hadoop.hbase.util.CommonFSUtils$StreamLacksCapabilityException: 
> hflush
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.initHRegion(TestHStore.java:215)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:220)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:195)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:190)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:185)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:179)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:173)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.testRefreshStoreFiles(TestHStore.java:962)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19289) CommonFSUtils$StreamLacksCapabilityException: hflush when running test against hadoop3 beta1

2017-12-12 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288420#comment-16288420
 ] 

Mike Drob commented on HBASE-19289:
---

bq. How about instead: "Controls whether or not HBase will require stream 
capabilities (hflush) in tests using the LocalFileSystem class".
I'll wordsmith this a bit. 

bq. Nit: TestHStore.java has a whitespace-only change.
Fixed.

Do you mind if we handle the bit about the standalone cluster in a follow-on 
issue? Currently we're blocking tests, and while that should definitely be a 
release blocker, I'm not sure it needs to be addressed with equal urgency.
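For context, a hedged sketch of how a test can relax the capability check when 
it runs on LocalFileSystem (which cannot provide hflush/hsync); the property 
name used below is assumed from the final shape of this work and may not match 
the patch under review exactly.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Sketch: disable the stream-capability enforcement for LocalFileSystem-based
// tests. The property name is an assumption, not confirmed by this patch.
public class LocalFsWalTestConfig {
  public static Configuration create() {
    Configuration conf = HBaseConfiguration.create();
    conf.setBoolean("hbase.unsafe.stream.capability.enforce", false);
    return conf;
  }
}
{code}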

> CommonFSUtils$StreamLacksCapabilityException: hflush when running test 
> against hadoop3 beta1
> 
>
> Key: HBASE-19289
> URL: https://issues.apache.org/jira/browse/HBASE-19289
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Mike Drob
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: 19289.v1.txt, 19289.v2.txt, HBASE-19289.patch, 
> HBASE-19289.v2.patch, HBASE-19289.v3.patch, HBASE-19289.v4.patch
>
>
> As of commit d8fb10c8329b19223c91d3cda6ef149382ad4ea0 , I encountered the 
> following exception when running unit test against hadoop3 beta1:
> {code}
> testRefreshStoreFiles(org.apache.hadoop.hbase.regionserver.TestHStore)  Time 
> elapsed: 0.061 sec  <<< ERROR!
> java.io.IOException: cannot get log writer
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.initHRegion(TestHStore.java:215)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:220)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:195)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:190)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:185)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:179)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:173)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.testRefreshStoreFiles(TestHStore.java:962)
> Caused by: 
> org.apache.hadoop.hbase.util.CommonFSUtils$StreamLacksCapabilityException: 
> hflush
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.initHRegion(TestHStore.java:215)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:220)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:195)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:190)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:185)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:179)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:173)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.testRefreshStoreFiles(TestHStore.java:962)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19456) RegionMover's region server hostname option is no longer case insensitive

2017-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288417#comment-16288417
 ] 

Hadoop QA commented on HBASE-19456:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  5m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 7s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
55s{color} | {color:red} hbase-server: The patch generated 3 new + 38 unchanged 
- 0 fixed = 41 total (was 38) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 7s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
46m 59s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}105m 
31s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}175m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19456 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901755/HBASE-19456.v2-master.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux eb30b1381991 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 11467ef111 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10390/artifact/patchprocess/diff-checkstyle-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10390/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10390/console |
| Powered by | Apache Yetus 0.6.0   http://yetus.apache.org |


This message was automatically generated.

[jira] [Created] (HBASE-19499) RegionMover#stripMaster is not longer necessary in RegionMover

2017-12-12 Thread Esteban Gutierrez (JIRA)
Esteban Gutierrez created HBASE-19499:
-

 Summary: RegionMover#stripMaster is not longer necessary in 
RegionMover
 Key: HBASE-19499
 URL: https://issues.apache.org/jira/browse/HBASE-19499
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Esteban Gutierrez


This is probably the first of a few issues found during some tests with 
RegionMover. After HBASE-13014 we ship the new RegionMover tool, but it 
currently assumes that the master will be hosting regions, so it attempts to 
remove the master from the list and that causes an issue similar to this:

{code}
17/12/12 11:01:06 WARN util.RegionMover: Could not remove master from list of RS
java.lang.Exception: Server host1.example.com:22001 is not in list of online 
servers(Offline/Incorrect)
at 
org.apache.hadoop.hbase.util.RegionMover.stripServer(RegionMover.java:818)
at 
org.apache.hadoop.hbase.util.RegionMover.stripMaster(RegionMover.java:757)
at 
org.apache.hadoop.hbase.util.RegionMover.access$1800(RegionMover.java:78)
at 
org.apache.hadoop.hbase.util.RegionMover$Unload.call(RegionMover.java:339)
at 
org.apache.hadoop.hbase.util.RegionMover$Unload.call(RegionMover.java:314)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
{code}

Basicaly



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19489) Check against only the latest maintenance release in pre-commit hadoopcheck.

2017-12-12 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-19489:
-
Attachment: HBASE-19489.master.005.patch

> Check against only the latest maintenance release in pre-commit hadoopcheck.
> 
>
> Key: HBASE-19489
> URL: https://issues.apache.org/jira/browse/HBASE-19489
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: HBASE-19489.master.001.patch, 
> HBASE-19489.master.002.patch, HBASE-19489.master.003.patch, 
> HBASE-19489.master.004.patch, HBASE-19489.master.005.patch
>
>
> (copied from dev thread)
> {color:green}
> | +1  | hadoopcheck | 52m 1s |Patch does not cause any errors with 
> Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. 
> |
> {color}
> Almost 1 hr to check against 10 versions. And it's only going to increase as 
> more 2.6.x, 2.7.x and 3.0.x releases come out.
> Suggestion here is simple, let's check against only the latest maintenance 
> release for each minor version i.e. 2.6.5, 2.7.4 and 3.0.0-alpha4.
> Advantage: Save ~40 min on pre-commit time.
> Justification:
> - We only do compile checks. Maintenance releases are not supposed to be 
> doing API breaking changes. So checking against maintenance release for each 
> minor version should be enough.
> - We rarely see any hadoop check -1, and most recent ones have been due to 
> 3.0. These will still be caught.
> - Nightly can still check against all hadoop versions (since nightlies are 
> supposed to do holistic testing)
> - Analyzing 201 precommits from 10100 (11/29) - 10300 (12/8) (10 days):
>   138 had +1 hadoopcheck
>15 had -1 hadoopcheck
>   (others probably failed even before that - merge issue, etc)
> Spot checking some 
> failures:(10241,10246,10225,10269,10151,10156,10184,10250,10298,10227,10294,10223,10251,10119,10230)
> 10241: All 2.6.x failed. Others didn't run
> 10246: All 10 versions failed.
> 10184: All 2.6.x and 2.7.x failed. Others didn't run
> 10223: All 10 versions failed 
> 10230: All 2.6.x failed. Others didn't run
>   
> Common pattern being, all maintenance versions fail together.
> (idk, why sometimes 2.7.* are not reported if 2.6.* fail, but that's 
> irrelevant to this discussion).
> What do you say - only check latest maintenance releases in precommit (and 
> let nightlies do holistic testing against all versions)?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19287) master hangs forever if RecoverMeta send assign meta region request to target server fail

2017-12-12 Thread Yi Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liang updated HBASE-19287:
-
Attachment: HBASE-19287-master-v3.patch

> master hangs forever if RecoverMeta send assign meta region request to target 
> server fail
> -
>
> Key: HBASE-19287
> URL: https://issues.apache.org/jira/browse/HBASE-19287
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: Yi Liang
>Assignee: Yi Liang
> Attachments: HBASE-19287-master-v3.patch, 
> HBASE-19287-master-v3.patch, hbase-19287-master-v2.patch, master.patch
>
>
> 2017-11-10 19:26:56,019 INFO  [ProcExecWrkr-1] 
> procedure.RecoverMetaProcedure: pid=138, 
> state=RUNNABLE:RECOVER_META_ASSIGN_REGIONS; RecoverMetaProcedure 
> failedMetaServer=null, splitWal=true; Retaining meta assignment to 
> server=hadoop-slave1.hadoop,16020,1510341981454
> 2017-11-10 19:26:56,029 INFO  [ProcExecWrkr-1] procedure2.ProcedureExecutor: 
> Initialized subprocedures=[{pid=139, ppid=138, 
> state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:meta, 
> region=1588230740, target=hadoop-slave1.hadoop,16020,1510341981454}]
> 2017-11-10 19:26:56,067 INFO  [ProcExecWrkr-2] 
> procedure.MasterProcedureScheduler: pid=139, ppid=138, 
> state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:meta, 
> region=1588230740, target=hadoop-slave1.hadoop,16020,1510341981454 hbase:meta 
> hbase:meta,,1.1588230740
> 2017-11-10 19:26:56,071 INFO  [ProcExecWrkr-2] assignment.AssignProcedure: 
> Start pid=139, ppid=138, state=RUNNABLE:REGION_TRANSITION_QUEUE; 
> AssignProcedure table=hbase:meta, region=1588230740, 
> target=hadoop-slave1.hadoop,16020,1510341981454; rit=OFFLINE, 
> location=hadoop-slave1.hadoop,16020,1510341981454; forceNewPlan=false, 
> retain=false
> 2017-11-10 19:26:56,224 INFO  [ProcExecWrkr-4] zookeeper.MetaTableLocator: 
> Setting hbase:meta (replicaId=0) location in ZooKeeper as 
> hadoop-slave2.hadoop,16020,1510341988652
> 2017-11-10 19:26:56,230 INFO  [ProcExecWrkr-4] 
> assignment.RegionTransitionProcedure: Dispatch pid=139, ppid=138, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=hbase:meta, 
> region=1588230740, target=hadoop-slave1.hadoop,16020,1510341981454; 
> rit=OPENING, location=hadoop-slave2.hadoop,16020,1510341988652
> 2017-11-10 19:26:56,382 INFO  [ProcedureDispatcherTimeoutThread] 
> procedure.RSProcedureDispatcher: Using procedure batch rpc execution for 
> serverName=hadoop-slave2.hadoop,16020,1510341988652 version=2097152
> 2017-11-10 19:26:57,542 INFO  [main-EventThread] 
> zookeeper.RegionServerTracker: RegionServer ephemeral node deleted, 
> processing expiration [hadoop-slave2.hadoop,16020,1510341988652]
> 2017-11-10 19:26:57,543 INFO  [main-EventThread] master.ServerManager: Master 
> doesn't enable ServerShutdownHandler during initialization, delay expiring 
> server hadoop-slave2.hadoop,16020,1510341988652
> 2017-11-10 19:26:58,875 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=16000] 
> master.ServerManager: Registering 
> server=hadoop-slave1.hadoop,16020,1510342016106
> 2017-11-10 19:27:05,832 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=16000] 
> master.ServerManager: Registering 
> server=hadoop-slave2.hadoop,16020,1510342023184
> 2017-11-10 19:27:05,832 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=16000] 
> master.ServerManager: Triggering server recovery; existingServer 
> hadoop-slave2.hadoop,16020,1510341988652 looks stale, new 
> server:hadoop-slave2.hadoop,16020,1510342023184
> 2017-11-10 19:27:05,832 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=16000] 
> master.ServerManager: Master doesn't enable ServerShutdownHandler during 
> initialization, delay expiring server hadoop-slave2.hadoop,16020,1510341988652
> 2017-11-10 19:27:49,815 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=16000] 
> client.RpcRetryingCallerImpl: tarted=38594 ms ago, cancelled=false, 
> msg=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not 
> online on hadoop-slave2.hadoop,16020,1510342023184
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:3290)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1370)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2401)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41544)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:406)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
> at 
> 

[jira] [Commented] (HBASE-19489) Check against only the latest maintenance release in pre-commit hadoopcheck.

2017-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288388#comment-16288388
 ] 

Hadoop QA commented on HBASE-19489:
---

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HBASE-Build/10394/console in case of 
problems.


> Check against only the latest maintenance release in pre-commit hadoopcheck.
> 
>
> Key: HBASE-19489
> URL: https://issues.apache.org/jira/browse/HBASE-19489
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: HBASE-19489.master.001.patch, 
> HBASE-19489.master.002.patch, HBASE-19489.master.003.patch, 
> HBASE-19489.master.004.patch
>
>
> (copied from dev thread)
> {color:green}
> | +1  | hadoopcheck | 52m 1s |Patch does not cause any errors with 
> Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. 
> |
> {color}
> Almost 1 hr to check against 10 versions. And it's only going to increase as 
> more 2.6.x, 2.7.x and 3.0.x releases come out.
> Suggestion here is simple, let's check against only the latest maintenance 
> release for each minor version i.e. 2.6.5, 2.7.4 and 3.0.0-alpha4.
> Advantage: Save ~40 min on pre-commit time.
> Justification:
> - We only do compile checks. Maintenance releases are not supposed to be 
> doing API breaking changes. So checking against maintenance release for each 
> minor version should be enough.
> - We rarely see any hadoop check -1, and most recent ones have been due to 
> 3.0. These will still be caught.
> - Nightly can still check against all hadoop versions (since nightlies are 
> supposed to do holistic testing)
> - Analyzing 201 precommits from 10100 (11/29) - 10300 (12/8) (10 days):
>   138 had +1 hadoopcheck
>15 had -1 hadoopcheck
>   (others probably failed even before that - merge issue, etc)
> Spot checking some 
> failures:(10241,10246,10225,10269,10151,10156,10184,10250,10298,10227,10294,10223,10251,10119,10230)
> 10241: All 2.6.x failed. Others didn't run
> 10246: All 10 versions failed.
> 10184: All 2.6.x and 2.7.x failed. Others didn't run
> 10223: All 10 versions failed 
> 10230: All 2.6.x failed. Others didn't run
>   
> Common pattern being, all maintenance versions fail together.
> (idk, why sometimes 2.7.* are not reported if 2.6.* fail, but that's 
> irrelevant to this discussion).
> What do you say - only check latest maintenance releases in precommit (and 
> let nightlies do holistic testing against all versions)?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19489) Check against only the latest maintenance release in pre-commit hadoopcheck.

2017-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288387#comment-16288387
 ] 

Hadoop QA commented on HBASE-19489:
---

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HBASE-Build/10393/console in case of 
problems.


> Check against only the latest maintenance release in pre-commit hadoopcheck.
> 
>
> Key: HBASE-19489
> URL: https://issues.apache.org/jira/browse/HBASE-19489
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: HBASE-19489.master.001.patch, 
> HBASE-19489.master.002.patch, HBASE-19489.master.003.patch, 
> HBASE-19489.master.004.patch
>
>
> (copied from dev thread)
> {color:green}
> | +1  | hadoopcheck | 52m 1s |Patch does not cause any errors with 
> Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. 
> |
> {color}
> Almost 1 hr to check against 10 versions. And it's only going to increase as 
> more 2.6.x, 2.7.x and 3.0.x releases come out.
> Suggestion here is simple, let's check against only the latest maintenance 
> release for each minor version i.e. 2.6.5, 2.7.4 and 3.0.0-alpha4.
> Advantage: Save ~40 min on pre-commit time.
> Justification:
> - We only do compile checks. Maintenance releases are not supposed to be 
> doing API breaking changes. So checking against maintenance release for each 
> minor version should be enough.
> - We rarely see any hadoop check -1, and most recent ones have been due to 
> 3.0. These will still be caught.
> - Nightly can still check against all hadoop versions (since nightlies are 
> supposed to do holistic testing)
> - Analyzing 201 precommits from 10100 (11/29) - 10300 (12/8) (10 days):
>   138 had +1 hadoopcheck
>15 had -1 hadoopcheck
>   (others probably failed even before that - merge issue, etc)
> Spot checking some 
> failures:(10241,10246,10225,10269,10151,10156,10184,10250,10298,10227,10294,10223,10251,10119,10230)
> 10241: All 2.6.x failed. Others didn't run
> 10246: All 10 versions failed.
> 10184: All 2.6.x and 2.7.x failed. Others didn't run
> 10223: All 10 versions failed 
> 10230: All 2.6.x failed. Others didn't run
>   
> Common pattern being, all maintenance versions fail together.
> (idk, why sometimes 2.7.* are not reported if 2.6.* fail, but that's 
> irrelevant to this discussion).
> What do you say - only check latest maintenance releases in precommit (and 
> let nightlies do holistic testing against all versions)?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

