[jira] [Updated] (HBASE-16012) Major compaction can't work because of a leftover scanner read point in the RegionServer

2016-06-13 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-16012:
---
Fix Version/s: 2.0.0
   Status: Patch Available  (was: Open)

> Major compaction can't work because of a leftover scanner read point in the RegionServer
> ---
>
> Key: HBASE-16012
> URL: https://issues.apache.org/jira/browse/HBASE-16012
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, Scanners
>Affects Versions: 0.94.27, 2.0.0
>Reporter: Guanghao Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-16012-v1.patch, HBASE-16012.patch
>
>
> When a new RegionScanner is created, it adds a scanner read point to 
> scannerReadPoints. But if an exception is thrown after the read point is 
> added, the read point stays in the region server, and deletes newer than this 
> mvcc number will never be compacted away.
> Our HBase version is based on 0.94, but if some other exception is thrown 
> while initializing the RegionScanner, the master branch has this bug, too.
> ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: Failed openScanner 
> java.io.IOException: Could not seek StoreFileScanner
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:160)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:268)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:168)
>   at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2232)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:4026)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1895)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1879)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1854)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.internalOpenScanner(HRegionServer.java:3032)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2995)
>   at sun.reflect.GeneratedMethodAccessor67.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:338)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1595)
> Caused by: org.apache.hadoop.hbase.ipc.CallerDisconnectedException: Aborting 
> call openScanner, since caller disconnected
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Call.throwExceptionIfCallerDisconnected(HBaseServer.java:475)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1443)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1902)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1766)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:345)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:254)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:499)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:520)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:235)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:148)
>   ... 14 more





[jira] [Updated] (HBASE-16012) Major compaction can't work because of a leftover scanner read point in the RegionServer

2016-06-13 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-16012:
---
Attachment: HBASE-16012-v1.patch

1. Close scanners and remove the scanner read point when initializing the 
RegionScanner throws an exception.
2. Add a finally block to close the scanner when its lease expires.
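Below is a minimal, self-contained sketch of the first point, with hypothetical 
names (scannerReadPoints as a plain map, seekStores() standing in for the 
StoreScanner seeks); it shows the register-then-unregister-on-failure pattern 
the patch applies, not the committed code:

{code}
import java.io.IOException;
import java.util.concurrent.ConcurrentSkipListMap;

class ScannerInitSketch {
  // Stand-in for HRegion's scannerReadPoints map (hypothetical simplification).
  private final ConcurrentSkipListMap<Long, Long> scannerReadPoints =
      new ConcurrentSkipListMap<>();

  long openScanner(long mvccReadPoint) throws IOException {
    // The read point is registered before the scanner is fully initialized.
    scannerReadPoints.put(mvccReadPoint, mvccReadPoint);
    try {
      seekStores(); // may throw, e.g. when the caller has disconnected
      return mvccReadPoint;
    } catch (IOException e) {
      // Without this cleanup the read point stays behind forever, pinning the
      // region's smallest read point so newer deletes are never compacted away.
      scannerReadPoints.remove(mvccReadPoint);
      throw e;
    }
  }

  private void seekStores() throws IOException { /* stand-in for store seeks */ }
}
{code}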

> Major compaction can't work because of a leftover scanner read point in the RegionServer
> ---
>
> Key: HBASE-16012
> URL: https://issues.apache.org/jira/browse/HBASE-16012
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, Scanners
>Affects Versions: 2.0.0, 0.94.27
>Reporter: Guanghao Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-16012-v1.patch, HBASE-16012.patch
>





[jira] [Created] (HBASE-16019) Cut HBase 1.2.2 release

2016-06-13 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-16019:
---

 Summary: Cut HBase 1.2.2 release
 Key: HBASE-16019
 URL: https://issues.apache.org/jira/browse/HBASE-16019
 Project: HBase
  Issue Type: Task
  Components: community
Reporter: Sean Busbey
Assignee: Sean Busbey
 Fix For: 1.2.2








[jira] [Updated] (HBASE-15775) Canary launches two AuthUtil Chores

2016-06-13 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15775:

Fix Version/s: (was: 1.2.2)
   1.2.3

> Canary launches two AuthUtil Chores
> ---
>
> Key: HBASE-15775
> URL: https://issues.apache.org/jira/browse/HBASE-15775
> Project: HBase
>  Issue Type: Bug
>  Components: canary
>Affects Versions: 0.98.13, 0.98.14, 1.2.0, 0.98.15, 1.2.1, 0.98.16, 
> 0.98.17, 0.98.18, 0.98.16.1, 0.98.19
>Reporter: Sean Busbey
>Assignee: Vishal Khandelwal
>Priority: Minor
>  Labels: beginner
> Fix For: 0.98.21, 1.2.3
>
> Attachments: HBASE-15775.0.98.00.patch
>
>
> Looks like the result of an error in the backport done in HBASE-13712. We 
> have an AuthUtil chore both in main() and in run().
> The one in main() should be removed so that the code is consistent with other 
> branches.





[jira] [Updated] (HBASE-11819) Unit test for CoprocessorHConnection

2016-06-13 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-11819:

Fix Version/s: (was: 1.2.2)
   1.2.3

> Unit test for CoprocessorHConnection 
> -
>
> Key: HBASE-11819
> URL: https://issues.apache.org/jira/browse/HBASE-11819
> Project: HBase
>  Issue Type: Test
>Reporter: Andrew Purtell
>Assignee: Talat UYARER
>Priority: Minor
>  Labels: beginner
> Fix For: 2.0.0, 0.98.14, 1.3.0, 1.1.6, 1.2.3
>
> Attachments: HBASE-11819v4-master.patch, HBASE-11819v5-0.98 
> (1).patch, HBASE-11819v5-0.98.patch, HBASE-11819v5-master (1).patch, 
> HBASE-11819v5-master.patch, HBASE-11819v5-master.patch, 
> HBASE-11819v5-v0.98.patch, HBASE-11819v5-v1.0.patch, 
> HBASE-11819v6-branch-1.patch, HBASE-11819v6-master.patch
>
>
> Add a unit test to hbase-server that exercises CoprocessorHConnection.





[jira] [Updated] (HBASE-13603) Write test asserting desired priority of RS->Master RPCs

2016-06-13 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13603:

Fix Version/s: (was: 1.2.2)
   1.2.3

> Write test asserting desired priority of RS->Master RPCs
> 
>
> Key: HBASE-13603
> URL: https://issues.apache.org/jira/browse/HBASE-13603
> Project: HBase
>  Issue Type: Test
>  Components: IPC/RPC, test
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.1.6, 1.2.3
>
>
> From HBASE-13351:
> {quote}
> Any way we can write a FT test to assert that the RS->Master APIs are treated 
> with higher priority. I see your UT for asserting the annotation.
> {quote}
> Write a test that verifies expected RPCs are run on the correct pools in as 
> real of an environment possible.





[jira] [Updated] (HBASE-15635) Mean age of Blocks in cache (seconds) on webUI should be greater than zero

2016-06-13 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15635:

Fix Version/s: (was: 1.2.2)
   1.2.3

> Mean age of Blocks in cache (seconds) on webUI should be greater than zero
> --
>
> Key: HBASE-15635
> URL: https://issues.apache.org/jira/browse/HBASE-15635
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.17
>Reporter: Heng Chen
>Assignee: Heng Chen
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.0.5, 1.1.6, 0.98.21, 1.2.3
>
> Attachments: 7BFFAF68-0807-400C-853F-706B498449E1.png, 
> HBASE-15635.patch
>
>






[jira] [Updated] (HBASE-15580) Tag coprocessor limitedprivate scope to StoreFile.Reader

2016-06-13 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15580:

Fix Version/s: (was: 1.2.2)
   1.2.3

> Tag coprocessor limitedprivate scope to StoreFile.Reader
> 
>
> Key: HBASE-15580
> URL: https://issues.apache.org/jira/browse/HBASE-15580
> Project: HBase
>  Issue Type: Improvement
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 2.0.0, 1.0.4, 1.1.6, 0.98.21, 1.2.3
>
> Attachments: HBASE-15580.patch, HBASE-15580_branch-1.0.patch
>
>
> For Phoenix local indexing we need a custom storefile reader constructor 
> (IndexHalfStoreFileReader) to distinguish it from other storefile readers, so 
> we want to mark the StoreFile.Reader scope as 
> InterfaceAudience.LimitedPrivate("Coprocessor").





[jira] [Updated] (HBASE-15691) Port HBASE-10205 (ConcurrentModificationException in BucketAllocator) to branch-1

2016-06-13 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15691:

Fix Version/s: (was: 1.2.2)
   1.2.3
   1.3.1

> Port HBASE-10205 (ConcurrentModificationException in BucketAllocator) to 
> branch-1
> -
>
> Key: HBASE-15691
> URL: https://issues.apache.org/jira/browse/HBASE-15691
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.3.0
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 1.4.0, 1.3.1, 1.2.3
>
> Attachments: HBASE-15691-branch-1.patch
>
>
> HBASE-10205 was committed to trunk and 0.98 branches only. To preserve 
> continuity we should commit it to branch-1. The change requires more than 
> trivial fixups, so I will attach a backport of the change from trunk to 
> current branch-1 here. 





[jira] [Updated] (HBASE-13160) SplitLogWorker does not pick up the task immediately

2016-06-13 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13160:

Fix Version/s: (was: 1.1.6)
   (was: 1.2.2)

> SplitLogWorker does not pick up the task immediately
> 
>
> Key: HBASE-13160
> URL: https://issues.apache.org/jira/browse/HBASE-13160
> Project: HBase
>  Issue Type: Improvement
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.0
>
> Attachments: hbase-13160_v1.patch
>
>
> We were reading some code with Jeffrey, and we realized that the 
> SplitLogWorker's internal task loop is weird. It does {{ls}} every second and 
> sleeps, and although it has another mechanism to learn about new tasks, it 
> does not make effective use of the zk notification. 
> I have a simple patch which might improve this area. 
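A hedged sketch of what using the zk notification could look like (quorum, 
path, and session timeout are hypothetical; this is not the attached patch): 
getChildren() registers a one-shot watch, so the worker hears about a new task 
as soon as it appears instead of on the next one-second {{ls}}.

{code}
import org.apache.zookeeper.*;

public class TaskWatchSketch implements Watcher {
  private ZooKeeper zk;

  public void start() throws Exception {
    zk = new ZooKeeper("localhost:2181", 30_000, this);
    zk.getChildren("/hbase/splitWAL", this); // sets the initial watch
  }

  @Override
  public void process(WatchedEvent event) {
    if (event.getType() == Event.EventType.NodeChildrenChanged) {
      try {
        // Re-arm the watch and pick up whatever tasks appeared.
        zk.getChildren("/hbase/splitWAL", this);
      } catch (KeeperException | InterruptedException ignored) {
        // Sketch only: real code would retry or log here.
      }
    }
  }
}
{code}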





[jira] [Updated] (HBASE-13160) SplitLogWorker does not pick up the task immediately

2016-06-13 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13160:

Issue Type: Improvement  (was: Bug)

> SplitLogWorker does not pick up the task immediately
> 
>
> Key: HBASE-13160
> URL: https://issues.apache.org/jira/browse/HBASE-13160
> Project: HBase
>  Issue Type: Improvement
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.0
>
> Attachments: hbase-13160_v1.patch
>
>
> We were reading some code with Jeffrey, and we realized that the 
> SplitLogWorker's internal task loop is weird. It does {{ls}} every second and 
> sleeps, and although it has another mechanism to learn about new tasks, it 
> does not make effective use of the zk notification. 
> I have a simple patch which might improve this area. 





[jira] [Updated] (HBASE-13587) TestSnapshotCloneIndependence.testOnlineSnapshotDeleteIndependent is flakey

2016-06-13 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13587:

Fix Version/s: (was: 1.2.2)
   1.2.3

> TestSnapshotCloneIndependence.testOnlineSnapshotDeleteIndependent is flakey
> ---
>
> Key: HBASE-13587
> URL: https://issues.apache.org/jira/browse/HBASE-13587
> Project: HBase
>  Issue Type: Test
>Reporter: Nick Dimiduk
> Fix For: 2.0.0, 1.3.0, 1.1.6, 1.2.3
>
>
> Looking at our [build 
> history|https://builds.apache.org/job/HBase-1.1/buildTimeTrend], it seems 
> this test is flakey. See builds 428, 431, 432, 433.





[jira] [Updated] (HBASE-14391) Empty regionserver WAL will never be deleted although the corresponding regionserver has been stale

2016-06-13 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14391:

Fix Version/s: (was: 1.2.2)
   1.2.3

> Empty regionserver WAL will never be deleted although the corresponding 
> regionserver has been stale
> --
>
> Key: HBASE-14391
> URL: https://issues.apache.org/jira/browse/HBASE-14391
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 1.0.2
>Reporter: Qianxi Zhang
>Assignee: Qianxi Zhang
> Fix For: 2.0.0, 1.3.0, 1.1.6, 1.2.3
>
> Attachments: HBASE-14391-master-v3.patch, 
> HBASE_14391_master_v4.patch, HBASE_14391_trunk_v1.patch, 
> HBASE_14391_trunk_v2.patch, WALs-leftover-dir.txt
>
>
> When I restarted an hbase cluster that held only a little data, I found two 
> directories for one host with different timestamps, which indicates that the 
> old regionserver WAL directory is not deleted.
> FSHLog#989
> {code}
>  @Override
>   public void close() throws IOException {
> shutdown();
> final FileStatus[] files = getFiles();
> if (null != files && 0 != files.length) {
>   for (FileStatus file : files) {
> Path p = getWALArchivePath(this.fullPathArchiveDir, file.getPath());
> // Tell our listeners that a log is going to be archived.
> if (!this.listeners.isEmpty()) {
>   for (WALActionsListener i : this.listeners) {
> i.preLogArchive(file.getPath(), p);
>   }
> }
> if (!FSUtils.renameAndSetModifyTime(fs, file.getPath(), p)) {
>   throw new IOException("Unable to rename " + file.getPath() + " to " 
> + p);
> }
> // Tell our listeners that a log was archived.
> if (!this.listeners.isEmpty()) {
>   for (WALActionsListener i : this.listeners) {
> i.postLogArchive(file.getPath(), p);
>   }
> }
>   }
>   LOG.debug("Moved " + files.length + " WAL file(s) to " +
> FSUtils.getPath(this.fullPathArchiveDir));
> }
> LOG.info("Closed WAL: " + toString());
>   }
> {code}
> When a regionserver is stopped, its hlog is archived, so the wal/regionserver 
> directory in hdfs is left empty.
> MasterFileSystem#252
> {code}
> if (curLogFiles == null || curLogFiles.length == 0) {
> // Empty log folder. No recovery needed
> continue;
>   }
> {code}
> The regionserver directory will not be split, which makes sense. But it will 
> not be deleted either.
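For reference, a hedged sketch of the shape of such a fix (not the attached 
patch; fs, logDir, and LOG are assumed from the surrounding MasterFileSystem 
code): delete the leftover empty directory instead of only skipping it.

{code}
if (curLogFiles == null || curLogFiles.length == 0) {
  // Empty log folder. No recovery needed -- but also remove the stale
  // regionserver's WAL directory instead of leaving it behind forever.
  if (!fs.delete(logDir, true)) {
    LOG.warn("Unable to delete empty wal dir " + logDir);
  }
  continue;
}
{code}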





[jira] [Commented] (HBASE-15302) Reenable the other tests disabled by HBASE-14678

2016-06-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328916#comment-15328916
 ] 

Hadoop QA commented on HBASE-15302:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 22s {color} 
| {color:red} HBASE-15302 does not apply to branch-1. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.2.1/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12794162/HBASE-15302-branch-1-v1.patch
 |
| JIRA Issue | HBASE-15302 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2199/console |
| Powered by | Apache Yetus 0.2.1   http://yetus.apache.org |


This message was automatically generated.



> Reenable the other tests disabled by HBASE-14678
> 
>
> Key: HBASE-15302
> URL: https://issues.apache.org/jira/browse/HBASE-15302
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15302-branch-1-v1.patch, HBASE-15302-v1.txt, 
> HBASE-15302-v1.txt
>
>






[jira] [Updated] (HBASE-15387) Make HalfStoreFileReader configurable in LoadIncrementalHFiles

2016-06-13 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15387:

Fix Version/s: (was: 1.2.2)
   1.2.3

> Make HalfStoreFileReader configurable in LoadIncrementalHFiles
> --
>
> Key: HBASE-15387
> URL: https://issues.apache.org/jira/browse/HBASE-15387
> Project: HBase
>  Issue Type: Improvement
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 2.0.0, 1.3.0, 1.1.6, 1.2.3
>
>
> Currently we initialize a HalfStoreFileReader to split the HFile, but there 
> might be different implementations for splitting, so we can make it 
> configurable. It's needed for local indexing in Phoenix (PHOENIX-2736). 





[jira] [Commented] (HBASE-15387) Make HalfStoreFileReader configurable in LoadIncrementalHFiles

2016-06-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328908#comment-15328908
 ] 

Sean Busbey commented on HBASE-15387:
-

should this be planned for 1.1.z and 1.2.z, since it's an improvement?

> Make HalfStoreFileReader configurable in LoadIncrementalHFiles
> --
>
> Key: HBASE-15387
> URL: https://issues.apache.org/jira/browse/HBASE-15387
> Project: HBase
>  Issue Type: Improvement
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 2.0.0, 1.3.0, 1.1.6, 1.2.3
>
>
> Currently we initialize a HalfStoreFileReader to split the HFile, but there 
> might be different implementations for splitting, so we can make it 
> configurable. It's needed for local indexing in Phoenix (PHOENIX-2736). 





[jira] [Updated] (HBASE-14223) Meta WALs are not cleared if meta region was closed and RS aborts

2016-06-13 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14223:

Fix Version/s: (was: 1.2.2)
   1.2.3

> Meta WALs are not cleared if meta region was closed and RS aborts
> -
>
> Key: HBASE-14223
> URL: https://issues.apache.org/jira/browse/HBASE-14223
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.0, 1.0.4, 1.1.6, 1.2.3
>
> Attachments: HBASE-14223logs, hbase-14223_v0.patch, 
> hbase-14223_v1-branch-1.patch, hbase-14223_v2-branch-1.patch, 
> hbase-14223_v3-branch-1.patch, hbase-14223_v3-branch-1.patch, 
> hbase-14223_v3-master.patch
>
>
> When an RS opens meta and later closes it, the WAL (FSHLog) is not closed. 
> The last WAL file just sits there in the RS WAL directory. If the RS stops 
> gracefully, the WAL file for meta is deleted. Otherwise, if the RS aborts, the 
> WAL for meta is not cleaned up. It is also not split (which is correct) since 
> the master determines that the RS no longer hosts meta at the time of the RS 
> abort. 
> From a cluster after running ITBLL with CM, I see a lot of {{-splitting}} 
> directories left uncleaned: 
> {code}
> [root@os-enis-dal-test-jun-4-7 cluster-os]# sudo -u hdfs hadoop fs -ls 
> /apps/hbase/data/WALs
> Found 31 items
> drwxr-xr-x   - hbase hadoop  0 2015-06-05 01:14 
> /apps/hbase/data/WALs/hregion-58203265
> drwxr-xr-x   - hbase hadoop  0 2015-06-05 07:54 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-1.openstacklocal,16020,1433489308745-splitting
> drwxr-xr-x   - hbase hadoop  0 2015-06-05 09:28 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-1.openstacklocal,16020,1433494382959-splitting
> drwxr-xr-x   - hbase hadoop  0 2015-06-05 10:01 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-1.openstacklocal,16020,1433498252205-splitting
> ...
> {code}
> The directories contain WALs from meta: 
> {code}
> [root@os-enis-dal-test-jun-4-7 cluster-os]# sudo -u hdfs hadoop fs -ls 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting
> Found 2 items
> -rw-r--r--   3 hbase hadoop 201608 2015-06-05 03:15 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285..meta.1433470511501.meta
> -rw-r--r--   3 hbase hadoop  44420 2015-06-05 04:36 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285..meta.1433474111645.meta
> {code}
> The RS hosted the meta region for some time: 
> {code}
> 2015-06-05 03:14:28,692 INFO  [PostOpenDeployTasks:1588230740] 
> zookeeper.MetaTableLocator: Setting hbase:meta region location in ZooKeeper 
> as os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285
> ...
> 2015-06-05 03:15:17,302 INFO  
> [RS_CLOSE_META-os-enis-dal-test-jun-4-5:16020-0] regionserver.HRegion: Closed 
> hbase:meta,,1.1588230740
> {code}
> In between, a WAL is created: 
> {code}
> 2015-06-05 03:15:11,707 INFO  
> [RS_OPEN_META-os-enis-dal-test-jun-4-5:16020-0-MetaLogRoller] wal.FSHLog: 
> Rolled WAL 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285..meta.1433470511501.meta
>  with entries=385, filesize=196.88 KB; new WAL 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285..meta.1433474111645.meta
> {code}
> When CM killed the region server later, the master did not see these WAL files: 
> {code}
> ./hbase-hbase-master-os-enis-dal-test-jun-4-3.log:2015-06-05 03:36:46,075 
> INFO  [MASTER_SERVER_OPERATIONS-os-enis-dal-test-jun-4-3:16000-0] 
> master.SplitLogManager: started splitting 2 logs in 
> [hdfs://os-enis-dal-test-jun-4-1.openstacklocal:8020/apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting]
>  for [os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285]
> ./hbase-hbase-master-os-enis-dal-test-jun-4-3.log:2015-06-05 03:36:47,300 
> INFO  [main-EventThread] wal.WALSplitter: Archived processed log 
> hdfs://os-enis-dal-test-jun-4-1.openstacklocal:8020/apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285.default.1433475074436
>  to 
> hdfs://os-enis-dal-test-jun-4-1.openstacklocal:8020/apps/hbase/data/oldWALs/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285.default.1433475074436
> ./hbase-hbase-master-os-enis-dal-test-jun-4-3.log:2015-06-05 03:36:50,497 
> INFO  [main-EventThread] wal.WALSplitte

[jira] [Updated] (HBASE-13354) Add documentation and tests for external block cache.

2016-06-13 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13354:

Fix Version/s: (was: 1.1.6)

> Add documentation and tests for external block cache.
> -
>
> Key: HBASE-13354
> URL: https://issues.apache.org/jira/browse/HBASE-13354
> Project: HBase
>  Issue Type: Bug
>  Components: BlockCache, documentation, test
>Affects Versions: 2.0.0, 1.1.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0
>
>
> The new memcached integration needs some documentation and some testing 
> showing how it works and what can go wrong.





[jira] [Updated] (HBASE-14610) IntegrationTestRpcClient from HBASE-14535 is failing with Async RPC client

2016-06-13 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14610:

Fix Version/s: (was: 1.2.2)
   1.2.3

> IntegrationTestRpcClient from HBASE-14535 is failing with Async RPC client
> --
>
> Key: HBASE-14610
> URL: https://issues.apache.org/jira/browse/HBASE-14610
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Reporter: Enis Soztutar
> Fix For: 2.0.0, 1.3.0, 1.0.4, 1.1.6, 1.2.3
>
> Attachments: output
>
>
> HBASE-14535 introduces an IT that simulates a running cluster with RPC 
> servers and RPC clients making requests against the servers. 
> It passes with the sync client, but fails with the async client. Probably we 
> need to take a look. 





[jira] [Updated] (HBASE-13354) Add documentation and tests for external block cache.

2016-06-13 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13354:

Fix Version/s: (was: 1.2.2)

> Add documentation and tests for external block cache.
> -
>
> Key: HBASE-13354
> URL: https://issues.apache.org/jira/browse/HBASE-13354
> Project: HBase
>  Issue Type: Bug
>  Components: BlockCache, documentation, test
>Affects Versions: 2.0.0, 1.1.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0
>
>
> The new memcached integration needs some documentation and some testing 
> showing how it works and what can go wrong.





[jira] [Updated] (HBASE-12943) Set sun.net.inetaddr.ttl in HBase

2016-06-13 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-12943:

Fix Version/s: (was: 1.2.2)

> Set sun.net.inetaddr.ttl in HBase
> -
>
> Key: HBASE-12943
> URL: https://issues.apache.org/jira/browse/HBASE-12943
> Project: HBase
>  Issue Type: Bug
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
> Fix For: 2.0.0, 1.3.0
>
> Attachments: 12943-1-master.txt
>
>
> The default value of the config sun.net.inetaddr.ttl is -1, so Java 
> processes will cache the mapping of hostname to ip address forever. See: 
> http://docs.oracle.com/javase/7/docs/technotes/guides/net/properties.html
> But things go wrong when a regionserver with the same hostname and a 
> different ip address rejoins the hbase cluster. The HMaster will get the 
> wrong ip address of the regionserver from this cache, and every region 
> assignment to this regionserver will be blocked for a time because the 
> HMaster can't communicate with the regionserver.
> A tradeoff is to set sun.net.inetaddr.ttl to 10m or 1h so that the wrong 
> cache entry expires.
> Suggestions are welcomed. Thanks~
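For reference, the TTL can be capped either via a JVM flag or via the 
equivalent standard security property; a small hedged example (600 seconds is 
an arbitrary value in the spirit of the 10m suggestion above):

{code}
// Option 1: JVM flag, e.g. appended to HBASE_OPTS in hbase-env.sh:
//   -Dsun.net.inetaddr.ttl=600

// Option 2: the standard security property behind the same cache, set
// programmatically before any name resolution happens.
import java.security.Security;

public class InetTtlExample {
  public static void main(String[] args) {
    Security.setProperty("networkaddress.cache.ttl", "600");
    System.out.println(Security.getProperty("networkaddress.cache.ttl")); // 600
  }
}
{code}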





[jira] [Updated] (HBASE-15302) Reenable the other tests disabled by HBASE-14678

2016-06-13 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15302:

Fix Version/s: (was: 1.2.2)

> Reenable the other tests disabled by HBASE-14678
> 
>
> Key: HBASE-15302
> URL: https://issues.apache.org/jira/browse/HBASE-15302
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15302-branch-1-v1.patch, HBASE-15302-v1.txt, 
> HBASE-15302-v1.txt
>
>






[jira] [Updated] (HBASE-14847) Add FIFO compaction section to HBase book

2016-06-13 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14847:

Fix Version/s: (was: 1.2.2)

> Add FIFO compaction section to HBase book
> -
>
> Key: HBASE-14847
> URL: https://issues.apache.org/jira/browse/HBASE-14847
> Project: HBase
>  Issue Type: Task
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
> Fix For: 2.0.0, 1.3.0
>
>
> HBASE-14468 introduced a new compaction policy. The book needs to be updated.





[jira] [Updated] (HBASE-14753) TestShell is not invoked anymore

2016-06-13 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14753:

Fix Version/s: (was: 1.2.2)
   1.2.3

> TestShell is not invoked anymore
> 
>
> Key: HBASE-14753
> URL: https://issues.apache.org/jira/browse/HBASE-14753
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
> Fix For: 2.0.0, 1.3.0, 1.2.3
>
>
> Not sure whether it is due to HBASE-14679 or not. 
> {code}
> mvn clean  test -Dtest=TestShell
> {code}
> does not run any test at all. 





[jira] [Commented] (HBASE-16012) Major compaction can't work because of a leftover scanner read point in the RegionServer

2016-06-13 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328901#comment-15328901
 ] 

Anoop Sam John commented on HBASE-16012:


There was one more scenario discussed in the mail chain: the client side never 
explicitly closes the scanner, so a lease timeout happens. Then the close() op 
on the RegionScanner may throw some exception, so the removal of the read 
point from the map does not happen. It would be good to do a try/finally there 
as well.
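A minimal sketch of that try/finally (RegionScanner, LOG, and 
scannerReadPoints stand in for the real fields; this is not the actual 
lease-expiration handler):

{code}
// On lease expiry, drop the read point even if close() throws.
void leaseExpired(RegionScanner s) {
  try {
    s.close(); // may throw once the client has gone away
  } catch (IOException e) {
    LOG.warn("Exception while closing scanner on expired lease", e);
  } finally {
    scannerReadPoints.remove(s); // must run regardless of close() failing
  }
}
{code}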

> Major compaction can't work because of a leftover scanner read point in the RegionServer
> ---
>
> Key: HBASE-16012
> URL: https://issues.apache.org/jira/browse/HBASE-16012
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, Scanners
>Affects Versions: 2.0.0, 0.94.27
>Reporter: Guanghao Zhang
> Attachments: HBASE-16012.patch
>





[jira] [Updated] (HBASE-15984) Given failure to parse a given WAL that was closed cleanly, replay the WAL.

2016-06-13 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15984:

Fix Version/s: (was: 1.2.2)
   1.2.3

> Given failure to parse a given WAL that was closed cleanly, replay the WAL.
> ---
>
> Key: HBASE-15984
> URL: https://issues.apache.org/jira/browse/HBASE-15984
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.0.4, 1.4.0, 1.1.6, 0.98.21, 1.2.3
>
> Attachments: HBASE-15984.1.patch
>
>
> subtask for a general workaround for "underlying reader failed / is in a bad 
> state", just for the case where a WAL 1) was closed cleanly and 2) we can tell 
> that our current offset ought not to be the end of parseable entries.





[jira] [Updated] (HBASE-14845) hbase-server leaks jdk.tools dependency to mapreduce consumers

2016-06-13 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14845:

Fix Version/s: (was: 1.2.2)
   1.2.3

> hbase-server leaks jdk.tools dependency to mapreduce consumers
> --
>
> Key: HBASE-14845
> URL: https://issues.apache.org/jira/browse/HBASE-14845
> Project: HBase
>  Issue Type: Bug
>  Components: build, dependencies
>Affects Versions: 2.0.0, 0.98.14, 1.2.0, 1.1.2, 1.3.0, 1.0.3
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0, 1.0.4, 1.4.0, 1.1.6, 1.3.1, 0.98.21, 1.2.3
>
> Attachments: HBASE-14845.1.patch
>
>
> HBASE-13963 / HBASE-14844 take care of removing leaks of our dependency on 
> jdk.tools.
> Until we move the mapreduce support classes out of hbase-server 
> (HBASE-11843), we need to also avoid leaking the dependency from that module.





[jira] [Commented] (HBASE-15971) Regression: Random Read/WorkloadC slower in 1.x than 0.98

2016-06-13 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328899#comment-15328899
 ] 

Anoop Sam John commented on HBASE-15971:


Nice find.  Looks great.

> Regression: Random Read/WorkloadC slower in 1.x than 0.98
> -
>
> Key: HBASE-15971
> URL: https://issues.apache.org/jira/browse/HBASE-15971
> Project: HBase
>  Issue Type: Sub-task
>  Components: rpc
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: 098.hits.png, 098.png, HBASE-15971.branch-1.001.patch, 
> HBASE-15971.branch-1.002.patch, Screen Shot 2016-06-10 at 5.08.24 PM.png, 
> Screen Shot 2016-06-10 at 5.08.26 PM.png, branch-1.hits.png, branch-1.png, 
> flight_recording_10172402220203_28.branch-1.jfr, 
> flight_recording_10172402220203_29.09820.0.98.20.jfr, handlers.fp.png, 
> hits.fp.png, hits.patched1.0.vs.unpatched1.0.vs.098.png, run_ycsb.sh
>
>
> branch-1 is slower than 0.98 doing YCSB random read/workloadC. It seems to be 
> doing about 1/2 the throughput of 0.98.
> In branch-1, we have low handler occupancy compared to 0.98. Hacking in a 
> reader-thread occupancy metric, it is about the same in both. In the parent 
> issue, hacking out the scheduler, I am able to get branch-1 to go 3x faster, 
> so I will dig in here.





[jira] [Updated] (HBASE-14506) Reenable TestDistributedLogSplitting#testWorkerAbort

2016-06-13 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14506:

Fix Version/s: (was: 1.2.2)

> Reenable TestDistributedLogSplitting#testWorkerAbort
> 
>
> Key: HBASE-14506
> URL: https://issues.apache.org/jira/browse/HBASE-14506
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: stack
>Priority: Critical
> Fix For: 2.0.0, 1.3.0
>
> Attachments: 
> org.apache.hadoop.hbase.master.TestDistributedLogSplitting-output.txt
>
>
> See attached log. Flakey up on jenkins and down locally.





[jira] [Updated] (HBASE-15124) Document the new 'normalization' feature in refguide

2016-06-13 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15124:

Fix Version/s: (was: 1.2.2)

> Document the new 'normalization' feature in refguide
> ---
>
> Key: HBASE-15124
> URL: https://issues.apache.org/jira/browse/HBASE-15124
> Project: HBase
>  Issue Type: Task
>  Components: documentation
>Reporter: stack
>Assignee: Mikhail Antonov
>Priority: Critical
> Fix For: 2.0.0, 1.3.0
>
>
> A nice new feature, normalization, is coming into 1.2.0. A small bit of doc 
> on it in the refguide would help.
> Should define what normalization is.
> Should say a sentence or two on how it works and when.
> Throw in the output of shell commands.
> A paragraph or so. I can help.
> Marking critical against 1.2.0. Not a blocker.





[jira] [Comment Edited] (HBASE-15862) Backup - Delete - Restore does not restore deleted data

2016-06-13 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328885#comment-15328885
 ] 

Vladimir Rodionov edited comment on HBASE-15862 at 6/14/16 4:02 AM:


Snapshot restore works with arbitrary boundaries. 

If we select the truncate option:

The first full backup restore can take a long time, because the bulk load tool 
will need to split (align) all HFiles to the new region boundaries before 
pushing them to the cluster. The second step (the M/R job to load all WAL 
files) will finish pretty quickly, and there will be no region splits 
afterwards. 

If we select the delete option:
The first full backup restore will finish quickly, and the second step will 
finish quickly as well, but we will have a lot of region splits and major 
compactions when the restore finishes.

I vote for truncate as an interim solution.



was (Author: vrodionov):
Snapshot restore works with arbitrary boundaries. 

If we select the truncate option:

The first full backup restore can take a long time, because the bulk load tool 
will need to split (align) all HFiles to all boundaries before pushing them to 
the cluster. The second step (the M/R job to load all WAL files) will finish 
pretty quickly, and there will be no region splits afterwards. 

If we select the delete option:
The first full backup restore will finish quickly, and the second step will 
finish quickly as well, but we will have a lot of region splits and major 
compactions when the restore finishes.

I vote for truncate as an interim solution.


> Backup - Delete - Restore does not restore deleted data
> --
>
> Key: HBASE-15862
> URL: https://issues.apache.org/jira/browse/HBASE-15862
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: HBASE-15862-v1.patch, HBASE-15862-v2.patch, 
> HBASE-15862-v3.patch
>
>
> This was discovered during testing. If we delete a row after a full backup 
> and immediately perform a restore, the deleted row still remains deleted. 





[jira] [Commented] (HBASE-15862) Backup - Delete - Restore does not restore deleted data

2016-06-13 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328885#comment-15328885
 ] 

Vladimir Rodionov commented on HBASE-15862:
---

Snapshot restore works with arbitrary boundaries. 

If we select the truncate option:

The first full backup restore can take a long time, because the bulk load tool 
will need to split (align) all HFiles to all boundaries before pushing them to 
the cluster. The second step (the M/R job to load all WAL files) will finish 
pretty quickly, and there will be no region splits afterwards. 

If we select the delete option:
The first full backup restore will finish quickly, and the second step will 
finish quickly as well, but we will have a lot of region splits and major 
compactions when the restore finishes.

I vote for truncate as an interim solution.


> Backup - Delete - Restore does not restore deleted data
> --
>
> Key: HBASE-15862
> URL: https://issues.apache.org/jira/browse/HBASE-15862
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: HBASE-15862-v1.patch, HBASE-15862-v2.patch, 
> HBASE-15862-v3.patch
>
>
> This was discovered during testing. If we delete a row after a full backup 
> and immediately perform a restore, the deleted row still remains deleted. 





[jira] [Commented] (HBASE-15406) Split / merge switch left disabled after early termination of hbck

2016-06-13 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328879#comment-15328879
 ] 

Heng Chen commented on HBASE-15406:
---

+1 for your proposal. 

> Split / merge switch left disabled after early termination of hbck
> --
>
> Key: HBASE-15406
> URL: https://issues.apache.org/jira/browse/HBASE-15406
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Heng Chen
>Priority: Critical
>  Labels: reviewed
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15406.patch, HBASE-15406.v1.patch, 
> HBASE-15406_v1.patch, HBASE-15406_v2.patch, test.patch, wip.patch
>
>
> This is what I did on a cluster with a 1.4.0-SNAPSHOT built Thursday:
> Run 'hbase hbck -disableSplitAndMerge' on gateway node of the cluster
> Terminate hbck early
> Enter hbase shell where I observed:
> {code}
> hbase(main):001:0> splitormerge_enabled 'SPLIT'
> false
> 0 row(s) in 0.3280 seconds
> hbase(main):002:0> splitormerge_enabled 'MERGE'
> false
> 0 row(s) in 0.0070 seconds
> {code}
> The expectation is that the split / merge switches should be restored to 
> their default values after hbck exits.





[jira] [Updated] (HBASE-16016) AssignmentManager#waitForAssignment could have unexpected negative deadline

2016-06-13 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-16016:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.1.6
   1.2.2
   1.4.0
   1.3.0
   2.0.0
   Status: Resolved  (was: Patch Available)

> AssignmentManager#waitForAssignment could have unexpected negative deadline
> ---
>
> Key: HBASE-16016
> URL: https://issues.apache.org/jira/browse/HBASE-16016
> Project: HBase
>  Issue Type: Bug
>Reporter: Stephen Yuan Jiang
>Assignee: Stephen Yuan Jiang
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.2, 1.1.6
>
> Attachments: HBase-16016.v1-master.patch
>
>
> AssignmentManager#waitForAssignment(HRegionInfo regionInfo) passes a 
> Long.MAX_VALUE deadline and intends to wait forever. However, the deadline 
> would overflow in AssignmentManager#waitForAssignment(final 
> Collection<HRegionInfo> regionSet, final boolean waitTillAllAssigned, final 
> int reassigningRegions, final long minEndTime), which would cause no wait!
> {code}
>   /**
>* Waits until the specified region has completed assignment.
>* <p>
>* If the region is already assigned, returns immediately.  Otherwise, 
> method
>* blocks until the region is assigned.
>* @param regionInfo region to wait on assignment for
>* @return true if the region is assigned false otherwise.
>* @throws InterruptedException
>*/
>   public boolean waitForAssignment(HRegionInfo regionInfo)
>   throws InterruptedException {
> ArrayList<HRegionInfo> regionSet = new ArrayList<HRegionInfo>(1);
> regionSet.add(regionInfo);
> return waitForAssignment(regionSet, true, Long.MAX_VALUE);
>   }
>   /**
>* Waits until the specified region has completed assignment, or the 
> deadline is reached.
>*/
>   protected boolean waitForAssignment(final Collection<HRegionInfo> regionSet,
>   final boolean waitTillAllAssigned, final int reassigningRegions,
>   final long minEndTime) throws InterruptedException {
> long deadline = minEndTime + bulkPerRegionOpenTimeGuesstimate * 
> (reassigningRegions + 1);  // > OVERFLOW
> return waitForAssignment(regionSet, waitTillAllAssigned, deadline);
>   }
> {code}
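A self-contained sketch of the overflow and one way to guard it (names mirror 
the snippet above; this is not the committed fix):

{code}
public class DeadlineOverflowExample {
  public static void main(String[] args) {
    long minEndTime = Long.MAX_VALUE; // the "wait forever" caller
    long guess = 10_000L;             // bulkPerRegionOpenTimeGuesstimate
    int reassigningRegions = 1;

    long pad = guess * (reassigningRegions + 1);
    System.out.println(minEndTime + pad); // negative: the overflow above

    // Clamp instead of overflowing so the caller really waits forever.
    long deadline = minEndTime > Long.MAX_VALUE - pad ? Long.MAX_VALUE
                                                      : minEndTime + pad;
    System.out.println(deadline); // 9223372036854775807
  }
}
{code}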





[jira] [Commented] (HBASE-5291) Add Kerberos HTTP SPNEGO authentication support to HBase web consoles

2016-06-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328873#comment-15328873
 ] 

Hadoop QA commented on HBASE-5291:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
22s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 21s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 6s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
19s {color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped branch modules with no Java source: . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
54s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 2s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 52s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 4s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 2s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 9m 50s {color} 
| {color:red} hbase-server-jdk1.7.0_79 with JDK v1.7.0_79 generated 2 new + 4 
unchanged - 2 fixed = 6 total (was 6) {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 9m 51s {color} 
| {color:red} root-jdk1.7.0_79 with JDK v1.7.0_79 generated 2 new + 35 
unchanged - 2 fixed = 37 total (was 37) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
28m 14s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patch modules with no Java source: . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 2s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 35s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 108m 11s 
{color} | {color:red} hbase-server in the patch failed. {color} |

[jira] [Commented] (HBASE-16012) Major compaction can't work because of a leftover scanner read point in the RegionServer

2016-06-13 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328869#comment-15328869
 ] 

Guanghao Zhang commented on HBASE-16012:


Yes, it should close the store scanner, too. But there are additionalScanners 
when initializing the RegionScanner, and I am not sure whether we should close 
those additionalScanners. The current code in the master branch never uses 
additionalScanners; it is always null.

> Major compaction can't work because of a leftover scanner read point in the RegionServer
> ---
>
> Key: HBASE-16012
> URL: https://issues.apache.org/jira/browse/HBASE-16012
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, Scanners
>Affects Versions: 2.0.0, 0.94.27
>Reporter: Guanghao Zhang
> Attachments: HBASE-16012.patch
>
>
> When a new RegionScanner is created, it adds a scanner read point to 
> scannerReadPoints. But if we get an exception after adding the read point, the 
> read point will stay in the region server, and deletes after this mvcc number 
> will never be compacted.
> Our HBase version is based on 0.94, but if any exception is thrown while 
> initializing the RegionScanner, the master branch has this bug, too.
> ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: Failed openScanner 
> java.io.IOException: Could not seek StoreFileScanner
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:160)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:268)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:168)
>   at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2232)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:4026)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1895)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1879)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1854)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.internalOpenScanner(HRegionServer.java:3032)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2995)
>   at sun.reflect.GeneratedMethodAccessor67.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:338)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1595)
> Caused by: org.apache.hadoop.hbase.ipc.CallerDisconnectedException: Aborting 
> call openScanner, since caller disconnected
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Call.throwExceptionIfCallerDisconnected(HBaseServer.java:475)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1443)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1902)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1766)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:345)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:254)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:499)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:520)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:235)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:148)
>   ... 14 more





[jira] [Commented] (HBASE-16012) Major compaction can't work because left scanner read point in RegionServer

2016-06-13 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328865#comment-15328865
 ] 

Jingcheng Du commented on HBASE-16012:
--

Thanks for the patch [~zghaobac].
If an exception occurs while the store scanners are being instantiated, the 
heap in the region scanner is still empty, so the store scanners that were 
already instantiated cannot be closed properly by region.close at that point. 
Do you want to address this in the patch?



[jira] [Commented] (HBASE-16012) Major compaction can't work because left scanner read point in RegionServer

2016-06-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328862#comment-15328862
 ] 

Ted Yu commented on HBASE-16012:


lgtm



[jira] [Updated] (HBASE-16012) Major compaction can't work because left scanner read point in RegionServer

2016-06-13 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-16012:
---
Attachment: HBASE-16012.patch

Add try...catch to handle this.



[jira] [Updated] (HBASE-15849) Shell Cleanup: Simplify handling of commands' runtime

2016-06-13 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-15849:
-
Fix Version/s: 1.4.0

> Shell Cleanup: Simplify handling of commands' runtime
> -
>
> Key: HBASE-15849
> URL: https://issues.apache.org/jira/browse/HBASE-15849
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-15849.master.001.patch
>
>
> Functions format_simple_command and format_and_return_simple_command are used 
> to print runtimes right now. They are called from within every single command 
> and use Ruby's 'yield' magic. Instead, we can simplify this using the 
> 'command_safe' function: since command_safe wraps all commands, we can simply 
> record the time before and after we call the individual command.
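
For illustration, the same wrapper idea as a hypothetical Java sketch (the 
actual shell code is Ruby; this only shows the shape): the timing lives once 
in the wrapper instead of inside every command.

{code}
// Hypothetical sketch of the 'command_safe'-style wrapper described above.
class CommandTimerSketch {
  interface CommandBody { void run() throws Exception; }

  // Time once in the wrapper, instead of inside every individual command.
  static void commandSafe(String name, CommandBody body) throws Exception {
    long start = System.nanoTime();
    try {
      body.run();
    } finally {
      System.out.printf("%s took %.4f seconds%n",
          name, (System.nanoTime() - start) / 1e9);
    }
  }

  public static void main(String[] args) throws Exception {
    commandSafe("list", new CommandBody() {
      public void run() { System.out.println("running the wrapped command body"); }
    });
  }
}
{code}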





[jira] [Commented] (HBASE-5291) Add Kerberos HTTP SPNEGO authentication support to HBase web consoles

2016-06-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328823#comment-15328823
 ] 

Hadoop QA commented on HBASE-5291:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
27s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 51s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 27s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
8s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
30s {color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped branch modules with no Java source: . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
10s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 25s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 36s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 38s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 36s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 11m 52s 
{color} | {color:red} hbase-server-jdk1.7.0_79 with JDK v1.7.0_79 generated 2 
new + 4 unchanged - 2 fixed = 6 total (was 6) {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 11m 52s 
{color} | {color:red} root-jdk1.7.0_79 with JDK v1.7.0_79 generated 2 new + 35 
unchanged - 2 fixed = 37 total (was 37) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
28m 54s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patch modules with no Java source: . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 49s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 55s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 37s {color} 
| {color:red} hbase-server in the patch failed. {color} |

[jira] [Commented] (HBASE-16016) AssignmentManager#waitForAssignment could have unexpected negative deadline

2016-06-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328725#comment-15328725
 ] 

Hadoop QA commented on HBASE-16016:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
12s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
54s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
57s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 53s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 80m 27s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 121m 56s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12810098/HBase-16016.v1-master.patch
 |
| JIRA Issue | HBASE-16016 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 56c209c |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/usr/local/jenkins/java/jdk1.7.0_7

[jira] [Assigned] (HBASE-16018) Better documentation of ReplicationPeers

2016-06-13 Thread Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph reassigned HBASE-16018:
--

Assignee: Joseph

> Better documentation of ReplicationPeers
> 
>
> Key: HBASE-16018
> URL: https://issues.apache.org/jira/browse/HBASE-16018
> Project: HBase
>  Issue Type: Improvement
>Reporter: Joseph
>Assignee: Joseph
>Priority: Minor
>
> Some of the ReplicationPeers interface's methods are not documented. Some 
> names are also a little confusing.





[jira] [Created] (HBASE-16018) Better documentation of ReplicationPeers

2016-06-13 Thread Joseph (JIRA)
Joseph created HBASE-16018:
--

 Summary: Better documentation of ReplicationPeers
 Key: HBASE-16018
 URL: https://issues.apache.org/jira/browse/HBASE-16018
 Project: HBase
  Issue Type: Improvement
Reporter: Joseph
Priority: Minor


Some of the ReplicationPeers interface's methods are not documented. Some names 
are also a little confusing.





[jira] [Assigned] (HBASE-3727) MultiHFileOutputFormat

2016-06-13 Thread yi liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yi liang reassigned HBASE-3727:
---

Assignee: yi liang  (was: NIDHI GAMBHIR)

> MultiHFileOutputFormat
> --
>
> Key: HBASE-3727
> URL: https://issues.apache.org/jira/browse/HBASE-3727
> Project: HBase
>  Issue Type: New Feature
>Reporter: Andrew Purtell
>Assignee: yi liang
>Priority: Minor
> Attachments: MultiHFileOutputFormat.java, MultiHFileOutputFormat.java
>
>
> Like MultiTableOutputFormat, but outputting HFiles. The key is the table name 
> as an IBW (ImmutableBytesWritable). Creates sub-writers (code cut and pasted 
> from HFileOutputFormat) on demand that produce HFiles in per-table 
> subdirectories of the configured output path. Does not currently support 
> partitioning for existing tables / incremental update.
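
The dispatch pattern described above, as a small self-contained sketch (names 
are made up; the real MultiHFileOutputFormat is in the attached file and writes 
actual HFiles through the MapReduce RecordWriter API):

{code}
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the per-table sub-writer dispatch described above.
class PerTableWriterSketch<V> {
  interface TableWriter<W> { void write(W value) throws IOException; }
  interface WriterFactory<W> { TableWriter<W> create(String tableDir) throws IOException; }

  private final Map<String, TableWriter<V>> writers = new HashMap<String, TableWriter<V>>();
  private final WriterFactory<V> factory;
  private final String outputPath;

  PerTableWriterSketch(String outputPath, WriterFactory<V> factory) {
    this.outputPath = outputPath;
    this.factory = factory;
  }

  // Look up or lazily create the sub-writer for a table, rooted at
  // <outputPath>/<tableName>/, then delegate the write to it.
  void write(String tableName, V value) throws IOException {
    TableWriter<V> writer = writers.get(tableName);
    if (writer == null) {                                  // create sub-writer on demand
      writer = factory.create(outputPath + "/" + tableName);
      writers.put(tableName, writer);
    }
    writer.write(value);
  }
}
{code}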





[jira] [Work started] (HBASE-3727) MultiHFileOutputFormat

2016-06-13 Thread yi liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-3727 started by yi liang.
---


[jira] [Commented] (HBASE-15862) Backup - Delete- Restore does not restore deleted data

2016-06-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328587#comment-15328587
 ] 

Ted Yu commented on HBASE-15862:


Since the snapshot restore would restore the region boundaries as of the time 
of the full backup, simply truncating the table may not work (there may be 
splits between the full backup and subsequent incremental backups).

I wonder if we should add metadata to the backup manifest so that we know 
whether truncate is applicable.
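
A rough sketch of the kind of manifest metadata this could mean (names are 
hypothetical, just to make the idea concrete; not a proposed implementation):

{code}
// Hypothetical sketch: record whether region boundaries changed after the
// full backup, so restore can tell whether a plain truncate is safe.
class BackupManifestMetaSketch {
  private boolean regionBoundariesChanged;

  // Called if a split or merge is observed between the full backup and a
  // subsequent incremental backup.
  void onRegionBoundaryChange() { this.regionBoundariesChanged = true; }

  boolean isTruncateApplicable() { return !regionBoundariesChanged; }
}
{code}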

> Backup - Delete- Restore does not restore deleted data
> --
>
> Key: HBASE-15862
> URL: https://issues.apache.org/jira/browse/HBASE-15862
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: HBASE-15862-v1.patch, HBASE-15862-v2.patch, 
> HBASE-15862-v3.patch
>
>
> This was discovered during testing. If we delete a row after a full backup and 
> then immediately perform a restore, the deleted row still remains deleted.





[jira] [Updated] (HBASE-16016) AssignmentManager#waitForAssignment could have unexpected negative deadline

2016-06-13 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-16016:
---
Attachment: (was: HBase-16016.v1-master.patch)

> AssignmentManager#waitForAssignment could have unexpected negative deadline
> ---
>
> Key: HBASE-16016
> URL: https://issues.apache.org/jira/browse/HBASE-16016
> Project: HBase
>  Issue Type: Bug
>Reporter: Stephen Yuan Jiang
>Assignee: Stephen Yuan Jiang
> Attachments: HBase-16016.v1-master.patch
>
>
> AssignmentManager#waitForAssignment(HRegionInfo regionInfo) passes a 
> Long.MAX_VALUE deadline and intends to wait forever. However, the deadline 
> computed in AssignmentManager#waitForAssignment(final 
> Collection<HRegionInfo> regionSet, final boolean waitTillAllAssigned, final 
> int reassigningRegions, final long minEndTime) overflows, which causes no 
> wait at all!
> {code}
>   /**
>* Waits until the specified region has completed assignment.
>* 
>* If the region is already assigned, returns immediately.  Otherwise, 
> method
>* blocks until the region is assigned.
>* @param regionInfo region to wait on assignment for
>* @return true if the region is assigned false otherwise.
>* @throws InterruptedException
>*/
>   public boolean waitForAssignment(HRegionInfo regionInfo)
>   throws InterruptedException {
> ArrayList<HRegionInfo> regionSet = new ArrayList<HRegionInfo>(1);
> regionSet.add(regionInfo);
> return waitForAssignment(regionSet, true, Long.MAX_VALUE);
>   }
>   /**
>* Waits until the specified region has completed assignment, or the 
> deadline is reached.
>*/
>   protected boolean waitForAssignment(final Collection<HRegionInfo> regionSet,
>   final boolean waitTillAllAssigned, final int reassigningRegions,
>   final long minEndTime) throws InterruptedException {
> long deadline = minEndTime + bulkPerRegionOpenTimeGuesstimate * 
> (reassigningRegions + 1);  // > OVERFLOW
> return waitForAssignment(regionSet, waitTillAllAssigned, deadline);
>   }
> {code}
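
To see the overflow concretely: a tiny standalone example (the constant below 
is a stand-in for bulkPerRegionOpenTimeGuesstimate; the guard uses Java 8's 
Math.addExact and is only one possible fix, not necessarily what the patch 
does):

{code}
public class DeadlineOverflowSketch {
  public static void main(String[] args) {
    long minEndTime = Long.MAX_VALUE;   // "wait forever"
    long guesstimate = 1000L;           // stand-in for bulkPerRegionOpenTimeGuesstimate
    int reassigningRegions = 1;

    // The computation from the snippet above: it wraps around to a negative
    // value, i.e. a deadline already in the past, so nothing waits.
    long deadline = minEndTime + guesstimate * (reassigningRegions + 1);
    System.out.println(deadline);       // prints a large negative number

    // One possible guard: saturate at Long.MAX_VALUE instead of wrapping.
    long safe;
    try {
      safe = Math.addExact(minEndTime,
          Math.multiplyExact(guesstimate, reassigningRegions + 1L));
    } catch (ArithmeticException e) {
      safe = Long.MAX_VALUE;
    }
    System.out.println(safe);           // prints 9223372036854775807
  }
}
{code}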





[jira] [Updated] (HBASE-16016) AssignmentManager#waitForAssignment could have unexpected negative deadline

2016-06-13 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-16016:
---
Attachment: HBase-16016.v1-master.patch



[jira] [Updated] (HBASE-5291) Add Kerberos HTTP SPNEGO authentication support to HBase web consoles

2016-06-13 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-5291:
--
Attachment: HBASE-5291.005.patch

.005 Addresses some of [~drankye]'s feedback:

* Consolidates some null checks
* Moves some "common" methods to the parent class for (hopeful) reuse
* Lifts the signature secret config property out of "spnego" (it is generic to 
all auth-filter implementations, not just SPNEGO)

> Add Kerberos HTTP SPNEGO authentication support to HBase web consoles
> -
>
> Key: HBASE-5291
> URL: https://issues.apache.org/jira/browse/HBASE-5291
> Project: HBase
>  Issue Type: Improvement
>  Components: master, regionserver, security
>Reporter: Andrew Purtell
>Assignee: Josh Elser
> Fix For: 2.0.0
>
> Attachments: HBASE-5291.001.patch, HBASE-5291.002.patch, 
> HBASE-5291.003.patch, HBASE-5291.004.patch, HBASE-5291.005.patch
>
>
> Like HADOOP-7119, the same motivations:
> {quote}
> Hadoop RPC already supports Kerberos authentication. 
> {quote}
> As does the HBase secure RPC engine.
> {quote}
> Kerberos enables single sign-on.
> Popular browsers (Firefox and Internet Explorer) have support for Kerberos 
> HTTP SPNEGO.
> Adding support for Kerberos HTTP SPNEGO to [HBase] web consoles would provide 
> a unified authentication mechanism and single sign-on for web UI and RPC.
> {quote}
> Also like HADOOP-7119, the same solution:
> A servlet filter is configured in front of all Hadoop web consoles for 
> authentication.
> This filter verifies whether the incoming request is already authenticated by 
> the presence of a signed HTTP cookie. If the cookie is present, its signature 
> is valid, and its value has not expired, the request continues on to the 
> invoked page. If the cookie is not present, is invalid, or has expired, the 
> request is delegated to an authenticator handler. The 
> authenticator handler then is responsible for requesting/validating the 
> user-agent for the user credentials. This may require one or more additional 
> interactions between the authenticator handler and the user-agent (which will 
> be multiple HTTP requests). Once the authenticator handler verifies the 
> credentials and generates an authentication token, a signed cookie is 
> returned to the user-agent for all subsequent invocations.
> The authenticator handler is pluggable and 2 implementations are provided out 
> of the box: pseudo/simple and kerberos.
> 1. The pseudo/simple authenticator handler is equivalent to the Hadoop 
> pseudo/simple authentication. It trusts the value of the user.name query 
> string parameter. The pseudo/simple authenticator handler supports an 
> anonymous mode which accepts any request without requiring the user.name 
> query string parameter to create the token. This is the default behavior, 
> preserving the behavior of the HBase web consoles before this patch.
> 2. The kerberos authenticator handler implements the Kerberos HTTP SPNEGO 
> implementation. This authenticator handler will generate a token only if a 
> successful Kerberos HTTP SPNEGO interaction is performed between the 
> user-agent and the authenticator. Browsers like Firefox and Internet Explorer 
> support Kerberos HTTP SPNEGO.
> We can build on the support added to Hadoop via HADOOP-7119. Should just be a 
> matter of wiring up the filter to our infoservers in a similar manner. 
> And from 
> https://issues.apache.org/jira/browse/HBASE-5050?focusedCommentId=13171086&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13171086
> {quote}
> Hadoop 0.23 onwards has a hadoop-auth artifact that provides SPNEGO/Kerberos 
> authentication for webapps via a filter. You should consider using it. You 
> don't have to move Hbase to 0.23 for that, just consume the hadoop-auth 
> artifact, which has no dependencies on the rest of Hadoop 0.23 artifacts.
> {quote}
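
For a sense of what "wiring up the filter" involves, a sketch of the init 
parameters hadoop-auth's AuthenticationFilter consumes when its kerberos 
handler is selected (all values are placeholders; the HBase-side property names 
are whatever the attached patches define):

{code}
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: init parameters for hadoop-auth's AuthenticationFilter
// with the kerberos handler. All values below are placeholders.
class SpnegoFilterParamsSketch {
  static Map<String, String> kerberosParams() {
    Map<String, String> p = new HashMap<String, String>();
    p.put("type", "kerberos");                 // pick the kerberos auth handler
    p.put("kerberos.principal", "HTTP/_HOST@EXAMPLE.COM");
    p.put("kerberos.keytab", "/etc/security/keytabs/spnego.service.keytab");
    p.put("signature.secret.file", "/etc/security/http_secret"); // cookie-signing secret
    p.put("token.validity", "36000");          // auth cookie lifetime, seconds
    return p;
  }
}
{code}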





[jira] [Updated] (HBASE-15971) Regression: Random Read/WorkloadC slower in 1.x than 0.98

2016-06-13 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15971:
--
Attachment: HBASE-15971.branch-1.002.patch

> Regression: Random Read/WorkloadC slower in 1.x than 0.98
> -
>
> Key: HBASE-15971
> URL: https://issues.apache.org/jira/browse/HBASE-15971
> Project: HBase
>  Issue Type: Sub-task
>  Components: rpc
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: 098.hits.png, 098.png, HBASE-15971.branch-1.001.patch, 
> HBASE-15971.branch-1.002.patch, Screen Shot 2016-06-10 at 5.08.24 PM.png, 
> Screen Shot 2016-06-10 at 5.08.26 PM.png, branch-1.hits.png, branch-1.png, 
> flight_recording_10172402220203_28.branch-1.jfr, 
> flight_recording_10172402220203_29.09820.0.98.20.jfr, handlers.fp.png, 
> hits.fp.png, hits.patched1.0.vs.unpatched1.0.vs.098.png, run_ycsb.sh
>
>
> branch-1 is slower than 0.98 doing YCSB random read/workloadC. It seems to be 
> doing about 1/2 the throughput of 0.98.
> In branch-1, we have low handler occupancy compared to 0.98. Hacking in 
> reader thread occupancy metric, is about the same in both. In parent issue, 
> hacking out the scheduler, I am able to get branch-1 to go 3x faster so will 
> dig in here.





[jira] [Commented] (HBASE-15862) Backup - Delete- Restore does not restore deleted data

2016-06-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328440#comment-15328440
 ] 

Ted Yu commented on HBASE-15862:


[~jerryhe]:
What do you think about Vlad's comment?



[jira] [Commented] (HBASE-5291) Add Kerberos HTTP SPNEGO authentication support to HBase web consoles

2016-06-13 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328437#comment-15328437
 ] 

Josh Elser commented on HBASE-5291:
---

bq. I thought the default value would be good to have anyway, because you 
probably won't want to trouble users to configure it? Maybe "signature.secret" 
could be better?

Ok, I get what you're suggesting now. Just took me a moment. I'll rename the 
property to something that doesn't include "spnego" in it, and also copy some 
of the possible values for it out of Hadoop's javadoc.



[jira] [Commented] (HBASE-5291) Add Kerberos HTTP SPNEGO authentication support to HBase web consoles

2016-06-13 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328430#comment-15328430
 ] 

Kai Zheng commented on HBASE-5291:
--

bq. i'm not sure anymore why I have a default value of "privacy" in the docs.
I thought the default value would be good to have anyway, because you probably 
won't want to trouble users to configure it? Maybe "signature.secret" could be 
better?



[jira] [Commented] (HBASE-5291) Add Kerberos HTTP SPNEGO authentication support to HBase web consoles

2016-06-13 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328423#comment-15328423
 ] 

Kai Zheng commented on HBASE-5291:
--

Thanks [~elserj] for the consideration.

Regarding the cookie signature file, I just checked: in Hadoop it uses the key 
{{hadoop.http.authentication.signature.secret.file}}. The value can be used for 
mechanisms other than just Kerberos; you can look at the 
{{AuthenticationFilter}} class in the Hadoop codebase to confirm this.



[jira] [Commented] (HBASE-15971) Regression: Random Read/WorkloadC slower in 1.x than 0.98

2016-06-13 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328415#comment-15328415
 ] 

stack commented on HBASE-15971:
---

Reverting HBASE-10993 gets the branch-1 throughput almost the same as 0.98. So, 
the difference is our rpc scheduler doing FIFO by default (0.98) versus 
sort-by-priority (branch-1). This one-line change is all that is needed to make 
branch-1 perform the same as 0.98:

{code}
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcScheduler.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcScheduler.java
index d9d61c1..88258f7 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcScheduler.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcScheduler.java
@@ -180,8 +180,7 @@ public class SimpleRpcScheduler extends RpcScheduler 
implements ConfigurationObs
 this.highPriorityLevel = highPriorityLevel;
 this.abortable = server;

-String callQueueType = conf.get(CALL_QUEUE_TYPE_CONF_KEY,
-  CALL_QUEUE_TYPE_DEADLINE_CONF_VALUE);
+String callQueueType = conf.get(CALL_QUEUE_TYPE_CONF_KEY, "FIFO");
 float callqReadShare = conf.getFloat(CALL_QUEUE_READ_SHARE_CONF_KEY, 0);
 float callqScanShare = conf.getFloat(CALL_QUEUE_SCAN_SHARE_CONF_KEY, 0);
{code}

There is no 'FIFO' callQueueType... FIFO is just the default if no other config 
gets in the way.

HBASE-10993 added the 'deadline' rpc queue type. It also made it the default. 
This queue type will sort the requests by 'priority', but in the current 
implementation -- see AnnotationReadingPriorityFunction#getPriority -- only 
long-running scans are affected. When their 'next' comes in, they are 
deprioritized by a configurable amount (see 
http://blog.cloudera.com/blog/2014/12/new-in-cdh-5-2-improvements-for-running-multiple-workloads-on-a-single-hbase-cluster/
 for a nice exposition). HBASE-10993 changed our scheduler from FIFO to be 
smarter, but the cost turns out to be high.

Chatting w/ [~mbertozzi], he thinks the sort-on-priority is of minor benefit 
and suggests that throttling would be a more effective means of limiting users 
or use of a table (HBASE-11598). So, changing our default back to FIFO in 
branch-1.3 seems the thing to do ([~mantonov] Ok by you?). I can make a patch 
to do this and then add on the fastpath patch above, and then branch-1 should 
go faster than 0.98. I'd also work on moving stuff out of SimpleRpcExecutor. 
Currently it is overloaded with options. Instead I'd have folks enable the 
executor type they are interested in. Let SimpleRpcExecutor be explicitly FIFO.
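
For anyone who wants to try this without a patched build, the queue type is 
already just configuration. A sketch, assuming the key 
hbase.ipc.server.callqueue.type (what CALL_QUEUE_TYPE_CONF_KEY resolves to); in 
practice this would be set in hbase-site.xml on the servers:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class FifoQueueConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Per the discussion above, there is no explicit 'FIFO' type: any value
    // other than 'deadline' falls through to plain FIFO queues in this era of
    // SimpleRpcScheduler, mirroring the one-line diff shown earlier.
    conf.set("hbase.ipc.server.callqueue.type", "fifo");
    System.out.println(conf.get("hbase.ipc.server.callqueue.type"));
  }
}
{code}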

> Regression: Random Read/WorkloadC slower in 1.x than 0.98
> -
>
> Key: HBASE-15971
> URL: https://issues.apache.org/jira/browse/HBASE-15971
> Project: HBase
>  Issue Type: Sub-task
>  Components: rpc
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: 098.hits.png, 098.png, HBASE-15971.branch-1.001.patch, 
> Screen Shot 2016-06-10 at 5.08.24 PM.png, Screen Shot 2016-06-10 at 5.08.26 
> PM.png, branch-1.hits.png, branch-1.png, 
> flight_recording_10172402220203_28.branch-1.jfr, 
> flight_recording_10172402220203_29.09820.0.98.20.jfr, handlers.fp.png, 
> hits.fp.png, hits.patched1.0.vs.unpatched1.0.vs.098.png, run_ycsb.sh
>
>
> branch-1 is slower than 0.98 doing YCSB random read/workloadC. It seems to be 
> doing about 1/2 the throughput of 0.98.
> In branch-1, we have low handler occupancy compared to 0.98. Hacking in a 
> reader thread occupancy metric, it is about the same in both. In the parent 
> issue, hacking out the scheduler, I am able to get branch-1 to go 3x faster, 
> so I will dig in here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5291) Add Kerberos HTTP SPNEGO authentication support to HBase web consoles

2016-06-13 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328408#comment-15328408
 ] 

Josh Elser commented on HBASE-5291:
---

Thanks for the review [~drankye] (sorry, I missed your comments before posting 
.004)

bq. A minor one. Maybe getOrNull could return either null or a non-empty 
string, so the checking of the returned value could be simpler?

Yup, that would remove a little bit of code. Probably worth it.
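
A minimal sketch of that contract (the helper shape here is hypothetical, not 
the patch's actual code):

{code}
import org.apache.hadoop.conf.Configuration;

// Hypothetical sketch: return null for a missing *or* empty value, so
// callers need only a single null check on the result.
final class ConfUtil {
  static String getOrNull(Configuration conf, String key) {
    String value = conf.get(key);
    return (value == null || value.isEmpty()) ? null : value;
  }
}
{code}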

bq. Together with deleteRecursively and getFreePort, I wonder if there is any 
utility class in HBase to hold these.

If nothing else, I can always lift them up to HttpServerFunctionalTest. I'm not 
sure if there is a better home for them.

bq. I would suggest not coupling the cookie signature with the Kerberos/SPNEGO 
mechanism, because it's not mechanism specific, and we might need it as well in 
other mechanisms like simple, token, etc. in the future.

This was something I was just pulling out of Hadoop's 
KerberosAuthenticationFilter. (Aside: I'm not sure anymore why I have a default 
value of "privacy" in the docs... bad copy-paste, probably.)

> Add Kerberos HTTP SPNEGO authentication support to HBase web consoles
> -
>
> Key: HBASE-5291
> URL: https://issues.apache.org/jira/browse/HBASE-5291
> Project: HBase
>  Issue Type: Improvement
>  Components: master, regionserver, security
>Reporter: Andrew Purtell
>Assignee: Josh Elser
> Fix For: 2.0.0
>
> Attachments: HBASE-5291.001.patch, HBASE-5291.002.patch, 
> HBASE-5291.003.patch, HBASE-5291.004.patch
>
>
> Like HADOOP-7119, the same motivations:
> {quote}
> Hadoop RPC already supports Kerberos authentication. 
> {quote}
> As does the HBase secure RPC engine.
> {quote}
> Kerberos enables single sign-on.
> Popular browsers (Firefox and Internet Explorer) have support for Kerberos 
> HTTP SPNEGO.
> Adding support for Kerberos HTTP SPNEGO to [HBase] web consoles would provide 
> a unified authentication mechanism and single sign-on for web UI and RPC.
> {quote}
> Also like HADOOP-7119, the same solution:
> A servlet filter is configured in front of all Hadoop web consoles for 
> authentication.
> This filter verifies if the incoming request is already authenticated by the 
> presence of a signed HTTP cookie. If the cookie is present, its signature is 
> valid and its value didn't expire; then the request continues its way to the 
> page invoked by the request. If the cookie is not present, it is invalid or 
> it expired; then the request is delegated to an authenticator handler. The 
> authenticator handler then is responsible for requesting/validating the 
> user-agent for the user credentials. This may require one or more additional 
> interactions between the authenticator handler and the user-agent (which will 
> be multiple HTTP requests). Once the authenticator handler verifies the 
> credentials and generates an authentication token, a signed cookie is 
> returned to the user-agent for all subsequent invocations.
> The authenticator handler is pluggable and 2 implementations are provided out 
> of the box: pseudo/simple and kerberos.
> 1. The pseudo/simple authenticator handler is equivalent to the Hadoop 
> pseudo/simple authentication. It trusts the value of the user.name query 
> string parameter. The pseudo/simple authenticator handler supports an 
> anonymous mode which accepts any request without requiring the user.name 
> query string parameter to create the token. This is the default behavior, 
> preserving the behavior of the HBase web consoles before this patch.
> 2. The kerberos authenticator handler implements the Kerberos HTTP SPNEGO 
> implementation. This authenticator handler will generate a token only if a 
> successful Kerberos HTTP SPNEGO interaction is performed between the 
> user-agent and the authenticator. Browsers like Firefox and Internet Explorer 
> support Kerberos HTTP SPNEGO.
> We can build on the support added to Hadoop via HADOOP-7119. Should just be a 
> matter of wiring up the filter to our infoservers in a similar manner. 
> And from 
> https://issues.apache.org/jira/browse/HBASE-5050?focusedCommentId=13171086&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13171086
> {quote}
> Hadoop 0.23 onwards has a hadoop-auth artifact that provides SPNEGO/Kerberos 
> authentication for webapps via a filter. You should consider using it. You 
> don't have to move Hbase to 0.23 for that, just consume the hadoop-auth 
> artifact, which has no dependencies on the rest of Hadoop 0.23 artifacts.
> {quote}
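
As a rough illustration of the cookie-then-handler flow described above, a 
conceptual sketch (SignedCookie and AuthHandler are hypothetical stand-ins 
here, not the hadoop-auth classes):

{code}
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SketchAuthFilter implements Filter {
  // Pluggable handler (pseudo/simple or kerberos); placeholder rejects all.
  private final AuthHandler handler = (req, resp) -> false;

  @Override public void init(FilterConfig cfg) {}
  @Override public void destroy() {}

  @Override
  public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
      throws IOException, ServletException {
    HttpServletRequest httpReq = (HttpServletRequest) req;
    HttpServletResponse httpResp = (HttpServletResponse) resp;
    if (SignedCookie.isValid(httpReq)) {
      chain.doFilter(req, resp);                 // cookie present, signed, unexpired
    } else if (handler.authenticate(httpReq, httpResp)) {
      SignedCookie.set(httpResp);                // reused on subsequent requests
      chain.doFilter(req, resp);
    }
    // otherwise the handler has already written the 401/Negotiate response
  }
}

interface AuthHandler {
  boolean authenticate(HttpServletRequest req, HttpServletResponse resp) throws IOException;
}

final class SignedCookie {
  static boolean isValid(HttpServletRequest req) { return false; /* verify signature + expiry */ }
  static void set(HttpServletResponse resp) { /* write a signed cookie */ }
}
{code}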



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16016) AssignmentManager#waitForAssignment could have unexpected negative deadline

2016-06-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328407#comment-15328407
 ] 

Hadoop QA commented on HBASE-16016:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} pre-patch {color} | {color:red} 0m 0s 
{color} | {color:red} JAVA_HOME is not defined. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12810062/HBase-16016.v1-master.patch
 |
| JIRA Issue | HBASE-16016 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux pietas.apache.org 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT 
Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 56c209c |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2196/console |
| Powered by | Apache Yetus 0.2.1   http://yetus.apache.org |


This message was automatically generated.



> AssignmentManager#waitForAssignment could have unexpected negative deadline
> ---
>
> Key: HBASE-16016
> URL: https://issues.apache.org/jira/browse/HBASE-16016
> Project: HBase
>  Issue Type: Bug
>Reporter: Stephen Yuan Jiang
>Assignee: Stephen Yuan Jiang
> Attachments: HBase-16016.v1-master.patch
>
>
> AssignmentManager#waitForAssignment(HRegionInfo regionInfo) passes 
> Long.MAX_VALUE deadline and intends to wait forever.  However, the deadline 
> would be overflowed from AssignmentManager#waitForAssignment(final 
> Collection<HRegionInfo> regionSet, final boolean waitTillAllAssigned, final 
> int reassigningRegions, final long minEndTime), which would cause no wait!
> {code}
>   /**
>* Waits until the specified region has completed assignment.
>* 
>* If the region is already assigned, returns immediately.  Otherwise, 
> method
>* blocks until the region is assigned.
>* @param regionInfo region to wait on assignment for
>* @return true if the region is assigned false otherwise.
>* @throws InterruptedException
>*/
>   public boolean waitForAssignment(HRegionInfo regionInfo)
>   throws InterruptedException {
> ArrayList<HRegionInfo> regionSet = new ArrayList<HRegionInfo>(1);
> regionSet.add(regionInfo);
> return waitForAssignment(regionSet, true, Long.MAX_VALUE);
>   }
>   /**
>* Waits until the specified region has completed assignment, or the 
> deadline is reached.
>*/
>   protected boolean waitForAssignment(final Collection<HRegionInfo> regionSet,
>   final boolean waitTillAllAssigned, final int reassigningRegions,
>   final long minEndTime) throws InterruptedException {
> long deadline = minEndTime + bulkPerRegionOpenTimeGuesstimate * 
> (reassigningRegions + 1);  // > OVERFLOW
> return waitForAssignment(regionSet, waitTillAllAssigned, deadline);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16016) AssignmentManager#waitForAssignment could have unexpected negative deadline

2016-06-13 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-16016:
---
Status: Patch Available  (was: Open)

> AssignmentManager#waitForAssignment could have unexpected negative deadline
> ---
>
> Key: HBASE-16016
> URL: https://issues.apache.org/jira/browse/HBASE-16016
> Project: HBase
>  Issue Type: Bug
>Reporter: Stephen Yuan Jiang
>Assignee: Stephen Yuan Jiang
> Attachments: HBase-16016.v1-master.patch
>
>
> AssignmentManager#waitForAssignment(HRegionInfo regionInfo) passes 
> Long.MAX_VALUE deadline and intends to wait forever.  However, the deadline 
> would be overflowed from AssignmentManager#waitForAssignment(final 
> Collection<HRegionInfo> regionSet, final boolean waitTillAllAssigned, final 
> int reassigningRegions, final long minEndTime), which would cause no wait!
> {code}
>   /**
>* Waits until the specified region has completed assignment.
>* 
>* If the region is already assigned, returns immediately.  Otherwise, 
> method
>* blocks until the region is assigned.
>* @param regionInfo region to wait on assignment for
>* @return true if the region is assigned false otherwise.
>* @throws InterruptedException
>*/
>   public boolean waitForAssignment(HRegionInfo regionInfo)
>   throws InterruptedException {
> ArrayList<HRegionInfo> regionSet = new ArrayList<HRegionInfo>(1);
> regionSet.add(regionInfo);
> return waitForAssignment(regionSet, true, Long.MAX_VALUE);
>   }
>   /**
>* Waits until the specified region has completed assignment, or the 
> deadline is reached.
>*/
>   protected boolean waitForAssignment(final Collection<HRegionInfo> regionSet,
>   final boolean waitTillAllAssigned, final int reassigningRegions,
>   final long minEndTime) throws InterruptedException {
> long deadline = minEndTime + bulkPerRegionOpenTimeGuesstimate * 
> (reassigningRegions + 1);  // > OVERFLOW
> return waitForAssignment(regionSet, waitTillAllAssigned, deadline);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-5291) Add Kerberos HTTP SPNEGO authentication support to HBase web consoles

2016-06-13 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-5291:
--
Attachment: HBASE-5291.004.patch

.004

* Fail-fast on missing configuration (thanks for the recommendation and review, 
Sergey!)
* Change the config value "switch" from "spnego" to "kerberos" to align a bit 
better with Hadoop.

> Add Kerberos HTTP SPNEGO authentication support to HBase web consoles
> -
>
> Key: HBASE-5291
> URL: https://issues.apache.org/jira/browse/HBASE-5291
> Project: HBase
>  Issue Type: Improvement
>  Components: master, regionserver, security
>Reporter: Andrew Purtell
>Assignee: Josh Elser
> Fix For: 2.0.0
>
> Attachments: HBASE-5291.001.patch, HBASE-5291.002.patch, 
> HBASE-5291.003.patch, HBASE-5291.004.patch
>
>
> Like HADOOP-7119, the same motivations:
> {quote}
> Hadoop RPC already supports Kerberos authentication. 
> {quote}
> As does the HBase secure RPC engine.
> {quote}
> Kerberos enables single sign-on.
> Popular browsers (Firefox and Internet Explorer) have support for Kerberos 
> HTTP SPNEGO.
> Adding support for Kerberos HTTP SPNEGO to [HBase] web consoles would provide 
> a unified authentication mechanism and single sign-on for web UI and RPC.
> {quote}
> Also like HADOOP-7119, the same solution:
> A servlet filter is configured in front of all Hadoop web consoles for 
> authentication.
> This filter verifies if the incoming request is already authenticated by the 
> presence of a signed HTTP cookie. If the cookie is present, its signature is 
> valid and its value didn't expire; then the request continues its way to the 
> page invoked by the request. If the cookie is not present, it is invalid or 
> it expired; then the request is delegated to an authenticator handler. The 
> authenticator handler then is responsible for requesting/validating the 
> user-agent for the user credentials. This may require one or more additional 
> interactions between the authenticator handler and the user-agent (which will 
> be multiple HTTP requests). Once the authenticator handler verifies the 
> credentials and generates an authentication token, a signed cookie is 
> returned to the user-agent for all subsequent invocations.
> The authenticator handler is pluggable and 2 implementations are provided out 
> of the box: pseudo/simple and kerberos.
> 1. The pseudo/simple authenticator handler is equivalent to the Hadoop 
> pseudo/simple authentication. It trusts the value of the user.name query 
> string parameter. The pseudo/simple authenticator handler supports an 
> anonymous mode which accepts any request without requiring the user.name 
> query string parameter to create the token. This is the default behavior, 
> preserving the behavior of the HBase web consoles before this patch.
> 2. The kerberos authenticator handler implements the Kerberos HTTP SPNEGO 
> implementation. This authenticator handler will generate a token only if a 
> successful Kerberos HTTP SPNEGO interaction is performed between the 
> user-agent and the authenticator. Browsers like Firefox and Internet Explorer 
> support Kerberos HTTP SPNEGO.
> We can build on the support added to Hadoop via HADOOP-7119. Should just be a 
> matter of wiring up the filter to our infoservers in a similar manner. 
> And from 
> https://issues.apache.org/jira/browse/HBASE-5050?focusedCommentId=13171086&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13171086
> {quote}
> Hadoop 0.23 onwards has a hadoop-auth artifact that provides SPNEGO/Kerberos 
> authentication for webapps via a filter. You should consider using it. You 
> don't have to move Hbase to 0.23 for that, just consume the hadoop-auth 
> artifact, which has no dependencies on the rest of Hadoop 0.23 artifacts.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5291) Add Kerberos HTTP SPNEGO authentication support to HBase web consoles

2016-06-13 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328392#comment-15328392
 ] 

Kai Zheng commented on HBASE-5291:
--

Also took a look while I had the chance:

1. A minor one. Maybe {{getOrNull}} could return either null or a non-empty 
string, so the checking of the returned value could be simpler? Together with 
{{deleteRecursively}} and {{getFreePort}}, I wonder if there is any utility 
class in HBase to hold these.

2. I would suggest not coupling the cookie signature with the Kerberos/SPNEGO 
mechanism, because it's not mechanism specific, and we might need it as well in 
other mechanisms like simple, token, etc. in the future.
{code}
+
+  hbase.security.authentication.spnego.signature.secret.file
+  privacy
+  Optional, a file whose contents will be used as a secret to 
sign the HTTP cookies
+  as a part of the SPNEGO authentication handshake. If this is not provided, 
Java's `Random` library
+  will be used for the secret.
+
{code}

Thanks!

> Add Kerberos HTTP SPNEGO authentication support to HBase web consoles
> -
>
> Key: HBASE-5291
> URL: https://issues.apache.org/jira/browse/HBASE-5291
> Project: HBase
>  Issue Type: Improvement
>  Components: master, regionserver, security
>Reporter: Andrew Purtell
>Assignee: Josh Elser
> Fix For: 2.0.0
>
> Attachments: HBASE-5291.001.patch, HBASE-5291.002.patch, 
> HBASE-5291.003.patch
>
>
> Like HADOOP-7119, the same motivations:
> {quote}
> Hadoop RPC already supports Kerberos authentication. 
> {quote}
> As does the HBase secure RPC engine.
> {quote}
> Kerberos enables single sign-on.
> Popular browsers (Firefox and Internet Explorer) have support for Kerberos 
> HTTP SPNEGO.
> Adding support for Kerberos HTTP SPNEGO to [HBase] web consoles would provide 
> a unified authentication mechanism and single sign-on for web UI and RPC.
> {quote}
> Also like HADOOP-7119, the same solution:
> A servlet filter is configured in front of all Hadoop web consoles for 
> authentication.
> This filter verifies if the incoming request is already authenticated by the 
> presence of a signed HTTP cookie. If the cookie is present, its signature is 
> valid and its value didn't expire; then the request continues its way to the 
> page invoked by the request. If the cookie is not present, it is invalid or 
> it expired; then the request is delegated to an authenticator handler. The 
> authenticator handler then is responsible for requesting/validating the 
> user-agent for the user credentials. This may require one or more additional 
> interactions between the authenticator handler and the user-agent (which will 
> be multiple HTTP requests). Once the authenticator handler verifies the 
> credentials and generates an authentication token, a signed cookie is 
> returned to the user-agent for all subsequent invocations.
> The authenticator handler is pluggable and 2 implementations are provided out 
> of the box: pseudo/simple and kerberos.
> 1. The pseudo/simple authenticator handler is equivalent to the Hadoop 
> pseudo/simple authentication. It trusts the value of the user.name query 
> string parameter. The pseudo/simple authenticator handler supports an 
> anonymous mode which accepts any request without requiring the user.name 
> query string parameter to create the token. This is the default behavior, 
> preserving the behavior of the HBase web consoles before this patch.
> 2. The kerberos authenticator handler implements the Kerberos HTTP SPNEGO 
> implementation. This authenticator handler will generate a token only if a 
> successful Kerberos HTTP SPNEGO interaction is performed between the 
> user-agent and the authenticator. Browsers like Firefox and Internet Explorer 
> support Kerberos HTTP SPNEGO.
> We can build on the support added to Hadoop via HADOOP-7119. Should just be a 
> matter of wiring up the filter to our infoservers in a similar manner. 
> And from 
> https://issues.apache.org/jira/browse/HBASE-5050?focusedCommentId=13171086&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13171086
> {quote}
> Hadoop 0.23 onwards has a hadoop-auth artifact that provides SPNEGO/Kerberos 
> authentication for webapps via a filter. You should consider using it. You 
> don't have to move Hbase to 0.23 for that, just consume the hadoop-auth 
> artifact, which has no dependencies on the rest of Hadoop 0.23 artifacts.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16017) HBase TableOutputFormat has connection leak in getRecordWriter

2016-06-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328376#comment-15328376
 ] 

Ted Yu commented on HBASE-16017:


lgtm

> HBase TableOutputFormat has connection leak in getRecordWriter
> --
>
> Key: HBASE-16017
> URL: https://issues.apache.org/jira/browse/HBASE-16017
> Project: HBase
>  Issue Type: Bug
>Reporter: Zhan Zhang
>Assignee: Zhan Zhang
> Attachments: HBASE-16017-1.patch
>
>
> Currently getRecordWriter will not release the connection until the JVM 
> terminates, which is not a valid assumption given that the function may be 
> invoked many times in the same JVM lifecycle. Inside of mapreduce, the issue 
> has already been fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16017) HBase TableOutputFormat has connection leak in getRecordWriter

2016-06-13 Thread Zhan Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhan Zhang updated HBASE-16017:
---
Attachment: HBASE-16017-1.patch

> HBase TableOutputFormat has connection leak in getRecordWriter
> --
>
> Key: HBASE-16017
> URL: https://issues.apache.org/jira/browse/HBASE-16017
> Project: HBase
>  Issue Type: Bug
>Reporter: Zhan Zhang
>Assignee: Zhan Zhang
> Attachments: HBASE-16017-1.patch
>
>
> Currently getRecordWriter will not release the connection until the JVM 
> terminates, which is not a valid assumption given that the function may be 
> invoked many times in the same JVM lifecycle. Inside of mapreduce, the issue 
> has already been fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16017) HBase TableOutputFormat has connection leak in getRecordWriter

2016-06-13 Thread Zhan Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhan Zhang updated HBASE-16017:
---
Attachment: (was: HBASE-16017-1.patch)

> HBase TableOutputFormat has connection leak in getRecordWriter
> --
>
> Key: HBASE-16017
> URL: https://issues.apache.org/jira/browse/HBASE-16017
> Project: HBase
>  Issue Type: Bug
>Reporter: Zhan Zhang
>Assignee: Zhan Zhang
> Attachments: HBASE-16017-1.patch
>
>
> Currently getRecordWriter will not release the connection until the JVM 
> terminates, which is not a valid assumption given that the function may be 
> invoked many times in the same JVM lifecycle. Inside of mapreduce, the issue 
> has already been fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16017) HBase TableOutputFormat has connection leak in getRecordWriter

2016-06-13 Thread Zhan Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhan Zhang updated HBASE-16017:
---
Status: Patch Available  (was: Open)

> HBase TableOutputFormat has connection leak in getRecordWriter
> --
>
> Key: HBASE-16017
> URL: https://issues.apache.org/jira/browse/HBASE-16017
> Project: HBase
>  Issue Type: Bug
>Reporter: Zhan Zhang
>Assignee: Zhan Zhang
> Attachments: HBASE-16017-1.patch
>
>
> Currently getRecordWriter will not release the connection until the JVM 
> terminates, which is not a valid assumption given that the function may be 
> invoked many times in the same JVM lifecycle. Inside of mapreduce, the issue 
> has already been fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16017) HBase TableOutputFormat has connection leak in getRecordWriter

2016-06-13 Thread Zhan Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328354#comment-15328354
 ] 

Zhan Zhang commented on HBASE-16017:


[~te...@apache.org] Can you please take a look? It is a simple fix. To make the 
impact as small as possible, I added a new constructor, so that there are no 
changes to any other modules.
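
For context, a minimal sketch of the shape such a fix can take, assuming the 
record writer owns the Connection it was built from (names are illustrative, 
not the patch's actual code):

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.client.BufferedMutator;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

// Illustrative sketch only (not the actual HBASE-16017 patch): the writer
// owns its Connection and releases it in close(), so repeated
// getRecordWriter() calls in one JVM do not leak connections.
class SketchTableRecordWriter<K> extends RecordWriter<K, Mutation> {
  private final Connection connection;   // created per writer, owned here
  private final BufferedMutator mutator;

  SketchTableRecordWriter(Connection connection, BufferedMutator mutator) {
    this.connection = connection;
    this.mutator = mutator;
  }

  @Override
  public void write(K key, Mutation value) throws IOException {
    mutator.mutate(value);
  }

  @Override
  public void close(TaskAttemptContext context) throws IOException {
    try {
      mutator.close();        // flush pending mutations
    } finally {
      connection.close();     // release the connection instead of leaking it
    }
  }
}
{code}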

> HBase TableOutputFormat has connection leak in getRecordWriter
> --
>
> Key: HBASE-16017
> URL: https://issues.apache.org/jira/browse/HBASE-16017
> Project: HBase
>  Issue Type: Bug
>Reporter: Zhan Zhang
>Assignee: Zhan Zhang
> Attachments: HBASE-16017-1.patch
>
>
> Currently getRecordWriter will not release the connection until the JVM 
> terminates, which is not a valid assumption given that the function may be 
> invoked many times in the same JVM lifecycle. Inside of mapreduce, the issue 
> has already been fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16017) HBase TableOutputFormat has connection leak in getRecordWriter

2016-06-13 Thread Zhan Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhan Zhang updated HBASE-16017:
---
Attachment: HBASE-16017-1.patch

> HBase TableOutputFormat has connection leak in getRecordWriter
> --
>
> Key: HBASE-16017
> URL: https://issues.apache.org/jira/browse/HBASE-16017
> Project: HBase
>  Issue Type: Bug
>Reporter: Zhan Zhang
>Assignee: Zhan Zhang
> Attachments: HBASE-16017-1.patch
>
>
> Currently getRecordWriter will not release the connection until the JVM 
> terminates, which is not a valid assumption given that the function may be 
> invoked many times in the same JVM lifecycle. Inside of mapreduce, the issue 
> has already been fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16017) HBase TableOutputFormat has connection leak in getRecordWriter

2016-06-13 Thread Zhan Zhang (JIRA)
Zhan Zhang created HBASE-16017:
--

 Summary: HBase TableOutputFormat has connection leak in 
getRecordWriter
 Key: HBASE-16017
 URL: https://issues.apache.org/jira/browse/HBASE-16017
 Project: HBase
  Issue Type: Bug
Reporter: Zhan Zhang
Assignee: Zhan Zhang


Currently getRecordWriter will not release the connection until the JVM 
terminates, which is not a valid assumption given that the function may be 
invoked many times in the same JVM lifecycle. Inside of mapreduce, the issue 
has already been fixed.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5291) Add Kerberos HTTP SPNEGO authentication support to HBase web consoles

2016-06-13 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328338#comment-15328338
 ] 

Josh Elser commented on HBASE-5291:
---

bq. Kerberos errors are often hard to understand, so maybe it's worth checking 
whether all required params are present and throwing a human-readable error 
about it instead of relying on the Kerberos API.

Ahh, that's a good point. Fail-fast is definitely something we can (and should) 
do in HBase land instead of letting it filter up into Hadoop.
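
A minimal sketch of what that fail-fast check could look like (the key names 
below are illustrative placeholders, not the patch's actual configuration 
keys):

{code}
import javax.servlet.ServletException;
import org.apache.hadoop.conf.Configuration;

// Sketch only: verify the SPNEGO-related params up front and fail with a
// readable message, rather than surfacing an opaque Kerberos error later.
final class SpnegoConfigCheck {
  static void validate(Configuration conf) throws ServletException {
    for (String key : new String[] {
        "hbase.security.authentication.ui.principal",   // placeholder key
        "hbase.security.authentication.ui.keytab"}) {   // placeholder key
      String value = conf.get(key);
      if (value == null || value.isEmpty()) {
        throw new ServletException(
            "SPNEGO auth for the web UI is enabled but '" + key + "' is unset");
      }
    }
  }
}
{code}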

[~devaraj] had also mentioned to me offline that setting the Kerberos 
authentication value for {{hbase.security.authentication.ui}} to {{kerberos}} 
instead of {{spnego}} might be better. After re-skimming the patch and 
realizing that AuthenticationFilter also uses Kerberos (and not SPNEGO), I'm 
inclined to agree with him.

Let me put together a .004 quick.

> Add Kerberos HTTP SPNEGO authentication support to HBase web consoles
> -
>
> Key: HBASE-5291
> URL: https://issues.apache.org/jira/browse/HBASE-5291
> Project: HBase
>  Issue Type: Improvement
>  Components: master, regionserver, security
>Reporter: Andrew Purtell
>Assignee: Josh Elser
> Fix For: 2.0.0
>
> Attachments: HBASE-5291.001.patch, HBASE-5291.002.patch, 
> HBASE-5291.003.patch
>
>
> Like HADOOP-7119, the same motivations:
> {quote}
> Hadoop RPC already supports Kerberos authentication. 
> {quote}
> As does the HBase secure RPC engine.
> {quote}
> Kerberos enables single sign-on.
> Popular browsers (Firefox and Internet Explorer) have support for Kerberos 
> HTTP SPNEGO.
> Adding support for Kerberos HTTP SPNEGO to [HBase] web consoles would provide 
> a unified authentication mechanism and single sign-on for web UI and RPC.
> {quote}
> Also like HADOOP-7119, the same solution:
> A servlet filter is configured in front of all Hadoop web consoles for 
> authentication.
> This filter verifies if the incoming request is already authenticated by the 
> presence of a signed HTTP cookie. If the cookie is present, its signature is 
> valid and its value didn't expire; then the request continues its way to the 
> page invoked by the request. If the cookie is not present, it is invalid or 
> it expired; then the request is delegated to an authenticator handler. The 
> authenticator handler then is responsible for requesting/validating the 
> user-agent for the user credentials. This may require one or more additional 
> interactions between the authenticator handler and the user-agent (which will 
> be multiple HTTP requests). Once the authenticator handler verifies the 
> credentials and generates an authentication token, a signed cookie is 
> returned to the user-agent for all subsequent invocations.
> The authenticator handler is pluggable and 2 implementations are provided out 
> of the box: pseudo/simple and kerberos.
> 1. The pseudo/simple authenticator handler is equivalent to the Hadoop 
> pseudo/simple authentication. It trusts the value of the user.name query 
> string parameter. The pseudo/simple authenticator handler supports an 
> anonymous mode which accepts any request without requiring the user.name 
> query string parameter to create the token. This is the default behavior, 
> preserving the behavior of the HBase web consoles before this patch.
> 2. The kerberos authenticator handler implements the Kerberos HTTP SPNEGO 
> implementation. This authenticator handler will generate a token only if a 
> successful Kerberos HTTP SPNEGO interaction is performed between the 
> user-agent and the authenticator. Browsers like Firefox and Internet Explorer 
> support Kerberos HTTP SPNEGO.
> We can build on the support added to Hadoop via HADOOP-7119. Should just be a 
> matter of wiring up the filter to our infoservers in a similar manner. 
> And from 
> https://issues.apache.org/jira/browse/HBASE-5050?focusedCommentId=13171086&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13171086
> {quote}
> Hadoop 0.23 onwards has a hadoop-auth artifact that provides SPNEGO/Kerberos 
> authentication for webapps via a filter. You should consider using it. You 
> don't have to move Hbase to 0.23 for that, just consume the hadoop-auth 
> artifact, which has no dependencies on the rest of Hadoop 0.23 artifacts.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15947) Classes used only for tests included in main code base

2016-06-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328313#comment-15328313
 ] 

Hadoop QA commented on HBASE-15947:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
49s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
42s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 31s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 48s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
6s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m 38s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12810066/HBASE-15947-v3.patch |
| JIRA Issue | HBASE-15947 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 56c209c |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79 |
| findbugs | v3.0.0 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HBASE-Buil

[jira] [Commented] (HBASE-5291) Add Kerberos HTTP SPNEGO authentication support to HBase web consoles

2016-06-13 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328312#comment-15328312
 ] 

Sergey Soldatov commented on HBASE-5291:


The only thing I'm worried about is the auth failure errors in case some of the 
parameters are missing. Kerberos errors are often hard to understand, so maybe 
it's worth checking whether all required params are present and throwing a 
human-readable error about it instead of relying on the Kerberos API.

> Add Kerberos HTTP SPNEGO authentication support to HBase web consoles
> -
>
> Key: HBASE-5291
> URL: https://issues.apache.org/jira/browse/HBASE-5291
> Project: HBase
>  Issue Type: Improvement
>  Components: master, regionserver, security
>Reporter: Andrew Purtell
>Assignee: Josh Elser
> Fix For: 2.0.0
>
> Attachments: HBASE-5291.001.patch, HBASE-5291.002.patch, 
> HBASE-5291.003.patch
>
>
> Like HADOOP-7119, the same motivations:
> {quote}
> Hadoop RPC already supports Kerberos authentication. 
> {quote}
> As does the HBase secure RPC engine.
> {quote}
> Kerberos enables single sign-on.
> Popular browsers (Firefox and Internet Explorer) have support for Kerberos 
> HTTP SPNEGO.
> Adding support for Kerberos HTTP SPNEGO to [HBase] web consoles would provide 
> a unified authentication mechanism and single sign-on for web UI and RPC.
> {quote}
> Also like HADOOP-7119, the same solution:
> A servlet filter is configured in front of all Hadoop web consoles for 
> authentication.
> This filter verifies if the incoming request is already authenticated by the 
> presence of a signed HTTP cookie. If the cookie is present, its signature is 
> valid and its value didn't expire; then the request continues its way to the 
> page invoked by the request. If the cookie is not present, it is invalid or 
> it expired; then the request is delegated to an authenticator handler. The 
> authenticator handler then is responsible for requesting/validating the 
> user-agent for the user credentials. This may require one or more additional 
> interactions between the authenticator handler and the user-agent (which will 
> be multiple HTTP requests). Once the authenticator handler verifies the 
> credentials and generates an authentication token, a signed cookie is 
> returned to the user-agent for all subsequent invocations.
> The authenticator handler is pluggable and 2 implementations are provided out 
> of the box: pseudo/simple and kerberos.
> 1. The pseudo/simple authenticator handler is equivalent to the Hadoop 
> pseudo/simple authentication. It trusts the value of the user.name query 
> string parameter. The pseudo/simple authenticator handler supports an 
> anonymous mode which accepts any request without requiring the user.name 
> query string parameter to create the token. This is the default behavior, 
> preserving the behavior of the HBase web consoles before this patch.
> 2. The kerberos authenticator handler implements the Kerberos HTTP SPNEGO 
> implementation. This authenticator handler will generate a token only if a 
> successful Kerberos HTTP SPNEGO interaction is performed between the 
> user-agent and the authenticator. Browsers like Firefox and Internet Explorer 
> support Kerberos HTTP SPNEGO.
> We can build on the support added to Hadoop via HADOOP-7119. Should just be a 
> matter of wiring up the filter to our infoservers in a similar manner. 
> And from 
> https://issues.apache.org/jira/browse/HBASE-5050?focusedCommentId=13171086&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13171086
> {quote}
> Hadoop 0.23 onwards has a hadoop-auth artifact that provides SPNEGO/Kerberos 
> authentication for webapps via a filter. You should consider using it. You 
> don't have to move Hbase to 0.23 for that, just consume the hadoop-auth 
> artifact, which has no dependencies on the rest of Hadoop 0.23 artifacts.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15406) Split / merge switch left disabled after early termination of hbck

2016-06-13 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328276#comment-15328276
 ] 

Mikhail Antonov commented on HBASE-15406:
-

Then it's easier - I assume repair mode is kicked in only by operators who have 
some level of control and understanding.

My call would be:

 - revert it from 1.3 and proceed with the release; this is what I want to 
focus on, and we can re-resolve this jira again
 - leave the code in branch-1 and master, at least for now, and continue the 
discussion in the other jira you filed about the best way to address this

Opinions?



> Split / merge switch left disabled after early termination of hbck
> --
>
> Key: HBASE-15406
> URL: https://issues.apache.org/jira/browse/HBASE-15406
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Heng Chen
>Priority: Critical
>  Labels: reviewed
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15406.patch, HBASE-15406.v1.patch, 
> HBASE-15406_v1.patch, HBASE-15406_v2.patch, test.patch, wip.patch
>
>
> This was what I did on a cluster with 1.4.0-SNAPSHOT built Thursday:
> Run 'hbase hbck -disableSplitAndMerge' on gateway node of the cluster
> Terminate hbck early
> Enter hbase shell where I observed:
> {code}
> hbase(main):001:0> splitormerge_enabled 'SPLIT'
> false
> 0 row(s) in 0.3280 seconds
> hbase(main):002:0> splitormerge_enabled 'MERGE'
> false
> 0 row(s) in 0.0070 seconds
> {code}
> Expectation is that the split / merge switches should be restored to their 
> default values after hbck exits.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16010) Put draining function through Admin API

2016-06-13 Thread Matt Warhaftig (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328275#comment-15328275
 ] 

Matt Warhaftig commented on HBASE-16010:


{quote}
"or was the method supposed to be a List<ServerName> 
getDrainingRegionServers()? which is what the 
ServerManager.getDrainingServersList() returns?"
{quote}

Yes, you are on top of things Matteo - the method signature was changed from 
that preliminary draft to return a List<ServerName> of the draining region 
servers.

As of 6/13 the RPC and API methods are written and closely follow the ruby 
script's logic. I am hoping to wrap up the development in the next 48 hours.
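
For reference, a hedged sketch of the shape such an Admin-level API could take 
(names and signatures here are illustrative only; the final API in the patch 
may differ):

{code}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.ServerName;

// Hypothetical sketch of the Admin-level draining API under discussion;
// method names and signatures are illustrative, not the committed interface.
interface DrainingAdmin {
  /** Mark servers as draining so regions move off and none are assigned back. */
  void drainRegionServers(List<ServerName> servers) throws IOException;

  /** @return servers currently marked as draining
   *  (cf. ServerManager.getDrainingServersList()). */
  List<ServerName> listDrainingRegionServers() throws IOException;

  /** Remove servers from the draining set. */
  void removeDrainFromRegionServers(List<ServerName> servers) throws IOException;
}
{code}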

> Put draining function through Admin API
> ---
>
> Key: HBASE-16010
> URL: https://issues.apache.org/jira/browse/HBASE-16010
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jerry He
>Assignee: Matt Warhaftig
>Priority: Minor
>
> Currently, there is no Admin API for the draining function. Clients have to 
> interact directly with the ZooKeeper draining node to add and remove draining 
> servers.
> For example, in draining_servers.rb:
> {code}
>   zkw = org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.new(config, 
> "draining_servers", nil)
>   parentZnode = zkw.drainingZNode
>   begin
> for server in servers
>   node = ZKUtil.joinZNode(parentZnode, server)
>   ZKUtil.createAndFailSilent(zkw, node)
> end
>   ensure
> zkw.close()
>   end
> {code}
> This is not good in cases like secure clusters with protected Zookeeper nodes.
> Let's put draining function through Admin API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16010) Put draining function through Admin API

2016-06-13 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328274#comment-15328274
 ] 

Enis Soztutar commented on HBASE-16010:
---

bq. I don't remember anything in the AM or ServerManager that keeps track of 
regions moving because of the server in the draining list.
I think we should have a decommissioning mode for servers in the core 
assignment paths as a concept. Not in this patch though; we should do it in the 
assignment re-work. Both assignment and the LB should be aware of 
decommissioning servers.

> Put draining function through Admin API
> ---
>
> Key: HBASE-16010
> URL: https://issues.apache.org/jira/browse/HBASE-16010
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jerry He
>Assignee: Matt Warhaftig
>Priority: Minor
>
> Currently, there is no Admin API for the draining function. Clients have to 
> interact directly with the ZooKeeper draining node to add and remove draining 
> servers.
> For example, in draining_servers.rb:
> {code}
>   zkw = org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.new(config, 
> "draining_servers", nil)
>   parentZnode = zkw.drainingZNode
>   begin
> for server in servers
>   node = ZKUtil.joinZNode(parentZnode, server)
>   ZKUtil.createAndFailSilent(zkw, node)
> end
>   ensure
> zkw.close()
>   end
> {code}
> This is not good in cases like secure clusters with protected Zookeeper nodes.
> Let's put draining function through Admin API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16016) AssignmentManager#waitForAssignment could have unexpected negative deadline

2016-06-13 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328265#comment-15328265
 ] 

Matteo Bertozzi commented on HBASE-16016:
-

+1

> AssignmentManager#waitForAssignment could have unexpected negative deadline
> ---
>
> Key: HBASE-16016
> URL: https://issues.apache.org/jira/browse/HBASE-16016
> Project: HBase
>  Issue Type: Bug
>Reporter: Stephen Yuan Jiang
>Assignee: Stephen Yuan Jiang
> Attachments: HBase-16016.v1-master.patch
>
>
> AssignmentManager#waitForAssignment(HRegionInfo regionInfo) passes 
> Long.MAX_VALUE deadline and intends to wait forever.  However, the deadline 
> would be overflowed from AssignmentManager#waitForAssignment(final 
> Collection<HRegionInfo> regionSet, final boolean waitTillAllAssigned, final 
> int reassigningRegions, final long minEndTime), which would cause no wait!
> {code}
>   /**
>* Waits until the specified region has completed assignment.
>* 
>* If the region is already assigned, returns immediately.  Otherwise, 
> method
>* blocks until the region is assigned.
>* @param regionInfo region to wait on assignment for
>* @return true if the region is assigned false otherwise.
>* @throws InterruptedException
>*/
>   public boolean waitForAssignment(HRegionInfo regionInfo)
>   throws InterruptedException {
> ArrayList<HRegionInfo> regionSet = new ArrayList<HRegionInfo>(1);
> regionSet.add(regionInfo);
> return waitForAssignment(regionSet, true, Long.MAX_VALUE);
>   }
>   /**
>* Waits until the specified region has completed assignment, or the 
> deadline is reached.
>*/
>   protected boolean waitForAssignment(final Collection<HRegionInfo> regionSet,
>   final boolean waitTillAllAssigned, final int reassigningRegions,
>   final long minEndTime) throws InterruptedException {
> long deadline = minEndTime + bulkPerRegionOpenTimeGuesstimate * 
> (reassigningRegions + 1);  // > OVERFLOW
> return waitForAssignment(regionSet, waitTillAllAssigned, deadline);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15406) Split / merge switch left disabled after early termination of hbck

2016-06-13 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328262#comment-15328262
 ] 

Enis Soztutar commented on HBASE-15406:
---

bq. I don't really want this jira to hold off the 1.3 release if possible, so 
I'd be fine to revert it from 1.3 and reduce it from Critical to Major.
Agreed. This is not critical for 1.3. 
bq. if your scheduled HBCK crashes (not reporting inconsistencies, but crashes 
so that switches aren't restored) you have bigger problems.
Currently, HBCK will disable the balancer and the split/merge switch only when 
it is fixing. In read-only mode (without any -fix or -repair options), it won't 
disable those anyway.


> Split / merge switch left disabled after early termination of hbck
> --
>
> Key: HBASE-15406
> URL: https://issues.apache.org/jira/browse/HBASE-15406
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Heng Chen
>Priority: Critical
>  Labels: reviewed
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15406.patch, HBASE-15406.v1.patch, 
> HBASE-15406_v1.patch, HBASE-15406_v2.patch, test.patch, wip.patch
>
>
> This was what I did on a cluster with 1.4.0-SNAPSHOT built Thursday:
> Run 'hbase hbck -disableSplitAndMerge' on gateway node of the cluster
> Terminate hbck early
> Enter hbase shell where I observed:
> {code}
> hbase(main):001:0> splitormerge_enabled 'SPLIT'
> false
> 0 row(s) in 0.3280 seconds
> hbase(main):002:0> splitormerge_enabled 'MERGE'
> false
> 0 row(s) in 0.0070 seconds
> {code}
> Expectation is that the split / merge switches should be restored to their 
> default values after hbck exits.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15860) Improvements for HBASE-14280 - Fixing Bulkload for HDFS HA Clusters

2016-06-13 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328257#comment-15328257
 ] 

Esteban Gutierrez commented on HBASE-15860:
---

Thanks [~wchevreuil]! The failures don't seem related to this change so if 
nobody has any additional comment I will commit shortly.

> Improvements for HBASE-14280 - Fixing Bulkload for HDFS HA Clusters
> ---
>
> Key: HBASE-15860
> URL: https://issues.apache.org/jira/browse/HBASE-15860
> Project: HBase
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 1.0.0
>Reporter: Wellington Chevreuil
>Priority: Minor
> Attachments: HBASE-15860.master.002.patch, 
> HBASE-15860.master.002.patch, HBASE-15860.patch
>
>
> HBASE-14280 introduced a fix for bulkload failures when referring to a remote 
> cluster nameservice id while bulkloading from an HA cluster.
> The HBASE-14280 solution in *FSHDFSUtils.getNNAddresses* was to invoke 
> *DFSUtil.getNNServiceRpcAddressesForCluster* instead of 
> *DFSUtil.getNNServiceRpcAddresses*. This works for hadoop 2.6 and above.
> The proposed change here is to use "*DFSUtil.getRpcAddressesForNameserviceId*" 
> instead, which already returns only the addresses for the specific nameservice 
> informed. This is available since hadoop 2.4.
> Sample proposal on FSHDFSUtils.getNNAddresses:
> ...
> {noformat}
> String nameServiceId = serviceName.split(":")[1];
> if (dfsUtilClazz == null) {
>   dfsUtilClazz = Class.forName("org.apache.hadoop.hdfs.DFSUtil");
> }
> if (getNNAddressesMethod == null) {
>   getNNAddressesMethod =
>       dfsUtilClazz.getMethod("getRpcAddressesForNameserviceId",
>         Configuration.class, String.class, String.class);
> }
> Map<String, InetSocketAddress> nnMap =
>     (Map<String, InetSocketAddress>) getNNAddressesMethod
>         .invoke(null, conf, nameServiceId, null);
> for (Map.Entry<String, InetSocketAddress> e2 : nnMap.entrySet()) {
>   InetSocketAddress addr = e2.getValue();
>   addresses.add(addr);
> }
> ...
> {noformat}
> Will also add test conditions for *FSHDFSUtils.isSameHdfs* to verify the 
> scenario where multiple nameservice ids are defined.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15947) Classes used only for tests included in main code base

2016-06-13 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HBASE-15947:
--
Attachment: HBASE-15947-v3.patch

Trying one more time. I think it failed because of changes that must've gone in 
between me rebasing and submitting. If that's not it, I don't know why the 
patch wouldn't apply...

> Classes used only for tests included in main code base
> --
>
> Key: HBASE-15947
> URL: https://issues.apache.org/jira/browse/HBASE-15947
> Project: HBase
>  Issue Type: Bug
>Reporter: Sean Mackrory
>Priority: Trivial
> Attachments: HBASE-15947-branch-1-v2.patch, HBASE-15947-v2.patch, 
> HBASE-15947-v3.patch
>
>
> The classes LoadTestKVGenerator and RedundantKVGenerator in 
> hbase-common/src/main/java/org/apache/hadoop/hbase/util/test are only used in 
> tests (at least in the HBase codebase itself - not sure if they would have 
> any use for other software). Should probably move them into src/test/java to 
> keep a good separation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-15584) Revisit handling of BackupState#CANCELLED

2016-06-13 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HBASE-15584:
--

Assignee: Ted Yu

> Revisit handling of BackupState#CANCELLED
> -
>
> Key: HBASE-15584
> URL: https://issues.apache.org/jira/browse/HBASE-15584
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 15584.v1.txt, 15584.v2.txt
>
>
> During review of HBASE-15411, Enis made the following point:
> {code}
> nobody puts the backup in cancelled state. setCancelled() is not used. So if 
> I abort a backup, who writes to the system table the new state? 
> Not sure whether this is a phase 1 patch issue or due to this patch. We can 
> open a new jira and address it there if you do not want to do it in this 
> patch. 
> Also maybe this should be named ABORTED rather than CANCELLED.
> {code}
> This issue is to decide whether this state should be kept (e.g. through 
> notification from the procedure V2 framework in response to an abort).
> If it is to be kept, the state should be renamed ABORTED.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16016) AssignmentManager#waitForAssignment could have unexpected negative deadline

2016-06-13 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-16016:
---
Attachment: HBase-16016.v1-master.patch

> AssignmentManager#waitForAssignment could have unexpected negative deadline
> ---
>
> Key: HBASE-16016
> URL: https://issues.apache.org/jira/browse/HBASE-16016
> Project: HBase
>  Issue Type: Bug
>Reporter: Stephen Yuan Jiang
>Assignee: Stephen Yuan Jiang
> Attachments: HBase-16016.v1-master.patch
>
>
> AssignmentManager#waitForAssignment(HRegionInfo regionInfo) passes 
> Long.MAX_VALUE deadline and intends to wait forever.  However, the deadline 
> would be overflowed from AssignmentManager#waitForAssignment(final 
> Collection<HRegionInfo> regionSet, final boolean waitTillAllAssigned, final 
> int reassigningRegions, final long minEndTime), which would cause no wait!
> {code}
>   /**
>* Waits until the specified region has completed assignment.
>* 
>* If the region is already assigned, returns immediately.  Otherwise, 
> method
>* blocks until the region is assigned.
>* @param regionInfo region to wait on assignment for
>* @return true if the region is assigned false otherwise.
>* @throws InterruptedException
>*/
>   public boolean waitForAssignment(HRegionInfo regionInfo)
>   throws InterruptedException {
> ArrayList<HRegionInfo> regionSet = new ArrayList<HRegionInfo>(1);
> regionSet.add(regionInfo);
> return waitForAssignment(regionSet, true, Long.MAX_VALUE);
>   }
>   /**
>* Waits until the specified region has completed assignment, or the 
> deadline is reached.
>*/
>   protected boolean waitForAssignment(final Collection<HRegionInfo> regionSet,
>   final boolean waitTillAllAssigned, final int reassigningRegions,
>   final long minEndTime) throws InterruptedException {
> long deadline = minEndTime + bulkPerRegionOpenTimeGuesstimate * 
> (reassigningRegions + 1);  // > OVERFLOW
> return waitForAssignment(regionSet, waitTillAllAssigned, deadline);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16016) AssignmentManager#waitForAssignment could have unexpected negative deadline

2016-06-13 Thread Stephen Yuan Jiang (JIRA)
Stephen Yuan Jiang created HBASE-16016:
--

 Summary: AssignmentManager#waitForAssignment could have unexpected 
negative deadline
 Key: HBASE-16016
 URL: https://issues.apache.org/jira/browse/HBASE-16016
 Project: HBase
  Issue Type: Bug
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang


AssignmentManager#waitForAssignment(HRegionInfo regionInfo) passes 
Long.MAX_VALUE deadline and intends to wait forever.  However, the deadline 
would be overflowed from AssignmentManager#waitForAssignment(final 
Collection<HRegionInfo> regionSet, final boolean waitTillAllAssigned, final int 
reassigningRegions, final long minEndTime), which would cause no wait!

{code}
  /**
   * Waits until the specified region has completed assignment.
   * 
   * If the region is already assigned, returns immediately.  Otherwise, method
   * blocks until the region is assigned.
   * @param regionInfo region to wait on assignment for
   * @return true if the region is assigned false otherwise.
   * @throws InterruptedException
   */
  public boolean waitForAssignment(HRegionInfo regionInfo)
  throws InterruptedException {
ArrayList<HRegionInfo> regionSet = new ArrayList<HRegionInfo>(1);
regionSet.add(regionInfo);
return waitForAssignment(regionSet, true, Long.MAX_VALUE);
  }

  /**
   * Waits until the specified region has completed assignment, or the deadline 
is reached.
   */
  protected boolean waitForAssignment(final Collection regionSet,
  final boolean waitTillAllAssigned, final int reassigningRegions,
  final long minEndTime) throws InterruptedException {
long deadline = minEndTime + bulkPerRegionOpenTimeGuesstimate * 
(reassigningRegions + 1);  // > OVERFLOW
return waitForAssignment(regionSet, waitTillAllAssigned, deadline);
  }
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16016) AssignmentManager#waitForAssignment could have unexpected negative deadline

2016-06-13 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-16016:
---
Attachment: (was: WaitOverflow.patch)

> AssignmentManager#waitForAssignment could have unexpected negative deadline
> ---
>
> Key: HBASE-16016
> URL: https://issues.apache.org/jira/browse/HBASE-16016
> Project: HBase
>  Issue Type: Bug
>Reporter: Stephen Yuan Jiang
>Assignee: Stephen Yuan Jiang
>
> AssignmentManager#waitForAssignment(HRegionInfo regionInfo) passes a 
> Long.MAX_VALUE deadline and intends to wait forever.  However, the deadline 
> computed in AssignmentManager#waitForAssignment(final 
> Collection<HRegionInfo> regionSet, final boolean waitTillAllAssigned, final 
> int reassigningRegions, final long minEndTime) overflows to a negative 
> value, which causes no wait at all!
> {code}
>   /**
>    * Waits until the specified region has completed assignment.
>    * <p>
>    * If the region is already assigned, returns immediately.  Otherwise, method
>    * blocks until the region is assigned.
>    * @param regionInfo region to wait on assignment for
>    * @return true if the region is assigned false otherwise.
>    * @throws InterruptedException
>    */
>   public boolean waitForAssignment(HRegionInfo regionInfo)
>       throws InterruptedException {
>     ArrayList<HRegionInfo> regionSet = new ArrayList<HRegionInfo>(1);
>     regionSet.add(regionInfo);
>     return waitForAssignment(regionSet, true, Long.MAX_VALUE);
>   }
>
>   /**
>    * Waits until the specified region has completed assignment, or the 
>    * deadline is reached.
>    */
>   protected boolean waitForAssignment(final Collection<HRegionInfo> regionSet,
>       final boolean waitTillAllAssigned, final int reassigningRegions,
>       final long minEndTime) throws InterruptedException {
>     long deadline = minEndTime + bulkPerRegionOpenTimeGuesstimate * (reassigningRegions + 1);  // <-- OVERFLOW
>     return waitForAssignment(regionSet, waitTillAllAssigned, deadline);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16016) AssignmentManager#waitForAssignment could have unexpected negative deadline

2016-06-13 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-16016:
---
Attachment: WaitOverflow.patch

> AssignmentManager#waitForAssignment could have unexpected negative deadline
> ---
>
> Key: HBASE-16016
> URL: https://issues.apache.org/jira/browse/HBASE-16016
> Project: HBase
>  Issue Type: Bug
>Reporter: Stephen Yuan Jiang
>Assignee: Stephen Yuan Jiang
>
> AssignmentManager#waitForAssignment(HRegionInfo regionInfo) passes a 
> Long.MAX_VALUE deadline and intends to wait forever.  However, the deadline 
> computed in AssignmentManager#waitForAssignment(final 
> Collection<HRegionInfo> regionSet, final boolean waitTillAllAssigned, final 
> int reassigningRegions, final long minEndTime) overflows to a negative 
> value, which causes no wait at all!
> {code}
>   /**
>    * Waits until the specified region has completed assignment.
>    * <p>
>    * If the region is already assigned, returns immediately.  Otherwise, method
>    * blocks until the region is assigned.
>    * @param regionInfo region to wait on assignment for
>    * @return true if the region is assigned false otherwise.
>    * @throws InterruptedException
>    */
>   public boolean waitForAssignment(HRegionInfo regionInfo)
>       throws InterruptedException {
>     ArrayList<HRegionInfo> regionSet = new ArrayList<HRegionInfo>(1);
>     regionSet.add(regionInfo);
>     return waitForAssignment(regionSet, true, Long.MAX_VALUE);
>   }
>
>   /**
>    * Waits until the specified region has completed assignment, or the 
>    * deadline is reached.
>    */
>   protected boolean waitForAssignment(final Collection<HRegionInfo> regionSet,
>       final boolean waitTillAllAssigned, final int reassigningRegions,
>       final long minEndTime) throws InterruptedException {
>     long deadline = minEndTime + bulkPerRegionOpenTimeGuesstimate * (reassigningRegions + 1);  // <-- OVERFLOW
>     return waitForAssignment(regionSet, waitTillAllAssigned, deadline);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15860) Improvements for HBASE-14280 - Fixing Bulkload for HDFS HA Clusters

2016-06-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328077#comment-15328077
 ] 

Hadoop QA commented on HBASE-15860:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s {color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s {color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 54s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 55s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s {color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s {color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s {color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 25m 51s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s {color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 20s {color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s {color} | {color:green} Patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 121m 38s {color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.mapreduce.TestLoadIncrementalHFilesUseSecurityEndPoint |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12809896/HBASE-15860.master.002.patch |
| JIRA Issue | HBASE-15860 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh |
| git revision | master / 56c209c |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 /usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79 |

[jira] [Commented] (HBASE-15971) Regression: Random Read/WorkloadC slower in 1.x than 0.98

2016-06-13 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328059#comment-15328059
 ] 

stack commented on HBASE-15971:
---

Thanks for the reminder of the work over in HBASE-14331. Let me go comment over 
there.

> Regression: Random Read/WorkloadC slower in 1.x than 0.98
> -
>
> Key: HBASE-15971
> URL: https://issues.apache.org/jira/browse/HBASE-15971
> Project: HBase
>  Issue Type: Sub-task
>  Components: rpc
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: 098.hits.png, 098.png, HBASE-15971.branch-1.001.patch, 
> Screen Shot 2016-06-10 at 5.08.24 PM.png, Screen Shot 2016-06-10 at 5.08.26 
> PM.png, branch-1.hits.png, branch-1.png, 
> flight_recording_10172402220203_28.branch-1.jfr, 
> flight_recording_10172402220203_29.09820.0.98.20.jfr, handlers.fp.png, 
> hits.fp.png, hits.patched1.0.vs.unpatched1.0.vs.098.png, run_ycsb.sh
>
>
> branch-1 is slower than 0.98 doing YCSB random read/workloadC. It seems to be 
> doing about 1/2 the throughput of 0.98.
> In branch-1, we have low handler occupancy compared to 0.98. Hacking in a 
> reader-thread occupancy metric, it is about the same in both. In the parent 
> issue, hacking out the scheduler, I am able to get branch-1 to go 3x faster, 
> so I will dig in here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15908) Checksum verification is broken due to incorrect passing of ByteBuffers in DataChecksum

2016-06-13 Thread Samir Ahmic (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328040#comment-15328040
 ] 

Samir Ahmic commented on HBASE-15908:
-

Here is the exception I'm seeing:
{code}
Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading HFile Trailer from file hdfs://P3cluster/hbase/data/default/cluster_test/37b19126a6455b5efd454b7774e22298/test_cf/390bef6889a042d6a08a1a386f29314d
    at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:518)
    at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:547)
    at org.apache.hadoop.hbase.regionserver.StoreFileReader.<init>(StoreFileReader.java:94)
    at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:270)
    at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:419)
    at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:526)
    at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:516)
    at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:614)
    at org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:115)
    at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:481)
    at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:478)
    ... 6 more
---> Caused by: java.lang.IllegalArgumentException: input ByteBuffers must be direct buffers
    at org.apache.hadoop.util.NativeCrc32.nativeVerifyChunkedSums(Native Method)
    at org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:57)
    at org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:299)
    at org.apache.hadoop.hbase.io.hfile.ChecksumUtil.validateChecksum(ChecksumUtil.java:120)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.validateChecksum(HFileBlock.java:1775)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1714)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1547)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl$2.nextBlock(HFileBlock.java:1447)
{code}

I have compiled the master branch against hadoop-2.5.2 and deployed it in 
distributed mode.

> Checksum verification is broken due to incorrect passing of ByteBuffers in 
> DataChecksum
> ---
>
> Key: HBASE-15908
> URL: https://issues.apache.org/jira/browse/HBASE-15908
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Blocker
> Fix For: 1.3.0
>
> Attachments: master.v1.patch
>
>
> It looks like HBASE-11625 (cc [~stack], [~appy]) has broken checksum 
> verification? I'm seeing the following on my cluster (1.3.0, Hadoop 2.7).
> Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading HFile Trailer from file 
>   at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:497)
>   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:525)
>   at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1135)
>   at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:259)
>   at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:427)
>   at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:528)
>   at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:518)
>   at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:652)
>   at org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:117)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:519)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:516)
>   ... 6 more
> Caused by: java.lang.IllegalArgumentException: input ByteBuffers must be direct buffers
>   at org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSums(Native Method)
>   at org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:59)
>   at org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:301)
>   at org.apache.hadoop.hbase.io.hfile.ChecksumUtil.validateChecksum(ChecksumUtil.java:120)
>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.validateChecksum(HFileBlock.java:1785)
>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:172

[jira] [Created] (HBASE-16015) Usability - VerifyReplication performance is too slow

2016-06-13 Thread KarthikP (JIRA)
KarthikP created HBASE-16015:


 Summary: Usability - VerifyReplication performance is too slow
 Key: HBASE-16015
 URL: https://issues.apache.org/jira/browse/HBASE-16015
 Project: HBase
  Issue Type: Improvement
  Components: Usability
Reporter: KarthikP
Priority: Critical


I see VerifyReplication is too slow in a geo-replication cluster, so I dug into 
the code, where the default input scanner caching is set to 1 for the target 
cluster request.
This value should have a better default, or could be exposed in the usage 
command:
-Dhbase.mapreduce.scan.cachedrows=100

{code:title=TableInputFormat.java|borderStyle=solid}
public static final String SCAN_CACHEDROWS = "hbase.mapreduce.scan.cachedrows";
{code}

{code:title=VerifyReplication.java|borderStyle=solid}
Configuration conf = context.getConfiguration();
final Scan scan = new Scan();
scan.setCaching(conf.getInt(TableInputFormat.SCAN_CACHEDROWS, 1));
{code}


If agreed, I will add this line to the printUsage method, as shown below:

{code:title=VerifyReplication.java|borderStyle=solid}
System.err.println("For performance consider the following option, Input 
scanner caching for source to target cluster request\n"
+ "-Dhbase.mapreduce.scan.cachedrows=100");
{code}
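
For context, a hedged example of how the property would be passed on the 
command line; the peer id and table name are placeholders:

{noformat}
hbase org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication \
  -Dhbase.mapreduce.scan.cachedrows=100 <peerId> <tableName>
{noformat}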



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15406) Split / merge switch left disabled after early termination of hbck

2016-06-13 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328011#comment-15328011
 ] 

Mikhail Antonov commented on HBASE-15406:
-

[~enis] [~chenheng] [~stack] this patch is addressing a real problem, but I agree 
that all the solutions discussed above so far have their shortcomings.

I don't really want this jira to hold off the 1.3 release if possible, so I'd be 
fine with reverting it from 1.3 and reducing it from Critical to Major. The 
reasoning: from an operational standpoint, hitting Ctrl-C while running HBCK is a 
preventable mis-command; prod clusters should have scheduled and frequent HBCK 
runs anyway, and if your scheduled HBCK crashes (not just reporting 
inconsistencies, but crashing so that the switches aren't restored) you have 
bigger problems.

Thoughts?

> Split / merge switch left disabled after early termination of hbck
> --
>
> Key: HBASE-15406
> URL: https://issues.apache.org/jira/browse/HBASE-15406
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Heng Chen
>Priority: Critical
>  Labels: reviewed
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15406.patch, HBASE-15406.v1.patch, 
> HBASE-15406_v1.patch, HBASE-15406_v2.patch, test.patch, wip.patch
>
>
> This was what I did on a cluster with a 1.4.0-SNAPSHOT built Thursday:
> Run 'hbase hbck -disableSplitAndMerge' on a gateway node of the cluster
> Terminate hbck early
> Enter hbase shell where I observed:
> {code}
> hbase(main):001:0> splitormerge_enabled 'SPLIT'
> false
> 0 row(s) in 0.3280 seconds
> hbase(main):002:0> splitormerge_enabled 'MERGE'
> false
> 0 row(s) in 0.0070 seconds
> {code}
> Expectation is that the split / merge switches should be restored to their 
> default values after hbck exits.
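
If the switches are left off, one manual workaround is to flip them back from 
the shell; this assumes the splitormerge_switch command that accompanies 
splitormerge_enabled:

{code}
hbase(main):003:0> splitormerge_switch 'SPLIT', true
hbase(main):004:0> splitormerge_switch 'MERGE', true
{code}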



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15344) add 1.3 to prereq tables in ref guide

2016-06-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15327950#comment-15327950
 ] 

Sean Busbey commented on HBASE-15344:
-

+1

> add 1.3 to prereq tables in ref guide
> -
>
> Key: HBASE-15344
> URL: https://issues.apache.org/jira/browse/HBASE-15344
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Attachments: HBASE-15344.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15344) add 1.3 to prereq tables in ref guide

2016-06-13 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15327920#comment-15327920
 ] 

Mikhail Antonov commented on HBASE-15344:
-

[~busbey] Yeah, as I posted to dev@, let's leave 2.4 in place.

Review on the patch...?

> add 1.3 to prereq tables in ref guide
> -
>
> Key: HBASE-15344
> URL: https://issues.apache.org/jira/browse/HBASE-15344
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Attachments: HBASE-15344.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16014) Get and Put constructor argument lists are divergent

2016-06-13 Thread Nick Dimiduk (JIRA)
Nick Dimiduk created HBASE-16014:


 Summary: Get and Put constructor argument lists are divergent
 Key: HBASE-16014
 URL: https://issues.apache.org/jira/browse/HBASE-16014
 Project: HBase
  Issue Type: Bug
Reporter: Nick Dimiduk


The APIs for constructing Get and Put objects for a specific rowkey are quite 
different. 
[Put|http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Put.html#constructor_summary]
 supports many more variations for specifying the target rowkey and timestamp 
compared to 
[Get|http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Get.html#constructor_summary].
 Notably lacking are {{Get(byte[], int, int)}} and {{Get(ByteBuffer)}} 
variations.
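
To illustrate the asymmetry, a small sketch; the Put constructor shown exists, 
and the copy in the Get case is the workaround forced by the missing variants:

{code}
import java.util.Arrays;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;

public class GetPutAsymmetry {
  public static void main(String[] args) {
    byte[] buf = "prefix-rowkey1-suffix".getBytes();
    int offset = 7, length = 7;  // the "rowkey1" slice of the larger buffer
    // Put can address the rowkey as a slice of a larger buffer:
    Put put = new Put(buf, offset, length);
    // Get lacks a (byte[], int, int) constructor, so the slice must be copied:
    Get get = new Get(Arrays.copyOfRange(buf, offset, offset + length));
  }
}
{code}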



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15335) Add composite key support in row key

2016-06-13 Thread Zhan Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhan Zhang updated HBASE-15335:
---
Attachment: HBASE-15335-6.patch

> Add composite key support in row key
> 
>
> Key: HBASE-15335
> URL: https://issues.apache.org/jira/browse/HBASE-15335
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zhan Zhang
>Assignee: Zhan Zhang
> Attachments: HBASE-15335-1.patch, HBASE-15335-2.patch, 
> HBASE-15335-3.patch, HBASE-15335-4.patch, HBASE-15335-5.patch, 
> HBASE-15335-6.patch
>
>
> Add composite key filter support in the connector.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15908) Checksum verification is broken due to incorrect passing of ByteBuffers in DataChecksum

2016-06-13 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15327895#comment-15327895
 ] 

Mikhail Antonov commented on HBASE-15908:
-

Hadoop 2.5.2 shouldn't attempt to use the native checksum even on this codepath, 
as I recall.

> Checksum verification is broken due to incorrect passing of ByteBuffers in 
> DataChecksum
> ---
>
> Key: HBASE-15908
> URL: https://issues.apache.org/jira/browse/HBASE-15908
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Blocker
> Fix For: 1.3.0
>
> Attachments: master.v1.patch
>
>
> It looks like HBASE-11625 (cc [~stack], [~appy]) has broken checksum 
> verification? I'm seeing the following on my cluster (1.3.0, Hadoop 2.7).
> Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading HFile Trailer from file 
>   at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:497)
>   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:525)
>   at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1135)
>   at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:259)
>   at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:427)
>   at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:528)
>   at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:518)
>   at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:652)
>   at org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:117)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:519)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:516)
>   ... 6 more
> Caused by: java.lang.IllegalArgumentException: input ByteBuffers must be direct buffers
>   at org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSums(Native Method)
>   at org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:59)
>   at org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:301)
>   at org.apache.hadoop.hbase.io.hfile.ChecksumUtil.validateChecksum(ChecksumUtil.java:120)
>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.validateChecksum(HFileBlock.java:1785)
>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1728)
>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1558)
>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlock(HFileBlock.java:1397)
>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlockWithBlockType(HFileBlock.java:1405)
>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:151)
>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV3.<init>(HFileReaderV3.java:78)
>   at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:487)
>   ... 16 more
> Prior to this change we wouldn't use native crc32 checksum verification, since 
> in Hadoop's DataChecksum#verifyChunkedSums we would go down this codepath:
> if (data.hasArray() && checksums.hasArray()) {
>   
> }
> So we were fine. However, now we drop below that and try to use the slightly 
> different variant of native crc32 (if one is available) taking a ByteBuffer 
> instead of byte[], which expects a DirectByteBuffer, not a heap BB. 
> I think the easiest fix, working on all Hadoop versions, would be to remove the 
> asReadOnlyBuffer() conversion here:
> !validateChecksum(offset, onDiskBlockByteBuffer.asReadOnlyBuffer(), hdrSize)) {
> I don't see why we need it. Let me test.
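
To make the failure mode above concrete, a small self-contained sketch of the 
java.nio behavior involved (illustrative only, not HBase code): a read-only 
view of a heap buffer reports hasArray() == false, so the pure-Java branch is 
skipped and a non-direct buffer reaches the native checksum path.

{code}
import java.nio.ByteBuffer;

public class ReadOnlyBufferDemo {
  public static void main(String[] args) {
    ByteBuffer heap = ByteBuffer.allocate(64);
    System.out.println(heap.hasArray());                     // true:  Java path
    System.out.println(heap.asReadOnlyBuffer().hasArray());  // false: native path chosen
    System.out.println(heap.asReadOnlyBuffer().isDirect());  // false: NativeCrc32 rejects it
  }
}
{code}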



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15908) Checksum verification is broken due to incorrect passing of ByteBuffers in DataChecksum

2016-06-13 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15327892#comment-15327892
 ] 

Mikhail Antonov commented on HBASE-15908:
-

[~asamir] Do you see the exact same error? Basically a class-cast error in there?

> Checksum verification is broken due to incorrect passing of ByteBuffers in 
> DataChecksum
> ---
>
> Key: HBASE-15908
> URL: https://issues.apache.org/jira/browse/HBASE-15908
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Blocker
> Fix For: 1.3.0
>
> Attachments: master.v1.patch
>
>
> It looks like HBASE-11625 (cc [~stack], [~appy]) has broken checksum 
> verification? I'm seeing the following on my cluster (1.3.0, Hadoop 2.7).
> Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading HFile Trailer from file 
>   at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:497)
>   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:525)
>   at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1135)
>   at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:259)
>   at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:427)
>   at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:528)
>   at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:518)
>   at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:652)
>   at org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:117)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:519)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:516)
>   ... 6 more
> Caused by: java.lang.IllegalArgumentException: input ByteBuffers must be direct buffers
>   at org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSums(Native Method)
>   at org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:59)
>   at org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:301)
>   at org.apache.hadoop.hbase.io.hfile.ChecksumUtil.validateChecksum(ChecksumUtil.java:120)
>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.validateChecksum(HFileBlock.java:1785)
>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1728)
>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1558)
>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlock(HFileBlock.java:1397)
>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlockWithBlockType(HFileBlock.java:1405)
>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:151)
>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV3.<init>(HFileReaderV3.java:78)
>   at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:487)
>   ... 16 more
> Prior to this change we wouldn't use native crc32 checksum verification, since 
> in Hadoop's DataChecksum#verifyChunkedSums we would go down this codepath:
> if (data.hasArray() && checksums.hasArray()) {
>   
> }
> So we were fine. However, now we drop below that and try to use the slightly 
> different variant of native crc32 (if one is available) taking a ByteBuffer 
> instead of byte[], which expects a DirectByteBuffer, not a heap BB. 
> I think the easiest fix, working on all Hadoop versions, would be to remove the 
> asReadOnlyBuffer() conversion here:
> !validateChecksum(offset, onDiskBlockByteBuffer.asReadOnlyBuffer(), hdrSize)) {
> I don't see why we need it. Let me test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15525) OutOfMemory could occur when using BoundedByteBufferPool during RPC bursts

2016-06-13 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15327849#comment-15327849
 ] 

stack commented on HBASE-15525:
---

Ok. That makes sense.

Isn't the close in closeAndPutbackBuffers redundant? Or just-in-case? Close 
will have already been called? If so, then this method should be called release, 
free, or releaseResources instead, to make it clear what is going on.

I think this new class, along with the pool implementation, has the possibility 
of being generally useful in more than just the context here, hence my fixation 
on making sure it is clear how it is to be used. 

bq.  * Note: This pool returns off heap ByteBuffers.

Is the above configurable? Add an overloaded constructor that allows setting it, 
I'd say.

I think the new name for the class is clearer about what it is.

Otherwise, this patch is great. It needs a fat release note on how you'd use it.
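
For illustration, a sketch of the kind of overloaded constructor being 
suggested; the class shape and names below are hypothetical, not the API from 
the attached patch:

{code}
// Hypothetical sketch only -- mirrors the suggestion to make off-heap
// allocation configurable via an overloaded constructor.
import java.nio.ByteBuffer;

public class PooledBufferFactory {
  private final int bufferSize;
  private final boolean direct;

  public PooledBufferFactory(int bufferSize) {
    this(bufferSize, true);  // keep current behavior: off-heap by default
  }

  // Overloaded constructor allowing heap buffers to be chosen.
  public PooledBufferFactory(int bufferSize, boolean direct) {
    this.bufferSize = bufferSize;
    this.direct = direct;
  }

  public ByteBuffer allocate() {
    return direct ? ByteBuffer.allocateDirect(bufferSize)
                  : ByteBuffer.allocate(bufferSize);
  }
}
{code}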

> OutOfMemory could occur when using BoundedByteBufferPool during RPC bursts
> --
>
> Key: HBASE-15525
> URL: https://issues.apache.org/jira/browse/HBASE-15525
> Project: HBase
>  Issue Type: Sub-task
>  Components: IPC/RPC
>Reporter: deepankar
>Assignee: Anoop Sam John
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-15525_V1.patch, HBASE-15525_V2.patch, 
> HBASE-15525_V3.patch, HBASE-15525_V4.patch, HBASE-15525_V5.patch, 
> HBASE-15525_WIP.patch, WIP.patch
>
>
> After HBASE-13819 the system sometimes runs out of direct memory whenever 
> there is network congestion or some client-side issue.
> This is because of pending RPCs in the RPCServer$Connection.responseQueue; 
> since all the responses in this queue hold a buffer for the cellblock from 
> BoundedByteBufferPool, this could take up a lot of memory if the 
> BoundedByteBufferPool's moving average settles towards a higher value. 
> See the discussion here: 
> [HBASE-13819-comment|https://issues.apache.org/jira/browse/HBASE-13819?focusedCommentId=15207822&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15207822]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15860) Improvements for HBASE-14280 - Fixing Bulkload for HDFS HA Clusters

2016-06-13 Thread Wellington Chevreuil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HBASE-15860:
-
Attachment: HBASE-15860.master.002.patch

Hi [~tedyu], these tests are passing locally. I'm re-attaching the patch, 
hopefully it will get a clean run now.

> Improvements for HBASE-14280 - Fixing Bulkload for HDFS HA Clusters
> ---
>
> Key: HBASE-15860
> URL: https://issues.apache.org/jira/browse/HBASE-15860
> Project: HBase
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 1.0.0
>Reporter: Wellington Chevreuil
>Priority: Minor
> Attachments: HBASE-15860.master.002.patch, 
> HBASE-15860.master.002.patch, HBASE-15860.patch
>
>
> HBASE-14280 introduced a fix for bulkload failures when referring to a remote 
> cluster nameservice id while "bulkloading" from an HA cluster.
> HBASE-14280's solution in *FSHDFSUtils.getNNAddresses* was to invoke 
> *DFSUtil.getNNServiceRpcAddressesForCluster* instead of 
> *DFSUtil.getNNServiceRpcAddresses*. This works for hadoop 2.6 and above.
> The change proposed here is to use "*DFSUtil.getRpcAddressesForNameserviceId*" 
> instead, which already returns only the addresses for the specific nameservice 
> given. This has been available since hadoop 2.4.
> Sample proposal on FSHDFSUtils.getNNAddresses:
> ...
> {noformat}
> String nameServiceId = serviceName.split(":")[1];
> if (dfsUtilClazz == null) {
>   dfsUtilClazz = Class.forName("org.apache.hadoop.hdfs.DFSUtil");
> }
> if (getNNAddressesMethod == null) {
>   getNNAddressesMethod =
>       dfsUtilClazz.getMethod("getRpcAddressesForNameserviceId", Configuration.class,
>         String.class, String.class);
> }
> Map<String, InetSocketAddress> nnMap =
>     (Map<String, InetSocketAddress>) getNNAddressesMethod
>         .invoke(null, conf, nameServiceId, null);
> for (Map.Entry<String, InetSocketAddress> e2 : nnMap.entrySet()) {
>   InetSocketAddress addr = e2.getValue();
>   addresses.add(addr);
> }
> ...
> {noformat}
> Will also add test conditions for *FSHDFSUtils.isSameHdfs* to verify the 
> scenario where multiple nameservice ids are defined.
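
For comparison, the direct (non-reflective) equivalent of the snippet above, 
assuming Hadoop 2.4+ on the compile classpath; FSHDFSUtils goes through 
reflection precisely to avoid that compile-time dependency:

{code}
// Direct call, reusing conf, nameServiceId and addresses from the snippet above.
Map<String, InetSocketAddress> nnMap =
    DFSUtil.getRpcAddressesForNameserviceId(conf, nameServiceId, null);
for (InetSocketAddress addr : nnMap.values()) {
  addresses.add(addr);
}
{code}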



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16010) Put draining function through Admin API

2016-06-13 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15327817#comment-15327817
 ] 

Jerry He commented on HBASE-16010:
--

Concur with Matteo. We can probably use API names similar to those in the 
ServerManager.
We still need to write to the ZK node from the master, in case a master fails; 
then the direct client ZK call is still possible.

> Put draining function through Admin API
> ---
>
> Key: HBASE-16010
> URL: https://issues.apache.org/jira/browse/HBASE-16010
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jerry He
>Assignee: Matt Warhaftig
>Priority: Minor
>
> Currently, there is no Admin API for the draining function. Clients have to 
> interact directly with the Zookeeper draining node to add and remove draining 
> servers.
> For example, in draining_servers.rb:
> {code}
>   zkw = org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.new(config, 
> "draining_servers", nil)
>   parentZnode = zkw.drainingZNode
>   begin
> for server in servers
>   node = ZKUtil.joinZNode(parentZnode, server)
>   ZKUtil.createAndFailSilent(zkw, node)
> end
>   ensure
> zkw.close()
>   end
> {code}
> This is not good in cases like secure clusters with protected Zookeeper nodes.
> Let's put the draining function through the Admin API.
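
A hypothetical sketch of what such Admin methods could look like; the names 
below are illustrative only, loosely mirroring ServerManager's draining-server 
calls as suggested above, not a committed API:

{code}
// Hypothetical Admin API sketch -- method names are illustrative only.
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.ServerName;

public interface DrainingAdmin {
  /** Mark a region server as draining so no new regions are assigned to it. */
  void addServerToDrainList(ServerName server) throws IOException;

  /** Remove a region server from the draining list. */
  void removeServerFromDrainList(ServerName server) throws IOException;

  /** List the region servers currently marked as draining. */
  List<ServerName> listDrainingServers() throws IOException;
}
{code}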



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-15584) Revisit handling of BackupState#CANCELLED

2016-06-13 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HBASE-15584.

  Resolution: Fixed
Hadoop Flags: Reviewed

Thanks for the reviews.

> Revisit handling of BackupState#CANCELLED
> -
>
> Key: HBASE-15584
> URL: https://issues.apache.org/jira/browse/HBASE-15584
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Priority: Minor
> Attachments: 15584.v1.txt, 15584.v2.txt
>
>
> During review of HBASE-15411, Enis made the following point:
> {code}
> nobody puts the backup in cancelled state. setCancelled() is not used. So if 
> I abort a backup, who writes to the system table the new state? 
> Not sure whether this is a phase 1 patch issue or due to this patch. We can 
> open a new jira and address it there if you do not want to do it in this 
> patch. 
> Also maybe this should be named ABORTED rather than CANCELLED.
> {code}
> This issue is to decide whether this state should be kept (e.g. through 
> notification from the procedure V2 framework in response to an abort).
> If it is to be kept, the state should be renamed ABORTED.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15584) Revisit handling of BackupState#CANCELLED

2016-06-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15327795#comment-15327795
 ] 

Ted Yu commented on HBASE-15584:


When the cluster comes back, the backup procedure would resume execution.
Cancelling the backup is up to the user / admin.

> Revisit handling of BackupState#CANCELLED
> -
>
> Key: HBASE-15584
> URL: https://issues.apache.org/jira/browse/HBASE-15584
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Priority: Minor
> Attachments: 15584.v1.txt, 15584.v2.txt
>
>
> During review of HBASE-15411, Enis made the following point:
> {code}
> nobody puts the backup in cancelled state. setCancelled() is not used. So if 
> I abort a backup, who writes to the system table the new state? 
> Not sure whether this is a phase 1 patch issue or due to this patch. We can 
> open a new jira and address it there if you do not want to do it in this 
> patch. 
> Also maybe this should be named ABORTED rather than CANCELLED.
> {code}
> This issue is to decide whether this state should be kept (e.g. through 
> notification from the procedure V2 framework in response to an abort).
> If it is to be kept, the state should be renamed ABORTED.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-15584) Revisit handling of BackupState#CANCELLED

2016-06-13 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15327784#comment-15327784
 ] 

Vladimir Rodionov edited comment on HBASE-15584 at 6/13/16 5:24 PM:


[~tedyu], what will happen to an active backup session when the cluster gets 
shut down? Do we need some abort/clean-up functionality?

I am OK with the patch as long as we have separate JIRAs for cancel/abort of 
backup and restore (and we have them in Phase 3).


was (Author: vrodionov):
[~tedyu], what will happen to active backup session when cluster gets shut 
down? We need some abort/clean up functionality?

> Revisit handling of BackupState#CANCELLED
> -
>
> Key: HBASE-15584
> URL: https://issues.apache.org/jira/browse/HBASE-15584
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Priority: Minor
> Attachments: 15584.v1.txt, 15584.v2.txt
>
>
> During review of HBASE-15411, Enis made the following point:
> {code}
> nobody puts the backup in cancelled state. setCancelled() is not used. So if 
> I abort a backup, who writes to the system table the new state? 
> Not sure whether this is a phase 1 patch issue or due to this patch. We can 
> open a new jira and address it there if you do not want to do it in this 
> patch. 
> Also maybe this should be named ABORTED rather than CANCELLED.
> {code}
> This issue is to decide whether this state should be kept (e.g. through 
> notification from the procedure V2 framework in response to an abort).
> If it is to be kept, the state should be renamed ABORTED.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

