[jira] [Updated] (HBASE-15716) HRegion#RegionScannerImpl scannerReadPoints synchronization costs
[ https://issues.apache.org/jira/browse/HBASE-15716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HBASE-15716:
--------------------------
    Attachment: 15716.prune.synchronizations.v4.patch

> HRegion#RegionScannerImpl scannerReadPoints synchronization costs
> -----------------------------------------------------------------
>
>                 Key: HBASE-15716
>                 URL: https://issues.apache.org/jira/browse/HBASE-15716
>             Project: HBase
>          Issue Type: Bug
>          Components: Performance
>            Reporter: stack
>            Assignee: stack
>         Attachments: 15716.prune.synchronizations.patch, 15716.prune.synchronizations.v3.patch, 15716.prune.synchronizations.v4.patch, 15716.prune.synchronizations.v4.patch, Screen Shot 2016-04-26 at 2.05.45 PM.png, Screen Shot 2016-04-26 at 2.06.14 PM.png, Screen Shot 2016-04-26 at 2.07.06 PM.png, Screen Shot 2016-04-26 at 2.25.26 PM.png, Screen Shot 2016-04-26 at 6.02.29 PM.png, Screen Shot 2016-04-27 at 9.49.35 AM.png, current-branch-1.vs.NoSynchronization.vs.Patch.png, hits.png, remove_cslm.patch
>
> Here is a [~lhofhansl] special.
> When we construct the region scanner, we get our read point and then store it with the scanner instance in a Region-scoped CSLM. This is done under a synchronize on the CSLM.
> This synchronize on a region-scoped Map while creating region scanners is the outstanding point of lock contention according to flight recorder (my workload is workload c, random reads).

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
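The contended pattern the description refers to can be sketched in miniature. This is a simplified, hypothetical model, not the actual HRegion code: the class names, the `advanceMvcc` helper, and the field layout are invented for illustration. It shows why every scanner open serializes on one region-scoped map.

```java
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.atomic.AtomicLong;

// Stand-in for a region scanner; Comparable so it can key a CSLM.
class Scanner implements Comparable<Scanner> {
  private static final AtomicLong IDS = new AtomicLong();
  final long id = IDS.incrementAndGet();
  @Override public int compareTo(Scanner o) { return Long.compare(id, o.id); }
}

// Toy region: every open/close/min-scan synchronizes on the shared map,
// which is the hot lock flight recorder points at under random reads.
class Region {
  private final AtomicLong mvccReadPoint = new AtomicLong();
  private final ConcurrentSkipListMap<Scanner, Long> scannerReadPoints =
      new ConcurrentSkipListMap<>();

  long advanceMvcc() { return mvccReadPoint.incrementAndGet(); }

  Scanner openScanner() {
    Scanner s = new Scanner();
    synchronized (scannerReadPoints) {  // serializes concurrent scanner opens
      scannerReadPoints.put(s, mvccReadPoint.get());
    }
    return s;
  }

  void closeScanner(Scanner s) {
    synchronized (scannerReadPoints) {
      scannerReadPoints.remove(s);
    }
  }

  // Smallest registered read point bounds what compactions may discard.
  long smallestReadPoint() {
    synchronized (scannerReadPoints) {
      long min = mvccReadPoint.get();
      for (long rp : scannerReadPoints.values()) {
        min = Math.min(min, rp);
      }
      return min;
    }
  }
}
```

The patch under discussion prunes these synchronizations; the sketch only captures the starting point.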
[jira] [Commented] (HBASE-15716) HRegion#RegionScannerImpl scannerReadPoints synchronization costs
[ https://issues.apache.org/jira/browse/HBASE-15716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15261587#comment-15261587 ]

Hadoop QA commented on HBASE-15716:
-----------------------------------

| (x) -1 overall |

|| Vote || Subsystem || Runtime || Comment ||
| -1 | patch | 0m 3s | HBASE-15716 does not apply to master. Rebase required? Wrong branch? See https://yetus.apache.org/documentation/0.2.1/precommit-patchnames for help. |

|| Subsystem || Report/Notes ||
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12801180/15716.prune.synchronizations.v4.patch |
| JIRA Issue | HBASE-15716 |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/1652/console |
| Powered by | Apache Yetus 0.2.1 http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HBASE-15716) HRegion#RegionScannerImpl scannerReadPoints synchronization costs
[ https://issues.apache.org/jira/browse/HBASE-15716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15261585#comment-15261585 ]

stack commented on HBASE-15716:
-------------------------------

Thank you for the helpful review [~ikeda]. Do you see a fault other than the below in this approach?

bq. ...and then, the smallest readpoint goes back.

Can you say more please? I do not follow how it goes 'back' since we do readPoint = currentReadPoint; at the bottom of the loop. Yes, the readpoint moves independently.

I like the way you think on scannerReadPoints having too much info... Yes, if we miss the close, a reference to the Scanner will be held here. That is a good point.

I like your idea too of skipping out when the IsolationLevel is READ_UNCOMMITTED. Let me do that. Let me look at an implementation using your suggested types.

Thanks again for the review.
[jira] [Commented] (HBASE-15702) Improve PerClientRandomNonceGenerator
[ https://issues.apache.org/jira/browse/HBASE-15702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15261584#comment-15261584 ]

Heng Chen commented on HBASE-15702:
-----------------------------------

{quote}
Just create static final fields. That is not even the "initialization-on-demand holder idiom", but googling it might help you.
{quote}
Oh, yeah, just noticed it. Let me fix it.

> Improve PerClientRandomNonceGenerator
> -------------------------------------
>
>                 Key: HBASE-15702
>                 URL: https://issues.apache.org/jira/browse/HBASE-15702
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Hiroshi Ikeda
>            Priority: Trivial
>             Fix For: 2.0.0
>
>         Attachments: HBASE-15702.patch, HBASE-15702_v1.patch
>
> PerClientRandomNonceGenerator can be exposed to all the threads via the static field ConnectionManager.nonceGenerator, but PerClientRandomNonceGenerator uses Random, which should be ThreadLocalRandom or something. (See the javadoc of Random.)
> Moreover, ConnectionManager creates or refers to the singleton instance of PerClientThreadLocalRandom with a lock or volatile, but it should be created as a static final field in PerClientThreadLocalRandom itself, and the creation will be postponed until the field is actually referred to and the class is being initialized.
> The same can be said for ConnectionManager.NoNonceGenerator.
[jira] [Commented] (HBASE-15702) Improve PerClientRandomNonceGenerator
[ https://issues.apache.org/jira/browse/HBASE-15702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15261575#comment-15261575 ]

Hiroshi Ikeda commented on HBASE-15702:
---------------------------------------

Just create static final fields. That is not even the "initialization-on-demand holder idiom", but googling it might help you.
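The shape being suggested in this thread can be sketched as follows. This is a hypothetical illustration, not the real PerClientRandomNonceGenerator API: the class and method names are invented. A static final field is initialized exactly once by the JVM when the class is first used, so no lock or volatile is needed, and ThreadLocalRandom avoids contending on a shared Random's seed.

```java
import java.util.concurrent.ThreadLocalRandom;

// Illustrative singleton: the JVM's class-initialization guarantees make the
// INSTANCE creation lazy (deferred until first use) and thread-safe for free.
final class RandomNonceGenerator {
  static final RandomNonceGenerator INSTANCE = new RandomNonceGenerator();

  private RandomNonceGenerator() {}

  long newNonce() {
    // ThreadLocalRandom.current() is per-thread, so concurrent callers do
    // not contend the way they would on one shared java.util.Random.
    long nonce;
    do {
      nonce = ThreadLocalRandom.current().nextLong();
    } while (nonce == 0); // assume 0 is reserved to mean "no nonce"
    return nonce;
  }
}
```

This is the plain static-final form; the "initialization-on-demand holder idiom" (a nested holder class) is only needed when the enclosing class has other members that would trigger initialization too early.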
[jira] [Commented] (HBASE-15716) HRegion#RegionScannerImpl scannerReadPoints synchronization costs
[ https://issues.apache.org/jira/browse/HBASE-15716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15261563#comment-15261563 ]

Hiroshi Ikeda commented on HBASE-15716:
---------------------------------------

Quite strictly speaking, there is a very very small bug :P When the collection scannerReadPoints is empty and
{code}
+  private long calculateReadPoint(final IsolationLevel level) {
+    long readPoint = getReadpoint(level);
{code}
and mvcc's readpoint goes forward, with the smallest readpoint going forward, and
{code}
+    while (true) {
+      scannerReadPoints.put(this, readPoint);
{code}
and then, the smallest readpoint goes back.

By the way, I think the difficult point is that the mvcc's readpoint can move without our control. HRegion should have an independent instance variable to follow the readpoint under synchronization or something.

Moreover, the collection scannerReadPoints has too much information, and we should remove entries severally in order to GC obsolete scanner readers. It is enough for entries to have a readpoint and a reference count, with each scanner reader having a reference to the entry. It is not even needed to register when IsolationLevel.READ_UNCOMMITTED. I think ConcurrentLinkedQueue, AtomicReferenceFieldUpdater, and ReentrantLock (tryLock to follow the mvcc's readpoint with best effort) are useful to reduce conflict between threads.
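The reference-counted alternative suggested in the comment above might look roughly like this. This is a hypothetical sketch under the comment's assumptions, not the patch under review: `ReadPointTracker` and `ReadPointEntry` are invented names, and the real proposal also involves AtomicReferenceFieldUpdater and a best-effort ReentrantLock that are omitted here. Entries carry only a read point and a reference count, and releasing the last reference removes the entry, so closed scanners do not linger in the collection.

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

// One entry per registered read point: just the point and a ref count,
// instead of a map entry per scanner instance.
final class ReadPointEntry {
  final long readPoint;
  final AtomicInteger refs = new AtomicInteger(1);
  ReadPointEntry(long readPoint) { this.readPoint = readPoint; }
}

final class ReadPointTracker {
  private final ConcurrentLinkedQueue<ReadPointEntry> entries =
      new ConcurrentLinkedQueue<>();

  // A scanner registers its read point and keeps a reference to the entry.
  ReadPointEntry register(long readPoint) {
    ReadPointEntry e = new ReadPointEntry(readPoint);
    entries.add(e);
    return e;
  }

  // Dropping the last reference unlinks the entry, so obsolete scanner
  // readers can be GC'd rather than pinned by the collection.
  void release(ReadPointEntry e) {
    if (e.refs.decrementAndGet() == 0) {
      entries.remove(e);
    }
  }

  // Minimum over live entries, falling back to the mvcc read point.
  long smallestReadPoint(long mvccReadPoint) {
    long min = mvccReadPoint;
    for (ReadPointEntry e : entries) {
      if (e.refs.get() > 0) {
        min = Math.min(min, e.readPoint);
      }
    }
    return min;
  }
}
```

A READ_UNCOMMITTED scanner would simply skip `register` altogether, as the comment notes.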
[jira] [Commented] (HBASE-15691) Port HBASE-10205 (ConcurrentModificationException in BucketAllocator) to branch-1
[ https://issues.apache.org/jira/browse/HBASE-15691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15261558#comment-15261558 ]

Mikhail Antonov commented on HBASE-15691:
-----------------------------------------

[~apurtell] do you think it should be a release blocker for 1.3.0 and 1.2.2? Did you look at the patch yet?

> Port HBASE-10205 (ConcurrentModificationException in BucketAllocator) to branch-1
> ---------------------------------------------------------------------------------
>
>                 Key: HBASE-15691
>                 URL: https://issues.apache.org/jira/browse/HBASE-15691
>             Project: HBase
>          Issue Type: Sub-task
>    Affects Versions: 1.3.0
>            Reporter: Andrew Purtell
>            Assignee: Andrew Purtell
>             Fix For: 1.3.0, 1.2.2
>
>         Attachments: HBASE-15691-branch-1.patch
>
> HBASE-10205 was committed to trunk and 0.98 branches only. To preserve continuity we should commit it to branch-1. The change requires more than nontrivial fixups, so I will attach a backport of the change from trunk to current branch-1 here.
[jira] [Updated] (HBASE-11290) Unlock RegionStates
[ https://issues.apache.org/jira/browse/HBASE-11290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mikhail Antonov updated HBASE-11290:
------------------------------------
    Fix Version/s:     (was: 1.3.0)
                   1.4.0

> Unlock RegionStates
> -------------------
>
>                 Key: HBASE-11290
>                 URL: https://issues.apache.org/jira/browse/HBASE-11290
>             Project: HBase
>          Issue Type: Sub-task
>    Affects Versions: 1.2.0, 1.3.0
>            Reporter: Francis Liu
>            Assignee: Francis Liu
>             Fix For: 2.0.0, 1.4.0, 0.98.20
>
>         Attachments: HBASE-11290-0.98.patch, HBASE-11290-0.98_v2.patch, HBASE-11290.draft.patch, HBASE-11290_trunk.patch
>
> Even though RegionStates is a highly accessed data structure in HMaster, most of its methods are synchronized, which limits concurrency. Even simply making some of the getters non-synchronized by using concurrent data structures has helped with region assignments. We can go as simple as this approach, or create locks per region, or a bucket lock per region bucket.
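The simplest variant described in the issue (lock-free getters over a concurrent map, with compound transitions still synchronized) can be sketched as below. This is an invented illustration, not the real RegionStates class: the names and the use of plain strings for states are assumptions made for brevity.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Toy model of the "concurrent data structures for getters" approach.
final class RegionStatesSketch {
  private final ConcurrentMap<String, String> regionStates =
      new ConcurrentHashMap<>();

  // Hot-path getter: a plain concurrent read, no monitor acquired.
  String getRegionState(String regionName) {
    return regionStates.get(regionName);
  }

  synchronized void add(String regionName, String state) {
    regionStates.put(regionName, state);
  }

  // Check-then-act transitions stay synchronized so they remain atomic
  // with respect to each other; getters never block on them.
  synchronized boolean transition(String regionName, String from, String to) {
    if (!from.equals(regionStates.get(regionName))) {
      return false;
    }
    regionStates.put(regionName, to);
    return true;
  }
}
```

The per-region or bucketed-lock variants mentioned in the issue would replace the single `synchronized` on transitions with finer-grained locks.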
[jira] [Commented] (HBASE-11290) Unlock RegionStates
[ https://issues.apache.org/jira/browse/HBASE-11290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15261555#comment-15261555 ]

Mikhail Antonov commented on HBASE-11290:
-----------------------------------------

Kicking out of 1.3 :( feel free to pull it back if you feel like.
[jira] [Updated] (HBASE-15677) FailedServerException shouldn't clear MetaCache
[ https://issues.apache.org/jira/browse/HBASE-15677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mikhail Antonov updated HBASE-15677:
------------------------------------
    Fix Version/s:     (was: 1.3.0)
                   1.4.0

> FailedServerException shouldn't clear MetaCache
> -----------------------------------------------
>
>                 Key: HBASE-15677
>                 URL: https://issues.apache.org/jira/browse/HBASE-15677
>             Project: HBase
>          Issue Type: Sub-task
>          Components: Client
>    Affects Versions: 1.3.0
>            Reporter: Mikhail Antonov
>             Fix For: 1.4.0
>
> Right now FailedServerException clears the meta cache. This seems unnecessary (if we hit it, someone has already gotten some network/remote error in the first place and invalidated the location cache for us), and it could lead to unnecessary drops, as the FailedServers cache has a default TTL of 2 seconds, so we can encounter a situation like this:
> - thread T1 hits a network error, clears the cache, and puts the server in the failed server list
> - thread T2 tries to get its request in and gets FailedServerException
> - thread T1 does a meta scan to populate the cache
> - thread T2 clears the cache after it's got the FSE.
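The fix direction the description implies can be shown with a toy model: treat FailedServerException as a short-circuit that does not invalidate the location cache, since the thread that hit the underlying error already did. Everything below is invented for illustration and bears no relation to the real HBase client classes.

```java
// Hypothetical sketch: only errors that suggest the cached location itself
// is stale should drop it. FailedServerException just means another thread
// recently failed against this server (and already cleared the cache), so
// clearing again can wipe out a freshly repopulated cache.
final class MetaCacheSketch {
  static final class FailedServerException extends RuntimeException {}

  private boolean cachePopulated = true;

  boolean isCachePopulated() { return cachePopulated; }

  void onRpcError(RuntimeException e) {
    if (e instanceof FailedServerException) {
      return; // short-circuit: keep the cache intact (thread T2's case)
    }
    cachePopulated = false; // genuine network/remote error: drop the location
  }
}
```

In the four-step race from the description, this keeps T2 (step 4) from undoing the meta scan T1 performed in step 3.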
[jira] [Assigned] (HBASE-15677) FailedServerException shouldn't clear MetaCache
[ https://issues.apache.org/jira/browse/HBASE-15677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mikhail Antonov reassigned HBASE-15677:
---------------------------------------
    Assignee: Mikhail Antonov
[jira] [Commented] (HBASE-15677) FailedServerException shouldn't clear MetaCache
[ https://issues.apache.org/jira/browse/HBASE-15677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15261554#comment-15261554 ]

Mikhail Antonov commented on HBASE-15677:
-----------------------------------------

Kicked out of 1.3, feel free to pull in.
[jira] [Commented] (HBASE-15676) FuzzyRowFilter fails and matches all the rows in the table if the mask consists of all 0s
[ https://issues.apache.org/jira/browse/HBASE-15676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15261550#comment-15261550 ]

Hadoop QA commented on HBASE-15676:
-----------------------------------

| (x) -1 overall |

|| Vote || Subsystem || Runtime || Comment ||
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| 0 | mvndep | 0m 22s | Maven dependency ordering for branch |
| +1 | mvninstall | 2m 47s | master passed |
| +1 | compile | 1m 10s | master passed with JDK v1.8.0 |
| +1 | compile | 0m 48s | master passed with JDK v1.7.0_79 |
| +1 | checkstyle | 3m 31s | master passed |
| +1 | mvneclipse | 0m 29s | master passed |
| +1 | findbugs | 2m 47s | master passed |
| +1 | javadoc | 0m 51s | master passed with JDK v1.8.0 |
| +1 | javadoc | 0m 50s | master passed with JDK v1.7.0_79 |
| 0 | mvndep | 0m 10s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 0s | the patch passed |
| +1 | compile | 1m 3s | the patch passed with JDK v1.8.0 |
| +1 | javac | 1m 3s | the patch passed |
| +1 | compile | 0m 49s | the patch passed with JDK v1.7.0_79 |
| +1 | javac | 0m 49s | the patch passed |
| +1 | checkstyle | 1m 54s | hbase-client: patch generated 0 new + 16 unchanged - 1 fixed = 16 total (was 17) |
| +1 | checkstyle | 1m 37s | hbase-server: patch generated 0 new + 16 unchanged - 1 fixed = 16 total (was 17) |
| +1 | mvneclipse | 0m 28s | the patch passed |
| +1 | whitespace | 0m 1s | Patch has no whitespace issues. |
| +1 | hadoopcheck | 25m 50s | Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. |
| +1 | findbugs | 3m 28s | the patch passed |
| +1 | javadoc | 0m 52s | the patch passed with JDK v1.8.0 |
| +1 | javadoc | 0m 53s | the patch passed with JDK v1.7.0_79 |
| +1 | unit | 0m 54s | hbase-client in the patch passed. |
| -1 | unit | 92m 52s | hbase-server in the patch failed. |
| +1 | asflicense | 0m 30s | Patch does not generate ASF License warnings. |
| | | 146m 38s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.security.access.TestAccessController3 |
| Timed out junit tests | org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient |
[jira] [Commented] (HBASE-15697) Excessive TestHRegion running time on branch-1
[ https://issues.apache.org/jira/browse/HBASE-15697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15261549#comment-15261549 ]

Mikhail Antonov commented on HBASE-15697:
-----------------------------------------

Yeah, seems like good improvements. +1 from me to commit everywhere (trunk/branch-1/branch-1.3).

> Excessive TestHRegion running time on branch-1
> ----------------------------------------------
>
>                 Key: HBASE-15697
>                 URL: https://issues.apache.org/jira/browse/HBASE-15697
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 1.3.0
>            Reporter: Andrew Purtell
>            Assignee: ramkrishna.s.vasudevan
>             Fix For: 1.3.0
>
>         Attachments: HBASE-15697_branch-1.patch
>
> On my dev box TestHRegion takes about 90 seconds to complete in master and about 60 seconds in 0.98, but about 370 seconds in branch-1. Furthermore, TestHRegion in branch-1 blew past my open files ulimit; I had to raise it from the default in order for the unit to complete at all.
> I am going to bisect the recent history of branch-1 in search of a culprit and report back.
> {panel:title=master}
> Running org.apache.hadoop.hbase.regionserver.TestHRegion
> Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 87.299 sec - in org.apache.hadoop.hbase.regionserver.TestHRegion
> Running org.apache.hadoop.hbase.regionserver.TestHRegion
> Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 91.529 sec - in org.apache.hadoop.hbase.regionserver.TestHRegion
> Running org.apache.hadoop.hbase.regionserver.TestHRegion
> Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 89.23 sec - in org.apache.hadoop.hbase.regionserver.TestHRegion
> {panel}
> {panel:title=branch-1}
> Running org.apache.hadoop.hbase.regionserver.TestHRegion
> Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 368.868 sec - in org.apache.hadoop.hbase.regionserver.TestHRegion
> Running org.apache.hadoop.hbase.regionserver.TestHRegion
> Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 366.203 sec - in org.apache.hadoop.hbase.regionserver.TestHRegion
> Running org.apache.hadoop.hbase.regionserver.TestHRegion
> Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 345.806 sec - in org.apache.hadoop.hbase.regionserver.TestHRegion
> {panel}
> {panel:title=0.98}
> Running org.apache.hadoop.hbase.regionserver.TestHRegion
> Tests run: 90, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.038 sec - in org.apache.hadoop.hbase.regionserver.TestHRegion
> Running org.apache.hadoop.hbase.regionserver.TestHRegion
> Tests run: 90, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 56.382 sec - in org.apache.hadoop.hbase.regionserver.TestHRegion
> Running org.apache.hadoop.hbase.regionserver.TestHRegion
> Tests run: 90, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.509 sec - in org.apache.hadoop.hbase.regionserver.TestHRegion
> {panel}
[jira] [Resolved] (HBASE-15720) Print row locks at the debug dump page
[ https://issues.apache.org/jira/browse/HBASE-15720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Heng Chen resolved HBASE-15720.
-------------------------------
      Resolution: Fixed
    Hadoop Flags: Reviewed

> Print row locks at the debug dump page
> --------------------------------------
>
>                 Key: HBASE-15720
>                 URL: https://issues.apache.org/jira/browse/HBASE-15720
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Enis Soztutar
>            Assignee: Heng Chen
>             Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.5, 1.2.2, 0.98.20, 1.0.5
>
>         Attachments: 4742C21D-B9CE-4921-9B32-CC319488EC64.png, HBASE-15720.patch
>
> We had to debug cases where some handlers are holding row locks for an extended time (and maybe leaking them) and other handlers are getting timeouts trying to obtain row locks.
> We should add row lock information to the debug page in the RS UI to be able to live-debug such cases.
[jira] [Commented] (HBASE-15720) Print row locks at the debug dump page
[ https://issues.apache.org/jira/browse/HBASE-15720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15261529#comment-15261529 ]

Heng Chen commented on HBASE-15720:
-----------------------------------

Pushed to all branches.
[jira] [Updated] (HBASE-15720) Print row locks at the debug dump page
[ https://issues.apache.org/jira/browse/HBASE-15720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Heng Chen updated HBASE-15720:
------------------------------
    Fix Version/s: 1.0.5
                   0.98.20
                   1.2.2
                   1.1.5
                   1.4.0
                   1.3.0
                   2.0.0
[jira] [Updated] (HBASE-15676) FuzzyRowFilter fails and matches all the rows in the table if the mask consists of all 0s
[ https://issues.apache.org/jira/browse/HBASE-15676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Yu updated HBASE-15676:
---------------------------
     Hadoop Flags: Reviewed
    Fix Version/s: 1.2.2
                   1.1.5
                   1.4.0
                   1.3.0
                   2.0.0

> FuzzyRowFilter fails and matches all the rows in the table if the mask consists of all 0s
> -----------------------------------------------------------------------------------------
>
>                 Key: HBASE-15676
>                 URL: https://issues.apache.org/jira/browse/HBASE-15676
>             Project: HBase
>          Issue Type: Bug
>          Components: Filters
>    Affects Versions: 2.0.0, 0.98.13, 1.0.2, 1.2.0, 1.1.1
>            Reporter: Rohit Sinha
>            Assignee: Matt Warhaftig
>             Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.5, 1.2.2
>
>         Attachments: hbase-15676-v1.patch, hbase-15676-v2.patch, hbase-15676-v3.patch, hbase-15676-v4.patch
>
> While using FuzzyRowFilter we noticed that if the mask array consists of all 0s (fixed), the FuzzyRowFilter matches all the rows in the table. We noticed this on HBase 1.1, 1.2 and higher.
> After some digging we suspect that this is because of the isPreprocessedMask() check used in preprocessMask(), which was added here: https://issues.apache.org/jira/browse/HBASE-13761
> If the mask consists of all 0s then isPreprocessedMask() returns true, the preprocessing responsible for changing 0s to -1 doesn't happen, and hence all rows are matched in the scan.
> This scenario can be tested in TestFuzzyRowFilterEndToEnd#testHBASE14782(). If we change
> byte[] fuzzyKey = Bytes.toBytesBinary("\\x00\\x00\\x044");
> byte[] mask = new byte[] {1,0,0,0};
> to
> byte[] fuzzyKey = Bytes.toBytesBinary("\\x9B\\x00\\x044e");
> byte[] mask = new byte[] {0,0,0,0,0};
> We expect one match but this will match all the rows in the table.
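The failure mode described above can be reproduced with a simplified model. This is a hypothetical sketch, not the actual FuzzyRowFilter code: it assumes raw masks use 0 for "fixed byte" and 1 for "fuzzy byte", and preprocessing rewrites 0 to -1 and 1 to 0. A content-based "already preprocessed?" check then misfires on a raw all-0s mask, because it contains only values that also occur in a processed mask, so the 0-to--1 rewrite is skipped and no position is enforced.

```java
// Simplified illustration of the reported bug; names and the exact encoding
// are assumptions, and the real isPreprocessedMask()/preprocessMask() differ.
final class FuzzyMaskSketch {
  // Content sniffing per the report: anything containing only "processed"
  // values is taken as already preprocessed. A raw all-0s mask qualifies.
  static boolean looksPreprocessed(byte[] mask) {
    for (byte b : mask) {
      if (b != -1 && b != 0) {
        return false;
      }
    }
    return true; // raw {0,0,0,0,0} falls through to here -- the bug
  }

  // Fix direction: rewrite unconditionally from the raw 0/1 encoding into
  // a fresh array instead of inferring state from the mask's contents.
  static byte[] preprocess(byte[] mask) {
    byte[] out = new byte[mask.length];
    for (int i = 0; i < mask.length; i++) {
      out[i] = (byte) (mask[i] == 0 ? -1 : 0); // 0 (fixed) -> -1, 1 (fuzzy) -> 0
    }
    return out;
  }
}
```

With the sniffing check, `{0,0,0,0,0}` never gets its fixed positions marked, which is consistent with the filter matching every row.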
[jira] [Commented] (HBASE-15676) FuzzyRowFilter fails and matches all the rows in the table if the mask consists of all 0s
[ https://issues.apache.org/jira/browse/HBASE-15676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15261518#comment-15261518 ]

Ted Yu commented on HBASE-15676:
--------------------------------

Please attach a patch for the 0.98 branch. The current patch doesn't compile there.
[jira] [Assigned] (HBASE-15676) FuzzyRowFilter fails and matches all the rows in the table if the mask consists of all 0s
[ https://issues.apache.org/jira/browse/HBASE-15676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Yu reassigned HBASE-15676:
------------------------------

    Assignee: Matt Warhaftig
[jira] [Commented] (HBASE-15708) Docker for dev-support scripts
[ https://issues.apache.org/jira/browse/HBASE-15708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261493#comment-15261493 ]

Hudson commented on HBASE-15708:
--------------------------------

FAILURE: Integrated in HBase-Trunk_matrix #875 (See [https://builds.apache.org/job/HBase-Trunk_matrix/875/])
HBASE-15708 Docker for dev-support scripts. (Apekshit) (stack: rev e1bf3a66fc00b7965443a7a632adcc298931e794)
* dev-support/python-requirements.txt
* dev-support/Dockerfile

> Docker for dev-support scripts
> ------------------------------
>
>                 Key: HBASE-15708
>                 URL: https://issues.apache.org/jira/browse/HBASE-15708
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Appy
>            Assignee: Appy
>             Fix For: 2.0.0
>
>         Attachments: HBASE-15708-master-v2.patch, HBASE-15708-master-v3.patch, HBASE-15708-master-v4.patch, HBASE-15708-master-v5.patch, HBASE-15708-master-v6.patch, HBASE-15708-master-v7.patch, HBASE-15708-master.patch
>
> Scripts in dev-support are limited, in terms of dependencies, to what's installed on the Apache machines. Installing new stuff is not easy; it can be painful even to import simple Python libraries like 'requests'. This jira is to add a single Docker instance which we can tweak to build the right environment for dev-support scripts.
[jira] [Commented] (HBASE-11625) Reading datablock throws "Invalid HFile block magic" and can not switch to hdfs checksum
[ https://issues.apache.org/jira/browse/HBASE-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261479#comment-15261479 ]

Hadoop QA commented on HBASE-11625:
-----------------------------------

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 38s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 29s {color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s {color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 6m 9s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 24s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 48s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s {color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s {color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 1s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s {color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 5s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 5m 48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 39m 22s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s {color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 130m 48s {color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s {color} | {color:green} Patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 205m 36s {color} | {color:black} {color} |

|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.hbase.security.access.TestNamespaceCommands |
| | org.apache.hadoop.hbase.snapshot.TestMobFlushSnapshotFromClient |

|| Subsystem || Report/Notes ||
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12780263/HBASE-11625.patch |
| JIRA Issue | HBASE-11625 |
| Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile |
| uname | Linux pomona.apache.org 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh |
| git revision |
[jira] [Commented] (HBASE-15615) Wrong sleep time when RegionServerCallable need retry
[ https://issues.apache.org/jira/browse/HBASE-15615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261449#comment-15261449 ]

Guanghao Zhang commented on HBASE-15615:
----------------------------------------

[~ghelmling] [~mantonov] Any ideas about this patch?

> Wrong sleep time when RegionServerCallable need retry
> -----------------------------------------------------
>
>                 Key: HBASE-15615
>                 URL: https://issues.apache.org/jira/browse/HBASE-15615
>             Project: HBase
>          Issue Type: Bug
>          Components: Client
>    Affects Versions: 1.0.0, 2.0.0, 1.1.0, 1.2.0, 1.3.0
>            Reporter: Guanghao Zhang
>            Assignee: Guanghao Zhang
>         Attachments: HBASE-15615-branch-1.patch, HBASE-15615-v1.patch, HBASE-15615-v1.patch, HBASE-15615.patch
>
> In RpcRetryingCallerImpl, it gets the pause time by expectedSleep = callable.sleep(pause, tries + 1); and in RegionServerCallable, it gets the pause time by sleep = ConnectionUtils.getPauseTime(pause, tries + 1). So tries will be bumped up twice, and the pause time is 3 * hbase.client.pause when tries is 0.
> RETRY_BACKOFF = {1, 2, 3, 5, 10, 20, 40, 100, 100, 100, 100, 200, 200}
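The double "tries + 1" bump described above can be sketched as follows (a simplified illustration, not the actual HBase code; the real ConnectionUtils.getPauseTime also adds random jitter, omitted here):

```java
// Illustrative sketch: both the caller and the callable increment tries
// before looking up the backoff multiplier, so the first retry lands on
// RETRY_BACKOFF[2] = 3 instead of RETRY_BACKOFF[0] = 1.
final class RetrySleepSketch {
  static final int[] RETRY_BACKOFF = {1, 2, 3, 5, 10, 20, 40, 100, 100, 100, 100, 200, 200};

  // Simplified ConnectionUtils.getPauseTime: pause * backoff multiplier.
  static long getPauseTime(long pause, int tries) {
    int idx = Math.min(tries, RETRY_BACKOFF.length - 1);
    return pause * RETRY_BACKOFF[idx];
  }

  // RegionServerCallable#sleep bumps tries again before the lookup.
  static long callableSleep(long pause, int tries) {
    return getPauseTime(pause, tries + 1);
  }

  public static void main(String[] args) {
    long pause = 100; // hbase.client.pause, in ms
    int tries = 0;
    // RpcRetryingCallerImpl already passes tries + 1 ...
    long expectedSleep = callableSleep(pause, tries + 1);
    // ... so the net index is tries + 2 -> multiplier 3 -> 300 ms, not 100 ms.
    System.out.println(expectedSleep); // 300
  }
}
```

This reproduces the report's observation that the first sleep is 3 * hbase.client.pause.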
[jira] [Commented] (HBASE-15707) ImportTSV bulk output does not support tags with hfile.format.version=3
[ https://issues.apache.org/jira/browse/HBASE-15707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261447#comment-15261447 ]

Hudson commented on HBASE-15707:
--------------------------------

FAILURE: Integrated in HBase-1.4 #120 (See [https://builds.apache.org/job/HBase-1.4/120/])
HBASE-15707 ImportTSV bulk output does not support tags with (tedyu: rev 4ba57cb935489efa5a9738794956a6c45c394a6d)
* hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java

> ImportTSV bulk output does not support tags with hfile.format.version=3
> -----------------------------------------------------------------------
>
>                 Key: HBASE-15707
>                 URL: https://issues.apache.org/jira/browse/HBASE-15707
>             Project: HBase
>          Issue Type: Bug
>          Components: mapreduce
>    Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.5, 1.2.2, 1.0.5
>            Reporter: huaxiang sun
>            Assignee: huaxiang sun
>             Fix For: 2.0.0, 1.3.0, 1.4.0
>
>         Attachments: HBASE-15707-branch-1_v001.patch, HBASE-15707-v001.patch, HBASE-15707-v002.patch
>
> Running the following command:
> {code}
> hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
> -Dhfile.format.version=3 \
> -Dmapreduce.map.combine.minspills=1 \
> -Dimporttsv.separator=, \
> -Dimporttsv.skip.bad.lines=false \
> -Dimporttsv.columns="HBASE_ROW_KEY,cf1:a,HBASE_CELL_TTL" \
> -Dimporttsv.bulk.output=/tmp/testttl/output/1 \
> testttl \
> /tmp/testttl/input
> {code}
> The content of the input is like:
> {code}
> row1,data1,0060
> row2,data2,0660
> row3,data3,0060
> row4,data4,0660
> {code}
> When running the hfile tool on the output hfile, there is no TTL tag.
[jira] [Commented] (HBASE-15706) HFilePrettyPrinter should print out nicely formatted tags
[ https://issues.apache.org/jira/browse/HBASE-15706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261444#comment-15261444 ]

Anoop Sam John commented on HBASE-15706:
----------------------------------------

Sorry if I was not saying it clearly; I don't mean that.
bq. leave TagUtil as is then.
No need to change; the MOB part and VC will use this util API only.
bq. In KeyValue#toStringMap also used the TagUtil.
Can you change there also to use Tag.toString() instead? You can see in KeyValue there is a usage of TagUtil.getValueAsString. There, can you change that to Tag.toString(), just like in the HFilePrettyPrinter case.

> HFilePrettyPrinter should print out nicely formatted tags
> ----------------------------------------------------------
>
>                 Key: HBASE-15706
>                 URL: https://issues.apache.org/jira/browse/HBASE-15706
>             Project: HBase
>          Issue Type: Improvement
>          Components: HFile
>    Affects Versions: 2.0.0
>            Reporter: huaxiang sun
>            Priority: Minor
>         Attachments: HBASE-15706-v001.patch
>
> When I was using the HFile tool to print out rows with tags, the output was like:
> {code}
> hsun-MBP:hbase-2.0.0-SNAPSHOT hsun$ hbase org.apache.hadoop.hbase.io.hfile.HFile -f /tmp/71afa45b1cb94ea1858a99f31197274f -p
> 2016-04-25 11:40:40,409 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 2016-04-25 11:40:40,580 INFO [main] hfile.CacheConfig: CacheConfig:disabled
> K: b/b:b/1461608231279/Maximum/vlen=0/seqid=0 V:
> K: b/b:b/1461608231278/Put/vlen=1/seqid=0 V: b T[0]: �
> Scanned kv count -> 2
> {code}
> With the attached patch, the output is now like:
> {code}
> 2016-04-25 11:57:05,849 INFO [main] hfile.CacheConfig: CacheConfig:disabled
> K: b/b:b/1461609876838/Maximum/vlen=0/seqid=0 V:
> K: b/b:b/1461609876837/Put/vlen=1/seqid=0 V: b T[0]: [Tag type : 8, value : \x00\x0E\xEE\xEE]
> Scanned kv count -> 2
> {code}
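The improved formatting shown above can be sketched as follows. This is an assumed shape, not the actual patch or Tag.toString() implementation: it renders the tag type plus the value bytes in HBase's `\xNN` binary notation (mimicking Bytes.toStringBinary) instead of dumping raw bytes:

```java
// Illustrative sketch of the "[Tag type : N, value : ...]" output.
final class TagFormatSketch {
  // Printable ASCII is emitted as-is; everything else as \xNN.
  static String toStringBinary(byte[] value) {
    StringBuilder sb = new StringBuilder();
    for (byte b : value) {
      int v = b & 0xFF;
      if (v >= 32 && v < 127 && v != '\\') {
        sb.append((char) v);
      } else {
        sb.append(String.format("\\x%02X", v));
      }
    }
    return sb.toString();
  }

  static String format(int tagType, byte[] value) {
    return "[Tag type : " + tagType + ", value : " + toStringBinary(value) + "]";
  }

  public static void main(String[] args) {
    // Reproduces the TTL tag from the JIRA description.
    System.out.println(format(8, new byte[] {0x00, 0x0E, (byte) 0xEE, (byte) 0xEE}));
  }
}
```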
[jira] [Commented] (HBASE-15676) FuzzyRowFilter fails and matches all the rows in the table if the mask consists of all 0s
[ https://issues.apache.org/jira/browse/HBASE-15676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261445#comment-15261445 ]

Hadoop QA commented on HBASE-15676:
-----------------------------------

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s {color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 18s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s {color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s {color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 31s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 27s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 51s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s {color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s {color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s {color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 3s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s {color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 56s {color} | {color:red} hbase-client: patch generated 1 new + 16 unchanged - 1 fixed = 17 total (was 17) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 35s {color} | {color:red} hbase-server: patch generated 1 new + 16 unchanged - 1 fixed = 17 total (was 17) {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 26m 49s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s {color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 57s {color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 111m 4s {color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s {color} | {color:green} Patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 167m 3s {color} | {color:black} {color} |

|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.security.access.TestAccessController3 |
| | hadoop.hbase.security.access.TestNamespaceCommands |

|| Subsystem || Report/Notes ||
| JIRA
[jira] [Updated] (HBASE-15686) Add override mechanism for the exempt classes when dynamically loading table coprocessor
[ https://issues.apache.org/jira/browse/HBASE-15686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Yu updated HBASE-15686:
---------------------------
    Hadoop Flags: Reviewed
   Fix Version/s: 1.4.0
                  2.0.0
      Issue Type: Improvement (was: Bug)

> Add override mechanism for the exempt classes when dynamically loading table coprocessor
> -----------------------------------------------------------------------------------------
>
>                 Key: HBASE-15686
>                 URL: https://issues.apache.org/jira/browse/HBASE-15686
>             Project: HBase
>          Issue Type: Improvement
>          Components: Coprocessors
>    Affects Versions: 1.0.1
>            Reporter: Sangjin Lee
>            Assignee: Ted Yu
>             Fix For: 2.0.0, 1.4.0
>
>         Attachments: 15686.v2.txt, 15686.v3.txt, 15686.v4.txt, 15686.v5.txt, 15686.v6.txt, 15686.wip
>
> As part of Hadoop's Timeline Service v.2 (YARN-2928), we're adding a table coprocessor (YARN-4062). However, we're finding that the coprocessor cannot be loaded dynamically. A relevant snippet from the exception:
> {noformat}
> java.io.IOException: Class org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor cannot be loaded
>     at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1329)
>     at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1269)
>     at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:398)
>     at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:42436)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
>     at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>     at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: Class org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor cannot be loaded
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.testTableCoprocessorAttrs(RegionCoprocessorHost.java:324)
>     at org.apache.hadoop.hbase.master.HMaster.checkClassLoading(HMaster.java:1483)
>     at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1327)
>     ... 8 more
> Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor
>     at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>     at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>     at org.apache.hadoop.hbase.util.CoprocessorClassLoader.loadClass(CoprocessorClassLoader.java:275)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.testTableCoprocessorAttrs(RegionCoprocessorHost.java:322)
>     ... 10 more
> {noformat}
> We tracked it down to the fact that {{CoprocessorClassLoader}} regards all hadoop classes as exempt from loading from the coprocessor jar. Since our coprocessor sits in the coprocessor jar, and yet the loading of this class is delegated to the parent, which does not have this jar, the classloading fails.
> What would be nice is the ability to exclude certain classes from the exempt classes so that they can be loaded via the table coprocessor classloader. See hadoop's {{ApplicationClassLoader}} for a similar feature.
> Is there any other way to load this coprocessor at the table scope?
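The override mechanism requested above can be sketched as a simple prefix-matching policy. This is an illustrative sketch, not the actual CoprocessorClassLoader code; the class, method, and attribute names below are assumptions for illustration:

```java
// Illustrative sketch: classes matching an exempt prefix are normally
// delegated to the parent loader, unless an "included" prefix (parsed from a
// hypothetical table attribute such as
// hbase.coprocessor.classloader.included.classes, semicolon separated)
// explicitly claims them for the coprocessor jar.
import java.util.Arrays;
import java.util.List;

final class LoaderPolicySketch {
  // Prefixes whose classes are normally delegated to the parent loader.
  static final List<String> CLASS_PREFIX_EXEMPTIONS =
      Arrays.asList("org.apache.hadoop.", "java.", "javax.");

  static boolean loadFromCoprocessorJar(String className, List<String> includedPrefixes) {
    for (String included : includedPrefixes) {
      if (className.startsWith(included)) {
        return true; // override: load from the coprocessor jar
      }
    }
    for (String exempt : CLASS_PREFIX_EXEMPTIONS) {
      if (className.startsWith(exempt)) {
        return false; // delegate to parent; fails if the parent lacks the jar
      }
    }
    return true;
  }

  public static void main(String[] args) {
    // The FlowRunCoprocessor matches an exempt prefix, but the include list
    // reclaims it for the coprocessor jar's own loader.
    System.out.println(loadFromCoprocessorJar(
        "org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor",
        Arrays.asList("org.apache.hadoop.yarn."))); // true
  }
}
```

This mirrors the behavior of hadoop's ApplicationClassLoader, where an explicit include list wins over the default exemption list.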
[jira] [Updated] (HBASE-15686) Add override mechanism for the exempt classes when dynamically loading table coprocessor
[ https://issues.apache.org/jira/browse/HBASE-15686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Yu updated HBASE-15686:
---------------------------
    Release Note:
New coprocessor table descriptor attribute, hbase.coprocessor.classloader.included.classes, is added. Users can specify class name prefixes (semicolon separated) which should be loaded by CoprocessorClassLoader through this attribute, using the following syntax:
{code}
hbase> alter 't1', 'coprocessor'=>'hdfs:///foo.jar|com.foo.FooRegionObserver|1001|arg1=1,arg2=2'
{code}

  was:
New coprocessor table descriptor attribute, coprocessor.classloader.included.classes, is added. Users can specify class name prefixes (semicolon separated) which should be loaded by CoprocessorClassLoader through this attribute, using the following syntax:
{code}
hbase> alter 't1', 'coprocessor'=>'hdfs:///foo.jar|com.foo.FooRegionObserver|1001|arg1=1,arg2=2'
{code}
[jira] [Updated] (HBASE-15676) FuzzyRowFilter fails and matches all the rows in the table if the mask consists of all 0s
[ https://issues.apache.org/jira/browse/HBASE-15676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt Warhaftig updated HBASE-15676:
-----------------------------------
    Attachment: hbase-15676-v4.patch

{quote}
"Please fix checkstyle and findbugs"
{quote}
Had overlooked the checkstyle error; it is fixed now in 'hbase-15676-v4.patch'.
[jira] [Commented] (HBASE-15686) Add override mechanism for the exempt classes when dynamically loading table coprocessor
[ https://issues.apache.org/jira/browse/HBASE-15686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261415#comment-15261415 ]

Enis Soztutar commented on HBASE-15686:
---------------------------------------

+1.
[jira] [Commented] (HBASE-15676) FuzzyRowFilter fails and matches all the rows in the table if the mask consists of all 0s
[ https://issues.apache.org/jira/browse/HBASE-15676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261413#comment-15261413 ] Heng Chen commented on HBASE-15676: --- Please fix checkstyle and findbugs, otherwise patch_v3 LGTM. +1 > FuzzyRowFilter fails and matches all the rows in the table if the mask > consists of all 0s > - > > Key: HBASE-15676 > URL: https://issues.apache.org/jira/browse/HBASE-15676 > Project: HBase > Issue Type: Bug > Components: Filters >Affects Versions: 2.0.0, 0.98.13, 1.0.2, 1.2.0, 1.1.1 >Reporter: Rohit Sinha > Attachments: hbase-15676-v1.patch, hbase-15676-v2.patch, > hbase-15676-v3.patch > > > While using FuzzyRowFilter we noticed that if the mask array consists of all > 0s (fixed) the FuzzyRowFilter matches all the rows in the table. We noticed > this on HBase 1.1, 1.2 and higher. > After some digging we suspect that this is because of the isPreprocessedMask() > check used in preprocessMask(), which was added here: > https://issues.apache.org/jira/browse/HBASE-13761 > If the mask consists of all 0s then isPreprocessedMask() returns true and > the preprocessing which is responsible for changing 0s to -1 doesn't happen, and > hence all rows are matched in the scan. > This scenario can be tested in TestFuzzyRowFilterEndToEnd#testHBASE14782(). If > we change > byte[] fuzzyKey = Bytes.toBytesBinary("\\x00\\x00\\x044"); > byte[] mask = new byte[] {1,0,0,0}; > to > byte[] fuzzyKey = Bytes.toBytesBinary("\\x9B\\x00\\x044e"); > byte[] mask = new byte[] {0,0,0,0,0}; > we expect one match, but this will match all the rows in the table. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
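The suspected failure mode can be reproduced with a simplified, self-contained reconstruction of the two methods. The real FuzzyRowFilter code differs in detail; treat this as an assumption-laden sketch of the logic the JIRA describes, not the actual implementation:

```java
// Simplified reconstruction of the suspected bug. Method names follow the
// JIRA discussion; the bodies are approximations, not the exact HBase code.
public class FuzzyMaskDemo {
    // Preprocessing is supposed to rewrite "fixed" positions (0) to -1 so
    // the fast comparison path can use them directly.
    static byte[] preprocessMask(byte[] mask) {
        if (isPreprocessedMask(mask)) {
            return mask; // skipped: mask assumed to be already processed
        }
        for (int i = 0; i < mask.length; i++) {
            if (mask[i] == 0) {
                mask[i] = -1;
            }
        }
        return mask;
    }

    // Suspect check: a mask of all 0s looks "already processed" here, so
    // the 0 -> -1 rewrite above never runs and every row ends up matching.
    static boolean isPreprocessedMask(byte[] mask) {
        for (byte b : mask) {
            if (b != -1 && b != 0) {
                return false;
            }
        }
        return true;
    }
}
```

With this logic, a mixed mask like {1,0,0,0} is rewritten as expected, while an all-zero mask slips through untouched, which matches the reported symptom.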
[jira] [Commented] (HBASE-15707) ImportTSV bulk output does not support tags with hfile.format.version=3
[ https://issues.apache.org/jira/browse/HBASE-15707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261411#comment-15261411 ] Hudson commented on HBASE-15707: FAILURE: Integrated in HBase-1.3 #675 (See [https://builds.apache.org/job/HBase-1.3/675/]) HBASE-15707 ImportTSV bulk output does not support tags with (tedyu: rev bc44128fce4675df954cd3d824e31d0f30d08b4f) * hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java * hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java > ImportTSV bulk output does not support tags with hfile.format.version=3 > --- > > Key: HBASE-15707 > URL: https://issues.apache.org/jira/browse/HBASE-15707 > Project: HBase > Issue Type: Bug > Components: mapreduce >Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.5, 1.2.2, 1.0.5 >Reporter: huaxiang sun >Assignee: huaxiang sun > Fix For: 2.0.0, 1.3.0, 1.4.0 > > Attachments: HBASE-15707-branch-1_v001.patch, HBASE-15707-v001.patch, > HBASE-15707-v002.patch > > > Running the following command: > {code} > hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \ > -Dhfile.format.version=3 \ > -Dmapreduce.map.combine.minspills=1 \ > -Dimporttsv.separator=, \ > -Dimporttsv.skip.bad.lines=false \ > -Dimporttsv.columns="HBASE_ROW_KEY,cf1:a,HBASE_CELL_TTL" \ > -Dimporttsv.bulk.output=/tmp/testttl/output/1 \ > testttl \ > /tmp/testttl/input > {code} > The content of input is like: > {code} > row1,data1,0060 > row2,data2,0660 > row3,data3,0060 > row4,data4,0660 > {code} > When running the hfile tool on the output hfile, there is no ttl tag. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15676) FuzzyRowFilter fails and matches all the rows in the table if the mask consists of all 0s
[ https://issues.apache.org/jira/browse/HBASE-15676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261393#comment-15261393 ] Ted Yu commented on HBASE-15676: +1 if QA run is good. > FuzzyRowFilter fails and matches all the rows in the table if the mask > consists of all 0s > - > > Key: HBASE-15676 > URL: https://issues.apache.org/jira/browse/HBASE-15676 > Project: HBase > Issue Type: Bug > Components: Filters >Affects Versions: 2.0.0, 0.98.13, 1.0.2, 1.2.0, 1.1.1 >Reporter: Rohit Sinha > Attachments: hbase-15676-v1.patch, hbase-15676-v2.patch, > hbase-15676-v3.patch > > > While using FuzzyRowFilter we noticed that if the mask array consists of all > 0s (fixed) the FuzzyRowFilter matches all the rows in the table. We noticed > this on HBase 1.1, 1.2 and higher. > After some digging we suspect that this is because of the isPreprocessedMask() > check used in preprocessMask(), which was added here: > https://issues.apache.org/jira/browse/HBASE-13761 > If the mask consists of all 0s then isPreprocessedMask() returns true and > the preprocessing which is responsible for changing 0s to -1 doesn't happen, and > hence all rows are matched in the scan. > This scenario can be tested in TestFuzzyRowFilterEndToEnd#testHBASE14782(). If > we change > byte[] fuzzyKey = Bytes.toBytesBinary("\\x00\\x00\\x044"); > byte[] mask = new byte[] {1,0,0,0}; > to > byte[] fuzzyKey = Bytes.toBytesBinary("\\x9B\\x00\\x044e"); > byte[] mask = new byte[] {0,0,0,0,0}; > we expect one match, but this will match all the rows in the table. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15278) AsyncRPCClient hangs if Connection closes before RPC call response
[ https://issues.apache.org/jira/browse/HBASE-15278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261373#comment-15261373 ] Duo Zhang commented on HBASE-15278: --- Will take a look today. Thanks for the reminder [~enis]. > AsyncRPCClient hangs if Connection closes before RPC call response > --- > > Key: HBASE-15278 > URL: https://issues.apache.org/jira/browse/HBASE-15278 > Project: HBase > Issue Type: Bug >Reporter: Enis Soztutar >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HBASE-15278.patch, hbase-15278_v00.patch > > > The test for HBASE-15212 discovered an issue with the Async RPC Client. > In that test, if an RPC call writes a request larger than the max allowed > size, the server closes the connection. However, the > async client does not seem to handle connection closes with outstanding RPC > calls. The client just hangs. > Marking this a blocker against 2.0 since it is the default there. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15337) Document FIFO and date tiered compaction in the book
[ https://issues.apache.org/jira/browse/HBASE-15337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261370#comment-15261370 ] Duo Zhang commented on HBASE-15337: --- No, I have to finish HBASE-15454 first, so I do not have time to do the documentation work here... Thanks [~enis]. > Document FIFO and date tiered compaction in the book > > > Key: HBASE-15337 > URL: https://issues.apache.org/jira/browse/HBASE-15337 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: Enis Soztutar > Fix For: 2.0.0, 1.3.0 > > > We have two new compaction algorithms, FIFO and date tiered, that are for time > series data. We should document how to use them in the book. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15685) Typo in REST documentation
[ https://issues.apache.org/jira/browse/HBASE-15685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-15685: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.0.0 Status: Resolved (was: Patch Available) Committed this. Thanks [~biwa7636] for the patch. > Typo in REST documentation > -- > > Key: HBASE-15685 > URL: https://issues.apache.org/jira/browse/HBASE-15685 > Project: HBase > Issue Type: Bug > Components: documentation >Reporter: Bin Wang >Assignee: Bin Wang >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-15685.patch, HBASE-15685.patch > > Original Estimate: 2h > Time Spent: 3h > Remaining Estimate: 0h > > The Chapter - [REST|http://hbase.apache.org/book.html#_table_information] of > the HBase Book has a few typos in the provided example links, like > "http://example.com:8000//:?v=" > which misses a forward slash between the port number 8000 and the table name. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15337) Document FIFO and date tiered compaction in the book
[ https://issues.apache.org/jira/browse/HBASE-15337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261355#comment-15261355 ] Enis Soztutar commented on HBASE-15337: --- [~claraxiong] / [~Apache9] do you want to handle this? If not, I can take a stab at it. > Document FIFO and date tiered compaction in the book > > > Key: HBASE-15337 > URL: https://issues.apache.org/jira/browse/HBASE-15337 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: Enis Soztutar > Fix For: 2.0.0, 1.3.0 > > > We have two new compaction algorithms, FIFO and date tiered, that are for time > series data. We should document how to use them in the book. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15686) Add override mechanism for the exempt classes when dynamically loading table coprocessor
[ https://issues.apache.org/jira/browse/HBASE-15686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261331#comment-15261331 ] Hadoop QA commented on HBASE-15686: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 1s {color} | {color:blue} The patch file was not named according to hbase's naming conventions. Please see https://yetus.apache.org/documentation/0.2.1/precommit-patchnames for instructions. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
{color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 37s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 11s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s {color} | {color:green} master passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 12s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 26s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 35s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s {color} | {color:green} master passed with JDK v1.7.0_79 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 0s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 6s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} 
javac {color} | {color:green} 0m 48s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 9s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 26m 30s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 45s {color} | {color:green} hbase-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 116m 34s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 169m 13s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests |
[jira] [Commented] (HBASE-15691) Port HBASE-10205 (ConcurrentModificationException in BucketAllocator) to branch-1
[ https://issues.apache.org/jira/browse/HBASE-15691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261300#comment-15261300 ] Andrew Purtell commented on HBASE-15691: I'm not happy with the code, but it's odd that something went into master and 0.98 while skipping branch-1. We should commit this, I think, then have another look at the BC code before we make any release from the result. > Port HBASE-10205 (ConcurrentModificationException in BucketAllocator) to > branch-1 > - > > Key: HBASE-15691 > URL: https://issues.apache.org/jira/browse/HBASE-15691 > Project: HBase > Issue Type: Sub-task >Affects Versions: 1.3.0 >Reporter: Andrew Purtell >Assignee: Andrew Purtell > Fix For: 1.3.0, 1.2.2 > > Attachments: HBASE-15691-branch-1.patch > > > HBASE-10205 was committed to the trunk and 0.98 branches only. To preserve > continuity we should commit it to branch-1. The change requires more than > nontrivial fixups, so I will attach a backport of the change from trunk to > current branch-1 here. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15645) hbase.rpc.timeout is not used in operations of HTable
[ https://issues.apache.org/jira/browse/HBASE-15645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261291#comment-15261291 ] Hudson commented on HBASE-15645: SUCCESS: Integrated in HBase-1.1-JDK8 #1792 (See [https://builds.apache.org/job/HBase-1.1-JDK8/1792/]) Label the new methods on Table introduced by HBASE-15645 as (stack: rev 6b54917d520d32d00f5b4e9420e0d4894aaa34e8) * hbase-client/src/main/java/org/apache/hadoop/hbase/client/Table.java > hbase.rpc.timeout is not used in operations of HTable > - > > Key: HBASE-15645 > URL: https://issues.apache.org/jira/browse/HBASE-15645 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0, 1.3.0, 1.2.1, 1.0.3, 1.1.4 >Reporter: Phil Yang >Assignee: Phil Yang >Priority: Critical > Fix For: 2.0.0, 1.3.0, 1.0.4, 1.4.0, 1.1.5, 1.2.2 > > Attachments: HBASE-15645-branch-1-v1.patch, > HBASE-15645-branch-1.0-v1.patch, HBASE-15645-branch-1.1-v1.patch, > HBASE-15645-branch-1.2-v1.patch, HBASE-15645-v1.patch, HBASE-15645-v2.patch, > HBASE-15645-v3.patch, HBASE-15645-v4.patch, lable.patch > > > While fixing HBASE-15593, I found that we use operationTimeout as the timeout > of the Get operation rpc call (hbase.client.scanner.timeout.period is used in > the scan rpc), not hbase.rpc.timeout. > This can be verified by adding one line in TestHCM.setUpBeforeClass(): > {code} > TEST_UTIL.getConfiguration().setLong(HConstants.HBASE_RPC_TIMEOUT_KEY, 3000); > {code} > and then running testOperationTimeout(): the test passes, but it should have > failed, because we should get an rpc timeout first after 3 seconds, then the client > should retry and time out again and again until operationTimeout or the max > retries is reached. > If I port this test to 0.98, it will fail as expected. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
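The timeout interaction the report expects — each RPC attempt bounded by hbase.rpc.timeout, with retries continuing until operationTimeout or the retry limit — can be sketched in plain Java. Names and the simulation are illustrative, not the HBase client API:

```java
// Illustrative model of per-attempt vs. whole-operation timeouts.
// Each attempt is assumed to fail after exactly rpcTimeoutMs; we count how
// many attempts fit inside the operation deadline. The reported bug is
// equivalent to rpcTimeoutMs silently becoming operationTimeoutMs, so the
// first attempt consumes the whole budget and no retries ever happen.
public class TimeoutDemo {
    static int attemptsUntilDeadline(long rpcTimeoutMs,
                                     long operationTimeoutMs,
                                     int maxRetries) {
        long elapsed = 0;
        int attempts = 0;
        while (attempts < maxRetries && elapsed < operationTimeoutMs) {
            elapsed += rpcTimeoutMs; // simulated attempt ending in rpc timeout
            attempts++;
        }
        return attempts;
    }
}
```

With a 3-second rpc timeout and a 10-second operation timeout the client gets several attempts; if the rpc timeout is effectively the operation timeout, it gets exactly one, which is the behavior the test above should surface.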
[jira] [Commented] (HBASE-15697) Excessive TestHRegion running time on branch-1
[ https://issues.apache.org/jira/browse/HBASE-15697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261289#comment-15261289 ] Andrew Purtell commented on HBASE-15697: bq. But the better fix is that we need to ensure that compaction runs frequently and they are getting cleared. Can we also apply these changes to trunk? They sound like nice test improvements. Plus, it would be weird if branch-1 gets something trunk does not. > Excessive TestHRegion running time on branch-1 > -- > > Key: HBASE-15697 > URL: https://issues.apache.org/jira/browse/HBASE-15697 > Project: HBase > Issue Type: Bug >Affects Versions: 1.3.0 >Reporter: Andrew Purtell >Assignee: ramkrishna.s.vasudevan > Fix For: 1.3.0 > > Attachments: HBASE-15697_branch-1.patch > > > On my dev box TestHRegion takes about 90 seconds to complete in master and > about 60 seconds in 0.98, but about 370 seconds in branch-1. Furthermore > TestHRegion in branch-1 blew past my open files ulimit. I had to raise it > from the default in order for the unit to complete at all. > I am going to bisect the recent history of branch-1 in search of a culprit > and report back. 
> {panel:title=master} > Running org.apache.hadoop.hbase.regionserver.TestHRegion > Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 87.299 sec > - in org.apache.hadoop.hbase.regionserver.TestHRegion > Running org.apache.hadoop.hbase.regionserver.TestHRegion > Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 91.529 sec > - in org.apache.hadoop.hbase.regionserver.TestHRegion > Running org.apache.hadoop.hbase.regionserver.TestHRegion > Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 89.23 sec - > in org.apache.hadoop.hbase.regionserver.TestHRegion > {panel} > {panel:title=branch-1} > Running org.apache.hadoop.hbase.regionserver.TestHRegion > Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 368.868 sec > - in org.apache.hadoop.hbase.regionserver.TestHRegion > Running org.apache.hadoop.hbase.regionserver.TestHRegion > Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 366.203 sec > - in org.apache.hadoop.hbase.regionserver.TestHRegion > Running org.apache.hadoop.hbase.regionserver.TestHRegion > Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 345.806 sec > - in org.apache.hadoop.hbase.regionserver.TestHRegion > {panel} > {panel:title=0.98} > Running org.apache.hadoop.hbase.regionserver.TestHRegion > Tests run: 90, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.038 sec - > in org.apache.hadoop.hbase.regionserver.TestHRegion > Running org.apache.hadoop.hbase.regionserver.TestHRegion > Tests run: 90, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 56.382 sec - > in org.apache.hadoop.hbase.regionserver.TestHRegion > Running org.apache.hadoop.hbase.regionserver.TestHRegion > Tests run: 90, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.509 sec - > in org.apache.hadoop.hbase.regionserver.TestHRegion > {panel} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15697) Excessive TestHRegion running time on branch-1
[ https://issues.apache.org/jira/browse/HBASE-15697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261286#comment-15261286 ] Andrew Purtell commented on HBASE-15697: bq. (Ram) Can you try out this patch? I applied the patch to head of branch-1 and measured these very much improved running times: Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 82.978 sec - in org.apache.hadoop.hbase.regionserver.TestHRegion Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 75.356 sec - in org.apache.hadoop.hbase.regionserver.TestHRegion Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 74.43 sec - in org.apache.hadoop.hbase.regionserver.TestHRegion > Excessive TestHRegion running time on branch-1 > -- > > Key: HBASE-15697 > URL: https://issues.apache.org/jira/browse/HBASE-15697 > Project: HBase > Issue Type: Bug >Affects Versions: 1.3.0 >Reporter: Andrew Purtell >Assignee: ramkrishna.s.vasudevan > Fix For: 1.3.0 > > Attachments: HBASE-15697_branch-1.patch > > > On my dev box TestHRegion takes about 90 seconds to complete in master and > about 60 seconds in 0.98, but about 370 seconds in branch-1. Furthermore > TestHRegion in branch-1 blew past my open files ulimit. I had to raise it > from default in order for the unit to complete at all. > I am going to bisect the recent history of branch-1 in search of a culprit > and report back. 
> {panel:title=master} > Running org.apache.hadoop.hbase.regionserver.TestHRegion > Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 87.299 sec > - in org.apache.hadoop.hbase.regionserver.TestHRegion > Running org.apache.hadoop.hbase.regionserver.TestHRegion > Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 91.529 sec > - in org.apache.hadoop.hbase.regionserver.TestHRegion > Running org.apache.hadoop.hbase.regionserver.TestHRegion > Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 89.23 sec - > in org.apache.hadoop.hbase.regionserver.TestHRegion > {panel} > {panel:title=branch-1} > Running org.apache.hadoop.hbase.regionserver.TestHRegion > Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 368.868 sec > - in org.apache.hadoop.hbase.regionserver.TestHRegion > Running org.apache.hadoop.hbase.regionserver.TestHRegion > Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 366.203 sec > - in org.apache.hadoop.hbase.regionserver.TestHRegion > Running org.apache.hadoop.hbase.regionserver.TestHRegion > Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 345.806 sec > - in org.apache.hadoop.hbase.regionserver.TestHRegion > {panel} > {panel:title=0.98} > Running org.apache.hadoop.hbase.regionserver.TestHRegion > Tests run: 90, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.038 sec - > in org.apache.hadoop.hbase.regionserver.TestHRegion > Running org.apache.hadoop.hbase.regionserver.TestHRegion > Tests run: 90, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 56.382 sec - > in org.apache.hadoop.hbase.regionserver.TestHRegion > Running org.apache.hadoop.hbase.regionserver.TestHRegion > Tests run: 90, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.509 sec - > in org.apache.hadoop.hbase.regionserver.TestHRegion > {panel} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15698) Increment TimeRange not serialized to server
[ https://issues.apache.org/jira/browse/HBASE-15698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261281#comment-15261281 ] Andrew Purtell commented on HBASE-15698: Unless I misunderstand what happened this is a wire compatibility breach. No more 1.2 or any subsequent 1.x releases should go out until it is fixed. I made this issue a blocker. We can document errata for already released versions of 1.2 if need be. [~busbey] [~mantonov] > Increment TimeRange not serialized to server > > > Key: HBASE-15698 > URL: https://issues.apache.org/jira/browse/HBASE-15698 > Project: HBase > Issue Type: Bug >Affects Versions: 1.2.0, 1.3.0 >Reporter: James Taylor >Priority: Blocker > Labels: phoenix > Fix For: 1.3.0, 1.2.2 > > > Before HBase-1.2, the Increment TimeRange set on the client was serialized > over to the server. As of HBase 1.2, this appears to no longer be true, as my > preIncrement coprocessor always gets HConstants.LATEST_TIMESTAMP as the value > of increment.getTimeRange().getMax() regardless of what the client has > specified. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15698) Increment TimeRange not serialized to server
[ https://issues.apache.org/jira/browse/HBASE-15698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-15698: --- Affects Version/s: 1.3.0 Priority: Blocker (was: Major) Fix Version/s: 1.2.2 1.3.0 > Increment TimeRange not serialized to server > > > Key: HBASE-15698 > URL: https://issues.apache.org/jira/browse/HBASE-15698 > Project: HBase > Issue Type: Bug >Affects Versions: 1.2.0, 1.3.0 >Reporter: James Taylor >Priority: Blocker > Labels: phoenix > Fix For: 1.3.0, 1.2.2 > > > Before HBase-1.2, the Increment TimeRange set on the client was serialized > over to the server. As of HBase 1.2, this appears to no longer be true, as my > preIncrement coprocessor always gets HConstants.LATEST_TIMESTAMP as the value > of increment.getTimeRange().getMax() regardless of what the client has > specified. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-15731) Add on a connection pool
Elliott Clark created HBASE-15731: - Summary: Add on a connection pool Key: HBASE-15731 URL: https://issues.apache.org/jira/browse/HBASE-15731 Project: HBase Issue Type: Sub-task Reporter: Elliott Clark Assignee: Elliott Clark We need to reuse connections, so we need a pool. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15645) hbase.rpc.timeout is not used in operations of HTable
[ https://issues.apache.org/jira/browse/HBASE-15645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261271#comment-15261271 ] Hudson commented on HBASE-15645: SUCCESS: Integrated in HBase-1.1-JDK7 #1706 (See [https://builds.apache.org/job/HBase-1.1-JDK7/1706/]) Label the new methods on Table introduced by HBASE-15645 as (stack: rev 6b54917d520d32d00f5b4e9420e0d4894aaa34e8) * hbase-client/src/main/java/org/apache/hadoop/hbase/client/Table.java > hbase.rpc.timeout is not used in operations of HTable > - > > Key: HBASE-15645 > URL: https://issues.apache.org/jira/browse/HBASE-15645 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0, 1.3.0, 1.2.1, 1.0.3, 1.1.4 >Reporter: Phil Yang >Assignee: Phil Yang >Priority: Critical > Fix For: 2.0.0, 1.3.0, 1.0.4, 1.4.0, 1.1.5, 1.2.2 > > Attachments: HBASE-15645-branch-1-v1.patch, > HBASE-15645-branch-1.0-v1.patch, HBASE-15645-branch-1.1-v1.patch, > HBASE-15645-branch-1.2-v1.patch, HBASE-15645-v1.patch, HBASE-15645-v2.patch, > HBASE-15645-v3.patch, HBASE-15645-v4.patch, lable.patch > > > While fixing HBASE-15593, I found that we use operationTimeout as the timeout > of the Get operation rpc call (hbase.client.scanner.timeout.period is used in > the scan rpc), not hbase.rpc.timeout. > This can be verified by adding one line in TestHCM.setUpBeforeClass(): > {code} > TEST_UTIL.getConfiguration().setLong(HConstants.HBASE_RPC_TIMEOUT_KEY, 3000); > {code} > and then running testOperationTimeout(): the test passes, but it should have > failed, because we should get an rpc timeout first after 3 seconds, then the client > should retry and time out again and again until operationTimeout or the max > retries is reached. > If I port this test to 0.98, it will fail as expected. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15716) HRegion#RegionScannerImpl scannerReadPoints synchronization costs
[ https://issues.apache.org/jira/browse/HBASE-15716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261248#comment-15261248 ] Hadoop QA commented on HBASE-15716: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 8s {color} | {color:red} HBASE-15716 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/0.2.1/precommit-patchnames for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12801125/15716.prune.synchronizations.v4.patch | | JIRA Issue | HBASE-15716 | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/1649/console | | Powered by | Apache Yetus 0.2.1 http://yetus.apache.org | This message was automatically generated. > HRegion#RegionScannerImpl scannerReadPoints synchronization costs > - > > Key: HBASE-15716 > URL: https://issues.apache.org/jira/browse/HBASE-15716 > Project: HBase > Issue Type: Bug > Components: Performance >Reporter: stack >Assignee: stack > Attachments: 15716.prune.synchronizations.patch, > 15716.prune.synchronizations.v3.patch, 15716.prune.synchronizations.v4.patch, > Screen Shot 2016-04-26 at 2.05.45 PM.png, Screen Shot 2016-04-26 at 2.06.14 > PM.png, Screen Shot 2016-04-26 at 2.07.06 PM.png, Screen Shot 2016-04-26 at > 2.25.26 PM.png, Screen Shot 2016-04-26 at 6.02.29 PM.png, Screen Shot > 2016-04-27 at 9.49.35 AM.png, > current-branch-1.vs.NoSynchronization.vs.Patch.png, hits.png, > remove_cslm.patch > > > Here is a [~lhofhansl] special. > When we construct the region scanner, we get our read point and then store it > with the scanner instance in a Region scoped CSLM. This is done under a > synchronize on the CSLM. 
> This synchronize on a region-scoped Map while creating region scanners is the > outstanding point of lock contention according to flight recorder (my > workload is workload c, random reads). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
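The contended pattern the issue describes looks roughly like the following self-contained sketch. Field and method names only approximate the HBase code; this is not the actual implementation:

```java
import java.util.concurrent.ConcurrentSkipListMap;

// Approximate model of the region-scoped scannerReadPoints CSLM. The map is
// already thread-safe, but every scanner open still takes the same monitor so
// that reading the current read point and registering the scanner are atomic
// with respect to getSmallestReadPoint().
public class ReadPointDemo {
    static final ConcurrentSkipListMap<Long, Long> scannerReadPoints =
            new ConcurrentSkipListMap<>();
    static volatile long memstoreReadPoint = 42L;

    // Called on every scanner construction: this synchronized block is the
    // contention point flight recorder surfaces under random reads.
    static long registerScanner(long scannerId) {
        synchronized (scannerReadPoints) {
            long readPt = memstoreReadPoint;
            scannerReadPoints.put(scannerId, readPt);
            return readPt;
        }
    }

    // The smallest outstanding read point gates what older versions the
    // region may discard, hence the need for the atomicity above.
    static long getSmallestReadPoint() {
        synchronized (scannerReadPoints) {
            return scannerReadPoints.values().stream()
                    .mapToLong(Long::longValue)
                    .min()
                    .orElse(memstoreReadPoint);
        }
    }
}
```

Every scanner open serializes on one monitor per region, which is why the attached patches focus on pruning these synchronizations rather than replacing the map itself.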
[jira] [Commented] (HBASE-15477) Do not save 'next block header' when we cache hfileblocks
[ https://issues.apache.org/jira/browse/HBASE-15477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261226#comment-15261226 ] Hudson commented on HBASE-15477: FAILURE: Integrated in HBase-1.4 #119 (See [https://builds.apache.org/job/HBase-1.4/119/]) HBASE-15477 Purge 'next block header' from cached blocks (stack: rev 6b78409eb259e263354cdd95b6970168219b1464) * hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV3.java * hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileEncryption.java * hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV3.java * hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileDataBlockEncoder.java * hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestChecksum.java * hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java * hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java * hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java * hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/KeyValueScanner.java * hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockCompatibility.java * hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContext.java * hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/ChecksumUtil.java * hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCacheOnWriteInSchema.java * hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java * hbase-server/src/test/java/org/apache/hadoop/hbase/migration/TestNamespaceUpgrade.java * hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java * hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java * hbase-server/src/test/data/TestNamespaceUpgrade.tgz * hbase-external-blockcache/src/main/java/org/apache/hadoop/hbase/io/hfile/MemcachedBlockCache.java * 
hbase-server/src/test/java/org/apache/hadoop/hbase/TestMetaMigrationConvertingToPB.java * hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV2.java * hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java * hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/CacheTestUtils.java * hbase-server/src/test/java/org/apache/hadoop/hbase/migration/TestUpgradeTo96.java * hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlock.java * hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestPrefetch.java > Do not save 'next block header' when we cache hfileblocks > - > > Key: HBASE-15477 > URL: https://issues.apache.org/jira/browse/HBASE-15477 > Project: HBase > Issue Type: Sub-task > Components: BlockCache, Performance >Reporter: stack >Assignee: stack > Fix For: 2.0.0 > > Attachments: 15366v4.patch, 15477.backport.branch-1.patch, > 15477.backport.branch-1.v2.patch, 15477.backport.branch-1.v3.patch, > 15477.backport.branch-1.v4.patch, 15477.backport.branch-1.v4.patch, > 15477.backport.branch-1.v5.patch, 15477.backport.branch-1.v5.patch, > 15477.backport.branch-1.v6.patch, 15477.backport.branch-1.v7.patch, > 15477.patch, 15477v2.patch, 15477v3.patch, 15477v3.patch, 15477v4.patch > > > When we read from HDFS, we overread to pick up the next block's header. > Doing this saves a seek as we move through the hfile; we save having to > do an explicit seek just to read the block header every time we need to > read the body. We used to read in the next header as part of the > current block's buffer. This buffer was then what got persisted to the > blockcache, so we were over-persisting: writing out our block plus the > next block's header (over-persisting 33 bytes). Parsing of HFileBlock was > complicated by this extra tail. Fix. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
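The core idea of the HBASE-15477 fix above can be sketched as follows. This is an illustrative sketch only, not the actual patch; the class name, method name, and the way the buffer is handed to the cache are all hypothetical. It shows the point of the change: when a block is read with the next block's header over-read into the same buffer, drop that 33-byte tail before caching, so the cache holds exactly one block.

```java
import java.util.Arrays;

// Hypothetical sketch (not HBase source): trim the over-read
// next-block header before an HFile block buffer is cached.
public class BlockTrimSketch {
    // HBASE-15477 reports the over-persisted tail as 33 bytes.
    static final int NEXT_HEADER_SIZE = 33;

    // Return a copy of the read buffer without the trailing next-block
    // header, which should never be persisted to the block cache.
    static byte[] trimForCache(byte[] readBuffer) {
        if (readBuffer.length <= NEXT_HEADER_SIZE) {
            return readBuffer; // defensive: nothing to trim
        }
        return Arrays.copyOf(readBuffer, readBuffer.length - NEXT_HEADER_SIZE);
    }

    public static void main(String[] args) {
        // Simulate a 100-byte serialized block plus the over-read tail.
        byte[] overRead = new byte[100 + NEXT_HEADER_SIZE];
        byte[] cached = trimForCache(overRead);
        assert cached.length == 100;
    }
}
```

The over-read itself is still worth doing (it saves the extra seek per block); the sketch only changes what gets persisted.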
[jira] [Updated] (HBASE-15676) FuzzyRowFilter fails and matches all the rows in the table if the mask consists of all 0s
[ https://issues.apache.org/jira/browse/HBASE-15676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Warhaftig updated HBASE-15676: --- Attachment: hbase-15676-v3.patch Attached patch 'hbase-15676-v3.patch' corrects the smell FindBugs found and updates the code comment. > FuzzyRowFilter fails and matches all the rows in the table if the mask > consists of all 0s > - > > Key: HBASE-15676 > URL: https://issues.apache.org/jira/browse/HBASE-15676 > Project: HBase > Issue Type: Bug > Components: Filters >Affects Versions: 2.0.0, 0.98.13, 1.0.2, 1.2.0, 1.1.1 >Reporter: Rohit Sinha > Attachments: hbase-15676-v1.patch, hbase-15676-v2.patch, > hbase-15676-v3.patch > > > While using FuzzyRowFilter we noticed that if the mask array consists of all > 0s (fixed) the FuzzyRowFilter matches all the rows in the table. We noticed > this on HBase 1.1, 1.2 and higher. > After some digging we suspect that this is because of the isPreprocessedMask() > check used in preprocessMask(), which was added here: > https://issues.apache.org/jira/browse/HBASE-13761 > If the mask consists of all 0s then isPreprocessedMask() returns true, the > preprocessing responsible for changing 0s to -1 doesn't happen, and > hence all rows are matched in the scan. > This scenario can be tested in TestFuzzyRowFilterEndToEnd#testHBASE14782(). If > we change the > byte[] fuzzyKey = Bytes.toBytesBinary("\\x00\\x00\\x044"); > byte[] mask = new byte[] {1,0,0,0}; > to > byte[] fuzzyKey = Bytes.toBytesBinary("\\x9B\\x00\\x044e"); > byte[] mask = new byte[] {0,0,0,0,0}; > We expect one match but this will match all the rows in the table. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
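The failure mode described in HBASE-15676 above can be illustrated with a small sketch. This is a hypothetical reduction, not the actual FuzzyRowFilter source: the guard, preprocessing, and matcher here are simplified stand-ins. The point it demonstrates is that if an "already preprocessed?" guard wrongly fires on an all-zero mask, the rewrite of 0 ("fixed position") to -1 never happens, the matcher sees no fixed positions, and every row matches.

```java
// Hypothetical reduction of the HBASE-15676 bug (not HBase source).
public class FuzzyMaskSketch {
    // Broken guard: a raw mask is only recognized by the presence of a
    // 1 byte, so an all-zero mask looks "already preprocessed".
    static boolean looksPreprocessed(byte[] mask) {
        for (byte b : mask) {
            if (b == 1) return false;
        }
        return true; // all-zero masks slip through here
    }

    // Supposed to rewrite 0 (fixed) -> -1 and 1 (non-fixed) -> 2,
    // but is skipped entirely when the guard misfires.
    static byte[] preprocess(byte[] mask) {
        if (looksPreprocessed(mask)) return mask; // bug path
        byte[] out = mask.clone();
        for (int i = 0; i < out.length; i++) {
            out[i] = (out[i] == 0) ? (byte) -1 : (byte) 2;
        }
        return out;
    }

    // Matcher: -1 means the row byte must equal the fuzzy-key byte;
    // anything else is treated as a wildcard.
    static boolean matches(byte[] row, byte[] fuzzyKey, byte[] mask) {
        for (int i = 0; i < mask.length && i < row.length; i++) {
            if (mask[i] == -1 && row[i] != fuzzyKey[i]) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        byte[] fuzzyKey = {9, 8, 7};
        // All-zero mask: preprocessing is skipped, so no byte is fixed
        // and an unrelated row still matches.
        assert matches(new byte[]{1, 2, 3}, fuzzyKey, preprocess(new byte[]{0, 0, 0}));
        // A mask containing a 1 takes the correct path and filters properly.
        byte[] ok = preprocess(new byte[]{1, 0, 0});
        assert !matches(new byte[]{1, 2, 3}, fuzzyKey, ok);
        assert matches(new byte[]{1, 8, 7}, fuzzyKey, ok);
    }
}
```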
[jira] [Commented] (HBASE-14876) Provide maven archetypes
[ https://issues.apache.org/jira/browse/HBASE-14876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261216#comment-15261216 ] Daniel Vimont commented on HBASE-14876: --- The patch for HBASE-14879 (archetype for mapreduce-job) appears to have passed muster with Hadoop QA, so it may be ready to be pushed through to the master branch (and, if time permits, cherry-picked into the 1.3 build). > Provide maven archetypes > > > Key: HBASE-14876 > URL: https://issues.apache.org/jira/browse/HBASE-14876 > Project: HBase > Issue Type: New Feature > Components: build, Usability >Affects Versions: 2.0.0 >Reporter: Nick Dimiduk >Assignee: Daniel Vimont > Labels: beginner, maven > Attachments: HBASE-14876-v2.patch, HBASE-14876.patch, > archetype_prototype.zip, archetype_prototype02.zip, > archetype_shaded_prototype01.zip > > > To help onboard new users, we should provide maven archetypes for hbase > client applications. Off the top of my head, we should have templates for > - hbase client application with all dependencies > - hbase client application using client-shaded jar > - mapreduce application with hbase as input and output (ie, copy table) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15729) Remove old JDiff wrapper scripts in dev-support
[ https://issues.apache.org/jira/browse/HBASE-15729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261212#comment-15261212 ] Hadoop QA commented on HBASE-15729: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 4s {color} | {color:green} The applied patch generated 0 new + 416 unchanged - 44 fixed = 416 total (was 460) {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 1s {color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s {color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 27m 50s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 28m 23s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12801140/HBASE-15729.patch | | JIRA Issue | HBASE-15729 | | Optional Tests | asflicense xml shellcheck shelldocs | | uname | Linux proserpina.apache.org 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / e1bf3a6 | | shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider upgrading.) | | modules | C: . U: . | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/1647/console | | Powered by | Apache Yetus 0.2.1 http://yetus.apache.org | This message was automatically generated. > Remove old JDiff wrapper scripts in dev-support > --- > > Key: HBASE-15729 > URL: https://issues.apache.org/jira/browse/HBASE-15729 > Project: HBase > Issue Type: Task >Reporter: Dima Spivak >Assignee: Dima Spivak >Priority: Minor > Attachments: HBASE-15729.patch > > > Since HBASE-12808, we've been using the Java API Compliance Checker instead > of JDiff to look at API compatibility. Probably makes sense to remove the old > wrapper scripts that aren't being used anymore. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15721) Optimization in cloning cells into MSLAB
[ https://issues.apache.org/jira/browse/HBASE-15721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261190#comment-15261190 ] Ted Yu commented on HBASE-15721: lgtm {code} 469 // We wont use this cells in the write path at all. {code} 'this cells' -> 'this cell' > Optimization in cloning cells into MSLAB > > > Key: HBASE-15721 > URL: https://issues.apache.org/jira/browse/HBASE-15721 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Reporter: Anoop Sam John >Assignee: Anoop Sam John > Fix For: 2.0.0 > > Attachments: HBASE-15721.patch > > > Before cells are added to the memstore CSLM, the cell is cloned by copying > it into the MSLAB chunk area. This is not done in an efficient way. > {code} > public static int appendToByteArray(final Cell cell, final byte[] output, > final int offset) { > int pos = offset; > pos = Bytes.putInt(output, pos, keyLength(cell)); > pos = Bytes.putInt(output, pos, cell.getValueLength()); > pos = appendKeyTo(cell, output, pos); > pos = CellUtil.copyValueTo(cell, output, pos); > if ((cell.getTagsLength() > 0)) { > pos = Bytes.putAsShort(output, pos, cell.getTagsLength()); > pos = CellUtil.copyTagTo(cell, output, pos); > } > return pos; > } > {code} > The copy happens in 9 steps and we end up parsing all the lengths. When the cell > implementation is backed by a single byte[] (like KeyValue) this can be done > in a single-step copy. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
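The direction of the HBASE-15721 optimization above can be sketched like this. The types and method names here are hypothetical stand-ins, not the HBase patch: when a cell's key, value, and tags already live contiguously in one backing byte[], the clone into the MSLAB chunk can be a single bulk System.arraycopy instead of nine field-by-field puts that each re-parse a length.

```java
// Hypothetical sketch (not HBase source) of the single-copy fast path
// for cloning a contiguously backed cell into an MSLAB chunk.
public class SingleCopySketch {
    // Minimal stand-in for a KeyValue-style cell backed by one array.
    static class FlatCell {
        final byte[] backing;
        final int offset;
        final int length;
        FlatCell(byte[] backing, int offset, int length) {
            this.backing = backing;
            this.offset = offset;
            this.length = length;
        }
    }

    // One bulk copy of the whole serialized cell into the chunk,
    // replacing the nine-step field-by-field copy. Returns the next
    // free position in the chunk, like the original appendToByteArray.
    static int copyToChunk(FlatCell cell, byte[] chunk, int pos) {
        System.arraycopy(cell.backing, cell.offset, chunk, pos, cell.length);
        return pos + cell.length;
    }

    public static void main(String[] args) {
        byte[] serialized = {1, 2, 3, 4, 5};
        FlatCell cell = new FlatCell(serialized, 0, serialized.length);
        byte[] chunk = new byte[16];
        int next = copyToChunk(cell, chunk, 0);
        assert next == 5;
        assert chunk[4] == 5;
    }
}
```

Cells that are not backed by a single array would still need the field-by-field path; the fast path applies only when the layout is already the serialized one.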
[jira] [Updated] (HBASE-15687) Allow decoding more than GetResponse from the server
[ https://issues.apache.org/jira/browse/HBASE-15687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark updated HBASE-15687: -- Resolution: Fixed Status: Resolved (was: Patch Available) > Allow decoding more than GetResponse from the server > > > Key: HBASE-15687 > URL: https://issues.apache.org/jira/browse/HBASE-15687 > Project: HBase > Issue Type: Sub-task >Reporter: Elliott Clark >Assignee: Elliott Clark > Attachments: HBASE-15687.patch > > > HBASE-15620 adds on Call serialization and de-serialization. However the > client-serialize-handler currently assumes that all responses will be > GetResponse. > We should keep a call id to response type mapping. Maybe the inner request > should have a response factory inside it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15724) Use explicit docker image
[ https://issues.apache.org/jira/browse/HBASE-15724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark updated HBASE-15724: -- Resolution: Fixed Status: Resolved (was: Patch Available) > Use explicit docker image > - > > Key: HBASE-15724 > URL: https://issues.apache.org/jira/browse/HBASE-15724 > Project: HBase > Issue Type: Sub-task >Reporter: Elliott Clark >Assignee: Elliott Clark > Attachments: HBASE-15724.patch > > > Using an explicit docker image allows the upstream image to change without > breaking us. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15730) Add on script to format all .h,.cc, and BUCK files.
[ https://issues.apache.org/jira/browse/HBASE-15730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark updated HBASE-15730: -- Resolution: Fixed Status: Resolved (was: Patch Available) > Add on script to format all .h,.cc, and BUCK files. > --- > > Key: HBASE-15730 > URL: https://issues.apache.org/jira/browse/HBASE-15730 > Project: HBase > Issue Type: Sub-task >Reporter: Elliott Clark >Assignee: Elliott Clark > Attachments: HBASE-15730.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15718) Add on TableName implementation and tests
[ https://issues.apache.org/jira/browse/HBASE-15718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark updated HBASE-15718: -- Resolution: Fixed Status: Resolved (was: Patch Available) > Add on TableName implementation and tests > - > > Key: HBASE-15718 > URL: https://issues.apache.org/jira/browse/HBASE-15718 > Project: HBase > Issue Type: Sub-task >Reporter: Elliott Clark >Assignee: Elliott Clark > Attachments: HBASE-15718-v1.patch, HBASE-15718-v2.patch, > HBASE-15718.patch > > > Table name will be needed to look up rows from meta. Add the implementation > and a test around TableName. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15730) Add on script to format all .h,.cc, and BUCK files.
[ https://issues.apache.org/jira/browse/HBASE-15730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261169#comment-15261169 ] Hadoop QA commented on HBASE-15730: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} | {color:red} HBASE-15730 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/0.2.1/precommit-patchnames for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12801138/HBASE-15730.patch | | JIRA Issue | HBASE-15730 | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/1646/console | | Powered by | Apache Yetus 0.2.1 http://yetus.apache.org | This message was automatically generated. > Add on script to format all .h,.cc, and BUCK files. > --- > > Key: HBASE-15730 > URL: https://issues.apache.org/jira/browse/HBASE-15730 > Project: HBase > Issue Type: Sub-task >Reporter: Elliott Clark >Assignee: Elliott Clark > Attachments: HBASE-15730.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15727) Canary Tool for Zookeeper
[ https://issues.apache.org/jira/browse/HBASE-15727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261163#comment-15261163 ] Hadoop QA commented on HBASE-15727: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 11s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s {color} | {color:green} master passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 27s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 52s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s {color} | {color:green} master passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 42s {color} | {color:green} the patch passed 
{color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 4m 37s {color} | {color:red} hbase-server: patch generated 2 new + 15 unchanged - 0 fixed = 17 total (was 15) {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 25m 38s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 10s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 102m 50s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 151m 11s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.security.access.TestNamespaceCommands | | Timed out junit tests | org.apache.hadoop.hbase.snapshot.TestMobFlushSnapshotFromClient | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12801104/HBASE-15727.patch | | JIRA Issue | HBASE-15727 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / ce318a2 | | Default Java | 1.7.0_79 | |
[jira] [Commented] (HBASE-15477) Do not save 'next block header' when we cache hfileblocks
[ https://issues.apache.org/jira/browse/HBASE-15477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261146#comment-15261146 ] Hudson commented on HBASE-15477: SUCCESS: Integrated in HBase-1.3 #674 (See [https://builds.apache.org/job/HBase-1.3/674/]) HBASE-15477 Purge 'next block header' from cached blocks (stack: rev 064c3c09ec4ee1641691aee43156bbc3736c53b3) * hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV3.java * hbase-external-blockcache/src/main/java/org/apache/hadoop/hbase/io/hfile/MemcachedBlockCache.java * hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContext.java * hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCacheOnWriteInSchema.java * hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestChecksum.java * hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlock.java * hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java * hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestPrefetch.java * hbase-server/src/test/java/org/apache/hadoop/hbase/TestMetaMigrationConvertingToPB.java * hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java * hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java * hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java * hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockCompatibility.java * hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java * hbase-server/src/test/java/org/apache/hadoop/hbase/migration/TestUpgradeTo96.java * hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileEncryption.java * hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV3.java * hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java * 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV2.java * hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java * hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/CacheTestUtils.java * hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/ChecksumUtil.java * hbase-server/src/test/data/TestNamespaceUpgrade.tgz * hbase-server/src/test/java/org/apache/hadoop/hbase/migration/TestNamespaceUpgrade.java * hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileDataBlockEncoder.java * hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/KeyValueScanner.java > Do not save 'next block header' when we cache hfileblocks > - > > Key: HBASE-15477 > URL: https://issues.apache.org/jira/browse/HBASE-15477 > Project: HBase > Issue Type: Sub-task > Components: BlockCache, Performance >Reporter: stack >Assignee: stack > Fix For: 2.0.0 > > Attachments: 15366v4.patch, 15477.backport.branch-1.patch, > 15477.backport.branch-1.v2.patch, 15477.backport.branch-1.v3.patch, > 15477.backport.branch-1.v4.patch, 15477.backport.branch-1.v4.patch, > 15477.backport.branch-1.v5.patch, 15477.backport.branch-1.v5.patch, > 15477.backport.branch-1.v6.patch, 15477.backport.branch-1.v7.patch, > 15477.patch, 15477v2.patch, 15477v3.patch, 15477v3.patch, 15477v4.patch > > > When we read from HDFS, we overread to pick up the next block's header. > Doing this saves a seek as we move through the hfile; we save having to > do an explicit seek just to read the block header every time we need to > read the body. We used to read in the next header as part of the > current block's buffer. This buffer was then what got persisted to the > blockcache, so we were over-persisting: writing out our block plus the > next block's header (over-persisting 33 bytes). Parsing of HFileBlock was > complicated by this extra tail. Fix. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15707) ImportTSV bulk output does not support tags with hfile.format.version=3
[ https://issues.apache.org/jira/browse/HBASE-15707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261147#comment-15261147 ] Hudson commented on HBASE-15707: SUCCESS: Integrated in HBase-1.3-IT #638 (See [https://builds.apache.org/job/HBase-1.3-IT/638/]) HBASE-15707 ImportTSV bulk output does not support tags with (tedyu: rev bc44128fce4675df954cd3d824e31d0f30d08b4f) * hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java * hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java > ImportTSV bulk output does not support tags with hfile.format.version=3 > --- > > Key: HBASE-15707 > URL: https://issues.apache.org/jira/browse/HBASE-15707 > Project: HBase > Issue Type: Bug > Components: mapreduce >Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.5, 1.2.2, 1.0.5 >Reporter: huaxiang sun >Assignee: huaxiang sun > Fix For: 2.0.0, 1.3.0, 1.4.0 > > Attachments: HBASE-15707-branch-1_v001.patch, HBASE-15707-v001.patch, > HBASE-15707-v002.patch > > > Running the following command: > {code} > hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \ > -Dhfile.format.version=3 \ > -Dmapreduce.map.combine.minspills=1 \ > -Dimporttsv.separator=, \ > -Dimporttsv.skip.bad.lines=false \ > -Dimporttsv.columns="HBASE_ROW_KEY,cf1:a,HBASE_CELL_TTL" \ > -Dimporttsv.bulk.output=/tmp/testttl/output/1 \ > testttl \ > /tmp/testttl/input > {code} > The content of the input is like: > {code} > row1,data1,0060 > row2,data2,0660 > row3,data3,0060 > row4,data4,0660 > {code} > When running the hfile tool on the output hfile, there is no ttl tag. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15729) Remove old JDiff wrapper scripts in dev-support
[ https://issues.apache.org/jira/browse/HBASE-15729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dima Spivak updated HBASE-15729: Attachment: HBASE-15729.patch Uploaded patch. > Remove old JDiff wrapper scripts in dev-support > --- > > Key: HBASE-15729 > URL: https://issues.apache.org/jira/browse/HBASE-15729 > Project: HBase > Issue Type: Task >Reporter: Dima Spivak >Assignee: Dima Spivak >Priority: Minor > Attachments: HBASE-15729.patch > > > Since HBASE-12808, we've been using the Java API Compliance Checker instead > of JDiff to look at API compatibility. Probably makes sense to remove the old > wrapper scripts that aren't being used anymore. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15729) Remove old JDiff wrapper scripts in dev-support
[ https://issues.apache.org/jira/browse/HBASE-15729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dima Spivak updated HBASE-15729: Status: Patch Available (was: Open) > Remove old JDiff wrapper scripts in dev-support > --- > > Key: HBASE-15729 > URL: https://issues.apache.org/jira/browse/HBASE-15729 > Project: HBase > Issue Type: Task >Reporter: Dima Spivak >Assignee: Dima Spivak >Priority: Minor > Attachments: HBASE-15729.patch > > > Since HBASE-12808, we've been using the Java API Compliance Checker instead > of JDiff to look at API compatibility. Probably makes sense to remove the old > wrapper scripts that aren't being used anymore. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15730) Add on script to format all .h,.cc, and BUCK files.
[ https://issues.apache.org/jira/browse/HBASE-15730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark updated HBASE-15730: -- Summary: Add on script to format all .h,.cc, and BUCK files. (was: Add on script to format all .h,.cc,a nd BUCK files.) > Add on script to format all .h,.cc, and BUCK files. > --- > > Key: HBASE-15730 > URL: https://issues.apache.org/jira/browse/HBASE-15730 > Project: HBase > Issue Type: Sub-task >Reporter: Elliott Clark >Assignee: Elliott Clark > Attachments: HBASE-15730.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15730) Add on script to format all .h,.cc, and BUCK files.
[ https://issues.apache.org/jira/browse/HBASE-15730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark updated HBASE-15730: -- Status: Patch Available (was: Open) > Add on script to format all .h,.cc, and BUCK files. > --- > > Key: HBASE-15730 > URL: https://issues.apache.org/jira/browse/HBASE-15730 > Project: HBase > Issue Type: Sub-task >Reporter: Elliott Clark >Assignee: Elliott Clark > Attachments: HBASE-15730.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15730) Add on script to format all .h,.cc, and BUCK files.
[ https://issues.apache.org/jira/browse/HBASE-15730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark updated HBASE-15730: -- Attachment: HBASE-15730.patch > Add on script to format all .h,.cc, and BUCK files. > --- > > Key: HBASE-15730 > URL: https://issues.apache.org/jira/browse/HBASE-15730 > Project: HBase > Issue Type: Sub-task >Reporter: Elliott Clark >Assignee: Elliott Clark > Attachments: HBASE-15730.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15730) Add on script to format all .h,.cc,a nd BUCK files.
[ https://issues.apache.org/jira/browse/HBASE-15730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark updated HBASE-15730: -- Summary: Add on script to format all .h,.cc,a nd BUCK files. (was: Add on script to format all files.) > Add on script to format all .h,.cc,a nd BUCK files. > --- > > Key: HBASE-15730 > URL: https://issues.apache.org/jira/browse/HBASE-15730 > Project: HBase > Issue Type: Sub-task >Reporter: Elliott Clark >Assignee: Elliott Clark > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15645) hbase.rpc.timeout is not used in operations of HTable
[ https://issues.apache.org/jira/browse/HBASE-15645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261122#comment-15261122 ] Hudson commented on HBASE-15645: FAILURE: Integrated in HBase-1.2 #611 (See [https://builds.apache.org/job/HBase-1.2/611/]) Label the new methods on Table introduced by HBASE-15645 as (stack: rev ed520133d6dbb47a40f1883a56460582732f863a) * hbase-client/src/main/java/org/apache/hadoop/hbase/client/Table.java > hbase.rpc.timeout is not used in operations of HTable > - > > Key: HBASE-15645 > URL: https://issues.apache.org/jira/browse/HBASE-15645 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0, 1.3.0, 1.2.1, 1.0.3, 1.1.4 >Reporter: Phil Yang >Assignee: Phil Yang >Priority: Critical > Fix For: 2.0.0, 1.3.0, 1.0.4, 1.4.0, 1.1.5, 1.2.2 > > Attachments: HBASE-15645-branch-1-v1.patch, > HBASE-15645-branch-1.0-v1.patch, HBASE-15645-branch-1.1-v1.patch, > HBASE-15645-branch-1.2-v1.patch, HBASE-15645-v1.patch, HBASE-15645-v2.patch, > HBASE-15645-v3.patch, HBASE-15645-v4.patch, lable.patch > > > While fixing HBASE-15593, I found that we use operationTimeout as the timeout > of the Get operation RPC call (hbase.client.scanner.timeout.period is used in > the scan RPC), not hbase.rpc.timeout. > This can be verified by adding one line in TestHCM.setUpBeforeClass(): > {code} > TEST_UTIL.getConfiguration().setLong(HConstants.HBASE_RPC_TIMEOUT_KEY, 3000); > {code} > and then running testOperationTimeout(); the test passes but it should have > failed, because we should get an RPC timeout after 3 seconds, then the client > should retry and time out again and again until the operationTimeout or max > retries is reached. > If I port this test to 0.98, it will fail as expected. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
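The intended interplay of the two timeouts described in HBASE-15645 above can be sketched as follows. This is a hypothetical model, not the HBase client code: each RPC attempt is bounded by the RPC timeout, and attempts repeat until the overall operation timeout or the retry limit is exhausted. The bug report is that the per-attempt bound was effectively the operation timeout, so only one attempt was ever made.

```java
// Hypothetical model (not HBase client code) of rpc timeout vs.
// operation timeout: per-attempt bound inside an overall deadline.
public class TimeoutSketch {
    // Count how many attempts fit before the operation deadline,
    // pretending every RPC attempt times out after exactly rpcTimeoutMs.
    static int attemptWithTimeouts(long rpcTimeoutMs, long operationTimeoutMs,
                                   int maxRetries) {
        long elapsed = 0;
        int attempts = 0;
        while (attempts < maxRetries && elapsed < operationTimeoutMs) {
            attempts++;
            elapsed += rpcTimeoutMs; // this attempt hit its RPC timeout
        }
        return attempts;
    }

    public static void main(String[] args) {
        // A 3s RPC timeout inside a 10s operation timeout yields several
        // retries (4 here), not a single 10s attempt.
        assert attemptWithTimeouts(3000, 10000, 35) == 4;
        // If the per-attempt bound equals the operation timeout (the bug),
        // only one attempt is ever made.
        assert attemptWithTimeouts(10000, 10000, 35) == 1;
    }
}
```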
[jira] [Commented] (HBASE-15708) Docker for dev-support scripts
[ https://issues.apache.org/jira/browse/HBASE-15708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261113#comment-15261113 ] Hadoop QA commented on HBASE-15708: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 2s {color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 2s {color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 26m 3s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 11s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 26m 28s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12801134/HBASE-15708-master-v7.patch | | JIRA Issue | HBASE-15708 | | Optional Tests | asflicense shellcheck shelldocs | | uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh | | git revision | master / ce318a2 | | shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider upgrading.) | | modules | C: . U: . | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/1643/console | | Powered by | Apache Yetus 0.2.1 http://yetus.apache.org | This message was automatically generated. > Docker for dev-support scripts > -- > > Key: HBASE-15708 > URL: https://issues.apache.org/jira/browse/HBASE-15708 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Fix For: 2.0.0 > > Attachments: HBASE-15708-master-v2.patch, > HBASE-15708-master-v3.patch, HBASE-15708-master-v4.patch, > HBASE-15708-master-v5.patch, HBASE-15708-master-v6.patch, > HBASE-15708-master-v7.patch, HBASE-15708-master.patch > > > Scripts in dev-support are limited in terms of dependencies by what's > installed on apache machines. Installing new stuff is not easy. It can be > painful even importing simple python libraries like 'requests'. This jira is > to add a single docker instance which we can tweak to build the right > environment for dev-support scripts. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-15730) Add on script to format all files.
Elliott Clark created HBASE-15730: - Summary: Add on script to format all files. Key: HBASE-15730 URL: https://issues.apache.org/jira/browse/HBASE-15730 Project: HBase Issue Type: Sub-task Reporter: Elliott Clark Assignee: Elliott Clark -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-11625) Reading datablock throws "Invalid HFile block magic" and can not switch to hdfs checksum
[ https://issues.apache.org/jira/browse/HBASE-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261087#comment-15261087 ] Hadoop QA commented on HBASE-11625: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 7s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s {color} | {color:green} master passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 31s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 50s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s {color} | {color:green} master passed with JDK v1.7.0_79 {color} | | 
{color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 41s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 33s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 26m 13s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 25s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 103m 34s {color} | {color:red} hbase-server in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 153m 5s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.security.access.TestNamespaceCommands | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12780263/HBASE-11625.patch | | JIRA Issue | HBASE-11625 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / ce318a2 | | Default Java | 1.7.0_79 | | Multi-JDK versions |
[jira] [Commented] (HBASE-15687) Allow decoding more than GetResponse from the server
[ https://issues.apache.org/jira/browse/HBASE-15687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261089#comment-15261089 ] Hadoop QA commented on HBASE-15687: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} | {color:red} HBASE-15687 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/0.2.1/precommit-patchnames for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12801135/HBASE-15687.patch | | JIRA Issue | HBASE-15687 | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/1645/console | | Powered by | Apache Yetus 0.2.1 http://yetus.apache.org | This message was automatically generated. > Allow decoding more than GetResponse from the server > > > Key: HBASE-15687 > URL: https://issues.apache.org/jira/browse/HBASE-15687 > Project: HBase > Issue Type: Sub-task >Reporter: Elliott Clark >Assignee: Elliott Clark > Attachments: HBASE-15687.patch > > > HBASE-15620 adds on Call serialization and de-serialization. However the > client-serialize-handler currently assumes that all responses will be > GetResponse. > We should keep a call id to response type mapping. Maybe the inner request > should have a response factory inside it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15687) Allow decoding more than GetResponse from the server
[ https://issues.apache.org/jira/browse/HBASE-15687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark updated HBASE-15687: -- Attachment: HBASE-15687.patch > Allow decoding more than GetResponse from the server > > > Key: HBASE-15687 > URL: https://issues.apache.org/jira/browse/HBASE-15687 > Project: HBase > Issue Type: Sub-task >Reporter: Elliott Clark >Assignee: Elliott Clark > Attachments: HBASE-15687.patch > > > HBASE-15620 adds on Call serialization and de-serialization. However the > client-serialize-handler currently assumes that all responses will be > GetResponse. > We should keep a call id to response type mapping. Maybe the inner request > should have a response factory inside it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15687) Allow decoding more than GetResponse from the server
[ https://issues.apache.org/jira/browse/HBASE-15687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark updated HBASE-15687: -- Status: Patch Available (was: Open) > Allow decoding more than GetResponse from the server > > > Key: HBASE-15687 > URL: https://issues.apache.org/jira/browse/HBASE-15687 > Project: HBase > Issue Type: Sub-task >Reporter: Elliott Clark >Assignee: Elliott Clark > Attachments: HBASE-15687.patch > > > HBASE-15620 adds on Call serialization and de-serialization. However the > client-serialize-handler currently assumes that all responses will be > GetResponse. > We should keep a call id to response type mapping. Maybe the inner request > should have a response factory inside it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
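The "call id to response type mapping" idea from the issue description can be sketched roughly as follows. This is an illustrative model only; the class and method names are invented here and are not HBase client APIs:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch: when a request is sent, register a response factory
// keyed by call id; when bytes arrive, look up the factory to decode them.
class CallResponseRegistry {
    // One parser per outstanding call id.
    private final Map<Integer, Function<byte[], Object>> pending =
        new ConcurrentHashMap<>();

    void register(int callId, Function<byte[], Object> responseFactory) {
        pending.put(callId, responseFactory);
    }

    // Remove-then-apply so each call is decoded exactly once and the map
    // does not grow without bound.
    Object decode(int callId, byte[] wireBytes) {
        Function<byte[], Object> factory = pending.remove(callId);
        if (factory == null) {
            throw new IllegalStateException("no pending call with id " + callId);
        }
        return factory.apply(wireBytes);
    }
}
```

With this shape, the client-side serialize handler no longer has to assume every response is a GetResponse; each call carries its own decoder.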
[jira] [Created] (HBASE-15729) Remove old JDiff wrapper scripts in dev-support
Dima Spivak created HBASE-15729: --- Summary: Remove old JDiff wrapper scripts in dev-support Key: HBASE-15729 URL: https://issues.apache.org/jira/browse/HBASE-15729 Project: HBase Issue Type: Task Reporter: Dima Spivak Assignee: Dima Spivak Priority: Minor Since HBASE-12808, we've been using the Java API Compliance Checker instead of JDiff to look at API compatibility. Probably makes sense to remove the old wrapper scripts that aren't being used anymore. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15708) Docker for dev-support scripts
[ https://issues.apache.org/jira/browse/HBASE-15708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-15708: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.0.0 Status: Resolved (was: Patch Available) Pushed. Thanks [~appy] and [~dimaspivak] for review. > Docker for dev-support scripts > -- > > Key: HBASE-15708 > URL: https://issues.apache.org/jira/browse/HBASE-15708 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Fix For: 2.0.0 > > Attachments: HBASE-15708-master-v2.patch, > HBASE-15708-master-v3.patch, HBASE-15708-master-v4.patch, > HBASE-15708-master-v5.patch, HBASE-15708-master-v6.patch, > HBASE-15708-master-v7.patch, HBASE-15708-master.patch > > > Scripts in dev-support are limited in terms of dependencies by what's > installed on apache machines. Installing new stuff is not easy. It can be > painful even importing simple python libraries like 'requests'. This jira is > to add a single docker instance which we can tweak to build the right > environment for dev-support scripts. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15716) HRegion#RegionScannerImpl scannerReadPoints synchronization costs
[ https://issues.apache.org/jira/browse/HBASE-15716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261068#comment-15261068 ] Hadoop QA commented on HBASE-15716: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 3s {color} | {color:red} HBASE-15716 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/0.2.1/precommit-patchnames for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12801125/15716.prune.synchronizations.v4.patch | | JIRA Issue | HBASE-15716 | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/1644/console | | Powered by | Apache Yetus 0.2.1 http://yetus.apache.org | This message was automatically generated. > HRegion#RegionScannerImpl scannerReadPoints synchronization costs > - > > Key: HBASE-15716 > URL: https://issues.apache.org/jira/browse/HBASE-15716 > Project: HBase > Issue Type: Bug > Components: Performance >Reporter: stack >Assignee: stack > Attachments: 15716.prune.synchronizations.patch, > 15716.prune.synchronizations.v3.patch, 15716.prune.synchronizations.v4.patch, > Screen Shot 2016-04-26 at 2.05.45 PM.png, Screen Shot 2016-04-26 at 2.06.14 > PM.png, Screen Shot 2016-04-26 at 2.07.06 PM.png, Screen Shot 2016-04-26 at > 2.25.26 PM.png, Screen Shot 2016-04-26 at 6.02.29 PM.png, Screen Shot > 2016-04-27 at 9.49.35 AM.png, > current-branch-1.vs.NoSynchronization.vs.Patch.png, hits.png, > remove_cslm.patch > > > Here is a [~lhofhansl] special. > When we construct the region scanner, we get our read point and then store it > with the scanner instance in a Region scoped CSLM. This is done under a > synchronize on the CSLM. 
> This synchronize on a region-scoped Map creating region scanners is the > outstanding point of lock contention according to flight recorder (My work > load is workload c, random reads). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
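The contention pattern described above can be modeled with a small sketch (identifiers invented for illustration; this is not the HRegion code). A ConcurrentSkipListMap is already thread-safe for individual put/remove calls, so a monitor on the map is only needed where recording the read point must appear atomic with other bookkeeping:

```java
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.atomic.AtomicLong;

// Simplified model of a region-scoped scannerReadPoints CSLM.
class ReadPointTracker {
    private final ConcurrentSkipListMap<Long, Long> scannerReadPoints =
        new ConcurrentSkipListMap<>();
    private final AtomicLong readPoint = new AtomicLong(0);
    private final AtomicLong scannerIds = new AtomicLong(0);

    // CSLM handles concurrent puts safely on its own, so no synchronized
    // block is needed just to record the scanner's read point.
    long openScanner() {
        long id = scannerIds.incrementAndGet();
        scannerReadPoints.put(id, readPoint.get());
        return id;
    }

    void closeScanner(long id) {
        scannerReadPoints.remove(id);
    }

    long advance() {
        return readPoint.incrementAndGet();
    }

    // Smallest read point across open scanners, falling back to the
    // current read point when no scanner is open.
    long smallestReadPoint() {
        long min = readPoint.get();
        for (long rp : scannerReadPoints.values()) {
            min = Math.min(min, rp);
        }
        return min;
    }
}
```

The trade-off the patch explores is exactly this: dropping the shared monitor removes the contention hot spot that flight recorder flags under random-read workloads, at the cost of reasoning carefully about what the synchronize was actually protecting.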
[jira] [Commented] (HBASE-15708) Docker for dev-support scripts
[ https://issues.apache.org/jira/browse/HBASE-15708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261066#comment-15261066 ] Appy commented on HBASE-15708: -- argh, stupid me. :( updated new patch v7. > Docker for dev-support scripts > -- > > Key: HBASE-15708 > URL: https://issues.apache.org/jira/browse/HBASE-15708 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Attachments: HBASE-15708-master-v2.patch, > HBASE-15708-master-v3.patch, HBASE-15708-master-v4.patch, > HBASE-15708-master-v5.patch, HBASE-15708-master-v6.patch, > HBASE-15708-master-v7.patch, HBASE-15708-master.patch > > > Scripts in dev-support are limited in terms of dependencies by what's > installed on apache machines. Installing new stuff is not easy. It can be > painful even importing simple python libraries like 'requests'. This jira is > to add a single docker instance which we can tweak to build the right > environment for dev-support scripts. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15708) Docker for dev-support scripts
[ https://issues.apache.org/jira/browse/HBASE-15708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Appy updated HBASE-15708: - Attachment: HBASE-15708-master-v7.patch > Docker for dev-support scripts > -- > > Key: HBASE-15708 > URL: https://issues.apache.org/jira/browse/HBASE-15708 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Attachments: HBASE-15708-master-v2.patch, > HBASE-15708-master-v3.patch, HBASE-15708-master-v4.patch, > HBASE-15708-master-v5.patch, HBASE-15708-master-v6.patch, > HBASE-15708-master-v7.patch, HBASE-15708-master.patch > > > Scripts in dev-support are limited in terms of dependencies by what's > installed on apache machines. Installing new stuff is not easy. It can be > painful even importing simple python libraries like 'requests'. This jira is > to add a single docker instance which we can tweak to build the right > environment for dev-support scripts. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15708) Docker for dev-support scripts
[ https://issues.apache.org/jira/browse/HBASE-15708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261056#comment-15261056 ] Dima Spivak commented on HBASE-15708: - {{requirements.txt}} probably shouldn't refer to Dockerfile? Guessing a sloppy copy and paste? After that's fixed, +1 from me. > Docker for dev-support scripts > -- > > Key: HBASE-15708 > URL: https://issues.apache.org/jira/browse/HBASE-15708 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Attachments: HBASE-15708-master-v2.patch, > HBASE-15708-master-v3.patch, HBASE-15708-master-v4.patch, > HBASE-15708-master-v5.patch, HBASE-15708-master-v6.patch, > HBASE-15708-master.patch > > > Scripts in dev-support are limited in terms of dependencies by what's > installed on apache machines. Installing new stuff is not easy. It can be > painful even importing simple python libraries like 'requests'. This jira is > to add a single docker instance which we can tweak to build the right > environment for dev-support scripts. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15708) Docker for dev-support scripts
[ https://issues.apache.org/jira/browse/HBASE-15708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Appy updated HBASE-15708: - Attachment: HBASE-15708-master-v6.patch v6 i guess no harm in having a python-requirements.txt. Adding it. > Docker for dev-support scripts > -- > > Key: HBASE-15708 > URL: https://issues.apache.org/jira/browse/HBASE-15708 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Attachments: HBASE-15708-master-v2.patch, > HBASE-15708-master-v3.patch, HBASE-15708-master-v4.patch, > HBASE-15708-master-v5.patch, HBASE-15708-master-v6.patch, > HBASE-15708-master.patch > > > Scripts in dev-support are limited in terms of dependencies by what's > installed on apache machines. Installing new stuff is not easy. It can be > painful even importing simple python libraries like 'requests'. This jira is > to add a single docker instance which we can tweak to build the right > environment for dev-support scripts. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15728) Add remaining per-table region / store / flush / compaction related metrics
[ https://issues.apache.org/jira/browse/HBASE-15728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-15728: -- Attachment: hbase-15728_v1.patch Attaching v1 patch. Here is a sample JMX output. We are reporting on region count, store file count, and compaction and flush metrics per table : {code} { "name" : "Hadoop:service=HBase,name=RegionServer,sub=Tables", "modelerType" : "RegionServer,sub=Tables", "tag.Context" : "regionserver", "tag.Hostname" : "HW10676", "Namespace_hbase_table_meta_metric_readRequestCount" : 596, "Namespace_hbase_table_meta_metric_filteredReadRequestCount" : 0, "Namespace_hbase_table_meta_metric_writeRequestCount" : 82, "Namespace_hbase_table_meta_metric_totalRequestCount" : 678, "Namespace_hbase_table_meta_metric_memStoreSize" : 18296, "Namespace_hbase_table_meta_metric_storeFileCount" : 3, "Namespace_hbase_table_meta_metric_storeFileSize" : 45509, "Namespace_hbase_table_meta_metric_tableSize" : 63805, "Namespace_hbase_table_meta_metric_averageRegionSize" : 63805, "Namespace_hbase_table_meta_metric_regionCount" : 1, "Namespace_hbase_table_meta_metric_storeCount" : 2, "Namespace_hbase_table_meta_metric_maxStoreFileAge" : 835887, "Namespace_hbase_table_meta_metric_minStoreFileAge" : 392887, "Namespace_hbase_table_meta_metric_avgStoreFileAge" : 130962, "Namespace_hbase_table_meta_metric_numReferenceFiles" : 0, "Namespace_default_table_cluster_test_metric_readRequestCount" : 398569, "Namespace_default_table_cluster_test_metric_filteredReadRequestCount" : 0, "Namespace_default_table_cluster_test_metric_writeRequestCount" : 398341, "Namespace_default_table_cluster_test_metric_totalRequestCount" : 796910, "Namespace_default_table_cluster_test_metric_memStoreSize" : 3840, "Namespace_default_table_cluster_test_metric_storeFileCount" : 30, "Namespace_default_table_cluster_test_metric_storeFileSize" : 335351788, "Namespace_default_table_cluster_test_metric_tableSize" : 335355628, 
"Namespace_default_table_cluster_test_metric_averageRegionSize" : 33535562, "Namespace_default_table_cluster_test_metric_regionCount" : 10, "Namespace_default_table_cluster_test_metric_storeCount" : 10, "Namespace_default_table_cluster_test_metric_maxStoreFileAge" : 12340887, "Namespace_default_table_cluster_test_metric_minStoreFileAge" : 14887, "Namespace_default_table_cluster_test_metric_avgStoreFileAge" : 413655, "Namespace_default_table_cluster_test_metric_numReferenceFiles" : 0, "Namespace_hbase_table_namespace_metric_readRequestCount" : 4, "Namespace_hbase_table_namespace_metric_filteredReadRequestCount" : 0, "Namespace_hbase_table_namespace_metric_writeRequestCount" : 0, "Namespace_hbase_table_namespace_metric_totalRequestCount" : 4, "Namespace_hbase_table_namespace_metric_memStoreSize" : 384, "Namespace_hbase_table_namespace_metric_storeFileCount" : 1, "Namespace_hbase_table_namespace_metric_storeFileSize" : 4912, "Namespace_hbase_table_namespace_metric_tableSize" : 5296, "Namespace_hbase_table_namespace_metric_averageRegionSize" : 5296, "Namespace_hbase_table_namespace_metric_regionCount" : 1, "Namespace_hbase_table_namespace_metric_storeCount" : 1, "Namespace_hbase_table_namespace_metric_maxStoreFileAge" : 16117887, "Namespace_hbase_table_namespace_metric_minStoreFileAge" : 16117887, "Namespace_hbase_table_namespace_metric_avgStoreFileAge" : 16117887, "Namespace_hbase_table_namespace_metric_numReferenceFiles" : 0, "Namespace_default_table_t1_metric_readRequestCount" : 0, "Namespace_default_table_t1_metric_filteredReadRequestCount" : 0, "Namespace_default_table_t1_metric_writeRequestCount" : 0, "Namespace_default_table_t1_metric_totalRequestCount" : 0, "Namespace_default_table_t1_metric_memStoreSize" : 384, "Namespace_default_table_t1_metric_storeFileCount" : 0, "Namespace_default_table_t1_metric_storeFileSize" : 0, "Namespace_default_table_t1_metric_tableSize" : 384, "Namespace_default_table_t1_metric_averageRegionSize" : 384, 
"Namespace_default_table_t1_metric_regionCount" : 1, "Namespace_default_table_t1_metric_storeCount" : 1, "Namespace_default_table_t1_metric_maxStoreFileAge" : 0, "Namespace_default_table_t1_metric_minStoreFileAge" : 0, "Namespace_default_table_t1_metric_avgStoreFileAge" : 0, "Namespace_default_table_t1_metric_numReferenceFiles" : 0, "numTables" : 4, "Namespace_default_table_t1_metric_compactionOutputSize_num_ops" : 0, "Namespace_default_table_t1_metric_compactionOutputSize_min" : 0, "Namespace_default_table_t1_metric_compactionOutputSize_max" : 0, "Namespace_default_table_t1_metric_compactionOutputSize_mean" : 0, "Namespace_default_table_t1_metric_compactionOutputSize_25th_percentile" : 0,
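The per-table metric names in the JMX output above follow a fixed pattern, Namespace_&lt;ns&gt;_table_&lt;table&gt;_metric_&lt;name&gt;. A small helper (invented for illustration, not part of HBase) can split a key back into its parts by anchoring on the literal separators; this sketch assumes "_table_" does not occur inside the namespace itself:

```java
// Splits keys like "Namespace_default_table_cluster_test_metric_regionCount"
// into namespace / table / metric parts. Returns null for keys that do not
// match the pattern (e.g. "numTables").
class TableMetricKey {
    final String namespace;
    final String table;
    final String metric;

    private TableMetricKey(String namespace, String table, String metric) {
        this.namespace = namespace;
        this.table = table;
        this.metric = metric;
    }

    static TableMetricKey parse(String key) {
        final String prefix = "Namespace_";
        final String tableSep = "_table_";
        final String metricSep = "_metric_";
        if (!key.startsWith(prefix)) return null;
        int t = key.indexOf(tableSep, prefix.length());
        int m = (t < 0) ? -1 : key.indexOf(metricSep, t + tableSep.length());
        if (t < 0 || m < 0) return null;
        return new TableMetricKey(
            key.substring(prefix.length(), t),
            key.substring(t + tableSep.length(), m),
            key.substring(m + metricSep.length()));
    }
}
```

Note that table names may themselves contain underscores (e.g. cluster_test above), which is why the parser anchors on the full "_table_" and "_metric_" separator strings rather than splitting on single underscores.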
[jira] [Updated] (HBASE-15686) Add override mechanism for the exempt classes when dynamically loading table coprocessor
[ https://issues.apache.org/jira/browse/HBASE-15686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-15686: --- Attachment: 15686.v6.txt Patch v6 wraps long line. > Add override mechanism for the exempt classes when dynamically loading table > coprocessor > > > Key: HBASE-15686 > URL: https://issues.apache.org/jira/browse/HBASE-15686 > Project: HBase > Issue Type: Bug > Components: Coprocessors >Affects Versions: 1.0.1 >Reporter: Sangjin Lee >Assignee: Ted Yu > Attachments: 15686.v2.txt, 15686.v3.txt, 15686.v4.txt, 15686.v5.txt, > 15686.v6.txt, 15686.wip > > > As part of Hadoop's Timeline Service v.2 (YARN-2928), we're adding a table > coprocessor (YARN-4062). However, we're finding that the coprocessor cannot > be loaded dynamically. A relevant snippet for the exception: > {noformat} > java.io.IOException: Class > org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor > cannot be loaded > at > org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1329) > at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1269) > at > org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:398) > at > org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:42436) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107) > at > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130) > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.io.IOException: Class > org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor > cannot be loaded > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.testTableCoprocessorAttrs(RegionCoprocessorHost.java:324) > at > 
org.apache.hadoop.hbase.master.HMaster.checkClassLoading(HMaster.java:1483) > at > org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1327) > ... 8 more > Caused by: java.lang.ClassNotFoundException: > org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor > at java.net.URLClassLoader$1.run(URLClassLoader.java:366) > at java.net.URLClassLoader$1.run(URLClassLoader.java:355) > at java.security.AccessController.doPrivileged(Native Method) > at java.net.URLClassLoader.findClass(URLClassLoader.java:354) > at java.lang.ClassLoader.loadClass(ClassLoader.java:425) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) > at java.lang.ClassLoader.loadClass(ClassLoader.java:358) > at > org.apache.hadoop.hbase.util.CoprocessorClassLoader.loadClass(CoprocessorClassLoader.java:275) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.testTableCoprocessorAttrs(RegionCoprocessorHost.java:322) > ... 10 more > {noformat} > We tracked it down to the fact that {{CoprocessorClassLoader}} regards all > hadoop classes as exempt from loading from the coprocessor jar. Since our > coprocessor sits in the coprocessor jar, and yet the loading of this class is > delegated to the parent which does not have this jar, the classloading fails. > What would be nice is the ability to exclude certain classes from the exempt > classes so that they can be loaded via the table coprocessor classloader. See > hadoop's {{ApplicationClassLoader}} for a similar feature. > Is there any other way to load this coprocessor at the table scope? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
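The override mechanism discussed above can be sketched as a prefix check layered on top of the usual exemption list: a class stays exempt (delegated to the parent classloader) unless it matches an override prefix. All names here are hypothetical; this models the idea, not the actual CoprocessorClassLoader change:

```java
// A class whose name matches an exempt prefix is normally delegated to the
// parent classloader; an override prefix punches a hole in that exemption
// so the coprocessor jar's own copy is loaded instead.
class ExemptionPolicy {
    private final String[] exemptPrefixes;
    private final String[] overridePrefixes;

    ExemptionPolicy(String[] exemptPrefixes, String[] overridePrefixes) {
        this.exemptPrefixes = exemptPrefixes;
        this.overridePrefixes = overridePrefixes;
    }

    private static boolean matchesAny(String className, String[] prefixes) {
        for (String prefix : prefixes) {
            if (className.startsWith(prefix)) return true;
        }
        return false;
    }

    // True when loading should delegate to the parent classloader.
    boolean isExempt(String className) {
        return matchesAny(className, exemptPrefixes)
            && !matchesAny(className, overridePrefixes);
    }
}
```

Under this policy, a hadoop-wide exempt prefix would still send most hadoop classes to the parent loader, while an override entry for the timeline-service coprocessor package would let those classes load from the coprocessor jar, avoiding the ClassNotFoundException in the stack trace above.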
[jira] [Created] (HBASE-15728) Add remaining per-table region / store / flush / compaction related metrics
Enis Soztutar created HBASE-15728: - Summary: Add remaining per-table region / store / flush / compaction related metrics Key: HBASE-15728 URL: https://issues.apache.org/jira/browse/HBASE-15728 Project: HBase Issue Type: Sub-task Components: metrics Reporter: Enis Soztutar Assignee: Enis Soztutar Fix For: 2.0.0, 1.4.0 Continuing on the work for per-table metrics, HBASE-15518 and HBASE-15671. We need to add some remaining metrics at the per-table level, so that we will have the same metrics reported at the per-regionserver, per-region and per-table levels. After this patch, most of the metrics at the RS and all of the per-region level are also reported at the per-table level. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15685) Typo in REST documentation
[ https://issues.apache.org/jira/browse/HBASE-15685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261031#comment-15261031 ] Hadoop QA commented on HBASE-15685: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 39s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 27s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 1s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 17s {color} | {color:green} master passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 35m 19s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 39s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 20s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 179m 57s {color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 240m 56s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.master.procedure.TestMasterFailoverWithProcedures | | | hadoop.hbase.security.access.TestAccessController3 | | | hadoop.hbase.security.access.TestNamespaceCommands | | Timed out junit tests | org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12801083/HBASE-15685.patch | | JIRA Issue | HBASE-15685 | | Optional Tests | asflicense javac javadoc unit | | uname | Linux asf910.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh | | git revision | master / ce318a2 | | Default Java | 1.7.0_79 | | Multi-JDK versions | /home/jenkins/tools/java/jdk1.8.0:1.8.0 /usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79 | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/1634/artifact/patchprocess/patch-unit-root.txt | | unit test logs | https://builds.apache.org/job/PreCommit-HBASE-Build/1634/artifact/patchprocess/patch-unit-root.txt | | Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/1634/testReport/ | | modules | C: . U: . | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/1634/console | | Powered by | Apache Yetus 0.2.1 http://yetus.apache.org | This message was automatically generated. > Typo in REST documentation > -- > > Key: HBASE-15685 > URL: https://issues.apache.org/jira/browse/HBASE-15685 > Project: HBase > Issue Type: Bug > Components: documentation >Reporter: Bin Wang >Assignee: Bin Wang >Priority: Minor > Attachments: HBASE-15685.patch, HBASE-15685.patch > > Original Estimate: 2h > Time Spent: 3h > Remaining Estimate: 0h > > The Chapter - [REST|http://hbase.apache.org/book.html#_table_information] of > HBase Book has a few typos in the provided example links, like > "http://example.com:8000//:?v=" > which misses a forward slash between the port number 8000 and the table name.
[jira] [Commented] (HBASE-15686) Add override mechanism for the exempt classes when dynamically loading table coprocessor
[ https://issues.apache.org/jira/browse/HBASE-15686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261027#comment-15261027 ] Hadoop QA commented on HBASE-15686: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 1s {color} | {color:blue} The patch file was not named according to hbase's naming conventions. Please see https://yetus.apache.org/documentation/0.2.1/precommit-patchnames for instructions. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
{color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 16s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 45s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s {color} | {color:green} master passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 3s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 36s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 41s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s {color} | {color:green} master passed with JDK v1.7.0_79 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 25s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 40s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 40s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | 
{color:green} javac {color} | {color:green} 1m 7s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 29s {color} | {color:red} hbase-common: patch generated 1 new + 11 unchanged - 0 fixed = 12 total (was 11) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 18s {color} | {color:red} hbase-server: patch generated 1 new + 11 unchanged - 0 fixed = 12 total (was 11) {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 32s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 36m 12s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 18s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 12s {color} | {color:green} hbase-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 168m 23s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s
[jira] [Commented] (HBASE-15706) HFilePrettyPrinter should print out nicely formatted tags
[ https://issues.apache.org/jira/browse/HBASE-15706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261022#comment-15261022 ] huaxiang sun commented on HBASE-15706: -- Hi [~anoop.hbase], the mob code I looked at is HMobStore#resolve. {code} String tableNameString = TagUtil.getValueAsString(tableNameTag); {code} If this changes to {code} String tableNameString = tableNameTag.toString(); {code} tableNameString becomes "[Tag type : *]", which will give us the wrong path in the code that follows. Not sure if this is the change you commented on, thanks. > HFilePrettyPrinter should print out nicely formatted tags > - > > Key: HBASE-15706 > URL: https://issues.apache.org/jira/browse/HBASE-15706 > Project: HBase > Issue Type: Improvement > Components: HFile >Affects Versions: 2.0.0 >Reporter: huaxiang sun >Priority: Minor > Attachments: HBASE-15706-v001.patch > > > When I was using HFile to print out rows with tags, the output is like: > {code} > hsun-MBP:hbase-2.0.0-SNAPSHOT hsun$ hbase > org.apache.hadoop.hbase.io.hfile.HFile -f > /tmp/71afa45b1cb94ea1858a99f31197274f -p > 2016-04-25 11:40:40,409 WARN [main] util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... using builtin-java classes where > applicable > 2016-04-25 11:40:40,580 INFO [main] hfile.CacheConfig: CacheConfig:disabled > K: b/b:b/1461608231279/Maximum/vlen=0/seqid=0 V: > K: b/b:b/1461608231278/Put/vlen=1/seqid=0 V: b T[0]: � > Scanned kv count -> 2 > {code} > With the attached patch, the output is now like: > {code} > 2016-04-25 11:57:05,849 INFO [main] hfile.CacheConfig: CacheConfig:disabled > K: b/b:b/1461609876838/Maximum/vlen=0/seqid=0 V: > K: b/b:b/1461609876837/Put/vlen=1/seqid=0 V: b T[0]: [Tag type : 8, value : > \x00\x0E\xEE\xEE] > Scanned kv count -> 2 > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
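The escaped form shown in the patched output above ("\x00\x0E\xEE\xEE") is a toStringBinary-style escape of the tag value bytes. A minimal, self-contained sketch of that kind of escaping in plain Java (HBase has its own Bytes.toStringBinary for this; the class and method names below are illustrative, not the patch's code):

```java
// Illustrative-only helper: keep printable ASCII as-is and hex-escape
// everything else, which is how non-printable tag values end up rendered
// as sequences like \x00\x0E\xEE\xEE instead of garbled characters.
public class TagValueFormat {
  static String toStringBinary(byte[] value) {
    StringBuilder sb = new StringBuilder();
    for (byte b : value) {
      int ub = b & 0xff; // treat the byte as unsigned
      if (ub >= 0x20 && ub < 0x7f) {
        sb.append((char) ub);       // printable ASCII
      } else {
        sb.append(String.format("\\x%02X", ub)); // hex escape
      }
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    byte[] tagValue = {0x00, 0x0E, (byte) 0xEE, (byte) 0xEE};
    System.out.println(toStringBinary(tagValue)); // \x00\x0E\xEE\xEE
  }
}
```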
[jira] [Updated] (HBASE-15708) Docker for dev-support scripts
[ https://issues.apache.org/jira/browse/HBASE-15708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Appy updated HBASE-15708: - Attachment: HBASE-15708-master-v5.patch v5. Broke up the RUN line. A requirements file seems like something you'd want when there are many dependencies. Our project doesn't have enough Python code to need a separate file just to manage dependencies. I'd rather keep {{pip install}} unless we have quite a few (say, 5+?) Python dependencies. > Docker for dev-support scripts > -- > > Key: HBASE-15708 > URL: https://issues.apache.org/jira/browse/HBASE-15708 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Attachments: HBASE-15708-master-v2.patch, > HBASE-15708-master-v3.patch, HBASE-15708-master-v4.patch, > HBASE-15708-master-v5.patch, HBASE-15708-master.patch > > > Scripts in dev-support are limited in terms of dependencies by what's > installed on apache machines. Installing new stuff is not easy. It can be > painful even importing simple python libraries like 'requests'. This jira is > to add a single docker instance which we can tweak to build the right > environment for dev-support scripts. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15716) HRegion#RegionScannerImpl scannerReadPoints synchronization costs
[ https://issues.apache.org/jira/browse/HBASE-15716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-15716: -- Assignee: stack Status: Patch Available (was: Open) > HRegion#RegionScannerImpl scannerReadPoints synchronization costs > - > > Key: HBASE-15716 > URL: https://issues.apache.org/jira/browse/HBASE-15716 > Project: HBase > Issue Type: Bug > Components: Performance >Reporter: stack >Assignee: stack > Attachments: 15716.prune.synchronizations.patch, > 15716.prune.synchronizations.v3.patch, 15716.prune.synchronizations.v4.patch, > Screen Shot 2016-04-26 at 2.05.45 PM.png, Screen Shot 2016-04-26 at 2.06.14 > PM.png, Screen Shot 2016-04-26 at 2.07.06 PM.png, Screen Shot 2016-04-26 at > 2.25.26 PM.png, Screen Shot 2016-04-26 at 6.02.29 PM.png, Screen Shot > 2016-04-27 at 9.49.35 AM.png, > current-branch-1.vs.NoSynchronization.vs.Patch.png, hits.png, > remove_cslm.patch > > > Here is a [~lhofhansl] special. > When we construct the region scanner, we get our read point and then store it > with the scanner instance in a Region scoped CSLM. This is done under a > synchronize on the CSLM. > This synchronize on a region-scoped Map creating region scanners is the > outstanding point of lock contention according to flight recorder (My work > load is workload c, random reads). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
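The issue description above is compact; a minimal, self-contained Java sketch of the pattern it describes may make the contention point concrete. The class and method names here are assumptions for illustration, not HBase's actual code:

```java
import java.util.concurrent.ConcurrentSkipListMap;

// Sketch of the described pattern: each new scanner records its read point
// in a region-scoped ConcurrentSkipListMap (CSLM) under a synchronized
// block on the map itself. Every scanner open therefore contends on one
// monitor per region, which is the hot spot flight recorder pointed at.
public class ReadPointSketch {
  // Keyed by a scanner id; the value is the read point the scanner took.
  private final ConcurrentSkipListMap<Long, Long> scannerReadPoints =
      new ConcurrentSkipListMap<>();

  public long registerScanner(long scannerId, long currentReadPoint) {
    synchronized (scannerReadPoints) {
      scannerReadPoints.put(scannerId, currentReadPoint);
    }
    return currentReadPoint;
  }

  public void closeScanner(long scannerId) {
    synchronized (scannerReadPoints) {
      scannerReadPoints.remove(scannerId);
    }
  }

  // The smallest outstanding read point bounds what may be discarded; it is
  // the minimum over all open scanners, or the fallback if none are open.
  public long smallestReadPoint(long fallback) {
    long min = fallback;
    synchronized (scannerReadPoints) {
      for (long readPoint : scannerReadPoints.values()) {
        min = Math.min(min, readPoint);
      }
    }
    return min;
  }

  public static void main(String[] args) {
    ReadPointSketch region = new ReadPointSketch();
    region.registerScanner(1L, 42L);
    region.registerScanner(2L, 43L);
    System.out.println(region.smallestReadPoint(100L)); // prints 42
  }
}
```

The patch's goal is to prune these synchronized sections, since a CSLM is already safe for concurrent access; the monitor is only needed where a consistent snapshot of the minimum is required.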
[jira] [Updated] (HBASE-15716) HRegion#RegionScannerImpl scannerReadPoints synchronization costs
[ https://issues.apache.org/jira/browse/HBASE-15716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-15716: -- Attachment: current-branch-1.vs.NoSynchronization.vs.Patch.png Here are some runs that compare current branch-1, all synchronization removed, and then the v4 patch. I do not see the 30% quoted above but more like 10%. Then when the server is overrun, we seem to be able to do more work... 15%? > HRegion#RegionScannerImpl scannerReadPoints synchronization costs > - > > Key: HBASE-15716 > URL: https://issues.apache.org/jira/browse/HBASE-15716 > Project: HBase > Issue Type: Bug > Components: Performance >Reporter: stack > Attachments: 15716.prune.synchronizations.patch, > 15716.prune.synchronizations.v3.patch, 15716.prune.synchronizations.v4.patch, > Screen Shot 2016-04-26 at 2.05.45 PM.png, Screen Shot 2016-04-26 at 2.06.14 > PM.png, Screen Shot 2016-04-26 at 2.07.06 PM.png, Screen Shot 2016-04-26 at > 2.25.26 PM.png, Screen Shot 2016-04-26 at 6.02.29 PM.png, Screen Shot > 2016-04-27 at 9.49.35 AM.png, > current-branch-1.vs.NoSynchronization.vs.Patch.png, hits.png, > remove_cslm.patch > > > Here is a [~lhofhansl] special. > When we construct the region scanner, we get our read point and then store it > with the scanner instance in a Region scoped CSLM. This is done under a > synchronize on the CSLM. > This synchronize on a region-scoped Map creating region scanners is the > outstanding point of lock contention according to flight recorder (My work > load is workload c, random reads). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15716) HRegion#RegionScannerImpl scannerReadPoints synchronization costs
[ https://issues.apache.org/jira/browse/HBASE-15716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-15716: -- Attachment: 15716.prune.synchronizations.v4.patch > HRegion#RegionScannerImpl scannerReadPoints synchronization costs > - > > Key: HBASE-15716 > URL: https://issues.apache.org/jira/browse/HBASE-15716 > Project: HBase > Issue Type: Bug > Components: Performance >Reporter: stack > Attachments: 15716.prune.synchronizations.patch, > 15716.prune.synchronizations.v3.patch, 15716.prune.synchronizations.v4.patch, > Screen Shot 2016-04-26 at 2.05.45 PM.png, Screen Shot 2016-04-26 at 2.06.14 > PM.png, Screen Shot 2016-04-26 at 2.07.06 PM.png, Screen Shot 2016-04-26 at > 2.25.26 PM.png, Screen Shot 2016-04-26 at 6.02.29 PM.png, Screen Shot > 2016-04-27 at 9.49.35 AM.png, > current-branch-1.vs.NoSynchronization.vs.Patch.png, hits.png, > remove_cslm.patch > > > Here is a [~lhofhansl] special. > When we construct the region scanner, we get our read point and then store it > with the scanner instance in a Region scoped CSLM. This is done under a > synchronize on the CSLM. > This synchronize on a region-scoped Map creating region scanners is the > outstanding point of lock contention according to flight recorder (My work > load is workload c, random reads). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15611) add examples to shell docs
[ https://issues.apache.org/jira/browse/HBASE-15611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15260980#comment-15260980 ] Sean Busbey commented on HBASE-15611: - h3. how can I determine which RegionServer is currently responsible for a given row? You can request the current host of a given row using a RegionLocator: {code} hbase(main):313:0> example_table = get_table 'MyTable' hbase(main):314:0> find_row = "an_example_row_key".to_java_bytes hbase(main):315:0> @hbase.instance_eval do hbase(main):316:1* puts @connection.get_region_locator(example_table.table.get_name).get_region_location(find_row, true).get_server_name.to_short_string hbase(main):317:1> end region-server-3.example.com:60020 {code} This will give you a hostname and a listening port, which should be enough information to find the correct RegionServer instance and its logs. > add examples to shell docs > --- > > Key: HBASE-15611 > URL: https://issues.apache.org/jira/browse/HBASE-15611 > Project: HBase > Issue Type: Improvement > Components: documentation, shell >Reporter: Sean Busbey > Labels: beginner > Fix For: 2.0.0 > > > It would be nice if our shell documentation included some additional examples > of operational tasks one can perform. > things to include to come in comments. when we have a patch to submit we can > update the jira summary to better reflect what scope we end up with. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14140) HBase Backup/Restore Phase 3: Enhance HBaseAdmin API to include backup/restore - related API
[ https://issues.apache.org/jira/browse/HBASE-14140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15260951#comment-15260951 ] Ted Yu commented on HBASE-14140: In the HBASE-7912 branch, there are currently some test failures, e.g. {code} testReadWriteSeqIdFiles(org.apache.hadoop.hbase.master.TestDistributedLogSplitting) Time elapsed: 9.589 sec <<< FAILURE! java.lang.AssertionError: expected:<2> but was:<3> at org.apache.hadoop.hbase.master.TestDistributedLogSplitting.installTable(TestDistributedLogSplitting.java:1502) at org.apache.hadoop.hbase.master.TestDistributedLogSplitting.installTable(TestDistributedLogSplitting.java:1473) at org.apache.hadoop.hbase.master.TestDistributedLogSplitting.testReadWriteSeqIdFiles(TestDistributedLogSplitting.java:1442) testThreeRSAbort(org.apache.hadoop.hbase.master.TestDistributedLogSplitting) Time elapsed: 8.811 sec <<< FAILURE! java.lang.AssertionError: expected:<2> but was:<3> at org.apache.hadoop.hbase.master.TestDistributedLogSplitting.installTable(TestDistributedLogSplitting.java:1502) at org.apache.hadoop.hbase.master.TestDistributedLogSplitting.installTable(TestDistributedLogSplitting.java:1473) at org.apache.hadoop.hbase.master.TestDistributedLogSplitting.testThreeRSAbort(TestDistributedLogSplitting.java:1074) {code} Do you want to address them in separate JIRA(s)? > HBase Backup/Restore Phase 3: Enhance HBaseAdmin API to include > backup/restore - related API > > > Key: HBASE-14140 > URL: https://issues.apache.org/jira/browse/HBASE-14140 > Project: HBase > Issue Type: New Feature >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Attachments: HBASE-14140-v1.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15333) Enhance the filter to handle short, integer, long, float and double
[ https://issues.apache.org/jira/browse/HBASE-15333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15260928#comment-15260928 ] Zhan Zhang commented on HBASE-15333: [~jmhsieh] Thanks for reviewing the code. I want to discuss a few points in more detail before changing the code. Here are my comments: 1. DefaultSource.scala: It is not a replacement; instead it fixes the partition pruning and predicate pushdown logic (here we assume the data is naively encoded, as in the current code base). 2. DefaultSource.scala:618: typo, will fix it. 3. Makes sense, will do. 4. 4.1: Will move to a separate class. 4.2: We do not assume any specific encoding/decoding here, since we want to support the Java primitive types as the existing codebase already does. You are definitely right that we may want more flexibility and different encoders/decoders. I think fixing the current naive byte handling takes priority and can be the first step. 5. If we don't change it and there is a PassFilter, it will crash the region server. 6. BoundRange.scala:26: Will format the doc more formally. Typically the range is inclusive, but the upper-level logic needs to handle exclusive bounds for special cases. 7. Will change FilterOps to JavaBytesEncoder. 8. Will enhance the current test cases. Overall, I think we can first make the code base handle the naive encoding correctly, and at the framework level nothing prevents special encodings/decodings from being added later, by me or other contributors. What do you think? Please let me know if you have any concerns. 
> Enhance the filter to handle short, integer, long, float and double > --- > > Key: HBASE-15333 > URL: https://issues.apache.org/jira/browse/HBASE-15333 > Project: HBase > Issue Type: Sub-task >Reporter: Zhan Zhang >Assignee: Zhan Zhang > Attachments: HBASE-15333-1.patch, HBASE-15333-2.patch, > HBASE-15333-3.patch, HBASE-15333-4.patch, HBASE-15333-5.patch > > > Currently, the range filter is based on the order of bytes. But for java > primitive type, such as short, int, long, double, float, etc, their order is > not consistent with their byte order, extra manipulation has to be in place > to take care of them correctly. > For example, for the integer range (-100, 100), the filter <= 1, the current > filter will return 0 and 1, and the right return value should be (-100, 1] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
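The byte-order problem in the description above can be demonstrated without any HBase classes. A minimal, self-contained Java sketch (illustrative names only, not the patch's code): big-endian two's-complement encodings of signed ints do not sort in numeric order under unsigned lexicographic comparison, so a byte-order range filter misplaces negative values.

```java
import java.nio.ByteBuffer;

public class SignedByteOrder {
  // Big-endian 4-byte encoding of a signed int.
  static byte[] enc(int v) {
    return ByteBuffer.allocate(4).putInt(v).array();
  }

  // Lexicographic comparison of unsigned bytes, the way a byte-order
  // filter compares row/value bytes.
  static int compareUnsigned(byte[] a, byte[] b) {
    for (int i = 0; i < a.length; i++) {
      int d = (a[i] & 0xff) - (b[i] & 0xff);
      if (d != 0) return d;
    }
    return 0;
  }

  public static void main(String[] args) {
    // Numerically -100 < 1, but enc(-100) starts with 0xFF, so it compares
    // GREATER than enc(1) under unsigned byte order -- the bug described.
    System.out.println(compareUnsigned(enc(-100), enc(1)) > 0); // true
    // One common remedy: flip the sign bit before encoding so byte order
    // matches numeric order for signed values.
    System.out.println(compareUnsigned(enc(-100 ^ Integer.MIN_VALUE),
                                       enc(1 ^ Integer.MIN_VALUE)) < 0); // true
  }
}
```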
[jira] [Updated] (HBASE-15707) ImportTSV bulk output does not support tags with hfile.format.version=3
[ https://issues.apache.org/jira/browse/HBASE-15707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-15707: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 1.4.0 1.3.0 Status: Resolved (was: Patch Available) Thanks for the patch, huaxiang. > ImportTSV bulk output does not support tags with hfile.format.version=3 > --- > > Key: HBASE-15707 > URL: https://issues.apache.org/jira/browse/HBASE-15707 > Project: HBase > Issue Type: Bug > Components: mapreduce >Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.5, 1.2.2, 1.0.5 >Reporter: huaxiang sun >Assignee: huaxiang sun > Fix For: 2.0.0, 1.3.0, 1.4.0 > > Attachments: HBASE-15707-branch-1_v001.patch, HBASE-15707-v001.patch, > HBASE-15707-v002.patch > > > Running the following command: > {code} > hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \ > -Dhfile.format.version=3 \ > -Dmapreduce.map.combine.minspills=1 \ > -Dimporttsv.separator=, \ > -Dimporttsv.skip.bad.lines=false \ > -Dimporttsv.columns="HBASE_ROW_KEY,cf1:a,HBASE_CELL_TTL" \ > -Dimporttsv.bulk.output=/tmp/testttl/output/1 \ > testttl \ > /tmp/testttl/input > {code} > The content of input is like: > {code} > row1,data1,0060 > row2,data2,0660 > row3,data3,0060 > row4,data4,0660 > {code} > When running the hfile tool on the output hfile, there is no ttl tag. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HBASE-15707) ImportTSV bulk output does not support tags with hfile.format.version=3
[ https://issues.apache.org/jira/browse/HBASE-15707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu reassigned HBASE-15707: -- Assignee: huaxiang sun > ImportTSV bulk output does not support tags with hfile.format.version=3 > --- > > Key: HBASE-15707 > URL: https://issues.apache.org/jira/browse/HBASE-15707 > Project: HBase > Issue Type: Bug > Components: mapreduce >Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.5, 1.2.2, 1.0.5 >Reporter: huaxiang sun >Assignee: huaxiang sun > Fix For: 2.0.0 > > Attachments: HBASE-15707-branch-1_v001.patch, HBASE-15707-v001.patch, > HBASE-15707-v002.patch > > > Running the following command: > {code} > hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \ > -Dhfile.format.version=3 \ > -Dmapreduce.map.combine.minspills=1 \ > -Dimporttsv.separator=, \ > -Dimporttsv.skip.bad.lines=false \ > -Dimporttsv.columns="HBASE_ROW_KEY,cf1:a,HBASE_CELL_TTL" \ > -Dimporttsv.bulk.output=/tmp/testttl/output/1 \ > testttl \ > /tmp/testttl/input > {code} > The content of input is like: > {code} > row1,data1,0060 > row2,data2,0660 > row3,data3,0060 > row4,data4,0660 > {code} > When running the hfile tool on the output hfile, there is no ttl tag. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15727) Canary Tool for Zookeeper
[ https://issues.apache.org/jira/browse/HBASE-15727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] churro morales updated HBASE-15727: --- Attachment: HBASE-15727.patch I only time getData() for the base znode, even though an exists() call is also made. I was torn over whether to time both, but thought getData() would be the better measure. > Canary Tool for Zookeeper > - > > Key: HBASE-15727 > URL: https://issues.apache.org/jira/browse/HBASE-15727 > Project: HBase > Issue Type: Improvement >Affects Versions: 2.0.0 >Reporter: churro morales >Assignee: churro morales > Attachments: HBASE-15727.patch > > > It would be nice to have the canary tool also monitor zookeeper. Something > simple like doing a getData() call on zookeeper.znode.parent > It would be nice to create clients for every instance in the quorum such that > you could monitor overloaded or poorly behaving instances. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15727) Canary Tool for Zookeeper
[ https://issues.apache.org/jira/browse/HBASE-15727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] churro morales updated HBASE-15727: --- Status: Patch Available (was: Open) > Canary Tool for Zookeeper > - > > Key: HBASE-15727 > URL: https://issues.apache.org/jira/browse/HBASE-15727 > Project: HBase > Issue Type: Improvement >Affects Versions: 2.0.0 >Reporter: churro morales >Assignee: churro morales > Attachments: HBASE-15727.patch > > > It would be nice to have the canary tool also monitor zookeeper. Something > simple like doing a getData() call on zookeeper.znode.parent > It would be nice to create clients for every instance in the quorum such that > you could monitor overloaded or poor behaving instances. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15707) ImportTSV bulk output does not support tags with hfile.format.version=3
[ https://issues.apache.org/jira/browse/HBASE-15707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15260872#comment-15260872 ] Hadoop QA commented on HBASE-15707: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 3s {color} | {color:red} HBASE-15707 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/0.2.1/precommit-patchnames for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12801103/HBASE-15707-branch-1_v001.patch | | JIRA Issue | HBASE-15707 | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/1640/console | | Powered by | Apache Yetus 0.2.1 http://yetus.apache.org | This message was automatically generated. > ImportTSV bulk output does not support tags with hfile.format.version=3 > --- > > Key: HBASE-15707 > URL: https://issues.apache.org/jira/browse/HBASE-15707 > Project: HBase > Issue Type: Bug > Components: mapreduce >Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.5, 1.2.2, 1.0.5 >Reporter: huaxiang sun > Fix For: 2.0.0 > > Attachments: HBASE-15707-branch-1_v001.patch, HBASE-15707-v001.patch, > HBASE-15707-v002.patch > > > Running the following command: > {code} > hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \ > -Dhfile.format.version=3 \ > -Dmapreduce.map.combine.minspills=1 \ > -Dimporttsv.separator=, \ > -Dimporttsv.skip.bad.lines=false \ > -Dimporttsv.columns="HBASE_ROW_KEY,cf1:a,HBASE_CELL_TTL" \ > -Dimporttsv.bulk.output=/tmp/testttl/output/1 \ > testttl \ > /tmp/testttl/input > {code} > The content of input is like: > {code} > row1,data1,0060 > row2,data2,0660 > row3,data3,0060 > row4,data4,0660 > {code} > When running the hfile tool on the output hfile, there is no ttl tag. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14140) HBase Backup/Restore Phase 3: Enhance HBaseAdmin API to include backup/restore - related API
[ https://issues.apache.org/jira/browse/HBASE-14140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15260868#comment-15260868 ] Vladimir Rodionov commented on HBASE-14140: --- That is mostly file moves/renaming; that is why it is so large. > HBase Backup/Restore Phase 3: Enhance HBaseAdmin API to include > backup/restore - related API > > > Key: HBASE-14140 > URL: https://issues.apache.org/jira/browse/HBASE-14140 > Project: HBase > Issue Type: New Feature >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Attachments: HBASE-14140-v1.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14140) HBase Backup/Restore Phase 3: Enhance HBaseAdmin API to include backup/restore - related API
[ https://issues.apache.org/jira/browse/HBASE-14140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15260867#comment-15260867 ] Vladimir Rodionov commented on HBASE-14140: --- Here it is: https://reviews.apache.org/r/46749/ > HBase Backup/Restore Phase 3: Enhance HBaseAdmin API to include > backup/restore - related API > > > Key: HBASE-14140 > URL: https://issues.apache.org/jira/browse/HBASE-14140 > Project: HBase > Issue Type: New Feature >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Attachments: HBASE-14140-v1.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)