[jira] [Comment Edited] (HBASE-20064) Disable MOB threads that are running whether you MOB or not
[ https://issues.apache.org/jira/browse/HBASE-20064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378182#comment-16378182 ] Reid Chan edited comment on HBASE-20064 at 2/27/18 7:54 AM: Looks like there are more holes than I expected. Even though this config controls the startup, if a user specifies a MOB column family (which the switch in this patch cannot control), those generated MOB files will be kept forever, with no chore to clean them and no threads to compact them? was (Author: reidchan): Looks like there are more holes than I expected. Even though this config controls the startup, if a user specifies a MOB column family, those MOB files will be kept forever, with no chore to clean them and no threads to compact them? > Disable MOB threads that are running whether you MOB or not > --- > > Key: HBASE-20064 > URL: https://issues.apache.org/jira/browse/HBASE-20064 > Project: HBase > Issue Type: Bug > Reporter: stack > Assignee: Reid Chan > Priority: Major > Fix For: 2.0.0 > > Attachments: HBASE-20064.master.001.patch, > HBASE-20064.master.002.patch > > > Master starts up some cleaner and compacting threads even when there is no MOB. > Disable them and have users explicitly enable MOB. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20086) PE randomSeekScan fails with ClassNotFoundException
[ https://issues.apache.org/jira/browse/HBASE-20086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378185#comment-16378185 ] Hudson commented on HBASE-20086: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4656 (See [https://builds.apache.org/job/HBase-Trunk_matrix/4656/]) HBASE-20086 PE randomSeekScan fails with ClassNotFoundException (tedyu: rev d3aefe783476e860e7b1c474b50cf18a7ae0be00) * (edit) hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java > PE randomSeekScan fails with ClassNotFoundException > --- > > Key: HBASE-20086 > URL: https://issues.apache.org/jira/browse/HBASE-20086 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu >Priority: Major > Fix For: 2.0.0-beta-2 > > Attachments: 20086.v1.txt, 20086.v2.txt, 20086.v3.txt > > > When running PE randomSeekScan against hadoop 3 cluster, I got the following > error: > {code} > 2018-02-26 17:11:09,548 INFO [main] mapreduce.Job: Task Id : > attempt_1519408774395_0003_m_04_0, Status : FAILED > Error: java.lang.ClassNotFoundException: > org.apache.hadoop.hbase.filter.FilterAllFilter > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at java.lang.Class.forName0(Native Method) > at java.lang.Class.forName(Class.java:264) > at > org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.forName(PerformanceEvaluation.java:291) > at > org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.setup(PerformanceEvaluation.java:276) > at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:794) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174) > {code} > This is due to FilterAllFilter being 
inside the hbase-server tests jar, and hence not added as a dependency for the PE job.
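The failure mode above is an ordinary classloader miss: the mapper's Class.forName cannot find FilterAllFilter because the jar that contains it (the hbase-server tests jar) was never shipped with the job. Below is a minimal, self-contained illustration of that mechanism; the actual fix is presumably to add the containing jar to the job's dependency jars (e.g. via HBase's TableMapReduceUtil helpers), but the attached patch is the authoritative change.

```java
public class ClasspathMissDemo {
  /** Returns true if the named class is loadable on the current classpath. */
  static boolean isLoadable(String className) {
    try {
      Class.forName(className);
      return true;
    } catch (ClassNotFoundException e) {
      return false;
    }
  }

  public static void main(String[] args) {
    // java.util.ArrayList ships with the JDK, so it always loads.
    System.out.println(isLoadable("java.util.ArrayList")); // prints "true"
    // FilterAllFilter lives in the hbase-server *tests* jar; unless that jar
    // is shipped with the job, map tasks fail exactly as in the report above.
    System.out.println(isLoadable("org.apache.hadoop.hbase.filter.FilterAllFilter"));
  }
}
```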
[jira] [Commented] (HBASE-20064) Disable MOB threads that are running whether you MOB or not
[ https://issues.apache.org/jira/browse/HBASE-20064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378184#comment-16378184 ] Reid Chan commented on HBASE-20064: --- Maybe HBase needs a global control component to guard MOB; this issue is becoming huge...
[jira] [Updated] (HBASE-20066) Region sequence id may go backward after split or merge
[ https://issues.apache.org/jira/browse/HBASE-20066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-20066: -- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Pushed to master and branch-2. Thanks [~stack] for reviewing. > Region sequence id may go backward after split or merge > --- > > Key: HBASE-20066 > URL: https://issues.apache.org/jira/browse/HBASE-20066 > Project: HBase > Issue Type: Bug > Reporter: Duo Zhang > Assignee: Duo Zhang > Priority: Critical > Fix For: 2.0.0-beta-2 > > Attachments: HBASE-20066-v1.patch, HBASE-20066-v2.patch, > HBASE-20066-v3.patch, HBASE-20066-v4.patch, HBASE-20066.patch > > > The problem is that we now have markers which are written to the WAL but not > to store files. For a normal region close, we write a sequence id file > under the region directory, and when opening we use this as the open > sequence id. But for split and merge, we do not copy the sequence id file to > the newly generated regions, so the sequence id may go backwards, since when > closing the region we write flush and close markers into the WAL...
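The interplay described above can be modeled in a few lines: the only thing keeping open sequence ids ahead of the markers already in the WAL is the sequence id file written on a clean close, and split/merge skips writing it for the daughter regions. A toy, self-contained model follows; all names here are illustrative stand-ins, not HBase's actual classes.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of HBASE-20066: a region's open sequence id comes from the
// "sequence id file" written on a normal close; daughters of a split never
// get one, so their ids can fall below markers already written to the WAL.
public class SeqIdModel {
  static final Map<String, Long> seqIdFiles = new HashMap<>(); // per region dir

  /** Normal close: record the max sequence id under the region directory. */
  static void normalClose(String region, long maxSeqId) {
    seqIdFiles.put(region, maxSeqId);
  }

  /** Open: resume from the recorded id, or fall back to a low default. */
  static long openSeqId(String region) {
    return seqIdFiles.getOrDefault(region, 0L) + 1;
  }

  public static void main(String[] args) {
    normalClose("parent", 41);
    // Split: parent closed at id 41 (flush + close markers in the WAL), but
    // the sequence id file is NOT copied to the daughter region directory.
    long daughterOpen = openSeqId("daughterA");
    System.out.println(daughterOpen < 42); // prints "true": id went backwards
  }
}
```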
[jira] [Commented] (HBASE-20064) Disable MOB threads that are running whether you MOB or not
[ https://issues.apache.org/jira/browse/HBASE-20064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378182#comment-16378182 ] Reid Chan commented on HBASE-20064: --- Looks like there are more holes than I expected. Even though this config controls the startup, if a user specifies a MOB column family, those MOB files will be kept forever, with no chore to clean them and no threads to compact them?
[jira] [Commented] (HBASE-20092) Fix TestRegionMetrics#testRegionMetrics
[ https://issues.apache.org/jira/browse/HBASE-20092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378180#comment-16378180 ] Hadoop QA commented on HBASE-20092: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 39s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 5s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 6m 2s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 16s{color} | {color:red} hbase-server in master has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 47s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 20m 9s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green}117m 32s{color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}162m 44s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 | | JIRA Issue | HBASE-20092 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12912202/HBASE-20092.v0.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux dbc3dfa40f02 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / d3aefe7834 | | maven | version: Apache Maven 3.5.2 (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC3 | | findbugs | https://builds.apache.org/job/PreCommit-HBASE-Build/11699/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/11699/testReport/ | | Max. process+thread count | 4674 (vs. ulimit of 1) | | modules | C: hbase-server U: hbase-server | | Console output |
[jira] [Commented] (HBASE-20064) Disable MOB threads that are running whether you MOB or not
[ https://issues.apache.org/jira/browse/HBASE-20064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378166#comment-16378166 ] Chia-Ping Tsai commented on HBASE-20064: Use getAllDescriptors instead. getAll will be removed by HBASE-20097.
{code:java}
+  /**
+   * Check whether any table has a MOB column family.
+   * @return true if there is, false otherwise
+   * @throws IOException ioe possibly happens while getting all TableDescriptors
+   */
+  private boolean checkColumnFamilyMobEnable() throws IOException {
+    TableDescriptors tds = getTableDescriptors();
+    // Can we expect no ioe? since this method is only called at master start up.
+    for (TableDescriptor td : tds.getAll().values()) { // here
+      for (ColumnFamilyDescriptor cfd : td.getColumnFamilies()) {
+        if (cfd.isMobEnabled()) {
+          return true;
+        }
+      }
+    }
+    return false;
+  }
{code}
mobCacheConfig can be final. You can assign null to it if MOB is disabled.
{code:java}
-  final MobCacheConfig mobCacheConfig;
+  MobCacheConfig mobCacheConfig;
{code}
Our metrics still assume MOB is enabled. Please check RegionServerMetricsWrapperRunnable#run. It may cause an NPE.
{code:java|title=RegionServerMetricsWrapperRunnable.java}
mobFileCacheAccessCount = mobFileCache.getAccessCount();
mobFileCacheMissCount = mobFileCache.getMissCount();
mobFileCacheHitRatio = Double.isNaN(mobFileCache.getHitRatio()) ? 0 : mobFileCache.getHitRatio();
mobFileCacheEvictedCount = mobFileCache.getEvictedFileCount();
mobFileCacheCount = mobFileCache.getCacheSize();
blockedRequestsCount = tempBlockedRequestsCount;
{code}
Should we shut down the RS as we do with the master?
{code:java}
-  mobCacheConfig = new MobCacheConfig(conf);
+  if (MobUtils.isMobEnable(conf)) {
+    mobCacheConfig = new MobCacheConfig(conf);
+  }
{code}
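The proposed checkColumnFamilyMobEnable() in the review above is a straight linear scan over all table descriptors. A self-contained sketch of the same logic follows, with stand-in record types (ColumnFamily/Table here are illustrative substitutes for HBase's ColumnFamilyDescriptor/TableDescriptor, not the real API):

```java
import java.util.List;
import java.util.Map;

public class MobCheckSketch {
  // Stand-ins for HBase's descriptor types; names and shapes are illustrative.
  record ColumnFamily(String name, boolean mobEnabled) {}
  record Table(String name, List<ColumnFamily> families) {}

  /** True if any table has at least one MOB-enabled column family. */
  static boolean anyMobEnabled(Map<String, Table> descriptors) {
    for (Table td : descriptors.values()) {
      for (ColumnFamily cfd : td.families()) {
        if (cfd.mobEnabled()) {
          return true;
        }
      }
    }
    return false;
  }

  public static void main(String[] args) {
    Map<String, Table> tables = Map.of(
        "t1", new Table("t1", List.of(new ColumnFamily("f1", false))),
        "t2", new Table("t2", List.of(new ColumnFamily("f1", false),
                                      new ColumnFamily("mob_cf", true))));
    System.out.println(anyMobEnabled(tables)); // prints "true"
  }
}
```

The scan is O(total column families) and runs once at master startup, so the cost of iterating every descriptor is negligible.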
[jira] [Updated] (HBASE-20069) fix existing findbugs errors in hbase-server
[ https://issues.apache.org/jira/browse/HBASE-20069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-20069: -- Resolution: Fixed Status: Resolved (was: Patch Available) Pushed the addendum. Thank you for review [~chia7712] > fix existing findbugs errors in hbase-server > > > Key: HBASE-20069 > URL: https://issues.apache.org/jira/browse/HBASE-20069 > Project: HBase > Issue Type: Sub-task > Components: findbugs >Reporter: Sean Busbey >Assignee: stack >Priority: Critical > Fix For: 2.0.0-beta-2 > > Attachments: > 0001-HBASE-20069-fix-existing-findbugs-errors-addendum.patch, > 0002-HBASE-20069-fix-existing-findbugs-errors-addendum.patch, FindBugs > Report.htm, HBASE-20069.branch-2.001.patch, HBASE-20069.branch-2.002.patch, > HBASE-20069.branch-2.003.patch, HBASE-20069.branch-2.004.patch > > > now that findbugs is running on precommit we have some cleanup to do. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-20097) Remove TableDescriptors#getAll since it just clone the returned object from TableDescriptors#getAllDescriptors
Chia-Ping Tsai created HBASE-20097: -- Summary: Remove TableDescriptors#getAll since it just clone the returned object from TableDescriptors#getAllDescriptors Key: HBASE-20097 URL: https://issues.apache.org/jira/browse/HBASE-20097 Project: HBase Issue Type: Task Reporter: Chia-Ping Tsai Assignee: Chia-Ping Tsai Fix For: 2.0.0
{code:java}
@Override
public Map<String, TableDescriptor> getAll() throws IOException {
  Map<String, TableDescriptor> htds = new TreeMap<>();
  Map<String, TableDescriptor> allDescriptors = getAllDescriptors();
  for (Map.Entry<String, TableDescriptor> entry : allDescriptors.entrySet()) {
    htds.put(entry.getKey(), entry.getValue());
  }
  return htds;
}
{code}
The map returned from #getAllDescriptors isn't an internal object, so making the copy is meaningless.
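What makes the copy above pointless is that it is shallow: the new TreeMap holds the very same value references, so callers get no isolation from the original map anyway. A self-contained demonstration (StringBuilder stands in here for any mutable value type; this is an illustration of the general point, not HBase code):

```java
import java.util.Map;
import java.util.TreeMap;

public class ShallowCopyDemo {
  /** Entry-by-entry "clone", mirroring the getAll() pattern being removed. */
  static Map<String, StringBuilder> copy(Map<String, StringBuilder> src) {
    Map<String, StringBuilder> dst = new TreeMap<>();
    for (Map.Entry<String, StringBuilder> e : src.entrySet()) {
      dst.put(e.getKey(), e.getValue()); // copies references, not values
    }
    return dst;
  }

  public static void main(String[] args) {
    Map<String, StringBuilder> original = new TreeMap<>();
    original.put("t1", new StringBuilder("v1"));
    Map<String, StringBuilder> clone = copy(original);
    // The "clone" shares the same value objects: mutating one mutates both.
    clone.get("t1").append("-mutated");
    System.out.println(original.get("t1")); // prints "v1-mutated"
  }
}
```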
[jira] [Commented] (HBASE-20069) fix existing findbugs errors in hbase-server
[ https://issues.apache.org/jira/browse/HBASE-20069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378158#comment-16378158 ] Hadoop QA commented on HBASE-20069: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 21s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 5s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 49s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 1s{color} | {color:red} hbase-server in master has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 39s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 18m 48s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green}118m 22s{color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}160m 32s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 | | JIRA Issue | HBASE-20069 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12912197/0002-HBASE-20069-fix-existing-findbugs-errors-addendum.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 2ff83613751b 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh | | git revision | master / d3aefe7834 | | maven | version: Apache Maven 3.5.2 (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC3 | | findbugs | https://builds.apache.org/job/PreCommit-HBASE-Build/11698/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html | | Test Results |
[jira] [Commented] (HBASE-20001) cleanIfNoMetaEntry() uses encoded instead of region name to lookup region
[ https://issues.apache.org/jira/browse/HBASE-20001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378144#comment-16378144 ] Thiruvel Thirumoolan commented on HBASE-20001: -- Pre-commit results for the branch-1.3 patch: TestEndToEndSplitTransaction#testMasterOpsWhileSplitting has been failing for a while, see nightly build [https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/245/testReport/] Pre-commit results for the branch-1.2 patch: TestMultiTableSnapshotInputFormat.testScanOBBToOPP is flaky, as can be seen in one of the nightly builds [https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/243/] TestSplitTransactionOnCluster.testMasterRestartWhenSplittingIsPartial is flaky too, as can be seen here [https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/246/] > cleanIfNoMetaEntry() uses encoded instead of region name to lookup region > - > > Key: HBASE-20001 > URL: https://issues.apache.org/jira/browse/HBASE-20001 > Project: HBase > Issue Type: Bug > Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.1.7 > Reporter: Francis Liu > Assignee: Thiruvel Thirumoolan > Priority: Major > Fix For: 1.3.2, 1.5.0, 1.2.7, 1.4.3 > > Attachments: HBASE-20001.branch-1.2.001.patch, > HBASE-20001.branch-1.3.001.patch, HBASE-20001.branch-1.4.001.patch, > HBASE-20001.branch-1.4.002.patch, HBASE-20001.branch-1.4.003.patch, > HBASE-20001.branch-1.4.004.patch, HBASE-20001.branch-1.4.005.patch, > HBASE-20001.branch-1.4.006.patch > > > In RegionStates.cleanIfNoMetaEntry():
{code:java}
if (MetaTableAccessor.getRegion(server.getConnection(), hri.getEncodedNameAsBytes()) == null) {
  regionOffline(hri);
  FSUtils.deleteRegionDir(server.getConfiguration(), hri);
}
{code}
> But the API expects the full region name: {{public static Pair<HRegionInfo, ServerName> getRegion(Connection connection, byte[] regionName)}} > So we might end up cleaning up good regions. > > ADDENDUM: > The scenario mentioned occurs when zkless assignment is used. With zk-based > assignment, without the patch, what could occur is that the daughter regions are > offlined and have no hdfs directory but still have entries in meta. The daughter > meta entries will probably be picked up by the client, causing NSREs. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
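The bug above is a plain key mismatch: meta is keyed by the full region name, so a lookup with the encoded name always misses, and a perfectly healthy region looks like it has no meta entry. A toy, self-contained model of that failure (the map, names, and values here are illustrative, not HBase's real meta access path):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of HBASE-20001: meta is keyed by the FULL region name, so looking
// it up with the encoded name always returns null, and the caller wrongly
// concludes the region has no meta entry and deletes it.
public class MetaLookupBug {
  static final Map<String, String> meta = new HashMap<>(); // regionName -> server

  /** Mirrors getRegion(Connection, byte[] regionName): expects the FULL name. */
  static String getRegion(String regionName) {
    return meta.get(regionName);
  }

  public static void main(String[] args) {
    String fullName = "t1,,1519700000000.abcdef1234567890abcdef1234567890.";
    String encodedName = "abcdef1234567890abcdef1234567890";
    meta.put(fullName, "rs1");

    System.out.println(getRegion(fullName));    // prints "rs1": region is fine
    // The buggy call passes the encoded name, gets null, and would then
    // offline the region and delete its directory: a good region lost.
    System.out.println(getRegion(encodedName)); // prints "null"
  }
}
```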
[jira] [Comment Edited] (HBASE-20092) Fix TestRegionMetrics#testRegionMetrics
[ https://issues.apache.org/jira/browse/HBASE-20092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378075#comment-16378075 ] Chia-Ping Tsai edited comment on HBASE-20092 at 2/27/18 6:57 AM: - Thanks [~yuzhih...@gmail.com] for the reviews. Will address your comment in next patch. was (Author: chia7712): Thanks [~ted yu] for the reviews. Will address your comment in next patch. > Fix TestRegionMetrics#testRegionMetrics > --- > > Key: HBASE-20092 > URL: https://issues.apache.org/jira/browse/HBASE-20092 > Project: HBase > Issue Type: Task > Components: test >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-20092.v0.patch > > > {code:java} > java.lang.AssertionError: expected:<12> but was:<13> > at > org.apache.hadoop.hbase.TestRegionMetrics.testRegionMetrics(TestRegionMetrics.java:111){code} > [http://104.198.223.121:8080/job/HBASE-Flaky-Tests/34589/testReport/junit/org.apache.hadoop.hbase/TestRegionMetrics/testRegionMetrics/] > http://104.198.223.121:8080/job/HBASE-Flaky-Tests/34591/testReport/junit/org.apache.hadoop.hbase/TestRegionMetrics/testRegionMetrics/ > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20064) Disable MOB threads that are running whether you MOB or not
[ https://issues.apache.org/jira/browse/HBASE-20064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378125#comment-16378125 ] Reid Chan commented on HBASE-20064: --- I chose your former suggestion: just shut down the master and throw an IllegalArgumentException.
[jira] [Updated] (HBASE-20064) Disable MOB threads that are running whether you MOB or not
[ https://issues.apache.org/jira/browse/HBASE-20064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Reid Chan updated HBASE-20064: -- Attachment: HBASE-20064.master.002.patch
[jira] [Updated] (HBASE-20064) Disable MOB threads that are running whether you MOB or not
[ https://issues.apache.org/jira/browse/HBASE-20064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Reid Chan updated HBASE-20064: -- Attachment: (was: HBASE-20064.master.002.patch)
[jira] [Updated] (HBASE-20064) Disable MOB threads that are running whether you MOB or not
[ https://issues.apache.org/jira/browse/HBASE-20064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Reid Chan updated HBASE-20064: -- Attachment: HBASE-20064.master.002.patch
[jira] [Updated] (HBASE-19863) java.lang.IllegalStateException: isDelete failed when SingleColumnValueFilter is used
[ https://issues.apache.org/jira/browse/HBASE-19863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Soldatov updated HBASE-19863: Attachment: HBASE-19863.v5-branch-1.4.patch > java.lang.IllegalStateException: isDelete failed when SingleColumnValueFilter > is used > - > > Key: HBASE-19863 > URL: https://issues.apache.org/jira/browse/HBASE-19863 > Project: HBase > Issue Type: Bug > Components: Filters >Affects Versions: 1.4.1 >Reporter: Sergey Soldatov >Assignee: Sergey Soldatov >Priority: Major > Attachments: HBASE-19863-branch-2.patch, HBASE-19863-branch1.patch, > HBASE-19863-test.patch, HBASE-19863.v2-branch-2.patch, > HBASE-19863.v3-branch-2.patch, HBASE-19863.v4-branch-2.patch, > HBASE-19863.v4-master.patch, HBASE-19863.v5-branch-1.4.patch, > HBASE-19863.v5-branch-1.patch, HBASE-19863.v5-branch-2.patch > > > Under some circumstances scan with SingleColumnValueFilter may fail with an > exception > {noformat} > java.lang.IllegalStateException: isDelete failed: deleteBuffer=C3, > qualifier=C2, timestamp=1516433595543, comparison result: 1 > at > org.apache.hadoop.hbase.regionserver.ScanDeleteTracker.isDeleted(ScanDeleteTracker.java:149) > at > org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:386) > at > org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:545) > at > org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5876) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6027) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5814) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2552) > at > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32385) > at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167) > {noformat} > Conditions: > table T with a single column family 0 that uses a ROWCOL bloom filter > (important) and column qualifiers C1,C2,C3,C4,C5. > When we fill the table, for every row we put a delete cell for C3. > The table has a single region with two HStores: > A: start row: 0, stop row: 99 > B: start row: 10, stop row: 99 > B has newer versions of rows 10-99. Store files have several blocks each > (important). > Store A is the result of a major compaction, so it doesn't have any deleted > cells (important). > So, we are running a scan like: > {noformat} > scan 'T', { COLUMNS => ['0:C3','0:C5'], FILTER => "SingleColumnValueFilter > ('0','C5',=,'binary:whatever')"} > {noformat} > How the scan performs: > First, we iterate A for rows 0 and 1 without any problems. > Next, we start to iterate A for row 10, so we read the first cell and set the hfile > scanner to A : > 10:0/C1/0/Put/x, but find that we have a newer version of the cell in B : > 10:0/C1/1/Put/x, > so we make B our current store scanner. Since we are looking for the particular columns > C3 and C5, we perform the optimization StoreScanner.seekOrSkipToNextColumn, which > runs reseek for all store scanners. > For store A the following magic happens in requestSeek: > 1. The bloom filter check (passesGeneralBloomFilter) sets haveToSeek to > false because row 10 doesn't have the C3 qualifier in store A. > 2. Since we don't have to seek, we just create a fake row > 10:0/C3/OLDEST_TIMESTAMP/Maximum, an optimization that is quite important for > us and is commented with: > {noformat} > // Multi-column Bloom filter optimization. > // Create a fake key/value, so that this scanner only bubbles up to the > top > // of the KeyValueHeap in StoreScanner after we scanned this row/column in > // all other store files. The query matcher will then just skip this fake > // key/value and the store scanner will progress to the next column. This > // is obviously not a "real real" seek, but unlike the fake KV earlier in > // this method, we want this to be propagated to ScanQueryMatcher. > {noformat} > > For store B we would set it to the fake 10:0/C3/createFirstOnRowColTS()/Maximum > to skip C3 entirely. > After that we start searching for qualifier C5 using seekOrSkipToNextColumn, > which first runs trySkipToNextColumn: > {noformat} > protected boolean
[jira] [Commented] (HBASE-20092) Fix TestRegionMetrics#testRegionMetrics
[ https://issues.apache.org/jira/browse/HBASE-20092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378075#comment-16378075 ] Chia-Ping Tsai commented on HBASE-20092: Thanks [~ted yu] for the reviews. Will address your comment in next patch. > Fix TestRegionMetrics#testRegionMetrics > --- > > Key: HBASE-20092 > URL: https://issues.apache.org/jira/browse/HBASE-20092 > Project: HBase > Issue Type: Task > Components: test >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-20092.v0.patch > > > {code:java} > java.lang.AssertionError: expected:<12> but was:<13> > at > org.apache.hadoop.hbase.TestRegionMetrics.testRegionMetrics(TestRegionMetrics.java:111){code} > [http://104.198.223.121:8080/job/HBASE-Flaky-Tests/34589/testReport/junit/org.apache.hadoop.hbase/TestRegionMetrics/testRegionMetrics/] > http://104.198.223.121:8080/job/HBASE-Flaky-Tests/34591/testReport/junit/org.apache.hadoop.hbase/TestRegionMetrics/testRegionMetrics/ > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20001) cleanIfNoMetaEntry() uses encoded instead of region name to lookup region
[ https://issues.apache.org/jira/browse/HBASE-20001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378069#comment-16378069 ] Hadoop QA commented on HBASE-20001: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 47s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 1s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} branch-1.2 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 48s{color} | {color:green} branch-1.2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s{color} | {color:green} branch-1.2 passed with JDK v1.8.0_162 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} branch-1.2 passed with JDK v1.7.0_171 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 37s{color} | {color:green} branch-1.2 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 47s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} branch-1.2 passed with JDK v1.8.0_162 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} branch-1.2 passed with JDK v1.7.0_171 {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} the patch passed with JDK v1.8.0_162 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} the patch passed with JDK v1.7.0_171 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 29s{color} | {color:green} hbase-server: The patch generated 0 new + 338 unchanged - 12 fixed = 338 total (was 350) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 3m 5s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 10m 48s{color} | {color:green} Patch does not cause any errors with Hadoop 2.4.1 2.5.2 2.6.5 2.7.4. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | {color:green} the patch passed with JDK v1.8.0_162 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} the patch passed with JDK v1.7.0_171 {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 40s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}141m 33s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.mapred.TestMultiTableSnapshotInputFormat | | | hadoop.hbase.regionserver.TestSplitTransactionOnCluster | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:e77c578 | | JIRA Issue | HBASE-20001 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12912187/HBASE-20001.branch-1.2.001.patch | | Optional Tests | asflicense
[jira] [Commented] (HBASE-20092) Fix TestRegionMetrics#testRegionMetrics
[ https://issues.apache.org/jira/browse/HBASE-20092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378053#comment-16378053 ] Ted Yu commented on HBASE-20092: Looks good. {code} 125 // get the data from RS. Hence, it will fail if we do the assert check before RS have done {code} nit: have done -> has done The new info logs toward the end of patch can be DEBUG, right ? > Fix TestRegionMetrics#testRegionMetrics > --- > > Key: HBASE-20092 > URL: https://issues.apache.org/jira/browse/HBASE-20092 > Project: HBase > Issue Type: Task > Components: test >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-20092.v0.patch > > > {code:java} > java.lang.AssertionError: expected:<12> but was:<13> > at > org.apache.hadoop.hbase.TestRegionMetrics.testRegionMetrics(TestRegionMetrics.java:111){code} > [http://104.198.223.121:8080/job/HBASE-Flaky-Tests/34589/testReport/junit/org.apache.hadoop.hbase/TestRegionMetrics/testRegionMetrics/] > http://104.198.223.121:8080/job/HBASE-Flaky-Tests/34591/testReport/junit/org.apache.hadoop.hbase/TestRegionMetrics/testRegionMetrics/ > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
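The fix under review makes the test wait until the region server has reported its metrics before asserting, instead of racing the report. A minimal, stdlib-only sketch of that retry-until-condition pattern (illustrative only — the actual patch uses HBase's own test utilities, and the names here are made up):

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

public class WaitForCondition {
    // Polls the condition until it holds or the timeout elapses.
    // Returns true if the condition became true within the timeout.
    static boolean waitFor(long timeoutMs, long intervalMs, BooleanSupplier condition)
            throws InterruptedException {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
        while (System.nanoTime() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            Thread.sleep(intervalMs);
        }
        return condition.getAsBoolean(); // one last check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate a region server whose metrics arrive a little late.
        long start = System.currentTimeMillis();
        boolean ok = waitFor(2000, 10, () -> System.currentTimeMillis() - start > 50);
        System.out.println(ok);
    }
}
```

With a helper like this, the assert on the region count runs only after the condition has been observed to hold, which removes the window where the test sees 13 regions instead of 12.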
[jira] [Commented] (HBASE-20001) cleanIfNoMetaEntry() uses encoded instead of region name to lookup region
[ https://issues.apache.org/jira/browse/HBASE-20001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378049#comment-16378049 ] Hadoop QA commented on HBASE-20001: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 3s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} branch-1.3 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 18s{color} | {color:green} branch-1.3 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s{color} | {color:green} branch-1.3 passed with JDK v1.8.0_162 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s{color} | {color:green} branch-1.3 passed with JDK v1.7.0_171 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 24s{color} | {color:green} branch-1.3 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 3m 51s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s{color} | {color:green} branch-1.3 passed with JDK v1.8.0_162 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | {color:green} branch-1.3 passed with JDK v1.7.0_171 {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s{color} | {color:green} the patch passed with JDK v1.8.0_162 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s{color} | {color:green} the patch passed with JDK v1.7.0_171 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 19s{color} | {color:green} hbase-server: The patch generated 0 new + 344 unchanged - 12 fixed = 344 total (was 356) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 2m 23s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 8m 26s{color} | {color:green} Patch does not cause any errors with Hadoop 2.4.1 2.5.2 2.6.5 2.7.4. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} the patch passed with JDK v1.8.0_162 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s{color} | {color:green} the patch passed with JDK v1.7.0_171 {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 46s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}136m 19s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.regionserver.TestEndToEndSplitTransaction | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:dca6535 | | JIRA Issue | HBASE-20001 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12912180/HBASE-20001.branch-1.3.001.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck
[jira] [Updated] (HBASE-20092) Fix TestRegionMetrics#testRegionMetrics
[ https://issues.apache.org/jira/browse/HBASE-20092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-20092: --- Attachment: HBASE-20092.v0.patch > Fix TestRegionMetrics#testRegionMetrics > --- > > Key: HBASE-20092 > URL: https://issues.apache.org/jira/browse/HBASE-20092 > Project: HBase > Issue Type: Task > Components: test >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-20092.v0.patch > > > {code:java} > java.lang.AssertionError: expected:<12> but was:<13> > at > org.apache.hadoop.hbase.TestRegionMetrics.testRegionMetrics(TestRegionMetrics.java:111){code} > [http://104.198.223.121:8080/job/HBASE-Flaky-Tests/34589/testReport/junit/org.apache.hadoop.hbase/TestRegionMetrics/testRegionMetrics/] > http://104.198.223.121:8080/job/HBASE-Flaky-Tests/34591/testReport/junit/org.apache.hadoop.hbase/TestRegionMetrics/testRegionMetrics/ > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20092) Fix TestRegionMetrics#testRegionMetrics
[ https://issues.apache.org/jira/browse/HBASE-20092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-20092: --- Status: Patch Available (was: Open) Loop TestRegionMetrics and TestRegionLoad 100 times. All pass > Fix TestRegionMetrics#testRegionMetrics > --- > > Key: HBASE-20092 > URL: https://issues.apache.org/jira/browse/HBASE-20092 > Project: HBase > Issue Type: Task > Components: test >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-20092.v0.patch > > > {code:java} > java.lang.AssertionError: expected:<12> but was:<13> > at > org.apache.hadoop.hbase.TestRegionMetrics.testRegionMetrics(TestRegionMetrics.java:111){code} > [http://104.198.223.121:8080/job/HBASE-Flaky-Tests/34589/testReport/junit/org.apache.hadoop.hbase/TestRegionMetrics/testRegionMetrics/] > http://104.198.223.121:8080/job/HBASE-Flaky-Tests/34591/testReport/junit/org.apache.hadoop.hbase/TestRegionMetrics/testRegionMetrics/ > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20086) PE randomSeekScan fails with ClassNotFoundException
[ https://issues.apache.org/jira/browse/HBASE-20086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378039#comment-16378039 ] Ted Yu commented on HBASE-20086: I tested using --nomapred flag which worked. > PE randomSeekScan fails with ClassNotFoundException > --- > > Key: HBASE-20086 > URL: https://issues.apache.org/jira/browse/HBASE-20086 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu >Priority: Major > Fix For: 2.0.0-beta-2 > > Attachments: 20086.v1.txt, 20086.v2.txt, 20086.v3.txt > > > When running PE randomSeekScan against hadoop 3 cluster, I got the following > error: > {code} > 2018-02-26 17:11:09,548 INFO [main] mapreduce.Job: Task Id : > attempt_1519408774395_0003_m_04_0, Status : FAILED > Error: java.lang.ClassNotFoundException: > org.apache.hadoop.hbase.filter.FilterAllFilter > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at java.lang.Class.forName0(Native Method) > at java.lang.Class.forName(Class.java:264) > at > org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.forName(PerformanceEvaluation.java:291) > at > org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.setup(PerformanceEvaluation.java:276) > at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:794) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174) > {code} > This is due to FilterAllFilter being inside hbase-server tests jar, hence not > added as dependency for PE job. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
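The underlying problem is that the jar containing FilterAllFilter was never shipped with the MapReduce job; HBase's TableMapReduceUtil addresses this class of problem by locating the jar that provides each required class and adding it to the job's classpath. The jar-locating step can be sketched with the stdlib alone (an illustration of the idea, not the actual PerformanceEvaluation fix):

```java
import java.security.CodeSource;

public class JarLocator {
    // Returns the classpath location (jar file or classes directory) a class
    // was loaded from, or null for classes without a code source, such as
    // bootstrap classes like java.lang.String on typical JDKs.
    static String locationOf(Class<?> clazz) {
        CodeSource src = clazz.getProtectionDomain().getCodeSource();
        return src == null ? null : src.getLocation().toString();
    }

    public static void main(String[] args) {
        // An application class has a concrete location that could be handed
        // to the job as a dependency jar; a JDK class typically does not.
        System.out.println(locationOf(JarLocator.class));
        System.out.println(locationOf(String.class));
    }
}
```

A job submitter can apply this to every non-JDK class its tasks will load reflectively, which is why a filter living only in the hbase-server tests jar fails unless that jar is explicitly shipped.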
[jira] [Commented] (HBASE-19863) java.lang.IllegalStateException: isDelete failed when SingleColumnValueFilter is used
[ https://issues.apache.org/jira/browse/HBASE-19863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378031#comment-16378031 ] ramkrishna.s.vasudevan commented on HBASE-19863: LGTM. Just a question: {code} public Table createTable(TableDescriptor htd, byte[][] families, byte[][] splitKeys, Configuration c) throws IOException { {code} This createTable() internally disables Bloom filters. By default I think we have ROW Bloom enabled, so if this createTable() is used we will always be disabling Blooms. Is that wanted? Rest looks good to me. > java.lang.IllegalStateException: isDelete failed when SingleColumnValueFilter > is used > - > > Key: HBASE-19863 > URL: https://issues.apache.org/jira/browse/HBASE-19863 > Project: HBase > Issue Type: Bug > Components: Filters >Affects Versions: 1.4.1 >Reporter: Sergey Soldatov >Assignee: Sergey Soldatov >Priority: Major > Attachments: HBASE-19863-branch-2.patch, HBASE-19863-branch1.patch, > HBASE-19863-test.patch, HBASE-19863.v2-branch-2.patch, > HBASE-19863.v3-branch-2.patch, HBASE-19863.v4-branch-2.patch, > HBASE-19863.v4-master.patch, HBASE-19863.v5-branch-1.patch, > HBASE-19863.v5-branch-2.patch > > > Under some circumstances scan with SingleColumnValueFilter may fail with an > exception > {noformat} > java.lang.IllegalStateException: isDelete failed: deleteBuffer=C3, > qualifier=C2, timestamp=1516433595543, comparison result: 1 > at > org.apache.hadoop.hbase.regionserver.ScanDeleteTracker.isDeleted(ScanDeleteTracker.java:149) > at > org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:386) > at > org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:545) > at > org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5876) > at > 

org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6027) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5814) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2552) > at > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32385) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167) > {noformat} > Conditions: > table T with a single column family 0 that uses ROWCOL bloom filter > (important) and column qualifiers C1,C2,C3,C4,C5. > When we fill the table for every row we put deleted cell for C3. > The table has a single region with two HStore: > A: start row: 0, stop row: 99 > B: start row: 10 stop row: 99 > B has newer versions of rows 10-99. Store files have several blocks each > (important). > Store A is the result of major compaction, so it doesn't have any deleted > cells (important). > So, we are running a scan like: > {noformat} > scan 'T', { COLUMNS => ['0:C3','0:C5'], FILTER => "SingleColumnValueFilter > ('0','C5',=,'binary:whatever')"} > {noformat} > How the scan performs: > First, we iterate A for rows 0 and 1 without any problems. > Next, we start to iterate A for row 10, so read the first cell and set hfs > scanner to A : > 10:0/C1/0/Put/x but found that we have a newer version of the cell in B : > 10:0/C1/1/Put/x, > so we make B as our current store scanner. Since we are looking for > particular columns > C3 and C5, we perform the optimization StoreScanner.seekOrSkipToNextColumn > which > would run reseek for all store scanners. > For store A the following magic would happen in requestSeek: > 1. 
bloom filter check passesGeneralBloomFilter would set haveToSeek to > false because row 10 doesn't have C3 qualifier in store A. > 2. Since we don't have to seek we just create a fake row > 10:0/C3/OLDEST_TIMESTAMP/Maximum, an optimization that is quite important for > us and it commented with : > {noformat} > // Multi-column Bloom filter optimization. > // Create a fake key/value, so that this scanner only bubbles up to the > top > // of the KeyValueHeap in StoreScanner after we scanned this row/column in > // all other store files. The query matcher will then just skip this fake > // key/value and the store scanner will progress to the next column. This > // is
[jira] [Commented] (HBASE-20086) PE randomSeekScan fails with ClassNotFoundException
[ https://issues.apache.org/jira/browse/HBASE-20086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378029#comment-16378029 ] ramkrishna.s.vasudevan commented on HBASE-20086: [~yuzhih...@gmail.com] So this change will work only if the PE tool is run in MapReduce mode? If we are running with 'nomapred', the FilterAllFilter gets loaded and works fine? If so this patch is fine. Thanks. > PE randomSeekScan fails with ClassNotFoundException > --- > > Key: HBASE-20086 > URL: https://issues.apache.org/jira/browse/HBASE-20086 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu >Priority: Major > Fix For: 2.0.0-beta-2 > > Attachments: 20086.v1.txt, 20086.v2.txt, 20086.v3.txt > > > When running PE randomSeekScan against hadoop 3 cluster, I got the following > error: > {code} > 2018-02-26 17:11:09,548 INFO [main] mapreduce.Job: Task Id : > attempt_1519408774395_0003_m_04_0, Status : FAILED > Error: java.lang.ClassNotFoundException: > org.apache.hadoop.hbase.filter.FilterAllFilter > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at java.lang.Class.forName0(Native Method) > at java.lang.Class.forName(Class.java:264) > at > org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.forName(PerformanceEvaluation.java:291) > at > org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.setup(PerformanceEvaluation.java:276) > at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:794) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174) > {code} > This is due to FilterAllFilter being inside hbase-server tests jar, hence not > added as dependency for PE job. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Issue Comment Deleted] (HBASE-20096) Missing version warning for exec-maven-plugin in hbase-shaded-check-invariants
[ https://issues.apache.org/jira/browse/HBASE-20096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-20096: Comment: was deleted (was: I really appreciate this wonderful post that you have provided for us. I assure this would be beneficial for most of the people, [mangahere|https://manga-here.io/]) > Missing version warning for exec-maven-plugin in hbase-shaded-check-invariants > -- > > Key: HBASE-20096 > URL: https://issues.apache.org/jira/browse/HBASE-20096 > Project: HBase > Issue Type: Bug > Components: build >Reporter: Andrew Purtell >Priority: Minor > > Reported by [~dbist13]: > Affects branch-1 and branch-1.4 > {noformat} > [WARNING] Some problems were encountered while building the effective model > for org.apache.hbase:hbase-shaded-check-invariants:pom:1.5.0-SNAPSHOT > [WARNING] 'build.plugins.plugin.version' for > org.codehaus.mojo:exec-maven-plugin is missing. @ > org.apache.hbase:hbase-shaded-check-invariants:[unknown-version], > /Users/apurtell/src/hbase/hbase-shaded/hbase-shaded-check-invariants/pom.xml, > line 161, column 15 > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20070) website generation is failing
[ https://issues.apache.org/jira/browse/HBASE-20070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378024#comment-16378024 ] Sean Busbey commented on HBASE-20070: - Also, MJAVADOC-511 is interesting as a way we might have been spared some pain here. > website generation is failing > - > > Key: HBASE-20070 > URL: https://issues.apache.org/jira/browse/HBASE-20070 > Project: HBase > Issue Type: Bug > Components: website >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Blocker > Attachments: HBASE-20070-misty.patch, HBASE-20070-misty.patch.1, > HBASE-20070-misty.patch.3, HBASE-20070.0.patch, HBASE-20070.1.patch, > HBASE-20070.2.patch, HBASE-20070.3.patch, > hbase-install-log-a29b3caf4dbc7b8833474ef5da5438f7f6907e00.txt > > > website generation has been failing since Feb 20th > {code} > Checking out files: 100% (68971/68971), done. > Usage: grep [OPTION]... PATTERN [FILE]... > Try 'grep --help' for more information. > PUSHED is 2 > is not yet mentioned in the hbase-site commit log. Assuming we don't have it > yet. 2 > Building HBase > Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; > support was removed in 8.0 > Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; > support was removed in 8.0 > Failure: mvn clean site > Build step 'Execute shell' marked build as failure > {code} > The status email says > {code} > Build status: Still Failing > The HBase website has not been updated to incorporate HBase commit > ${CURRENT_HBASE_COMMIT}. > {code} > Looking at the code where that grep happens, it looks like the env variable > CURRENT_HBASE_COMMIT isn't getting set. That comes from some git command. I'm > guessing the version of git changed on the build hosts and upended our > assumptions. > we should fix this to 1) rely on git's porcelain interface, and 2) fail as > soon as that git command fails -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19863) java.lang.IllegalStateException: isDelete failed when SingleColumnValueFilter is used
[ https://issues.apache.org/jira/browse/HBASE-19863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378023#comment-16378023 ] Ted Yu commented on HBASE-19863: [~ram_krish]: Do you want to take another look ? > java.lang.IllegalStateException: isDelete failed when SingleColumnValueFilter > is used > - > > Key: HBASE-19863 > URL: https://issues.apache.org/jira/browse/HBASE-19863 > Project: HBase > Issue Type: Bug > Components: Filters >Affects Versions: 1.4.1 >Reporter: Sergey Soldatov >Assignee: Sergey Soldatov >Priority: Major > Attachments: HBASE-19863-branch-2.patch, HBASE-19863-branch1.patch, > HBASE-19863-test.patch, HBASE-19863.v2-branch-2.patch, > HBASE-19863.v3-branch-2.patch, HBASE-19863.v4-branch-2.patch, > HBASE-19863.v4-master.patch, HBASE-19863.v5-branch-1.patch, > HBASE-19863.v5-branch-2.patch > > > Under some circumstances scan with SingleColumnValueFilter may fail with an > exception > {noformat} > java.lang.IllegalStateException: isDelete failed: deleteBuffer=C3, > qualifier=C2, timestamp=1516433595543, comparison result: 1 > at > org.apache.hadoop.hbase.regionserver.ScanDeleteTracker.isDeleted(ScanDeleteTracker.java:149) > at > org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:386) > at > org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:545) > at > org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5876) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6027) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5814) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2552) > at > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32385) > at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167) > {noformat} > Conditions: > table T with a single column family 0 that uses ROWCOL bloom filter > (important) and column qualifiers C1,C2,C3,C4,C5. > When we fill the table for every row we put deleted cell for C3. > The table has a single region with two HStore: > A: start row: 0, stop row: 99 > B: start row: 10 stop row: 99 > B has newer versions of rows 10-99. Store files have several blocks each > (important). > Store A is the result of major compaction, so it doesn't have any deleted > cells (important). > So, we are running a scan like: > {noformat} > scan 'T', { COLUMNS => ['0:C3','0:C5'], FILTER => "SingleColumnValueFilter > ('0','C5',=,'binary:whatever')"} > {noformat} > How the scan performs: > First, we iterate A for rows 0 and 1 without any problems. > Next, we start to iterate A for row 10, so read the first cell and set hfs > scanner to A : > 10:0/C1/0/Put/x but found that we have a newer version of the cell in B : > 10:0/C1/1/Put/x, > so we make B as our current store scanner. Since we are looking for > particular columns > C3 and C5, we perform the optimization StoreScanner.seekOrSkipToNextColumn > which > would run reseek for all store scanners. > For store A the following magic would happen in requestSeek: > 1. bloom filter check passesGeneralBloomFilter would set haveToSeek to > false because row 10 doesn't have C3 qualifier in store A. > 2. Since we don't have to seek we just create a fake row > 10:0/C3/OLDEST_TIMESTAMP/Maximum, an optimization that is quite important for > us and it commented with : > {noformat} > // Multi-column Bloom filter optimization. 
> // Create a fake key/value, so that this scanner only bubbles up to the > top > // of the KeyValueHeap in StoreScanner after we scanned this row/column in > // all other store files. The query matcher will then just skip this fake > // key/value and the store scanner will progress to the next column. This > // is obviously not a "real real" seek, but unlike the fake KV earlier in > // this method, we want this to be propagated to ScanQueryMatcher. > {noformat} > > For store B we would set it to fake 10:0/C3/createFirstOnRowColTS()/Maximum > to skip C3 entirely. > After that we start searching for qualifier C5 using seekOrSkipToNextColumn > which run first trySkipToNextColumn: > {noformat} > protected boolean
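The fake-key trick described above depends on HBase's cell ordering: within a row, cells sort by qualifier ascending and then timestamp descending, so a fake cell stamped OLDEST_TIMESTAMP sorts after every real version of that column and only bubbles up once the other store files have moved past it. A stdlib-only sketch of that ordering with a simplified cell model (not HBase's actual KeyValue comparator):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class FakeKeyOrdering {
    // Simplified cell: row and qualifier ascending, timestamp descending,
    // mirroring the relevant part of HBase's KeyValue ordering.
    record Cell(String row, String qualifier, long ts) {}

    static final Comparator<Cell> COMPARATOR =
        Comparator.comparing(Cell::row)
                  .thenComparing(Cell::qualifier)
                  .thenComparing(Comparator.comparingLong(Cell::ts).reversed());

    public static void main(String[] args) {
        long OLDEST_TIMESTAMP = Long.MIN_VALUE; // the value HBase uses
        List<Cell> heap = new ArrayList<>(List.of(
            new Cell("10", "C3", 1L),                // real version in store B
            new Cell("10", "C3", OLDEST_TIMESTAMP),  // fake key from store A
            new Cell("10", "C5", 1L)));
        heap.sort(COMPARATOR);
        // The fake key sorts after every real version of 10:C3 but before
        // 10:C5, so store A's scanner surfaces only once C3 is exhausted
        // in the other store files.
        heap.forEach(System.out::println);
    }
}
```

This is why the fake 10:0/C3/OLDEST_TIMESTAMP/Maximum key is safe to propagate to ScanQueryMatcher: it can never shadow a real cell, it merely parks the scanner at the end of that column.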
[jira] [Comment Edited] (HBASE-20070) website generation is failing
[ https://issues.apache.org/jira/browse/HBASE-20070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378019#comment-16378019 ] Sean Busbey edited comment on HBASE-20070 at 2/27/18 4:34 AM: -- I figured it out! HBASE-20032 changed it so that we're using {{maven-javadoc-plugin}} version 3.0.0 for reports, which is when the actual site goal started failing on jenkins. prior to that, whether mvn site worked or not would depend on the maven version you had, since we weren't specifying a version for the {{maven-javadoc-plugin}} in the reports section. version 3.0.0 of the javadoc plugin removed the configuration value we use to turn off the javadoc linter in MJAVADOC-474. They added the ability to directly configure the doclint setting via MJAVADOC-387 and made a new way to give custom javadoc parameters in MJAVADOC-475. Unfortunately, nothing in the maven output makes this obvious. Everything not in the site generation is still working because in all those cases we're still using {{maven-javadoc-plugin}} version 2.10.3. ugh. what a mess. I'll get a new patch that a) updates our use of maven-javadoc-plugin to match in non-report contexts, b) consistently disables the linter. was (Author: busbey): I figured it out! HBASE-20032 changed it so that we're using {{maven-javadoc-plugin}} version 3.0.0 for reports, which is when the actual site goal started failing on jenkins. prior to that, whether mvn site worked or not would depend on the maven version you had, since we weren't specifying a version for the {{maven-javadoc-plugin}}. version 3.0.0 of the javadoc plugin removed the configuration value we use to turn off the javadoc linter in MJAVADOC-474. They added the ability to directly configure the doclint setting via MJAVADOC-387 and made a new way to give custom javadoc parameters in MJAVADOC-475. Unfortunately, nothing in the maven output makes this obvious. 
Everything not in the site generation is still working because in all those cases we're still using {{maven-javadoc-plugin}} version 2.10.3. ugh. what a mess. I'll get a new patch that a) updates our use of maven-javadoc-plugin to match in non-report contexts, b) consistently disables the linter. > website generation is failing > - > > Key: HBASE-20070 > URL: https://issues.apache.org/jira/browse/HBASE-20070 > Project: HBase > Issue Type: Bug > Components: website >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Blocker > Attachments: HBASE-20070-misty.patch, HBASE-20070-misty.patch.1, > HBASE-20070-misty.patch.3, HBASE-20070.0.patch, HBASE-20070.1.patch, > HBASE-20070.2.patch, HBASE-20070.3.patch, > hbase-install-log-a29b3caf4dbc7b8833474ef5da5438f7f6907e00.txt > > > website generation has been failing since Feb 20th > {code} > Checking out files: 100% (68971/68971), done. > Usage: grep [OPTION]... PATTERN [FILE]... > Try 'grep --help' for more information. > PUSHED is 2 > is not yet mentioned in the hbase-site commit log. Assuming we don't have it > yet. 2 > Building HBase > Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; > support was removed in 8.0 > Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; > support was removed in 8.0 > Failure: mvn clean site > Build step 'Execute shell' marked build as failure > {code} > The status email says > {code} > Build status: Still Failing > The HBase website has not been updated to incorporate HBase commit > ${CURRENT_HBASE_COMMIT}. > {code} > Looking at the code where that grep happens, it looks like the env variable > CURRENT_HBASE_COMMIT isn't getting set. That comes from some git command. I'm > guessing the version of git changed on the build hosts and upended our > assumptions. > we should fix this to 1) rely on git's porcelain interface, and 2) fail as > soon as that git command fails -- This message was sent by Atlassian JIRA (v7.6.3#76005)
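Based on the MJAVADOC issues referenced in the comment above, the fix would plug the doclint setting into the reporting configuration through the plugin's first-class parameter rather than the removed passthrough. A sketch of what that pom fragment could look like; treat the exact configuration as an assumption until the actual patch lands:

```xml
<!-- reporting section: pin the plugin version and turn the linter off with the
     new first-class parameter (MJAVADOC-387) instead of the additionalparam
     passthrough that version 3.0.0 removed (MJAVADOC-474). -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <version>3.0.0</version>
  <configuration>
    <doclint>none</doclint>
  </configuration>
</plugin>
```

Mismatched plugin versions between the build and reporting sections are exactly how this kind of failure hides, which is why the comment also calls for matching the version in non-report contexts.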
[jira] [Commented] (HBASE-19863) java.lang.IllegalStateException: isDelete failed when SingleColumnValueFilter is used
[ https://issues.apache.org/jira/browse/HBASE-19863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378015#comment-16378015 ] Hadoop QA commented on HBASE-19863: ---
(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 21s | Docker mode activated. |
|| Prechecks ||
| 0 | findbugs | 0m 1s | Findbugs executables are not available. |
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|| branch-1 Compile Tests ||
| +1 | mvninstall | 2m 1s | branch-1 passed |
| +1 | compile | 0m 40s | branch-1 passed with JDK v1.8.0_162 |
| +1 | compile | 0m 41s | branch-1 passed with JDK v1.7.0_171 |
| +1 | checkstyle | 1m 28s | branch-1 passed |
| +1 | shadedjars | 4m 15s | branch has no errors when building our shaded downstream artifacts. |
| +1 | javadoc | 0m 29s | branch-1 passed with JDK v1.8.0_162 |
| +1 | javadoc | 0m 38s | branch-1 passed with JDK v1.7.0_171 |
|| Patch Compile Tests ||
| +1 | mvninstall | 1m 42s | the patch passed |
| +1 | compile | 0m 39s | the patch passed with JDK v1.8.0_162 |
| +1 | javac | 0m 39s | the patch passed |
| +1 | compile | 0m 42s | the patch passed with JDK v1.7.0_171 |
| +1 | javac | 0m 42s | the patch passed |
| +1 | checkstyle | 1m 31s | hbase-server: The patch generated 0 new + 384 unchanged - 6 fixed = 384 total (was 390) |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedjars | 2m 50s | patch has no errors when building our shaded downstream artifacts. |
| -1 | hadoopcheck | 3m 46s | The patch causes 44 errors with Hadoop v2.4.1. |
| -1 | hadoopcheck | 4m 43s | The patch causes 44 errors with Hadoop v2.5.2. |
| +1 | javadoc | 0m 33s | the patch passed with JDK v1.8.0_162 |
| +1 | javadoc | 0m 40s | the patch passed with JDK v1.7.0_171 |
|| Other Tests ||
| -1 | unit | 107m 12s | hbase-server in the patch failed. |
| +1 | asflicense | 0m 18s | The patch does not generate ASF License warnings. |
| | | 130m 45s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.replication.regionserver.TestGlobalThrottler |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:36a7029 |
| JIRA Issue | HBASE-19863 |
| JIRA Patch URL |
[jira] [Commented] (HBASE-20095) Redesign single instance pool in CleanerChore
[ https://issues.apache.org/jira/browse/HBASE-20095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378011#comment-16378011 ] stack commented on HBASE-20095: --- Thank you [~reidchan]. You are a gentleman. It might be tricky to do looking at what is needed. > Redesign single instance pool in CleanerChore > - > > Key: HBASE-20095 > URL: https://issues.apache.org/jira/browse/HBASE-20095 > Project: HBase > Issue Type: Improvement >Reporter: Reid Chan >Priority: Critical > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20069) fix existing findbugs errors in hbase-server
[ https://issues.apache.org/jira/browse/HBASE-20069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16378008#comment-16378008 ] stack commented on HBASE-20069: --- Thanks [~chia7712]. Let me fix the newly reported findbugs while I'm at it. > fix existing findbugs errors in hbase-server > > > Key: HBASE-20069 > URL: https://issues.apache.org/jira/browse/HBASE-20069 > Project: HBase > Issue Type: Sub-task > Components: findbugs >Reporter: Sean Busbey >Assignee: stack >Priority: Critical > Fix For: 2.0.0-beta-2 > > Attachments: > 0001-HBASE-20069-fix-existing-findbugs-errors-addendum.patch, > 0002-HBASE-20069-fix-existing-findbugs-errors-addendum.patch, FindBugs > Report.htm, HBASE-20069.branch-2.001.patch, HBASE-20069.branch-2.002.patch, > HBASE-20069.branch-2.003.patch, HBASE-20069.branch-2.004.patch > > > now that findbugs is running on precommit we have some cleanup to do. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20069) fix existing findbugs errors in hbase-server
[ https://issues.apache.org/jira/browse/HBASE-20069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-20069: -- Attachment: 0002-HBASE-20069-fix-existing-findbugs-errors-addendum.patch > fix existing findbugs errors in hbase-server > > > Key: HBASE-20069 > URL: https://issues.apache.org/jira/browse/HBASE-20069 > Project: HBase > Issue Type: Sub-task > Components: findbugs >Reporter: Sean Busbey >Assignee: stack >Priority: Critical > Fix For: 2.0.0-beta-2 > > Attachments: > 0001-HBASE-20069-fix-existing-findbugs-errors-addendum.patch, > 0002-HBASE-20069-fix-existing-findbugs-errors-addendum.patch, FindBugs > Report.htm, HBASE-20069.branch-2.001.patch, HBASE-20069.branch-2.002.patch, > HBASE-20069.branch-2.003.patch, HBASE-20069.branch-2.004.patch > > > now that findbugs is running on precommit we have some cleanup to do. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20069) fix existing findbugs errors in hbase-server
[ https://issues.apache.org/jira/browse/HBASE-20069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377997#comment-16377997 ] Hudson commented on HBASE-20069: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4655 (See [https://builds.apache.org/job/HBase-Trunk_matrix/4655/]) HBASE-20069 fix existing findbugs errors in hbase-server (stack: rev b11e506664614c243c08949c256430d4dd13ba6c) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/util/compaction/MajorCompactor.java * (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/EncodedDataBlock.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/SplitTableRegionProcedure.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AsyncFSWAL.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java * (edit) hbase-common/src/test/java/org/apache/hadoop/hbase/nio/TestMultiByteBuff.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java * (edit) hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/StateMachineProcedure.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java * (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/nio/MultiByteBuff.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java * (edit) hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/ProcedureExecutor.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/CleanerChore.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java > fix existing findbugs errors in hbase-server > > > Key: HBASE-20069 > URL: 
https://issues.apache.org/jira/browse/HBASE-20069 > Project: HBase > Issue Type: Sub-task > Components: findbugs >Reporter: Sean Busbey >Assignee: stack >Priority: Critical > Fix For: 2.0.0-beta-2 > > Attachments: > 0001-HBASE-20069-fix-existing-findbugs-errors-addendum.patch, FindBugs > Report.htm, HBASE-20069.branch-2.001.patch, HBASE-20069.branch-2.002.patch, > HBASE-20069.branch-2.003.patch, HBASE-20069.branch-2.004.patch > > > now that findbugs is running on precommit we have some cleanup to do. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20074) [FindBugs] Same code on both branches in CompactingMemStore#initMemStoreCompactor
[ https://issues.apache.org/jira/browse/HBASE-20074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377996#comment-16377996 ] Hudson commented on HBASE-20074: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4655 (See [https://builds.apache.org/job/HBase-Trunk_matrix/4655/]) for creating patch HBASE-20074-V01.patch (stack: rev 73028d5bd9f85655b284654579ddcbbca31e41e8) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactingMemStore.java > [FindBugs] Same code on both branches in > CompactingMemStore#initMemStoreCompactor > - > > Key: HBASE-20074 > URL: https://issues.apache.org/jira/browse/HBASE-20074 > Project: HBase > Issue Type: Bug > Components: findbugs >Reporter: stack >Assignee: Gali Sheffi >Priority: Major > Fix For: 2.0.0-beta-2 > > Attachments: HBASE-20074-V01.patch > > > [~galish] Our findbugs checking was disabled for a few weeks and we just > turned it on again. It found a good one. Mind fixing it please? Meantime I've > undone the if clause to get a factor dependent on index type. Thanks. > Code Warning > DB > org.apache.hadoop.hbase.regionserver.CompactingMemStore.initInmemoryFlushSize(Configuration) > uses the same code for two branches > Bug type DB_DUPLICATE_BRANCHES (click for details) > In class org.apache.hadoop.hbase.regionserver.CompactingMemStore > In method > org.apache.hadoop.hbase.regionserver.CompactingMemStore.initInmemoryFlushSize(Configuration) > At CompactingMemStore.java:[line 140] > At CompactingMemStore.java:[line 143] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
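The DB_DUPLICATE_BRANCHES warning quoted above fires when an if/else computes the same thing on both paths, making the condition dead weight. A self-contained illustration of the pattern; the 0.25 value is hypothetical, the real factor lives in CompactingMemStore#initInmemoryFlushSize:

```java
public class DuplicateBranchDemo {
    // Illustrates the DB_DUPLICATE_BRANCHES pattern FindBugs reported:
    // both branches return the same value, so the condition has no effect.
    static double factor(boolean chunkMapIndex) {
        if (chunkMapIndex) {
            return 0.25; // hypothetical value, not the real constant
        } else {
            return 0.25; // identical branch -- this is what the warning flags
        }
    }

    public static void main(String[] args) {
        assert factor(true) == factor(false); // the branch has no effect
        System.out.println("both branches identical: " + factor(true));
    }
}
```

The fix is either to collapse the if/else into a single statement or, as the comment suggests, to actually make the factor depend on the index type.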
[jira] [Updated] (HBASE-20091) Fix for 'build.plugins.plugin.version' for org.codehaus.mojo: is missing.
[ https://issues.apache.org/jira/browse/HBASE-20091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-20091: --- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Thanks for the patch, Artem. Thanks for the review, Andrew. > Fix for 'build.plugins.plugin.version' for org.codehaus.mojo: is missing. > - > > Key: HBASE-20091 > URL: https://issues.apache.org/jira/browse/HBASE-20091 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.1 > Environment: Apache Maven 3.5.2 > (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T03:58:13-04:00) > Maven home: /usr/local/apache-maven-3.5.2 > Java version: 1.8.0_162, vendor: Oracle Corporation > Java home: > /Library/Java/JavaVirtualMachines/jdk1.8.0_162.jdk/Contents/Home/jre > Default locale: en_US, platform encoding: UTF-8 > OS name: "mac os x", version: "10.13.3", arch: "x86_64", family: "mac" >Reporter: Artem Ervits >Assignee: Artem Ervits >Priority: Trivial > Fix For: 1.4.3 > > Attachments: HBASE-20091.branch-1.patch, branch-1.4.branch-1.4.patch > > > receiving warning > {noformat} > [WARNING] Some problems were encountered while building the effective model > for org.apache.hbase:hbase-shaded-check-invariants:pom:1.4.2 > [WARNING] 'build.plugins.plugin.version' for org.codehaus.mojo: is missing. @ > org.apache.hbase:hbase-shaded-check-invariants:[unknown-version], > /tmp/hbase-1.4.2/hbase-shaded/hbase-shaded-check-invariants/pom.xml, line > 161, column 15 > [WARNING] > [WARNING] It is highly recommended to fix these problems because they > threaten the stability of your build. > [WARNING] > [WARNING] For this reason, future Maven versions might no longer support > building such malformed projects.{noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20091) Fix for 'build.plugins.plugin.version' for org.codehaus.mojo: is missing.
[ https://issues.apache.org/jira/browse/HBASE-20091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377958#comment-16377958 ] Andrew Purtell commented on HBASE-20091: +1 > Fix for 'build.plugins.plugin.version' for org.codehaus.mojo: is missing. > - > > Key: HBASE-20091 > URL: https://issues.apache.org/jira/browse/HBASE-20091 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.1 > Environment: Apache Maven 3.5.2 > (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T03:58:13-04:00) > Maven home: /usr/local/apache-maven-3.5.2 > Java version: 1.8.0_162, vendor: Oracle Corporation > Java home: > /Library/Java/JavaVirtualMachines/jdk1.8.0_162.jdk/Contents/Home/jre > Default locale: en_US, platform encoding: UTF-8 > OS name: "mac os x", version: "10.13.3", arch: "x86_64", family: "mac" >Reporter: Artem Ervits >Assignee: Artem Ervits >Priority: Trivial > Fix For: 1.4.3 > > Attachments: HBASE-20091.branch-1.patch, branch-1.4.branch-1.4.patch > > > receiving warning > {noformat} > [WARNING] Some problems were encountered while building the effective model > for org.apache.hbase:hbase-shaded-check-invariants:pom:1.4.2 > [WARNING] 'build.plugins.plugin.version' for org.codehaus.mojo: is missing. @ > org.apache.hbase:hbase-shaded-check-invariants:[unknown-version], > /tmp/hbase-1.4.2/hbase-shaded/hbase-shaded-check-invariants/pom.xml, line > 161, column 15 > [WARNING] > [WARNING] It is highly recommended to fix these problems because they > threaten the stability of your build. > [WARNING] > [WARNING] For this reason, future Maven versions might no longer support > building such malformed projects.{noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HBASE-20096) Missing version warning for exec-maven-plugin in hbase-shaded-check-invariants
[ https://issues.apache.org/jira/browse/HBASE-20096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell resolved HBASE-20096. Resolution: Duplicate Fix Version/s: (was: 1.4.3) (was: 1.5.0) Dup of HBASE-20091 > Missing version warning for exec-maven-plugin in hbase-shaded-check-invariants > -- > > Key: HBASE-20096 > URL: https://issues.apache.org/jira/browse/HBASE-20096 > Project: HBase > Issue Type: Bug > Components: build >Reporter: Andrew Purtell >Priority: Minor > > Reported by [~dbist13]: > Affects branch-1 and branch-1.4 > {noformat} > [WARNING] Some problems were encountered while building the effective model > for org.apache.hbase:hbase-shaded-check-invariants:pom:1.5.0-SNAPSHOT > [WARNING] 'build.plugins.plugin.version' for > org.codehaus.mojo:exec-maven-plugin is missing. @ > org.apache.hbase:hbase-shaded-check-invariants:[unknown-version], > /Users/apurtell/src/hbase/hbase-shaded/hbase-shaded-check-invariants/pom.xml, > line 161, column 15 > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20001) cleanIfNoMetaEntry() uses encoded instead of region name to lookup region
[ https://issues.apache.org/jira/browse/HBASE-20001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377944#comment-16377944 ] Thiruvel Thirumoolan commented on HBASE-20001: -- Uploaded 1.3 and 1.2 branch patches, the 1.4 patch didn't apply cleanly, but there were only minor changes. The new tests passed locally, will wait for pre-commit result. > cleanIfNoMetaEntry() uses encoded instead of region name to lookup region > - > > Key: HBASE-20001 > URL: https://issues.apache.org/jira/browse/HBASE-20001 > Project: HBase > Issue Type: Bug >Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.1.7 >Reporter: Francis Liu >Assignee: Thiruvel Thirumoolan >Priority: Major > Fix For: 1.3.2, 1.5.0, 1.2.7, 1.4.3 > > Attachments: HBASE-20001.branch-1.2.001.patch, > HBASE-20001.branch-1.3.001.patch, HBASE-20001.branch-1.4.001.patch, > HBASE-20001.branch-1.4.002.patch, HBASE-20001.branch-1.4.003.patch, > HBASE-20001.branch-1.4.004.patch, HBASE-20001.branch-1.4.005.patch, > HBASE-20001.branch-1.4.006.patch > > > In RegionStates.cleanIfNoMetaEntry() > {{if (MetaTableAccessor.getRegion(server.getConnection(), > hri.getEncodedNameAsBytes()) == null) {}} > {{regionOffline(hri);}} > {{FSUtils.deleteRegionDir(server.getConfiguration(), hri);}} > } > But api expects regionname > {{public static Pair<HRegionInfo, ServerName> getRegion(Connection > connection, byte [] regionName)}} > So we might end up cleaning good regions. > > ADDENDUM: > The scenario mentioned occurs when zkless assignment is used. With zk-based > assignment without the patch what could occur is the daughter regions are > offlined and have no hdfs directory but have entries in meta. The daughter > meta entries will prolly be picked up by the client causing NSREs. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
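The failure mode in the issue description above can be shown with a toy lookup: meta is keyed by the full region name, so probing it with only the encoded-name suffix always misses. All names below are made up for illustration, not real region names:

```java
import java.util.Map;

public class RegionLookupDemo {
    // Full region name: "<table>,<startKey>,<regionId>.<encodedName>."
    // The encoded name is only the hash suffix, so a lookup keyed by the
    // full name can never match it. (Values here are hypothetical.)
    static final String FULL_NAME =
            "t1,,1519700000000.d41d8cd98f00b204e9800998ecf8427e.";
    static final String ENCODED_NAME = "d41d8cd98f00b204e9800998ecf8427e";

    static boolean hasMetaEntry(Map<String, String> meta, String regionKey) {
        return meta.get(regionKey) != null;
    }

    public static void main(String[] args) {
        Map<String, String> meta = Map.of(FULL_NAME, "server1");
        // The bug: probing with the encoded name misses, so a caller like
        // cleanIfNoMetaEntry() would wrongly offline and delete a region
        // that *does* have a meta row.
        assert !hasMetaEntry(meta, ENCODED_NAME);
        assert hasMetaEntry(meta, FULL_NAME);
        System.out.println("encoded-name lookup misses; full-name lookup hits");
    }
}
```

The patch's fix direction follows directly: pass the full region name (hri.getRegionName()) rather than hri.getEncodedNameAsBytes() to the lookup.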
[jira] [Updated] (HBASE-20001) cleanIfNoMetaEntry() uses encoded instead of region name to lookup region
[ https://issues.apache.org/jira/browse/HBASE-20001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thiruvel Thirumoolan updated HBASE-20001: - Attachment: HBASE-20001.branch-1.2.001.patch > cleanIfNoMetaEntry() uses encoded instead of region name to lookup region > - > > Key: HBASE-20001 > URL: https://issues.apache.org/jira/browse/HBASE-20001 > Project: HBase > Issue Type: Bug >Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.1.7 >Reporter: Francis Liu >Assignee: Thiruvel Thirumoolan >Priority: Major > Fix For: 1.3.2, 1.5.0, 1.2.7, 1.4.3 > > Attachments: HBASE-20001.branch-1.2.001.patch, > HBASE-20001.branch-1.3.001.patch, HBASE-20001.branch-1.4.001.patch, > HBASE-20001.branch-1.4.002.patch, HBASE-20001.branch-1.4.003.patch, > HBASE-20001.branch-1.4.004.patch, HBASE-20001.branch-1.4.005.patch, > HBASE-20001.branch-1.4.006.patch > > > In RegionStates.cleanIfNoMetaEntry() > {{if (MetaTableAccessor.getRegion(server.getConnection(), > hri.getEncodedNameAsBytes()) == null) {}} > {{regionOffline(hri);}} > {{FSUtils.deleteRegionDir(server.getConfiguration(), hri);}} > } > But api expects regionname > {{public static Pair<HRegionInfo, ServerName> getRegion(Connection > connection, byte [] regionName)}} > So we might end up cleaning good regions. > > ADDENDUM: > The scenario mentioned occurs when zkless assignment is used. With zk-based > assignment without the patch what could occur is the daughter regions are > offlined and have no hdfs directory but have entries in meta. The daughter > meta entries will prolly be picked up by the client causing NSREs. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20001) cleanIfNoMetaEntry() uses encoded instead of region name to lookup region
[ https://issues.apache.org/jira/browse/HBASE-20001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thiruvel Thirumoolan updated HBASE-20001: - Attachment: (was: HBASE-20001.branch-1.2.001.patch) > cleanIfNoMetaEntry() uses encoded instead of region name to lookup region > - > > Key: HBASE-20001 > URL: https://issues.apache.org/jira/browse/HBASE-20001 > Project: HBase > Issue Type: Bug >Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.1.7 >Reporter: Francis Liu >Assignee: Thiruvel Thirumoolan >Priority: Major > Fix For: 1.3.2, 1.5.0, 1.2.7, 1.4.3 > > Attachments: HBASE-20001.branch-1.3.001.patch, > HBASE-20001.branch-1.4.001.patch, HBASE-20001.branch-1.4.002.patch, > HBASE-20001.branch-1.4.003.patch, HBASE-20001.branch-1.4.004.patch, > HBASE-20001.branch-1.4.005.patch, HBASE-20001.branch-1.4.006.patch > > > In RegionStates.cleanIfNoMetaEntry() > {{if (MetaTableAccessor.getRegion(server.getConnection(), > hri.getEncodedNameAsBytes()) == null) {}} > {{regionOffline(hri);}} > {{FSUtils.deleteRegionDir(server.getConfiguration(), hri);}} > } > But api expects regionname > {{public static Pair<HRegionInfo, ServerName> getRegion(Connection > connection, byte [] regionName)}} > So we might end up cleaning good regions. > > ADDENDUM: > The scenario mentioned occurs when zkless assignment is used. With zk-based > assignment without the patch what could occur is the daughter regions are > offlined and have no hdfs directory but have entries in meta. The daughter > meta entries will prolly be picked up by the client causing NSREs. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20001) cleanIfNoMetaEntry() uses encoded instead of region name to lookup region
[ https://issues.apache.org/jira/browse/HBASE-20001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thiruvel Thirumoolan updated HBASE-20001: - Attachment: HBASE-20001.branch-1.2.001.patch > cleanIfNoMetaEntry() uses encoded instead of region name to lookup region > - > > Key: HBASE-20001 > URL: https://issues.apache.org/jira/browse/HBASE-20001 > Project: HBase > Issue Type: Bug >Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.1.7 >Reporter: Francis Liu >Assignee: Thiruvel Thirumoolan >Priority: Major > Fix For: 1.3.2, 1.5.0, 1.2.7, 1.4.3 > > Attachments: HBASE-20001.branch-1.2.001.patch, > HBASE-20001.branch-1.3.001.patch, HBASE-20001.branch-1.4.001.patch, > HBASE-20001.branch-1.4.002.patch, HBASE-20001.branch-1.4.003.patch, > HBASE-20001.branch-1.4.004.patch, HBASE-20001.branch-1.4.005.patch, > HBASE-20001.branch-1.4.006.patch > > > In RegionStates.cleanIfNoMetaEntry() > {{if (MetaTableAccessor.getRegion(server.getConnection(), > hri.getEncodedNameAsBytes()) == null) {}} > {{regionOffline(hri);}} > {{FSUtils.deleteRegionDir(server.getConfiguration(), hri);}} > } > But api expects regionname > {{public static Pair<HRegionInfo, ServerName> getRegion(Connection > connection, byte [] regionName)}} > So we might end up cleaning good regions. > > ADDENDUM: > The scenario mentioned occurs when zkless assignment is used. With zk-based > assignment without the patch what could occur is the daughter regions are > offlined and have no hdfs directory but have entries in meta. The daughter > meta entries will prolly be picked up by the client causing NSREs. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-20096) Missing version warning for exec-maven-plugin in hbase-shaded-check-invariants
Andrew Purtell created HBASE-20096: -- Summary: Missing version warning for exec-maven-plugin in hbase-shaded-check-invariants Key: HBASE-20096 URL: https://issues.apache.org/jira/browse/HBASE-20096 Project: HBase Issue Type: Bug Components: build Reporter: Andrew Purtell Fix For: 1.5.0, 1.4.3 Reported by [~dbist13]: Affects branch-1 and branch-1.4 {noformat} [WARNING] Some problems were encountered while building the effective model for org.apache.hbase:hbase-shaded-check-invariants:pom:1.5.0-SNAPSHOT [WARNING] 'build.plugins.plugin.version' for org.codehaus.mojo:exec-maven-plugin is missing. @ org.apache.hbase:hbase-shaded-check-invariants:[unknown-version], /Users/apurtell/src/hbase/hbase-shaded/hbase-shaded-check-invariants/pom.xml, line 161, column 15 {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20001) cleanIfNoMetaEntry() uses encoded instead of region name to lookup region
[ https://issues.apache.org/jira/browse/HBASE-20001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377932#comment-16377932 ] Ted Yu commented on HBASE-20001: 20001.branch-1.4.006.patch on branch-1.3: Hunk #6 succeeded at 750 (offset -41 lines). 1 out of 6 hunks FAILED -- saving rejects to file hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStates.java.rej > cleanIfNoMetaEntry() uses encoded instead of region name to lookup region > - > > Key: HBASE-20001 > URL: https://issues.apache.org/jira/browse/HBASE-20001 > Project: HBase > Issue Type: Bug >Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.1.7 >Reporter: Francis Liu >Assignee: Thiruvel Thirumoolan >Priority: Major > Fix For: 1.3.2, 1.5.0, 1.2.7, 1.4.3 > > Attachments: HBASE-20001.branch-1.3.001.patch, > HBASE-20001.branch-1.4.001.patch, HBASE-20001.branch-1.4.002.patch, > HBASE-20001.branch-1.4.003.patch, HBASE-20001.branch-1.4.004.patch, > HBASE-20001.branch-1.4.005.patch, HBASE-20001.branch-1.4.006.patch > > > In RegionStates.cleanIfNoMetaEntry() > {{if (MetaTableAccessor.getRegion(server.getConnection(), > hri.getEncodedNameAsBytes()) == null) {}} > {{regionOffline(hri);}} > {{FSUtils.deleteRegionDir(server.getConfiguration(), hri);}} > } > But api expects regionname > {{public static Pair<HRegionInfo, ServerName> getRegion(Connection > connection, byte [] regionName)}} > So we might end up cleaning good regions. > > ADDENDUM: > The scenario mentioned occurs when zkless assignment is used. With zk-based > assignment without the patch what could occur is the daughter regions are > offlined and have no hdfs directory but have entries in meta. The daughter > meta entries will prolly be picked up by the client causing NSREs. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20088) Update copyright notices to year 2018
[ https://issues.apache.org/jira/browse/HBASE-20088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377930#comment-16377930 ] Andrew Purtell commented on HBASE-20088: +1 grepped around a bit and looks like NOTICE is it > Update copyright notices to year 2018 > - > > Key: HBASE-20088 > URL: https://issues.apache.org/jira/browse/HBASE-20088 > Project: HBase > Issue Type: Task >Reporter: Andrew Purtell >Assignee: Josh Elser >Priority: Minor > Fix For: 2.0.0, 1.5.0, 1.4.3 > > Attachments: HBASE-20088.001.patch > > > NOTICE file, UIs, etc. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20001) cleanIfNoMetaEntry() uses encoded instead of region name to lookup region
[ https://issues.apache.org/jira/browse/HBASE-20001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377928#comment-16377928 ] Andrew Purtell commented on HBASE-20001: Doesn't the 1.4 commit apply cleanly against 1.3? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (HBASE-18309) Support multi threads in CleanerChore
[ https://issues.apache.org/jira/browse/HBASE-18309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377926#comment-16377926 ] Reid Chan edited comment on HBASE-18309 at 2/27/18 2:57 AM: FYI, [~stack] and whoever has the same interest, HBASE-20095 was (Author: reidchan): FYI, [~stack] and who has the same interest, HBASE-20095 > Support multi threads in CleanerChore > - > > Key: HBASE-18309 > URL: https://issues.apache.org/jira/browse/HBASE-18309 > Project: HBase > Issue Type: Improvement >Reporter: binlijin >Assignee: Reid Chan >Priority: Major > Fix For: 3.0.0, 2.0.0-beta-1 > > Attachments: HBASE-18309.addendum.patch, > HBASE-18309.master.001.patch, HBASE-18309.master.002.patch, > HBASE-18309.master.004.patch, HBASE-18309.master.005.patch, > HBASE-18309.master.006.patch, HBASE-18309.master.007.patch, > HBASE-18309.master.008.patch, HBASE-18309.master.009.patch, > HBASE-18309.master.010.patch, HBASE-18309.master.011.patch, > HBASE-18309.master.012.patch, space_consumption_in_archive.png > > > There is only one thread in LogCleaner to clean oldWALs, and in our big > cluster we found this was not enough. The number of files under oldWALs reached > the max-directory-items limit of HDFS and caused region server crashes, so we > used multiple threads for LogCleaner and the crashes no longer happened. > What's more, currently there's only one thread iterating the archive > directory, and we could use multiple threads cleaning sub directories in > parallel to speed it up. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
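The parallel cleaning described above can be sketched with a plain ExecutorService: submit each first-level subdirectory of the archive to a pool and delete them concurrently. This is an illustrative sketch, not the actual CleanerChore implementation; the pool size and directory layout are arbitrary:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of the HBASE-18309 idea: instead of one thread walking the whole
// archive, hand each first-level subdirectory to a pool and clean in parallel.
public class ParallelCleanerSketch {
    static void cleanArchive(Path archiveDir, int threads)
            throws IOException, InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try (DirectoryStream<Path> subs = Files.newDirectoryStream(archiveDir)) {
            for (Path sub : subs) {
                pool.submit(() -> deleteRecursively(sub));
            }
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }

    static void deleteRecursively(Path p) {
        try {
            if (Files.isDirectory(p)) {
                try (DirectoryStream<Path> children = Files.newDirectoryStream(p)) {
                    for (Path c : children) deleteRecursively(c);
                }
            }
            Files.deleteIfExists(p);
        } catch (IOException e) {
            // A real cleaner would log here and retry on the next chore run.
        }
    }

    public static void main(String[] args) throws Exception {
        // Build a tiny fake "oldWALs" tree, then clean it with two threads.
        Path root = Files.createTempDirectory("oldWALs");
        for (int i = 0; i < 4; i++) {
            Path sub = Files.createDirectory(root.resolve("rs" + i));
            Files.createFile(sub.resolve("wal.0"));
        }
        cleanArchive(root, 2);
        try (DirectoryStream<Path> left = Files.newDirectoryStream(root)) {
            if (left.iterator().hasNext()) throw new AssertionError("not cleaned");
        }
        Files.delete(root);
    }
}
```

The per-subdirectory split matters because the max-directory-items pressure mentioned above comes from many sibling entries; cleaning siblings in parallel attacks exactly that bottleneck.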
[jira] [Comment Edited] (HBASE-18309) Support multi threads in CleanerChore
[ https://issues.apache.org/jira/browse/HBASE-18309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377926#comment-16377926 ] Reid Chan edited comment on HBASE-18309 at 2/27/18 2:57 AM: FYI, [~stack] and who has the same interest, HBASE-20095 was (Author: reidchan): FYI, [~stack], HBASE-20095 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20069) fix existing findbugs errors in hbase-server
[ https://issues.apache.org/jira/browse/HBASE-20069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377925#comment-16377925 ] Chia-Ping Tsai commented on HBASE-20069: +1 to addendum > fix existing findbugs errors in hbase-server > > > Key: HBASE-20069 > URL: https://issues.apache.org/jira/browse/HBASE-20069 > Project: HBase > Issue Type: Sub-task > Components: findbugs >Reporter: Sean Busbey >Assignee: stack >Priority: Critical > Fix For: 2.0.0-beta-2 > > Attachments: > 0001-HBASE-20069-fix-existing-findbugs-errors-addendum.patch, FindBugs > Report.htm, HBASE-20069.branch-2.001.patch, HBASE-20069.branch-2.002.patch, > HBASE-20069.branch-2.003.patch, HBASE-20069.branch-2.004.patch > > > now that findbugs is running on precommit we have some cleanup to do. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-18309) Support multi threads in CleanerChore
[ https://issues.apache.org/jira/browse/HBASE-18309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377926#comment-16377926 ] Reid Chan commented on HBASE-18309: --- FYI, [~stack], HBASE-20095 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20089) make_rc.sh should name SHA-512 checksum files with the extension .sha512
[ https://issues.apache.org/jira/browse/HBASE-20089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377924#comment-16377924 ] Andrew Purtell commented on HBASE-20089: +1 > make_rc.sh should name SHA-512 checksum files with the extension .sha512 > > > Key: HBASE-20089 > URL: https://issues.apache.org/jira/browse/HBASE-20089 > Project: HBase > Issue Type: Task >Reporter: Andrew Purtell >Assignee: Josh Elser >Priority: Minor > Fix For: 2.0.0, 1.3.2, 1.5.0, 1.4.3 > > Attachments: HBASE-20089.001.patch > > > From [~elserj] > {quote} > we need to update the checksum naming convention for SHA*. Per [1], .sha > filenames should only contain SHA1, and .sha512 file names should be used for > SHA512 xsum. I believe this means we just need to modify make_rc.sh to put > the xsum into .sha512 instead of .sha. We do not need to distribute SHA1 > xsums and, afaik, there is little cryptographic value to this. > [1] http://www.apache.org/dev/release-distribution.html#sigs-and-sums > {quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
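The naming rule under discussion — the SHA-512 digest goes into a `.sha512` file, with `.sha` reserved for SHA-1 — can be shown with a small self-contained checksum writer. This is illustrative only, not the make_rc.sh change itself; the checksum-file format is an assumption:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch of the HBASE-20089 convention: write the SHA-512 checksum of a
// release artifact to <artifact>.sha512 (never <artifact>.sha, which ASF
// policy reserves for SHA-1).
public class Sha512Naming {
    static Path writeSha512(Path artifact) throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-512");
        byte[] digest = md.digest(Files.readAllBytes(artifact));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        // The extension carries the algorithm: .sha512, not .sha.
        Path out = artifact.resolveSibling(artifact.getFileName() + ".sha512");
        Files.write(out, (hex + "  " + artifact.getFileName() + "\n")
                .getBytes(StandardCharsets.UTF_8));
        return out;
    }

    public static void main(String[] args) throws Exception {
        Path tarball = Files.createTempFile("hbase-1.4.3-bin", ".tar.gz");
        Files.write(tarball, "fake release bits".getBytes(StandardCharsets.UTF_8));
        Path sum = writeSha512(tarball);
        if (!sum.getFileName().toString().endsWith(".sha512"))
            throw new AssertionError("wrong extension");
        // A SHA-512 hex digest is 128 characters (64 bytes).
        String line = Files.readAllLines(sum).get(0);
        if (line.split("  ")[0].length() != 128) throw new AssertionError();
        Files.delete(sum);
        Files.delete(tarball);
    }
}
```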
[jira] [Created] (HBASE-20095) Redesign single instance pool in CleanerChore
Reid Chan created HBASE-20095: - Summary: Redesign single instance pool in CleanerChore Key: HBASE-20095 URL: https://issues.apache.org/jira/browse/HBASE-20095 Project: HBase Issue Type: Improvement Reporter: Reid Chan -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20087) Periodically attempt redeploy of regions in FAILED_OPEN state
[ https://issues.apache.org/jira/browse/HBASE-20087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-20087: --- Attachment: HBASE-20087-branch-1.patch > Periodically attempt redeploy of regions in FAILED_OPEN state > - > > Key: HBASE-20087 > URL: https://issues.apache.org/jira/browse/HBASE-20087 > Project: HBase > Issue Type: Improvement > Components: master, Region Assignment >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Major > Fix For: 2.0.0, 1.5.0 > > Attachments: > 0001-W-4723090-Port-the-RIT-FAILED_OPEN-state-hack-from-R.patch, > HBASE-20087-branch-1.patch, HBASE-20087-branch-1.patch > > > Because RSGroups can cause permanent RIT with regions in FAILED_OPEN state, > we added logic to the master portion of the RSGroups extension to enumerate > RITs and retry assignment of regions in FAILED_OPEN state. > However, this strategy can be applied generally to reduce the need for operator > involvement in cluster operations. Now an operator has to manually resolve > FAILED_OPEN assignments, but there is little risk in automatically retrying > them after a while. If the reason the assignment failed has not cleared, the > assignment will just fail again. Should the reason the assignment failed be > resolved, then operators don't have to do more in order for the cluster to > fully heal. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
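The retry policy described above — periodically re-attempt every FAILED_OPEN region, where a retry is harmless if the underlying cause persists — can be sketched with a ScheduledExecutorService. The state names and interval are illustrative assumptions, not taken from the patch:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of the HBASE-20087 policy: a chore periodically retries regions
// stuck in FAILED_OPEN. If the cause is gone the retry heals the region;
// otherwise the region simply lands back in FAILED_OPEN, so the retry is safe.
public class FailedOpenRetrySketch {
    enum State { OPEN, FAILED_OPEN }
    final Map<String, State> regions = new ConcurrentHashMap<>();

    // Stand-in for "enumerate RITs and retry assignment": causeResolved models
    // whether whatever broke the original open has since cleared.
    void retryFailedOpens(boolean causeResolved) {
        regions.replaceAll((region, state) ->
            state == State.FAILED_OPEN && causeResolved ? State.OPEN : state);
    }

    public static void main(String[] args) throws Exception {
        FailedOpenRetrySketch m = new FailedOpenRetrySketch();
        m.regions.put("r1", State.FAILED_OPEN);
        m.regions.put("r2", State.OPEN);

        // Cause still present: the retry fails again, nothing is made worse.
        m.retryFailedOpens(false);
        if (m.regions.get("r1") != State.FAILED_OPEN) throw new AssertionError();

        // Schedule the chore; once the cause clears, the next run heals r1
        // with no operator involvement.
        ScheduledExecutorService chore = Executors.newSingleThreadScheduledExecutor();
        chore.schedule(() -> m.retryFailedOpens(true), 50, TimeUnit.MILLISECONDS);
        chore.shutdown();
        chore.awaitTermination(5, TimeUnit.SECONDS);
        if (m.regions.get("r1") != State.OPEN) throw new AssertionError();
    }
}
```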
[jira] [Commented] (HBASE-20069) fix existing findbugs errors in hbase-server
[ https://issues.apache.org/jira/browse/HBASE-20069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377918#comment-16377918 ] Hadoop QA commented on HBASE-20069: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 17s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 57s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 20s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 4s{color} | {color:red} hbase-server in master has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 12s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 6m 9s{color} | {color:red} The patch causes 10 errors with Hadoop v2.6.5. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 8m 6s{color} | {color:red} The patch causes 10 errors with Hadoop v2.7.4. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 10m 20s{color} | {color:red} The patch causes 10 errors with Hadoop v3.0.0. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green}101m 22s{color} | {color:green} hbase-server in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}133m 43s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 | | JIRA Issue | HBASE-20069 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12912162/0001-HBASE-20069-fix-existing-findbugs-errors-addendum.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 0d9ab158a8ad 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / b11e506664 | | maven | version: Apache Maven 3.5.2 (138edd61fd100ec658bfa2d307c43b76940a5d7d;
[jira] [Updated] (HBASE-20001) cleanIfNoMetaEntry() uses encoded instead of region name to lookup region
[ https://issues.apache.org/jira/browse/HBASE-20001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thiruvel Thirumoolan updated HBASE-20001: - Attachment: HBASE-20001.branch-1.3.001.patch -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-20094) Import$CellWritableComparable should define equals()
Ted Yu created HBASE-20094: -- Summary: Import$CellWritableComparable should define equals() Key: HBASE-20094 URL: https://issues.apache.org/jira/browse/HBASE-20094 Project: HBase Issue Type: Sub-task Reporter: Ted Yu Bug type EQ_COMPARETO_USE_OBJECT_EQUALS {code} In class org.apache.hadoop.hbase.mapreduce.Import$CellWritableComparable In method org.apache.hadoop.hbase.mapreduce.Import$CellWritableComparable.compareTo(Import$CellWritableComparable) At Import.java:[line 149] {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
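EQ_COMPARETO_USE_OBJECT_EQUALS fires when a class defines compareTo() but inherits identity equals(), so compareTo(x) == 0 can disagree with equals(x). A minimal self-contained sketch of the consistent pattern; KeySketch is illustrative and is not the Import$CellWritableComparable code:

```java
// Sketch of the fix pattern for EQ_COMPARETO_USE_OBJECT_EQUALS: a Comparable
// overrides equals() (and hashCode()) so that compareTo(x) == 0 agrees with
// equals(x), as sorted collections expect.
public class KeySketch implements Comparable<KeySketch> {
    final byte[] row;

    KeySketch(byte[] row) { this.row = row; }

    @Override public int compareTo(KeySketch other) {
        // Unsigned lexicographic byte comparison, shortest prefix first.
        int n = Math.min(row.length, other.row.length);
        for (int i = 0; i < n; i++) {
            int d = (row[i] & 0xff) - (other.row[i] & 0xff);
            if (d != 0) return d;
        }
        return row.length - other.row.length;
    }

    // Without these two overrides, equals() falls back to reference identity
    // and disagrees with compareTo() -- exactly what the detector warns about.
    @Override public boolean equals(Object o) {
        return o instanceof KeySketch && compareTo((KeySketch) o) == 0;
    }

    @Override public int hashCode() {
        return java.util.Arrays.hashCode(row);
    }

    public static void main(String[] args) {
        KeySketch a = new KeySketch(new byte[]{1, 2});
        KeySketch b = new KeySketch(new byte[]{1, 2});
        if (a.compareTo(b) != 0 || !a.equals(b)) throw new AssertionError();
        if (a.hashCode() != b.hashCode()) throw new AssertionError();
    }
}
```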
[jira] [Updated] (HBASE-20086) PE randomSeekScan fails with ClassNotFoundException
[ https://issues.apache.org/jira/browse/HBASE-20086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-20086: --- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Thanks for the review, Mike. > PE randomSeekScan fails with ClassNotFoundException > --- > > Key: HBASE-20086 > URL: https://issues.apache.org/jira/browse/HBASE-20086 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu >Priority: Major > Fix For: 2.0.0-beta-2 > > Attachments: 20086.v1.txt, 20086.v2.txt, 20086.v3.txt > > > When running PE randomSeekScan against hadoop 3 cluster, I got the following > error: > {code} > 2018-02-26 17:11:09,548 INFO [main] mapreduce.Job: Task Id : > attempt_1519408774395_0003_m_04_0, Status : FAILED > Error: java.lang.ClassNotFoundException: > org.apache.hadoop.hbase.filter.FilterAllFilter > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at java.lang.Class.forName0(Native Method) > at java.lang.Class.forName(Class.java:264) > at > org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.forName(PerformanceEvaluation.java:291) > at > org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.setup(PerformanceEvaluation.java:276) > at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:794) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174) > {code} > This is due to FilterAllFilter being inside hbase-server tests jar, hence not > added as dependency for PE job. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
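The failure mode here is the mapper resolving the filter class by name at setup time: a class that lives only in a test jar that was never shipped with the job cannot be loaded on the cluster. A self-contained sketch of the same forName pattern and error shape; the helper mirrors PerformanceEvaluation's lookup only in spirit, and the names are illustrative:

```java
// Sketch of the HBASE-20086 failure mode: resolving a class by name works
// only if the jar containing it was shipped with the job; otherwise the
// lookup fails on the cluster with ClassNotFoundException.
public class ForNameSketch {
    static Class<?> forName(String className, Class<?> type) {
        try {
            return Class.forName(className).asSubclass(type);
        } catch (ClassNotFoundException e) {
            // The real fix is to ship the jar containing the class with the
            // job (as the patch does), not to swallow the error here.
            throw new IllegalStateException("Could not load class " + className, e);
        }
    }

    public static void main(String[] args) {
        // Present on every JVM: resolves fine.
        if (forName("java.lang.String", Object.class) != String.class)
            throw new AssertionError();
        // Absent from the classpath: same error shape as the task log above.
        try {
            forName("org.example.filter.MissingFilter", Object.class);
            throw new AssertionError("expected failure");
        } catch (IllegalStateException expected) {
            // ClassNotFoundException surfaced, as on the cluster.
        }
    }
}
```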
[jira] [Created] (HBASE-20093) Replace ServerLoad by ServerMetrics for ServerManager
Chia-Ping Tsai created HBASE-20093: -- Summary: Replace ServerLoad by ServerMetrics for ServerManager Key: HBASE-20093 URL: https://issues.apache.org/jira/browse/HBASE-20093 Project: HBase Issue Type: Task Reporter: Chia-Ping Tsai Assignee: Chia-Ping Tsai Fix For: 2.0.0 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-18309) Support multi threads in CleanerChore
[ https://issues.apache.org/jira/browse/HBASE-18309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377906#comment-16377906 ] Reid Chan commented on HBASE-18309: --- Yes, sir, I got your point, a new issue is fine. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-19863) java.lang.IllegalStateException: isDelete failed when SingleColumnValueFilter is used
[ https://issues.apache.org/jira/browse/HBASE-19863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Soldatov updated HBASE-19863: Attachment: HBASE-19863.v5-branch-1.patch > java.lang.IllegalStateException: isDelete failed when SingleColumnValueFilter > is used > - > > Key: HBASE-19863 > URL: https://issues.apache.org/jira/browse/HBASE-19863 > Project: HBase > Issue Type: Bug > Components: Filters >Affects Versions: 1.4.1 >Reporter: Sergey Soldatov >Assignee: Sergey Soldatov >Priority: Major > Attachments: HBASE-19863-branch-2.patch, HBASE-19863-branch1.patch, > HBASE-19863-test.patch, HBASE-19863.v2-branch-2.patch, > HBASE-19863.v3-branch-2.patch, HBASE-19863.v4-branch-2.patch, > HBASE-19863.v4-master.patch, HBASE-19863.v5-branch-1.patch, > HBASE-19863.v5-branch-2.patch > > > Under some circumstances scan with SingleColumnValueFilter may fail with an > exception > {noformat} > java.lang.IllegalStateException: isDelete failed: deleteBuffer=C3, > qualifier=C2, timestamp=1516433595543, comparison result: 1 > at > org.apache.hadoop.hbase.regionserver.ScanDeleteTracker.isDeleted(ScanDeleteTracker.java:149) > at > org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:386) > at > org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:545) > at > org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5876) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6027) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5814) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2552) > at > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32385) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150) > at 
org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167) > {noformat} > Conditions: > table T with a single column family 0 that uses ROWCOL bloom filter > (important) and column qualifiers C1,C2,C3,C4,C5. > When we fill the table for every row we put deleted cell for C3. > The table has a single region with two HStore: > A: start row: 0, stop row: 99 > B: start row: 10 stop row: 99 > B has newer versions of rows 10-99. Store files have several blocks each > (important). > Store A is the result of major compaction, so it doesn't have any deleted > cells (important). > So, we are running a scan like: > {noformat} > scan 'T', { COLUMNS => ['0:C3','0:C5'], FILTER => "SingleColumnValueFilter > ('0','C5',=,'binary:whatever')"} > {noformat} > How the scan performs: > First, we iterate A for rows 0 and 1 without any problems. > Next, we start to iterate A for row 10, so read the first cell and set hfs > scanner to A : > 10:0/C1/0/Put/x but found that we have a newer version of the cell in B : > 10:0/C1/1/Put/x, > so we make B as our current store scanner. Since we are looking for > particular columns > C3 and C5, we perform the optimization StoreScanner.seekOrSkipToNextColumn > which > would run reseek for all store scanners. > For store A the following magic would happen in requestSeek: > 1. bloom filter check passesGeneralBloomFilter would set haveToSeek to > false because row 10 doesn't have C3 qualifier in store A. > 2. Since we don't have to seek we just create a fake row > 10:0/C3/OLDEST_TIMESTAMP/Maximum, an optimization that is quite important for > us and it commented with : > {noformat} > // Multi-column Bloom filter optimization. 
> // Create a fake key/value, so that this scanner only bubbles up to the > top > // of the KeyValueHeap in StoreScanner after we scanned this row/column in > // all other store files. The query matcher will then just skip this fake > // key/value and the store scanner will progress to the next column. This > // is obviously not a "real real" seek, but unlike the fake KV earlier in > // this method, we want this to be propagated to ScanQueryMatcher. > {noformat} > > For store B we would set it to fake 10:0/C3/createFirstOnRowColTS()/Maximum > to skip C3 entirely. > After that we start searching for qualifier C5 using seekOrSkipToNextColumn > which run first trySkipToNextColumn: > {noformat} > protected boolean trySkipToNextColumn(Cell cell) throws
[jira] [Commented] (HBASE-20001) cleanIfNoMetaEntry() uses encoded instead of region name to lookup region
[ https://issues.apache.org/jira/browse/HBASE-20001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377901#comment-16377901 ] Ted Yu commented on HBASE-20001: Integrated to branch-1 and branch-1.4. Please attach patch for 1.3. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20001) cleanIfNoMetaEntry() uses encoded instead of region name to lookup region
[ https://issues.apache.org/jira/browse/HBASE-20001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377899#comment-16377899 ] Chia-Ping Tsai commented on HBASE-20001: +1 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20066) Region sequence id may go backward after split or merge
[ https://issues.apache.org/jira/browse/HBASE-20066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377893#comment-16377893 ] Duo Zhang commented on HBASE-20066: --- Ping [~stack]... > Region sequence id may go backward after split or merge > --- > > Key: HBASE-20066 > URL: https://issues.apache.org/jira/browse/HBASE-20066 > Project: HBase > Issue Type: Bug >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Critical > Fix For: 2.0.0-beta-2 > > Attachments: HBASE-20066-v1.patch, HBASE-20066-v2.patch, > HBASE-20066-v3.patch, HBASE-20066-v4.patch, HBASE-20066.patch > > > The problem is that, now we have markers which will be written to WAL but not > in store file. For a normal region close, we will write a sequence id file > under the region directory, and when opening we will use this as the open > sequence id. But for split and merge, we do not copy the sequence id file to > the newly generated regions so the sequence id may go backwards since when > closing the region we will write flush marker and close marker into WAL... -- This message was sent by Atlassian JIRA (v7.6.3#76005)
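The hazard described above can be shown in miniature: if reopening trusts only the saved sequence-id file while close-time markers already advanced the WAL, newly issued ids go backward; taking the max of the two sources keeps the sequence monotonic. This is an illustrative sketch of the invariant, not the actual patch:

```java
// Sketch of the HBASE-20066 hazard: flush/close markers written during region
// close advance the WAL sequence id past what the region's saved sequence-id
// file records, so reopening from the file alone can hand out ids that go
// backward. Taking the max of both sources preserves monotonicity.
public class SequenceIdSketch {
    static long nextOpenSeqId(long savedSeqIdFile, long maxSeqIdSeenInWal) {
        // A buggy variant would just return savedSeqIdFile + 1.
        return Math.max(savedSeqIdFile, maxSeqIdSeenInWal) + 1;
    }

    public static void main(String[] args) {
        long savedAtClose = 100;     // recorded before the close markers
        long walAfterMarkers = 103;  // flush + close markers bumped the WAL

        long buggy = savedAtClose + 1;                             // 101
        long fixed = nextOpenSeqId(savedAtClose, walAfterMarkers); // 104

        // The buggy id is not past the WAL's high-water mark: it went backward.
        if (buggy > walAfterMarkers) throw new AssertionError();
        // The fixed id stays strictly ahead of everything already in the WAL.
        if (fixed <= walAfterMarkers) throw new AssertionError();
    }
}
```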
[jira] [Commented] (HBASE-20001) cleanIfNoMetaEntry() uses encoded instead of region name to lookup region
[ https://issues.apache.org/jira/browse/HBASE-20001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377891#comment-16377891 ] Ted Yu commented on HBASE-20001: Good by me. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20001) cleanIfNoMetaEntry() uses encoded instead of region name to lookup region
[ https://issues.apache.org/jira/browse/HBASE-20001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377889#comment-16377889 ] Thiruvel Thirumoolan commented on HBASE-20001: -- [~yuzhih...@gmail.com], [~chia7712] - Patch with pre-commit passed. Lemme know if branch-1.4 patch can get in. I can start working on 1.3 and 1.2 patches. > cleanIfNoMetaEntry() uses encoded instead of region name to lookup region > - > > Key: HBASE-20001 > URL: https://issues.apache.org/jira/browse/HBASE-20001 > Project: HBase > Issue Type: Bug >Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.1.7 >Reporter: Francis Liu >Assignee: Thiruvel Thirumoolan >Priority: Major > Fix For: 1.3.2, 1.5.0, 1.2.7, 1.4.3 > > Attachments: HBASE-20001.branch-1.4.001.patch, > HBASE-20001.branch-1.4.002.patch, HBASE-20001.branch-1.4.003.patch, > HBASE-20001.branch-1.4.004.patch, HBASE-20001.branch-1.4.005.patch, > HBASE-20001.branch-1.4.006.patch > > > In RegionStates.cleanIfNoMetaEntry() > {{if (MetaTableAccessor.getRegion(server.getConnection(), > hri.getEncodedNameAsBytes()) == null) {}} > {{regionOffline(hri);}} > {{FSUtils.deleteRegionDir(server.getConfiguration(), hri);}} > } > But the API expects the full region name > {{public static Pair<HRegionInfo, ServerName> getRegion(Connection > connection, byte [] regionName)}} > So we might end up cleaning good regions. > > ADDENDUM: > The scenario mentioned occurs when zkless assignment is used. With zk-based > assignment without the patch what could occur is the daughter regions are > offlined and have no hdfs directory but have entries in meta. The daughter > meta entries will prolly be picked up by the client causing NSREs. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
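The failure mode in HBASE-20001 — looking up a map keyed by the full region name with only the encoded name — can be shown with a simplified model. Everything here is illustrative: the name format is a toy stand-in for HBase's real region naming, not its actual encoding.

```java
import java.util.HashMap;
import java.util.Map;

public class RegionNameExample {
    // Toy model of a region name: the full name embeds table, start key and
    // timestamp, and ends with the encoded name; the encoded name alone is
    // only the hash-like suffix.
    static String fullRegionName(String table, String startKey, long ts, String encoded) {
        return table + "," + startKey + "," + ts + "." + encoded + ".";
    }

    // A meta lookup keyed by the full region name, as getRegion() expects.
    static String lookup(Map<String, String> meta, String key) {
        return meta.get(key);
    }
}
```

Passing the encoded name to a lookup keyed by full names always misses, so the caller sees null and wrongly concludes the region has no meta entry — which is why good regions could be cleaned.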
[jira] [Commented] (HBASE-20089) make_rc.sh should name SHA-512 checksum files with the extension .sha512
[ https://issues.apache.org/jira/browse/HBASE-20089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377861#comment-16377861 ] stack commented on HBASE-20089: --- +1 > make_rc.sh should name SHA-512 checksum files with the extension .sha512 > > > Key: HBASE-20089 > URL: https://issues.apache.org/jira/browse/HBASE-20089 > Project: HBase > Issue Type: Task >Reporter: Andrew Purtell >Assignee: Josh Elser >Priority: Minor > Fix For: 2.0.0, 1.3.2, 1.5.0, 1.4.3 > > Attachments: HBASE-20089.001.patch > > > From [~elserj] > {quote} > we need to update the checksum naming convention for SHA*. Per [1], .sha > filenames should only contain SHA1, and .sha512 file names should be used for > SHA512 xsum. I believe this means we just need to modify make_rc.sh to put > the xsum into .sha512 instead of .sha. We do not need to distribute SHA1 > xsums and, afaik, there is little cryptographic value to this. > [1] http://www.apache.org/dev/release-distribution.html#sigs-and-sums > {quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19400) Add missing security checks in MasterRpcServices
[ https://issues.apache.org/jira/browse/HBASE-19400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377854#comment-16377854 ] stack commented on HBASE-19400: --- You want to push this on branch-1 [~appy]? > Add missing security checks in MasterRpcServices > > > Key: HBASE-19400 > URL: https://issues.apache.org/jira/browse/HBASE-19400 > Project: HBase > Issue Type: Sub-task >Affects Versions: 2.0.0-beta-1 >Reporter: Balazs Meszaros >Assignee: Appy >Priority: Major > Fix For: 2.0.0-beta-2 > > Attachments: HBASE-19400.branch-1.001.patch, > HBASE-19400.master.001.patch, HBASE-19400.master.002.patch, > HBASE-19400.master.003.patch, HBASE-19400.master.004.patch, > HBASE-19400.master.004.patch, HBASE-19400.master.005.patch, > HBASE-19400.master.006.patch, HBASE-19400.master.007.patch, > HBASE-19400.master.007.patch > > > The following RPC methods in MasterRpcServices do not have ACL check for > ADMIN rights. > - normalize > - setNormalizerRunning > - runCatalogScan > - enableCatalogJanitor > - runCleanerChore > - setCleanerChoreRunning > - execMasterService > - execProcedure > - execProcedureWithRet -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20084) Refactor the RSRpcServices#doBatchOp
[ https://issues.apache.org/jira/browse/HBASE-20084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377849#comment-16377849 ] Chia-Ping Tsai commented on HBASE-20084: Thanks [~yuzhih...@gmail.com] The findbugs warnings should be resolved by HBASE-20069. Will commit it tomorrow if no objections. > Refactor the RSRpcServices#doBatchOp > > > Key: HBASE-20084 > URL: https://issues.apache.org/jira/browse/HBASE-20084 > Project: HBase > Issue Type: Task > Components: regionserver >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-20084.v0.patch.patch > > > follow the discussion in > https://issues.apache.org/jira/browse/HBASE-19876?focusedCommentId=16359618=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16359618 > RSRpcServices#doBatchOp will throw IOE to log the error in the region-level > response if any mutation fails in atomic mode. However, the exception in > method signature force normal (non-atomic) batch to handle the exception even > though IOE won't be thrown in non-atomic mode. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
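The signature problem described in HBASE-20084 above can be sketched outside HBase. This is a hedged sketch of the refactor direction only — the method names are invented, and the real RSRpcServices#doBatchOp does much more:

```java
import java.io.IOException;
import java.util.List;

public class BatchOpSignatureSketch {
    // Only the atomic variant declares IOException, since only atomic mode
    // actually throws; the non-atomic path no longer forces callers to
    // handle an exception that can never occur.
    static int doAtomicBatchOp(List<String> mutations) throws IOException {
        if (mutations.isEmpty()) {
            throw new IOException("atomic batch rejected");
        }
        return mutations.size();
    }

    static int doNonAtomicBatchOp(List<String> mutations) {
        // Per-mutation failures would be recorded in the result instead
        // of being thrown, so no checked exception is needed.
        return mutations.size();
    }
}
```

Splitting the two modes removes the dead catch block from the non-atomic caller, which is the cleanup the issue proposes.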
[jira] [Updated] (HBASE-20052) TestRegionOpen#testNonExistentRegionReplica fails due to NPE
[ https://issues.apache.org/jira/browse/HBASE-20052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-20052: -- Fix Version/s: (was: 2.0.0-beta-2) 2.0.0 > TestRegionOpen#testNonExistentRegionReplica fails due to NPE > > > Key: HBASE-20052 > URL: https://issues.apache.org/jira/browse/HBASE-20052 > Project: HBase > Issue Type: Test >Reporter: Ted Yu >Assignee: Ted Yu >Priority: Major > Fix For: 2.0.0, 1.5.0, 1.4.3 > > Attachments: 20052.v1.txt, 20052.v2.txt > > > After HBASE-19391 was integrated, the following test failure can be observed: > {code} > java.lang.NullPointerException > at > org.apache.hadoop.hbase.regionserver.TestRegionOpen.testNonExistentRegionReplica(TestRegionOpen.java:122) > {code} > This was due null being returned from > HRegionFileSystem#createRegionOnFileSystem(). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20092) Fix TestRegionMetrics#testRegionMetrics
[ https://issues.apache.org/jira/browse/HBASE-20092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-20092: --- Summary: Fix TestRegionMetrics#testRegionMetrics (was: Fix TestRegionMetrics) > Fix TestRegionMetrics#testRegionMetrics > --- > > Key: HBASE-20092 > URL: https://issues.apache.org/jira/browse/HBASE-20092 > Project: HBase > Issue Type: Task > Components: test >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Minor > Fix For: 2.0.0 > > > {code:java} > java.lang.AssertionError: expected:<12> but was:<13> > at > org.apache.hadoop.hbase.TestRegionMetrics.testRegionMetrics(TestRegionMetrics.java:111){code} > [http://104.198.223.121:8080/job/HBASE-Flaky-Tests/34589/testReport/junit/org.apache.hadoop.hbase/TestRegionMetrics/testRegionMetrics/] > http://104.198.223.121:8080/job/HBASE-Flaky-Tests/34591/testReport/junit/org.apache.hadoop.hbase/TestRegionMetrics/testRegionMetrics/ > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-19835) Make explicit casting of atleast one operand to final type
[ https://issues.apache.org/jira/browse/HBASE-19835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-19835: -- Fix Version/s: (was: 2.0.0-beta-2) 2.0.0 > Make explicit casting of atleast one operand to final type > -- > > Key: HBASE-19835 > URL: https://issues.apache.org/jira/browse/HBASE-19835 > Project: HBase > Issue Type: Bug > Components: hbase >Affects Versions: 3.0.0 >Reporter: Aman Poonia >Assignee: Aman Poonia >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-19835.master.01.patch, HBASE-19835.master.02.patch > > > We have used > _long = int + int_ > at many places mostly wherever ClassSize.java variables are used for > calculation. > Need to cast explicitly at-least one operand to final type(i.e. type the > result is intended to be casted). > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
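The overflow that HBASE-19835 above guards against is easy to demonstrate in isolation. A minimal sketch in plain Java (class and method names are illustrative):

```java
public class CastExample {
    // Without a cast, int + int is evaluated in 32-bit arithmetic and can
    // overflow before the widening assignment to long.
    static long sumNoCast(int a, int b) {
        return a + b; // wraps around for large operands
    }

    // Casting one operand first forces the addition into 64-bit arithmetic.
    static long sumWithCast(int a, int b) {
        return (long) a + b;
    }
}
```

This is exactly the `long = int + int` pattern the issue flags around the ClassSize.java calculations: the cast must happen on an operand, not on the already-overflowed result.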
[jira] [Commented] (HBASE-20090) Properly handle Preconditions check failure in MemStoreFlusher$FlushHandler.run
[ https://issues.apache.org/jira/browse/HBASE-20090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377843#comment-16377843 ] Ted Yu commented on HBASE-20090: Observed the following in region server log (in hadoop3 cluster): {code} 2018-02-26 16:06:49,962 INFO [MemStoreFlusher.1] regionserver.HRegion: Flushing 1/1 column families, memstore=804.67 KB 2018-02-26 16:06:50,028 INFO [MemStoreFlusher.1] regionserver.DefaultStoreFlusher: Flushed, sequenceid=5448, memsize=804.7 K, hasBloomFilter=true, into tmp file hdfs:// mycluster/apps/hbase/data/data/default/TestTable/3552368c92476437cb96e357d2c7d618/.tmp/info/81721cc57fee43ebb55ba430f5730c25 2018-02-26 16:06:50,042 INFO [MemStoreFlusher.1] regionserver.HStore: Added hdfs://mycluster/apps/hbase/data/data/default/TestTable/3552368c92476437cb96e357d2c7d618/info/ 81721cc57fee43ebb55ba430f5730c25, entries=784, sequenceid=5448, filesize=813.9 K 2018-02-26 16:06:50,044 INFO [MemStoreFlusher.1] regionserver.HRegion: Finished memstore flush of ~804.67 KB/823984, currentsize=0 B/0 for region TestTable, 00155728,1519661093622.3552368c92476437cb96e357d2c7d618. 
in 82ms, sequenceid=5448, compaction requested=true 2018-02-26 16:06:50,044 WARN [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=16020] regionserver.MemStoreFlusher: Memstore is above high water mark and block 185ms 2018-02-26 16:06:50,044 WARN [RpcServer.default.FPBQ.Fifo.handler=28,queue=1,port=16020] regionserver.MemStoreFlusher: Memstore is above high water mark and block 163ms 2018-02-26 16:06:50,044 WARN [RpcServer.default.FPBQ.Fifo.handler=22,queue=1,port=16020] regionserver.MemStoreFlusher: Memstore is above high water mark and block 160ms 2018-02-26 16:06:50,044 WARN [RpcServer.default.FPBQ.Fifo.handler=25,queue=1,port=16020] regionserver.MemStoreFlusher: Memstore is above high water mark and block 160ms 2018-02-26 16:06:50,044 WARN [RpcServer.default.FPBQ.Fifo.handler=23,queue=2,port=16020] regionserver.MemStoreFlusher: Memstore is above high water mark and block 158ms 2018-02-26 16:06:50,044 WARN [RpcServer.default.FPBQ.Fifo.handler=27,queue=0,port=16020] regionserver.MemStoreFlusher: Memstore is above high water mark and block 151ms 2018-02-26 16:06:50,044 WARN [RpcServer.default.FPBQ.Fifo.handler=24,queue=0,port=16020] regionserver.MemStoreFlusher: Memstore is above high water mark and block 147ms 2018-02-26 16:06:50,044 WARN [RpcServer.default.FPBQ.Fifo.handler=26,queue=2,port=16020] regionserver.MemStoreFlusher: Memstore is above high water mark and block 135ms 2018-02-26 16:06:50,049 ERROR [MemStoreFlusher.1] regionserver.MemStoreFlusher: Cache flusher failed for entry org.apache.hadoop.hbase.regionserver. 
MemStoreFlusher$WakeupFlushThread@2adfadd7 java.lang.IllegalStateException at org.apache.hbase.thirdparty.com.google.common.base.Preconditions.checkState(Preconditions.java:441) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushOneForGlobalPressure(MemStoreFlusher.java:174) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$600(MemStoreFlusher.java:69) at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:237) at java.lang.Thread.run(Thread.java:748) {code} Unfortunately the DEBUG logging was not on. Will see if I can reproduce the exception next time. > Properly handle Preconditions check failure in > MemStoreFlusher$FlushHandler.run > --- > > Key: HBASE-20090 > URL: https://issues.apache.org/jira/browse/HBASE-20090 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Priority: Major > > Here is the code in branch-2 : > {code} > try { > wakeupPending.set(false); // allow someone to wake us up again > fqe = flushQueue.poll(threadWakeFrequency, TimeUnit.MILLISECONDS); > if (fqe == null || fqe instanceof WakeupFlushThread) { > ... > if (!flushOneForGlobalPressure()) { > ... > FlushRegionEntry fre = (FlushRegionEntry) fqe; > if (!flushRegion(fre)) { > break; > ... > } catch (Exception ex) { > LOG.error("Cache flusher failed for entry " + fqe, ex); > if (!server.checkFileSystem()) { > break; > } > } > {code} > Inside flushOneForGlobalPressure(): > {code} > Preconditions.checkState( > (regionToFlush != null && regionToFlushSize > 0) || > (bestRegionReplica != null && bestRegionReplicaSize > 0)); > {code} > When the Preconditions check fails, IllegalStateException is caught by the > catch block shown above. > However, the fqe is not flushed, resulting in potential data loss. -- This message was sent by
[jira] [Created] (HBASE-20092) Fix TestRegionMetrics
Chia-Ping Tsai created HBASE-20092: -- Summary: Fix TestRegionMetrics Key: HBASE-20092 URL: https://issues.apache.org/jira/browse/HBASE-20092 Project: HBase Issue Type: Task Components: test Reporter: Chia-Ping Tsai Assignee: Chia-Ping Tsai Fix For: 2.0.0 {code:java} java.lang.AssertionError: expected:<12> but was:<13> at org.apache.hadoop.hbase.TestRegionMetrics.testRegionMetrics(TestRegionMetrics.java:111){code} [http://104.198.223.121:8080/job/HBASE-Flaky-Tests/34589/testReport/junit/org.apache.hadoop.hbase/TestRegionMetrics/testRegionMetrics/] http://104.198.223.121:8080/job/HBASE-Flaky-Tests/34591/testReport/junit/org.apache.hadoop.hbase/TestRegionMetrics/testRegionMetrics/ -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20091) Fix for 'build.plugins.plugin.version' for org.codehaus.mojo: is missing.
[ https://issues.apache.org/jira/browse/HBASE-20091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377839#comment-16377839 ] Hadoop QA commented on HBASE-20091: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} branch-1 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 42s{color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 9s{color} | {color:green} branch-1 passed with JDK v1.8.0_162 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s{color} | {color:green} branch-1 passed with JDK v1.7.0_171 {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 41s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s{color} | {color:green} branch-1 passed with JDK v1.8.0_162 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s{color} | {color:green} branch-1 passed with JDK v1.7.0_171 {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 10s{color} | {color:green} the patch passed with JDK v1.8.0_162 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s{color} | {color:green} the patch passed with JDK v1.7.0_171 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} xml {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 ill-formed XML file(s). {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 2m 38s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 3m 31s{color} | {color:red} The patch causes 44 errors with Hadoop v2.4.1. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 4m 25s{color} | {color:red} The patch causes 44 errors with Hadoop v2.5.2. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s{color} | {color:green} the patch passed with JDK v1.8.0_162 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s{color} | {color:green} the patch passed with JDK v1.7.0_171 {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s{color} | {color:green} hbase-shaded-check-invariants in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 9s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 15m 41s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:36a7029 | | JIRA Issue | HBASE-20091 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12912161/HBASE-20091.branch-1.patch | | Optional Tests | asflicense javac javadoc unit shadedjars hadoopcheck xml compile | | uname | Linux 32b0d2e00550 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | branch-1 / a37c91b | | maven | version: Apache Maven 3.0.5 | |
[jira] [Commented] (HBASE-20090) Properly handle Preconditions check failure in MemStoreFlusher$FlushHandler.run
[ https://issues.apache.org/jira/browse/HBASE-20090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377838#comment-16377838 ] stack commented on HBASE-20090: --- Who wrote this code? Flag them. I'm sure they'd be interested. Do you have an example of the exception or this just splunking? > Properly handle Preconditions check failure in > MemStoreFlusher$FlushHandler.run > --- > > Key: HBASE-20090 > URL: https://issues.apache.org/jira/browse/HBASE-20090 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Priority: Major > > Here is the code in branch-2 : > {code} > try { > wakeupPending.set(false); // allow someone to wake us up again > fqe = flushQueue.poll(threadWakeFrequency, TimeUnit.MILLISECONDS); > if (fqe == null || fqe instanceof WakeupFlushThread) { > ... > if (!flushOneForGlobalPressure()) { > ... > FlushRegionEntry fre = (FlushRegionEntry) fqe; > if (!flushRegion(fre)) { > break; > ... > } catch (Exception ex) { > LOG.error("Cache flusher failed for entry " + fqe, ex); > if (!server.checkFileSystem()) { > break; > } > } > {code} > Inside flushOneForGlobalPressure(): > {code} > Preconditions.checkState( > (regionToFlush != null && regionToFlushSize > 0) || > (bestRegionReplica != null && bestRegionReplicaSize > 0)); > {code} > When the Preconditions check fails, IllegalStateException is caught by the > catch block shown above. > However, the fqe is not flushed, resulting in potential data loss. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
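One hedged way to "properly handle" the failure described in HBASE-20090 above is to re-queue the entry when the flush attempt dies unexpectedly. This is only a simplified sketch of the idea, not the actual MemStoreFlusher code — the queue element type and method names are stand-ins:

```java
import java.util.concurrent.BlockingQueue;

public class FlushRetrySketch {
    // If the flush attempt fails with an unexpected runtime exception (such
    // as the IllegalStateException thrown by the Preconditions check), put
    // the entry back on the queue instead of dropping it, so the pending
    // flush request is not lost.
    static boolean runOnce(BlockingQueue<String> flushQueue, String fqe, boolean failFlush) {
        try {
            if (failFlush) {
                throw new IllegalStateException("Preconditions check failed");
            }
            return true; // flushed successfully
        } catch (IllegalStateException ex) {
            flushQueue.offer(fqe); // keep the entry for a later retry
            return false;
        }
    }
}
```

Re-queueing addresses the data-loss concern in the report: the entry survives the exception and gets another flush attempt on the next loop iteration.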
[jira] [Updated] (HBASE-20069) fix existing findbugs errors in hbase-server
[ https://issues.apache.org/jira/browse/HBASE-20069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-20069: -- Status: Patch Available (was: Reopened) Submit the 001-HBASE-20069-fix addendum. See how it does on precommit. > fix existing findbugs errors in hbase-server > > > Key: HBASE-20069 > URL: https://issues.apache.org/jira/browse/HBASE-20069 > Project: HBase > Issue Type: Sub-task > Components: findbugs >Reporter: Sean Busbey >Assignee: stack >Priority: Critical > Fix For: 2.0.0-beta-2 > > Attachments: > 0001-HBASE-20069-fix-existing-findbugs-errors-addendum.patch, FindBugs > Report.htm, HBASE-20069.branch-2.001.patch, HBASE-20069.branch-2.002.patch, > HBASE-20069.branch-2.003.patch, HBASE-20069.branch-2.004.patch > > > now that findbugs is running on precommit we have some cleanup to do. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20069) fix existing findbugs errors in hbase-server
[ https://issues.apache.org/jira/browse/HBASE-20069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-20069: -- Attachment: 0001-HBASE-20069-fix-existing-findbugs-errors-addendum.patch > fix existing findbugs errors in hbase-server > > > Key: HBASE-20069 > URL: https://issues.apache.org/jira/browse/HBASE-20069 > Project: HBase > Issue Type: Sub-task > Components: findbugs >Reporter: Sean Busbey >Assignee: stack >Priority: Critical > Fix For: 2.0.0-beta-2 > > Attachments: > 0001-HBASE-20069-fix-existing-findbugs-errors-addendum.patch, FindBugs > Report.htm, HBASE-20069.branch-2.001.patch, HBASE-20069.branch-2.002.patch, > HBASE-20069.branch-2.003.patch, HBASE-20069.branch-2.004.patch > > > now that findbugs is running on precommit we have some cleanup to do. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Reopened] (HBASE-20069) fix existing findbugs errors in hbase-server
[ https://issues.apache.org/jira/browse/HBASE-20069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack reopened HBASE-20069: --- Reopen to address review feedback > fix existing findbugs errors in hbase-server > > > Key: HBASE-20069 > URL: https://issues.apache.org/jira/browse/HBASE-20069 > Project: HBase > Issue Type: Sub-task > Components: findbugs >Reporter: Sean Busbey >Assignee: stack >Priority: Critical > Fix For: 2.0.0-beta-2 > > Attachments: > 0001-HBASE-20069-fix-existing-findbugs-errors-addendum.patch, FindBugs > Report.htm, HBASE-20069.branch-2.001.patch, HBASE-20069.branch-2.002.patch, > HBASE-20069.branch-2.003.patch, HBASE-20069.branch-2.004.patch > > > now that findbugs is running on precommit we have some cleanup to do. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20036) TestAvoidCellReferencesIntoShippedBlocks timed out
[ https://issues.apache.org/jira/browse/HBASE-20036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377832#comment-16377832 ] Hudson commented on HBASE-20036: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4654 (See [https://builds.apache.org/job/HBase-Trunk_matrix/4654/]) HBASE-20036 TestAvoidCellReferencesIntoShippedBlocks timed out (Ram) (ramkrishna.s.vasudevan: rev 7cfb46432fbdf9b53592be11efc8a7d79d1a9455) * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAvoidCellReferencesIntoShippedBlocks.java > TestAvoidCellReferencesIntoShippedBlocks timed out > -- > > Key: HBASE-20036 > URL: https://issues.apache.org/jira/browse/HBASE-20036 > Project: HBase > Issue Type: Sub-task >Reporter: stack >Assignee: ramkrishna.s.vasudevan >Priority: Major > Fix For: 2.0.0, 3.0.0 > > Attachments: HBASE-20036.patch, HBASE-20036_1.patch > > > Looks like it is stuck can't flush; bad math? > See the dashboard where it hung here: > https://builds.apache.org/job/HBASE-Flaky-Tests-branch2.0/2428/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestAvoidCellReferencesIntoShippedBlocks-output.txt > ... the provocation could be this? > 2018-02-21 04:18:44,973 DEBUG [Thread-178] bucket.BucketCache(629): This > block eefaa9a7b10e437b9fc2b55a67d63191_4356 is still referred by 1 readers. > Can not be freed now. Hence will mark this for evicting at a later point > Exception in thread "Thread-178" java.lang.AssertionError: old blocks should > still be found expected:<6> but was:<5> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:834) > at org.junit.Assert.assertEquals(Assert.java:645) > ... 
Then we get stuck doing this: > 2018-02-21 04:23:34,661 DEBUG [master/asf903:0.Chore.1] master.HMaster(1524): > Skipping normalization for table: testHBase16372InCompactionWritePath, as > it's either system table or doesn't have auto normalization turned on > 2018-02-21 04:23:35,695 INFO [regionserver/asf903:0.Chore.1] > regionserver.HRegionServer$PeriodicMemStoreFlusher(1752): > MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because > info has an old edit so flush to free WALs after random delay 286820ms > 2018-02-21 04:23:36,009 DEBUG [ReadOnlyZKClient-localhost:61855@0x227ea3ed] > zookeeper.ReadOnlyZKClient(316): 0x227ea3ed to localhost:61855 inactive for > 6ms; closing (Will reconnect when new requests) > It also failed a recent nightly for same reason: > https://builds.apache.org/job/HBase%20Nightly/job/branch-2/355/ > Any chance you'd take a look [~ram_krish]? You best at this stuff? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19974) Fix decommissioned servers cannot be removed by remove_servers_rsgroup methods
[ https://issues.apache.org/jira/browse/HBASE-19974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377831#comment-16377831 ] Hudson commented on HBASE-19974: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4654 (See [https://builds.apache.org/job/HBase-Trunk_matrix/4654/]) HBASE-19974 Fix decommissioned servers cannot be removed by (tedyu: rev a29b3caf4dbc7b8833474ef5da5438f7f6907e00) * (edit) hbase-rsgroup/src/test/java/org/apache/hadoop/hbase/rsgroup/TestRSGroups.java * (edit) hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminServer.java * (edit) hbase-rsgroup/src/test/java/org/apache/hadoop/hbase/rsgroup/TestRSGroupsBase.java > Fix decommissioned servers cannot be removed by remove_servers_rsgroup methods > -- > > Key: HBASE-19974 > URL: https://issues.apache.org/jira/browse/HBASE-19974 > Project: HBase > Issue Type: Bug > Components: rsgroup >Affects Versions: 2.0.0-beta-2 >Reporter: Xiaolin Ha >Assignee: Xiaolin Ha >Priority: Major > Fix For: 2.0.0-beta-2 > > Attachments: HBASE-19974.branch-2.001.patch, > HBASE-19974.branch-2.002.patch, HBASE-19974.branch-2.003.patch, > HBASE-19974.branch-2.004.patch, HBASE-19974.branch-2.005.patch, > HBASE-19974.branch-2.006.patch > > > When remove servers from a rsgroup, it will check the server is not online or > dead. But when we decommision a server, the server will be both in online > list and drainning list. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20083) Fix findbugs error for ReplicationSyncUp
[ https://issues.apache.org/jira/browse/HBASE-20083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377830#comment-16377830 ] Hudson commented on HBASE-20083: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4654 (See [https://builds.apache.org/job/HBase-Trunk_matrix/4654/]) HBASE-20083 Fix findbugs error for ReplicationSyncUp (zhangduo: rev 2beda62a10f0828eb10cec28b0ba53246cd0b671) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSyncUp.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationSyncUpTool.java > Fix findbugs error for ReplicationSyncUp > > > Key: HBASE-20083 > URL: https://issues.apache.org/jira/browse/HBASE-20083 > Project: HBase > Issue Type: Sub-task > Components: findbugs >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 2.0.0-beta-2 > > Attachments: HBASE-20083.patch > > > The static 'conf' field seems dodgy, I think it is OK to just remove it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20069) fix existing findbugs errors in hbase-server
[ https://issues.apache.org/jira/browse/HBASE-20069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377828#comment-16377828 ] stack commented on HBASE-20069: --- bq. The "==" work well even if we use anonymous class. Ok. Let me do this as addendum. Thank you [~chia7712] > fix existing findbugs errors in hbase-server > > > Key: HBASE-20069 > URL: https://issues.apache.org/jira/browse/HBASE-20069 > Project: HBase > Issue Type: Sub-task > Components: findbugs >Reporter: Sean Busbey >Assignee: stack >Priority: Critical > Fix For: 2.0.0-beta-2 > > Attachments: FindBugs Report.htm, HBASE-20069.branch-2.001.patch, > HBASE-20069.branch-2.002.patch, HBASE-20069.branch-2.003.patch, > HBASE-20069.branch-2.004.patch > > > now that findbugs is running on precommit we have some cleanup to do. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-18309) Support multi threads in CleanerChore
[ https://issues.apache.org/jira/browse/HBASE-18309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377826#comment-16377826 ] stack commented on HBASE-18309: --- [~reidchan] You up for taking a look again at how CleanerChore does its single-instance pool? Findbugs was off for a while and when we reenabled it, it 'lit up' around this bit of code. I hacked on it, probably made it worse (HBASE-20069), but it passes findbugs now (smile). First suggestion was a lazy singleton ... but maybe that won't work because of onChangeConfiguration... where you want to support changing pool. Another suggestion at https://reviews.apache.org/r/65794/#comment278374 is that rather than CleanerChore hosting the pool, instead we'd pass in the pool. This might be tough to do for the same reason: what happens onChangeConfiguration? Do all instances change the pool? Anyways, would be interested in your thoughts. We could do it in a new issue? Thanks. > Support multi threads in CleanerChore > - > > Key: HBASE-18309 > URL: https://issues.apache.org/jira/browse/HBASE-18309 > Project: HBase > Issue Type: Improvement >Reporter: binlijin >Assignee: Reid Chan >Priority: Major > Fix For: 3.0.0, 2.0.0-beta-1 > > Attachments: HBASE-18309.addendum.patch, > HBASE-18309.master.001.patch, HBASE-18309.master.002.patch, > HBASE-18309.master.004.patch, HBASE-18309.master.005.patch, > HBASE-18309.master.006.patch, HBASE-18309.master.007.patch, > HBASE-18309.master.008.patch, HBASE-18309.master.009.patch, > HBASE-18309.master.010.patch, HBASE-18309.master.011.patch, > HBASE-18309.master.012.patch, space_consumption_in_archive.png > > > There is only one thread in LogCleaner to clean oldWALs and in our big > cluster we find this is not enough. The number of files under oldWALs reach > the max-directory-items limit of HDFS and cause region server crash, so we > use multi threads for LogCleaner and the crash not happened any more. 
> What's more, currently there's only one thread iterating the archive > directory, and we could use multiple threads cleaning sub directories in > parallel to speed it up. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
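The "lazy singleton that still supports onChangeConfiguration" shape discussed in the HBASE-18309 comment above can be sketched as follows. This is a hypothetical sketch, not the CleanerChore implementation — class and method names are invented, and HBase's real reconfiguration path involves draining in-flight tasks:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.atomic.AtomicReference;

public class CleanerPoolSketch {
    // A single shared pool, created lazily on first use and held in an
    // AtomicReference so a configuration change can swap in a new pool for
    // every cleaner instance at once.
    private static final AtomicReference<ForkJoinPool> POOL = new AtomicReference<>();

    static ForkJoinPool getPool(int parallelism) {
        ForkJoinPool p = POOL.get();
        if (p == null) {
            // Only the first caller's pool wins; losers see the winner's pool.
            POOL.compareAndSet(null, new ForkJoinPool(parallelism));
            p = POOL.get();
        }
        return p;
    }

    static void onConfigurationChange(int newParallelism) {
        // Swap in a resized pool and shut down the old one.
        ForkJoinPool old = POOL.getAndSet(new ForkJoinPool(newParallelism));
        if (old != null) {
            old.shutdown();
        }
    }
}
```

Because all instances read through the same AtomicReference, a reconfiguration is visible everywhere — one answer to the "do all instances change the pool?" question raised in the comment.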
[jira] [Updated] (HBASE-20091) Fix for 'build.plugins.plugin.version' for org.codehaus.mojo: is missing.
[ https://issues.apache.org/jira/browse/HBASE-20091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Artem Ervits updated HBASE-20091: - Attachment: (was: HBASE-20091.v00.patch) > Fix for 'build.plugins.plugin.version' for org.codehaus.mojo: is missing. > - > > Key: HBASE-20091 > URL: https://issues.apache.org/jira/browse/HBASE-20091 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.1 > Environment: Apache Maven 3.5.2 > (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T03:58:13-04:00) > Maven home: /usr/local/apache-maven-3.5.2 > Java version: 1.8.0_162, vendor: Oracle Corporation > Java home: > /Library/Java/JavaVirtualMachines/jdk1.8.0_162.jdk/Contents/Home/jre > Default locale: en_US, platform encoding: UTF-8 > OS name: "mac os x", version: "10.13.3", arch: "x86_64", family: "mac" >Reporter: Artem Ervits >Assignee: Artem Ervits >Priority: Trivial > Fix For: 1.4.3 > > Attachments: HBASE-20091.branch-1.patch, branch-1.4.branch-1.4.patch > > > receiving warning > {noformat} > [WARNING] Some problems were encountered while building the effective model > for org.apache.hbase:hbase-shaded-check-invariants:pom:1.4.2 > [WARNING] 'build.plugins.plugin.version' for org.codehaus.mojo: is missing. @ > org.apache.hbase:hbase-shaded-check-invariants:[unknown-version], > /tmp/hbase-1.4.2/hbase-shaded/hbase-shaded-check-invariants/pom.xml, line > 161, column 15 > [WARNING] > [WARNING] It is highly recommended to fix these problems because they > threaten the stability of your build. > [WARNING] > [WARNING] For this reason, future Maven versions might no longer support > building such malformed projects.{noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20091) Fix for 'build.plugins.plugin.version' for org.codehaus.mojo: is missing.
[ https://issues.apache.org/jira/browse/HBASE-20091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Artem Ervits updated HBASE-20091: - Attachment: HBASE-20091.branch-1.patch > Fix for 'build.plugins.plugin.version' for org.codehaus.mojo: is missing. > - > > Key: HBASE-20091 > URL: https://issues.apache.org/jira/browse/HBASE-20091 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.1 > Environment: Apache Maven 3.5.2 > (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T03:58:13-04:00) > Maven home: /usr/local/apache-maven-3.5.2 > Java version: 1.8.0_162, vendor: Oracle Corporation > Java home: > /Library/Java/JavaVirtualMachines/jdk1.8.0_162.jdk/Contents/Home/jre > Default locale: en_US, platform encoding: UTF-8 > OS name: "mac os x", version: "10.13.3", arch: "x86_64", family: "mac" >Reporter: Artem Ervits >Assignee: Artem Ervits >Priority: Trivial > Fix For: 1.4.3 > > Attachments: HBASE-20091.branch-1.patch, branch-1.4.branch-1.4.patch > > > receiving warning > {noformat} > [WARNING] Some problems were encountered while building the effective model > for org.apache.hbase:hbase-shaded-check-invariants:pom:1.4.2 > [WARNING] 'build.plugins.plugin.version' for org.codehaus.mojo: is missing. @ > org.apache.hbase:hbase-shaded-check-invariants:[unknown-version], > /tmp/hbase-1.4.2/hbase-shaded/hbase-shaded-check-invariants/pom.xml, line > 161, column 15 > [WARNING] > [WARNING] It is highly recommended to fix these problems because they > threaten the stability of your build. > [WARNING] > [WARNING] For this reason, future Maven versions might no longer support > building such malformed projects.{noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20091) Fix for 'build.plugins.plugin.version' for org.codehaus.mojo: is missing.
[ https://issues.apache.org/jira/browse/HBASE-20091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377824#comment-16377824 ] Artem Ervits commented on HBASE-20091: -- [~yuzhih...@gmail.com] Yetus generated the following name for the patch `branch-1.4.branch-1.4.patch` > Fix for 'build.plugins.plugin.version' for org.codehaus.mojo: is missing. > - > > Key: HBASE-20091 > URL: https://issues.apache.org/jira/browse/HBASE-20091 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.1 > Environment: Apache Maven 3.5.2 > (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T03:58:13-04:00) > Maven home: /usr/local/apache-maven-3.5.2 > Java version: 1.8.0_162, vendor: Oracle Corporation > Java home: > /Library/Java/JavaVirtualMachines/jdk1.8.0_162.jdk/Contents/Home/jre > Default locale: en_US, platform encoding: UTF-8 > OS name: "mac os x", version: "10.13.3", arch: "x86_64", family: "mac" >Reporter: Artem Ervits >Assignee: Artem Ervits >Priority: Trivial > Fix For: 1.4.3 > > Attachments: HBASE-20091.v00.patch, branch-1.4.branch-1.4.patch > > > receiving warning > {noformat} > [WARNING] Some problems were encountered while building the effective model > for org.apache.hbase:hbase-shaded-check-invariants:pom:1.4.2 > [WARNING] 'build.plugins.plugin.version' for org.codehaus.mojo: is missing. @ > org.apache.hbase:hbase-shaded-check-invariants:[unknown-version], > /tmp/hbase-1.4.2/hbase-shaded/hbase-shaded-check-invariants/pom.xml, line > 161, column 15 > [WARNING] > [WARNING] It is highly recommended to fix these problems because they > threaten the stability of your build. > [WARNING] > [WARNING] For this reason, future Maven versions might no longer support > building such malformed projects.{noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20091) Fix for 'build.plugins.plugin.version' for org.codehaus.mojo: is missing.
[ https://issues.apache.org/jira/browse/HBASE-20091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Artem Ervits updated HBASE-20091: - Attachment: branch-1.4.branch-1.4.patch > Fix for 'build.plugins.plugin.version' for org.codehaus.mojo: is missing. > - > > Key: HBASE-20091 > URL: https://issues.apache.org/jira/browse/HBASE-20091 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.1 > Environment: Apache Maven 3.5.2 > (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T03:58:13-04:00) > Maven home: /usr/local/apache-maven-3.5.2 > Java version: 1.8.0_162, vendor: Oracle Corporation > Java home: > /Library/Java/JavaVirtualMachines/jdk1.8.0_162.jdk/Contents/Home/jre > Default locale: en_US, platform encoding: UTF-8 > OS name: "mac os x", version: "10.13.3", arch: "x86_64", family: "mac" >Reporter: Artem Ervits >Assignee: Artem Ervits >Priority: Trivial > Fix For: 1.4.3 > > Attachments: HBASE-20091.v00.patch, branch-1.4.branch-1.4.patch > > > receiving warning > {noformat} > [WARNING] Some problems were encountered while building the effective model > for org.apache.hbase:hbase-shaded-check-invariants:pom:1.4.2 > [WARNING] 'build.plugins.plugin.version' for org.codehaus.mojo: is missing. @ > org.apache.hbase:hbase-shaded-check-invariants:[unknown-version], > /tmp/hbase-1.4.2/hbase-shaded/hbase-shaded-check-invariants/pom.xml, line > 161, column 15 > [WARNING] > [WARNING] It is highly recommended to fix these problems because they > threaten the stability of your build. > [WARNING] > [WARNING] For this reason, future Maven versions might no longer support > building such malformed projects.{noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20091) Fix for 'build.plugins.plugin.version' for org.codehaus.mojo: is missing.
[ https://issues.apache.org/jira/browse/HBASE-20091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Artem Ervits updated HBASE-20091: - Summary: Fix for 'build.plugins.plugin.version' for org.codehaus.mojo: is missing. (was: Fix for 'build.plugins.plugin.version' for org.codehaus.mojo: is missing. @ org.apache.hbase:hbase-shaded-check-invariants:[unknown-version] in 1.4 Branch) > Fix for 'build.plugins.plugin.version' for org.codehaus.mojo: is missing. > - > > Key: HBASE-20091 > URL: https://issues.apache.org/jira/browse/HBASE-20091 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.1 > Environment: Apache Maven 3.5.2 > (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T03:58:13-04:00) > Maven home: /usr/local/apache-maven-3.5.2 > Java version: 1.8.0_162, vendor: Oracle Corporation > Java home: > /Library/Java/JavaVirtualMachines/jdk1.8.0_162.jdk/Contents/Home/jre > Default locale: en_US, platform encoding: UTF-8 > OS name: "mac os x", version: "10.13.3", arch: "x86_64", family: "mac" >Reporter: Artem Ervits >Assignee: Artem Ervits >Priority: Trivial > Fix For: 1.4.3 > > Attachments: HBASE-20091.v00.patch, branch-1.4.branch-1.4.patch > > > receiving warning > {noformat} > [WARNING] Some problems were encountered while building the effective model > for org.apache.hbase:hbase-shaded-check-invariants:pom:1.4.2 > [WARNING] 'build.plugins.plugin.version' for org.codehaus.mojo: is missing. @ > org.apache.hbase:hbase-shaded-check-invariants:[unknown-version], > /tmp/hbase-1.4.2/hbase-shaded/hbase-shaded-check-invariants/pom.xml, line > 161, column 15 > [WARNING] > [WARNING] It is highly recommended to fix these problems because they > threaten the stability of your build. > [WARNING] > [WARNING] For this reason, future Maven versions might no longer support > building such malformed projects.{noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20069) fix existing findbugs errors in hbase-server
[ https://issues.apache.org/jira/browse/HBASE-20069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377820#comment-16377820 ] Chia-Ping Tsai commented on HBASE-20069: {quote}We cannot have WakeupFlushThread be anonymous as whole point of its existence is being able to identify this explicit signaling class. {quote} We don't use the instanceof check anymore, so it is ok to remove the WakeupFlushThread class. The "==" check works well even if we use an anonymous class. {code:java} - if (fqe == null || fqe instanceof WakeupFlushThread) { + if (fqe == null || fqe == WAKEUPFLUSH_INSTANCE) {{code} {quote}Any +1s? {quote} Pardon me. I just woke up. +1 > fix existing findbugs errors in hbase-server > > > Key: HBASE-20069 > URL: https://issues.apache.org/jira/browse/HBASE-20069 > Project: HBase > Issue Type: Sub-task > Components: findbugs >Reporter: Sean Busbey >Assignee: stack >Priority: Critical > Fix For: 2.0.0-beta-2 > > Attachments: FindBugs Report.htm, HBASE-20069.branch-2.001.patch, > HBASE-20069.branch-2.002.patch, HBASE-20069.branch-2.003.patch, > HBASE-20069.branch-2.004.patch > > > now that findbugs is running on precommit we have some cleanup to do. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
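The sentinel pattern in the diff above can be shown in isolation. This is a minimal sketch, not the real MemStoreFlusher code: `FlushQueueEntry` and the class name are stand-ins, and the point is only that reference identity (`==`) against a single well-known instance works even when that instance is of an anonymous class.

```java
// Sketch of the patch's sentinel check: the marker entry no longer
// needs a named WakeupFlushThread class, because callers compare by
// reference identity rather than by type (instanceof).
public class WakeupSentinelSketch {
    public interface FlushQueueEntry {}

    // The sentinel can be an anonymous class; only its identity matters.
    public static final FlushQueueEntry WAKEUPFLUSH_INSTANCE =
        new FlushQueueEntry() {};

    public static boolean isWakeup(FlushQueueEntry fqe) {
        // Mirrors the patched condition:
        //   fqe == null || fqe == WAKEUPFLUSH_INSTANCE
        return fqe == null || fqe == WAKEUPFLUSH_INSTANCE;
    }
}
```

Any other `FlushQueueEntry`, even another instance of the same anonymous class shape, fails the `==` test, which is exactly why the named class becomes redundant.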
[jira] [Updated] (HBASE-20069) fix existing findbugs errors in hbase-server
[ https://issues.apache.org/jira/browse/HBASE-20069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-20069: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.0.0-beta-2 Status: Resolved (was: Patch Available) Pushed to branch-2 and master. > fix existing findbugs errors in hbase-server > > > Key: HBASE-20069 > URL: https://issues.apache.org/jira/browse/HBASE-20069 > Project: HBase > Issue Type: Sub-task > Components: findbugs >Reporter: Sean Busbey >Assignee: stack >Priority: Critical > Fix For: 2.0.0-beta-2 > > Attachments: FindBugs Report.htm, HBASE-20069.branch-2.001.patch, > HBASE-20069.branch-2.002.patch, HBASE-20069.branch-2.003.patch, > HBASE-20069.branch-2.004.patch > > > now that findbugs is running on precommit we have some cleanup to do. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-18133) Low-latency space quota size reports
[ https://issues.apache.org/jira/browse/HBASE-18133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377816#comment-16377816 ] Hadoop QA commented on HBASE-18133: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 12 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 55s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 16s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 26s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 6s{color} | {color:red} hbase-server in master has 20 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 5s{color} | {color:red} hbase-server: The patch generated 1 new + 543 unchanged - 3 fixed = 544 total (was 546) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 22s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 6m 31s{color} | {color:red} The patch causes 10 errors with Hadoop v2.6.5. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 8m 47s{color} | {color:red} The patch causes 10 errors with Hadoop v2.7.4. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 11m 5s{color} | {color:red} The patch causes 10 errors with Hadoop v3.0.0. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 6s{color} | {color:red} hbase-server generated 1 new + 20 unchanged - 0 fixed = 21 total (was 20) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 25s{color} | {color:green} hbase-hadoop-compat in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 30s{color} | {color:green} hbase-hadoop2-compat in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 27s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 60m 16s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hbase-server | | | Nullcheck of HRegion.rsServices at line 2736 of value previously dereferenced in
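The new FindBugs warning reported above ("Nullcheck of HRegion.rsServices at line 2736 of value previously dereferenced") names a standard bug pattern. The sketch below is illustrative only, not the HRegion code: `describe` reproduces the pattern FindBugs flags, where a value is dereferenced first and null-checked afterwards, so the check is either dead or arrives too late to prevent an NPE.

```java
// Illustration of the "nullcheck of value previously dereferenced"
// pattern (hypothetical code, not the actual HRegion.rsServices site).
public class NullcheckSketch {
    static String describe(Object rsServices) {
        String name = rsServices.toString(); // dereference happens here
        if (rsServices != null) {            // FindBugs: check after dereference
            return name;
        }
        return "none";
    }

    // The usual fix: perform the null check before any dereference.
    static String describeFixed(Object rsServices) {
        if (rsServices == null) {
            return "none";
        }
        return rsServices.toString();
    }
}
```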
[jira] [Commented] (HBASE-20069) fix existing findbugs errors in hbase-server
[ https://issues.apache.org/jira/browse/HBASE-20069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377814#comment-16377814 ] stack commented on HBASE-20069: --- Or, let me push this. I want it in so we have chance of a successful nightly build. Shout and I can roll in any other comments. Thanks for the great reviews. > fix existing findbugs errors in hbase-server > > > Key: HBASE-20069 > URL: https://issues.apache.org/jira/browse/HBASE-20069 > Project: HBase > Issue Type: Sub-task > Components: findbugs >Reporter: Sean Busbey >Assignee: stack >Priority: Critical > Attachments: FindBugs Report.htm, HBASE-20069.branch-2.001.patch, > HBASE-20069.branch-2.002.patch, HBASE-20069.branch-2.003.patch, > HBASE-20069.branch-2.004.patch > > > now that findbugs is running on precommit we have some cleanup to do. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20069) fix existing findbugs errors in hbase-server
[ https://issues.apache.org/jira/browse/HBASE-20069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377809#comment-16377809 ] stack commented on HBASE-20069: --- Any +1s? > fix existing findbugs errors in hbase-server > > > Key: HBASE-20069 > URL: https://issues.apache.org/jira/browse/HBASE-20069 > Project: HBase > Issue Type: Sub-task > Components: findbugs >Reporter: Sean Busbey >Assignee: stack >Priority: Critical > Attachments: FindBugs Report.htm, HBASE-20069.branch-2.001.patch, > HBASE-20069.branch-2.002.patch, HBASE-20069.branch-2.003.patch, > HBASE-20069.branch-2.004.patch > > > now that findbugs is running on precommit we have some cleanup to do. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20081) TestDisableTableProcedure sometimes hung in MiniHBaseCluster#waitUntilShutDown
[ https://issues.apache.org/jira/browse/HBASE-20081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377808#comment-16377808 ] stack commented on HBASE-20081: --- It is a daemon thread. That will not hold up the shutdown. > TestDisableTableProcedure sometimes hung in MiniHBaseCluster#waitUntilShutDown > -- > > Key: HBASE-20081 > URL: https://issues.apache.org/jira/browse/HBASE-20081 > Project: HBase > Issue Type: Test >Reporter: Ted Yu >Priority: Major > > https://builds.apache.org/job/HBase-2.0-hadoop3-tests/lastCompletedBuild/org.apache.hbase$hbase-server/testReport/org.apache.hadoop.hbase.master.procedure/TestDisableTableProcedure/org_apache_hadoop_hbase_master_procedure_TestDisableTableProcedure/ > was one recent occurrence. > I noticed two things in the test output: > {code} > 2018-02-25 18:12:45,053 WARN [Time-limited test-EventThread] > master.RegionServerTracker(136): asf912.gq1.ygridcore.net,45649,1519582305777 > is not online or isn't known to the master.The latter could be caused by a > DNS misconfiguration. > {code} > Since DNS misconfiguration was very unlikely on Apache Jenkins nodes, the > above should not have been logged. 
> {code} > 2018-02-25 18:16:51,531 WARN [master/asf912:0.Chore.1] > master.CatalogJanitor(127): Failed scan of catalog table > java.io.IOException: connection is closed > at > org.apache.hadoop.hbase.MetaTableAccessor.getMetaHTable(MetaTableAccessor.java:263) > at > org.apache.hadoop.hbase.MetaTableAccessor.scanMeta(MetaTableAccessor.java:761) > at > org.apache.hadoop.hbase.MetaTableAccessor.scanMeta(MetaTableAccessor.java:680) > at > org.apache.hadoop.hbase.MetaTableAccessor.scanMetaForTableRegions(MetaTableAccessor.java:675) > at > org.apache.hadoop.hbase.master.CatalogJanitor.getMergedRegionsAndSplitParents(CatalogJanitor.java:188) > at > org.apache.hadoop.hbase.master.CatalogJanitor.getMergedRegionsAndSplitParents(CatalogJanitor.java:140) > at > org.apache.hadoop.hbase.master.CatalogJanitor.scan(CatalogJanitor.java:246) > at > org.apache.hadoop.hbase.master.CatalogJanitor.chore(CatalogJanitor.java:119) > at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:186) > {code} > The above was possibly related to the lost region server. > I searched test output of successful run where none of the above two can be > seen. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
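The daemon-thread point above can be demonstrated in a few lines. This sketch is generic Java, not the thread under discussion: a thread marked with `setDaemon(true)` does not keep the JVM alive, so its existence cannot hold up a shutdown the way a lingering non-daemon thread would.

```java
// Sketch: a long-lived background thread marked as a daemon.
// The JVM (and thus a test/mini-cluster shutdown) will not wait for it.
public class DaemonSketch {
    public static Thread startDaemon() {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(60_000); // simulates long-running background work
            } catch (InterruptedException ignored) {
                // exit quietly if interrupted
            }
        });
        t.setDaemon(true); // must be set before start(); JVM exits without joining it
        t.start();
        return t;
    }
}
```

Note the ordering constraint: `setDaemon` throws `IllegalThreadStateException` if called after `start()`.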
[jira] [Commented] (HBASE-20091) Fix for 'build.plugins.plugin.version' for org.codehaus.mojo: is missing. @ org.apache.hbase:hbase-shaded-check-invariants:[unknown-version] in 1.4 Branch
[ https://issues.apache.org/jira/browse/HBASE-20091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377807#comment-16377807 ] Ted Yu commented on HBASE-20091: The patch filename needs to contain branch-1 e.g. 20091.branch-1.patch > Fix for 'build.plugins.plugin.version' for org.codehaus.mojo: is missing. @ > org.apache.hbase:hbase-shaded-check-invariants:[unknown-version] in 1.4 Branch > -- > > Key: HBASE-20091 > URL: https://issues.apache.org/jira/browse/HBASE-20091 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.1 > Environment: Apache Maven 3.5.2 > (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T03:58:13-04:00) > Maven home: /usr/local/apache-maven-3.5.2 > Java version: 1.8.0_162, vendor: Oracle Corporation > Java home: > /Library/Java/JavaVirtualMachines/jdk1.8.0_162.jdk/Contents/Home/jre > Default locale: en_US, platform encoding: UTF-8 > OS name: "mac os x", version: "10.13.3", arch: "x86_64", family: "mac" >Reporter: Artem Ervits >Assignee: Artem Ervits >Priority: Trivial > Fix For: 1.4.3 > > Attachments: HBASE-20091.v00.patch > > > receiving warning > {noformat} > [WARNING] Some problems were encountered while building the effective model > for org.apache.hbase:hbase-shaded-check-invariants:pom:1.4.2 > [WARNING] 'build.plugins.plugin.version' for org.codehaus.mojo: is missing. @ > org.apache.hbase:hbase-shaded-check-invariants:[unknown-version], > /tmp/hbase-1.4.2/hbase-shaded/hbase-shaded-check-invariants/pom.xml, line > 161, column 15 > [WARNING] > [WARNING] It is highly recommended to fix these problems because they > threaten the stability of your build. > [WARNING] > [WARNING] For this reason, future Maven versions might no longer support > building such malformed projects.{noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20091) Fix for 'build.plugins.plugin.version' for org.codehaus.mojo: is missing. @ org.apache.hbase:hbase-shaded-check-invariants:[unknown-version] in 1.4 Branch
[ https://issues.apache.org/jira/browse/HBASE-20091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377784#comment-16377784 ] Hadoop QA commented on HBASE-20091: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 6s{color} | {color:red} HBASE-20091 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/0.7.0/precommit-patchnames for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HBASE-20091 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12912154/HBASE-20091.v00.patch | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/11692/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.org | This message was automatically generated. > Fix for 'build.plugins.plugin.version' for org.codehaus.mojo: is missing. 
@ > org.apache.hbase:hbase-shaded-check-invariants:[unknown-version] in 1.4 Branch > -- > > Key: HBASE-20091 > URL: https://issues.apache.org/jira/browse/HBASE-20091 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.1 > Environment: Apache Maven 3.5.2 > (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T03:58:13-04:00) > Maven home: /usr/local/apache-maven-3.5.2 > Java version: 1.8.0_162, vendor: Oracle Corporation > Java home: > /Library/Java/JavaVirtualMachines/jdk1.8.0_162.jdk/Contents/Home/jre > Default locale: en_US, platform encoding: UTF-8 > OS name: "mac os x", version: "10.13.3", arch: "x86_64", family: "mac" >Reporter: Artem Ervits >Assignee: Artem Ervits >Priority: Trivial > Fix For: 1.4.3 > > Attachments: HBASE-20091.v00.patch > > > receiving warning > {noformat} > [WARNING] Some problems were encountered while building the effective model > for org.apache.hbase:hbase-shaded-check-invariants:pom:1.4.2 > [WARNING] 'build.plugins.plugin.version' for org.codehaus.mojo: is missing. @ > org.apache.hbase:hbase-shaded-check-invariants:[unknown-version], > /tmp/hbase-1.4.2/hbase-shaded/hbase-shaded-check-invariants/pom.xml, line > 161, column 15 > [WARNING] > [WARNING] It is highly recommended to fix these problems because they > threaten the stability of your build. > [WARNING] > [WARNING] For this reason, future Maven versions might no longer support > building such malformed projects.{noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20091) Fix for 'build.plugins.plugin.version' for org.codehaus.mojo: is missing. @ org.apache.hbase:hbase-shaded-check-invariants:[unknown-version] in 1.4 Branch
[ https://issues.apache.org/jira/browse/HBASE-20091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1637#comment-1637 ] Ted Yu commented on HBASE-20091: lgtm Please modify the subject to make it shorter. > Fix for 'build.plugins.plugin.version' for org.codehaus.mojo: is missing. @ > org.apache.hbase:hbase-shaded-check-invariants:[unknown-version] in 1.4 Branch > -- > > Key: HBASE-20091 > URL: https://issues.apache.org/jira/browse/HBASE-20091 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.1 > Environment: Apache Maven 3.5.2 > (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T03:58:13-04:00) > Maven home: /usr/local/apache-maven-3.5.2 > Java version: 1.8.0_162, vendor: Oracle Corporation > Java home: > /Library/Java/JavaVirtualMachines/jdk1.8.0_162.jdk/Contents/Home/jre > Default locale: en_US, platform encoding: UTF-8 > OS name: "mac os x", version: "10.13.3", arch: "x86_64", family: "mac" >Reporter: Artem Ervits >Assignee: Artem Ervits >Priority: Trivial > Fix For: 1.4.3 > > Attachments: HBASE-20091.v00.patch > > > receiving warning > {noformat} > [WARNING] Some problems were encountered while building the effective model > for org.apache.hbase:hbase-shaded-check-invariants:pom:1.4.2 > [WARNING] 'build.plugins.plugin.version' for org.codehaus.mojo: is missing. @ > org.apache.hbase:hbase-shaded-check-invariants:[unknown-version], > /tmp/hbase-1.4.2/hbase-shaded/hbase-shaded-check-invariants/pom.xml, line > 161, column 15 > [WARNING] > [WARNING] It is highly recommended to fix these problems because they > threaten the stability of your build. > [WARNING] > [WARNING] For this reason, future Maven versions might no longer support > building such malformed projects.{noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)