[jira] [Commented] (HBASE-20352) [Chore] Backport HBASE-18309 to branch-1
[ https://issues.apache.org/jira/browse/HBASE-20352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433444#comment-16433444 ] Chia-Ping Tsai commented on HBASE-20352: +1. Both the docs enhancement and the potential bug (should we shut down the pool when stopping the master?) can be addressed in follow-ups. I feel this patch is useful for large clusters, since the elapsed time of clearing the old files is significant. That is why it pays to document this nice improvement in our HBase book. > [Chore] Backport HBASE-18309 to branch-1 > > > Key: HBASE-20352 > URL: https://issues.apache.org/jira/browse/HBASE-20352 > Project: HBase > Issue Type: Improvement >Reporter: Reid Chan >Assignee: Reid Chan >Priority: Major > Attachments: HBASE-20352.branch-1.001.patch > > > Using multiple threads to scan directory and to clean old WALs -- This message was sent by Atlassian JIRA (v7.6.3#76005)
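The improvement being backported, scanning the oldWALs subdirectories with multiple threads instead of one chore thread, can be sketched roughly as below. The class name, the age check, and the shutdown handling are assumptions for illustration, not the patch's actual code; note the `shutdown()` call in the finally block, which corresponds to the follow-up question about stopping the pool with the master.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ParallelCleaner {
  /** Scan each subdirectory on its own pool thread; delete files older than cutoff. */
  public static int cleanOldFiles(Path root, long cutoffMillis, int threads) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    try (Stream<Path> dirs = Files.list(root)) {
      List<Future<Integer>> futures = dirs.filter(Files::isDirectory)
          .map(dir -> {
            Callable<Integer> task = () -> cleanDir(dir, cutoffMillis);
            return pool.submit(task);
          })
          .collect(Collectors.toList());
      int deleted = 0;
      for (Future<Integer> f : futures) {
        deleted += f.get(); // elapsed time is now bounded by the slowest subdirectory
      }
      return deleted;
    } finally {
      pool.shutdown(); // the follow-up question above: remember to stop the pool on shutdown
    }
  }

  private static int cleanDir(Path dir, long cutoffMillis) throws IOException {
    int deleted = 0;
    try (Stream<Path> files = Files.list(dir)) {
      for (Path f : (Iterable<Path>) files::iterator) {
        if (Files.isRegularFile(f) && Files.getLastModifiedTime(f).toMillis() < cutoffMillis) {
          Files.delete(f);
          deleted++;
        }
      }
    }
    return deleted;
  }
}
```

On a large cluster the win comes from overlapping directory listing and deletion across subdirectories, which is why the elapsed time of the chore drops noticeably.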
[jira] [Commented] (HBASE-20352) [Chore] Backport HBASE-18309 to branch-1
[ https://issues.apache.org/jira/browse/HBASE-20352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433429#comment-16433429 ] Yu Li commented on HBASE-20352: --- And I will wait for [~chia7712]'s +1 (we need binding +1s here although I appreciate Stephen's review) before committing. Thanks. > [Chore] Backport HBASE-18309 to branch-1 > > > Key: HBASE-20352 > URL: https://issues.apache.org/jira/browse/HBASE-20352 > Project: HBase > Issue Type: Improvement >Reporter: Reid Chan >Assignee: Reid Chan >Priority: Major > Attachments: HBASE-20352.branch-1.001.patch > > > Using multiple threads to scan directory and to clean old WALs -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20352) [Chore] Backport HBASE-18309 to branch-1
[ https://issues.apache.org/jira/browse/HBASE-20352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433428#comment-16433428 ] Yu Li commented on HBASE-20352: --- bq. In fact, i feel perplexed to address new comments when doing backport It's common that new reviewers find new issues, and I think it's a good thing if the code/docs truly improve with new comments. For old closed issues we can push addendum commits (for small changes) or open new JIRAs (for big changes), or in this case refine the release note doc, so don't worry (smile) bq. And RN in HBASE-18398 has the documentation as Yu said (HBASE-18309 actually) I think we should add some more words emphasizing the difference between 1 and 1.0 for {{hbase.cleaner.scan.dir.concurrent.size}}, which is missing in the current RN. Thanks. > [Chore] Backport HBASE-18309 to branch-1 > > > Key: HBASE-20352 > URL: https://issues.apache.org/jira/browse/HBASE-20352 > Project: HBase > Issue Type: Improvement >Reporter: Reid Chan >Assignee: Reid Chan >Priority: Major > Attachments: HBASE-20352.branch-1.001.patch > > > Using multiple threads to scan directory and to clean old WALs -- This message was sent by Atlassian JIRA (v7.6.3#76005)
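For reference, the knob under discussion is set in hbase-site.xml like any other property. The value shown is illustrative only; as the comment above notes, the default and accepted values differ between branches, which is exactly the gap in the release note being flagged.

```xml
<!-- Illustrative only: consult the HBASE-18309 release note (and this
     backport's release note) for the actual default and accepted values
     on your branch. -->
<property>
  <name>hbase.cleaner.scan.dir.concurrent.size</name>
  <!-- size of the thread pool used to scan cleaner directories -->
  <value>0.25</value>
</property>
```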
[jira] [Commented] (HBASE-18620) Secure bulkload job fails when HDFS umask has limited scope
[ https://issues.apache.org/jira/browse/HBASE-18620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433410#comment-16433410 ] Pankaj Kumar commented on HBASE-18620: -- Addressed the checkstyle finding in V2 patch. TestGlobalThrottler failure is not relevant. > Secure bulkload job fails when HDFS umask has limited scope > --- > > Key: HBASE-18620 > URL: https://issues.apache.org/jira/browse/HBASE-18620 > Project: HBase > Issue Type: Bug > Components: security >Reporter: Pankaj Kumar >Assignee: Pankaj Kumar >Priority: Major > Fix For: 1.5.0 > > Attachments: HBASE-18620-branch-1-v2.patch, HBASE-18620-branch-1.patch > > > By default "hbase.fs.tmp.dir" parameter value is > /user/$\{user.name}/hbase-staging. > RegionServer creates the staging directory (hbase.bulkload.staging.dir, > default value is hbase.fs.tmp.dir) during opening a region as below when > SecureBulkLoadEndpoint configured in hbase.coprocessor.region.classes, > {noformat} > drwx-- - hbase hadoop 0 2017-08-12 13:55 /user/xyz > drwx--x--x - hbase hadoop 0 2017-08-12 13:55 /user/xyz/hbase-staging > drwx--x--x - hbase hadoop 0 2017-08-12 13:55 > /user/xyz/hbase-staging/DONOTERASE > {noformat} > Here, > 1. RegionServer is started using "xyz" linux user. > 2. HDFS umask (fs.permissions.umask-mode) has been set as 077, so file/dir > permission will not be wider than 700. "/user/xyz" directory (doesn't exist > earlier) permission will be 700 and "/user/xyz/hbase-staging" will be 711 as > we are just setting permission of staging directory not the parent > directories which are created (fs.mkdirs()) by RegionServer. > Secure bulkload will fail as other user doesn't have EXECUTE permission on > "/user/xyz" directory. > *Steps to reproduce:* > == > 1. Configure org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint > in "hbase.coprocessor.region.classes" at client side. > 2. Login to machine as "root" linux user. > 3. 
kinit to any kerberos user except RegionServer kerberos user (say admin). > 4. ImportTSV will create the user temp directory (hbase.fs.tmp.dir) while > writing partition file, > {noformat} > drwxrwxrwx - admin hadoop 0 2017-08-12 14:52 /user/root > drwxrwxrwx - admin hadoop 0 2017-08-12 14:52 /user/root/hbase-staging > {noformat} > 4. During LoadIncrementalHFiles job, > - a. prepareBulkLoad() step - Random dir will be created by RegionServer > credentials, > {noformat} > drwxrwxrwx - hbase hadoop 0 2017-08-12 14:58 > /user/xyz/hbase-staging/hbase__t1__e67b23m2ghe6fkn1bqrb95ak41ferj8957cdhsep4ebmpohm22nvi54vh8g3qh1 > {noformat} > - b. secureBulkLoadHFiles() step - Family dir existence check and creation is > done by using client user credentials. Here client operation will fail as > below, > {noformat} > Caused by: > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): > Permission denied: user=admin, access=EXECUTE, > inode="/user/xyz/hbase-staging/admin__t1__e1f3m4r2prud9117thg5pdg91lkg0le0fdvtbbpg03epqg0f14lv54j8sqd8s0n6/cf1":hbase:hadoop:drwx-- > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:342) > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:279) > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:223) > at > com.huawei.hadoop.adapter.hdfs.plugin.HWAccessControlEnforce.checkPermission(HWAccessControlEnforce.java:69) > {noformat} > So the root cause is "admin" user doesn't have EXECUTE permission over > "/user/xyz", because RegionServer has created this intermediate parent > directory during opening (SecureBulkLoadEndpoint) a region where the default > permission is set as 700 based on the hdfs UMASK 077. > *Solution:* > = > However it can be handled by the creating /user/xyz manually and setting > sufficient permission explicitly. 
But we should handle this by setting > sufficient permission on the intermediate staging directories which are created by the > RegionServer. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
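The fix described above, creating each missing ancestor and then setting its permission explicitly instead of trusting the umask, can be sketched locally with `java.nio.file` as a stand-in for Hadoop's `FileSystem` API. The class and method names here are illustrative, not the actual patch:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class StagingDirPermissions {
  /**
   * Create every missing ancestor of target and explicitly chmod each one
   * (e.g. to rwx--x--x, i.e. 711) so that a restrictive umask such as 077
   * cannot leave an intermediate directory at 700, which would deny other
   * users the EXECUTE permission needed to traverse into the staging dir.
   */
  public static List<Path> mkdirsWithPerms(Path target, String perms) throws IOException {
    List<Path> toCreate = new ArrayList<>();
    // Walk up from target to the first ancestor that already exists.
    for (Path p = target; p != null && !Files.exists(p); p = p.getParent()) {
      toCreate.add(0, p); // keep topmost-first creation order
    }
    Set<PosixFilePermission> mode = PosixFilePermissions.fromString(perms);
    for (Path p : toCreate) {
      Files.createDirectory(p);
      Files.setPosixFilePermissions(p, mode); // explicit, umask-independent
    }
    return toCreate;
  }
}
```

The key point mirrors the bug report: `fs.mkdirs()` alone leaves intermediate directories at whatever the umask allows, so each created level needs its permission set explicitly.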
[jira] [Updated] (HBASE-18620) Secure bulkload job fails when HDFS umask has limited scope
[ https://issues.apache.org/jira/browse/HBASE-18620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pankaj Kumar updated HBASE-18620: - Attachment: HBASE-18620-branch-1-v2.patch > Secure bulkload job fails when HDFS umask has limited scope > --- > > Key: HBASE-18620 > URL: https://issues.apache.org/jira/browse/HBASE-18620 > Project: HBase > Issue Type: Bug > Components: security >Reporter: Pankaj Kumar >Assignee: Pankaj Kumar >Priority: Major > Fix For: 1.5.0 > > Attachments: HBASE-18620-branch-1-v2.patch, HBASE-18620-branch-1.patch > > > By default "hbase.fs.tmp.dir" parameter value is > /user/$\{user.name}/hbase-staging. > RegionServer creates the staging directory (hbase.bulkload.staging.dir, > default value is hbase.fs.tmp.dir) during opening a region as below when > SecureBulkLoadEndpoint configured in hbase.coprocessor.region.classes, > {noformat} > drwx-- - hbase hadoop 0 2017-08-12 13:55 /user/xyz > drwx--x--x - hbase hadoop 0 2017-08-12 13:55 /user/xyz/hbase-staging > drwx--x--x - hbase hadoop 0 2017-08-12 13:55 > /user/xyz/hbase-staging/DONOTERASE > {noformat} > Here, > 1. RegionServer is started using "xyz" linux user. > 2. HDFS umask (fs.permissions.umask-mode) has been set as 077, so file/dir > permission will not be wider than 700. "/user/xyz" directory (doesn't exist > earlier) permission will be 700 and "/user/xyz/hbase-staging" will be 711 as > we are just setting permission of staging directory not the parent > directories which are created (fs.mkdirs()) by RegionServer. > Secure bulkload will fail as other user doesn't have EXECUTE permission on > "/user/xyz" directory. > *Steps to reproduce:* > == > 1. Configure org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint > in "hbase.coprocessor.region.classes" at client side. > 2. Login to machine as "root" linux user. > 3. kinit to any kerberos user except RegionServer kerberos user (say admin). > 4. 
ImportTSV will create the user temp directory (hbase.fs.tmp.dir) while > writing partition file, > {noformat} > drwxrwxrwx - admin hadoop 0 2017-08-12 14:52 /user/root > drwxrwxrwx - admin hadoop 0 2017-08-12 14:52 /user/root/hbase-staging > {noformat} > 4. During LoadIncrementalHFiles job, > - a. prepareBulkLoad() step - Random dir will be created by RegionServer > credentials, > {noformat} > drwxrwxrwx - hbase hadoop 0 2017-08-12 14:58 > /user/xyz/hbase-staging/hbase__t1__e67b23m2ghe6fkn1bqrb95ak41ferj8957cdhsep4ebmpohm22nvi54vh8g3qh1 > {noformat} > - b. secureBulkLoadHFiles() step - Family dir existence check and creation is > done by using client user credentials. Here client operation will fail as > below, > {noformat} > Caused by: > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): > Permission denied: user=admin, access=EXECUTE, > inode="/user/xyz/hbase-staging/admin__t1__e1f3m4r2prud9117thg5pdg91lkg0le0fdvtbbpg03epqg0f14lv54j8sqd8s0n6/cf1":hbase:hadoop:drwx-- > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:342) > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:279) > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:223) > at > com.huawei.hadoop.adapter.hdfs.plugin.HWAccessControlEnforce.checkPermission(HWAccessControlEnforce.java:69) > {noformat} > So the root cause is "admin" user doesn't have EXECUTE permission over > "/user/xyz", because RegionServer has created this intermediate parent > directory during opening (SecureBulkLoadEndpoint) a region where the default > permission is set as 700 based on the hdfs UMASK 077. > *Solution:* > = > However it can be handled by the creating /user/xyz manually and setting > sufficient permission explicitly. 
But we should handle this by setting > sufficient permission on the intermediate staging directories which are created by the > RegionServer. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20243) [Shell] Add shell command to create a new table by cloning the existent table
[ https://issues.apache.org/jira/browse/HBASE-20243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guangxu Cheng updated HBASE-20243: -- Attachment: HBASE-20243.master.010.patch > [Shell] Add shell command to create a new table by cloning the existent table > - > > Key: HBASE-20243 > URL: https://issues.apache.org/jira/browse/HBASE-20243 > Project: HBase > Issue Type: Improvement > Components: shell >Reporter: Guangxu Cheng >Assignee: Guangxu Cheng >Priority: Minor > Fix For: 2.1.0 > > Attachments: HBASE-20243.master.001.patch, > HBASE-20243.master.002.patch, HBASE-20243.master.003.patch, > HBASE-20243.master.004.patch, HBASE-20243.master.005.patch, > HBASE-20243.master.006.patch, HBASE-20243.master.007.patch, > HBASE-20243.master.008.patch, HBASE-20243.master.008.patch, > HBASE-20243.master.009.patch, HBASE-20243.master.010.patch > > > In the production environment, we need to create a new table every day. The > schema and the split keys of the table are the same as that of yesterday's > table, only the name of the table is different. For example, > x_20180321,x_20180322 etc.But now there is no convenient command to > do this. So we may need such a command(clone_table) to create a new table by > cloning the existent table. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20068) Hadoopcheck project health check uses default maven repo instead of yetus managed ones
[ https://issues.apache.org/jira/browse/HBASE-20068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433381#comment-16433381 ] Reid Chan commented on HBASE-20068: --- LGTM. > Hadoopcheck project health check uses default maven repo instead of yetus > managed ones > -- > > Key: HBASE-20068 > URL: https://issues.apache.org/jira/browse/HBASE-20068 > Project: HBase > Issue Type: Bug > Components: community, test >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Major > Attachments: HBASE-20068.0.patch, HBASE-20068.1.patch > > > Recently had a precommit run fail hadoop check for all 3 versions with > {code} > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-install-plugin:2.5.2:install (default-install) > on project hbase-thrift: Failed to install metadata > org.apache.hbase:hbase-thrift:3.0.0-SNAPSHOT/maven-metadata.xml: Could not > parse metadata > /home/jenkins/.m2/repository/org/apache/hbase/hbase-thrift/3.0.0-SNAPSHOT/maven-metadata-local.xml: > in epilog non whitespace content is not allowed but got / (position: END_TAG > seen ...\n/... @25:2) -> [Help 1] > {code} > Looks like maven repo corruption. > Also the path {{/home/jenkins/.m2/repository}} means that those invocations > are using the jenkins user repo, which isn't safe since there are multiple > executors. either the plugin isn't using the yetus provided maven repo path > or our yetus invocation isn't telling yetus to provide its own maven repo > path. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20188) [TESTING] Performance
[ https://issues.apache.org/jira/browse/HBASE-20188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433379#comment-16433379 ] Duo Zhang commented on HBASE-20188: --- {quote} Scans in 2.0 are slower because scans are also like preads now. {quote} If it is a long scan then we will switch to stream later. > [TESTING] Performance > - > > Key: HBASE-20188 > URL: https://issues.apache.org/jira/browse/HBASE-20188 > Project: HBase > Issue Type: Umbrella > Components: Performance >Reporter: stack >Assignee: stack >Priority: Blocker > Fix For: 2.0.0 > > Attachments: CAM-CONFIG-V01.patch, HBASE-20188-xac.sh, > HBASE-20188.sh, HBase 2.0 performance evaluation - 8GB(1).pdf, HBase 2.0 > performance evaluation - 8GB.pdf, HBase 2.0 performance evaluation - Basic vs > None_ system settings.pdf, ITBLL2.5B_1.2.7vs2.0.0_cpu.png, > ITBLL2.5B_1.2.7vs2.0.0_gctime.png, ITBLL2.5B_1.2.7vs2.0.0_iops.png, > ITBLL2.5B_1.2.7vs2.0.0_load.png, ITBLL2.5B_1.2.7vs2.0.0_memheap.png, > ITBLL2.5B_1.2.7vs2.0.0_memstore.png, ITBLL2.5B_1.2.7vs2.0.0_ops.png, > ITBLL2.5B_1.2.7vs2.0.0_ops_NOT_summing_regions.png, YCSB_CPU.png, > YCSB_GC_TIME.png, YCSB_IN_MEMORY_COMPACTION=NONE.ops.png, YCSB_MEMSTORE.png, > YCSB_OPs.png, YCSB_in-memory-compaction=NONE.ops.png, YCSB_load.png, > flamegraph-1072.1.svg, flamegraph-1072.2.svg, hbase-env.sh, hbase-site.xml, > hbase-site.xml, lock.127.workloadc.20180402T200918Z.svg, > lock.2.memsize2.c.20180403T160257Z.svg, run_ycsb.sh, tree.txt, workloadx > > > How does 2.0.0 compare to old versions? Is it faster, slower? There is rumor > that it is much slower, that the problem is the asyncwal writing. Does > in-memory compaction slow us down or speed us up? What happens when you > enable offheaping? > Keep notes here in this umbrella issue. Need to be able to say something > about perf when 2.0.0 ships. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
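Duo's remark, that a scan starts out like a pread and switches to streaming once it turns out to be long, can be sketched as a tiny heuristic. The threshold, names, and trigger condition below are made up for illustration and are not HBase's actual scanner code:

```java
// A scan begins with positional reads (pread) and flips to a streaming read
// once it has returned "enough" bytes, so short scans stay cheap while long
// scans avoid paying a seek per block.
public class ScanReadHeuristic {
  public enum ReadType { PREAD, STREAM }

  private final long switchThresholdBytes;
  private long bytesReadSoFar;
  private ReadType current = ReadType.PREAD;

  public ScanReadHeuristic(long switchThresholdBytes) {
    this.switchThresholdBytes = switchThresholdBytes;
  }

  /** Record bytes returned so far; answer which read type the next block read should use. */
  public ReadType onBytesRead(long bytes) {
    bytesReadSoFar += bytes;
    if (current == ReadType.PREAD && bytesReadSoFar > switchThresholdBytes) {
      current = ReadType.STREAM; // long scan detected: switch to stream
    }
    return current;
  }
}
```

This is why "scans are also like preads now" only penalizes short scans; a long scan amortizes the switch.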
[jira] [Commented] (HBASE-19768) RegionServer startup failing when DN is dead
[ https://issues.apache.org/jira/browse/HBASE-19768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433377#comment-16433377 ] Duo Zhang commented on HBASE-19768: --- OK I know what is the problem now. In general we need to call recover lease to close the file. I'm not sure whether HDFS allows overwriting a file which is being written. If it can, then maybe we can bypass the recover lease, but the endFileLease must be called otherwise the file will be opened for ever unless we restart the RS. > RegionServer startup failing when DN is dead > > > Key: HBASE-19768 > URL: https://issues.apache.org/jira/browse/HBASE-19768 > Project: HBase > Issue Type: Bug >Reporter: Jean-Marc Spaggiari >Assignee: Duo Zhang >Priority: Critical > Fix For: 2.0.0-beta-2, 2.0.0 > > Attachments: HBASE-19768.patch > > > When starting HBase, if the datanode hosted on the same host is dead but not > yet detected by the namenode, HBase will fail to start > {code} > 515691223393/node8.distparser.com%2C16020%2C1515691223393.1515691238778 > failed, retry = 7 > org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: > syscall:getsockopt(..) failed: Connexion refusée: /192.168.23.2:50010 > at > org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(..)(Unknown > Source) > Caused by: > org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeConnectException: > syscall:getsockopt(..) failed: Connexion refusée > ... 1 more > {code} > and will also get stuck to stop: > {code} > hbase@node2:~/hbase-2.0.0-beta-1$ bin/stop-hbase.sh > stopping > hbase^C > hbase@node2:~/hbase-2.0.0-beta-1$ bin/stop-hbase.sh > stopping > hbase.. > SLF4J: Class path contains multiple SLF4J bindings. 
> SLF4J: Found binding in > [jar:file:/home/hbase/hbase-2.0.0-beta-1/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/home/hbase/hbase-2.0.0-beta-1/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an > explanation. > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] > {code} > The most interesting is that it seems to fail the same way even if the DN is > declared dead on HDFS side: > {code} > 515692041367/node8.distparser.com%2C16020%2C1515692041367.1515692057716 > failed, retry = 4 > org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: > syscall:getsockopt(..) failed: Connexion refusée: /192.168.23.2:50010 > at > org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(..)(Unknown > Source) > Caused by: > org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeConnectException: > syscall:getsockopt(..) failed: Connexion refusée > ... 1 more > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (HBASE-20352) [Chore] Backport HBASE-18309 to branch-1
[ https://issues.apache.org/jira/browse/HBASE-20352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433358#comment-16433358 ] Reid Chan edited comment on HBASE-20352 at 4/11/18 4:13 AM: Thanks to Chia-Ping and Stephen for the reviews. Replied to your questions in RB. Edited: saw the discussion in RB; if {{hbase.cleaner.scan.dir.concurrent.size}} is to be added to the hbase docs, should we update the docs in each related branch? And the RN in HBASE-18398 has the documentation, as Yu said. In fact, I feel perplexed about addressing new comments when doing a backport. was (Author: reidchan): Thanks to Chia-Ping and Stephen for the reviews. Replied to your questions in RB and attached v2 (mainly docs). > [Chore] Backport HBASE-18309 to branch-1 > > > Key: HBASE-20352 > URL: https://issues.apache.org/jira/browse/HBASE-20352 > Project: HBase > Issue Type: Improvement >Reporter: Reid Chan >Assignee: Reid Chan >Priority: Major > Attachments: HBASE-20352.branch-1.001.patch > > > Using multiple threads to scan directory and to clean old WALs -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19768) RegionServer startup failing when DN is dead
[ https://issues.apache.org/jira/browse/HBASE-19768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433369#comment-16433369 ] chenxu commented on HBASE-19768: The reason is like this: in our test env, if I kill a DN, the RS on the same host will roll the WAL. When creating a new WAL, if connecting to the local DN fails, an IOException is thrown. In the catch block the overwrite variable is set to true, and recoverFileLease is executed in the finally block. But lease recovery will fail. Logs in the RS look like this: util.FSHDFSUtils: Failed to recover lease, attempt=0... Logs in the NN look like this: File ... has not been closed. Lease recovery is in progress. Requests to the RS will block for a while; if we bypass the lease recovery, there is no blocking. Hope you can follow. > RegionServer startup failing when DN is dead > > > Key: HBASE-19768 > URL: https://issues.apache.org/jira/browse/HBASE-19768 > Project: HBase > Issue Type: Bug >Reporter: Jean-Marc Spaggiari >Assignee: Duo Zhang >Priority: Critical > Fix For: 2.0.0-beta-2, 2.0.0 > > Attachments: HBASE-19768.patch > > > When starting HBase, if the datanode hosted on the same host is dead but not > yet detected by the namenode, HBase will fail to start > {code} > 515691223393/node8.distparser.com%2C16020%2C1515691223393.1515691238778 > failed, retry = 7 > org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: > syscall:getsockopt(..) failed: Connexion refusée: /192.168.23.2:50010 > at > org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(..)(Unknown > Source) > Caused by: > org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeConnectException: > syscall:getsockopt(..) failed: Connexion refusée > ... 1 more > {code} > and will also get stuck to stop: > {code} > hbase@node2:~/hbase-2.0.0-beta-1$ bin/stop-hbase.sh > stopping > hbase^C > hbase@node2:~/hbase-2.0.0-beta-1$ bin/stop-hbase.sh > stopping > hbase.. > SLF4J: Class path contains multiple SLF4J bindings. 
> SLF4J: Found binding in > [jar:file:/home/hbase/hbase-2.0.0-beta-1/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/home/hbase/hbase-2.0.0-beta-1/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an > explanation. > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] > {code} > The most interesting is that it seems to fail the same way even if the DN is > declared dead on HDFS side: > {code} > 515692041367/node8.distparser.com%2C16020%2C1515692041367.1515692057716 > failed, retry = 4 > org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: > syscall:getsockopt(..) failed: Connexion refusée: /192.168.23.2:50010 > at > org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(..)(Unknown > Source) > Caused by: > org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeConnectException: > syscall:getsockopt(..) failed: Connexion refusée > ... 1 more > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20352) [Chore] Backport HBASE-18309 to branch-1
[ https://issues.apache.org/jira/browse/HBASE-20352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Reid Chan updated HBASE-20352: -- Attachment: (was: HBASE-20352.branch-1.002.patch) > [Chore] Backport HBASE-18309 to branch-1 > > > Key: HBASE-20352 > URL: https://issues.apache.org/jira/browse/HBASE-20352 > Project: HBase > Issue Type: Improvement >Reporter: Reid Chan >Assignee: Reid Chan >Priority: Major > Attachments: HBASE-20352.branch-1.001.patch > > > Using multiple threads to scan directory and to clean old WALs -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19768) RegionServer startup failing when DN is dead
[ https://issues.apache.org/jira/browse/HBASE-19768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433359#comment-16433359 ] Duo Zhang commented on HBASE-19768: --- Sorry I still can not follow. How? > RegionServer startup failing when DN is dead > > > Key: HBASE-19768 > URL: https://issues.apache.org/jira/browse/HBASE-19768 > Project: HBase > Issue Type: Bug >Reporter: Jean-Marc Spaggiari >Assignee: Duo Zhang >Priority: Critical > Fix For: 2.0.0-beta-2, 2.0.0 > > Attachments: HBASE-19768.patch > > > When starting HBase, if the datanode hosted on the same host is dead but not > yet detected by the namenode, HBase will fail to start > {code} > 515691223393/node8.distparser.com%2C16020%2C1515691223393.1515691238778 > failed, retry = 7 > org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: > syscall:getsockopt(..) failed: Connexion refusée: /192.168.23.2:50010 > at > org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(..)(Unknown > Source) > Caused by: > org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeConnectException: > syscall:getsockopt(..) failed: Connexion refusée > ... 1 more > {code} > and will also get stuck to stop: > {code} > hbase@node2:~/hbase-2.0.0-beta-1$ bin/stop-hbase.sh > stopping > hbase^C > hbase@node2:~/hbase-2.0.0-beta-1$ bin/stop-hbase.sh > stopping > hbase.. > SLF4J: Class path contains multiple SLF4J bindings. > SLF4J: Found binding in > [jar:file:/home/hbase/hbase-2.0.0-beta-1/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/home/hbase/hbase-2.0.0-beta-1/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an > explanation. 
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] > {code} > The most interesting is that it seems to fail the same way even if the DN is > declared dead on HDFS side: > {code} > 515692041367/node8.distparser.com%2C16020%2C1515692041367.1515692057716 > failed, retry = 4 > org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: > syscall:getsockopt(..) failed: Connexion refusée: /192.168.23.2:50010 > at > org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(..)(Unknown > Source) > Caused by: > org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeConnectException: > syscall:getsockopt(..) failed: Connexion refusée > ... 1 more > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20352) [Chore] Backport HBASE-18309 to branch-1
[ https://issues.apache.org/jira/browse/HBASE-20352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433358#comment-16433358 ] Reid Chan commented on HBASE-20352: --- Thanks for Chia-Ping and Stephen reviews. Replied your questions in RB and attached v2(mainly docs). > [Chore] Backport HBASE-18309 to branch-1 > > > Key: HBASE-20352 > URL: https://issues.apache.org/jira/browse/HBASE-20352 > Project: HBase > Issue Type: Improvement >Reporter: Reid Chan >Assignee: Reid Chan >Priority: Major > Attachments: HBASE-20352.branch-1.001.patch, > HBASE-20352.branch-1.002.patch > > > Using multiple threads to scan directory and to clean old WALs -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20352) [Chore] Backport HBASE-18309 to branch-1
[ https://issues.apache.org/jira/browse/HBASE-20352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Reid Chan updated HBASE-20352: -- Attachment: HBASE-20352.branch-1.002.patch > [Chore] Backport HBASE-18309 to branch-1 > > > Key: HBASE-20352 > URL: https://issues.apache.org/jira/browse/HBASE-20352 > Project: HBase > Issue Type: Improvement >Reporter: Reid Chan >Assignee: Reid Chan >Priority: Major > Attachments: HBASE-20352.branch-1.001.patch, > HBASE-20352.branch-1.002.patch > > > Using multiple threads to scan directory and to clean old WALs -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19768) RegionServer startup failing when DN is dead
[ https://issues.apache.org/jira/browse/HBASE-19768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433355#comment-16433355 ] chenxu commented on HBASE-19768:
{code:java}
DFSClient client = dfs.getClient();
String clientName = client.getClientName();
ClientProtocol namenode = client.getNamenode();
int createMaxRetries = conf.getInt(ASYNC_DFS_OUTPUT_CREATE_MAX_RETRIES,
    DEFAULT_ASYNC_DFS_OUTPUT_CREATE_MAX_RETRIES);
DatanodeInfo[] excludesNodes = EMPTY_DN_ARRAY;
for (int retry = 0;; retry++) {
  HdfsFileStatus stat;
{code}
If the file lease is already owned by the client, then on retry it can reuse it. > RegionServer startup failing when DN is dead > > > Key: HBASE-19768 > URL: https://issues.apache.org/jira/browse/HBASE-19768 > Project: HBase > Issue Type: Bug >Reporter: Jean-Marc Spaggiari >Assignee: Duo Zhang >Priority: Critical > Fix For: 2.0.0-beta-2, 2.0.0 > > Attachments: HBASE-19768.patch > > > When starting HBase, if the datanode hosted on the same host is dead but not > yet detected by the namenode, HBase will fail to start > {code} > 515691223393/node8.distparser.com%2C16020%2C1515691223393.1515691238778 > failed, retry = 7 > org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: > syscall:getsockopt(..) failed: Connexion refusée: /192.168.23.2:50010 > at > org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(..)(Unknown > Source) > Caused by: > org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeConnectException: > syscall:getsockopt(..) failed: Connexion refusée > ... 1 more > {code} > and will also get stuck to stop: > {code} > hbase@node2:~/hbase-2.0.0-beta-1$ bin/stop-hbase.sh > stopping > hbase^C > hbase@node2:~/hbase-2.0.0-beta-1$ bin/stop-hbase.sh > stopping > hbase.. > SLF4J: Class path contains multiple SLF4J bindings. 
> SLF4J: Found binding in > [jar:file:/home/hbase/hbase-2.0.0-beta-1/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/home/hbase/hbase-2.0.0-beta-1/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an > explanation. > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] > {code} > The most interesting is that it seems to fail the same way even if the DN is > declared dead on HDFS side: > {code} > 515692041367/node8.distparser.com%2C16020%2C1515692041367.1515692057716 > failed, retry = 4 > org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: > syscall:getsockopt(..) failed: Connexion refusée: /192.168.23.2:50010 > at > org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(..)(Unknown > Source) > Caused by: > org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeConnectException: > syscall:getsockopt(..) failed: Connexion refusée > ... 1 more > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19768) RegionServer startup failing when DN is dead
[ https://issues.apache.org/jira/browse/HBASE-19768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433345#comment-16433345 ] Duo Zhang commented on HBASE-19768: --- What do you mean by 'reuse' it? > RegionServer startup failing when DN is dead > > > Key: HBASE-19768 > URL: https://issues.apache.org/jira/browse/HBASE-19768 > Project: HBase > Issue Type: Bug >Reporter: Jean-Marc Spaggiari >Assignee: Duo Zhang >Priority: Critical > Fix For: 2.0.0-beta-2, 2.0.0 > > Attachments: HBASE-19768.patch > > > When starting HBase, if the datanode hosted on the same host is dead but not > yet detected by the namenode, HBase will fail to start > {code} > 515691223393/node8.distparser.com%2C16020%2C1515691223393.1515691238778 > failed, retry = 7 > org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: > syscall:getsockopt(..) failed: Connexion refusée: /192.168.23.2:50010 > at > org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(..)(Unknown > Source) > Caused by: > org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeConnectException: > syscall:getsockopt(..) failed: Connexion refusée > ... 1 more > {code} > and will also get stuck to stop: > {code} > hbase@node2:~/hbase-2.0.0-beta-1$ bin/stop-hbase.sh > stopping > hbase^C > hbase@node2:~/hbase-2.0.0-beta-1$ bin/stop-hbase.sh > stopping > hbase.. > SLF4J: Class path contains multiple SLF4J bindings. > SLF4J: Found binding in > [jar:file:/home/hbase/hbase-2.0.0-beta-1/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/home/hbase/hbase-2.0.0-beta-1/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an > explanation. 
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] > {code} > The most interesting is that it seems to fail the same way even if the DN is > declared dead on HDFS side: > {code} > 515692041367/node8.distparser.com%2C16020%2C1515692041367.1515692057716 > failed, retry = 4 > org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: > syscall:getsockopt(..) failed: Connexion refusée: /192.168.23.2:50010 > at > org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(..)(Unknown > Source) > Caused by: > org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeConnectException: > syscall:getsockopt(..) failed: Connexion refusée > ... 1 more > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19768) RegionServer startup failing when DN is dead
[ https://issues.apache.org/jira/browse/HBASE-19768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433343#comment-16433343 ] chenxu commented on HBASE-19768: *FanOutOneBlockAsyncDFSOutputHelper#createOutput* if overwrite mode is true, is there any need to recover the file lease? The client can reuse it. How about modifying it like this:
{code:java}
} finally {
  if (!succ) {
    if (futureList != null) {
      for (Future<Channel> f : futureList) {
        f.addListener(new FutureListener<Channel>() {
          @Override
          public void operationComplete(Future<Channel> future) throws Exception {
            if (future.isSuccess()) {
              future.getNow().close();
            }
          }
        });
      }
    }
    if (!overwrite) {
      endFileLease(client, stat.getFileId());
      fsUtils.recoverFileLease(dfs, new Path(src), conf, new CancelOnClose(client));
    }
  }
}
{code}
> RegionServer startup failing when DN is dead > > > Key: HBASE-19768 > URL: https://issues.apache.org/jira/browse/HBASE-19768 > Project: HBase > Issue Type: Bug >Reporter: Jean-Marc Spaggiari >Assignee: Duo Zhang >Priority: Critical > Fix For: 2.0.0-beta-2, 2.0.0 > > Attachments: HBASE-19768.patch > > > When starting HBase, if the datanode hosted on the same host is dead but not > yet detected by the namenode, HBase will fail to start > {code} > 515691223393/node8.distparser.com%2C16020%2C1515691223393.1515691238778 > failed, retry = 7 > org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: > syscall:getsockopt(..) failed: Connexion refusée: /192.168.23.2:50010 > at > org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(..)(Unknown > Source) > Caused by: > org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeConnectException: > syscall:getsockopt(..) failed: Connexion refusée > ... 1 more > {code} > and will also get stuck to stop: > {code} > hbase@node2:~/hbase-2.0.0-beta-1$ bin/stop-hbase.sh > stopping > hbase^C > hbase@node2:~/hbase-2.0.0-beta-1$ bin/stop-hbase.sh > stopping > hbase.. > SLF4J: Class path contains multiple SLF4J bindings. 
> SLF4J: Found binding in > [jar:file:/home/hbase/hbase-2.0.0-beta-1/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/home/hbase/hbase-2.0.0-beta-1/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an > explanation. > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] > {code} > The most interesting is that it seems to fail the same way even if the DN is > declared dead on HDFS side: > {code} > 515692041367/node8.distparser.com%2C16020%2C1515692041367.1515692057716 > failed, retry = 4 > org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: > syscall:getsockopt(..) failed: Connexion refusée: /192.168.23.2:50010 > at > org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(..)(Unknown > Source) > Caused by: > org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeConnectException: > syscall:getsockopt(..) failed: Connexion refusée > ... 1 more > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
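The control flow chenxu proposes can be exercised as a self-contained simulation. All names below (CleanupSketch, cleanupOnFailure, the action strings) are ours for illustration, not HBase's API; the point is only that every successfully connected channel is closed on failure, while the lease steps run solely when overwrite is false:

```java
import java.util.ArrayList;
import java.util.List;

public class CleanupSketch {
    // Simulates the proposed finally-block: on failure (!succ), close every
    // channel whose connect succeeded; touch the lease only when not overwriting.
    static List<String> cleanupOnFailure(boolean succ, boolean overwrite,
                                         List<Boolean> connectResults) {
        List<String> actions = new ArrayList<>();
        if (!succ) {
            for (boolean connected : connectResults) {
                if (connected) {
                    actions.add("close-channel");      // stands in for future.getNow().close()
                }
            }
            if (!overwrite) {
                actions.add("end-file-lease");         // stands in for endFileLease(...)
                actions.add("recover-file-lease");     // stands in for recoverFileLease(...)
            }
        }
        return actions;
    }

    public static void main(String[] args) {
        // overwrite == true: channels closed, lease untouched
        System.out.println(cleanupOnFailure(false, true, List.of(true, false)));
        // overwrite == false: channels closed, then lease ended and recovered
        System.out.println(cleanupOnFailure(false, false, List.of(true)));
    }
}
```

This mirrors the question in the comment: with overwrite enabled the client does not need the old lease, since it will simply re-create the file, so the recovery step may be skippable.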
[jira] [Comment Edited] (HBASE-20368) Fix RIT stuck when a rsgroup has no online servers but AM's pendingAssginQueue is cleared
[ https://issues.apache.org/jira/browse/HBASE-20368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433324#comment-16433324 ] Xiaolin Ha edited comment on HBASE-20368 at 4/11/18 2:50 AM: - [~stack] The case is that all the regionservers are stopped (it's better to wait for a while), and then we start one or more of them or add new servers to this rsgroup. There are some differences between this case and restarting all the regionservers in the rsgroup. '...regions on this rsgroup will be reassigned, but there is no available servers of this rsgroup' means that when all the regionservers in the rsgroup are offline, the assignment of regions will fail. The problem is that when some servers in the rsgroup come online again, the assignment of those regions is not resumed, because AM's pendingAssignQueue was cleared after the last assignment attempt even though no available servers were found. The stuck RIT is visible in the logs and the UI, and DML against tables in this rsgroup will also show it. 
Logs are: 2018-04-09,11:48:39,421 WARN org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=OFFLINE, location=c3-hadoop-tst-st23.bj,40100,1523172960034, table=t3, region=c8890704468083ceae6a6c3b5e24b968 2018-04-09,11:48:39,421 WARN org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=OFFLINE, location=c3-hadoop-tst-st26.bj,40100,1523172965147, table=hh:t3, region=97591999e282ac4dc54300693bba4263 2018-04-09,11:48:39,421 WARN org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=OFFLINE, location=c3-hadoop-tst-st23.bj,40100,1523172960034, table=t1, region=62bdc6fb8e9af1c21a323c9191313613 2018-04-09,11:48:39,421 WARN org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=OFFLINE, location=c3-hadoop-tst-st23.bj,40100,1523172960034, table=t2, region=9bdf2635e2d76c0d0388a5708ce21e3c 2018-04-09,11:48:39,421 WARN org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=OFFLINE, location=c3-hadoop-tst-st26.bj,40100,1523172965147, table=hh:t1, region=47ba2a3d6968ad09a79e05bdd6db5694 2018-04-09,11:48:39,421 WARN org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=OFFLINE, location=c3-hadoop-tst-st23.bj,40100,1523172960034, table=t2, region=ddef57619b45a023e076c3d5bcf30a04 2018-04-09,11:48:39,421 WARN org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=OFFLINE, location=c3-hadoop-tst-st26.bj,40100,1523172965147, table=t2, region=c0fb941b4c27fa04211e119494cf34d1
[jira] [Updated] (HBASE-19963) TestFSHDFSUtils assumes wrong default port for Hadoop 3.0.1+
[ https://issues.apache.org/jira/browse/HBASE-19963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HBASE-19963: Attachment: HBASE-19963.master.001.patch > TestFSHDFSUtils assumes wrong default port for Hadoop 3.0.1+ > > > Key: HBASE-19963 > URL: https://issues.apache.org/jira/browse/HBASE-19963 > Project: HBase > Issue Type: Task > Components: test >Reporter: Mike Drob >Assignee: Wei-Chiu Chuang >Priority: Major > Attachments: HBASE-19963.master.001.patch > > > We try to accommodate HDFS changing ports when testing if it is the same FS > in our tests: > https://github.com/apache/hbase/blob/master/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSHDFSUtils.java#L156-L162 > {code} > if (isHadoop3) { > // Hadoop 3.0.0 alpha1+ change default nn port to 9820. See HDFS-9427 > testIsSameHdfs(9820); > } else { > // pre hadoop 3.0.0 defaults to port 8020 > testIsSameHdfs(8020); > } > {code} > But in Hadoop 3.0.1, they decided to go back to the old port - see HDFS-12990. > So our tests will fail against the snapshot and against future releases. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
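Per HDFS-9427 and HDFS-12990 referenced in the description, the expected NameNode default port depends on the exact Hadoop version, not just the major number. A sketch of the check the test would need (class and method names are hypothetical, not HBase's):

```java
public class DefaultNnPort {
    // Expected HDFS NameNode default port for a given Hadoop version:
    // 8020 before 3.0.0, 9820 only for the 3.0.0 release line itself
    // (HDFS-9427), and 8020 again from 3.0.1 onward (HDFS-12990).
    static int expectedDefaultPort(int major, int minor, int patch) {
        if (major == 3 && minor == 0 && patch == 0) {
            return 9820;
        }
        return 8020;
    }

    public static void main(String[] args) {
        System.out.println(expectedDefaultPort(2, 7, 3)); // pre-3.0.0 default
        System.out.println(expectedDefaultPort(3, 0, 0)); // the 9820 window
        System.out.println(expectedDefaultPort(3, 0, 1)); // reverted by HDFS-12990
    }
}
```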
[jira] [Commented] (HBASE-20368) Fix RIT stuck when a rsgroup has no online servers but AM's pendingAssginQueue is cleared
[ https://issues.apache.org/jira/browse/HBASE-20368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433324#comment-16433324 ] Xiaolin Ha commented on HBASE-20368: [~stack]The case is that all the regionservers are stopped(It's better to wait for a while), and then we start one or more of them or add new servers to this rsgroup. There are some differences between this case and restarting all the regions servers in the rsgroup. '...regions on this rsgroup will be reassigned, but there is no available servers of this rsgroup' It means when all the regionservers in the rsgroup are offline, the assginment of regions will be failed. But the problem is that when some servers in the rsgroup are online again, the assignment of the regions will not be continued because AM's pendingAssginQueue was cleared after the last assginment though no available servers were found. We can see the stuck of RIT by logs or UI, and DML of tables in this rsgroup will also show it. Logs are: 2018-04-09,11:48:39,421 WARN org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=OFFLINE, location=c3-hadoop-tst-st23.bj,40100,1523172960034, table=t3, region=c8890704468083ceae6a6c3b5e24b968 2018-04-09,11:48:39,421 WARN org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=OFFLINE, location=c3-hadoop-tst-st26.bj,40100,1523172965147, table=hh:t3, region=97591999e282ac4dc54300693bba4263 2018-04-09,11:48:39,421 WARN org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=OFFLINE, location=c3-hadoop-tst-st23.bj,40100,1523172960034, table=t1, region=62bdc6fb8e9af1c21a323c9191313613 2018-04-09,11:48:39,421 WARN org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=OFFLINE, location=c3-hadoop-tst-st23.bj,40100,1523172960034, table=t2, region=9bdf2635e2d76c0d0388a5708ce21e3c 2018-04-09,11:48:39,421 WARN 
org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=OFFLINE, location=c3-hadoop-tst-st26.bj,40100,1523172965147, table=hh:t1, region=47ba2a3d6968ad09a79e05bdd6db5694 2018-04-09,11:48:39,421 WARN org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=OFFLINE, location=c3-hadoop-tst-st23.bj,40100,1523172960034, table=t2, region=ddef57619b45a023e076c3d5bcf30a04 2018-04-09,11:48:39,421 WARN org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=OFFLINE, location=c3-hadoop-tst-st26.bj,40100,1523172965147, table=t2, region=c0fb941b4c27fa04211e119494cf34d1 > Fix RIT stuck when a rsgroup has no online servers but AM's > pendingAssginQueue is cleared > - > > Key: HBASE-20368 > URL: https://issues.apache.org/jira/browse/HBASE-20368 > Project: HBase > Issue Type: Bug > Components: rsgroup >Affects Versions: 2.0.0 >Reporter: Xiaolin Ha >Assignee: Xiaolin Ha >Priority: Major > Attachments: HBASE-20368.branch-2.0.001.patch > > > This error can be reproduced by shutting down all servers in a rsgroups and > starting them soon afterwards. > The regions on this rsgroup will be reassigned, but there is no available > servers of this rsgroup. > They will be added to AM's pendingAssginQueue, which AM will clear regardless > of the result of assigning in this case. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
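The fix Xiaolin Ha describes amounts to retaining, rather than dropping, queue entries that could not be assigned. A toy model of that retry behavior (hypothetical names, not the real AssignmentManager API):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class PendingAssignSketch {
    final Deque<String> pendingAssignQueue = new ArrayDeque<>();

    // One processing round: with no online server in the group, each region
    // goes back on the queue so a later round can retry once servers rejoin,
    // instead of being dropped the way the reported bug drops them.
    void processQueue(List<String> onlineServers) {
        int n = pendingAssignQueue.size();
        for (int i = 0; i < n; i++) {
            String region = pendingAssignQueue.poll();
            if (onlineServers.isEmpty()) {
                pendingAssignQueue.add(region); // retain for the next round
            }
            // otherwise the region would be handed to round-robin assignment here
        }
    }
}
```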
[jira] [Updated] (HBASE-15291) FileSystem not closed in secure bulkLoad
[ https://issues.apache.org/jira/browse/HBASE-15291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-15291: --- Fix Version/s: 2.0.1 1.4.4 1.3.3 1.2.7 1.5.0 3.0.0 > FileSystem not closed in secure bulkLoad > > > Key: HBASE-15291 > URL: https://issues.apache.org/jira/browse/HBASE-15291 > Project: HBase > Issue Type: Bug >Affects Versions: 1.0.2, 0.98.16.1 >Reporter: Yong Zhang >Assignee: Ashish Singhi >Priority: Major > Fix For: 3.0.0, 1.5.0, 1.2.7, 1.3.3, 1.4.4, 2.0.1 > > Attachments: HBASE-15291-revert-master.patch, HBASE-15291.001.patch, > HBASE-15291.002.patch, HBASE-15291.003.patch, HBASE-15291.004.patch, > HBASE-15291.addendum, HBASE-15291.patch, HBASE-15291.v1.patch, > HBASE-15291.v2.patch, HBASE-15291.v2.patch, patch > > > FileSystem not closed in secure bulkLoad after bulkLoad finish, it will > cause memory used more and more if too many bulkLoad . -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-17554) Figure 2.0.0 Hadoop Version Support; update refguide
[ https://issues.apache.org/jira/browse/HBASE-17554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433265#comment-16433265 ] Mike Drob commented on HBASE-17554: --- +1 > Figure 2.0.0 Hadoop Version Support; update refguide > > > Key: HBASE-17554 > URL: https://issues.apache.org/jira/browse/HBASE-17554 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: stack >Assignee: stack >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HBASE-17554.master.001.patch > > > Refguide has hbase-2.0.0 working with 2.6.1+ and 2.7.1+ but I just tried tip > of master against hadoop-2.7.3 and it fails with a netty version complaint > (same as up in HADOOP-13866 which is trying to update netty for hadoop3 and > 2.9?). This issue is about determining proper hadoop versions we work with > when hbase2 ships. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19963) TestFSHDFSUtils assumes wrong default port for Hadoop 3.0.1+
[ https://issues.apache.org/jira/browse/HBASE-19963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433246#comment-16433246 ] Mike Drob commented on HBASE-19963: --- I thought I read there was an issue with the 3.0.1 client jars. maybe we need to wait for .2? > TestFSHDFSUtils assumes wrong default port for Hadoop 3.0.1+ > > > Key: HBASE-19963 > URL: https://issues.apache.org/jira/browse/HBASE-19963 > Project: HBase > Issue Type: Task > Components: test >Reporter: Mike Drob >Assignee: Wei-Chiu Chuang >Priority: Major > > We try to accommodate HDFS changing ports when testing if it is the same FS > in our tests: > https://github.com/apache/hbase/blob/master/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSHDFSUtils.java#L156-L162 > {code} > if (isHadoop3) { > // Hadoop 3.0.0 alpha1+ change default nn port to 9820. See HDFS-9427 > testIsSameHdfs(9820); > } else { > // pre hadoop 3.0.0 defaults to port 8020 > testIsSameHdfs(8020); > } > {code} > But in Hadoop 3.0.1, they decided to go back to the old port - see HDFS-12990. > So our tests will fail against the snapshot and against future releases. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20182) Can not locate region after split and merge
[ https://issues.apache.org/jira/browse/HBASE-20182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433241#comment-16433241 ] stack commented on HBASE-20182: --- I meant this finding of yours on review of the code => "And I checked the code, now the flag will only be set to split parent. In DisableTableProcedure, finally we will create bunch of UnassignProcedures and at the last of the procedure we will set the region state to CLOSED, and will not change the offLine flag. So I think the check here is redundant with the isSplitParent check." It looks like we can strip the second sentence from the comment on 'private boolean offLine = false;'. Later we should remove these 'state' flags from regions altogether including the 'spliit' one. 'offline' is a table attribute kept in the table enabled/disabled flag... and whether region is CLOSED or OPEN. TODO is undo the split flag similarly. Thanks. > Can not locate region after split and merge > --- > > Key: HBASE-20182 > URL: https://issues.apache.org/jira/browse/HBASE-20182 > Project: HBase > Issue Type: Bug > Components: Region Assignment >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HBASE-20182-UT.patch, HBASE-20182-addendum.patch, > HBASE-20182-v1.patch, HBASE-20182-v2.patch, HBASE-20182-v3.patch, > HBASE-20182-v3.patch, HBASE-20182-v3.patch, HBASE-20182.patch > > > When implementing serial replication feature in HBASE-20046, I found that > when splitting a region, we will not remove the parent region, instead we > will mark it offline. > And when locating a region, we will only scan one row so if we locate to the > offlined region then we are dead. > This will not happen for splitting, since one of the new daughter regions > have the same start row with the parent region, and the timestamp is greater > so when doing reverse scan we will always hit the daughter first. > But if we also consider merge then bad things happen. 
Consider we have two > regions A and B, we split B to C and D, and then merge A and C to E, then > ideally the regions should be E and D, but actually the regions in meta will > be E, B and D, and they all have different start rows. If you use a row > within the range of old region C, then we will always locate to B and throw > exception. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
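The E/B/D scenario above can be reproduced with a toy meta table, where TreeMap.floorEntry stands in for the single-row reverse scan; the stale split-parent row B shadows the merged region E for any row key in old C's range:

```java
import java.util.TreeMap;

public class MetaLocateSketch {
    // Locate a region by a single reverse lookup on start key, as described
    // in the issue: the greatest start key <= the row being located.
    static String locate(TreeMap<String, String> meta, String row) {
        return meta.floorEntry(row).getValue();
    }

    public static void main(String[] args) {
        TreeMap<String, String> meta = new TreeMap<>();
        meta.put("", "E");   // merged region E ["", "c")
        meta.put("b", "B");  // offline split parent B ["b", "") -- stale row
        meta.put("c", "D");  // daughter D ["c", "")
        // Row "bq" now belongs to E, but the floor lookup lands on stale B:
        System.out.println(locate(meta, "bq")); // → B, not E
    }
}
```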
[jira] [Commented] (HBASE-20182) Can not locate region after split and merge
[ https://issues.apache.org/jira/browse/HBASE-20182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433237#comment-16433237 ] Duo Zhang commented on HBASE-20182: --- Thanks [~stack]. And pardon me, what is 'offline comment'? > Can not locate region after split and merge > --- > > Key: HBASE-20182 > URL: https://issues.apache.org/jira/browse/HBASE-20182 > Project: HBase > Issue Type: Bug > Components: Region Assignment >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HBASE-20182-UT.patch, HBASE-20182-addendum.patch, > HBASE-20182-v1.patch, HBASE-20182-v2.patch, HBASE-20182-v3.patch, > HBASE-20182-v3.patch, HBASE-20182-v3.patch, HBASE-20182.patch > > > When implementing serial replication feature in HBASE-20046, I found that > when splitting a region, we will not remove the parent region, instead we > will mark it offline. > And when locating a region, we will only scan one row so if we locate to the > offlined region then we are dead. > This will not happen for splitting, since one of the new daughter regions > have the same start row with the parent region, and the timestamp is greater > so when doing reverse scan we will always hit the daughter first. > But if we also consider merge then bad things happen. Consider we have two > regions A and B, we split B to C and D, and then merge A and C to E, then > ideally the regions should be E and D, but actually the regions in meta will > be E, B and D, and they all have different start rows. If you use a row > within the range of old region C, then we will always locate to B and throw > exception. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-17554) Figure 2.0.0 Hadoop Version Support; update refguide
[ https://issues.apache.org/jira/browse/HBASE-17554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433231#comment-16433231 ] stack commented on HBASE-17554: --- Thats nice that only the book built. Quick review appreciated. > Figure 2.0.0 Hadoop Version Support; update refguide > > > Key: HBASE-17554 > URL: https://issues.apache.org/jira/browse/HBASE-17554 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: stack >Assignee: stack >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HBASE-17554.master.001.patch > > > Refguide has hbase-2.0.0 working with 2.6.1+ and 2.7.1+ but I just tried tip > of master against hadoop-2.7.3 and it fails with a netty version complaint > (same as up in HADOOP-13866 which is trying to update netty for hadoop3 and > 2.9?). This issue is about determining proper hadoop versions we work with > when hbase2 ships. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-17554) Figure 2.0.0 Hadoop Version Support; update refguide
[ https://issues.apache.org/jira/browse/HBASE-17554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433214#comment-16433214 ] Hadoop QA commented on HBASE-17554: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 47s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} refguide {color} | {color:blue} 3m 23s{color} | {color:blue} branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:blue}0{color} | {color:blue} refguide {color} | {color:blue} 3m 17s{color} | {color:blue} patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 10s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 12m 2s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f | | JIRA Issue | HBASE-17554 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12918485/HBASE-17554.master.001.patch | | Optional Tests | asflicense refguide | | uname | Linux c69924f4114b 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 95ca38a539 | | maven | version: Apache Maven 3.5.3 (3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) | | refguide | https://builds.apache.org/job/PreCommit-HBASE-Build/12383/artifact/patchprocess/branch-site/book.html | | refguide | https://builds.apache.org/job/PreCommit-HBASE-Build/12383/artifact/patchprocess/patch-site/book.html | | Max. process+thread count | 83 (vs. ulimit of 1) | | modules | C: . U: . | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/12383/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.org | This message was automatically generated. > Figure 2.0.0 Hadoop Version Support; update refguide > > > Key: HBASE-17554 > URL: https://issues.apache.org/jira/browse/HBASE-17554 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: stack >Assignee: stack >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HBASE-17554.master.001.patch > > > Refguide has hbase-2.0.0 working with 2.6.1+ and 2.7.1+ but I just tried tip > of master against hadoop-2.7.3 and it fails with a netty version complaint > (same as up in HADOOP-13866 which is trying to update netty for hadoop3 and > 2.9?). This issue is about determining proper hadoop versions we work with > when hbase2 ships. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20182) Can not locate region after split and merge
[ https://issues.apache.org/jira/browse/HBASE-20182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433213#comment-16433213 ] stack commented on HBASE-20182: --- To be clear, +1 for branch-2.0. On commit update the offline comment w/ your notes above? Yes, this flag should go away. > Can not locate region after split and merge > --- > > Key: HBASE-20182 > URL: https://issues.apache.org/jira/browse/HBASE-20182 > Project: HBase > Issue Type: Bug > Components: Region Assignment >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HBASE-20182-UT.patch, HBASE-20182-addendum.patch, > HBASE-20182-v1.patch, HBASE-20182-v2.patch, HBASE-20182-v3.patch, > HBASE-20182-v3.patch, HBASE-20182-v3.patch, HBASE-20182.patch > > > When implementing serial replication feature in HBASE-20046, I found that > when splitting a region, we will not remove the parent region, instead we > will mark it offline. > And when locating a region, we will only scan one row so if we locate to the > offlined region then we are dead. > This will not happen for splitting, since one of the new daughter regions > have the same start row with the parent region, and the timestamp is greater > so when doing reverse scan we will always hit the daughter first. > But if we also consider merge then bad things happen. Consider we have two > regions A and B, we split B to C and D, and then merge A and C to E, then > ideally the regions should be E and D, but actually the regions in meta will > be E, B and D, and they all have different start rows. If you use a row > within the range of old region C, then we will always locate to B and throw > exception. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20182) Can not locate region after split and merge
[ https://issues.apache.org/jira/browse/HBASE-20182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433211#comment-16433211 ] stack commented on HBASE-20182: --- Ok by me [~Apache9] +1 Nice work. > Can not locate region after split and merge > --- > > Key: HBASE-20182 > URL: https://issues.apache.org/jira/browse/HBASE-20182 > Project: HBase > Issue Type: Bug > Components: Region Assignment >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HBASE-20182-UT.patch, HBASE-20182-addendum.patch, > HBASE-20182-v1.patch, HBASE-20182-v2.patch, HBASE-20182-v3.patch, > HBASE-20182-v3.patch, HBASE-20182-v3.patch, HBASE-20182.patch > > > When implementing serial replication feature in HBASE-20046, I found that > when splitting a region, we will not remove the parent region, instead we > will mark it offline. > And when locating a region, we will only scan one row so if we locate to the > offlined region then we are dead. > This will not happen for splitting, since one of the new daughter regions > have the same start row with the parent region, and the timestamp is greater > so when doing reverse scan we will always hit the daughter first. > But if we also consider merge then bad things happen. Consider we have two > regions A and B, we split B to C and D, and then merge A and C to E, then > ideally the regions should be E and D, but actually the regions in meta will > be E, B and D, and they all have different start rows. If you use a row > within the range of old region C, then we will always locate to B and throw > exception. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HBASE-19963) TestFSHDFSUtils assumes wrong default port for Hadoop 3.0.1+
[ https://issues.apache.org/jira/browse/HBASE-19963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reassigned HBASE-19963: --- Assignee: Wei-Chiu Chuang > TestFSHDFSUtils assumes wrong default port for Hadoop 3.0.1+ > > > Key: HBASE-19963 > URL: https://issues.apache.org/jira/browse/HBASE-19963 > Project: HBase > Issue Type: Task > Components: test >Reporter: Mike Drob >Assignee: Wei-Chiu Chuang >Priority: Major > > We try to accommodate HDFS changing ports when testing if it is the same FS > in our tests: > https://github.com/apache/hbase/blob/master/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSHDFSUtils.java#L156-L162 > {code} > if (isHadoop3) { > // Hadoop 3.0.0 alpha1+ change default nn port to 9820. See HDFS-9427 > testIsSameHdfs(9820); > } else { > // pre hadoop 3.0.0 defaults to port 8020 > testIsSameHdfs(8020); > } > {code} > But in Hadoop 3.0.1, they decided to go back to the old port - see HDFS-12990. > So our tests will fail against the snapshot and against future releases. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19997) [rolling upgrade] 1.x => 2.x
[ https://issues.apache.org/jira/browse/HBASE-19997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433204#comment-16433204 ] stack commented on HBASE-19997: --- Moved this out. Its not being worked on actively. Perhaps it'll get love in 2.1. > [rolling upgrade] 1.x => 2.x > > > Key: HBASE-19997 > URL: https://issues.apache.org/jira/browse/HBASE-19997 > Project: HBase > Issue Type: Umbrella >Reporter: stack >Priority: Blocker > Fix For: 2.1.0 > > > An umbrella issue of issues needed so folks can do a rolling upgrade from > hbase-1.x to hbase-2.x. > (Recent) Notables: > * hbase-1.x can't read hbase-2.x WALs -- hbase-1.x doesn't know the > AsyncProtobufLogWriter class used writing the WAL -- see > https://issues.apache.org/jira/browse/HBASE-19166?focusedCommentId=16362897=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16362897 > for exception. > ** Might be ok... means WAL split fails on an hbase1 RS... must wait till an > hbase-2.x RS picks up the WAL for it to be split. > * hbase-1 can't open regions from tables created by hbase-2; it can't find > the Table descriptor. See > https://issues.apache.org/jira/browse/HBASE-19116?focusedCommentId=16363276=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16363276 > ** This might be ok if the tables we are doing rolling upgrade over were > written with hbase-1. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-19997) [rolling upgrade] 1.x => 2.x
[ https://issues.apache.org/jira/browse/HBASE-19997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-19997: -- Fix Version/s: (was: 2.0.0) 2.1.0 > [rolling upgrade] 1.x => 2.x > > > Key: HBASE-19997 > URL: https://issues.apache.org/jira/browse/HBASE-19997 > Project: HBase > Issue Type: Umbrella >Reporter: stack >Priority: Blocker > Fix For: 2.1.0 > > > An umbrella issue of issues needed so folks can do a rolling upgrade from > hbase-1.x to hbase-2.x. > (Recent) Notables: > * hbase-1.x can't read hbase-2.x WALs -- hbase-1.x doesn't know the > AsyncProtobufLogWriter class used to write the WAL -- see > https://issues.apache.org/jira/browse/HBASE-19166?focusedCommentId=16362897&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16362897 > for exception. > ** Might be ok... means WAL split fails on an hbase1 RS... must wait till an > hbase-2.x RS picks up the WAL for it to be split. > * hbase-1 can't open regions from tables created by hbase-2; it can't find > the Table descriptor. See > https://issues.apache.org/jira/browse/HBASE-19116?focusedCommentId=16363276&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16363276 > ** This might be ok if the tables we are doing rolling upgrade over were > written with hbase-1. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-17554) Figure 2.0.0 Hadoop Version Support; update refguide
[ https://issues.apache.org/jira/browse/HBASE-17554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-17554: -- Component/s: documentation > Figure 2.0.0 Hadoop Version Support; update refguide > > > Key: HBASE-17554 > URL: https://issues.apache.org/jira/browse/HBASE-17554 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: stack >Assignee: stack >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HBASE-17554.master.001.patch > > > Refguide has hbase-2.0.0 working with 2.6.1+ and 2.7.1+ but I just tried tip > of master against hadoop-2.7.3 and it fails with a netty version complaint > (same as up in HADOOP-13866 which is trying to update netty for hadoop3 and > 2.9?). This issue is about determining proper hadoop versions we work with > when hbase2 ships. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19963) TestFSHDFSUtils assumes wrong default port for Hadoop 3.0.1+
[ https://issues.apache.org/jira/browse/HBASE-19963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433198#comment-16433198 ] Wei-Chiu Chuang commented on HBASE-19963: - Hi Mike, I'm interested in this issue since it pertains to HDFS. I think it makes sense to update hadoop-three.version in pom.xml to 3.0.1, and restore the existing NameNode port. > TestFSHDFSUtils assumes wrong default port for Hadoop 3.0.1+ > > > Key: HBASE-19963 > URL: https://issues.apache.org/jira/browse/HBASE-19963 > Project: HBase > Issue Type: Task > Components: test >Reporter: Mike Drob >Priority: Major > > We try to accommodate HDFS changing ports when testing if it is the same FS > in our tests: > https://github.com/apache/hbase/blob/master/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSHDFSUtils.java#L156-L162 > {code} > if (isHadoop3) { > // Hadoop 3.0.0 alpha1+ change default nn port to 9820. See HDFS-9427 > testIsSameHdfs(9820); > } else { > // pre hadoop 3.0.0 defaults to port 8020 > testIsSameHdfs(8020); > } > {code} > But in Hadoop 3.0.1, they decided to go back to the old port - see HDFS-12990. > So our tests will fail against the snapshot and against future releases. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-17554) Figure 2.0.0 Hadoop Version Support; update refguide
[ https://issues.apache.org/jira/browse/HBASE-17554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-17554: -- Assignee: stack Status: Patch Available (was: Open) > Figure 2.0.0 Hadoop Version Support; update refguide > > > Key: HBASE-17554 > URL: https://issues.apache.org/jira/browse/HBASE-17554 > Project: HBase > Issue Type: Sub-task >Reporter: stack >Assignee: stack >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HBASE-17554.master.001.patch > > > Refguide has hbase-2.0.0 working with 2.6.1+ and 2.7.1+ but I just tried tip > of master against hadoop-2.7.3 and it fails with a netty version complaint > (same as up in HADOOP-13866 which is trying to update netty for hadoop3 and > 2.9?). This issue is about determining proper hadoop versions we work with > when hbase2 ships. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-17554) Figure 2.0.0 Hadoop Version Support; update refguide
[ https://issues.apache.org/jira/browse/HBASE-17554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-17554: -- Attachment: HBASE-17554.master.001.patch > Figure 2.0.0 Hadoop Version Support; update refguide > > > Key: HBASE-17554 > URL: https://issues.apache.org/jira/browse/HBASE-17554 > Project: HBase > Issue Type: Sub-task >Reporter: stack >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HBASE-17554.master.001.patch > > > Refguide has hbase-2.0.0 working with 2.6.1+ and 2.7.1+ but I just tried tip > of master against hadoop-2.7.3 and it fails with a netty version complaint > (same as up in HADOOP-13866 which is trying to update netty for hadoop3 and > 2.9?). This issue is about determining proper hadoop versions we work with > when hbase2 ships. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-17554) Figure 2.0.0 Hadoop Version Support; update refguide
[ https://issues.apache.org/jira/browse/HBASE-17554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433172#comment-16433172 ] stack commented on HBASE-17554: --- bq. should we include a warning about how the table might be inaccurate for our hbase 2.0 alpha/beta releases? esp calling out that we'd like folks to report problems so we can have it be accurate for GA? Just saw this. Yeah, we should have done as you suggested (smile). I've been running tests on 2.8.3. I'll update the refguide to mark 2.8.3 as tested. For hadoop3, I'll leave it as NT. > Figure 2.0.0 Hadoop Version Support; update refguide > > > Key: HBASE-17554 > URL: https://issues.apache.org/jira/browse/HBASE-17554 > Project: HBase > Issue Type: Sub-task >Reporter: stack >Priority: Blocker > Fix For: 2.0.0 > > > Refguide has hbase-2.0.0 working with 2.6.1+ and 2.7.1+ but I just tried tip > of master against hadoop-2.7.3 and it fails with a netty version complaint > (same as up in HADOOP-13866 which is trying to update netty for hadoop3 and > 2.9?). This issue is about determining proper hadoop versions we work with > when hbase2 ships. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20350) NullPointerException in Scanner during close()
[ https://issues.apache.org/jira/browse/HBASE-20350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433157#comment-16433157 ] Umesh Agashe commented on HBASE-20350: -- oh! okay. +1, lgtm. > NullPointerException in Scanner during close() > -- > > Key: HBASE-20350 > URL: https://issues.apache.org/jira/browse/HBASE-20350 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0-beta-2 >Reporter: Umesh Agashe >Assignee: Appy >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HBASE-20350.master.001.patch > > > From logs: > {code} > 2018-04-03 02:06:00,630 INFO org.apache.hadoop.hbase.regionserver.HRegion: > Replaying edits from > hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180403004104/834545a2ae1baa47082a3bc7aab2be2f/recovered.edits/1032167 > 2018-04-03 02:06:00,724 INFO > org.apache.hadoop.hbase.regionserver.RSRpcServices: Scanner > 2120114333978460945 lease expired on region > IntegrationTestBigLinkedList_20180403004104,\xF1\xFE\xCB\x98e1\xF8\xD4,1522742825561.ce0d91585a2d188123173c36d0b693a5. 
> 2018-04-03 02:06:00,730 ERROR > org.apache.hadoop.hbase.regionserver.HRegionServer: * ABORTING region > server vd0510.halxg.cloudera.com,22101,1522626204176: Uncaught exception in > executorService thread > regionserver/vd0510.halxg.cloudera.com/10.17.226.13:22101.leaseChecker * > java.lang.NullPointerException > at > org.apache.hadoop.hbase.CellComparatorImpl.compareRows(CellComparatorImpl.java:202) > at > org.apache.hadoop.hbase.CellComparatorImpl.compare(CellComparatorImpl.java:74) > at > org.apache.hadoop.hbase.CellComparatorImpl.compare(CellComparatorImpl.java:61) > at > org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:207) > at > org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:190) > at > org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:178) > at java.util.PriorityQueue.siftDownUsingComparator(PriorityQueue.java:721) > at java.util.PriorityQueue.siftDown(PriorityQueue.java:687) > at java.util.PriorityQueue.poll(PriorityQueue.java:595) > at > org.apache.hadoop.hbase.regionserver.KeyValueHeap.close(KeyValueHeap.java:228) > at > org.apache.hadoop.hbase.regionserver.StoreScanner.close(StoreScanner.java:483) > at > org.apache.hadoop.hbase.regionserver.StoreScanner.close(StoreScanner.java:464) > at > org.apache.hadoop.hbase.regionserver.KeyValueHeap.close(KeyValueHeap.java:224) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.close(HRegion.java:) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices$ScannerListener.leaseExpired(RSRpcServices.java:460) > at org.apache.hadoop.hbase.regionserver.Leases.run(Leases.java:122) > at java.lang.Thread.run(Thread.java:748) > 2018-04-03 02:06:00,731 ERROR > org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort: > loaded coprocessors are: > [org.apache.hadoop.hbase.security.access.AccessController, > 
org.apache.hadoop.hbase.security.token.TokenProvider, > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint, > com.cloudera.navigator.audit.hbase.RegionAuditCoProcessor] > 2018-04-03 02:06:00,737 INFO > org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics as JSON > on abort: { > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
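The trace above shows a lease-expiry close walking KeyValueHeap's comparator into a NullPointerException while a WAL replay is concurrently tearing the same scanner down. This is illustrative only, not the HBASE-20350 patch: a generic close-once guard of the kind that keeps two racing callers from both releasing scanner internals (class and field names here are invented for the sketch):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch: only the first close() wins; later racing callers
// (e.g. a lease-expiry thread vs. a replay thread) return immediately
// instead of re-walking internals the first close() already nulled out.
public class CloseOnceScanner {
    private final AtomicBoolean closed = new AtomicBoolean(false);
    private int cleanupRuns = 0;

    public void close() {
        if (!closed.compareAndSet(false, true)) {
            return; // already closed elsewhere; internals may be gone
        }
        cleanupRuns++; // stand-in for releasing heap/store scanners
    }

    public int cleanupRuns() {
        return cleanupRuns;
    }
}
```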
[jira] [Commented] (HBASE-20219) An error occurs when scanning with reversed=true and loadColumnFamiliesOnDemand=true
[ https://issues.apache.org/jira/browse/HBASE-20219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433153#comment-16433153 ] Hadoop QA commented on HBASE-20219: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 5s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 30s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 58s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 13s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 45s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 58s{color} | {color:green} hbase-server: The patch generated 0 new + 5 unchanged - 4 fixed = 5 total (was 9) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 14s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 2m 1s{color} | {color:red} The patch causes 10 errors with Hadoop v2.6.5. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 4m 3s{color} | {color:red} The patch causes 10 errors with Hadoop v2.7.4. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 6m 12s{color} | {color:red} The patch causes 10 errors with Hadoop v3.0.0. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green}107m 31s{color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}140m 45s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f | | JIRA Issue | HBASE-20219 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12917493/HBASE-20219.master.004.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 4e402475c5a7 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 95ca38a539 | | maven | version: Apache Maven 3.5.3 (3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) | | Default Java | 1.8.0_162 | | findbugs | v3.1.0-RC3 | | hadoopcheck |
[jira] [Commented] (HBASE-20149) Purge dev javadoc from bin tarball (or make a separate tarball of javadoc)
[ https://issues.apache.org/jira/browse/HBASE-20149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433144#comment-16433144 ] Hudson commented on HBASE-20149: Results for branch branch-2 [build #595 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/595/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/595//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/595//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/595//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. > Purge dev javadoc from bin tarball (or make a separate tarball of javadoc) > -- > > Key: HBASE-20149 > URL: https://issues.apache.org/jira/browse/HBASE-20149 > Project: HBase > Issue Type: Sub-task > Components: build, community, documentation >Reporter: stack >Assignee: stack >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-20149.branch-2.0.001.patch, > HBASE-20149.branch-2.0.002.patch, HBASE-20149.branch-2.0.003.patch > > > The bin tarball is too fat (Chia-Ping and Josh noticed it on the beta-2 > vote). A note to the dev list subsequently resulted in suggestion that we > just purge dev javadoc (or even all javadoc) from bin tarball (Andrew). Sean > was good w/ it and suggested perhaps we could do a javadoc only tgz. Let me > look into this. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20380) Put up 2.0.0RC0
[ https://issues.apache.org/jira/browse/HBASE-20380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433127#comment-16433127 ] stack commented on HBASE-20380: --- Pushed tag 2.0.0RC0 to origin at 011dd2dae33456b3a2bcc2513e9fdd29de23be46. Yesterday I'd copied over master docs. I just put up the RC... so let me resolve this. > Put up 2.0.0RC0 > --- > > Key: HBASE-20380 > URL: https://issues.apache.org/jira/browse/HBASE-20380 > Project: HBase > Issue Type: Sub-task >Affects Versions: 2.0.0 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 2.0.0 > > > JIRA to hang 2.0.0RC0-making steps on. > I ran the below out of yetus and copied over new CHANGELOG and RELEASENOTES > to what is in branch-2.0. > {code} > $ ./release-doc-maker/releasedocmaker.py -p HBASE --fileversions -v 2.0.0 -l > --sortorder=newer --skip-credits > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HBASE-20380) Put up 2.0.0RC0
[ https://issues.apache.org/jira/browse/HBASE-20380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack resolved HBASE-20380. --- Resolution: Fixed > Put up 2.0.0RC0 > --- > > Key: HBASE-20380 > URL: https://issues.apache.org/jira/browse/HBASE-20380 > Project: HBase > Issue Type: Sub-task >Affects Versions: 2.0.0 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 2.0.0 > > > JIRA to hang 2.0.0RC0-making steps on. > I ran the below out of yetus and copied over new CHANGELOG and RELEASENOTES > to what is in branch-2.0. > {code} > $ ./release-doc-maker/releasedocmaker.py -p HBASE --fileversions -v 2.0.0 -l > --sortorder=newer --skip-credits > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-17553) Make a 2.0.0 Release
[ https://issues.apache.org/jira/browse/HBASE-17553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433120#comment-16433120 ] Hudson commented on HBASE-17553: Results for branch branch-2.0 [build #155 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/155/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/155//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/155//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/155//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. > Make a 2.0.0 Release > > > Key: HBASE-17553 > URL: https://issues.apache.org/jira/browse/HBASE-17553 > Project: HBase > Issue Type: Task >Reporter: stack >Assignee: stack >Priority: Blocker > Fix For: 2.0.0 > > > Umbrella issue to keep account of tasks to make a 2.0.0 release. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20149) Purge dev javadoc from bin tarball (or make a separate tarball of javadoc)
[ https://issues.apache.org/jira/browse/HBASE-20149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433121#comment-16433121 ] Hudson commented on HBASE-20149: Results for branch branch-2.0 [build #155 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/155/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/155//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/155//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/155//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. > Purge dev javadoc from bin tarball (or make a separate tarball of javadoc) > -- > > Key: HBASE-20149 > URL: https://issues.apache.org/jira/browse/HBASE-20149 > Project: HBase > Issue Type: Sub-task > Components: build, community, documentation >Reporter: stack >Assignee: stack >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-20149.branch-2.0.001.patch, > HBASE-20149.branch-2.0.002.patch, HBASE-20149.branch-2.0.003.patch > > > The bin tarball is too fat (Chia-Ping and Josh noticed it on the beta-2 > vote). A note to the dev list subsequently resulted in suggestion that we > just purge dev javadoc (or even all javadoc) from bin tarball (Andrew). Sean > was good w/ it and suggested perhaps we could do a javadoc only tgz. Let me > look into this. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20385) Purge md5-making from our little make_rc.sh script
[ https://issues.apache.org/jira/browse/HBASE-20385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-20385: -- Attachment: HBASE-20385.master.001.patch > Purge md5-making from our little make_rc.sh script > -- > > Key: HBASE-20385 > URL: https://issues.apache.org/jira/browse/HBASE-20385 > Project: HBase > Issue Type: Bug >Reporter: stack >Priority: Minor > Attachments: HBASE-20385.master.001.patch > > > Don't generate md5s anymore. New Apache release policy via Apache > Infrastructure asks that we not provide md5 as md5 is 'broken for many > purposes; we should move away from it.' Remove the md5-making from our > make_rc.sh script. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-20385) Purge md5-making from our little make_rc.sh script
stack created HBASE-20385: - Summary: Purge md5-making from our little make_rc.sh script Key: HBASE-20385 URL: https://issues.apache.org/jira/browse/HBASE-20385 Project: HBase Issue Type: Bug Reporter: stack Don't generate md5s anymore. New Apache release policy via Apache Infrastructure asks that we not provide md5 as md5 is 'broken for many purposes; we should move away from it.' Remove the md5-making from our make_rc.sh script. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
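For context on the checksum swap behind HBASE-20385: dropping MD5 means publishing SHA digests for release artifacts instead. The release script itself is shell; this Java fragment only illustrates the digest change under that assumption, with the class and method names invented for the sketch:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch: compute a SHA-512 hex digest for an artifact's bytes, the kind
// of checksum Apache policy prefers over the deprecated MD5.
public class Sha512Sketch {
    static String sha512Hex(byte[] data) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-512");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(data)) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e); // SHA-512 is mandatory in the JRE
        }
    }
}
```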
[jira] [Updated] (HBASE-20384) [AMv2] Logging format improvements; use encoded name rather than full region name marking transitions
[ https://issues.apache.org/jira/browse/HBASE-20384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-20384: -- Attachment: HBASE-20384.branch-2.0.001.patch > [AMv2] Logging format improvements; use encoded name rather than full region > name marking transitions > -- > > Key: HBASE-20384 > URL: https://issues.apache.org/jira/browse/HBASE-20384 > Project: HBase > Issue Type: Bug >Reporter: stack >Priority: Minor > Attachments: HBASE-20384.branch-2.0.001.patch > > > We use encoded name near everywhere. Makes logging regular-looking at least > and eases tracing. In a few places we still do full region name. Let me fix > (ran into it trying to debug...) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-20384) [AMv2] Logging format improvements; use encoded name rather than full region name marking transitions
stack created HBASE-20384: - Summary: [AMv2] Logging format improvements; use encoded name rather than full region name marking transitions Key: HBASE-20384 URL: https://issues.apache.org/jira/browse/HBASE-20384 Project: HBase Issue Type: Bug Reporter: stack We use encoded name near everywhere. Makes logging regular-looking at least and eases tracing. In a few places we still do full region name. Let me fix (ran into it trying to debug...) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-20383) [AMv2] AssignmentManager: Failed transition XYZ is not OPEN
stack created HBASE-20383: - Summary: [AMv2] AssignmentManager: Failed transition XYZ is not OPEN Key: HBASE-20383 URL: https://issues.apache.org/jira/browse/HBASE-20383 Project: HBase Issue Type: Bug Reporter: stack Seeing a bunch of this testing 2.0.0: {code} 2018-04-10 13:57:09,430 WARN [RpcServer.default.FPBQ.Fifo.handler=46,queue=1,port=16000] assignment.AssignmentManager: Failed transition org.apache.hadoop.hbase.client.DoNotRetryRegionException: 19a2cd6f88abae0036415ee1ea041c2e is not OPEN at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.checkOnline(AbstractStateMachineTableProcedure.java:193) at org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.<init>(SplitTableRegionProcedure.java:112) at org.apache.hadoop.hbase.master.assignment.AssignmentManager.createSplitProcedure(AssignmentManager.java:769) at org.apache.hadoop.hbase.master.assignment.AssignmentManager.updateRegionSplitTransition(AssignmentManager.java:911) at org.apache.hadoop.hbase.master.assignment.AssignmentManager.reportRegionStateTransition(AssignmentManager.java:819) at org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1538) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:11093) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) {code} Looks like report back from Master OK'ing a split to go ahead but the split is already running. Figure how to shut these down. They are noisy at least. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20383) [AMv2] AssignmentManager: Failed transition XYZ is not OPEN
[ https://issues.apache.org/jira/browse/HBASE-20383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-20383: -- Component/s: amv2 > [AMv2] AssignmentManager: Failed transition XYZ is not OPEN > --- > > Key: HBASE-20383 > URL: https://issues.apache.org/jira/browse/HBASE-20383 > Project: HBase > Issue Type: Bug > Components: amv2 >Reporter: stack >Priority: Major > > Seeing a bunch of this testing 2.0.0: > {code} > 2018-04-10 13:57:09,430 WARN > [RpcServer.default.FPBQ.Fifo.handler=46,queue=1,port=16000] > assignment.AssignmentManager: Failed transition > > > org.apache.hadoop.hbase.client.DoNotRetryRegionException: > 19a2cd6f88abae0036415ee1ea041c2e is not OPEN > at > org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.checkOnline(AbstractStateMachineTableProcedure.java:193) > at > org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.(SplitTableRegionProcedure.java:112) > at > org.apache.hadoop.hbase.master.assignment.AssignmentManager.createSplitProcedure(AssignmentManager.java:769) > > > at > org.apache.hadoop.hbase.master.assignment.AssignmentManager.updateRegionSplitTransition(AssignmentManager.java:911) > > > at > org.apache.hadoop.hbase.master.assignment.AssignmentManager.reportRegionStateTransition(AssignmentManager.java:819) > at > org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1538) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:11093) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) > > >at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) > {code} > Looks like report back from Master OK'ing a split to go ahead but the split > 
is already running. Figure how to shut these down. They are noisy at least. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20382) If RSGroups not enabled, rsgroup.jsp prints stack trace
[ https://issues.apache.org/jira/browse/HBASE-20382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433060#comment-16433060 ] Mike Drob commented on HBASE-20382: --- Should be able to fix this by adding a table existence check in RSGroupTableAccessor? > If RSGroups not enabled, rsgroup.jsp prints stack trace > --- > > Key: HBASE-20382 > URL: https://issues.apache.org/jira/browse/HBASE-20382 > Project: HBase > Issue Type: Bug > Components: rsgroup, UI >Reporter: Mike Drob >Priority: Major > Labels: beginner > Fix For: 2.0.0 > > > Going to {{rsgroup.jsp?name=foo}} I get the following stack trace: > {noformat} > org.apache.hadoop.hbase.TableNotFoundException: hbase:rsgroup > at > org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:842) > at > org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:733) > at > org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:131) > at > org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:719) > at > org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:131) > at > org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:690) > at > org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:131) > at > org.apache.hadoop.hbase.client.ConnectionImplementation.getRegionLocation(ConnectionImplementation.java:571) > at > org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.getRegionLocation(ConnectionUtils.java:131) > at > org.apache.hadoop.hbase.client.HRegionLocator.getRegionLocation(HRegionLocator.java:73) > at > org.apache.hadoop.hbase.client.RegionServerCallable.prepare(RegionServerCallable.java:223) > at > 
org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105) > at org.apache.hadoop.hbase.client.HTable.get(HTable.java:385) > at org.apache.hadoop.hbase.client.HTable.get(HTable.java:359) > at > org.apache.hadoop.hbase.RSGroupTableAccessor.getRSGroupInfo(RSGroupTableAccessor.java:75) > at > org.apache.hadoop.hbase.generated.master.rsgroup_jsp._jspService(rsgroup_jsp.java:78) > at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:111) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) > at > org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772) > at > org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:112) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759) > at > org.apache.hadoop.hbase.http.ClickjackingPreventionFilter.doFilter(ClickjackingPreventionFilter.java:48) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759) > at > org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1374) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759) > at > org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759) > at > org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at >
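Mike Drob's suggested fix above — checking whether the {{hbase:rsgroup}} table exists before reading from it — can be sketched in plain Java. This is a minimal stand-in with no cluster dependency, not the actual patch: the in-memory catalog set below is hypothetical, and in real code the check would go through the HBase Admin/table-existence API inside RSGroupTableAccessor.

```java
import java.util.Collections;
import java.util.Set;

class RsGroupGuardSketch {
    // Hypothetical stand-in for the cluster's table catalog; when RSGroups is
    // not enabled, hbase:rsgroup is simply absent.
    static final Set<String> CATALOG = Collections.singleton("hbase:meta");

    static boolean tableExists(String tableName) {
        return CATALOG.contains(tableName);
    }

    // Guarded lookup: return null (which the JSP can render as "RSGroups not
    // enabled") instead of letting a TableNotFoundException escape as a stack
    // trace in the UI.
    static String getRSGroupInfo(String groupName) {
        if (!tableExists("hbase:rsgroup")) {
            return null; // caller shows a friendly message
        }
        return "info-for-" + groupName; // would be a Get against hbase:rsgroup
    }

    public static void main(String[] args) {
        System.out.println(getRSGroupInfo("foo")); // prints: null
    }
}
```

The point of the guard is that "table missing" is an expected state for this page, so it should be handled as data, not as an exception.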
[jira] [Updated] (HBASE-20382) If RSGroups not enabled, rsgroup.jsp prints stack trace
[ https://issues.apache.org/jira/browse/HBASE-20382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Drob updated HBASE-20382: -- Labels: beginner (was: ) > If RSGroups not enabled, rsgroup.jsp prints stack trace > --- > > Key: HBASE-20382 > URL: https://issues.apache.org/jira/browse/HBASE-20382 > Project: HBase > Issue Type: Bug > Components: rsgroup, UI >Reporter: Mike Drob >Priority: Major > Labels: beginner > Fix For: 2.0.0 > > > Going to {{rsgroup.jsp?name=foo}} I get the following stack trace: > {noformat} > org.apache.hadoop.hbase.TableNotFoundException: hbase:rsgroup > at > org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:842) > at > org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:733) > at > org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:131) > at > org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:719) > at > org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:131) > at > org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:690) > at > org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:131) > at > org.apache.hadoop.hbase.client.ConnectionImplementation.getRegionLocation(ConnectionImplementation.java:571) > at > org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.getRegionLocation(ConnectionUtils.java:131) > at > org.apache.hadoop.hbase.client.HRegionLocator.getRegionLocation(HRegionLocator.java:73) > at > org.apache.hadoop.hbase.client.RegionServerCallable.prepare(RegionServerCallable.java:223) > at > org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105) > at 
org.apache.hadoop.hbase.client.HTable.get(HTable.java:385) > at org.apache.hadoop.hbase.client.HTable.get(HTable.java:359) > at > org.apache.hadoop.hbase.RSGroupTableAccessor.getRSGroupInfo(RSGroupTableAccessor.java:75) > at > org.apache.hadoop.hbase.generated.master.rsgroup_jsp._jspService(rsgroup_jsp.java:78) > at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:111) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) > at > org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772) > at > org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:112) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759) > at > org.apache.hadoop.hbase.http.ClickjackingPreventionFilter.doFilter(ClickjackingPreventionFilter.java:48) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759) > at > org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1374) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759) > at > org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759) > at > org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119) > at >
[jira] [Created] (HBASE-20382) If RSGroups not enabled, rsgroup.jsp prints stack trace
Mike Drob created HBASE-20382: - Summary: If RSGroups not enabled, rsgroup.jsp prints stack trace Key: HBASE-20382 URL: https://issues.apache.org/jira/browse/HBASE-20382 Project: HBase Issue Type: Bug Components: rsgroup, UI Reporter: Mike Drob Fix For: 2.0.0 Going to {{rsgroup.jsp?name=foo}} I get the following stack trace: {noformat} org.apache.hadoop.hbase.TableNotFoundException: hbase:rsgroup at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:842) at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:733) at org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:131) at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:719) at org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:131) at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:690) at org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:131) at org.apache.hadoop.hbase.client.ConnectionImplementation.getRegionLocation(ConnectionImplementation.java:571) at org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.getRegionLocation(ConnectionUtils.java:131) at org.apache.hadoop.hbase.client.HRegionLocator.getRegionLocation(HRegionLocator.java:73) at org.apache.hadoop.hbase.client.RegionServerCallable.prepare(RegionServerCallable.java:223) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:385) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:359) at org.apache.hadoop.hbase.RSGroupTableAccessor.getRSGroupInfo(RSGroupTableAccessor.java:75) at 
org.apache.hadoop.hbase.generated.master.rsgroup_jsp._jspService(rsgroup_jsp.java:78) at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:111) at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772) at org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:112) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759) at org.apache.hadoop.hbase.http.ClickjackingPreventionFilter.doFilter(ClickjackingPreventionFilter.java:48) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759) at org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1374) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759) at org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759) at org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112) at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) at org.eclipse.jetty.server.Server.handle(Server.java:534) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251) at
[jira] [Commented] (HBASE-20350) NullPointerException in Scanner during close()
[ https://issues.apache.org/jira/browse/HBASE-20350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433043#comment-16433043 ] stack commented on HBASE-20350: --- Push it [~appy] > NullPointerException in Scanner during close() > -- > > Key: HBASE-20350 > URL: https://issues.apache.org/jira/browse/HBASE-20350 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0-beta-2 >Reporter: Umesh Agashe >Assignee: Appy >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HBASE-20350.master.001.patch > > > From logs: > {code} > 2018-04-03 02:06:00,630 INFO org.apache.hadoop.hbase.regionserver.HRegion: > Replaying edits from > hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180403004104/834545a2ae1baa47082a3bc7aab2be2f/recovered.edits/1032167 > 2018-04-03 02:06:00,724 INFO > org.apache.hadoop.hbase.regionserver.RSRpcServices: Scanner > 2120114333978460945 lease expired on region > IntegrationTestBigLinkedList_20180403004104,\xF1\xFE\xCB\x98e1\xF8\xD4,1522742825561.ce0d91585a2d188123173c36d0b693a5. 
> 2018-04-03 02:06:00,730 ERROR > org.apache.hadoop.hbase.regionserver.HRegionServer: * ABORTING region > server vd0510.halxg.cloudera.com,22101,1522626204176: Uncaught exception in > executorService thread > regionserver/vd0510.halxg.cloudera.com/10.17.226.13:22101.leaseChecker * > java.lang.NullPointerException > at > org.apache.hadoop.hbase.CellComparatorImpl.compareRows(CellComparatorImpl.java:202) > at > org.apache.hadoop.hbase.CellComparatorImpl.compare(CellComparatorImpl.java:74) > at > org.apache.hadoop.hbase.CellComparatorImpl.compare(CellComparatorImpl.java:61) > at > org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:207) > at > org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:190) > at > org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:178) > at java.util.PriorityQueue.siftDownUsingComparator(PriorityQueue.java:721) > at java.util.PriorityQueue.siftDown(PriorityQueue.java:687) > at java.util.PriorityQueue.poll(PriorityQueue.java:595) > at > org.apache.hadoop.hbase.regionserver.KeyValueHeap.close(KeyValueHeap.java:228) > at > org.apache.hadoop.hbase.regionserver.StoreScanner.close(StoreScanner.java:483) > at > org.apache.hadoop.hbase.regionserver.StoreScanner.close(StoreScanner.java:464) > at > org.apache.hadoop.hbase.regionserver.KeyValueHeap.close(KeyValueHeap.java:224) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.close(HRegion.java:) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices$ScannerListener.leaseExpired(RSRpcServices.java:460) > at org.apache.hadoop.hbase.regionserver.Leases.run(Leases.java:122) > at java.lang.Thread.run(Thread.java:748) > 2018-04-03 02:06:00,731 ERROR > org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort: > loaded coprocessors are: > [org.apache.hadoop.hbase.security.access.AccessController, > 
org.apache.hadoop.hbase.security.token.TokenProvider, > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint, > com.cloudera.navigator.audit.hbase.RegionAuditCoProcessor] > 2018-04-03 02:06:00,737 INFO > org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics as JSON > on abort: { > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-19343) Restore snapshot makes split parent region online
[ https://issues.apache.org/jira/browse/HBASE-19343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432980#comment-16432980 ] Hudson commented on HBASE-19343: Results for branch branch-1.2 [build #295 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/295/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/295//General_Nightly_Build_Report/] (x) {color:red}-1 jdk7 checks{color} -- For more information [see jdk7 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/295//JDK7_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/295//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. > Restore snapshot makes split parent region online > -- > > Key: HBASE-19343 > URL: https://issues.apache.org/jira/browse/HBASE-19343 > Project: HBase > Issue Type: Bug > Components: snapshots >Reporter: Pankaj Kumar >Assignee: Pankaj Kumar >Priority: Major > Fix For: 1.5.0, 1.2.7, 1.3.3, 1.4.4 > > Attachments: 19343.tst, HBASE-19343-branch-1-v2.patch, > HBASE-19343-branch-1.2.patch, HBASE-19343-branch-1.3.patch, > HBASE-19343-branch-1.patch, Snapshot.jpg > > > Restore snapshot makes parent split region online as shown in the attached > snapshot. > Steps to reproduce > = > 1. Create table > 2. Insert few records into the table > 3. flush the table > 4. Split the table > 5. Create snapshot before catalog janitor clears the parent region entry from > meta. > 6. Restore snapshot > We can see the problem in meta entries, > Meta content before restore snapshot: > {noformat} > t1,,1511537529449.077a12b0b3c91b053fa95223635f9543. 
> column=info:regioninfo, timestamp=1511537565964, value={ENCODED => > 077a12b0b3c91b053fa95223635f9543, NAME => > 't1,,1511537529449.077a12b0b3c91b053fa95223635f9543.', STARTKEY => > '', ENDKEY => > '', OFFLINE => true, SPLIT => true} > t1,,1511537529449.077a12b0b3c91b053fa95223635f9543. > column=info:seqnumDuringOpen, timestamp=1511537530107, > value=\x00\x00\x00\x00\x00\x00\x00\x02 > t1,,1511537529449.077a12b0b3c91b053fa95223635f9543. > column=info:server, timestamp=1511537530107, value=host-xx:16020 > t1,,1511537529449.077a12b0b3c91b053fa95223635f9543. > column=info:serverstartcode, timestamp=1511537530107, value=1511537511523 > t1,,1511537529449.077a12b0b3c91b053fa95223635f9543. > column=info:splitA, timestamp=1511537565964, value={ENCODED => > 3c7c866d4df370c586131a4cbe0ef6a8, NAME => > 't1,,1511537565718.3c7c866d4df370c586131a4cbe0ef6a8.', STARTKEY => '', > ENDKEY => 'm'} > t1,,1511537529449.077a12b0b3c91b053fa95223635f9543. > column=info:splitB, timestamp=1511537565964, value={ENCODED => > dc7facd824c85b94e5bf6a2e6b5f5efc, NAME => > 't1,m,1511537565718.dc7facd824c85b94e5bf6a2e6b5f5efc.', STARTKEY => 'm > ', ENDKEY => ''} > t1,,1511537565718.3c7c866d4df370c586131a4cbe0ef6a8. > column=info:regioninfo, timestamp=1511537566075, value={ENCODED => > 3c7c866d4df370c586131a4cbe0ef6a8, NAME => > 't1,,1511537565718.3c7c866d4df370c586131a4cbe0ef6a8.', STARTKEY => > '', ENDKEY => > 'm'} > t1,,1511537565718.3c7c866d4df370c586131a4cbe0ef6a8. > column=info:seqnumDuringOpen, timestamp=1511537566075, > value=\x00\x00\x00\x00\x00\x00\x00\x02 > t1,,1511537565718.3c7c866d4df370c586131a4cbe0ef6a8. > column=info:server, timestamp=1511537566075, value=host-xx:16020 > t1,,1511537565718.3c7c866d4df370c586131a4cbe0ef6a8. > column=info:serverstartcode, timestamp=1511537566075, value=1511537511523 > t1,m,1511537565718.dc7facd824c85b94e5bf6a2e6b5f5efc. 
> column=info:regioninfo, timestamp=1511537566069, value={ENCODED => > dc7facd824c85b94e5bf6a2e6b5f5efc, NAME => > 't1,m,1511537565718.dc7facd824c85b94e5bf6a2e6b5f5efc.', STARTKEY = > > 'm', ENDKEY => > ''} > t1,m,1511537565718.dc7facd824c85b94e5bf6a2e6b5f5efc. > column=info:seqnumDuringOpen, timestamp=1511537566069, > value=\x00\x00\x00\x00\x00\x00\x00\x08 > t1,m,1511537565718.dc7facd824c85b94e5bf6a2e6b5f5efc. > column=info:server, timestamp=1511537566069, value=host-xx:16020
[jira] [Commented] (HBASE-20219) An error occurs when scanning with reversed=true and loadColumnFamiliesOnDemand=true
[ https://issues.apache.org/jira/browse/HBASE-20219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432968#comment-16432968 ] James Taylor commented on HBASE-20219: -- FYI, we've worked around the issue at the Phoenix level (PHOENIX-4658) by disabling loadColumnFamiliesOnDemand when a reverse scan is being done. We need to do this so that the problem is fixed for existing versions of HBase. > An error occurs when scanning with reversed=true and > loadColumnFamiliesOnDemand=true > > > Key: HBASE-20219 > URL: https://issues.apache.org/jira/browse/HBASE-20219 > Project: HBase > Issue Type: Bug > Components: phoenix >Reporter: Toshihiro Suzuki >Assignee: Toshihiro Suzuki >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-20219-UT.patch, HBASE-20219.master.001.patch, > HBASE-20219.master.002.patch, HBASE-20219.master.003.patch, > HBASE-20219.master.004.patch > > > I'm facing the following error when scanning with reversed=true and > loadColumnFamiliesOnDemand=true: > {code} > java.lang.IllegalStateException: requestSeek cannot be called on > ReversedKeyValueHeap > at > org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.requestSeek(ReversedKeyValueHeap.java:66) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.joinedHeapMayHaveData(HRegion.java:6725) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6652) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6364) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3108) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3345) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41548) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) > at > 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) > {code} > I will attach a UT patch to reproduce this issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
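James Taylor's workaround above (PHOENIX-4658) amounts to a single guard on the scan attributes: when a scan is reversed, force lazy column-family loading off. The sketch below models it with a hypothetical ScanSettings stand-in rather than HBase's real Scan class, so it carries no client-library dependency; only the two flag names are taken from the discussion.

```java
// Hypothetical simplified holder for the two scan attributes involved in the bug.
class ScanSettings {
    boolean reversed;
    boolean loadColumnFamiliesOnDemand;

    ScanSettings(boolean reversed, boolean lazyCfLoading) {
        this.reversed = reversed;
        this.loadColumnFamiliesOnDemand = lazyCfLoading;
    }
}

class ReverseScanWorkaround {
    // Reverse scans hit the "requestSeek cannot be called on
    // ReversedKeyValueHeap" IllegalStateException when essential-CF lazy
    // loading is on, so drop that optimization in the reverse-scan case only.
    static ScanSettings apply(ScanSettings s) {
        if (s.reversed && s.loadColumnFamiliesOnDemand) {
            s.loadColumnFamiliesOnDemand = false;
        }
        return s;
    }

    public static void main(String[] args) {
        ScanSettings s = apply(new ScanSettings(true, true));
        System.out.println(s.loadColumnFamiliesOnDemand); // prints: false
    }
}
```

Forward scans keep the optimization untouched, which is why this is acceptable as a client-side fix for existing HBase versions.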
[jira] [Commented] (HBASE-20381) precommit failing w/rat on shadedjars plugin
[ https://issues.apache.org/jira/browse/HBASE-20381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432947#comment-16432947 ] Sean Busbey commented on HBASE-20381: - the debug build passed. :( started again without debug, but with "archive rat files" still in place. > precommit failing w/rat on shadedjars plugin > > > Key: HBASE-20381 > URL: https://issues.apache.org/jira/browse/HBASE-20381 > Project: HBase > Issue Type: Bug > Components: test >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Blocker > > see HBASE-20219 and related builds. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20381) precommit failing w/rat on shadedjars plugin
[ https://issues.apache.org/jira/browse/HBASE-20381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432948#comment-16432948 ] Sean Busbey commented on HBASE-20381: - https://builds.apache.org/job/PreCommit-HBASE-Build/12382/ > precommit failing w/rat on shadedjars plugin > > > Key: HBASE-20381 > URL: https://issues.apache.org/jira/browse/HBASE-20381 > Project: HBase > Issue Type: Bug > Components: test >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Blocker > > see HBASE-20219 and related builds. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20219) An error occurs when scanning with reversed=true and loadColumnFamiliesOnDemand=true
[ https://issues.apache.org/jira/browse/HBASE-20219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432945#comment-16432945 ] Sean Busbey commented on HBASE-20219: - well that's too bad. Let me do one more without debug on just to make sure. > An error occurs when scanning with reversed=true and > loadColumnFamiliesOnDemand=true > > > Key: HBASE-20219 > URL: https://issues.apache.org/jira/browse/HBASE-20219 > Project: HBase > Issue Type: Bug > Components: phoenix >Reporter: Toshihiro Suzuki >Assignee: Toshihiro Suzuki >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-20219-UT.patch, HBASE-20219.master.001.patch, > HBASE-20219.master.002.patch, HBASE-20219.master.003.patch, > HBASE-20219.master.004.patch > > > I'm facing the following error when scanning with reversed=true and > loadColumnFamiliesOnDemand=true: > {code} > java.lang.IllegalStateException: requestSeek cannot be called on > ReversedKeyValueHeap > at > org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.requestSeek(ReversedKeyValueHeap.java:66) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.joinedHeapMayHaveData(HRegion.java:6725) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6652) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6364) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3108) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3345) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41548) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) > {code} > I will 
attach a UT patch to reproduce this issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20219) An error occurs when scanning with reversed=true and loadColumnFamiliesOnDemand=true
[ https://issues.apache.org/jira/browse/HBASE-20219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432912#comment-16432912 ] Hadoop QA commented on HBASE-20219: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 53s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 46s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 13s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 55s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 4s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 11s{color} | {color:green} hbase-server: The patch generated 0 new + 5 unchanged - 4 fixed = 5 total (was 9) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 58s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 15m 12s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green}125m 36s{color} | {color:green} hbase-server in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}172m 42s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f | | JIRA Issue | HBASE-20219 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12917493/HBASE-20219.master.004.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux b06b4ceb78ce 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh | | git revision | master / 95ca38a539 | | maven | version: Apache Maven 3.5.3 (3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) | | Default Java | 1.8.0_162 | | findbugs | v3.1.0-RC3 | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/12381/testReport/ | | Max. process+thread count | 4321 (vs. ulimit of 1) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/12381/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.org | This
[jira] [Commented] (HBASE-20243) [Shell] Add shell command to create a new table by cloning the existent table
[ https://issues.apache.org/jira/browse/HBASE-20243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432903#comment-16432903 ] Hadoop QA commented on HBASE-20243: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 30s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 56s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 39s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 0s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 46s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 45s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 5s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} rubocop {color} | {color:red} 0m 17s{color} | {color:red} The patch generated 6 new + 775 unchanged - 8 fixed = 781 total (was 783) {color} | | {color:red}-1{color} | {color:red} ruby-lint {color} | {color:red} 0m 22s{color} | {color:red} The patch generated 49 new + 1277 unchanged - 0 fixed = 1326 total (was 1277) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 53s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 15m 19s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 6s{color} | {color:green} hbase-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}126m 50s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 10s{color} | {color:green} hbase-shell in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 0s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}192m 35s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.client.TestAsyncTableGetMultiThreaded | | | hadoop.hbase.master.procedure.TestDisableTableProcedure | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f | | JIRA Issue | HBASE-20243 | |
[jira] [Commented] (HBASE-20352) [Chore] Backport HBASE-18309 to branch-1
[ https://issues.apache.org/jira/browse/HBASE-20352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432892#comment-16432892 ] Tak Lon (Stephen) Wu commented on HBASE-20352: -- +1 thanks for making these changes. > [Chore] Backport HBASE-18309 to branch-1 > > > Key: HBASE-20352 > URL: https://issues.apache.org/jira/browse/HBASE-20352 > Project: HBase > Issue Type: Improvement >Reporter: Reid Chan >Assignee: Reid Chan >Priority: Major > Attachments: HBASE-20352.branch-1.001.patch > > > Using multiple threads to scan directory and to clean old WALs -- This message was sent by Atlassian JIRA (v7.6.3#76005)
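The one-line description above ("Using multiple threads to scan directory and to clean old WALs") can be sketched in isolation as follows. This is a hedged illustration of the general idea, not HBase's actual CleanerChore code: the class name, the fixed thread pool, and the age-based deletion policy are all assumptions made for the example.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Hypothetical sketch: fan the cleaner out over subdirectories with a
// thread pool instead of walking the archive serially. Names and the
// TTL policy are illustrative only, not HBase's implementation.
public class ParallelDirCleaner {
  private final ExecutorService pool;
  private final long ttlMillis;

  public ParallelDirCleaner(int threads, long ttlMillis) {
    this.pool = Executors.newFixedThreadPool(threads);
    this.ttlMillis = ttlMillis;
  }

  /** Scans each immediate subdirectory of root in parallel, deleting
   *  regular files older than the TTL. Returns the number deleted. */
  public int clean(Path root) throws Exception {
    List<Path> subdirs;
    try (Stream<Path> s = Files.list(root)) {
      subdirs = s.filter(Files::isDirectory).collect(Collectors.toList());
    }
    List<Future<Integer>> results = new ArrayList<>();
    for (Path dir : subdirs) {
      results.add(pool.submit(() -> cleanOne(dir)));  // one task per dir
    }
    int deleted = 0;
    for (Future<Integer> f : results) {
      deleted += f.get();  // propagates any per-directory failure
    }
    return deleted;
  }

  private int cleanOne(Path dir) throws IOException {
    long cutoff = System.currentTimeMillis() - ttlMillis;
    List<Path> files;
    try (Stream<Path> s = Files.list(dir)) {
      files = s.filter(Files::isRegularFile).collect(Collectors.toList());
    }
    int n = 0;
    for (Path p : files) {
      if (Files.getLastModifiedTime(p).toMillis() < cutoff) {
        Files.delete(p);
        n++;
      }
    }
    return n;
  }

  /** Callers should shut the pool down when the owning service stops. */
  public void shutdown() { pool.shutdown(); }
}
```

The payoff is the same as claimed in the comments above: on a large archive, the scan/delete latency is bounded by the slowest subdirectory rather than the sum of all of them.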
[jira] [Commented] (HBASE-20350) NullPointerException in Scanner during close()
[ https://issues.apache.org/jira/browse/HBASE-20350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432882#comment-16432882 ] Appy commented on HBASE-20350: -- That's probably because poll() can return a null value when the PQ heap is empty, whereas it won't happen if we iterate like this. > NullPointerException in Scanner during close() > -- > > Key: HBASE-20350 > URL: https://issues.apache.org/jira/browse/HBASE-20350 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0-beta-2 >Reporter: Umesh Agashe >Assignee: Appy >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HBASE-20350.master.001.patch > > > From logs: > {code} > 2018-04-03 02:06:00,630 INFO org.apache.hadoop.hbase.regionserver.HRegion: > Replaying edits from > hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180403004104/834545a2ae1baa47082a3bc7aab2be2f/recovered.edits/1032167 > 2018-04-03 02:06:00,724 INFO > org.apache.hadoop.hbase.regionserver.RSRpcServices: Scanner > 2120114333978460945 lease expired on region > IntegrationTestBigLinkedList_20180403004104,\xF1\xFE\xCB\x98e1\xF8\xD4,1522742825561.ce0d91585a2d188123173c36d0b693a5. 
> 2018-04-03 02:06:00,730 ERROR > org.apache.hadoop.hbase.regionserver.HRegionServer: * ABORTING region > server vd0510.halxg.cloudera.com,22101,1522626204176: Uncaught exception in > executorService thread > regionserver/vd0510.halxg.cloudera.com/10.17.226.13:22101.leaseChecker * > java.lang.NullPointerException > at > org.apache.hadoop.hbase.CellComparatorImpl.compareRows(CellComparatorImpl.java:202) > at > org.apache.hadoop.hbase.CellComparatorImpl.compare(CellComparatorImpl.java:74) > at > org.apache.hadoop.hbase.CellComparatorImpl.compare(CellComparatorImpl.java:61) > at > org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:207) > at > org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:190) > at > org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:178) > at java.util.PriorityQueue.siftDownUsingComparator(PriorityQueue.java:721) > at java.util.PriorityQueue.siftDown(PriorityQueue.java:687) > at java.util.PriorityQueue.poll(PriorityQueue.java:595) > at > org.apache.hadoop.hbase.regionserver.KeyValueHeap.close(KeyValueHeap.java:228) > at > org.apache.hadoop.hbase.regionserver.StoreScanner.close(StoreScanner.java:483) > at > org.apache.hadoop.hbase.regionserver.StoreScanner.close(StoreScanner.java:464) > at > org.apache.hadoop.hbase.regionserver.KeyValueHeap.close(KeyValueHeap.java:224) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.close(HRegion.java:) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices$ScannerListener.leaseExpired(RSRpcServices.java:460) > at org.apache.hadoop.hbase.regionserver.Leases.run(Leases.java:122) > at java.lang.Thread.run(Thread.java:748) > 2018-04-03 02:06:00,731 ERROR > org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort: > loaded coprocessors are: > [org.apache.hadoop.hbase.security.access.AccessController, > 
org.apache.hadoop.hbase.security.token.TokenProvider, > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint, > com.cloudera.navigator.audit.hbase.RegionAuditCoProcessor] > 2018-04-03 02:06:00,737 INFO > org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics as JSON > on abort: { > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
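Appy's point about poll() can be demonstrated in isolation. Draining a java.util.PriorityQueue with poll() re-heapifies after every removal, running the comparator on the remaining elements (exactly the siftDown path in the stack trace above, which is where a closed scanner's null cell triggers the NPE) and finally terminating with a null sentinel. Plain iteration visits each element once without sifting and never yields null. A minimal generic sketch, with strings standing in for KeyValueScanners:

```java
import java.util.PriorityQueue;

// Illustration of the close() pattern discussed above: poll() sifts the
// heap (invoking the comparator) after each removal and ends by
// returning null, while for-each iteration touches each element exactly
// once with no sifting and no null sentinel.
public class DrainDemo {
  /** Drains via poll(); the loop stops when poll() returns null. */
  public static int drainByPoll(PriorityQueue<String> q) {
    int closed = 0;
    String s;
    while ((s = q.poll()) != null) {  // comparator runs on each sift-down
      closed++;                        // stand-in for scanner.close()
    }
    return closed;
  }

  /** Iterates without mutating the heap, then clears it once. */
  public static int drainByIteration(PriorityQueue<String> q) {
    int closed = 0;
    for (String s : q) {               // no sifting, no null sentinel
      closed++;                        // stand-in for scanner.close()
    }
    q.clear();
    return closed;
  }
}
```

Note that iteration order over a PriorityQueue is unspecified, which is fine for close() since every scanner gets closed regardless of order.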
[jira] [Commented] (HBASE-20068) Hadoopcheck project health check uses default maven repo instead of yetus managed ones
[ https://issues.apache.org/jira/browse/HBASE-20068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432832#comment-16432832 ] Josh Elser commented on HBASE-20068: I don't fully understand the intricacies of what's being changed here (smile), but it looks fine to me +1 > Hadoopcheck project health check uses default maven repo instead of yetus > managed ones > -- > > Key: HBASE-20068 > URL: https://issues.apache.org/jira/browse/HBASE-20068 > Project: HBase > Issue Type: Bug > Components: community, test >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Major > Attachments: HBASE-20068.0.patch, HBASE-20068.1.patch > > > Recently had a precommit run fail hadoop check for all 3 versions with > {code} > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-install-plugin:2.5.2:install (default-install) > on project hbase-thrift: Failed to install metadata > org.apache.hbase:hbase-thrift:3.0.0-SNAPSHOT/maven-metadata.xml: Could not > parse metadata > /home/jenkins/.m2/repository/org/apache/hbase/hbase-thrift/3.0.0-SNAPSHOT/maven-metadata-local.xml: > in epilog non whitespace content is not allowed but got / (position: END_TAG > seen ...\n/... @25:2) -> [Help 1] > {code} > Looks like maven repo corruption. > Also the path {{/home/jenkins/.m2/repository}} means that those invocations > are using the jenkins user repo, which isn't safe since there are multiple > executors. either the plugin isn't using the yetus provided maven repo path > or our yetus invocation isn't telling yetus to provide its own maven repo > path. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20291) Fix for The POM for net.minidev:json-smart:jar:2.3-SNAPSHOT is missing, no dependency information available with hadoop.profile=3.0
[ https://issues.apache.org/jira/browse/HBASE-20291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432828#comment-16432828 ] Josh Elser commented on HBASE-20291: I still don't think this is fixing the right problem. The full warning message I see on master is: {code:java} [INFO] ---< org.apache.hbase:hbase-hadoop2-compat > [INFO] Building Apache HBase - Hadoop Two Compatibility 3.0.0-SNAPSHOT [12/42] [INFO] [ jar ]- Downloading from project.local: file:/Users/jelser/projects/hbase-copy.git/hbase-hadoop2-compat/src/site/resources/repo/net/minidev/json-smart/2.3-SNAPSHOT/json-smart-2.3-SNAPSHOT.pom Downloading from apache.snapshots: https://repository.apache.org/snapshots/net/minidev/json-smart/2.3-SNAPSHOT/json-smart-2.3-SNAPSHOT.pom [WARNING] The POM for net.minidev:json-smart:jar:2.3-SNAPSHOT is missing, no dependency information available{code} For some reason, there is a transitive dependency which tries to specify 2.3-SNAPSHOT: {noformat} [INFO] | +- org.apache.hadoop:hadoop-auth:jar:3.0.0:compile [INFO] | | +- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile [INFO] | | | +- com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile [INFO] | | | \- net.minidev:json-smart:jar:2.3:compile (version selected from constraint [1.3.1,2.3]){noformat} But the resolution eventually picks a non-snapshot version. The intermediate warning comes from the hack we have in the build to support some custom code we need to build the HBase site. Excluding this dependency from hadoop-auth should fix the problem, but I'm not sure if it's safe for us to do that (how do we know if code in hadoop-auth actually needs that?). I think understanding where that 2.3-SNAPSHOT version is coming from is the next step. 
> Fix for The POM for net.minidev:json-smart:jar:2.3-SNAPSHOT is missing, no > dependency information available with hadoop.profile=3.0 > --- > > Key: HBASE-20291 > URL: https://issues.apache.org/jira/browse/HBASE-20291 > Project: HBase > Issue Type: Bug >Reporter: Artem Ervits >Assignee: Artem Ervits >Priority: Minor > Fix For: 3.0.0 > > Attachments: HBASE-20291.v01.patch, HBASE-20291.v02.patch > > > receiving message > {code:java} > The POM for net.minidev:json-smart:jar:2.3-SNAPSHOT is missing, no dependency > information available{code} > when running with > {code:java} > mvn clean install -DHBasePatchProcess -Dhadoop-three.version=3.0.0 > -Dhadoop.profile=3.0 -DskipTests{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
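For reference, the exclusion Josh describes (while cautioning it may not be safe, since hadoop-auth might need json-smart at runtime) would look roughly like the pom fragment below. The hadoop-auth coordinates are taken from the dependency tree in the comment above; treat this as a sketch of the option under discussion, not a recommended fix.

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-auth</artifactId>
  <version>3.0.0</version>
  <exclusions>
    <exclusion>
      <groupId>net.minidev</groupId>
      <artifactId>json-smart</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```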
[jira] [Commented] (HBASE-20188) [TESTING] Performance
[ https://issues.apache.org/jira/browse/HBASE-20188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432783#comment-16432783 ] Eshcar Hillel commented on HBASE-20188: --- Attaching additional benchmark results where we update and read single column rows [^HBase 2.0 performance evaluation - 8GB(1).pdf] In workloadx (write-only) with a single (wider) column -- Adaptive outperforms None by 15%. In workloads a and c with a single wide column – Adaptive and None are comparable. > [TESTING] Performance > - > > Key: HBASE-20188 > URL: https://issues.apache.org/jira/browse/HBASE-20188 > Project: HBase > Issue Type: Umbrella > Components: Performance >Reporter: stack >Assignee: stack >Priority: Blocker > Fix For: 2.0.0 > > Attachments: CAM-CONFIG-V01.patch, HBASE-20188-xac.sh, > HBASE-20188.sh, HBase 2.0 performance evaluation - 8GB(1).pdf, HBase 2.0 > performance evaluation - 8GB.pdf, HBase 2.0 performance evaluation - Basic vs > None_ system settings.pdf, ITBLL2.5B_1.2.7vs2.0.0_cpu.png, > ITBLL2.5B_1.2.7vs2.0.0_gctime.png, ITBLL2.5B_1.2.7vs2.0.0_iops.png, > ITBLL2.5B_1.2.7vs2.0.0_load.png, ITBLL2.5B_1.2.7vs2.0.0_memheap.png, > ITBLL2.5B_1.2.7vs2.0.0_memstore.png, ITBLL2.5B_1.2.7vs2.0.0_ops.png, > ITBLL2.5B_1.2.7vs2.0.0_ops_NOT_summing_regions.png, YCSB_CPU.png, > YCSB_GC_TIME.png, YCSB_IN_MEMORY_COMPACTION=NONE.ops.png, YCSB_MEMSTORE.png, > YCSB_OPs.png, YCSB_in-memory-compaction=NONE.ops.png, YCSB_load.png, > flamegraph-1072.1.svg, flamegraph-1072.2.svg, hbase-env.sh, hbase-site.xml, > hbase-site.xml, lock.127.workloadc.20180402T200918Z.svg, > lock.2.memsize2.c.20180403T160257Z.svg, run_ycsb.sh, tree.txt, workloadx > > > How does 2.0.0 compare to old versions? Is it faster, slower? There is rumor > that it is much slower, that the problem is the asyncwal writing. Does > in-memory compaction slow us down or speed us up? What happens when you > enable offheaping? > Keep notes here in this umbrella issue. 
Need to be able to say something > about perf when 2.0.0 ships. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20188) [TESTING] Performance
[ https://issues.apache.org/jira/browse/HBASE-20188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eshcar Hillel updated HBASE-20188: -- Attachment: HBase 2.0 performance evaluation - 8GB(1).pdf > [TESTING] Performance > - > > Key: HBASE-20188 > URL: https://issues.apache.org/jira/browse/HBASE-20188 > Project: HBase > Issue Type: Umbrella > Components: Performance >Reporter: stack >Assignee: stack >Priority: Blocker > Fix For: 2.0.0 > > Attachments: CAM-CONFIG-V01.patch, HBASE-20188-xac.sh, > HBASE-20188.sh, HBase 2.0 performance evaluation - 8GB(1).pdf, HBase 2.0 > performance evaluation - 8GB.pdf, HBase 2.0 performance evaluation - Basic vs > None_ system settings.pdf, ITBLL2.5B_1.2.7vs2.0.0_cpu.png, > ITBLL2.5B_1.2.7vs2.0.0_gctime.png, ITBLL2.5B_1.2.7vs2.0.0_iops.png, > ITBLL2.5B_1.2.7vs2.0.0_load.png, ITBLL2.5B_1.2.7vs2.0.0_memheap.png, > ITBLL2.5B_1.2.7vs2.0.0_memstore.png, ITBLL2.5B_1.2.7vs2.0.0_ops.png, > ITBLL2.5B_1.2.7vs2.0.0_ops_NOT_summing_regions.png, YCSB_CPU.png, > YCSB_GC_TIME.png, YCSB_IN_MEMORY_COMPACTION=NONE.ops.png, YCSB_MEMSTORE.png, > YCSB_OPs.png, YCSB_in-memory-compaction=NONE.ops.png, YCSB_load.png, > flamegraph-1072.1.svg, flamegraph-1072.2.svg, hbase-env.sh, hbase-site.xml, > hbase-site.xml, lock.127.workloadc.20180402T200918Z.svg, > lock.2.memsize2.c.20180403T160257Z.svg, run_ycsb.sh, tree.txt, workloadx > > > How does 2.0.0 compare to old versions? Is it faster, slower? There is rumor > that it is much slower, that the problem is the asyncwal writing. Does > in-memory compaction slow us down or speed us up? What happens when you > enable offheaping? > Keep notes here in this umbrella issue. Need to be able to say something > about perf when 2.0.0 ships. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20219) An error occurs when scanning with reversed=true and loadColumnFamiliesOnDemand=true
[ https://issues.apache.org/jira/browse/HBASE-20219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432744#comment-16432744 ] stack commented on HBASE-20219: --- [~brfrn169] Sure, particularly if needed by Phoenix. I added the 2.0.0 version to it. Let's backport after we figure out the shaded jar issue [~busbey] is trying to debug. > An error occurs when scanning with reversed=true and > loadColumnFamiliesOnDemand=true > > > Key: HBASE-20219 > URL: https://issues.apache.org/jira/browse/HBASE-20219 > Project: HBase > Issue Type: Bug > Components: phoenix >Reporter: Toshihiro Suzuki >Assignee: Toshihiro Suzuki >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-20219-UT.patch, HBASE-20219.master.001.patch, > HBASE-20219.master.002.patch, HBASE-20219.master.003.patch, > HBASE-20219.master.004.patch > > > I'm facing the following error when scanning with reversed=true and > loadColumnFamiliesOnDemand=true: > {code} > java.lang.IllegalStateException: requestSeek cannot be called on > ReversedKeyValueHeap > at > org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.requestSeek(ReversedKeyValueHeap.java:66) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.joinedHeapMayHaveData(HRegion.java:6725) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6652) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6364) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3108) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3345) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41548) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) > at > 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) > {code} > I will attach a UT patch to reproduce this issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20219) An error occurs when scanning with reversed=true and loadColumnFamiliesOnDemand=true
[ https://issues.apache.org/jira/browse/HBASE-20219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-20219: -- Fix Version/s: 2.0.0 > An error occurs when scanning with reversed=true and > loadColumnFamiliesOnDemand=true > > > Key: HBASE-20219 > URL: https://issues.apache.org/jira/browse/HBASE-20219 > Project: HBase > Issue Type: Bug > Components: phoenix >Reporter: Toshihiro Suzuki >Assignee: Toshihiro Suzuki >Priority: Major > Fix For: 2.0.0 > > Attachments: HBASE-20219-UT.patch, HBASE-20219.master.001.patch, > HBASE-20219.master.002.patch, HBASE-20219.master.003.patch, > HBASE-20219.master.004.patch > > > I'm facing the following error when scanning with reversed=true and > loadColumnFamiliesOnDemand=true: > {code} > java.lang.IllegalStateException: requestSeek cannot be called on > ReversedKeyValueHeap > at > org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.requestSeek(ReversedKeyValueHeap.java:66) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.joinedHeapMayHaveData(HRegion.java:6725) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6652) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6364) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3108) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3345) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41548) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) > {code} > I will attach a UT patch to reproduce this issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20219) An error occurs when scanning with reversed=true and loadColumnFamiliesOnDemand=true
[ https://issues.apache.org/jira/browse/HBASE-20219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-20219: -- Component/s: phoenix > An error occurs when scanning with reversed=true and > loadColumnFamiliesOnDemand=true > > > Key: HBASE-20219 > URL: https://issues.apache.org/jira/browse/HBASE-20219 > Project: HBase > Issue Type: Bug > Components: phoenix >Reporter: Toshihiro Suzuki >Assignee: Toshihiro Suzuki >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-20219-UT.patch, HBASE-20219.master.001.patch, > HBASE-20219.master.002.patch, HBASE-20219.master.003.patch, > HBASE-20219.master.004.patch > > > I'm facing the following error when scanning with reversed=true and > loadColumnFamiliesOnDemand=true: > {code} > java.lang.IllegalStateException: requestSeek cannot be called on > ReversedKeyValueHeap > at > org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.requestSeek(ReversedKeyValueHeap.java:66) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.joinedHeapMayHaveData(HRegion.java:6725) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6652) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6364) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3108) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3345) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41548) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) > {code} > I will attach a UT patch to reproduce this issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20219) An error occurs when scanning with reversed=true and loadColumnFamiliesOnDemand=true
[ https://issues.apache.org/jira/browse/HBASE-20219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-20219: -- Priority: Critical (was: Major) > An error occurs when scanning with reversed=true and > loadColumnFamiliesOnDemand=true > > > Key: HBASE-20219 > URL: https://issues.apache.org/jira/browse/HBASE-20219 > Project: HBase > Issue Type: Bug > Components: phoenix >Reporter: Toshihiro Suzuki >Assignee: Toshihiro Suzuki >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-20219-UT.patch, HBASE-20219.master.001.patch, > HBASE-20219.master.002.patch, HBASE-20219.master.003.patch, > HBASE-20219.master.004.patch > > > I'm facing the following error when scanning with reversed=true and > loadColumnFamiliesOnDemand=true: > {code} > java.lang.IllegalStateException: requestSeek cannot be called on > ReversedKeyValueHeap > at > org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.requestSeek(ReversedKeyValueHeap.java:66) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.joinedHeapMayHaveData(HRegion.java:6725) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6652) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6364) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3108) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3345) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41548) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) > {code} > I will attach a UT patch to reproduce this issue. 
[jira] [Commented] (HBASE-19079) Support setting up two clusters with A and S state
[ https://issues.apache.org/jira/browse/HBASE-19079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432738#comment-16432738 ] Hadoop QA commented on HBASE-19079: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 33s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | || || || || {color:brown} HBASE-19064 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 20s{color} | {color:green} HBASE-19064 passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 34s{color} | {color:red} hbase-server in HBASE-19064 failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 59s{color} | {color:green} HBASE-19064 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 15s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 54s{color} | {color:green} HBASE-19064 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s{color} | {color:green} HBASE-19064 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 28s{color} | {color:red} hbase-server generated 174 new + 14 unchanged - 2 fixed = 188 total (was 16) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 58s{color} | {color:green} hbase-server: The patch generated 0 new + 12 unchanged - 2 fixed = 12 total (was 14) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 9s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 16m 57s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green}158m 2s{color} | {color:green} hbase-server in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}200m 50s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f | | JIRA Issue | HBASE-19079 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12918394/HBASE-19079-HBASE-19064-v4.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 0fd183f7dd9c 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 19:38:41 UTC 2017 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | HBASE-19064 / e7b37cf934 | | maven | version: Apache Maven 3.5.3 (3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) | | Default Java | 1.8.0_162 | | compile | https://builds.apache.org/job/PreCommit-HBASE-Build/12378/artifact/patchprocess/branch-compile-hbase-server.txt | | findbugs | v3.1.0-RC3 | | javac |
[jira] [Commented] (HBASE-20376) RowCounter and CellCounter documentations are incorrect
[ https://issues.apache.org/jira/browse/HBASE-20376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432724#comment-16432724 ] Hadoop QA commented on HBASE-20376: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 3s{color} | {color:blue} Shelldocs was not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 40s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 6s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 54s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} refguide {color} | {color:blue} 3m 50s{color} | {color:blue} branch has no errors when building the reference guide. 
See footer for rendered docs, which you should manually inspect. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 0s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 38s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 10s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} hbase-mapreduce: The patch generated 0 new + 11 unchanged - 3 fixed = 11 total (was 14) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 33s{color} | {color:green} root: The patch generated 0 new + 11 unchanged - 3 fixed = 11 total (was 14) {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 0s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:blue}0{color} | {color:blue} refguide {color} | {color:blue} 3m 34s{color} | {color:blue} patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 54s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 14m 33s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 49s{color}
[jira] [Commented] (HBASE-20376) RowCounter and CellCounter documentations are incorrect
[ https://issues.apache.org/jira/browse/HBASE-20376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432690#comment-16432690 ] Hadoop QA commented on HBASE-20376: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 20s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 25s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 47s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} refguide {color} | {color:blue} 3m 31s{color} | {color:blue} branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. 
{color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 56s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 45s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 28s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} hbase-mapreduce: The patch generated 0 new + 11 unchanged - 3 fixed = 11 total (was 14) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 25s{color} | {color:green} root: The patch generated 0 new + 11 unchanged - 3 fixed = 11 total (was 14) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:blue}0{color} | {color:blue} refguide {color} | {color:blue} 3m 19s{color} | {color:blue} patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. 
{color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 54s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 15m 41s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 30s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}200m 9s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} |
[jira] [Commented] (HBASE-20367) Write a replication barrier for regions when disabling a table
[ https://issues.apache.org/jira/browse/HBASE-20367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432682#comment-16432682 ] Hadoop QA commented on HBASE-20367: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 4s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 12s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 54s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 57s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 51s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 3m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} The patch hbase-protocol-shaded passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s{color} | {color:green} The patch hbase-client passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 10s{color} | {color:green} hbase-server: The patch generated 0 new + 9 unchanged - 29 fixed = 9 total (was 38) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 58s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 15m 36s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 32s{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 6s{color} | {color:green} hbase-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}139m 55s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 2s{color} | {color:green} The patch does not generate ASF License warnings.
[jira] [Commented] (HBASE-20248) [ITBLL] UNREFERENCED rows
[ https://issues.apache.org/jira/browse/HBASE-20248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432662#comment-16432662 ] stack commented on HBASE-20248: --- A second 10B run just verified as all Cells present/linked. Starting another > [ITBLL] UNREFERENCED rows > - > > Key: HBASE-20248 > URL: https://issues.apache.org/jira/browse/HBASE-20248 > Project: HBase > Issue Type: Sub-task > Components: dataloss >Affects Versions: 2.0.0 >Reporter: stack >Assignee: stack >Priority: Blocker > Fix For: 2.0.0 > > > From parent, saw unreferenced rows in a run yesterday against tip of > branch-2. Saw similar in a run from a week or so ago. > Enabling DEBUG and rerunning to see if I can get to root of dataloss. See > https://docs.google.com/document/d/14Tvu5yWYNBDFkh8xCqLkU9tlyNWhJv3GjDGOkqZU1eE/edit# > for old debugging trickery. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20253) Error message is missing for restore_snapshot
[ https://issues.apache.org/jira/browse/HBASE-20253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432647#comment-16432647 ] Sean Busbey commented on HBASE-20253: - ruby-lint warnings look like it just doesn't understand JRuby. Several of the rubocop issues are things that need to be cleaned up in the patch. Could you try to bring down the warning count? > Error message is missing for restore_snapshot > - > > Key: HBASE-20253 > URL: https://issues.apache.org/jira/browse/HBASE-20253 > Project: HBase > Issue Type: Sub-task > Components: shell >Affects Versions: 2.0.0 >Reporter: Peter Somogyi >Assignee: Gabor Bota >Priority: Minor > Attachments: HBASE-20253.master.001.patch, > HBASE-20253.master.002.patch, HBASE-20253.master.003.patch > > > When the table is not disabled and restore_snapshot is executed, the error > message is useless; it only displays the table name. > hbase(main):007:0> restore_snapshot 'tsnap' > ERROR: t > Restore a specified snapshot. > The restore will replace the content of the original table, > bringing back the content to the snapshot state. > The table must be disabled. > Examples: > hbase> restore_snapshot 'snapshotName' > Following command will restore all acl from snapshot table into the table. > hbase> restore_snapshot 'snapshotName', \{RESTORE_ACL=>true} > Took 0.1044 seconds -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20219) An error occurs when scanning with reversed=true and loadColumnFamiliesOnDemand=true
[ https://issues.apache.org/jira/browse/HBASE-20219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432639#comment-16432639 ] Sean Busbey commented on HBASE-20219: - fyi, I'm running debug builds against this patch to figure out what's going on. it'll probably comment here. please make sure the issue stays in Patch Available. > An error occurs when scanning with reversed=true and > loadColumnFamiliesOnDemand=true > > > Key: HBASE-20219 > URL: https://issues.apache.org/jira/browse/HBASE-20219 > Project: HBase > Issue Type: Bug >Reporter: Toshihiro Suzuki >Assignee: Toshihiro Suzuki >Priority: Major > Attachments: HBASE-20219-UT.patch, HBASE-20219.master.001.patch, > HBASE-20219.master.002.patch, HBASE-20219.master.003.patch, > HBASE-20219.master.004.patch > > > I'm facing the following error when scanning with reversed=true and > loadColumnFamiliesOnDemand=true: > {code} > java.lang.IllegalStateException: requestSeek cannot be called on > ReversedKeyValueHeap > at > org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.requestSeek(ReversedKeyValueHeap.java:66) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.joinedHeapMayHaveData(HRegion.java:6725) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6652) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6364) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3108) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3345) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41548) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) > at > 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) > {code} > I will attach a UT patch to reproduce this issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
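The trace above bottoms out in ReversedKeyValueHeap.requestSeek, which rejects lazy seeks outright. The shape of the bug can be sketched with hypothetical stand-in classes (these are illustrative only, not HBase's real scanner classes): joinedHeapMayHaveData() requests a lazy seek on the joined heap without first checking whether the scan is reversed.

```java
// Minimal, self-contained model of the HBASE-20219 failure mode.
// All class names here are hypothetical stand-ins, NOT HBase's real code:
// just enough structure to show why a lazy seek on the joined heap blows
// up when the scan is reversed.
interface KeyValueHeapSketch {
    boolean requestSeek(String key);
}

class ForwardHeapSketch implements KeyValueHeapSketch {
    @Override
    public boolean requestSeek(String key) {
        return true; // forward heaps support lazy seeks
    }
}

class ReversedHeapSketch implements KeyValueHeapSketch {
    @Override
    public boolean requestSeek(String key) {
        // Mirrors ReversedKeyValueHeap.requestSeek in the stack trace above.
        throw new IllegalStateException(
            "requestSeek cannot be called on ReversedKeyValueHeap");
    }
}

public class ReversedScanSketch {
    // Models joinedHeapMayHaveData(): it calls requestSeek unconditionally,
    // without checking whether the scan is reversed -- the reported bug.
    static boolean joinedHeapMayHaveData(KeyValueHeapSketch joinedHeap, String key) {
        return joinedHeap.requestSeek(key);
    }

    public static void main(String[] args) {
        // Forward scan: fine. Reversed scan (loadColumnFamiliesOnDemand
        // engages the joined heap): throws.
        System.out.println(joinedHeapMayHaveData(new ForwardHeapSketch(), "row1"));
        try {
            joinedHeapMayHaveData(new ReversedHeapSketch(), "row1");
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

A fix along these lines would presumably branch on the reversed flag (or use a seek path the reversed heap supports) before calling requestSeek; the attached patches on this issue are the authoritative change.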
[jira] [Commented] (HBASE-20381) precommit failing w/rat on shadedjars plugin
[ https://issues.apache.org/jira/browse/HBASE-20381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432637#comment-16432637 ] Sean Busbey commented on HBASE-20381: - started a debug build with an extra arg to save any rat.txt files that get generated: https://builds.apache.org/job/PreCommit-HBASE-Build/12381/ > precommit failing w/rat on shadedjars plugin > > > Key: HBASE-20381 > URL: https://issues.apache.org/jira/browse/HBASE-20381 > Project: HBase > Issue Type: Bug > Components: test >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Blocker > > see HBASE-20219 and related builds. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-20381) precommit failing w/rat on shadedjars plugin
Sean Busbey created HBASE-20381: --- Summary: precommit failing w/rat on shadedjars plugin Key: HBASE-20381 URL: https://issues.apache.org/jira/browse/HBASE-20381 Project: HBase Issue Type: Bug Components: test Reporter: Sean Busbey Assignee: Sean Busbey see HBASE-20219 and related builds. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work started] (HBASE-20381) precommit failing w/rat on shadedjars plugin
[ https://issues.apache.org/jira/browse/HBASE-20381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-20381 started by Sean Busbey. --- > precommit failing w/rat on shadedjars plugin > > > Key: HBASE-20381 > URL: https://issues.apache.org/jira/browse/HBASE-20381 > Project: HBase > Issue Type: Bug > Components: test >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Blocker > > see HBASE-20219 and related builds. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20219) An error occurs when scanning with reversed=true and loadColumnFamiliesOnDemand=true
[ https://issues.apache.org/jira/browse/HBASE-20219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432632#comment-16432632 ] Sean Busbey commented on HBASE-20219: - I think solving it probably requires access to the asf builds machine. I'm tracking working on it in HBASE-20381. > An error occurs when scanning with reversed=true and > loadColumnFamiliesOnDemand=true > > > Key: HBASE-20219 > URL: https://issues.apache.org/jira/browse/HBASE-20219 > Project: HBase > Issue Type: Bug >Reporter: Toshihiro Suzuki >Assignee: Toshihiro Suzuki >Priority: Major > Attachments: HBASE-20219-UT.patch, HBASE-20219.master.001.patch, > HBASE-20219.master.002.patch, HBASE-20219.master.003.patch, > HBASE-20219.master.004.patch > > > I'm facing the following error when scanning with reversed=true and > loadColumnFamiliesOnDemand=true: > {code} > java.lang.IllegalStateException: requestSeek cannot be called on > ReversedKeyValueHeap > at > org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.requestSeek(ReversedKeyValueHeap.java:66) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.joinedHeapMayHaveData(HRegion.java:6725) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6652) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6364) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3108) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3345) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41548) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) > {code} > I will attach a 
UT patch to reproduce this issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20380) Put up 2.0.0RC0
[ https://issues.apache.org/jira/browse/HBASE-20380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432628#comment-16432628 ] stack commented on HBASE-20380: --- Smile [~chia7712] It does not include your requested backport but no worries, I'm sure there'll be an RC1 (and an RC2). We can get it then. > Put up 2.0.0RC0 > --- > > Key: HBASE-20380 > URL: https://issues.apache.org/jira/browse/HBASE-20380 > Project: HBase > Issue Type: Sub-task >Affects Versions: 2.0.0 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 2.0.0 > > > JIRA to hang 2.0.0RC0-making steps on. > I ran the below out of yetus and copied over new CHANGELOG and RELEASENOTES > to what is in branch-2.0. > {code} > $ ./release-doc-maker/releasedocmaker.py -p HBASE --fileversions -v 2.0.0 -l > --sortorder=newer --skip-credits > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20149) Purge dev javadoc from bin tarball (or make a separate tarball of javadoc)
[ https://issues.apache.org/jira/browse/HBASE-20149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-20149: Release Note: We no longer include dev or dev test javadocs in our binary bundle. We still build them; they are just not included because they were half the size of the resultant tarball. Here is our story on javadoc as of this commit: * apidocs - user facing main api javadocs. currently for a release line, published on website and linked from menu. included in the bin tarball * devapidocs - hbase internal javadocs. currently for a release line, published on the website but not linked from the menu. no longer included in the bin tarball. * testapidocs - user facing test scope api javadocs. currently for a release line, not published. included in the bin tarball. * testdevapidocs - hbase internal test scope javadocs. currently for a release line, not published. no longer included in the bin tarball was: We no longer include dev or dev test javadocs in our binary bundle. We still build them; they are just not included because they were half the size of the resultant tarball. Here is our story on javadoc as of this commit: * apidocs - user facing main api javadocs. currently for a release line, published on website and linked from menu. included in the bin tarball * devapidocs - hbase internal javadocs. currently for a release line, published on the website but not linked from the menu. included in the bin tarball now, but not after this patch. * testapidocs - user facing test scope api javadocs. currently for a release line, not published. included in the bin tarball now and with this patch. * testdevapidocs - hbase internal test scope javadocs. currently for a release line, not published. included in the bin tarball now but not after this patch. 
> Purge dev javadoc from bin tarball (or make a separate tarball of javadoc) > -- > > Key: HBASE-20149 > URL: https://issues.apache.org/jira/browse/HBASE-20149 > Project: HBase > Issue Type: Sub-task > Components: build, community, documentation >Reporter: stack >Assignee: stack >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-20149.branch-2.0.001.patch, > HBASE-20149.branch-2.0.002.patch, HBASE-20149.branch-2.0.003.patch > > > The bin tarball is too fat (Chia-Ping and Josh noticed it on the beta-2 > vote). A note to the dev list subsequently resulted in suggestion that we > just purge dev javadoc (or even all javadoc) from bin tarball (Andrew). Sean > was good w/ it and suggested perhaps we could do a javadoc only tgz. Let me > look into this. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20368) Fix RIT stuck when an rsgroup has no online servers but AM's pendingAssignQueue is cleared
[ https://issues.apache.org/jira/browse/HBASE-20368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432625#comment-16432625 ] stack commented on HBASE-20368: --- [~Xiaolin Ha] May we see a log showing the problem you describe? {quote} This error can be reproduced by shutting down all servers in an rsgroup and starting them soon afterwards. The regions on this rsgroup will be reassigned, but there are no available servers in this rsgroup. They will be added to AM's pendingAssignQueue, which AM will clear regardless of the result of assigning in this case. {quote} Please help me understand '...regions on this rsgroup will be reassigned, but there are no available servers in this rsgroup'. We just restarted the servers, so why no available servers? (And IIRC, RSGroups identifies servers 'generally', with port and name rather than with port, name, and startcode... so why does it not find the restarted servers?) bq. They will be added to AM's pendingAssignQueue, which AM will clear regardless of the result of assigning in this case. We clear from pendingAssignQ whether failed or success? Thanks. > Fix RIT stuck when an rsgroup has no online servers but AM's > pendingAssignQueue is cleared > - > > Key: HBASE-20368 > URL: https://issues.apache.org/jira/browse/HBASE-20368 > Project: HBase > Issue Type: Bug > Components: rsgroup >Affects Versions: 2.0.0 >Reporter: Xiaolin Ha >Assignee: Xiaolin Ha >Priority: Major > Attachments: HBASE-20368.branch-2.0.001.patch > > > This error can be reproduced by shutting down all servers in an rsgroup and > starting them soon afterwards. > The regions on this rsgroup will be reassigned, but there are no available > servers in this rsgroup. > They will be added to AM's pendingAssignQueue, which AM will clear regardless > of the result of assigning in this case. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
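stack's last question (is the queue cleared whether assignment failed or succeeded?) is the heart of the report. A toy sketch, using hypothetical names rather than the real AssignmentManager code, contrasts the two drain behaviors:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.function.Predicate;

// Toy model of the pendingAssignQueue behavior described in HBASE-20368.
// Names and structure are hypothetical; the real AssignmentManager is far
// more involved.
public class PendingAssignSketch {

    // Buggy drain: the queue is emptied whether or not assignment succeeded,
    // so regions whose rsgroup has no online servers are silently dropped
    // (the stuck-RIT symptom).
    static List<String> drainClearingAll(Deque<String> pending, Predicate<String> assign) {
        List<String> assigned = new ArrayList<>();
        while (!pending.isEmpty()) {
            String region = pending.poll();
            if (assign.test(region)) {
                assigned.add(region);
            }
            // failure branch: region is neither assigned nor re-queued
        }
        return assigned;
    }

    // Fixed drain: failed assignments are re-queued for a later retry,
    // e.g. once the rsgroup's servers come back online.
    static List<String> drainKeepingFailures(Deque<String> pending, Predicate<String> assign) {
        List<String> assigned = new ArrayList<>();
        Deque<String> retry = new ArrayDeque<>();
        while (!pending.isEmpty()) {
            String region = pending.poll();
            if (assign.test(region)) {
                assigned.add(region);
            } else {
                retry.add(region);
            }
        }
        pending.addAll(retry);
        return assigned;
    }
}
```

If every assignment fails because the rsgroup has no online servers yet, the first drain leaves regions neither assigned nor queued; the second keeps them pending so a later pass can place them.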
[jira] [Commented] (HBASE-20380) Put up 2.0.0RC0
[ https://issues.apache.org/jira/browse/HBASE-20380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432616#comment-16432616 ] Chia-Ping Tsai commented on HBASE-20380: WHAT A LOVELY DAY! > Put up 2.0.0RC0 > --- > > Key: HBASE-20380 > URL: https://issues.apache.org/jira/browse/HBASE-20380 > Project: HBase > Issue Type: Sub-task >Affects Versions: 2.0.0 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 2.0.0 > > > JIRA to hang 2.0.0RC0-making steps on. > I ran the below out of yetus and copied over new CHANGELOG and RELEASENOTES > to what is in branch-2.0. > {code} > $ ./release-doc-maker/releasedocmaker.py -p HBASE --fileversions -v 2.0.0 -l > --sortorder=newer --skip-credits > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HBASE-20149) Purge dev javadoc from bin tarball (or make a separate tarball of javadoc)
[ https://issues.apache.org/jira/browse/HBASE-20149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack resolved HBASE-20149. --- Resolution: Fixed Assignee: stack (was: Artem Ervits) Hadoop Flags: Reviewed Resolving. > Purge dev javadoc from bin tarball (or make a separate tarball of javadoc) > -- > > Key: HBASE-20149 > URL: https://issues.apache.org/jira/browse/HBASE-20149 > Project: HBase > Issue Type: Sub-task > Components: build, community, documentation >Reporter: stack >Assignee: stack >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-20149.branch-2.0.001.patch, > HBASE-20149.branch-2.0.002.patch, HBASE-20149.branch-2.0.003.patch > > > The bin tarball is too fat (Chia-Ping and Josh noticed it on the beta-2 > vote). A note to the dev list subsequently resulted in suggestion that we > just purge dev javadoc (or even all javadoc) from bin tarball (Andrew). Sean > was good w/ it and suggested perhaps we could do a javadoc only tgz. Let me > look into this. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20149) Purge dev javadoc from bin tarball (or make a separate tarball of javadoc)
[ https://issues.apache.org/jira/browse/HBASE-20149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-20149: -- Release Note: We no longer include dev or dev test javadocs in our binary bundle. We still build them; they are just not included because they were half the size of the resultant tarball. Here is our story on javadoc as of this commit: * apidocs - user facing main api javadocs. currently for a release line, published on website and linked from menu. included in the bin tarball * devapidocs - hbase internal javadocs. currently for a release line, published on the website but not linked from the menu. included in the bin tarball now, but not after this patch. * testapidocs - user facing test scope api javadocs. currently for a release line, not published. included in the bin tarball now and with this patch. * testdevapidocs - hbase internal test scope javadocs. currently for a release line, not published. included in the bin tarball now but not after this patch. was:We no longer include user or test javadocs in our binary bundle. We still build them; they are just not included because they were half the size of the resultant tarball. > Purge dev javadoc from bin tarball (or make a separate tarball of javadoc) > -- > > Key: HBASE-20149 > URL: https://issues.apache.org/jira/browse/HBASE-20149 > Project: HBase > Issue Type: Sub-task > Components: build, community, documentation >Reporter: stack >Assignee: Artem Ervits >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-20149.branch-2.0.001.patch, > HBASE-20149.branch-2.0.002.patch, HBASE-20149.branch-2.0.003.patch > > > The bin tarball is too fat (Chia-Ping and Josh noticed it on the beta-2 > vote). A note to the dev list subsequently resulted in suggestion that we > just purge dev javadoc (or even all javadoc) from bin tarball (Andrew). Sean > was good w/ it and suggested perhaps we could do a javadoc only tgz. Let me > look into this. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20149) Purge dev javadoc from bin tarball (or make a separate tarball of javadoc)
[ https://issues.apache.org/jira/browse/HBASE-20149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432602#comment-16432602 ] stack commented on HBASE-20149: --- Pushed to branch-2.0+. Thanks [~busbey]. Pasting your list into release notes. When you say 'published on the website', it makes it sound like we have an automated process rather than the manual hackery we currently do -- but I'll buy it. Wondering now if we should publish dev and user doc and no test APIs. HBASE-19663 makes it so user and dev api are currently the same thing when you click 'User API' and 'Dev API'. Needs work. > Purge dev javadoc from bin tarball (or make a separate tarball of javadoc) > -- > > Key: HBASE-20149 > URL: https://issues.apache.org/jira/browse/HBASE-20149 > Project: HBase > Issue Type: Sub-task > Components: build, community, documentation >Reporter: stack >Assignee: Artem Ervits >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-20149.branch-2.0.001.patch, > HBASE-20149.branch-2.0.002.patch, HBASE-20149.branch-2.0.003.patch > > > The bin tarball is too fat (Chia-Ping and Josh noticed it on the beta-2 > vote). A note to the dev list subsequently resulted in suggestion that we > just purge dev javadoc (or even all javadoc) from bin tarball (Andrew). Sean > was good w/ it and suggested perhaps we could do a javadoc only tgz. Let me > look into this. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20149) Purge dev javadoc from bin tarball (or make a separate tarball of javadoc)
[ https://issues.apache.org/jira/browse/HBASE-20149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-20149: -- Release Note: We no longer include user or test javadocs in our binary bundle. We still build them; they are just not included because they were half the size of the resultant tarball. > Purge dev javadoc from bin tarball (or make a separate tarball of javadoc) > -- > > Key: HBASE-20149 > URL: https://issues.apache.org/jira/browse/HBASE-20149 > Project: HBase > Issue Type: Sub-task > Components: build, community, documentation >Reporter: stack >Assignee: Artem Ervits >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-20149.branch-2.0.001.patch, > HBASE-20149.branch-2.0.002.patch, HBASE-20149.branch-2.0.003.patch > > > The bin tarball is too fat (Chia-Ping and Josh noticed it on the beta-2 > vote). A note to the dev list subsequently resulted in suggestion that we > just purge dev javadoc (or even all javadoc) from bin tarball (Andrew). Sean > was good w/ it and suggested perhaps we could do a javadoc only tgz. Let me > look into this. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20243) [Shell] Add shell command to create a new table by cloning the existent table
[ https://issues.apache.org/jira/browse/HBASE-20243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432590#comment-16432590 ] Guangxu Cheng commented on HBASE-20243: --- Addressed the rubocop warnings per [~busbey]'s suggestions. Retrying again :) > [Shell] Add shell command to create a new table by cloning the existent table > - > > Key: HBASE-20243 > URL: https://issues.apache.org/jira/browse/HBASE-20243 > Project: HBase > Issue Type: Improvement > Components: shell >Reporter: Guangxu Cheng >Assignee: Guangxu Cheng >Priority: Minor > Fix For: 2.1.0 > > Attachments: HBASE-20243.master.001.patch, > HBASE-20243.master.002.patch, HBASE-20243.master.003.patch, > HBASE-20243.master.004.patch, HBASE-20243.master.005.patch, > HBASE-20243.master.006.patch, HBASE-20243.master.007.patch, > HBASE-20243.master.008.patch, HBASE-20243.master.008.patch, > HBASE-20243.master.009.patch > > > In the production environment, we need to create a new table every day. The > schema and the split keys of the table are the same as that of yesterday's > table, only the name of the table is different. For example, > x_20180321, x_20180322, etc. But now there is no convenient command to > do this. So we may need such a command (clone_table) to create a new table by > cloning the existent table. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20243) [Shell] Add shell command to create a new table by cloning the existent table
[ https://issues.apache.org/jira/browse/HBASE-20243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guangxu Cheng updated HBASE-20243: -- Attachment: HBASE-20243.master.009.patch > [Shell] Add shell command to create a new table by cloning the existent table > - > > Key: HBASE-20243 > URL: https://issues.apache.org/jira/browse/HBASE-20243 > Project: HBase > Issue Type: Improvement > Components: shell >Reporter: Guangxu Cheng >Assignee: Guangxu Cheng >Priority: Minor > Fix For: 2.1.0 > > Attachments: HBASE-20243.master.001.patch, > HBASE-20243.master.002.patch, HBASE-20243.master.003.patch, > HBASE-20243.master.004.patch, HBASE-20243.master.005.patch, > HBASE-20243.master.006.patch, HBASE-20243.master.007.patch, > HBASE-20243.master.008.patch, HBASE-20243.master.008.patch, > HBASE-20243.master.009.patch > > > In the production environment, we need to create a new table every day. The > schema and the split keys of the table are the same as that of yesterday's > table, only the name of the table is different. For example, > x_20180321, x_20180322, etc. But now there is no convenient command to > do this. So we may need such a command (clone_table) to create a new table by > cloning the existent table. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20380) Put up 2.0.0RC0
[ https://issues.apache.org/jira/browse/HBASE-20380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-20380: -- Description: JIRA to hang 2.0.0RC0-making steps on. I ran the below out of yetus and copied over new CHANGELOG and RELEASENOTES to what is in branch-2.0. {code} $ ./release-doc-maker/releasedocmaker.py -p HBASE --fileversions -v 2.0.0 -l --sortorder=newer --skip-credits {code} > Put up 2.0.0RC0 > --- > > Key: HBASE-20380 > URL: https://issues.apache.org/jira/browse/HBASE-20380 > Project: HBase > Issue Type: Sub-task >Affects Versions: 2.0.0 >Reporter: stack >Assignee: stack >Priority: Major > Fix For: 2.0.0 > > > JIRA to hang 2.0.0RC0-making steps on. > I ran the below out of yetus and copied over new CHANGELOG and RELEASENOTES > to what is in branch-2.0. > {code} > $ ./release-doc-maker/releasedocmaker.py -p HBASE --fileversions -v 2.0.0 -l > --sortorder=newer --skip-credits > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-20380) Put up 2.0.0RC0
stack created HBASE-20380: - Summary: Put up 2.0.0RC0 Key: HBASE-20380 URL: https://issues.apache.org/jira/browse/HBASE-20380 Project: HBase Issue Type: Sub-task Affects Versions: 2.0.0 Reporter: stack Assignee: stack Fix For: 2.0.0 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20219) An error occurs when scanning with reversed=true and loadColumnFamiliesOnDemand=true
[ https://issues.apache.org/jira/browse/HBASE-20219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432572#comment-16432572 ] Toshihiro Suzuki commented on HBASE-20219: -- Thank you [~busbey]. How can I resolve the shadedjars fail? > An error occurs when scanning with reversed=true and > loadColumnFamiliesOnDemand=true > > > Key: HBASE-20219 > URL: https://issues.apache.org/jira/browse/HBASE-20219 > Project: HBase > Issue Type: Bug >Reporter: Toshihiro Suzuki >Assignee: Toshihiro Suzuki >Priority: Major > Attachments: HBASE-20219-UT.patch, HBASE-20219.master.001.patch, > HBASE-20219.master.002.patch, HBASE-20219.master.003.patch, > HBASE-20219.master.004.patch > > > I'm facing the following error when scanning with reversed=true and > loadColumnFamiliesOnDemand=true: > {code} > java.lang.IllegalStateException: requestSeek cannot be called on > ReversedKeyValueHeap > at > org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.requestSeek(ReversedKeyValueHeap.java:66) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.joinedHeapMayHaveData(HRegion.java:6725) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6652) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6364) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3108) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3345) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41548) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) > {code} > I will attach a UT patch to reproduce this issue. 
[jira] [Created] (HBASE-20379) shadedjars yetus plugin should add a footer link
Sean Busbey created HBASE-20379: --- Summary: shadedjars yetus plugin should add a footer link Key: HBASE-20379 URL: https://issues.apache.org/jira/browse/HBASE-20379 Project: HBase Issue Type: Improvement Components: test Reporter: Sean Busbey Assignee: Sean Busbey Investigating the failure on HBASE-20219: it would be nice if we posted a footer link to what failed.
[jira] [Commented] (HBASE-20219) An error occurs when scanning with reversed=true and loadColumnFamiliesOnDemand=true
[ https://issues.apache.org/jira/browse/HBASE-20219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432563#comment-16432563 ] Sean Busbey commented on HBASE-20219: - please don't push until we can resolve shadedjars failing on master. > An error occurs when scanning with reversed=true and > loadColumnFamiliesOnDemand=true > > > Key: HBASE-20219 > URL: https://issues.apache.org/jira/browse/HBASE-20219 > Project: HBase > Issue Type: Bug >Reporter: Toshihiro Suzuki >Assignee: Toshihiro Suzuki >Priority: Major > Attachments: HBASE-20219-UT.patch, HBASE-20219.master.001.patch, > HBASE-20219.master.002.patch, HBASE-20219.master.003.patch, > HBASE-20219.master.004.patch > > > I'm facing the following error when scanning with reversed=true and > loadColumnFamiliesOnDemand=true: > {code} > java.lang.IllegalStateException: requestSeek cannot be called on > ReversedKeyValueHeap > at > org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.requestSeek(ReversedKeyValueHeap.java:66) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.joinedHeapMayHaveData(HRegion.java:6725) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6652) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6364) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3108) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3345) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41548) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) > {code} > I will attach a UT patch to reproduce this issue. 
[jira] [Commented] (HBASE-20219) An error occurs when scanning with reversed=true and loadColumnFamiliesOnDemand=true
[ https://issues.apache.org/jira/browse/HBASE-20219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432562#comment-16432562 ] Toshihiro Suzuki commented on HBASE-20219: -- [~stack] Do you want the patch in 2.0? > An error occurs when scanning with reversed=true and > loadColumnFamiliesOnDemand=true > > > Key: HBASE-20219 > URL: https://issues.apache.org/jira/browse/HBASE-20219 > Project: HBase > Issue Type: Bug >Reporter: Toshihiro Suzuki >Assignee: Toshihiro Suzuki >Priority: Major > Attachments: HBASE-20219-UT.patch, HBASE-20219.master.001.patch, > HBASE-20219.master.002.patch, HBASE-20219.master.003.patch, > HBASE-20219.master.004.patch > > > I'm facing the following error when scanning with reversed=true and > loadColumnFamiliesOnDemand=true: > {code} > java.lang.IllegalStateException: requestSeek cannot be called on > ReversedKeyValueHeap > at > org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.requestSeek(ReversedKeyValueHeap.java:66) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.joinedHeapMayHaveData(HRegion.java:6725) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6652) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6364) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3108) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3345) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41548) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) > {code} > I will attach a UT patch to reproduce this issue. 
[jira] [Commented] (HBASE-19663) site build fails complaining "javadoc: error - class file for javax.annotation.meta.TypeQualifierNickname not found"
[ https://issues.apache.org/jira/browse/HBASE-19663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432561#comment-16432561 ] stack commented on HBASE-19663: --- This ain't a blocker. Yesterday my pom was missing the workaround HBASE-19670. The implication is that when you click on user api or on dev api, you see the same thing; all api. Not pretty but not end of the world. > site build fails complaining "javadoc: error - class file for > javax.annotation.meta.TypeQualifierNickname not found" > > > Key: HBASE-19663 > URL: https://issues.apache.org/jira/browse/HBASE-19663 > Project: HBase > Issue Type: Bug > Components: website >Reporter: stack >Assignee: stack >Priority: Critical > Fix For: 2.0.0 > > Attachments: script.sh > > > Cryptic failure trying to build beta-1 RC. Fails like this: > {code} > [INFO] BUILD FAILURE > [INFO] > > [INFO] Total time: 03:54 min > [INFO] Finished at: 2017-12-29T01:13:15-08:00 > [INFO] Final Memory: 381M/9165M > [INFO] > > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project > hbase: Error generating maven-javadoc-plugin:2.10.3:aggregate: > [ERROR] Exit code: 1 - warning: unknown enum constant When.ALWAYS > [ERROR] reason: class file for javax.annotation.meta.When not found > [ERROR] warning: unknown enum constant When.UNKNOWN > [ERROR] warning: unknown enum constant When.MAYBE > [ERROR] > /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762: > warning - Tag @link: malformed: "#matchingRows(Cell, byte[]))" > [ERROR] > /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762: > warning - Tag @link: reference not found: #matchingRows(Cell, byte[])) > [ERROR] > /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762: > warning - Tag @link: reference not found: #matchingRows(Cell, byte[])) > [ERROR] javadoc: warning - Class javax.annotation.Nonnull not 
found. > [ERROR] javadoc: error - class file for > javax.annotation.meta.TypeQualifierNickname not found > [ERROR] > [ERROR] Command line was: /home/stack/bin/jdk1.8.0_151/jre/../bin/javadoc > -J-Xmx2G @options @packages > [ERROR] > [ERROR] Refer to the generated Javadoc files in > '/home/stack/hbase.git/target/site/apidocs' dir. > [ERROR] -> [Help 1] > [ERROR] > [ERROR] To see the full stack trace of the errors, re-run Maven with the -e > switch. > [ERROR] Re-run Maven using the -X switch to enable full debug logging. > [ERROR] > [ERROR] For more information about the errors and possible solutions, please > read the following articles: > [ERROR] [Help 1] > http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException > {code} > javax.annotation.meta.TypeQualifierNickname is out of jsr305 but we don't > include this anywhere according to mvn dependency. > Happens building the User API both test and main. > Excluding these lines gets us passing again: > {code} > 3511 > 3512 > org.apache.yetus.audience.tools.IncludePublicAnnotationsStandardDoclet > 3513 > 3514 > 3515 org.apache.yetus > 3516 audience-annotations > 3517 ${audience-annotations.version} > 3518 > + 3519 true > {code} > Tried upgrading to newer mvn site (ours is three years old) but that hit a > different set of problems.