[jira] [Updated] (HBASE-19397) Design procedures for ReplicationManager to notify peer change event from master
[ https://issues.apache.org/jira/browse/HBASE-19397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Duo Zhang updated HBASE-19397:
------------------------------
    Issue Type: New Feature  (was: Sub-task)
        Parent:  (was: HBASE-15867)

> Design procedures for ReplicationManager to notify peer change event from master
> --------------------------------------------------------------------------------
>
>                 Key: HBASE-19397
>                 URL: https://issues.apache.org/jira/browse/HBASE-19397
>             Project: HBase
>          Issue Type: New Feature
>          Components: Replication
>            Reporter: Zheng Hu
>            Assignee: Zheng Hu
>
> After we store peer states and peer queue information in an hbase table, an RS
> can no longer track peer config changes by adding a watcher on a znode.
> So we need to design procedures for ReplicationManager to notify peer change
> events. The replication RPC interfaces which may be implemented by
> procedures are the following:
> {code}
> 1. addReplicationPeer
> 2. removeReplicationPeer
> 3. enableReplicationPeer
> 4. disableReplicationPeer
> 5. updateReplicationPeerConfig
> {code}
> BTW, our RS states will still be stored in zookeeper, so when an RS crashes, the
> tracker which triggers the transfer of the crashed RS's queues will still be a
> Zookeeper tracker. We need NOT implement that with procedures.
> As we will release 2.0 in the next weeks and HBASE-15867 can not be
> resolved before the release, I'd prefer to create a new feature branch
> for HBASE-15867.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
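The five replication RPCs listed above can all be driven by one master-side procedure type parameterized by an operation. A minimal sketch, with hypothetical names (this is not the actual HBase procedure API, just an illustration of modeling the five calls as one parameterized notification):

```java
// Hypothetical sketch: the five peer-change RPCs modeled as one operation
// enum, so the master can run the same notify-all-RSes state machine for each.
public class PeerProcedureSketch {

    // Mirrors the five RPC interfaces listed in the issue description.
    public enum PeerOperationType { ADD, REMOVE, ENABLE, DISABLE, UPDATE_CONFIG }

    /** Trivial stand-in for a master-side procedure: describes which change
     *  every RS would be asked to apply for the given peer. */
    public static String describe(PeerOperationType op, String peerId) {
        switch (op) {
            case ADD:           return "add peer " + peerId;
            case REMOVE:        return "remove peer " + peerId;
            case ENABLE:        return "enable peer " + peerId;
            case DISABLE:       return "disable peer " + peerId;
            case UPDATE_CONFIG: return "update config of peer " + peerId;
            default: throw new AssertionError("unreachable");
        }
    }
}
```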
[jira] [Commented] (HBASE-19216) Implement a general framework to execute remote procedure on RS
[ https://issues.apache.org/jira/browse/HBASE-19216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283178#comment-16283178 ]

Duo Zhang commented on HBASE-19216:
-----------------------------------
I planned to implement a general framework and also a test procedure to show that the framework is OK. But I finally found that we have different queues in MasterProcedureScheduler, and neither ServerQueue nor TableQueue can represent operations like a peer state change, and it is weird to add a 'test' queue type to it.

So I plan to convert HBASE-19397 to a top-level issue, change the title of this issue to 'Master side change of peer state change procedure' and make it a sub-task, and create another sub-task for [~openinx]'s RS side change. Since the feature can only be implemented after both issues are resolved, I plan to create a feature branch HBASE-19397 and merge it back after the two sub-tasks are both resolved.

Thanks.

> Implement a general framework to execute remote procedure on RS
> ---------------------------------------------------------------
>
>                 Key: HBASE-19216
>                 URL: https://issues.apache.org/jira/browse/HBASE-19216
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Duo Zhang
>
> When building the basic framework for HBASE-19064, I found that
> enable/disable peer is built upon zk watchers.
> The problem with using watchers is that you do not know the exact time when all
> RSes in the cluster have done the change; it is 'eventually done'.
> And for synchronous replication, when changing the state of a replication
> peer, we need to know the exact time, as we can only enable read/write after
> that time. So I think we'd better use a procedure to do this: change the flag
> on zk, and then execute a procedure on all RSes to reload the flag from zk.
> Another benefit is that, after the change, zk will be mainly used as storage,
> so it will be easy to implement another replication peer storage to
> replace zk and reduce the dependency on zk.
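The "know the exact time when all RSes are done" property described above is the key difference from zk watchers. A minimal, self-contained sketch (stub interface and method names are hypothetical, not the HBase remote-procedure framework): dispatch the change to every server and block until each one acks, so the caller has a definite completion point instead of eventual delivery.

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of synchronous fan-out: returns only after every RS
// has applied the change, unlike a zk watcher which is eventually consistent.
public class RemoteProcedureSketch {
    public interface RegionServerStub { void reloadPeerState(); }

    /** Returns true only once every RS has acked; false on timeout. */
    public static boolean executeOnAll(List<RegionServerStub> servers)
            throws InterruptedException {
        CountDownLatch done = new CountDownLatch(servers.size());
        ExecutorService pool = Executors.newFixedThreadPool(Math.max(1, servers.size()));
        for (RegionServerStub rs : servers) {
            // Each "remote call" acks by counting down the latch.
            pool.execute(() -> { rs.reloadPeerState(); done.countDown(); });
        }
        pool.shutdown();
        return done.await(10, TimeUnit.SECONDS);
    }
}
```

In the real system the fan-out crosses the network and survives master restart via the procedure store; the latch here only models the completion barrier.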
[jira] [Commented] (HBASE-19134) Make WALKey an Interface; expose Read-Only version to CPs
[ https://issues.apache.org/jira/browse/HBASE-19134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283177#comment-16283177 ]

stack commented on HBASE-19134:
-------------------------------
.003 Fix tests (dumb attempt at reusing empty WALKeys went awry).

> Make WALKey an Interface; expose Read-Only version to CPs
> ---------------------------------------------------------
>
>                 Key: HBASE-19134
>                 URL: https://issues.apache.org/jira/browse/HBASE-19134
>             Project: HBase
>          Issue Type: Bug
>          Components: Coprocessors, wal
>            Reporter: stack
>            Assignee: stack
>             Fix For: 2.0.0-beta-1
>
>         Attachments: HBASE-19134.master.001.patch, HBASE-19134.master.002.patch, HBASE-19134.master.003.patch
>
> WALKey has been made IA.Private in hbase2. Even so, given we don't have an
> alternative to expose at this time, it is still exposed to coprocessors at a
> few (now deprecated) locations.
> In the review of HBASE-18770, [~chia7712] made the reasonable suggestion that what
> we expose to CPs be a read-only WALKey. He got pushback on doing this for
> hbase2 (do we even want to expose WALKey to CPs, is WALKey right going
> forward, etc.). Chia-Ping came back with the below (copied from HBASE-18770):
> What we want to fix for WALKey is shown below:
> * expose some methods to CP users safely
> * refactor/redo
> As I see it, adding an interface exposed to CP users for WALKey is the right
> choice because it can bring some benefits:
> * We can expose part of WALKey's methods to CP users - a read-only interface
> or an interface with some modifiable settings.
> * The related CP hooks won't be deprecated.
> * Refactoring WALKey doesn't essentially impact the CP hooks after
> the 2.0 release.
> Although, it would be better to redo WALKey before the 2.0 release. In short, I
> think it warrants such an interface.
> (We both agree this would be a load of work given WALKey is written to
> HFiles. Warrants a look though.)
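The split proposed above (read-only view for coprocessors, mutable implementation kept internal) can be sketched in miniature. All names here are illustrative stand-ins, not the actual HBase types; the real WALKey carries many more fields:

```java
// Hypothetical sketch: coprocessors receive only the getter interface,
// while the mutable implementation stays package-private to the WAL code.
public class WalKeySketch {

    /** What a CP would see: getters only, no setters. */
    public interface ReadOnlyWALKey {
        byte[] getEncodedRegionName();
        long getSequenceId();
        long getWriteTime();
    }

    /** Internal mutable implementation; only the interface escapes to CPs. */
    static final class WALKeyImpl implements ReadOnlyWALKey {
        private final byte[] region;
        private final long writeTime;
        private long sequenceId;
        WALKeyImpl(byte[] region, long writeTime) {
            this.region = region;
            this.writeTime = writeTime;
        }
        void setSequenceId(long id) { this.sequenceId = id; } // internal only
        public byte[] getEncodedRegionName() { return region; }
        public long getSequenceId() { return sequenceId; }
        public long getWriteTime() { return writeTime; }
    }

    /** Builds a key internally, then hands out only the read-only view. */
    public static ReadOnlyWALKey sample() {
        WALKeyImpl k = new WALKeyImpl(new byte[] { 1 }, 42L);
        k.setSequenceId(7L);
        return k;
    }
}
```

This is also why the hooks need not be deprecated: the interface can survive a later rewrite of the implementation.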
[jira] [Updated] (HBASE-19134) Make WALKey an Interface; expose Read-Only version to CPs
[ https://issues.apache.org/jira/browse/HBASE-19134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HBASE-19134:
--------------------------
    Attachment: HBASE-19134.master.003.patch

> Make WALKey an Interface; expose Read-Only version to CPs
> ---------------------------------------------------------
[jira] [Commented] (HBASE-19451) Reduce default Block Cache size percentage
[ https://issues.apache.org/jira/browse/HBASE-19451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283154#comment-16283154 ]

Anoop Sam John commented on HBASE-19451:
----------------------------------------
The default heap % for the on-heap Block Cache can differ based on whether the BC is there or not. Yes, this is possible. We can say: if there is only the LRU cache, we continue with 40%, or else 20%. I did not get your meaning on the auto-sizing thing. For the initial decision the auto-sizing way is not needed. That is at run time, and if enabled, yes, it will auto-tune the LRU cache size within a range. That range's max will be this new max we arrived at. Sorry Stack, if you mean something else, please correct me.

> Reduce default Block Cache size percentage
> ------------------------------------------
>
>                 Key: HBASE-19451
>                 URL: https://issues.apache.org/jira/browse/HBASE-19451
>             Project: HBase
>          Issue Type: Sub-task
>            Reporter: Anoop Sam John
>            Assignee: Anoop Sam John
>             Fix For: 2.0.0-beta-1
>
> This is 40% by default now. Reduce this to 20%? Or even 10%?
> It only needs to keep index and bloom blocks.
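The sizing rule being discussed reduces to a one-line conditional. A sketch assuming the numbers from the comment (40% when the LRU cache is the only block cache, 20% otherwise, since an on-heap cache next to a bucket cache only needs room for index and bloom blocks); the method name is hypothetical, not the HBase config API:

```java
// Hypothetical sketch of the proposed default: keep the historic 40% heap
// fraction when the LRU cache serves data blocks too, drop to 20% when a
// separate (e.g. off-heap bucket) cache holds the data blocks.
public class BlockCacheSizing {
    public static float defaultOnHeapCacheFraction(boolean hasSeparateDataBlockCache) {
        return hasSeparateDataBlockCache ? 0.2f : 0.4f;
    }
}
```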
[jira] [Updated] (HBASE-15482) Provide an option to skip calculating block locations for SnapshotInputFormat
[ https://issues.apache.org/jira/browse/HBASE-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiang Li updated HBASE-15482:
-----------------------------
    Status: Open  (was: Patch Available)

> Provide an option to skip calculating block locations for SnapshotInputFormat
> -----------------------------------------------------------------------------
>
>                 Key: HBASE-15482
>                 URL: https://issues.apache.org/jira/browse/HBASE-15482
>             Project: HBase
>          Issue Type: Improvement
>          Components: mapreduce
>            Reporter: Liyin Tang
>            Assignee: Xiang Li
>            Priority: Minor
>             Fix For: 2.1.0
>
>         Attachments: HBASE-15482.master.000.patch, HBASE-15482.master.001.patch, HBASE-15482.master.002.patch
>
> When an MR job is reading from SnapshotInputFormat, it needs to calculate the
> splits based on the block locations in order to get the best locality. However,
> this process may take a long time for large snapshots.
> In some setups, the computing layer (Spark, Hive or Presto) could run outside
> of the HBase cluster. In these scenarios, the block locality doesn't matter.
> Therefore, it would be great to have an option to skip calculating the block
> locations for every job. That will be super useful for the Hive/Presto/Spark
> connectors.
[jira] [Updated] (HBASE-15482) Provide an option to skip calculating block locations for SnapshotInputFormat
[ https://issues.apache.org/jira/browse/HBASE-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiang Li updated HBASE-15482:
-----------------------------
    Status: Patch Available  (was: Open)

> Provide an option to skip calculating block locations for SnapshotInputFormat
> -----------------------------------------------------------------------------
[jira] [Updated] (HBASE-15482) Provide an option to skip calculating block locations for SnapshotInputFormat
[ https://issues.apache.org/jira/browse/HBASE-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiang Li updated HBASE-15482:
-----------------------------
    Attachment: HBASE-15482.master.002.patch

Upload patch 002 to address the checkstyle results.

> Provide an option to skip calculating block locations for SnapshotInputFormat
> -----------------------------------------------------------------------------
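The option under review boils down to short-circuiting location resolution during split planning. A minimal sketch under stated assumptions: the config key name and method are hypothetical illustrations, not the committed API, and the real code resolves HDFS block locations per region rather than taking them as a parameter:

```java
import java.util.Collections;
import java.util.List;

// Hypothetical sketch: when locality is disabled, return an empty host list
// instead of resolving block locations, which is the expensive step that
// slows split planning for large snapshots.
public class SnapshotSplitSketch {
    // Illustrative config key, not necessarily the one in the patch.
    public static final String LOCALITY_ENABLED_KEY =
        "hbase.TableSnapshotInputFormat.locality.enabled";

    public static List<String> hostsForSplit(boolean localityEnabled,
                                             List<String> resolvedHosts) {
        // Remote Spark/Hive/Presto readers gain nothing from locality,
        // so skip it entirely and let splits be scheduled anywhere.
        return localityEnabled ? resolvedHosts : Collections.emptyList();
    }
}
```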
[jira] [Commented] (HBASE-16890) Analyze the performance of AsyncWAL and fix the same
[ https://issues.apache.org/jira/browse/HBASE-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283130#comment-16283130 ]

stack commented on HBASE-16890:
-------------------------------
No. I didn't have that one. Thanks. Retrying.

> Analyze the performance of AsyncWAL and fix the same
> ----------------------------------------------------
>
>                 Key: HBASE-16890
>                 URL: https://issues.apache.org/jira/browse/HBASE-16890
>             Project: HBase
>          Issue Type: Sub-task
>          Components: wal
>    Affects Versions: 2.0.0
>            Reporter: ramkrishna.s.vasudevan
>            Assignee: ramkrishna.s.vasudevan
>            Priority: Blocker
>             Fix For: 2.0.0-beta-1
>
>         Attachments: AsyncWAL_disruptor.patch, AsyncWAL_disruptor_1 (2).patch, AsyncWAL_disruptor_3.patch, AsyncWAL_disruptor_3.patch, AsyncWAL_disruptor_4.patch, AsyncWAL_disruptor_6.patch, HBASE-16890-rc-v2.patch, HBASE-16890-rc-v3.patch, HBASE-16890-remove-contention-v1.patch, HBASE-16890-remove-contention.patch, Screen Shot 2016-10-25 at 7.34.47 PM.png, Screen Shot 2016-10-25 at 7.39.07 PM.png, Screen Shot 2016-10-25 at 7.39.48 PM.png, Screen Shot 2016-11-04 at 5.21.27 PM.png, Screen Shot 2016-11-04 at 5.30.18 PM.png, async.svg, classic.svg, contention.png, contention_defaultWAL.png, ycsb_FSHlog.vs.Async.png
>
> Tests reveal that AsyncWAL under load in a single-node cluster performs slower
> than the default WAL. This task is to analyze and see if we can fix it.
> See some discussion in the tail of JIRA HBASE-15536.
[jira] [Updated] (HBASE-19024) provide a configurable option to hsync WAL edits to the disk for better durability
[ https://issues.apache.org/jira/browse/HBASE-19024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HBASE-19024:
--------------------------
    Priority: Critical  (was: Major)

> provide a configurable option to hsync WAL edits to the disk for better durability
> ----------------------------------------------------------------------------------
>
>                 Key: HBASE-19024
>                 URL: https://issues.apache.org/jira/browse/HBASE-19024
>             Project: HBase
>          Issue Type: Improvement
>          Components: wal
>         Environment:
>            Reporter: Vikas Vishwakarma
>            Assignee: Harshal Jain
>            Priority: Critical
>
>         Attachments: HBASE-19024.branch-1.2.001.patch, HBASE-19024.branch-1.2.002.patch, HBASE-19024.branch-1.2.003.patch, HBASE-19024.branch-1.2.004.patch, HBASE-19024.branch-1.2.005.patch, branch-1.branch-1.patch, branch-1.v1.branch-1.patch, master.patch, master.v2.patch, master.v3.patch, master.v5.patch, master.v5.patch, master.v6.patch, master.v9.patch
>
> At present we do not have an option to hsync WAL edits to the disk for better
> durability. In our local tests we see a 10-15% latency impact from using hsync
> instead of hflush, which is not very high.
> We should have a configurable option to hsync WAL edits instead of just
> sync/hflush, which will call the corresponding API on the hadoop side.
> Currently HBase handles both SYNC_WAL and FSYNC_WAL the same way, calling
> FSDataOutputStream sync/hflush on the hadoop side. This can be modified to
> let FSYNC_WAL call hsync on the hadoop side instead of sync/hflush. We can
> keep the default behavior as sync, and hsync can be enabled through explicit
> configuration.
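The mapping described above can be sketched as a small decision function. This is an illustration of the routing only, with stand-in names (the real change calls `FSDataOutputStream.hsync()`/`hflush()` on the HDFS stream; the enum here mimics HBase's Durability values):

```java
// Hypothetical sketch: route FSYNC_WAL to hsync (flush to disk) when the
// option is enabled, while SYNC_WAL keeps hflush (flush to datanode memory),
// matching the current default behavior.
public class WalSyncSketch {
    public enum Durability { SYNC_WAL, FSYNC_WAL }

    /** Returns the name of the HDFS call the WAL would make. */
    public static String syncCallFor(Durability d, boolean hsyncEnabled) {
        if (d == Durability.FSYNC_WAL && hsyncEnabled) {
            return "hsync";  // data is forced to the disks
        }
        return "hflush";     // data reaches the datanodes, not necessarily disk
    }
}
```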
[jira] [Commented] (HBASE-13819) Make RPC layer CellBlock buffer a DirectByteBuffer
[ https://issues.apache.org/jira/browse/HBASE-13819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283110#comment-16283110 ]

Anoop Sam John commented on HBASE-13819:
----------------------------------------
You mean at the NIO level, right? Yes, better to say that in the book. The BBPool there by default pools on-heap BBs, not DBBs. One can enable it to use DBBs. And if an on-heap BB is passed to NIO, yes, the NIO-layer DBBs are created. We can describe the JVM flag clearly. Thanks for sharing.

> Make RPC layer CellBlock buffer a DirectByteBuffer
> --------------------------------------------------
>
>                 Key: HBASE-13819
>                 URL: https://issues.apache.org/jira/browse/HBASE-13819
>             Project: HBase
>          Issue Type: Sub-task
>          Components: Scanners
>            Reporter: Anoop Sam John
>            Assignee: Anoop Sam John
>             Fix For: 2.0.0, 1.3.0
>
>         Attachments: HBASE-13819.patch, HBASE-13819_branch-1.patch, HBASE-13819_branch-1.patch, HBASE-13819_branch-1.patch, q.png
>
> In the RPC layer, when we make a cellBlock to put in the RPC payload, we make an
> on-heap byte buffer (via BoundedByteBufferPool). The pool keeps up to a
> certain number of buffers. This jira aims at testing the possibility of making
> these buffers off-heap (DBBs). The advantages:
> 1. Unsafe-based writes to off-heap are faster than those to on-heap. Right now
> we are not using unsafe-based writes at all. Even if we add them, DBBs will be better.
> 2. When Cells are backed by off-heap memory (HBASE-11425), off-heap to off-heap
> writes will be better.
> 3. Checking the code in the SocketChannel impl: if we pass a HeapByteBuffer
> to the socket channel, it will create a temp DBB, copy the data there, and
> only DBBs are moved to sockets. If we make it a DBB first-hand, we can
> avoid this extra level of copying.
> Will do different perf testing with the change and report back.
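Point 3 above can be demonstrated with plain NIO. The JDK's staging copy happens inside the channel implementation, so this sketch can only show the two call shapes: writing a heap-backed buffer (which the JDK internally copies into a temporary direct buffer before the syscall) versus writing a direct buffer (which goes to the syscall as-is). The `Pipe` here is just a convenient loopback channel for illustration:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;

// Sketch: both heap and direct buffers can be written to an NIO channel,
// but a heap buffer incurs an extra JDK-internal copy to a temp direct
// buffer first; passing a direct buffer avoids that copy.
public class DirectBufferSketch {
    /** Writes the buffer through an in-process pipe, returns bytes written. */
    public static int writeThroughPipe(ByteBuffer src) throws IOException {
        Pipe pipe = Pipe.open();
        try {
            // For a heap-backed src, the JDK stages via a temporary direct
            // buffer here; for a direct src it does not.
            return pipe.sink().write(src);
        } finally {
            pipe.sink().close();
            pipe.source().close();
        }
    }
}
```

Pooling direct buffers (as the jira proposes) matters because allocating a DBB per write is expensive; the pool amortizes that cost.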
[jira] [Commented] (HBASE-16890) Analyze the performance of AsyncWAL and fix the same
[ https://issues.apache.org/jira/browse/HBASE-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283109#comment-16283109 ]

Duo Zhang commented on HBASE-16890:
-----------------------------------
Does your build have this commit in place?
{quote}
commit ebd8841e0ee9ca1ab7b6dab55178761360b8d85a
Author: Chia-Ping Tsai
Date:   Wed Dec 6 21:54:45 2017 +0800

    HBASE-18112 (addendum) fix the out-of-bounds index
{quote}
There was a bug in NettyRpcFrameDecoder when counting the bytes of an rpc frame. Thanks.

> Analyze the performance of AsyncWAL and fix the same
> ----------------------------------------------------
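The class of bug referenced here (miscounting frame bytes in a length-prefixed decoder) can be shown with a minimal stand-in. The exception reported later in this thread, `slice(4, 270)` on a buffer with ridx 4, widx 272, shows a slice of 270 bytes attempted when only 268 were readable past the prefix. This sketch is a simplified illustration in plain `ByteBuffer`, not the Netty decoder itself: the guard must compare the declared length against the bytes available after the 4-byte prefix before slicing.

```java
import java.nio.ByteBuffer;

// Hypothetical minimal length-prefixed frame decoder. The key line is the
// availability check: remaining() - 4 (prefix) must cover the declared
// frame length, otherwise the slice would run past the buffer.
public class FrameDecoderSketch {
    /** Returns the payload if a whole frame is buffered, else null. */
    public static ByteBuffer decode(ByteBuffer in) {
        if (in.remaining() < 4) return null;              // prefix incomplete
        int frameLength = in.getInt(in.position());       // peek, don't consume
        if (in.remaining() - 4 < frameLength) return null; // frame incomplete
        ByteBuffer payload = in.duplicate();
        payload.position(in.position() + 4);
        payload.limit(payload.position() + frameLength);
        in.position(payload.limit());                     // consume prefix + payload
        return payload.slice();
    }
}
```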
[jira] [Commented] (HBASE-16890) Analyze the performance of AsyncWAL and fix the same
[ https://issues.apache.org/jira/browse/HBASE-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283106#comment-16283106 ]

stack commented on HBASE-16890:
-------------------------------
Trying your suggestion [~Apache9], but it seems like ycsb hangs. Trying to figure out why. See lots of this in the hosting RS:
{code}
738405 2017-12-07 17:34:18,635 DEBUG [RS-EventLoopGroup-1-13] ipc.NettyRpcServer: Connection from /10.17.240.20:42263 catch unexpected exception from downstream.
738406 java.lang.IndexOutOfBoundsException: PooledUnsafeDirectByteBuf(ridx: 4, widx: 272, cap: 272).slice(4, 270)
738407     at org.apache.hadoop.hbase.shaded.io.netty.buffer.AbstractUnpooledSlicedByteBuf.checkSliceOutOfBounds(AbstractUnpooledSlicedByteBuf.java:492)
738408     at org.apache.hadoop.hbase.shaded.io.netty.buffer.PooledSlicedByteBuf.newInstance(PooledSlicedByteBuf.java:44)
738409     at org.apache.hadoop.hbase.shaded.io.netty.buffer.PooledByteBuf.retainedSlice(PooledByteBuf.java:150)
738410     at org.apache.hadoop.hbase.ipc.NettyRpcFrameDecoder.decode(NettyRpcFrameDecoder.java:106)
738411     at org.apache.hadoop.hbase.shaded.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
738412     at org.apache.hadoop.hbase.shaded.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428)
738413     at org.apache.hadoop.hbase.shaded.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
738414     at org.apache.hadoop.hbase.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
738415     at org.apache.hadoop.hbase.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
738416     at org.apache.hadoop.hbase.shaded.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
738417     at org.apache.hadoop.hbase.shaded.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
738418     at org.apache.hadoop.hbase.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
738419     at org.apache.hadoop.hbase.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
738420     at org.apache.hadoop.hbase.shaded.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
738421     at org.apache.hadoop.hbase.shaded.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:979)
738422     at org.apache.hadoop.hbase.shaded.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:401)
738423     at org.apache.hadoop.hbase.shaded.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:306)
738424     at org.apache.hadoop.hbase.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
738425     at org.apache.hadoop.hbase.shaded.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
738426     at java.lang.Thread.run(Thread.java:745)
{code}
Going to just retry since I'm heading to bed...
> Analyze the performance of AsyncWAL and fix the same
> ----------------------------------------------------
[jira] [Commented] (HBASE-19449) Implement SLF4J and SLF4J Parameter Substitution
[ https://issues.apache.org/jira/browse/HBASE-19449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283107#comment-16283107 ]

Hadoop QA commented on HBASE-19449:
-----------------------------------
(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 11s | Docker mode activated. |
|| || || || Prechecks ||
|  0 | findbugs | 0m 0s | Findbugs executables are not available. |
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || master Compile Tests ||
| +1 | mvninstall | 5m 18s | master passed |
| +1 | compile | 0m 46s | master passed |
| +1 | checkstyle | 1m 3s | master passed |
| +1 | shadedjars | 5m 52s | branch has no errors when building our shaded downstream artifacts. |
| +1 | javadoc | 0m 35s | master passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 5m 28s | the patch passed |
| +1 | compile | 0m 48s | the patch passed |
| +1 | javac | 0m 48s | the patch passed |
| +1 | checkstyle | 1m 6s | hbase-server: The patch generated 0 new + 14 unchanged - 5 fixed = 14 total (was 19) |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedjars | 4m 33s | patch has no errors when building our shaded downstream artifacts. |
| +1 | hadoopcheck | 53m 52s | Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. |
| +1 | javadoc | 0m 29s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 101m 36s | hbase-server in the patch passed. |
| +1 | asflicense | 0m 18s | The patch does not generate ASF License warnings. |
|    |  | 176m 33s |  |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19449 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12901190/HBASE-19449.3.patch |
| Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux cafa6c64a7b6 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 15:49:21 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh |
| git revision | master / 033e64a8b1 |
| maven | version: Apache Maven 3.5.2 (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/10297/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/10297/console |
| Powered by | Apache Yetus 0.6.0 http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HBASE-19450) Add log about average execution time for ScheduledChore
[ https://issues.apache.org/jira/browse/HBASE-19450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283103#comment-16283103 ]

Hadoop QA commented on HBASE-19450:
-----------------------------------
(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 13s | Docker mode activated. |
|| || || || Prechecks ||
|  0 | findbugs | 0m 0s | Findbugs executables are not available. |
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || master Compile Tests ||
| +1 | mvninstall | 5m 19s | master passed |
| +1 | compile | 0m 18s | master passed |
| +1 | checkstyle | 0m 24s | master passed |
| +1 | shadedjars | 4m 53s | branch has no errors when building our shaded downstream artifacts. |
| +1 | javadoc | 0m 17s | master passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 4m 34s | the patch passed |
| +1 | compile | 0m 17s | the patch passed |
| +1 | javac | 0m 17s | the patch passed |
| +1 | checkstyle | 0m 23s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedjars | 4m 24s | patch has no errors when building our shaded downstream artifacts. |
| +1 | hadoopcheck | 52m 16s | Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. |
| +1 | javadoc | 0m 16s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 2m 11s | hbase-common in the patch passed. |
| +1 | asflicense | 0m 9s | The patch does not generate ASF License warnings. |
|    |  | 71m 19s |  |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19450 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12901197/HBASE-19450.master.002.patch |
| Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux e5a0963887f1 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh |
| git revision | master / 5034411438 |
| maven | version: Apache Maven 3.5.2 (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/10300/testReport/ |
| modules | C: hbase-common U: hbase-common |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/10300/console |
| Powered by | Apache Yetus 0.6.0 http://yetus.apache.org |

This message was automatically generated.

> Add log about average execution time for
[jira] [Commented] (HBASE-15482) Provide an option to skip calculating block locations for SnapshotInputFormat
[ https://issues.apache.org/jira/browse/HBASE-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283093#comment-16283093 ] Hadoop QA commented on HBASE-15482: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 8s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 42s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 47s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 17s{color} | {color:red} hbase-mapreduce: The patch generated 3 new + 17 unchanged - 0 fixed = 20 total (was 17) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 30s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 59m 35s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 54s{color} | {color:green} hbase-mapreduce in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 87m 54s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 | | JIRA Issue | HBASE-15482 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12901194/HBASE-15482.master.001.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 7a1faeb05d01 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 15:49:21 UTC 2017 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 5034411438 | | maven | version: Apache Maven 3.5.2 (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) | | Default Java | 1.8.0_151 | | checkstyle | https://builds.apache.org/job/PreCommit-HBASE-Build/10299/artifact/patchprocess/diff-checkstyle-hbase-mapreduce.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/10299/testReport/ | | modules | C: hbase-mapreduce U: hbase-mapreduce | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/10299/console | | Powered by | Apache Yetus 0.6.0 http://yetus.apache.org | This message
[jira] [Commented] (HBASE-15482) Provide an option to skip calculating block locations for SnapshotInputFormat
[ https://issues.apache.org/jira/browse/HBASE-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283087#comment-16283087 ] Ted Yu commented on HBASE-15482: You can choose the form which is intuitive to you. > Provide an option to skip calculating block locations for SnapshotInputFormat > - > > Key: HBASE-15482 > URL: https://issues.apache.org/jira/browse/HBASE-15482 > Project: HBase > Issue Type: Improvement > Components: mapreduce >Reporter: Liyin Tang >Assignee: Xiang Li >Priority: Minor > Fix For: 2.1.0 > > Attachments: HBASE-15482.master.000.patch, > HBASE-15482.master.001.patch > > > When a MR job is reading from SnapshotInputFormat, it needs to calculate the > splits based on the block locations in order to get best locality. However, > this process may take a long time for large snapshots. > In some setup, the computing layer, Spark, Hive or Presto could run out side > of HBase cluster. In these scenarios, the block locality doesn't matter. > Therefore, it will be great to have an option to skip calculating the block > locations for every job. That will super useful for the Hive/Presto/Spark > connectors. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-15482) Provide an option to skip calculating block locations for SnapshotInputFormat
[ https://issues.apache.org/jira/browse/HBASE-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283085#comment-16283085 ] Xiang Li commented on HBASE-15482: -- Hi [~tedyu], thanks very much for the review and comment. I got your idea. I was wondering whether the current statements are easier to understand than the form you suggested. The current statements explicitly show that we have 3 conditions to handle: # When hostAndWeights.length == 0 # When hostAndWeights.length == 1 || numTopsAtMost <= 1 # Others (hostAndWeights.length >= 2 && numTopsAtMost >= 2) Please let me know what you think. Thanks! > Provide an option to skip calculating block locations for SnapshotInputFormat > - > > Key: HBASE-15482 > URL: https://issues.apache.org/jira/browse/HBASE-15482 > Project: HBase > Issue Type: Improvement > Components: mapreduce >Reporter: Liyin Tang >Assignee: Xiang Li >Priority: Minor > Fix For: 2.1.0 > > Attachments: HBASE-15482.master.000.patch, > HBASE-15482.master.001.patch > > > When a MR job is reading from SnapshotInputFormat, it needs to calculate the > splits based on the block locations in order to get best locality. However, > this process may take a long time for large snapshots. > In some setup, the computing layer, Spark, Hive or Presto could run out side > of HBase cluster. In these scenarios, the block locality doesn't matter. > Therefore, it will be great to have an option to skip calculating the block > locations for every job. That will super useful for the Hive/Presto/Spark > connectors. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
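The three conditions above can be sketched as a small standalone method. All names here are illustrative, not the actual patch: {{HostAndWeight}} stands in for the real HDFS block-distribution type.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class TopHostsSketch {
  // Hypothetical stand-in for the (host, weight) pairs in the real code.
  static final class HostAndWeight {
    final String host;
    final long weight;
    HostAndWeight(String host, long weight) { this.host = host; this.weight = weight; }
  }

  // Returns at most numTopsAtMost hosts, heaviest first, following the
  // three conditions called out in the comment.
  static List<String> getTopHosts(HostAndWeight[] hostAndWeights, int numTopsAtMost) {
    // Condition 1: no locations at all.
    if (hostAndWeights.length == 0) {
      return new ArrayList<>();
    }
    // Condition 2: a single candidate, or the caller wants at most one.
    if (hostAndWeights.length == 1 || numTopsAtMost <= 1) {
      HostAndWeight best = hostAndWeights[0];
      for (HostAndWeight h : hostAndWeights) {
        if (h.weight > best.weight) best = h;
      }
      List<String> one = new ArrayList<>(1);
      one.add(best.host);
      return one;
    }
    // Condition 3: hostAndWeights.length >= 2 && numTopsAtMost >= 2.
    HostAndWeight[] sorted = hostAndWeights.clone();
    Arrays.sort(sorted, Comparator.comparingLong((HostAndWeight h) -> h.weight).reversed());
    int n = Math.min(numTopsAtMost, sorted.length);
    List<String> tops = new ArrayList<>(n);
    for (int i = 0; i < n; i++) {
      tops.add(sorted[i].host);
    }
    return tops;
  }
}
```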
[jira] [Comment Edited] (HBASE-19450) Add log about average execution time for ScheduledChore
[ https://issues.apache.org/jira/browse/HBASE-19450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283074#comment-16283074 ] Reid Chan edited comment on HBASE-19450 at 12/8/17 5:21 AM: This is the basic one, and (HBASE-19459, HBASE-19460) these thoughts are currently in my head, FYI [~tedyu], [~mdrob], [~carp84]. was (Author: reidchan): This is the basic one, and (HBASE-19459, HBASE-19460) these thoughts are currently in my head, FYI [~tedyu] [~mdrob] [~carp84]. > Add log about average execution time for ScheduledChore > --- > > Key: HBASE-19450 > URL: https://issues.apache.org/jira/browse/HBASE-19450 > Project: HBase > Issue Type: Improvement >Reporter: Reid Chan >Assignee: Reid Chan >Priority: Minor > Attachments: HBASE-19450.master.001.patch, > HBASE-19450.master.002.patch > > > So far, there is no information about the exact execution time for a chore, > we can provide log information about it. It also brings other benefits, like > discovering inefficient chores which show rooms for improvement. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19450) Add log about average execution time for ScheduledChore
[ https://issues.apache.org/jira/browse/HBASE-19450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283074#comment-16283074 ] Reid Chan commented on HBASE-19450: --- This is the basic one, and (HBASE-18459, HBASE-18460) these thoughts are currently in my head, FYI [~tedyu] [~mdrob] [~carp84]. > Add log about average execution time for ScheduledChore > --- > > Key: HBASE-19450 > URL: https://issues.apache.org/jira/browse/HBASE-19450 > Project: HBase > Issue Type: Improvement >Reporter: Reid Chan >Assignee: Reid Chan >Priority: Minor > Attachments: HBASE-19450.master.001.patch, > HBASE-19450.master.002.patch > > > So far, there is no information about the exact execution time for a chore, > we can provide log information about it. It also brings other benefits, like > discovering inefficient chores which show rooms for improvement. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Comment Edited] (HBASE-19450) Add log about average execution time for ScheduledChore
[ https://issues.apache.org/jira/browse/HBASE-19450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283074#comment-16283074 ] Reid Chan edited comment on HBASE-19450 at 12/8/17 5:20 AM: This is the basic one, and (HBASE-19459, HBASE-19460) these thoughts are currently in my head, FYI [~tedyu] [~mdrob] [~carp84]. was (Author: reidchan): This is the basic one, and (HBASE-18459, HBASE-18460) these thoughts are currently in my head, FYI [~tedyu] [~mdrob] [~carp84]. > Add log about average execution time for ScheduledChore > --- > > Key: HBASE-19450 > URL: https://issues.apache.org/jira/browse/HBASE-19450 > Project: HBase > Issue Type: Improvement >Reporter: Reid Chan >Assignee: Reid Chan >Priority: Minor > Attachments: HBASE-19450.master.001.patch, > HBASE-19450.master.002.patch > > > So far, there is no information about the exact execution time for a chore, > we can provide log information about it. It also brings other benefits, like > discovering inefficient chores which show rooms for improvement. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19460) A self-learning ChoreService to schedule chores
[ https://issues.apache.org/jira/browse/HBASE-19460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Reid Chan updated HBASE-19460: -- Description: After HBASE-19450, we can obtain every chore's average execution time, with which the ChoreService may learn the trend and re-schedule chores. > A self-learning ChoreService to schedule chores > --- > > Key: HBASE-19460 > URL: https://issues.apache.org/jira/browse/HBASE-19460 > Project: HBase > Issue Type: Improvement >Reporter: Reid Chan > > After HBASE-19450, we can obtain every chore's average execution time, with > which the ChoreService may learn the trend and re-schedule chores. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (HBASE-19460) A self-learning ChoreService to schedule chores
Reid Chan created HBASE-19460: - Summary: A self-learning ChoreService to schedule chores Key: HBASE-19460 URL: https://issues.apache.org/jira/browse/HBASE-19460 Project: HBase Issue Type: Improvement Reporter: Reid Chan -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (HBASE-19459) Support more moving average algorithms to calculate the execution time of a chore
Reid Chan created HBASE-19459: - Summary: Support more moving average algorithms to calculate the execution time of a chore Key: HBASE-19459 URL: https://issues.apache.org/jira/browse/HBASE-19459 Project: HBase Issue Type: Improvement Reporter: Reid Chan Priority: Minor -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19450) Add log about average execution time for ScheduledChore
[ https://issues.apache.org/jira/browse/HBASE-19450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283061#comment-16283061 ] Reid Chan commented on HBASE-19450: --- bq. I think the average of the interval (like last 5 times or last 5 minutes) will be more helpful to indicate potential issues Ya, you got me. That's what I'm planning in the coming JIRAs; I just want to start with a small one to avoid a sudden whack. > Add log about average execution time for ScheduledChore > --- > > Key: HBASE-19450 > URL: https://issues.apache.org/jira/browse/HBASE-19450 > Project: HBase > Issue Type: Improvement >Reporter: Reid Chan >Assignee: Reid Chan >Priority: Minor > Attachments: HBASE-19450.master.001.patch, > HBASE-19450.master.002.patch > > > So far, there is no information about the exact execution time for a chore, > we can provide log information about it. It also brings other benefits, like > discovering inefficient chores which show rooms for improvement. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
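The "average of the last N runs" idea quoted above can be sketched as a fixed-window moving average. This is a minimal illustration with hypothetical names, not the ScheduledChore API or the HBASE-19459 implementation.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Fixed-window ("last N runs") moving average of chore execution times.
public class WindowMovingAverage {
  private final int windowSize;
  private final Deque<Long> samples = new ArrayDeque<>();
  private long sum;

  public WindowMovingAverage(int windowSize) {
    this.windowSize = windowSize;
  }

  // Record one execution time (ms), evicting the oldest sample once the
  // window is full so the average always reflects the most recent runs.
  public void update(long execTimeMs) {
    samples.addLast(execTimeMs);
    sum += execTimeMs;
    if (samples.size() > windowSize) {
      sum -= samples.removeFirst();
    }
  }

  public double getAverage() {
    return samples.isEmpty() ? 0.0 : (double) sum / samples.size();
  }
}
```

A time-based window ("last 5 minutes") would instead store (timestamp, value) pairs and evict by age on each update.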
[jira] [Commented] (HBASE-19457) Debugging flaky TestTruncateTableProcedure
[ https://issues.apache.org/jira/browse/HBASE-19457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283058#comment-16283058 ] Appy commented on HBASE-19457: -- Findings from today: - Checked ~10 runs, these two are failing in all. org.apache.hadoop.hbase.master.procedure.TestTruncateTableProcedure.testRecoveryAndDoubleExecutionPreserveSplits org.apache.hadoop.hbase.master.procedure.TestTruncateTableProcedure.testSimpleTruncatePreserveSplits - Tried debugging testSimpleTruncatePreserveSplits (since it's simpler test), dug around, added a bunch of logging messages (will commit shortly), in the end, it's not even the culprit. It finished perfectly, but then times out cleaning up tables in @After TestTableDDLProcedureBase#teadDown. - More importantly, table testRecoveryAndDoubleExecutionPreserveSplits is the issue. {noformat} 2017-12-06 22:42:44,216 DEBUG [ProcExecWrkr-1] procedure.TruncateTableProcedure(134): truncate 'testSimpleTruncatePreserveSplits' completed 2017-12-06 22:42:44,260 INFO [ProcExecWrkr-1] procedure2.ProcedureExecutor(1246): Finished pid=99, state=SUCCESS; TruncateTableProcedure (table=testSimpleTruncatePreserveSplits preserveSplits=true) in 2.0060sec 2017-12-06 22:42:44,508 INFO [Time-limited test] hbase.Waiter(189): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2017-12-06 22:42:44,515 INFO [Time-limited test] hbase.Waiter(189): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2017-12-06 22:42:44,520 DEBUG [Time-limited test] util.CommonFSUtils(736): Current file system: 2017-12-06 22:42:44,521 DEBUG [Time-limited test] util.CommonFSUtils(754): |-.tabledesc/ 2017-12-06 22:42:44,522 DEBUG [Time-limited test] util.CommonFSUtils(757): |.tableinfo.01 2017-12-06 22:42:44,522 DEBUG [Time-limited test] util.CommonFSUtils(754): |-.tmp/ 2017-12-06 22:42:44,523 DEBUG [Time-limited test] util.CommonFSUtils(754): |-1c6cd8294a789f4998b71741869e1aa7/ 2017-12-06 22:42:44,524 DEBUG [Time-limited test] util.CommonFSUtils(757): 
|.regioninfo 2017-12-06 22:42:44,524 DEBUG [Time-limited test] util.CommonFSUtils(754): |f1/ 2017-12-06 22:42:44,524 DEBUG [Time-limited test] util.CommonFSUtils(754): |f2/ 2017-12-06 22:42:44,525 DEBUG [Time-limited test] util.CommonFSUtils(754): |recovered.edits/ 2017-12-06 22:42:44,526 DEBUG [Time-limited test] util.CommonFSUtils(757): |---2.seqid 2017-12-06 22:42:44,526 DEBUG [Time-limited test] util.CommonFSUtils(754): |-5bd5770f4cb5fda71c84441d6be2c0e7/ 2017-12-06 22:42:44,527 DEBUG [Time-limited test] util.CommonFSUtils(757): |.regioninfo 2017-12-06 22:42:44,527 DEBUG [Time-limited test] util.CommonFSUtils(754): |f1/ 2017-12-06 22:42:44,527 DEBUG [Time-limited test] util.CommonFSUtils(754): |f2/ 2017-12-06 22:42:44,528 DEBUG [Time-limited test] util.CommonFSUtils(754): |recovered.edits/ 2017-12-06 22:42:44,529 DEBUG [Time-limited test] util.CommonFSUtils(757): |---2.seqid 2017-12-06 22:42:44,529 DEBUG [Time-limited test] util.CommonFSUtils(754): |-c24958fa636e29e21028e350d32623fb/ 2017-12-06 22:42:44,530 DEBUG [Time-limited test] util.CommonFSUtils(757): |.regioninfo 2017-12-06 22:42:44,530 DEBUG [Time-limited test] util.CommonFSUtils(754): |f1/ 2017-12-06 22:42:44,530 DEBUG [Time-limited test] util.CommonFSUtils(754): |f2/ 2017-12-06 22:42:44,531 DEBUG [Time-limited test] util.CommonFSUtils(754): |recovered.edits/ 2017-12-06 22:42:44,532 DEBUG [Time-limited test] util.CommonFSUtils(757): |---2.seqid 2017-12-06 22:42:44,532 DEBUG [Time-limited test] util.CommonFSUtils(754): |-e9f2e3d66bf51aa5d73745df63badf9a/ 2017-12-06 22:42:44,533 DEBUG [Time-limited test] util.CommonFSUtils(757): |.regioninfo 2017-12-06 22:42:44,533 DEBUG [Time-limited test] util.CommonFSUtils(754): |f1/ 2017-12-06 22:42:44,533 DEBUG [Time-limited test] util.CommonFSUtils(754): |f2/ 2017-12-06 22:42:44,534 DEBUG [Time-limited test] util.CommonFSUtils(754): |recovered.edits/ 2017-12-06 22:42:44,535 DEBUG [Time-limited test] util.CommonFSUtils(757): |---2.seqid 2017-12-06 22:42:44,556 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=43202] ipc.CallRunner(141): callId: 233 service: ClientService methodName: Scan size: 138 connection: 67.195.81.155:48320 deadline: 1512600224556 .some scanner errors for NotServingRegionException, but then retry passes 2017-12-06 22:42:44,998 DEBUG [Time-limited test] client.ClientScanner(241): Advancing internal scanner to startKey at 'a', inclusive 2017-12-06 22:42:45,000 DEBUG [Time-limited test] client.ClientScanner(241): Advancing internal scanner to startKey at 'b', inclusive 2017-12-06 22:42:45,002 DEBUG [Time-limited test] client.ClientScanner(241): Advancing internal scanner to startKey at 'c', inclusive 2017-12-06 22:42:45,005 WARN [Time-limited test] procedure2.ProcedureTestingUtility(146): Set Kill before store update to: false
[jira] [Updated] (HBASE-19450) Add log about average execution time for ScheduledChore
[ https://issues.apache.org/jira/browse/HBASE-19450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Reid Chan updated HBASE-19450: -- Status: Patch Available (was: Open) > Add log about average execution time for ScheduledChore > --- > > Key: HBASE-19450 > URL: https://issues.apache.org/jira/browse/HBASE-19450 > Project: HBase > Issue Type: Improvement >Reporter: Reid Chan >Assignee: Reid Chan >Priority: Minor > Attachments: HBASE-19450.master.001.patch, > HBASE-19450.master.002.patch > > > So far, there is no information about the exact execution time for a chore, > we can provide log information about it. It also brings other benefits, like > discovering inefficient chores which show rooms for improvement. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19450) Add log about average execution time for ScheduledChore
[ https://issues.apache.org/jira/browse/HBASE-19450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Reid Chan updated HBASE-19450: -- Attachment: (was: HBASE-19450.master.002.patch) > Add log about average execution time for ScheduledChore > --- > > Key: HBASE-19450 > URL: https://issues.apache.org/jira/browse/HBASE-19450 > Project: HBase > Issue Type: Improvement >Reporter: Reid Chan >Assignee: Reid Chan >Priority: Minor > Attachments: HBASE-19450.master.001.patch, > HBASE-19450.master.002.patch > > > So far, there is no information about the exact execution time for a chore, > we can provide log information about it. It also brings other benefits, like > discovering inefficient chores which show rooms for improvement. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19450) Add log about average execution time for ScheduledChore
[ https://issues.apache.org/jira/browse/HBASE-19450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Reid Chan updated HBASE-19450: -- Status: Open (was: Patch Available) > Add log about average execution time for ScheduledChore > --- > > Key: HBASE-19450 > URL: https://issues.apache.org/jira/browse/HBASE-19450 > Project: HBase > Issue Type: Improvement >Reporter: Reid Chan >Assignee: Reid Chan >Priority: Minor > Attachments: HBASE-19450.master.001.patch, > HBASE-19450.master.002.patch > > > So far, there is no information about the exact execution time for a chore, > we can provide log information about it. It also brings other benefits, like > discovering inefficient chores which show rooms for improvement. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19450) Add log about average execution time for ScheduledChore
[ https://issues.apache.org/jira/browse/HBASE-19450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Reid Chan updated HBASE-19450: -- Attachment: HBASE-19450.master.002.patch > Add log about average execution time for ScheduledChore > --- > > Key: HBASE-19450 > URL: https://issues.apache.org/jira/browse/HBASE-19450 > Project: HBase > Issue Type: Improvement >Reporter: Reid Chan >Assignee: Reid Chan >Priority: Minor > Attachments: HBASE-19450.master.001.patch, > HBASE-19450.master.002.patch > > > So far, there is no information about the exact execution time for a chore, > we can provide log information about it. It also brings other benefits, like > discovering inefficient chores which show rooms for improvement. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19458) Allow building HBase 1.3.x against Hadoop 2.8.2
[ https://issues.apache.org/jira/browse/HBASE-19458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-19458: -- Description: Currently the 1.3 branch cannot be built against Hadoop 2.8.2. (that latest stable 2.x release). > Allow building HBase 1.3.x against Hadoop 2.8.2 > --- > > Key: HBASE-19458 > URL: https://issues.apache.org/jira/browse/HBASE-19458 > Project: HBase > Issue Type: Bug >Affects Versions: 1.3.1 >Reporter: Lars Hofhansl >Assignee: Lars Hofhansl > Fix For: 1.3.2 > > Attachments: 19458.txt > > > Currently the 1.3 branch cannot be built against Hadoop 2.8.2. (that latest > stable 2.x release). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HBASE-19458) Allow building HBase 1.3.x against Hadoop 2.8.2
[ https://issues.apache.org/jira/browse/HBASE-19458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl reassigned HBASE-19458: - Assignee: Lars Hofhansl > Allow building HBase 1.3.x against Hadoop 2.8.2 > --- > > Key: HBASE-19458 > URL: https://issues.apache.org/jira/browse/HBASE-19458 > Project: HBase > Issue Type: Bug >Affects Versions: 1.3.1 >Reporter: Lars Hofhansl >Assignee: Lars Hofhansl > Fix For: 1.3.2 > > Attachments: 19458.txt > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19458) Allow building HBase 1.3.x against Hadoop 2.8.2
[ https://issues.apache.org/jira/browse/HBASE-19458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-19458: -- Fix Version/s: 1.3.2 > Allow building HBase 1.3.x against Hadoop 2.8.2 > --- > > Key: HBASE-19458 > URL: https://issues.apache.org/jira/browse/HBASE-19458 > Project: HBase > Issue Type: Bug >Affects Versions: 1.3.1 >Reporter: Lars Hofhansl > Fix For: 1.3.2 > > Attachments: 19458.txt > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19458) Allow building HBase 1.3.x against Hadoop 2.8.2
[ https://issues.apache.org/jira/browse/HBASE-19458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-19458: -- Attachment: 19458.txt Simple patch for anyone who cares. > Allow building HBase 1.3.x against Hadoop 2.8.2 > --- > > Key: HBASE-19458 > URL: https://issues.apache.org/jira/browse/HBASE-19458 > Project: HBase > Issue Type: Bug >Reporter: Lars Hofhansl > Attachments: 19458.txt > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (HBASE-19458) Allow building HBase 1.3.x against Hadoop 2.8.2
Lars Hofhansl created HBASE-19458: - Summary: Allow building HBase 1.3.x against Hadoop 2.8.2 Key: HBASE-19458 URL: https://issues.apache.org/jira/browse/HBASE-19458 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19458) Allow building HBase 1.3.x against Hadoop 2.8.2
[ https://issues.apache.org/jira/browse/HBASE-19458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-19458: -- Affects Version/s: 1.3.1 > Allow building HBase 1.3.x against Hadoop 2.8.2 > --- > > Key: HBASE-19458 > URL: https://issues.apache.org/jira/browse/HBASE-19458 > Project: HBase > Issue Type: Bug >Affects Versions: 1.3.1 >Reporter: Lars Hofhansl > Fix For: 1.3.2 > > Attachments: 19458.txt > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19435) Reopen Files for ClosedChannelException in BucketCache
[ https://issues.apache.org/jira/browse/HBASE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283047#comment-16283047 ] ramkrishna.s.vasudevan commented on HBASE-19435: Whenever a compaction happens, we try to remove the compacted hfile from the cache, so all the blocks related to that hfile are removed. But since this is the bucket cache, what we have is buckets holding these blocks, and those are always in the fixed set of files configured for file-mode bucket cache. And when we evict a block for the compacted file, we do not go and close the file associated with it inside the bucket cache, because the bucket cache has no knowledge of which file's blocks are evicted or cached. Maybe there are too many file channels open due to heavy compaction, and that internally closes the socket channel open on this bucket cache's files? > Reopen Files for ClosedChannelException in BucketCache > -- > > Key: HBASE-19435 > URL: https://issues.apache.org/jira/browse/HBASE-19435 > Project: HBase > Issue Type: Bug > Components: BucketCache >Affects Versions: 2.0.0, 1.3.1 >Reporter: Zach York >Assignee: Zach York > Fix For: 1.4.1, 2.0.0-beta-1 > > Attachments: HBASE-19435.branch-1.001.patch, > HBASE-19435.master.001.patch, HBASE-19435.master.002.patch, > HBASE-19435.master.003.patch, HBASE-19435.master.004.patch, > HBASE-19435.master.005.patch, HBASE-19435.master.006.patch, > HBASE-19435.master.007.patch, HBASE-19435.master.007.patch > > > When using the FileIOEngine for BucketCache, the cache will be disabled if > the connection is interrupted or closed. HBase will then get > ClosedChannelExceptions trying to access the file. After 60s, the RS will > disable the cache. This causes severe read performance degradation for > workloads that rely on this cache. FileIOEngine never tries to reopen the > connection. This JIRA is to reopen files when the BucketCache encounters a > ClosedChannelException. 
-- This message was sent by Atlassian JIRA (v6.4.14#64029)
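The reopen-on-ClosedChannelException idea described in HBASE-19435 can be sketched as follows. This is a simplified, hypothetical reader, not the actual FileIOEngine patch; it only illustrates catching the exception, reopening the backing file, and retrying the positional read once.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedChannelException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch: reopen the file channel and retry instead of letting a
// ClosedChannelException propagate and disable the cache.
public class ReopeningFileReader {
  private final Path path;
  private volatile FileChannel channel;

  public ReopeningFileReader(Path path) throws IOException {
    this.path = path;
    this.channel = FileChannel.open(path, StandardOpenOption.READ);
  }

  public int read(ByteBuffer dst, long offset) throws IOException {
    try {
      return channel.read(dst, offset);
    } catch (ClosedChannelException e) {
      // Channel was closed (e.g. by a thread interrupt); reopen and retry once.
      channel = FileChannel.open(path, StandardOpenOption.READ);
      return channel.read(dst, offset);
    }
  }

  // Small self-check helper; wraps checked IOException for convenience.
  static int demoRead() {
    try {
      Path p = Files.createTempFile("reopen", ".bin");
      Files.write(p, new byte[] {1, 2, 3, 4});
      ReopeningFileReader r = new ReopeningFileReader(p);
      ByteBuffer buf = ByteBuffer.allocate(2);
      return r.read(buf, 1);  // reads the bytes at offsets 1 and 2
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }
}
```

Note that a real fix also has to bound retries and stay thread-safe across concurrent readers, which this sketch does not attempt.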
[jira] [Comment Edited] (HBASE-15482) Provide an option to skip calculating block locations for SnapshotInputFormat
[ https://issues.apache.org/jira/browse/HBASE-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283043#comment-16283043 ] Ted Yu edited comment on HBASE-15482 at 12/8/17 4:44 AM: - {code} + List locations = new ArrayList<>(Math.min(numTopsAtMost, hostAndWeights.length)); ... + for (int i = 1; i < hostAndWeights.length; i++) { {code} Shouldn't the value of Math.min() be used as the upper bound of the loop above ? {code} + return locations; +} else { // hostAndWeights.length >= 2 && numTopsAtMost >= 2 {code} nit: you can omit the 'else' keyword following the return in previous if block. was (Author: yuzhih...@gmail.com): {code} + List locations = new ArrayList<>(Math.min(numTopsAtMost, hostAndWeights.length)); ... + for (int i = 1; i < hostAndWeights.length; i++) { {code} Shouldn't the value of Math.min() be used as the upper bound above ? {code} + return locations; +} else { // hostAndWeights.length >= 2 && numTopsAtMost >= 2 {code} nit: you can omit the 'else' keyword following the return in previous if block. > Provide an option to skip calculating block locations for SnapshotInputFormat > - > > Key: HBASE-15482 > URL: https://issues.apache.org/jira/browse/HBASE-15482 > Project: HBase > Issue Type: Improvement > Components: mapreduce >Reporter: Liyin Tang >Assignee: Xiang Li >Priority: Minor > Fix For: 2.1.0 > > Attachments: HBASE-15482.master.000.patch, > HBASE-15482.master.001.patch > > > When a MR job is reading from SnapshotInputFormat, it needs to calculate the > splits based on the block locations in order to get best locality. However, > this process may take a long time for large snapshots. > In some setup, the computing layer, Spark, Hive or Presto could run out side > of HBase cluster. In these scenarios, the block locality doesn't matter. > Therefore, it will be great to have an option to skip calculating the block > locations for every job. That will super useful for the Hive/Presto/Spark > connectors. 
-- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-15482) Provide an option to skip calculating block locations for SnapshotInputFormat
[ https://issues.apache.org/jira/browse/HBASE-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283043#comment-16283043 ] Ted Yu commented on HBASE-15482: {code} + List locations = new ArrayList<>(Math.min(numTopsAtMost, hostAndWeights.length)); ... + for (int i = 1; i < hostAndWeights.length; i++) { {code} Shouldn't the value of Math.min() be used as the upper bound above ? {code} + return locations; +} else { // hostAndWeights.length >= 2 && numTopsAtMost >= 2 {code} nit: you can omit the 'else' keyword following the return in previous if block. > Provide an option to skip calculating block locations for SnapshotInputFormat > - > > Key: HBASE-15482 > URL: https://issues.apache.org/jira/browse/HBASE-15482 > Project: HBase > Issue Type: Improvement > Components: mapreduce >Reporter: Liyin Tang >Assignee: Xiang Li >Priority: Minor > Fix For: 2.1.0 > > Attachments: HBASE-15482.master.000.patch, > HBASE-15482.master.001.patch > > > When a MR job is reading from SnapshotInputFormat, it needs to calculate the > splits based on the block locations in order to get best locality. However, > this process may take a long time for large snapshots. > In some setup, the computing layer, Spark, Hive or Presto could run out side > of HBase cluster. In these scenarios, the block locality doesn't matter. > Therefore, it will be great to have an option to skip calculating the block > locations for every job. That will super useful for the Hive/Presto/Spark > connectors. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
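Ted's point about the loop bound can be illustrated with a small sketch (hypothetical names, reduced to plain host strings): the same `Math.min` value should size the list and bound the loop, otherwise the loop adds every host regardless of `numTopsAtMost`.

```java
import java.util.ArrayList;
import java.util.List;

public class BoundedTopList {
  // Size the list AND bound the loop with the same Math.min, so no more
  // than numTopsAtMost entries are ever collected.
  static List<String> topN(String[] hosts, int numTopsAtMost) {
    int bound = Math.min(numTopsAtMost, hosts.length);
    List<String> locations = new ArrayList<>(bound);
    // Looping to hosts.length here (as in the reviewed snippet) would add
    // every host; looping to 'bound' stops after numTopsAtMost entries.
    for (int i = 0; i < bound; i++) {
      locations.add(hosts[i]);
    }
    return locations;
  }
}
```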
[jira] [Commented] (HBASE-19163) "Maximum lock count exceeded" from region server's batch processing
[ https://issues.apache.org/jira/browse/HBASE-19163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283042#comment-16283042 ] Hudson commented on HBASE-19163: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4187 (See [https://builds.apache.org/job/HBase-Trunk_matrix/4187/]) HBASE-19163 Maximum lock count exceeded from region server's batch (huaxiangsun: rev 428e5672e77d6dea9cfdafb5a3052415b9926d12) * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide3.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestAtomicOperation.java > "Maximum lock count exceeded" from region server's batch processing > --- > > Key: HBASE-19163 > URL: https://issues.apache.org/jira/browse/HBASE-19163 > Project: HBase > Issue Type: Bug > Components: regionserver >Affects Versions: 3.0.0, 1.2.7, 2.0.0-alpha-3 >Reporter: huaxiang sun >Assignee: huaxiang sun > Attachments: HBASE-19163-master-v001.patch, > HBASE-19163.master.001.patch, HBASE-19163.master.002.patch, > HBASE-19163.master.004.patch, HBASE-19163.master.005.patch, > HBASE-19163.master.006.patch, HBASE-19163.master.007.patch, > HBASE-19163.master.008.patch, HBASE-19163.master.009.patch, > HBASE-19163.master.009.patch, HBASE-19163.master.010.patch, unittest-case.diff > > > In one of use cases, we found the following exception and replication is > stuck. 
> {code} > 2017-10-25 19:41:17,199 WARN [hconnection-0x28db294f-shared--pool4-t936] > client.AsyncProcess: #3, table=foo, attempt=5/5 failed=262836ops, last > exception: java.io.IOException: java.io.IOException: Maximum lock count > exceeded > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2215) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165) > Caused by: java.lang.Error: Maximum lock count exceeded > at > java.util.concurrent.locks.ReentrantReadWriteLock$Sync.fullTryAcquireShared(ReentrantReadWriteLock.java:528) > at > java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryAcquireShared(ReentrantReadWriteLock.java:488) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1327) > at > java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:871) > at > org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:5163) > at > org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3018) > at > org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2877) > at > org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2819) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:753) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:715) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2148) > at > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170) > ... 
3 more > {code} > While we are still examining the data pattern, it is clear that there are too > many mutations in the batch against the same row; this exceeds the maximum > ~64k shared lock count, throws an Error, and fails the whole batch. > There are two approaches to solve this issue. > 1). When there are multiple mutations against the same row in the batch, we just > need to acquire the lock once for that row instead of acquiring the lock for > each mutation. > 2). We catch the error, process whatever has been acquired so far, and loop back. > With HBASE-17924, approach 1 seems easy to implement now. > Creating the jira; will post updates/patches as the investigation moves forward. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
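Approach 1) above — taking the row lock once per row rather than once per mutation — can be sketched as follows. This is an illustrative model only; the class and method names are assumptions, not the actual HRegion lock bookkeeping.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class RowLockDedupSketch {
    // Count how many row locks a batch would acquire if duplicate rows share
    // a single lock. With one-lock-per-mutation, a batch of ~262k mutations
    // on one row attempts ~262k shared acquisitions and exceeds the ~64k
    // ReentrantReadWriteLock hold limit; deduplicated, it needs only one.
    static int locksNeeded(List<byte[]> mutationRows) {
        Set<String> lockedRows = new HashSet<>();
        for (byte[] row : mutationRows) {
            lockedRows.add(Arrays.toString(row)); // first mutation locks, the rest reuse it
        }
        return lockedRows.size();
    }
}
```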
[jira] [Comment Edited] (HBASE-15482) Provide an option to skip calculating block locations for SnapshotInputFormat
[ https://issues.apache.org/jira/browse/HBASE-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283035#comment-16283035 ] Xiang Li edited comment on HBASE-15482 at 12/8/17 4:27 AM: --- [~tedyu], thanks very much for your comments! patch 001 is uploaded to address your comments as well as the errors reported by checkstyle. * "hbase.TableSnapshotInputFormat.locality" is changed into "hbase.TableSnapshotInputFormat.locality.enable". * The truncation of locations is moved into getBestLocations(). * The errors reported by checkstyle are corrected. Regarding {{moving the truncation of locations into getBestLocations()}}: The code has different logic for different combinations of hostAndWeights.length and numTopsAtMost. And there is a small behavior change on getBestLocations() when hostAndWeights.length is 0: * Originally, it returns an empty list. * After the change, it returns null. I think we do not need to allocate an empty list here, as the locations will be used to construct TableSnapshotInputFormatImpl.InputSplit and null will be checked as follows: {code:title=hbase/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatImpl.java|borderStyle=solid} public InputSplit(TableDescriptor htd, HRegionInfo regionInfo, List<String> locations, Scan scan, Path restoreDir) { this.htd = htd; this.regionInfo = regionInfo; if (locations == null || locations.isEmpty()) { // <--- here this.locations = new String[0]; } else { this.locations = locations.toArray(new String[locations.size()]); } try { this.scan = scan != null ? TableMapReduceUtil.convertScanToString(scan) : ""; } catch (IOException e) { LOG.warn("Failed to convert Scan to String", e); } this.restoreDir = restoreDir.toString(); } {code} And TableSnapshotInputFormatImpl is @InterfaceAudience.Private; there are no other callers of getBestLocations() in the whole HBase project except UTs. A UT is updated according to the change above. 
was (Author: water): [~tedyu], thanks very much for your comments! patch 001 is updated to address your comments as well as the errors reported by checkstyle. * "hbase.TableSnapshotInputFormat.locality" is changed into "hbase.TableSnapshotInputFormat.locality.enable". * The truncation of locations is moved into getBestLocations(). * The errors reported by checkstyle are corrected. Regarding {{moving the truncation of locations into getBestLocations()}}: The code has different logic for different combinations of hostAndWeights.length and numTopsAtMost. And there is a small behavior change on getBestLocations() when hostAndWeights.length is 0: * Originally, it returns an empty list. * After the change, it returns null. I think we do not need to allocate an empty list here, as the locations will be used to construct TableSnapshotInputFormatImpl.InputSplit and null will be checked as follow {code:title=hbase/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatImpl.java|borderStyle=solid} public InputSplit(TableDescriptor htd, HRegionInfo regionInfo, List locations, Scan scan, Path restoreDir) { this.htd = htd; this.regionInfo = regionInfo; if (locations == null || locations.isEmpty()) { // <--- here this.locations = new String[0]; } else { this.locations = locations.toArray(new String[locations.size()]); } try { this.scan = scan != null ? TableMapReduceUtil.convertScanToString(scan) : ""; } catch (IOException e) { LOG.warn("Failed to convert Scan to String", e); } this.restoreDir = restoreDir.toString(); } {code} And TableSnapshotInputFormatImpl is @InterfaceAudience.Private, there is no other calls of getBestLocations() in the whole HBase project except UTs. A UT is updated according to the change above. 
> Provide an option to skip calculating block locations for SnapshotInputFormat > - > > Key: HBASE-15482 > URL: https://issues.apache.org/jira/browse/HBASE-15482 > Project: HBase > Issue Type: Improvement > Components: mapreduce >Reporter: Liyin Tang >Assignee: Xiang Li >Priority: Minor > Fix For: 2.1.0 > > Attachments: HBASE-15482.master.000.patch, > HBASE-15482.master.001.patch > > > When a MR job is reading from SnapshotInputFormat, it needs to calculate the > splits based on the block locations in order to get best locality. However, > this process may take a long time for large snapshots. > In some setup, the computing layer, Spark, Hive or Presto could run out side > of HBase cluster. In these scenarios, the block locality doesn't matter. > Therefore, it will be
[jira] [Work started] (HBASE-19457) Debugging flaky TestTruncateTableProcedure
[ https://issues.apache.org/jira/browse/HBASE-19457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-19457 started by Appy. > Debugging flaky TestTruncateTableProcedure > -- > > Key: HBASE-19457 > URL: https://issues.apache.org/jira/browse/HBASE-19457 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Comment Edited] (HBASE-15482) Provide an option to skip calculating block locations for SnapshotInputFormat
[ https://issues.apache.org/jira/browse/HBASE-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283035#comment-16283035 ] Xiang Li edited comment on HBASE-15482 at 12/8/17 4:26 AM: --- [~tedyu], thanks very much for your comments! patch 001 is updated to address your comments as well as the errors reported by checkstyle. * "hbase.TableSnapshotInputFormat.locality" is changed into "hbase.TableSnapshotInputFormat.locality.enable". * The truncation of locations is moved into getBestLocations(). * The errors reported by checkstyle are corrected. Regarding {{moving the truncation of locations into getBestLocations()}}: The code has different logic for different combinations of hostAndWeights.length and numTopsAtMost. And there is a small behavior change on getBestLocations() when hostAndWeights.length is 0: * Originally, it returns an empty list. * After the change, it returns null. I think we do not need to allocate an empty list here, as the locations will be used to construct TableSnapshotInputFormatImpl.InputSplit and null will be checked as follow {code:title=hbase/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatImpl.java|borderStyle=solid} public InputSplit(TableDescriptor htd, HRegionInfo regionInfo, List locations, Scan scan, Path restoreDir) { this.htd = htd; this.regionInfo = regionInfo; if (locations == null || locations.isEmpty()) { // <--- here this.locations = new String[0]; } else { this.locations = locations.toArray(new String[locations.size()]); } try { this.scan = scan != null ? TableMapReduceUtil.convertScanToString(scan) : ""; } catch (IOException e) { LOG.warn("Failed to convert Scan to String", e); } this.restoreDir = restoreDir.toString(); } {code} And TableSnapshotInputFormatImpl is @InterfaceAudience.Private, there is no other calls of getBestLocations() in the whole HBase project except UTs. A UT is updated according to the change above. 
was (Author: water): [~tedyu], thanks very much for your comments! patch 001 is updated to address your comments as well as the errors reported by checkstyle. * "hbase.TableSnapshotInputFormat.locality" is changed into "hbase.TableSnapshotInputFormat.locality.enable". * The truncation of locations is moved into getBestLocations(). * The errors reported by checkstyle are corrected. Regarding {{moving the truncation of locations into getBestLocations()}}: The code has different logic for different combinations of hostAndWeights.length and numTopsAtMost. And there is a small behavior change on getBestLocations() when hostAndWeights.length is 0: * Originally, it returns a empty list. * After the change, it returns null. I think we do not need to allocate an empty list here, as the locations will be used to construct TableSnapshotInputFormatImpl.InputSplit and null will be checked as follow {code:title=hbase/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatImpl.java|borderStyle=solid} public InputSplit(TableDescriptor htd, HRegionInfo regionInfo, List locations, Scan scan, Path restoreDir) { this.htd = htd; this.regionInfo = regionInfo; if (locations == null || locations.isEmpty()) { // <--- here this.locations = new String[0]; } else { this.locations = locations.toArray(new String[locations.size()]); } try { this.scan = scan != null ? TableMapReduceUtil.convertScanToString(scan) : ""; } catch (IOException e) { LOG.warn("Failed to convert Scan to String", e); } this.restoreDir = restoreDir.toString(); } {code} And TableSnapshotInputFormatImpl is @InterfaceAudience.Private, there is no other calls of getBestLocations() in the whole HBase project except UTs. A UT is updated according to the change above. 
> Provide an option to skip calculating block locations for SnapshotInputFormat > - > > Key: HBASE-15482 > URL: https://issues.apache.org/jira/browse/HBASE-15482 > Project: HBase > Issue Type: Improvement > Components: mapreduce >Reporter: Liyin Tang >Assignee: Xiang Li >Priority: Minor > Fix For: 2.1.0 > > Attachments: HBASE-15482.master.000.patch, > HBASE-15482.master.001.patch > > > When a MR job is reading from SnapshotInputFormat, it needs to calculate the > splits based on the block locations in order to get best locality. However, > this process may take a long time for large snapshots. > In some setup, the computing layer, Spark, Hive or Presto could run out side > of HBase cluster. In these scenarios, the block locality doesn't matter. > Therefore, it will be great
[jira] [Commented] (HBASE-19457) Debugging flaky TestTruncateTableProcedure
[ https://issues.apache.org/jira/browse/HBASE-19457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283036#comment-16283036 ] Appy commented on HBASE-19457: -- This is so weird, test fails consistently upstream (individual test functions time out after 180sec), but passes locally (~80sec for whole test class , 6 tests)! > Debugging flaky TestTruncateTableProcedure > -- > > Key: HBASE-19457 > URL: https://issues.apache.org/jira/browse/HBASE-19457 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-15482) Provide an option to skip calculating block locations for SnapshotInputFormat
[ https://issues.apache.org/jira/browse/HBASE-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiang Li updated HBASE-15482: - Status: Patch Available (was: Open) > Provide an option to skip calculating block locations for SnapshotInputFormat > - > > Key: HBASE-15482 > URL: https://issues.apache.org/jira/browse/HBASE-15482 > Project: HBase > Issue Type: Improvement > Components: mapreduce >Reporter: Liyin Tang >Assignee: Xiang Li >Priority: Minor > Fix For: 2.1.0 > > Attachments: HBASE-15482.master.000.patch, > HBASE-15482.master.001.patch > > > When a MR job is reading from SnapshotInputFormat, it needs to calculate the > splits based on the block locations in order to get best locality. However, > this process may take a long time for large snapshots. > In some setup, the computing layer, Spark, Hive or Presto could run out side > of HBase cluster. In these scenarios, the block locality doesn't matter. > Therefore, it will be great to have an option to skip calculating the block > locations for every job. That will super useful for the Hive/Presto/Spark > connectors. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19457) Debugging flaky TestTruncateTableProcedure
[ https://issues.apache.org/jira/browse/HBASE-19457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Appy updated HBASE-19457: - Summary: Debugging flaky TestTruncateTableProcedure (was: Debugging TestTruncateTableProcedure failures) > Debugging flaky TestTruncateTableProcedure > -- > > Key: HBASE-19457 > URL: https://issues.apache.org/jira/browse/HBASE-19457 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-15482) Provide an option to skip calculating block locations for SnapshotInputFormat
[ https://issues.apache.org/jira/browse/HBASE-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiang Li updated HBASE-15482: - Attachment: HBASE-15482.master.001.patch > Provide an option to skip calculating block locations for SnapshotInputFormat > - > > Key: HBASE-15482 > URL: https://issues.apache.org/jira/browse/HBASE-15482 > Project: HBase > Issue Type: Improvement > Components: mapreduce >Reporter: Liyin Tang >Assignee: Xiang Li >Priority: Minor > Fix For: 2.1.0 > > Attachments: HBASE-15482.master.000.patch, > HBASE-15482.master.001.patch > > > When a MR job is reading from SnapshotInputFormat, it needs to calculate the > splits based on the block locations in order to get best locality. However, > this process may take a long time for large snapshots. > In some setup, the computing layer, Spark, Hive or Presto could run out side > of HBase cluster. In these scenarios, the block locality doesn't matter. > Therefore, it will be great to have an option to skip calculating the block > locations for every job. That will super useful for the Hive/Presto/Spark > connectors. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (HBASE-19457) Debugging TestTruncateTableProcedure failures
Appy created HBASE-19457: Summary: Debugging TestTruncateTableProcedure failures Key: HBASE-19457 URL: https://issues.apache.org/jira/browse/HBASE-19457 Project: HBase Issue Type: Bug Reporter: Appy Assignee: Appy -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-15482) Provide an option to skip calculating block locations for SnapshotInputFormat
[ https://issues.apache.org/jira/browse/HBASE-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283035#comment-16283035 ] Xiang Li commented on HBASE-15482: -- [~tedyu], thanks very much for your comments! patch 001 is updated to address your comments as well as the errors reported by checkstyle. * "hbase.TableSnapshotInputFormat.locality" is changed into "hbase.TableSnapshotInputFormat.locality.enable". * The truncation of locations is moved into getBestLocations(). * The errors reported by checkstyle are corrected. Regarding {{moving the truncation of locations into getBestLocations()}}: The code has different logic for different combinations of hostAndWeights.length and numTopsAtMost. And there is a small behavior change on getBestLocations() when hostAndWeights.length is 0: * Originally, it returns an empty list. * After the change, it returns null. I think we do not need to allocate an empty list here, as the locations will be used to construct TableSnapshotInputFormatImpl.InputSplit and null will be checked as follows: {code:title=hbase/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatImpl.java|borderStyle=solid} public InputSplit(TableDescriptor htd, HRegionInfo regionInfo, List<String> locations, Scan scan, Path restoreDir) { this.htd = htd; this.regionInfo = regionInfo; if (locations == null || locations.isEmpty()) { // <--- here this.locations = new String[0]; } else { this.locations = locations.toArray(new String[locations.size()]); } try { this.scan = scan != null ? TableMapReduceUtil.convertScanToString(scan) : ""; } catch (IOException e) { LOG.warn("Failed to convert Scan to String", e); } this.restoreDir = restoreDir.toString(); } {code} And TableSnapshotInputFormatImpl is @InterfaceAudience.Private; there are no other callers of getBestLocations() in the whole HBase project except UTs. A UT is updated according to the change above. 
> Provide an option to skip calculating block locations for SnapshotInputFormat > - > > Key: HBASE-15482 > URL: https://issues.apache.org/jira/browse/HBASE-15482 > Project: HBase > Issue Type: Improvement > Components: mapreduce >Reporter: Liyin Tang >Assignee: Xiang Li >Priority: Minor > Fix For: 2.1.0 > > Attachments: HBASE-15482.master.000.patch > > > When a MR job is reading from SnapshotInputFormat, it needs to calculate the > splits based on the block locations in order to get best locality. However, > this process may take a long time for large snapshots. > In some setup, the computing layer, Spark, Hive or Presto could run out side > of HBase cluster. In these scenarios, the block locality doesn't matter. > Therefore, it will be great to have an option to skip calculating the block > locations for every job. That will super useful for the Hive/Presto/Spark > connectors. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
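The null-versus-empty behavior change discussed in the comment above is easy to see in isolation. The helper below mirrors the check in the quoted InputSplit constructor (a minimal sketch; the class and method names are illustrative, not HBase API):

```java
import java.util.List;

public class LocationsCheckSketch {
    // Mirrors the null/empty check in the InputSplit constructor quoted
    // above: a null locations list (the new getBestLocations() behavior for
    // zero hosts) and an empty list both collapse to a zero-length array,
    // so the caller cannot tell the difference.
    static String[] toLocationArray(List<String> locations) {
        if (locations == null || locations.isEmpty()) {
            return new String[0];
        }
        return locations.toArray(new String[locations.size()]);
    }
}
```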
[jira] [Commented] (HBASE-19349) Introduce wrong version depencency of servlet-api jar
[ https://issues.apache.org/jira/browse/HBASE-19349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283031#comment-16283031 ] Hadoop QA commented on HBASE-19349: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 7s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 36s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 33s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 15m 42s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 8s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 7s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 45s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 66m 17s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 10m 36s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}173m 7s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}282m 14s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.rsgroup.TestRSGroups | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 | | JIRA Issue | HBASE-19349 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12901160/0002-HBASE-19349-Introduce-wrong-version-depencency-of-se.patch | | Optional Tests | asflicense javac javadoc unit shadedjars hadoopcheck xml compile | | uname | Linux e5d5ea07dabb 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 15:49:21 UTC 2017 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 428e5672e7 | | maven | version: Apache Maven 3.5.2 (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) | | Default Java | 1.8.0_151 | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/10296/artifact/patchprocess/patch-unit-root.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/10296/testReport/ | | modules | C: . U: . | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/10296/console | | Powered by | Apache Yetus 0.6.0 http://yetus.apache.org | This message was automatically generated. > Introduce
[jira] [Commented] (HBASE-19450) Add log about average execution time for ScheduledChore
[ https://issues.apache.org/jira/browse/HBASE-19450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283025#comment-16283025 ] Hadoop QA commented on HBASE-19450: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 7s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 40s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 54s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 1m 5s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 11s{color} | {color:red} hbase-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 11s{color} | {color:red} hbase-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 18s{color} | {color:red} hbase-common: The patch generated 1 new + 0 unchanged - 5 fixed = 1 total (was 5) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} shadedjars {color} | {color:red} 0m 53s{color} | {color:red} patch has 14 errors when building our shaded downstream artifacts. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 1m 33s{color} | {color:red} The patch causes 15 errors with Hadoop v2.6.1. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 2m 13s{color} | {color:red} The patch causes 15 errors with Hadoop v2.6.2. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 2m 52s{color} | {color:red} The patch causes 15 errors with Hadoop v2.6.3. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 3m 32s{color} | {color:red} The patch causes 15 errors with Hadoop v2.6.4. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 4m 13s{color} | {color:red} The patch causes 15 errors with Hadoop v2.6.5. 
{color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 4m 53s{color} | {color:red} The patch causes 15 errors with Hadoop v2.7.1. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 5m 33s{color} | {color:red} The patch causes 15 errors with Hadoop v2.7.2. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 6m 12s{color} | {color:red} The patch causes 15 errors with Hadoop v2.7.3. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 6m 51s{color} | {color:red} The patch causes 15 errors with Hadoop v2.7.4. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 7m 32s{color} | {color:red} The patch causes 15 errors with Hadoop v3.0.0-alpha4. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 10s{color} | {color:red} hbase-common in the patch failed. {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 11s{color} | {color:red} hbase-common in the patch failed. {color} | | {color:green}+1{color} |
[jira] [Commented] (HBASE-19449) Implement SLF4J and SLF4J Parameter Substitution
[ https://issues.apache.org/jira/browse/HBASE-19449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283022#comment-16283022 ] stack commented on HBASE-19449: --- Yeah, my name is messed up in JIRA. Too generic and one of the lads figured its too many aliases and JIRA baulks. SLF4J is generally non-native. Comes in via third-parties like ZooKeeper. We do commons-logging ourselves and backing logger is log4j. bq. I'm sorry I got a bit overzealous and just threw this patch out there. Thank you for your interest and the dialogue. No worries. We love contribs. Glad to have them. Thanks for writing up your motivations. If interested in the logging story, yeah, we need a revamp. Last time we were thinking about this was a while back in HBASE-10092 when we were looking at upping to log4j2... But it ain't ready yet. Our logging comes up out of what hadoop does. Thats our lineage. Could change it but it would be substantial change so a new logging basis would need a bit of a writeup and getting some buy-in up on dev list. Thanks Beluga. > Implement SLF4J and SLF4J Parameter Substitution > > > Key: HBASE-19449 > URL: https://issues.apache.org/jira/browse/HBASE-19449 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 2.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Fix For: 2.0.0 > > Attachments: HBASE-19449.1.patch, HBASE-19449.2.patch, > HBASE-19449.3.patch > > > For the {{HFileArchiver.java}} class... > * Use SLF4J logging > * Use SLF4J parameter substitution > * Fix some small issues with missing spaces between words in the log message > and the like > https://www.slf4j.org/faq.html#logging_performance -- This message was sent by Atlassian JIRA (v6.4.14#64029)
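The performance point behind SLF4J parameter substitution (the FAQ entry linked in the description) is that message assembly is deferred until the logger decides to emit. The sketch below simulates the {} placeholder mechanism in plain Java rather than pulling in the SLF4J API; it is a toy model of the idea, not SLF4J's actual implementation.

```java
public class ParamSubstitutionSketch {
    // Minimal simulation of SLF4J-style {} substitution. In real SLF4J,
    // LOG.debug("Archiving {} ({} bytes)", file, size) only performs this
    // formatting work once the DEBUG level is known to be enabled, unlike
    // eager string concatenation at the call site, which always runs.
    static String format(String template, Object... args) {
        StringBuilder sb = new StringBuilder();
        int argIdx = 0, from = 0, at;
        while ((at = template.indexOf("{}", from)) >= 0 && argIdx < args.length) {
            sb.append(template, from, at);   // literal text before the placeholder
            sb.append(args[argIdx++]);       // substituted argument
            from = at + 2;                   // skip past "{}"
        }
        return sb.append(template.substring(from)).toString();
    }
}
```

This is why `LOG.debug("Archiving {}", file)` is preferred over `LOG.debug("Archiving " + file)`: when DEBUG is off, the former pays only for the level check.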
[jira] [Commented] (HBASE-16890) Analyze the performance of AsyncWAL and fix the same
[ https://issues.apache.org/jira/browse/HBASE-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283020#comment-16283020 ] Yu Li commented on HBASE-16890: --- I could see some good offline discussion and more testing and maybe we could update the conclusion here [~chancelq] [~Apache9]? Thanks. bq. Yu Li What is your feedback with AsyncWAL and its perf in case you have already tried in your prod cluster? That will be a good take away here rather than small tests. Sorry just noticed the question sir [~ram_krish]. No we haven't used AsyncWAL in prod but as you might see we're interested and probably will do after more testing (and maybe some more improvements), let's see (smile). > Analyze the performance of AsyncWAL and fix the same > > > Key: HBASE-16890 > URL: https://issues.apache.org/jira/browse/HBASE-16890 > Project: HBase > Issue Type: Sub-task > Components: wal >Affects Versions: 2.0.0 >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan >Priority: Blocker > Fix For: 2.0.0-beta-1 > > Attachments: AsyncWAL_disruptor.patch, AsyncWAL_disruptor_1 > (2).patch, AsyncWAL_disruptor_3.patch, AsyncWAL_disruptor_3.patch, > AsyncWAL_disruptor_4.patch, AsyncWAL_disruptor_6.patch, > HBASE-16890-rc-v2.patch, HBASE-16890-rc-v3.patch, > HBASE-16890-remove-contention-v1.patch, HBASE-16890-remove-contention.patch, > Screen Shot 2016-10-25 at 7.34.47 PM.png, Screen Shot 2016-10-25 at 7.39.07 > PM.png, Screen Shot 2016-10-25 at 7.39.48 PM.png, Screen Shot 2016-11-04 at > 5.21.27 PM.png, Screen Shot 2016-11-04 at 5.30.18 PM.png, async.svg, > classic.svg, contention.png, contention_defaultWAL.png, > ycsb_FSHlog.vs.Async.png > > > Tests reveal that AsyncWAL under load in single node cluster performs slower > than the Default WAL. This task is to analyze and see if we could fix it. > See some discussions in the tail of JIRA HBASE-15536. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19433) ChangeSplitPolicyAction modifies an immutable HTableDescriptor
[ https://issues.apache.org/jira/browse/HBASE-19433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-19433: --- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Thanks for the reviews. > ChangeSplitPolicyAction modifies an immutable HTableDescriptor > -- > > Key: HBASE-19433 > URL: https://issues.apache.org/jira/browse/HBASE-19433 > Project: HBase > Issue Type: Bug > Components: integration tests >Reporter: Josh Elser >Assignee: Ted Yu >Priority: Critical > Fix For: 2.0.0-beta-1 > > Attachments: 19433-unsupported.out, 19433.v1.txt, 19433.v2.txt, > 19433.v3.txt, 19433.v4.txt, 19433.v5.txt, 19433.v6.txt, 19433.v7.txt > > > {noformat} > 2017-12-01 23:18:51,433 WARN [ChaosMonkeyThread] policies.Policy: Exception > occurred during performing action: java.lang.UnsupportedOperationException: > HTableDescriptor is read-only > at > org.apache.hadoop.hbase.client.ImmutableHTableDescriptor.getDelegateeForModification(ImmutableHTableDescriptor.java:59) > at > org.apache.hadoop.hbase.HTableDescriptor.setRegionSplitPolicyClassName(HTableDescriptor.java:333) > at > org.apache.hadoop.hbase.chaos.actions.ChangeSplitPolicyAction.perform(ChangeSplitPolicyAction.java:54) > at > org.apache.hadoop.hbase.chaos.policies.PeriodicRandomActionPolicy.runOneIteration(PeriodicRandomActionPolicy.java:59) > at > org.apache.hadoop.hbase.chaos.policies.PeriodicPolicy.run(PeriodicPolicy.java:41) > at java.lang.Thread.run(Thread.java:745) > {noformat} > Found during some internal testing. Need to make sure this Action, in > addition to the others, doesn't fall into the trap of modifying the > TableDescriptor obtained from Admin. > [~tedyu], want to take a stab at it? -- This message was sent by Atlassian JIRA (v6.4.14#64029)
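The pattern behind the fix above is worth spelling out: a descriptor handed back by Admin is read-only, so the action must copy it into a mutable builder before changing the split policy. The sketch below reproduces the trap and the copy-first fix in plain Java with illustrative class names — it is not the actual HBase API (the real fix would go through something like a TableDescriptorBuilder).

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative miniature of the read-only-descriptor trap from the stack
// trace above, and the builder-copy fix. Class and key names are hypothetical.
final class Descriptor {
    private final Map<String, String> values;
    private final boolean readOnly;

    Descriptor(Map<String, String> values, boolean readOnly) {
        this.values = new HashMap<>(values);
        this.readOnly = readOnly;
    }

    String get(String key) { return values.get(key); }

    // Mutating a descriptor obtained from Admin fails, as in the chaos action.
    void set(String key, String value) {
        if (readOnly) {
            throw new UnsupportedOperationException("descriptor is read-only");
        }
        values.put(key, value);
    }

    // The fix: copy into a fresh, mutable descriptor before modifying.
    Descriptor copyForModification() {
        return new Descriptor(values, false);
    }
}

public class SplitPolicyFix {
    public static void main(String[] args) {
        Map<String, String> initial = new HashMap<>();
        initial.put("SPLIT_POLICY", "ConstantSizeRegionSplitPolicy");
        Descriptor fromAdmin = new Descriptor(initial, true);

        boolean threw = false;
        try {
            fromAdmin.set("SPLIT_POLICY", "DisabledRegionSplitPolicy");
        } catch (UnsupportedOperationException e) {
            threw = true;  // the bug: modifying the Admin-supplied descriptor
        }

        Descriptor copy = fromAdmin.copyForModification();
        copy.set("SPLIT_POLICY", "DisabledRegionSplitPolicy");

        System.out.println(threw);
        System.out.println(copy.get("SPLIT_POLICY"));
    }
}
```

The same discipline applies to every chaos action that touches a table descriptor: never mutate what Admin returns.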
[jira] [Updated] (HBASE-19452) Turn ON off heap Bucket Cache by default
[ https://issues.apache.org/jira/browse/HBASE-19452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yu Li updated HBASE-19452: -- Summary: Turn ON off heap Bucket Cache by default (was: Turn ON off heap Bucket Cache b default) > Turn ON off heap Bucket Cache by default > > > Key: HBASE-19452 > URL: https://issues.apache.org/jira/browse/HBASE-19452 > Project: HBase > Issue Type: Sub-task >Reporter: Anoop Sam John >Assignee: Anoop Sam John > Fix For: 2.0.0-beta-1 > > > BC's hbase.bucketcache.ioengine by default is empty now, meaning no BC. > Make this default to be 'offheap', and provide a default off-heap size for > the BC as well. This can be 8 GB? > Also we should provide a new option 'none' for > hbase.bucketcache.ioengine now, for users who don't need BC at all. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
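For context, a user opting in to the off-heap bucket cache today sets the two properties the issue mentions by hand, roughly as below. The values are illustrative (the 8 GB figure is only floated as a possibility in the issue), and the point of the proposal is that this stanza would become the out-of-the-box default:

```xml
<!-- Hypothetical hbase-site.xml matching the proposal above; values illustrative. -->
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>offheap</value>
</property>
<property>
  <name>hbase.bucketcache.size</name>
  <!-- Size in MB; the issue suggests 8 GB as a possible default. -->
  <value>8192</value>
</property>
```

Under the proposal, users who want no bucket cache at all would set the new 'none' value instead of leaving the ioengine empty.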
[jira] [Commented] (HBASE-19450) Add log about average execution time for ScheduledChore
[ https://issues.apache.org/jira/browse/HBASE-19450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283007#comment-16283007 ] Yu Li commented on HBASE-19450: --- Currently it's an overall count from the very beginning, i.e. the average over all time, while I think the average over a recent interval (like the last 5 runs or the last 5 minutes) would be more helpful for indicating potential issues. What do you think? > Add log about average execution time for ScheduledChore > --- > > Key: HBASE-19450 > URL: https://issues.apache.org/jira/browse/HBASE-19450 > Project: HBase > Issue Type: Improvement >Reporter: Reid Chan >Assignee: Reid Chan >Priority: Minor > Attachments: HBASE-19450.master.001.patch, > HBASE-19450.master.002.patch > > > So far, there is no information about the exact execution time for a chore, > so we can provide log information about it. It also brings other benefits, > like discovering inefficient chores which have room for improvement. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
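The "average over the last N runs" idea from the comment above is a simple sliding window over recorded durations. A minimal sketch (illustrative names, not the actual ScheduledChore patch) shows how an early slow run stops skewing the reported average once enough fresh samples arrive, which an all-time average never does:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sliding-window average over the last N chore run durations (in nanos,
// matching the "use nano instead of millis" change on this issue).
// Names are illustrative, not the actual ScheduledChore API.
class RecentAverage {
    private final int window;
    private final Deque<Long> samples = new ArrayDeque<>();
    private long sum;

    RecentAverage(int window) { this.window = window; }

    void record(long nanos) {
        samples.addLast(nanos);
        sum += nanos;
        if (samples.size() > window) {
            sum -= samples.removeFirst();  // drop the oldest sample
        }
    }

    double averageNanos() {
        return samples.isEmpty() ? 0 : (double) sum / samples.size();
    }
}

public class ChoreTimingDemo {
    public static void main(String[] args) {
        RecentAverage avg = new RecentAverage(5);
        // One early slow run (1000ns) followed by five normal runs (100ns):
        long[] runs = {1_000, 100, 100, 100, 100, 100};
        for (long r : runs) {
            avg.record(r);
        }
        // The slow outlier has fallen out of the 5-sample window.
        System.out.println((long) avg.averageNanos());
    }
}
```

A time-based window ("last 5 minutes") would keep (timestamp, duration) pairs and evict by age instead of by count; the structure is otherwise the same.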
[jira] [Updated] (HBASE-19450) Add log about average execution time for ScheduledChore
[ https://issues.apache.org/jira/browse/HBASE-19450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Reid Chan updated HBASE-19450: -- Status: Patch Available (was: Open) > Add log about average execution time for ScheduledChore > --- > > Key: HBASE-19450 > URL: https://issues.apache.org/jira/browse/HBASE-19450 > Project: HBase > Issue Type: Improvement >Reporter: Reid Chan >Assignee: Reid Chan >Priority: Minor > Attachments: HBASE-19450.master.001.patch, > HBASE-19450.master.002.patch > > > So far, there is no information about the exact execution time for a chore, > so we can provide log information about it. It also brings other benefits, > like discovering inefficient chores which have room for improvement. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19450) Add log about average execution time for ScheduledChore
[ https://issues.apache.org/jira/browse/HBASE-19450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Reid Chan updated HBASE-19450: -- Attachment: HBASE-19450.master.002.patch Use nano instead of millis. > Add log about average execution time for ScheduledChore > --- > > Key: HBASE-19450 > URL: https://issues.apache.org/jira/browse/HBASE-19450 > Project: HBase > Issue Type: Improvement >Reporter: Reid Chan >Assignee: Reid Chan >Priority: Minor > Attachments: HBASE-19450.master.001.patch, > HBASE-19450.master.002.patch > > > So far, there is no information about the exact execution time for a chore, > so we can provide log information about it. It also brings other benefits, > like discovering inefficient chores which have room for improvement. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19450) Add log about average execution time for ScheduledChore
[ https://issues.apache.org/jira/browse/HBASE-19450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Reid Chan updated HBASE-19450: -- Status: Open (was: Patch Available) > Add log about average execution time for ScheduledChore > --- > > Key: HBASE-19450 > URL: https://issues.apache.org/jira/browse/HBASE-19450 > Project: HBase > Issue Type: Improvement >Reporter: Reid Chan >Assignee: Reid Chan >Priority: Minor > Attachments: HBASE-19450.master.001.patch > > > So far, there is no information about the exact execution time for a chore, > so we can provide log information about it. It also brings other benefits, > like discovering inefficient chores which have room for improvement. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19454) Debugging TestDistributedLogSplitting#testThreeRSAbort
[ https://issues.apache.org/jira/browse/HBASE-19454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282995#comment-16282995 ] Appy commented on HBASE-19454: -- Pushed to master and branch-2. Will come back (in a separate jira) after a couple of runs have accumulated. Thanks for the review [~tedyu] and [~stack]. > Debugging TestDistributedLogSplitting#testThreeRSAbort > -- > > Key: HBASE-19454 > URL: https://issues.apache.org/jira/browse/HBASE-19454 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Fix For: 2.0.0-beta-1 > > Attachments: HBASE-19454.master.001.patch > > > It's a flaky test. > Attaching a patch (001) > {noformat} > - Changed testThreeRSAbort to kill the RSs instead of aborting. Simple > aborting will close the regions, we want extreme failure testing here. > - Adds some logging for easier debugging. > - Refactors TestDistributedLogSplitting to use standard junit rules. > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19358) Improve the stability of splitting log when do fail over
[ https://issues.apache.org/jira/browse/HBASE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282993#comment-16282993 ] Yu Li commented on HBASE-19358: --- I see, good to know. Will wait for the new design doc and patch on review board then (smile). > Improve the stability of splitting log when do fail over > > > Key: HBASE-19358 > URL: https://issues.apache.org/jira/browse/HBASE-19358 > Project: HBase > Issue Type: Improvement > Components: MTTR >Affects Versions: 0.98.24 >Reporter: Jingyun Tian >Assignee: Jingyun Tian > Attachments: newLogic.jpg, previousLogic.jpg, split-1-log.png, > split-table.png, split_test_result.png > > > The way we split the log now is like the following figure: > !https://issues.apache.org/jira/secure/attachment/12899558/previousLogic.jpg! > The problem is that the OutputSink writes the recovered edits during log > splitting, which means it creates one WriterAndPath for each region. If the > cluster is small and the number of regions per RS is large, it will create > too many HDFS streams at the same time. Then it is prone to failure since > each datanode needs to handle too many streams. > Thus I came up with a new way to split the log. > !https://issues.apache.org/jira/secure/attachment/12899557/newLogic.jpg! > We cache the recovered edits until they exceed the memory limit we set or we > reach the end; then a thread pool does the rest: writes them to files and > moves them to the destination. > The biggest benefit is that we can control the number of streams we create > during log splitting: > it will not exceed *_hbase.regionserver.wal.max.splitters * > hbase.regionserver.hlog.splitlog.writer.threads_*, whereas before it was > *_hbase.regionserver.wal.max.splitters * the number of regions the hlog > contains_*. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
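The core of the proposal above is replacing "one open HDFS stream per region" with "at most writer-threads open streams, fed from cached edits". A self-contained concurrency sketch (illustrative names and numbers, not the actual patch) demonstrates the bounding mechanism:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: cap concurrent output streams at writerThreads instead of opening
// one stream per region. Names/numbers illustrative, not the actual patch.
public class BoundedSplitWriters {
    public static void main(String[] args) throws InterruptedException {
        final int regions = 20;        // regions found in the WAL being split
        final int writerThreads = 3;   // cf. hbase.regionserver.hlog.splitlog.writer.threads
        final Semaphore openStreams = new Semaphore(writerThreads);
        final AtomicInteger concurrent = new AtomicInteger();
        final AtomicInteger peak = new AtomicInteger();

        ExecutorService pool = Executors.newFixedThreadPool(writerThreads);
        for (int i = 0; i < regions; i++) {
            pool.execute(() -> {
                try {
                    openStreams.acquire();          // one permit per open stream
                    int now = concurrent.incrementAndGet();
                    peak.accumulateAndGet(now, Math::max);
                    Thread.sleep(5);                // pretend to flush cached edits
                    concurrent.decrementAndGet();
                    openStreams.release();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        // Peak concurrent "streams" never exceeds the configured writer count,
        // regardless of how many regions the WAL contains.
        System.out.println(peak.get() <= writerThreads);
    }
}
```

The semaphore is the cap even if the executing pool were larger; in the real design the product of max splitters and writer threads bounds cluster-wide stream count.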
[jira] [Updated] (HBASE-19454) Debugging TestDistributedLogSplitting#testThreeRSAbort
[ https://issues.apache.org/jira/browse/HBASE-19454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Appy updated HBASE-19454: - Resolution: Fixed Status: Resolved (was: Patch Available) > Debugging TestDistributedLogSplitting#testThreeRSAbort > -- > > Key: HBASE-19454 > URL: https://issues.apache.org/jira/browse/HBASE-19454 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Fix For: 2.0.0-beta-1 > > Attachments: HBASE-19454.master.001.patch > > > It's a flaky test. > Attaching a patch (001) > {noformat} > - Changed testThreeRSAbort to kill the RSs instead of aborting. Simple > aborting will close the regions, we want extreme failure testing here. > - Adds some logging for easier debugging. > - Refactors TestDistributedLogSplitting to use standard junit rules. > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19454) Debugging TestDistributedLogSplitting#testThreeRSAbort
[ https://issues.apache.org/jira/browse/HBASE-19454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Appy updated HBASE-19454: - Fix Version/s: 2.0.0-beta-1 > Debugging TestDistributedLogSplitting#testThreeRSAbort > -- > > Key: HBASE-19454 > URL: https://issues.apache.org/jira/browse/HBASE-19454 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Fix For: 2.0.0-beta-1 > > Attachments: HBASE-19454.master.001.patch > > > It's flaky tests. > Attaching a patch (001) > {noformat} > - Changed testThreeRSAbort to kill the RSs instead of aborting. Simple > aborting will close the regions, we want extreme failure testing here. > - Adds some logging for easier debugging. > - Refactors TestDistributedLogSplitting to use standard junit rules. > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19454) Debugging TestDistributedLogSplitting#testThreeRSAbort
[ https://issues.apache.org/jira/browse/HBASE-19454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Appy updated HBASE-19454: - Description: It's a flaky test. Attaching a patch (001) {noformat} - Changed testThreeRSAbort to kill the RSs instead of aborting. Simple aborting will close the regions, we want extreme failure testing here. - Adds some logging for easier debugging. - Refactors TestDistributedLogSplitting to use standard junit rules. {noformat} was: It's flaky tests. Attaching a patch (001) {noformat} - Changed testThreeRSAbort to kill the RSs instead of aborting. Simple aborting will close the regions, we want extreme failure testing here. - Adds some logging for easier debugging. - Refactors TestDistributedLogSplitting to use standard junit rules. {noformat} > Debugging TestDistributedLogSplitting#testThreeRSAbort > -- > > Key: HBASE-19454 > URL: https://issues.apache.org/jira/browse/HBASE-19454 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Fix For: 2.0.0-beta-1 > > Attachments: HBASE-19454.master.001.patch > > > It's a flaky test. > Attaching a patch (001) > {noformat} > - Changed testThreeRSAbort to kill the RSs instead of aborting. Simple > aborting will close the regions, we want extreme failure testing here. > - Adds some logging for easier debugging. > - Refactors TestDistributedLogSplitting to use standard junit rules. > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19454) Debugging TestDistributedLogSplitting#testThreeRSAbort
[ https://issues.apache.org/jira/browse/HBASE-19454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282991#comment-16282991 ] Appy commented on HBASE-19454: -- In that case, will split patches into separate logging-only and refactor ones. Getting logging-only patches in quicker will make debugging easier. Thanks. > Debugging TestDistributedLogSplitting#testThreeRSAbort > -- > > Key: HBASE-19454 > URL: https://issues.apache.org/jira/browse/HBASE-19454 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Attachments: HBASE-19454.master.001.patch > > > It's flaky tests. > Attaching a patch (001) > {noformat} > - Changed testThreeRSAbort to kill the RSs instead of aborting. Simple > aborting will close the regions, we want extreme failure testing here. > - Adds some logging for easier debugging. > - Refactors TestDistributedLogSplitting to use standard junit rules. > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19449) Implement SLF4J and SLF4J Parameter Substitution
[ https://issues.apache.org/jira/browse/HBASE-19449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282977#comment-16282977 ] BELUGA BEHR commented on HBASE-19449: - ... if there is interest in going this route, I can continue to review and update the logging statements across the project in separate JIRAs. > Implement SLF4J and SLF4J Parameter Substitution > > > Key: HBASE-19449 > URL: https://issues.apache.org/jira/browse/HBASE-19449 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 2.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Fix For: 2.0.0 > > Attachments: HBASE-19449.1.patch, HBASE-19449.2.patch, > HBASE-19449.3.patch > > > For the {{HFileArchiver.java}} class... > * Use SLF4J logging > * Use SLF4J parameter substitution > * Fix some small issues with missing spaces between words in the log message > and the like > https://www.slf4j.org/faq.html#logging_performance -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19454) Debugging TestDistributedLogSplitting#testThreeRSAbort
[ https://issues.apache.org/jira/browse/HBASE-19454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Appy updated HBASE-19454: - Description: It's flaky tests. Attaching a patch (001) {noformat} - Changed testThreeRSAbort to kill the RSs instead of aborting. Simple aborting will close the regions, we want extreme failure testing here. - Adds some logging for easier debugging. - Refactors TestDistributedLogSplitting to use standard junit rules. {noformat} was: It's flaky tests. Attaching a patch to add some general logging to help in future tests' debugging. > Debugging TestDistributedLogSplitting#testThreeRSAbort > -- > > Key: HBASE-19454 > URL: https://issues.apache.org/jira/browse/HBASE-19454 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Attachments: HBASE-19454.master.001.patch > > > It's flaky tests. > Attaching a patch (001) > {noformat} > - Changed testThreeRSAbort to kill the RSs instead of aborting. Simple > aborting will close the regions, we want extreme failure testing here. > - Adds some logging for easier debugging. > - Refactors TestDistributedLogSplitting to use standard junit rules. > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19449) Implement SLF4J and SLF4J Parameter Substitution
[ https://issues.apache.org/jira/browse/HBASE-19449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282974#comment-16282974 ] BELUGA BEHR commented on HBASE-19449: - @stack - I can't quite figure out how to tag you in JIRA... seems "@stack" hits a lot of results and then there's no more selective criteria to use. :) I see that SLF4J is already included in the project, so I didn't have to add it as a dependency. I'm sorry I got a bit overzealous and just threw this patch out there. Thank you for your interest and the dialogue. Commons Logging states that it requires one to use code guards for logging debug/trace statements. This class just happened to have quite a few trace-level logging statements, so I started with this one. SLF4J proposes the faster (and, I personally think, cleaner) method of using parameters in logging statements that are only resolved if the log level is appropriate. It removes code (the guards) and complexity from the application code. https://www.slf4j.org/faq.html#logging_performance https://commons.apache.org/proper/commons-logging/guide.html#Code_Guards > Implement SLF4J and SLF4J Parameter Substitution > > > Key: HBASE-19449 > URL: https://issues.apache.org/jira/browse/HBASE-19449 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 2.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Fix For: 2.0.0 > > Attachments: HBASE-19449.1.patch, HBASE-19449.2.patch, > HBASE-19449.3.patch > > > For the {{HFileArchiver.java}} class... > * Use SLF4J logging > * Use SLF4J parameter substitution > * Fix some small issues with missing spaces between words in the log message > and the like > https://www.slf4j.org/faq.html#logging_performance -- This message was sent by Atlassian JIRA (v6.4.14#64029)
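The performance argument in the comment above is that SLF4J's {} placeholders defer string concatenation and toString() until the level check passes, so no explicit guard is needed. The pure-Java sketch below mimics that deferral with a Supplier so the saving is directly observable — it is a stand-in, not SLF4J itself (real SLF4J takes the argument object and defers only its formatting, e.g. LOG.debug("state: {}", obj)):

```java
import java.util.function.Supplier;

// Pure-Java illustration of why parameter substitution beats code guards:
// expensive formatting only happens when the level is enabled. This stub
// mimics SLF4J's "{}" substitution; it is not the SLF4J API.
public class ParamSubstitutionDemo {
    static int formatCalls = 0;

    static String expensiveToString() {
        formatCalls++;                  // stand-in for a costly toString()
        return "big-object-dump";
    }

    interface Logger {
        boolean isDebugEnabled();
        // Deferred: the Supplier is only invoked when debug is on.
        default void debug(String template, Supplier<String> arg) {
            if (isDebugEnabled()) {
                System.out.println(template.replace("{}", arg.get()));
            }
        }
    }

    public static void main(String[] args) {
        Logger quiet = () -> false;     // debug disabled

        // Guard-free commons-logging style would pay the formatting cost anyway:
        //   quiet.debug("state: " + expensiveToString());   // always formats
        // Parameterized style: nothing is formatted when the level is off.
        quiet.debug("state: {}", ParamSubstitutionDemo::expensiveToString);

        System.out.println(formatCalls);
    }
}
```

With the level off, the expensive formatter is never invoked, which is the whole case for dropping the isTraceEnabled()/isDebugEnabled() guards in HFileArchiver.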
[jira] [Updated] (HBASE-19449) Implement SLF4J and SLF4J Parameter Substitution
[ https://issues.apache.org/jira/browse/HBASE-19449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HBASE-19449: Status: Open (was: Patch Available) > Implement SLF4J and SLF4J Parameter Substitution > > > Key: HBASE-19449 > URL: https://issues.apache.org/jira/browse/HBASE-19449 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 2.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Fix For: 2.0.0 > > Attachments: HBASE-19449.1.patch, HBASE-19449.2.patch, > HBASE-19449.3.patch > > > For the {{HFileArchiver.java}} class... > * Use SLF4J logging > * Use SLF4J parameter substitution > * Fix some small issues with missing spaces between words in the log message > and the like > https://www.slf4j.org/faq.html#logging_performance -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19449) Implement SLF4J and SLF4J Parameter Substitution
[ https://issues.apache.org/jira/browse/HBASE-19449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HBASE-19449: Status: Patch Available (was: Open) > Implement SLF4J and SLF4J Parameter Substitution > > > Key: HBASE-19449 > URL: https://issues.apache.org/jira/browse/HBASE-19449 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 2.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Fix For: 2.0.0 > > Attachments: HBASE-19449.1.patch, HBASE-19449.2.patch, > HBASE-19449.3.patch > > > For the {{HFileArchiver.java}} class... > * Use SLF4J logging > * Use SLF4J parameter substitution > * Fix some small issues with missing spaces between words in the log message > and the like > https://www.slf4j.org/faq.html#logging_performance -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19449) Implement SLF4J and SLF4J Parameter Substitution
[ https://issues.apache.org/jira/browse/HBASE-19449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HBASE-19449: Attachment: HBASE-19449.3.patch > Implement SLF4J and SLF4J Parameter Substitution > > > Key: HBASE-19449 > URL: https://issues.apache.org/jira/browse/HBASE-19449 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 2.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Fix For: 2.0.0 > > Attachments: HBASE-19449.1.patch, HBASE-19449.2.patch, > HBASE-19449.3.patch > > > For the {{HFileArchiver.java}} class... > * Use SLF4J logging > * Use SLF4J parameter substitution > * Fix some small issues with missing spaces between words in the log message > and the like > https://www.slf4j.org/faq.html#logging_performance -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19369) HBase Should use Builder Pattern to Create Log Files while using WAL on Erasure Coding
[ https://issues.apache.org/jira/browse/HBASE-19369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282962#comment-16282962 ] Mike Drob commented on HBASE-19369: --- {code} +DFSTestUtil.enableAllECPolicies(cluster.getFileSystem()); {code} That's a pretty good looking test, I'm glad you were able to find a nice all-in-one DFSTestUtil method for this. We might need to add an additional check to only call that method if we're on a version of DFS that supports it. Probably more reflection... In {{ProtobufLogWriter}} you put a conditional check around the call to {{CommonFSUtils.createHelper}}, but that method has its own conditional already. I don't think we need the double test. It doesn't look like we need the {{Progressable}} in {{createHelper}}. In {{AsyncFSOutputHelper}} we handle the {{create}} case, but not the {{createNonRecursive}} case - I think we need to take care of both. > HBase Should use Builder Pattern to Create Log Files while using WAL on > Erasure Coding > -- > > Key: HBASE-19369 > URL: https://issues.apache.org/jira/browse/HBASE-19369 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0 >Reporter: Alex Leblang >Assignee: Alex Leblang > Attachments: HBASE-19369.master.001.patch, > HBASE-19369.master.002.patch > > > Right now an HBase instance using the WAL won't function properly in an > Erasure Coded environment. We should change the following line to use the > hdfs.DistributedFileSystem builder pattern > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogWriter.java#L92 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
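The "probably more reflection" remark above refers to probing at runtime whether the deployed Hadoop exposes a newer API (like the builder-style create) before calling it. A generic sketch of such a probe is below; since Hadoop isn't on this classpath, it is demonstrated against java.lang.String, but the helper is exactly what would be aimed at DistributedFileSystem and a builder-style create method:

```java
import java.lang.reflect.Method;

// Generic runtime probe for an optional API, the pattern the review suggests
// for detecting builder-style create support on the running HDFS version.
// Demonstrated against String because Hadoop isn't available here.
public class ApiProbe {
    static Method findMethod(Class<?> clazz, String name, Class<?>... params) {
        try {
            return clazz.getMethod(name, params);
        } catch (NoSuchMethodException e) {
            return null;  // older version: caller falls back to the legacy path
        }
    }

    public static void main(String[] args) {
        // A method present on every JDK, standing in for a supported new API:
        System.out.println(findMethod(String.class, "isEmpty") != null);
        // A method that does not exist, standing in for an old Hadoop version:
        System.out.println(findMethod(String.class, "definitelyNotAMethod") != null);
    }
}
```

In practice the probe result is computed once (static initializer) and cached, and both the create and createNonRecursive call sites branch on it.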
[jira] [Commented] (HBASE-19433) ChangeSplitPolicyAction modifies an immutable HTableDescriptor
[ https://issues.apache.org/jira/browse/HBASE-19433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282929#comment-16282929 ] stack commented on HBASE-19433: --- We going to commit this one? (I like seeing the numbers of issues against beta-1 go down...) > ChangeSplitPolicyAction modifies an immutable HTableDescriptor > -- > > Key: HBASE-19433 > URL: https://issues.apache.org/jira/browse/HBASE-19433 > Project: HBase > Issue Type: Bug > Components: integration tests >Reporter: Josh Elser >Assignee: Ted Yu >Priority: Critical > Fix For: 2.0.0-beta-1 > > Attachments: 19433-unsupported.out, 19433.v1.txt, 19433.v2.txt, > 19433.v3.txt, 19433.v4.txt, 19433.v5.txt, 19433.v6.txt, 19433.v7.txt > > > {noformat} > 2017-12-01 23:18:51,433 WARN [ChaosMonkeyThread] policies.Policy: Exception > occurred during performing action: java.lang.UnsupportedOperationException: > HTableDescriptor is read-only > at > org.apache.hadoop.hbase.client.ImmutableHTableDescriptor.getDelegateeForModification(ImmutableHTableDescriptor.java:59) > at > org.apache.hadoop.hbase.HTableDescriptor.setRegionSplitPolicyClassName(HTableDescriptor.java:333) > at > org.apache.hadoop.hbase.chaos.actions.ChangeSplitPolicyAction.perform(ChangeSplitPolicyAction.java:54) > at > org.apache.hadoop.hbase.chaos.policies.PeriodicRandomActionPolicy.runOneIteration(PeriodicRandomActionPolicy.java:59) > at > org.apache.hadoop.hbase.chaos.policies.PeriodicPolicy.run(PeriodicPolicy.java:41) > at java.lang.Thread.run(Thread.java:745) > {noformat} > Found during some internal testing. Need to make sure this Action, in > addition to the others, doesn't fall into the trap of modifying the > TableDescriptor obtained from Admin. > [~tedyu], want to take a stab at it? -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19349) Introduce wrong version depencency of servlet-api jar
[ https://issues.apache.org/jira/browse/HBASE-19349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-19349: -- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Checked it again. Works. Pushed to master and branch-2. Thanks for the review [~appy]. Thanks for the detective work up front [~zghaobac]. > Introduce wrong version depencency of servlet-api jar > - > > Key: HBASE-19349 > URL: https://issues.apache.org/jira/browse/HBASE-19349 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0-beta-1 >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang >Priority: Critical > Fix For: 2.0.0-beta-1 > > Attachments: > 0002-HBASE-19349-Introduce-wrong-version-depencency-of-se.patch, 19349.txt > > > Build a tarball. > {code} > mvn -DskipTests clean install && mvn -DskipTests package assembly:single > tar zxvf hbase-2.0.0-beta-1-SNAPSHOT-bin.tar.gz > {code} > Then I found there is a servlet-api-2.5.jar in the lib directory. The right > dependency should be javax.servlet-api-3.1.0.jar. > Start a distributed cluster with this tarball. And got an exception when > accessing the Master/RS info jsp. 
> {code} > 2017-11-27,10:02:05,066 WARN org.eclipse.jetty.server.HttpChannel: / > java.lang.NoSuchMethodError: > javax.servlet.http.HttpServletRequest.isAsyncSupported()Z > at > org.eclipse.jetty.server.ResourceService.sendData(ResourceService.java:689) > at > org.eclipse.jetty.server.ResourceService.doGet(ResourceService.java:294) > at > org.eclipse.jetty.servlet.DefaultServlet.doGet(DefaultServlet.java:458) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) > at > org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:841) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1650) > at > org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:113) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637) > at > org.apache.hadoop.hbase.http.ClickjackingPreventionFilter.doFilter(ClickjackingPreventionFilter.java:48) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637) > at > org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1374) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637) > at > org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637) > at > org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533) > {code} > Tried mvn dependency:tree but didn't find why servlet-api-2.5.jar was > introduced. > I downloaded hbase-2.0.0-alpha4-bin.tar.gz and didn't find servlet-api-2.5.jar. 
> And built a tar from hbase-2.0.0-alpha4-src.tar.gz and didn't find > servlet-api-2.5.jar either. So this may have been introduced by recent > commits, and should be fixed when releasing 2.0.0-beta-1. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
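The thread never pins down which artifact drags servlet-api 2.5 into the assembly (a later comment associates it with hadoop-hdfs via `mvn -X` on the assembly step, while doubting it is the already-excluded javax.servlet:servlet-api). If a concrete culprit were identified, the standard Maven fix is an exclusion on the dependency that pulls it in; the coordinates below are purely illustrative, not the actual resolution:

```xml
<!-- Hypothetical sketch only: exclusion coordinates are illustrative, since
     the offending transitive dependency was not pinned down in the thread. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
  <exclusions>
    <exclusion>
      <groupId>javax.servlet</groupId>
      <artifactId>servlet-api</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

Verifying is then a matter of rebuilding the tarball and checking that lib/ contains only javax.servlet-api-3.1.0.jar.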
[jira] [Commented] (HBASE-18625) Splitting of region with replica, doesn't update region list in serverHolding. A server crash leads to overlap.
[ https://issues.apache.org/jira/browse/HBASE-18625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282876#comment-16282876 ] huaxiang sun commented on HBASE-18625: -- I have some interesting results with the 1.2.0 code. I will upgrade to the latest 1.2 code to see if it can be reproduced. > Splitting of region with replica, doesn't update region list in > serverHolding. A server crash leads to overlap. > --- > > Key: HBASE-18625 > URL: https://issues.apache.org/jira/browse/HBASE-18625 > Project: HBase > Issue Type: Improvement >Affects Versions: 1.2.6 >Reporter: Igloo >Assignee: huaxiang sun > Fix For: 1.2.8 > > > The situation can appear in the following steps in release hbase 1.2.6: > 1. create 'testtable', 'info', {REGION_REPLICATION=>2} > 2. write some records into 'testtable' > 3. split the table 'testtable' > 4. after the splitting, the serverHoldings in RegionStates still holds the > regioninfo for the replica of the parent region > 5. restart the regionserver where the parent replica-region is located > 6. the offlined replica of the parent region will be assigned in > ServerCrashProcedure. > hbase hbck 'testtable' > ERROR: Region { meta => null, hdfs => null, deployed => > qabb-qa-hdp-hbase1,16020,1503022958093;testtable,,1503022907686_0001.42d11cfe195b3cc4d08b2c078a687f6d., > replicaId => 1 } not in META, but deployed on > qabb-qa-hdp-hbase1,16020,1503022958093 > 18 ERROR: No regioninfo in Meta or HDFS. { meta => null, hdfs => null, > deployed => > qabb-qa-hdp-hbase1,16020,1503022958093;testtable,,1503022907686_0001.42d11cfe195b3cc4d08b2c078a687f6d., > replicaId => 1 } -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19454) Debugging TestDistributedLogSplitting#testThreeRSAbort
[ https://issues.apache.org/jira/browse/HBASE-19454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282907#comment-16282907 ] stack commented on HBASE-19454: --- Just push logging stuff like this [~appy] > Debugging TestDistributedLogSplitting#testThreeRSAbort > -- > > Key: HBASE-19454 > URL: https://issues.apache.org/jira/browse/HBASE-19454 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Attachments: HBASE-19454.master.001.patch > > > It's a flaky test. > Attaching a patch to add some general logging to help debug future test failures.
[jira] [Commented] (HBASE-19349) Introduce wrong version depencency of servlet-api jar
[ https://issues.apache.org/jira/browse/HBASE-19349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282900#comment-16282900 ] stack commented on HBASE-19349: --- bq. Why doesn't it show up in dependency tree? The Maven dependency tooling seems to have holes and doesn't do a complete job in my experience (dependency:analyze/dependency:tree -- the cleaning of the poms was done using these tools but then had to be fine-tuned by running against jenkins because I couldn't repro its failures locally). bq. It's not from javax.servlet:servlet-api, right? We already exclude that one in top poms. Would be nuts if it's that one. I don't think so. I made the association with hadoop-hdfs by adding -X when running the assembly step. Thanks for the review and questions [~appy]. Let me try this patch on a clean checkout and if it works, will commit.
> Introduce wrong version depencency of servlet-api jar
> ---
> Key: HBASE-19349
> URL: https://issues.apache.org/jira/browse/HBASE-19349
> Project: HBase
> Issue Type: Bug
> Affects Versions: 2.0.0-beta-1
> Reporter: Guanghao Zhang
> Assignee: Guanghao Zhang
> Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: 0002-HBASE-19349-Introduce-wrong-version-depencency-of-se.patch, 19349.txt
>
> Build a tarball.
> {code}
> mvn -DskipTests clean install && mvn -DskipTests package assembly:single
> tar zxvf hbase-2.0.0-beta-1-SNAPSHOT-bin.tar.gz
> {code}
> Then I found there is a servlet-api-2.5.jar in the lib directory. The right dependency should be javax.servlet-api-3.1.0.jar.
> Started a distributed cluster with this tarball and got an exception when accessing the Master/RS info jsp.
> {code}
> 2017-11-27,10:02:05,066 WARN org.eclipse.jetty.server.HttpChannel: / java.lang.NoSuchMethodError: javax.servlet.http.HttpServletRequest.isAsyncSupported()Z
> at org.eclipse.jetty.server.ResourceService.sendData(ResourceService.java:689)
> at org.eclipse.jetty.server.ResourceService.doGet(ResourceService.java:294)
> at org.eclipse.jetty.servlet.DefaultServlet.doGet(DefaultServlet.java:458)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
> at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:841)
> at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1650)
> at org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:113)
> at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
> at org.apache.hadoop.hbase.http.ClickjackingPreventionFilter.doFilter(ClickjackingPreventionFilter.java:48)
> at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
> at org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1374)
> at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
> at org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
> at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
> at org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
> at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
> {code}
> Tried mvn dependency:tree but didn't find why servlet-api-2.5.jar was introduced.
> I downloaded hbase-2.0.0-alpha4-bin.tar.gz and didn't find servlet-api-2.5.jar.
> Also built a tarball from hbase-2.0.0-alpha4-src.tar.gz and didn't find servlet-api-2.5.jar there either, so this may have been introduced by recent commits and should be fixed before the 2.0.0-beta-1 release.
[jira] [Commented] (HBASE-19435) Reopen Files for ClosedChannelException in BucketCache
[ https://issues.apache.org/jira/browse/HBASE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282894#comment-16282894 ] stack commented on HBASE-19435: --- [~zyork] Any log or exception sir? During compactions shouldn't be an issue. The swap into place of the compacted file for the old files is a time where 'interesting' issues might arise. > Reopen Files for ClosedChannelException in BucketCache > -- > > Key: HBASE-19435 > URL: https://issues.apache.org/jira/browse/HBASE-19435 > Project: HBase > Issue Type: Bug > Components: BucketCache >Affects Versions: 2.0.0, 1.3.1 >Reporter: Zach York >Assignee: Zach York > Fix For: 1.4.1, 2.0.0-beta-1 > > Attachments: HBASE-19435.branch-1.001.patch, > HBASE-19435.master.001.patch, HBASE-19435.master.002.patch, > HBASE-19435.master.003.patch, HBASE-19435.master.004.patch, > HBASE-19435.master.005.patch, HBASE-19435.master.006.patch, > HBASE-19435.master.007.patch, HBASE-19435.master.007.patch > > > When using the FileIOEngine for BucketCache, the cache will be disabled if > the connection is interrupted or closed. HBase will then get > ClosedChannelExceptions trying to access the file. After 60s, the RS will > disable the cache. This causes severe read performance degradation for > workloads that rely on this cache. FileIOEngine never tries to reopen the > connection. This JIRA is to reopen files when the BucketCache encounters a > ClosedChannelException.
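The fix direction the issue describes (reopen the backing file and retry, rather than letting the cache get disabled) can be sketched generically. This is a simplified illustration, not the actual FileIOEngine patch; the `IoAction`/`reopen` interfaces are made up for the example:

```java
import java.io.IOException;
import java.nio.channels.ClosedChannelException;

// Simplified sketch: run an I/O action, and on ClosedChannelException
// reopen the backing channel and retry once (hypothetical interfaces).
class RetryingIo {
    interface IoAction<T> { T run() throws IOException; }

    static <T> T readWithReopen(IoAction<T> action, Runnable reopen) throws IOException {
        try {
            return action.run();
        } catch (ClosedChannelException e) {
            reopen.run();          // reopen the backing file channel
            return action.run();   // retry once; a second failure propagates
        }
    }
}
```

The point is that a transiently closed channel costs one retry instead of disabling the whole cache after 60s of failures.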
[jira] [Commented] (HBASE-19349) Introduce wrong version depencency of servlet-api jar
[ https://issues.apache.org/jira/browse/HBASE-19349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282891#comment-16282891 ] Appy commented on HBASE-19349: -- Just throwing some perplexing questions in case you found the answers to them: - Why doesn't it show up in dependency tree? - It's not from javax.servlet:servlet-api, right? We already exclude that one in top poms. Would be nuts if it's that one. Though it feels wrong that we don't know the absolute reason, patch seems reasonable so +1 on that.
[jira] [Commented] (HBASE-19435) Reopen Files for ClosedChannelException in BucketCache
[ https://issues.apache.org/jira/browse/HBASE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282810#comment-16282810 ] Zach York commented on HBASE-19435: --- I have traced the original issue to during/after compactions (typically major compactions). However, this is one area of HBase I haven't dug into at all. Does anyone have any suggestions on where to start looking? Particularly any place that interacts with the cache (flushing, removing items, repopulating, etc). Thanks!
[jira] [Commented] (HBASE-19449) Implement SLF4J and SLF4J Parameter Substitution
[ https://issues.apache.org/jira/browse/HBASE-19449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282872#comment-16282872 ] stack commented on HBASE-19449: --- We use apache commons for logging [~belugabehr] What's w/ the selective replacement? Thanks. > Implement SLF4J and SLF4J Parameter Substitution > > > Key: HBASE-19449 > URL: https://issues.apache.org/jira/browse/HBASE-19449 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 2.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Fix For: 2.0.0 > > Attachments: HBASE-19449.1.patch, HBASE-19449.2.patch > > > For the {{HFileArchiver.java}} class... > * Use SLF4J logging > * Use SLF4J parameter substitution > * Fix some small issues with missing spaces between words in the log message > and the like > https://www.slf4j.org/faq.html#logging_performance
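The performance argument behind the linked SLF4J FAQ is that with `{}` parameter substitution the message is only formatted when the log level is enabled, instead of paying for string concatenation on every call. A minimal stand-in formatter showing the substitution semantics (plain Java, no SLF4J dependency; this only loosely mimics org.slf4j.helpers.MessageFormatter):

```java
// Minimal illustration of SLF4J-style "{}" parameter substitution.
// Not the real org.slf4j.helpers.MessageFormatter, just its core idea:
// replace each "{}" placeholder with the next argument, in order.
class TinyFormatter {
    static String format(String pattern, Object... args) {
        StringBuilder sb = new StringBuilder();
        int argIdx = 0;
        int i = 0;
        while (i < pattern.length()) {
            int j = pattern.indexOf("{}", i);
            if (j < 0 || argIdx >= args.length) {
                sb.append(pattern, i, pattern.length()); // no more placeholders/args
                break;
            }
            sb.append(pattern, i, j).append(args[argIdx++]);
            i = j + 2; // skip past the "{}"
        }
        return sb.toString();
    }
}
```

With real SLF4J, `LOG.debug("Removed {} of {} files", n, total)` defers this formatting until the debug level is actually enabled, whereas `LOG.debug("Removed " + n + " of " + total + " files")` concatenates unconditionally.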
[jira] [Commented] (HBASE-19454) Debugging TestDistributedLogSplitting#testThreeRSAbort
[ https://issues.apache.org/jira/browse/HBASE-19454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282870#comment-16282870 ] Hadoop QA commented on HBASE-19454: --- -1 overall
|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 2m 30s | Docker mode activated. |
| 0 | findbugs | 0m 0s | Findbugs executables are not available. |
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
| +1 | mvninstall | 4m 10s | master passed |
| +1 | compile | 0m 38s | master passed |
| +1 | checkstyle | 0m 59s | master passed |
| +1 | shadedjars | 5m 8s | branch has no errors when building our shaded downstream artifacts. |
| +1 | javadoc | 0m 27s | master passed |
| +1 | mvninstall | 4m 16s | the patch passed |
| +1 | compile | 0m 38s | the patch passed |
| +1 | javac | 0m 38s | the patch passed |
| -1 | checkstyle | 1m 0s | hbase-server: The patch generated 2 new + 278 unchanged - 4 fixed = 280 total (was 282) |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedjars | 4m 4s | patch has no errors when building our shaded downstream artifacts. |
| +1 | hadoopcheck | 46m 28s | Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. |
| +1 | javadoc | 0m 25s | the patch passed |
| +1 | unit | 109m 44s | hbase-server in the patch passed. |
| +1 | asflicense | 0m 21s | The patch does not generate ASF License warnings. |
| | | 175m 58s | |
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19454 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12901139/HBASE-19454.master.001.patch |
| Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux c9a8e62b484c 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh |
| git revision | master / 428e5672e7 |
| maven | version: Apache Maven 3.5.2 (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| checkstyle | https://builds.apache.org/job/PreCommit-HBASE-Build/10293/artifact/patchprocess/diff-checkstyle-hbase-server.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/10293/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/10293/console |
| Powered by | Apache Yetus 0.6.0 http://yetus.apache.org |
[jira] [Commented] (HBASE-19349) Introduce wrong version depencency of servlet-api jar
[ https://issues.apache.org/jira/browse/HBASE-19349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282859#comment-16282859 ] stack commented on HBASE-19349: --- Thanks [~appy] Sounds like I'm wrong then (╯°□°)╯ The patch does the right thing though for me (w/o it I see what [~zghaobac] was seeing). On htrace, they were already excluded up in the top-level pom.
[jira] [Commented] (HBASE-19435) Reopen Files for ClosedChannelException in BucketCache
[ https://issues.apache.org/jira/browse/HBASE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282755#comment-16282755 ] Hudson commented on HBASE-19435: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4186 (See [https://builds.apache.org/job/HBase-Trunk_matrix/4186/]) HBASE-19435 Reopen Files for ClosedChannelException in BucketCache (tedyu: rev f55e81e6c03f8cf1667340bcd3f7fa6890f1a770) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestFileIOEngine.java
[jira] [Commented] (HBASE-19454) Debugging TestDistributedLogSplitting#testThreeRSAbort
[ https://issues.apache.org/jira/browse/HBASE-19454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282647#comment-16282647 ] Appy commented on HBASE-19454: -- never used as variable, just to return true.
[jira] [Commented] (HBASE-19454) Debugging TestDistributedLogSplitting#testThreeRSAbort
[ https://issues.apache.org/jira/browse/HBASE-19454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282654#comment-16282654 ] Ted Yu commented on HBASE-19454: lgtm
[jira] [Updated] (HBASE-19349) Introduce wrong version depencency of servlet-api jar
[ https://issues.apache.org/jira/browse/HBASE-19349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-19349: -- Attachment: 19349.txt This seems to do the 'right' thing but it is awful. Running mvn w/ -X it seems like the dependency comes in via hadoop-hdfs test jar. Let me see if I can do better.
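If the stray servlet-api 2.5 jar really rides in via the hadoop-hdfs test-jar as suggested above, the conventional remedy is an explicit exclusion on that dependency path. The sketch below is an assumption-laden illustration only (the thread has not confirmed the exact groupId/artifactId of the offender, and javax.servlet:servlet-api is reportedly already excluded in the top poms), so coordinates must be verified against the actual pom before use:

```xml
<!-- Hypothetical sketch: exclude the old servlet-api from the
     hadoop-hdfs test-jar path; verify coordinates before applying. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
  <type>test-jar</type>
  <scope>test</scope>
  <exclusions>
    <exclusion>
      <groupId>javax.servlet</groupId>
      <artifactId>servlet-api</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

Running the assembly with -X, as done above, remains the most reliable way to confirm which path actually pulls the jar in.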
[jira] [Commented] (HBASE-19435) Reopen Files for ClosedChannelException in BucketCache
[ https://issues.apache.org/jira/browse/HBASE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282737#comment-16282737 ] Hudson commented on HBASE-19435: FAILURE: Integrated in Jenkins build HBase-1.4 #1056 (See [https://builds.apache.org/job/HBase-1.4/1056/]) HBASE-19435 Reopen Files for ClosedChannelException in BucketCache (tedyu: rev 698f70180b96a35e62a94311edc41748e1b062f9) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestFileIOEngine.java
[jira] [Updated] (HBASE-19435) Reopen Files for ClosedChannelException in BucketCache
[ https://issues.apache.org/jira/browse/HBASE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-19435: --- Resolution: Fixed Fix Version/s: 1.4.1 Status: Resolved (was: Patch Available) Thanks for the patch, Zach. Thanks for the reviews, Ram, Anoop and Stack.
[jira] [Commented] (HBASE-19435) Reopen Files for ClosedChannelException in BucketCache
[ https://issues.apache.org/jira/browse/HBASE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282734#comment-16282734 ] Ted Yu commented on HBASE-19435: {code} commit a39d5bed1b0ed0fd78ba61bae4c94edc52710ade Author: Zach York Date: Mon Dec 4 12:11:21 2017 -0800 HBASE-19435 Reopen Files for ClosedChannelException in BucketCache Signed-off-by: tedyu {code} Committing to branch-1
[jira] [Commented] (HBASE-19435) Reopen Files for ClosedChannelException in BucketCache
[ https://issues.apache.org/jira/browse/HBASE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282726#comment-16282726 ] stack commented on HBASE-19435: --- [~tedyu] I see it on master but not branch-2? Can you commit everywhere please? Thanks.
[jira] [Created] (HBASE-19456) RegionMover's region server hostname option is no longer case insensitive
Josh Elser created HBASE-19456: -- Summary: RegionMover's region server hostname option is no longer case insensitive Key: HBASE-19456 URL: https://issues.apache.org/jira/browse/HBASE-19456 Project: HBase Issue Type: Bug Components: tooling Reporter: Romil Choksi Assignee: Josh Elser Fix For: 2.0.0-beta-1 With the move from the ruby-based to the java-based RegionMover implementation, it appears that the case-insensitivity "feature" was dropped. If the user provides a RS hostname in the wrong case, the class will fail to perform its actions. DNS hostnames are case-insensitive, so this behavior would be nice to restore.
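The fix amounts to normalizing case before comparing the user-supplied hostname against the region server's name. A minimal sketch, where `sameHost` is a hypothetical helper rather than RegionMover's actual method:

```java
import java.util.Locale;

public class HostnameMatch {
    // DNS hostnames are case-insensitive (RFC 4343), so normalize
    // both sides before comparing them.
    public static boolean sameHost(String a, String b) {
        return a.toLowerCase(Locale.ROOT).equals(b.toLowerCase(Locale.ROOT));
    }

    public static void main(String[] args) {
        if (!sameHost("RS1.Example.COM", "rs1.example.com")) {
            throw new AssertionError("case-insensitive match failed");
        }
        if (sameHost("rs1.example.com", "rs2.example.com")) {
            throw new AssertionError("distinct hosts compared equal");
        }
        System.out.println("hostname match ok");
    }
}
```

Using `Locale.ROOT` avoids locale-sensitive lowercasing surprises (e.g. the Turkish dotless i) when normalizing.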
[jira] [Commented] (HBASE-19435) Reopen Files for ClosedChannelException in BucketCache
[ https://issues.apache.org/jira/browse/HBASE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282684#comment-16282684 ] Hadoop QA commented on HBASE-19435: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s{color} | {color:red} HBASE-19435 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/0.6.0/precommit-patchnames for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HBASE-19435 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12901153/HBASE-19435.master.007.patch | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/10295/console | | Powered by | Apache Yetus 0.6.0 http://yetus.apache.org | This message was automatically generated.
[jira] [Comment Edited] (HBASE-19435) Reopen Files for ClosedChannelException in BucketCache
[ https://issues.apache.org/jira/browse/HBASE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282734#comment-16282734 ] Ted Yu edited comment on HBASE-19435 at 12/7/17 11:44 PM: -- For branch-2: {code} commit a39d5bed1b0ed0fd78ba61bae4c94edc52710ade Author: Zach York Date: Mon Dec 4 12:11:21 2017 -0800 HBASE-19435 Reopen Files for ClosedChannelException in BucketCache Signed-off-by: tedyu {code} Committing to branch-1 was (Author: yuzhih...@gmail.com): {code} commit a39d5bed1b0ed0fd78ba61bae4c94edc52710ade Author: Zach York Date: Mon Dec 4 12:11:21 2017 -0800 HBASE-19435 Reopen Files for ClosedChannelException in BucketCache Signed-off-by: tedyu {code} Committing to branch-1
[jira] [Commented] (HBASE-19435) Reopen Files for ClosedChannelException in BucketCache
[ https://issues.apache.org/jira/browse/HBASE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282692#comment-16282692 ] Hadoop QA commented on HBASE-19435: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 39s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} branch-1 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 57s{color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s{color} | {color:green} branch-1 passed with JDK v1.8.0_152 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s{color} | {color:green} branch-1 passed with JDK v1.7.0_161 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 17s{color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 3m 55s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 11s{color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s{color} | {color:green} branch-1 passed with JDK v1.8.0_152 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s{color} | {color:green} branch-1 passed with JDK v1.7.0_161 {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s{color} | {color:green} the patch passed with JDK v1.8.0_152 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s{color} | {color:green} the patch passed with JDK v1.7.0_161 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 18s{color} | {color:red} hbase-server: The patch generated 1 new + 6 unchanged - 0 fixed = 7 total (was 6) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 2m 40s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 29m 50s{color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} the patch passed with JDK v1.8.0_152 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s{color} | {color:green} the patch passed with JDK v1.7.0_161 {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 1s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 37s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}168m 9s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.master.TestAssignmentManagerOnCluster | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:36a7029 | | JIRA Issue | HBASE-19435 | | JIRA Patch URL |
[jira] [Commented] (HBASE-19454) Debugging TestDistributedLogSplitting#testThreeRSAbort
[ https://issues.apache.org/jira/browse/HBASE-19454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282691#comment-16282691 ] Appy commented on HBASE-19454: -- Makes sense. Thanks for review. :) > Debugging TestDistributedLogSplitting#testThreeRSAbort > -- > > Key: HBASE-19454 > URL: https://issues.apache.org/jira/browse/HBASE-19454 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Attachments: HBASE-19454.master.001.patch > > > It's a flaky test. > Attaching a patch to add some general logging to help debug future test failures.
[jira] [Commented] (HBASE-19134) Make WALKey an Interface; expose Read-Only version to CPs
[ https://issues.apache.org/jira/browse/HBASE-19134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282719#comment-16282719 ] Hadoop QA commented on HBASE-19134: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 7s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 31 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 30s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 30s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 54s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 15s{color} | {color:red} hbase-server: The patch generated 9 new + 695 unchanged - 19 fixed = 704 total (was 714) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 34s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 52m 54s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}119m 26s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 58s{color} | {color:green} hbase-mapreduce in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 42s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}204m 57s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.regionserver.wal.TestSecureAsyncWALReplay | | | hadoop.hbase.regionserver.wal.TestWALReplay | | | hadoop.hbase.replication.multiwal.TestReplicationSyncUpToolWithMultipleAsyncWAL | | | hadoop.hbase.replication.regionserver.TestRegionReplicaReplicationEndpoint | | | hadoop.hbase.regionserver.TestHRegionReplayEvents | | | hadoop.hbase.replication.multiwal.TestReplicationKillMasterRSCompressedWithMultipleAsyncWAL | | | hadoop.hbase.regionserver.wal.TestAsyncWALReplayCompressed | | | hadoop.hbase.replication.multiwal.TestReplicationEndpointWithMultipleAsyncWAL | | | hadoop.hbase.regionserver.TestRegionReplicaFailover | | | hadoop.hbase.wal.TestWALSplitCompressed | | |
[jira] [Commented] (HBASE-19435) Reopen Files for ClosedChannelException in BucketCache
[ https://issues.apache.org/jira/browse/HBASE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282721#comment-16282721 ] stack commented on HBASE-19435: --- Oh. Yeah, update help Ted. Thanks.
[jira] [Commented] (HBASE-19435) Reopen Files for ClosedChannelException in BucketCache
[ https://issues.apache.org/jira/browse/HBASE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282714#comment-16282714 ] Ted Yu commented on HBASE-19435: Pardon - forgot to note the commit earlier. If there is any change needed for master branch, please use addendum.
[jira] [Commented] (HBASE-19369) HBase Should use Builder Pattern to Create Log Files while using WAL on Erasure Coding
[ https://issues.apache.org/jira/browse/HBASE-19369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282693#comment-16282693 ] Hadoop QA commented on HBASE-19369: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 16s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 31s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 6m 7s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 1m 8s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 15s{color} | {color:red} hbase-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 29s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 15s{color} | {color:red} hbase-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 29s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 23s{color} | {color:red} hbase-common: The patch generated 4 new + 1 unchanged - 0 fixed = 5 total (was 1) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 3s{color} | {color:red} hbase-server: The patch generated 22 new + 3 unchanged - 0 fixed = 25 total (was 3) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} shadedjars {color} | {color:red} 0m 55s{color} | {color:red} patch has 32 errors when building our shaded downstream artifacts. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 1m 38s{color} | {color:red} The patch causes 32 errors with Hadoop v2.6.1. 
{color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 2m 20s{color} | {color:red} The patch causes 32 errors with Hadoop v2.6.2. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 3m 4s{color} | {color:red} The patch causes 32 errors with Hadoop v2.6.3. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 3m 47s{color} | {color:red} The patch causes 32 errors with Hadoop v2.6.4. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 4m 30s{color} | {color:red} The patch causes 32 errors with Hadoop v2.6.5. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 5m 14s{color} | {color:red} The patch causes 32 errors with Hadoop v2.7.1. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 5m 57s{color} | {color:red} The patch causes 32 errors with Hadoop v2.7.2. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 6m 40s{color} | {color:red} The patch causes 32 errors with Hadoop v2.7.3. {color} | | {color:red}-1{color} | {color:red} hadoopcheck