Re: [DISCUSS] HBase-2.0 SHOULD be rolling upgradable and wire-compatible with 1.x
I think everyone wants rolling upgrade. The discussion should probably be around how much compatibility code we want to keep around. Using HBASE-16060 as an example, we need to decide how far back we support rolling upgrade, and from where. I'm not too convinced that we should have extra code in master to "simulate the old states"; I'd rather have cleaner code in 2.0 and require users to move to one of the latest 1.x.y releases first. There are not many changes in the 1.x releases, so we should be able to say: if you are on 1.1, move to the latest 1.1.x; if you are on 1.2, move to the latest 1.2.x; and so on.

Also, some operations may not be needed during rolling upgrades, and we can cut compatibility there to have some code removed. An example is HBASE-15521, where we are no longer able to clone/restore a snapshot during a 1.x -> 2.x rolling upgrade until both masters are on 2.x. For some future change this may be extended to "you can't perform some operation until all the machines are on 2.x".

I think we should aim for something like:
- data path: HTable put/get/scan/... must work during a rolling upgrade
- replication: must(?) work during a rolling upgrade
- admin: some operations may not work during a rolling upgrade
- upgrade to the latest 1.x.y before the 2.x upgrade (we can add to the 2.x master and RS the ability to check the client version)

Matteo

On Tue, Jun 21, 2016 at 12:05 AM, Dima Spivak wrote:
> If there’s no technical limitation, we should definitely do it. As you
> note, customers running in production hate when they have to shut down
> clusters and with some of the testing infrastructure being rolled out, this
> is definitely something we can set up automated testing for. +1
>
> -Dima
>
> On Mon, Jun 20, 2016 at 2:58 PM, Enis Söztutar wrote:
>
> > Time to formalize 2.0 rolling upgrade scenario?
> >
> > 0.94 -> 0.96 singularity was a real pain for operators and for our users.
> > If possible we should not have the users suffer through the same thing
> > unless there is a very compelling reason. For the current stuff in master,
> > there is nothing that will prevent us to not have rolling upgrade support
> > for 2.0. So I say, we should decide on the rolling upgrade requirement now,
> > and start to evaluate incoming patches accordingly. Otherwise, we risk the
> > option to go deeper down the hole.
> >
> > What do you guys think. Previous threads [1] and [2] seems to be in favor.
> > Should we vote?
> >
> > Ref:
> > [1]
> > http://search-hadoop.com/m/YGbbsd4An1aso5E1&subj=HBase+1+x+to+2+0+upgrade+goals+
> >
> > [2]
> > http://search-hadoop.com/m/YGbb1CBXTL8BTI&subj=thinking+about+supporting+upgrades+to+HBase+1+x+and+2+x
> >
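Matteo's last bullet suggests the 2.x master and region servers could check a peer's version before allowing certain operations during a rolling upgrade. A minimal sketch of such a gate, assuming a simple major.minor.patch version scheme (the class and method names here are hypothetical, not HBase APIs):

```java
// Hypothetical version gate: refuse an operation unless the reported peer
// version is at least a required minimum (e.g. the latest 1.x.y release).
public final class VersionGate {
    // Parse "major.minor.patch" into a three-element int array; anything
    // beyond the third numeric component is ignored.
    static int[] parse(String v) {
        String[] parts = v.split("\\.");
        int[] out = new int[3];
        for (int i = 0; i < 3 && i < parts.length; i++) {
            out[i] = Integer.parseInt(parts[i]);
        }
        return out;
    }

    // True if 'actual' >= 'required' in (major, minor, patch) order.
    public static boolean isAtLeast(String actual, String required) {
        int[] a = parse(actual), r = parse(required);
        for (int i = 0; i < 3; i++) {
            if (a[i] != r[i]) return a[i] > r[i];
        }
        return true;
    }

    public static void main(String[] args) {
        // e.g. "if you are on 1.2, move to the latest 1.2.x first"
        System.out.println(isAtLeast("1.2.6", "1.2.6")); // true
        System.out.println(isAtLeast("1.2.1", "1.2.6")); // false
    }
}
```

A real implementation would read the version from the RPC handshake; this only illustrates the comparison.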
[jira] [Created] (HBASE-16077) Replication status doesn't show failed RS metrics in CLI
Bibin A Chundatt created HBASE-16077:
-------------------------------------

             Summary: Replication status doesn't show failed RS metrics in CLI
                 Key: HBASE-16077
                 URL: https://issues.apache.org/jira/browse/HBASE-16077
             Project: HBase
          Issue Type: Bug
            Reporter: Bibin A Chundatt

Steps to reproduce:
# Create 2 clusters and configure replication
# Create TABLE 1 and enable table replication
# Shut down Cluster 2 for a short period
# Load data to TABLE 1
# Shut down the Region Server where the Region of TABLE 1 is available
# Check metrics using the CLI

{noformat}
hbase(main):003:0* status 'replication'
2016-06-14 00:58:04,664 INFO [main] ipc.AbstractRpcClient: RPC Server Kerberos principal name for service=MasterService is hbase/hadoop.hadoop@hadoop.com
version 1.0.2
3 live servers
host-10-19-92-200:
  SOURCE: PeerID=11, SizeOfLogQueue=0, ShippedBatches=30, ShippedOps=1351, ShippedBytes=1513127672, LogReadInBytes=662648911, LogEditsRead=1546, LogEditsFiltered=1409, SizeOfLogToReplicate=0, TimeWillBeTakenForLogToReplicate=0, ShippedHFiles=0, SizeOfHFileRefsQueue=0, AgeOfLastShippedOp=0, TimeStampsOfLastShippedOp=Tue Jun 14 00:58:01 IST 2016, Replication Lag=0
  SINK: AppliedBatches=2, AppliedOps=5, AppliedHFiles=3, AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Mon Jun 13 02:18:06 IST 2016
host-10-19-92-187:
  SOURCE: PeerID=11, SizeOfLogQueue=0, ShippedBatches=0, ShippedOps=0, ShippedBytes=0, LogReadInBytes=65719, LogEditsRead=112, LogEditsFiltered=112, SizeOfLogToReplicate=0, TimeWillBeTakenForLogToReplicate=0, ShippedHFiles=0, SizeOfHFileRefsQueue=0, AgeOfLastShippedOp=0, TimeStampsOfLastShippedOp=Tue Jun 14 00:58:01 IST 2016, Replication Lag=0
  SINK: AppliedBatches=0, AppliedOps=0, AppliedHFiles=0, AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Mon Jun 13 09:07:20 IST 2016
host-10-19-92-188:
  SOURCE: PeerID=11, SizeOfLogQueue=0, ShippedBatches=39, ShippedOps=1730, ShippedBytes=1937609744, LogReadInBytes=848439638, LogEditsRead=1671, LogEditsFiltered=1497, SizeOfLogToReplicate=0, TimeWillBeTakenForLogToReplicate=0, ShippedHFiles=0, SizeOfHFileRefsQueue=0, AgeOfLastShippedOp=0, TimeStampsOfLastShippedOp=Tue Jun 14 00:58:03 IST 2016, Replication Lag=0
  SINK: AppliedBatches=1, AppliedOps=1, AppliedHFiles=0, AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Mon Jun 13 01:53:53 IST 2016
{noformat}

*JMX output*
{noformat}
{
  "name" : "Hadoop:service=HBase,name=RegionServer,sub=Replication",
  "modelerType" : "RegionServer,sub=Replication",
  "tag.Context" : "regionserver",
  "tag.Hostname" : "host-10-19-92-200",
  "source.11.sizeOfLogToReplicate" : 537,
  "source.11-host-10-19-92-187,21302,1465787242095.sizeOfLogToReplicate" : 282766680,
  "source.shippedHFiles" : 0,
  "source.ageOfLastShippedOp" : 0,
  "source.11.shippedHFiles" : 0,
  "source.11-host-10-19-92-187,21302,1465787242095.ageOfLastShippedOp" : 0,
  "source.shippedKBs" : 1477663,
  "source.sizeOfHFileRefsQueue" : 0,
  "source.logReadInBytes" : 691148656,
  "source.11-host-10-19-92-187,21302,1465787242095.logEditsRead" : 39,
  "source.11-host-10-19-92-187,21302,1465787242095.shippedOps" : 0,
  "source.11.logEditsFiltered" : 1244,
  "source.sizeOfLogQueue" : 4,
  "source.timeWillBeTakenForLogToReplicate" : 1,
  "sink.ageOfLastAppliedOp" : 0,
  "source.11-host-10-19-92-187,21302,1465787242095.timeWillBeTakenForLogToReplicate" : 0,
  "source.logEditsRead" : 1420,
  "source.11.sizeOfLogQueue" : 0,
  "source.11-host-10-19-92-187,21302,1465787242095.logEditsFiltered" : 32,
  "source.11-host-10-19-92-187,21302,1465787242095.shippedHFiles" : 0,
  "source.shippedOps" : 1351,
  "source.11.shippedKBs" : 1477663,
  "source.11.logReadInBytes" : 662562515,
  "sink.appliedHFiles" : 3,
  "source.11.sizeOfHFileRefsQueue" : 0,
  "source.logEditsFiltered" : 1276,
  "source.shippedBytes" : 1513127672,
  "source.11-host-10-19-92-187,21302,1465787242095.shippedBatches" : 0,
  "source.11.shippedBytes" : 1513127672,
  "sink.appliedOps" : 5,
  "source.11-host-10-19-92-187,21302,1465787242095.sizeOfLogQueue" : 4,
  "source.11.shippedBatches" : 30,
  "source.11-host-10-19-92-187,21302,1465787242095.sizeOfHFileRefsQueue" : 0,
  "source.11.timeWillBeTakenForLogToReplicate" : 1,
  "source.11-host-10-19-92-187,21302,1465787242095.logReadInBytes" : 28586141,
  "source.11.shippedOps" : 1351,
  "source.shippedBatches" : 30,
  "source.11-host-10-19-92-187,21302,1465787242095.shippedKBs" : 0,
  "source.sizeOfLogToReplicate" : 537,
  "source.11.ageOfLastShippedOp" : 0,
  "sink.appliedBatches" : 2,
  "source.11.logEditsRead" : 1381,
  "source.11-host-10-19-92-187,21302,1465787242095.shippedBytes" : 0
}
{noformat}

The *source.11-host-10-19-92-187,21302,1465787242095* metrics are not available in the CLI output.

-- This message was sent by Atlassian JIRA (v6.3.
[jira] [Created] (HBASE-16076) Cannot configure split policy in HBase shell
Youngjoon Kim created HBASE-16076:
-------------------------------------

             Summary: Cannot configure split policy in HBase shell
                 Key: HBASE-16076
                 URL: https://issues.apache.org/jira/browse/HBASE-16076
             Project: HBase
          Issue Type: Bug
          Components: shell
    Affects Versions: 2.0.0
            Reporter: Youngjoon Kim
            Priority: Minor

The reference guide explains how to configure a split policy in the HBase shell ([link|http://hbase.apache.org/book.html#_custom_split_policies]):

{noformat}
Configuring the Split Policy On a Table Using HBase Shell

hbase> create 'test', {METHOD => 'table_att', CONFIG => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy'}}, {NAME => 'cf1'}
{noformat}

But if you run that command, the shell complains 'An argument ignored (unknown or overridden): CONFIG', and the table description has no split policy.

{noformat}
hbase(main):067:0* create 'test', {METHOD => 'table_att', CONFIG => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy'}}, {NAME => 'cf1'}
An argument ignored (unknown or overridden): CONFIG
Created table test
Took 1.2180 seconds

hbase(main):068:0> describe 'test'
Table test is ENABLED
test
COLUMN FAMILIES DESCRIPTION
{NAME => 'cf1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', IN_MEMORY_COMPACTION => 'false', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
1 row(s)
Took 0.0200 seconds
{noformat}

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
Flaky tests
A while ago we ran an effort to crack down on broken/flaky tests on branch-1, and some of the branch builds are again in a sorry state, so I'm writing here to raise awareness. Looking at the https://builds.apache.org/job/HBase-1.3 and https://builds.apache.org/job/HBase-1.4 builds, I'm seeing a lot more red than I would like to (and in many cases I can see that one of the two configurations (1.7 or 1.8) is passing while the other is failing).

Here are a few that have started to appear in the logs often:

HBASE-16049 TestRowProcessorEndpoint
HBASE-16051 TestScannerHeartbeatMessages fails on some machines
HBASE-16075 TestAcidGuarantees

http://hbase.x10host.com/flaky-tests/ - this is a broader set, but I'd like to start a discussion on whether we should be more aggressive in dealing with flaky tests (either disable them and file a jira to follow up, or clean them up/remove them?)

Thoughts?

-Mikhail
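The flaky-tests dashboard linked above is built on the idea that a flaky test is one that both passes and fails across recent runs of the same code (a test that always fails is simply broken). A toy sketch of that classification over per-test build histories (not the actual dashboard code):

```java
import java.util.*;

// Toy flakiness detector: given each test's recent pass/fail history, flag
// the tests that both passed and failed at least once in the window.
public final class FlakyDetector {
    public static Set<String> flaky(Map<String, List<Boolean>> runs) {
        Set<String> out = new TreeSet<>();
        for (Map.Entry<String, List<Boolean>> e : runs.entrySet()) {
            boolean passed = e.getValue().contains(true);
            boolean failed = e.getValue().contains(false);
            if (passed && failed) out.add(e.getKey()); // intermittent => flaky
        }
        return out;
    }
}
```

Once flagged, such tests could be disabled with a follow-up jira, as the thread suggests.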
[jira] [Created] (HBASE-16075) TestAcidGuarantees is flaky
Mikhail Antonov created HBASE-16075:
-------------------------------------

             Summary: TestAcidGuarantees is flaky
                 Key: HBASE-16075
                 URL: https://issues.apache.org/jira/browse/HBASE-16075
             Project: HBase
          Issue Type: Bug
          Components: test
    Affects Versions: 1.3.0
            Reporter: Mikhail Antonov

https://builds.apache.org/job/HBase-1.3/744/jdk=latest1.7,label=yahoo-not-h2/testReport/junit/TEST-org.apache.hadoop.hbase.TestAcidGuarantees/xml/_init_/

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: [DISCUSS] HBase-2.0 SHOULD be rolling upgradable and wire-compatible with 1.x
If there’s no technical limitation, we should definitely do it. As you note, customers running in production hate when they have to shut down clusters, and with some of the testing infrastructure being rolled out, this is definitely something we can set up automated testing for. +1

-Dima

On Mon, Jun 20, 2016 at 2:58 PM, Enis Söztutar wrote:
> Time to formalize 2.0 rolling upgrade scenario?
>
> 0.94 -> 0.96 singularity was a real pain for operators and for our users.
> If possible we should not have the users suffer through the same thing
> unless there is a very compelling reason. For the current stuff in master,
> there is nothing that will prevent us to not have rolling upgrade support
> for 2.0. So I say, we should decide on the rolling upgrade requirement now,
> and start to evaluate incoming patches accordingly. Otherwise, we risk the
> option to go deeper down the hole.
>
> What do you guys think. Previous threads [1] and [2] seems to be in favor.
> Should we vote?
>
> Ref:
> [1]
> http://search-hadoop.com/m/YGbbsd4An1aso5E1&subj=HBase+1+x+to+2+0+upgrade+goals+
>
> [2]
> http://search-hadoop.com/m/YGbb1CBXTL8BTI&subj=thinking+about+supporting+upgrades+to+HBase+1+x+and+2+x
>
[DISCUSS] HBase-2.0 SHOULD be rolling upgradable and wire-compatible with 1.x
Time to formalize the 2.0 rolling upgrade scenario?

The 0.94 -> 0.96 singularity was a real pain for operators and for our users. If possible we should not make users suffer through the same thing unless there is a very compelling reason. For the current stuff in master, there is nothing that prevents us from having rolling upgrade support for 2.0. So I say we should decide on the rolling upgrade requirement now, and start to evaluate incoming patches accordingly. Otherwise, we risk going deeper down the hole.

What do you guys think? Previous threads [1] and [2] seem to be in favor. Should we vote?

Ref:
[1] http://search-hadoop.com/m/YGbbsd4An1aso5E1&subj=HBase+1+x+to+2+0+upgrade+goals+
[2] http://search-hadoop.com/m/YGbb1CBXTL8BTI&subj=thinking+about+supporting+upgrades+to+HBase+1+x+and+2+x
Re: HBCK options to disable master maintenance threads
Is this change for 2.0 ? For 1.x, this would be incompatible change, right ? On Mon, Jun 20, 2016 at 2:33 PM, Enis Söztutar wrote: > On Mon, Jun 20, 2016 at 1:49 PM, Stephen Jiang > wrote: > > > Enis, what I suggested was that even no repair is suggested, we still > > should disable master maint tasks in online check for more deterministic > > result. > > > > I see, makes sense as long as we are finished with HBASE-16008. > > > > > > Thanks > > Stephen > > > > On Mon, Jun 20, 2016 at 1:44 PM, Enis Söztutar > wrote: > > > > > check out the corresponding shouldXXX commands: > > > > > > public boolean shouldDisableBalancer() { > > > > > > return fixAny || disableBalancer; > > > > > > } > > > If fixAny which is true if any of the -fix is run, we disable the > master > > > chores. > > > > > > For -fixHdfsOverlaps and -fixHdfsHoles, I've mentioned this in the > jira I > > > think, but we should deprecate those, and do -fixOverlaps and -fixHoles > > > separately. These two new commands will look at BOTH hdfs and meta to > > > decide on what to do. > > > > > > Enis > > > > > > On Mon, Jun 20, 2016 at 12:30 PM, Stephen Jiang < > syuanjiang...@gmail.com > > > > > > wrote: > > > > > > > } else if (cmd.equals("-disableBalancer")) { > > > > > > > > setDisableBalancer(); > > > > > > > > } else if (cmd.equals("-disableSplitAndMerge")) { > > > > > > > > setDisableSplitAndMerge(); > > > > > > > > In HBCK, we will either use the options to disable master maintenance > > > work > > > > (see above) or the master maintenance are disabled during repair. > > > > > > > > I think we should always disable master maintenance work during > online > > > > HBCK, because balancer moving regions around during online check; or > > > > split/merge regions during online check would have unexpected side > > > effect. > > > > > > > > How do you think? > > > > > > > > Thanks > > > > Stephen > > > > > > > > Also, I think we have too many options. 
We really should reduce > > options > > > in > > > > hbck so that it is more user friendly (eg. currently implementation > of > > > > -fixHdfsOverlaps would almost 100% create hole, it does not make > sense > > to > > > > run it alone, it should always run with -fixHdfsHoles option; and > very > > > > likely with -fixMeta option) > > > > > > > > > >
[jira] [Created] (HBASE-16074) ITBLL fails, reports lost big or tiny families
Mikhail Antonov created HBASE-16074:
-------------------------------------

             Summary: ITBLL fails, reports lost big or tiny families
                 Key: HBASE-16074
                 URL: https://issues.apache.org/jira/browse/HBASE-16074
             Project: HBase
          Issue Type: Bug
          Components: integration tests
    Affects Versions: 1.3.0
            Reporter: Mikhail Antonov
            Assignee: Mikhail Antonov
            Priority: Blocker
             Fix For: 1.3.0

The underlying MR jobs succeed, but I'm seeing the following in the logs:

ERROR test.IntegrationTestBigLinkedList$Verify: Found nodes which lost big or tiny families, count=164

I do not know yet whether it's a bug, a test issue, or an env setup issue, but we need to figure it out. Opening this to raise awareness and see if someone has seen this recently.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
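For context, IntegrationTestBigLinkedList writes huge linked lists of keys and its Verify job counts nodes that nobody references, which is how lost data shows up. A much-simplified analogue of that referenced/unreferenced accounting (this is an illustration of the idea only, not the actual ITBLL code):

```java
import java.util.*;

// Simplified ITBLL-style verification: in a healthy ring, every node is
// referenced by exactly one other node's "prev" pointer. Nodes that nothing
// points at indicate lost writes.
public final class LinkedListVerify {
    // nodes maps each key to the key of the previous node in its ring.
    public static Set<String> unreferenced(Map<String, String> nodes) {
        Set<String> out = new TreeSet<>(nodes.keySet());
        out.removeAll(nodes.values()); // drop every key something points at
        return out;
    }
}
```

In the real test the counts come out of a MapReduce job over billions of rows, but the invariant being checked is the same.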
Successful: hbase.apache.org HTML Checker
Successful If successful, the HTML and link-checking report for http://hbase.apache.org is available at https://builds.apache.org/job/HBase%20Website%20Link%20Ckecker/48/artifact/link_report/index.html. If failed, see https://builds.apache.org/job/HBase%20Website%20Link%20Ckecker/48/console.
Re: HBCK options to disable master maintenance threads
On Mon, Jun 20, 2016 at 1:49 PM, Stephen Jiang wrote: > Enis, what I suggested was that even no repair is suggested, we still > should disable master maint tasks in online check for more deterministic > result. > I see, makes sense as long as we are finished with HBASE-16008. > > Thanks > Stephen > > On Mon, Jun 20, 2016 at 1:44 PM, Enis Söztutar wrote: > > > check out the corresponding shouldXXX commands: > > > > public boolean shouldDisableBalancer() { > > > > return fixAny || disableBalancer; > > > > } > > If fixAny which is true if any of the -fix is run, we disable the master > > chores. > > > > For -fixHdfsOverlaps and -fixHdfsHoles, I've mentioned this in the jira I > > think, but we should deprecate those, and do -fixOverlaps and -fixHoles > > separately. These two new commands will look at BOTH hdfs and meta to > > decide on what to do. > > > > Enis > > > > On Mon, Jun 20, 2016 at 12:30 PM, Stephen Jiang > > > wrote: > > > > > } else if (cmd.equals("-disableBalancer")) { > > > > > > setDisableBalancer(); > > > > > > } else if (cmd.equals("-disableSplitAndMerge")) { > > > > > > setDisableSplitAndMerge(); > > > > > > In HBCK, we will either use the options to disable master maintenance > > work > > > (see above) or the master maintenance are disabled during repair. > > > > > > I think we should always disable master maintenance work during online > > > HBCK, because balancer moving regions around during online check; or > > > split/merge regions during online check would have unexpected side > > effect. > > > > > > How do you think? > > > > > > Thanks > > > Stephen > > > > > > Also, I think we have too many options. We really should reduce > options > > in > > > hbck so that it is more user friendly (eg. currently implementation of > > > -fixHdfsOverlaps would almost 100% create hole, it does not make sense > to > > > run it alone, it should always run with -fixHdfsHoles option; and very > > > likely with -fixMeta option) > > > > > >
Re: Hangout on Slack?
Anyone can join anytime with @apache.org email id using following link. https://apache-hbase.slack.com/x-37639653748-52658243986/signup Sorry, but slack doesn't allow doing same for @gmail.com email ids. On Mon, Jun 20, 2016 at 11:36 AM, Apekshit Sharma wrote: > Here's new link: > https://apache-hbase.slack.com/shared_invite/NTI1OTc5NzQ4ODctMTQ2NjQ0Nzc0NC01ZDg1YjIyZjcw > Sorry can't do anything about expiration, that's slack's policy: no link > active more than 48 hours. > > On Mon, Jun 20, 2016 at 9:14 AM, Sean Busbey wrote: > >> Appy, could you make another link? if there's a "no-expiration" option >> that would be best. >> >> On Fri, Jun 17, 2016 at 7:23 PM, Apekshit Sharma >> wrote: >> > Or you can join the team using this link: >> > >> https://apache-hbase.slack.com/shared_invite/NTIxMTE4NTQwMzYtMTQ2NjIwOTMwMi1kMzc2YzkwYTJm >> > It expires on Sunday. >> > >> > On Fri, Jun 17, 2016 at 4:43 PM, Apekshit Sharma >> wrote: >> > >> >> Created slack team apache-hbase.slack.com. >> >> It has two channels: users and dev. >> >> I still have to figure out how to allow guests to join 'users' group. >> >> I have sent invites to some people to seed the group, i think you >> should >> >> be able to add others. If not, please let me know. >> >> >> >> On Thu, Jun 16, 2016 at 3:20 PM, Dima Spivak >> wrote: >> >> >> >>> +1. Even with all the bots, I feel really lonely in the #hbase IRC. :) >> >>> >> >>> -Dima >> >>> >> >>> On Thu, Jun 16, 2016 at 3:15 PM, Apekshit Sharma >> >>> wrote: >> >>> >> >>> > Brining it up again, because I really feel that we should do this. >> It'll >> >>> > make communicating with community so much easier, both broadcasts >> and >> >>> 1-1 >> >>> > pings (with people not in same org). Inside the slack group, we'll >> also >> >>> be >> >>> > able to create separate channels for users and dev. For those who >> >>> haven't >> >>> > tried Slack yet, I am fairly certain that you'll like it. 
>> >>> > Unless someone says otherwise, I'll go ahead and do this tonight. >> >>> > We'll post a redirect message on existing IRC. I'll update the hbase >> >>> book >> >>> > too. >> >>> > Thanks >> >>> > >> >>> > -- Appy >> >>> > >> >>> > >> >>> > >> >>> > On Sun, May 15, 2016 at 9:19 PM, Stack wrote: >> >>> > >> >>> > > On Mon, Apr 25, 2016 at 4:05 PM, Apekshit Sharma < >> a...@cloudera.com> >> >>> > > wrote: >> >>> > > >> >>> > > > ... >> >>> > > > Anyways, let's revive the old tradition because it will >> certainly be >> >>> > > useful >> >>> > > > to hang out in a room for real-time discussions. >> >>> > > > >> >>> > > >> >>> > > >> >>> > > Just to day that there are signs of life over in IRC over last few >> >>> days. >> >>> > > Suggest we nurture and then suggest move to Slack if wanted >> (Heard an >> >>> > > argument on friday that Slack has lower barrier to entry... Do >> others >> >>> > > believe this?) >> >>> > > >> >>> > > Thanks, >> >>> > > St.Ack >> >>> > > >> >>> > > >> >>> > > >> >>> > > > -- Appy >> >>> > > > >> >>> > > >> >>> > >> >>> > >> >>> > >> >>> > -- >> >>> > >> >>> > Regards >> >>> > >> >>> > Apekshit Sharma | Software Engineer, Cloudera | Palo Alto, >> California | >> >>> > 650-963-6311 >> >>> > >> >>> >> >> >> >> >> >> >> >> -- >> >> >> >> -- Appy >> >> >> > >> > >> > >> > -- >> > >> > -- Appy >> >> >> >> -- >> busbey >> > > > > -- > > -- Appy > -- -- Appy
Re: HBCK options to disable master maintenance threads
Enis, what I suggested was that even when no repair is requested, we should still disable master maintenance tasks during an online check, for a more deterministic result.

Thanks
Stephen

On Mon, Jun 20, 2016 at 1:44 PM, Enis Söztutar wrote:
> check out the corresponding shouldXXX commands:
>
> public boolean shouldDisableBalancer() {
>   return fixAny || disableBalancer;
> }
>
> If fixAny, which is true if any of the -fix options is run, we disable the
> master chores.
>
> For -fixHdfsOverlaps and -fixHdfsHoles, I've mentioned this in the jira I
> think, but we should deprecate those, and do -fixOverlaps and -fixHoles
> separately. These two new commands will look at BOTH hdfs and meta to
> decide on what to do.
>
> Enis
>
> On Mon, Jun 20, 2016 at 12:30 PM, Stephen Jiang wrote:
>
> > } else if (cmd.equals("-disableBalancer")) {
> >   setDisableBalancer();
> > } else if (cmd.equals("-disableSplitAndMerge")) {
> >   setDisableSplitAndMerge();
> >
> > In HBCK, we will either use the options to disable master maintenance work
> > (see above) or the master maintenance tasks are disabled during repair.
> >
> > I think we should always disable master maintenance work during online
> > HBCK, because the balancer moving regions around, or regions being split
> > or merged, during an online check would have unexpected side effects.
> >
> > What do you think?
> >
> > Thanks
> > Stephen
> >
> > Also, I think we have too many options. We really should reduce the options in
> > hbck so that it is more user friendly (e.g. the current implementation of
> > -fixHdfsOverlaps would almost certainly create a hole; it does not make sense to
> > run it alone, it should always run with the -fixHdfsHoles option, and very
> > likely with the -fixMeta option)
> >
Re: HBCK options to disable master maintenance threads
Check out the corresponding shouldXXX methods:

public boolean shouldDisableBalancer() {
  return fixAny || disableBalancer;
}

fixAny is true if any of the -fix options is run, and if it is set we disable the master chores.

For -fixHdfsOverlaps and -fixHdfsHoles, I've mentioned this in the jira I think, but we should deprecate those, and do -fixOverlaps and -fixHoles separately. These two new commands will look at BOTH hdfs and meta to decide what to do.

Enis

On Mon, Jun 20, 2016 at 12:30 PM, Stephen Jiang wrote:
> } else if (cmd.equals("-disableBalancer")) {
>   setDisableBalancer();
> } else if (cmd.equals("-disableSplitAndMerge")) {
>   setDisableSplitAndMerge();
>
> In HBCK, we will either use the options to disable master maintenance work
> (see above) or the master maintenance tasks are disabled during repair.
>
> I think we should always disable master maintenance work during online
> HBCK, because the balancer moving regions around, or regions being split
> or merged, during an online check would have unexpected side effects.
>
> What do you think?
>
> Thanks
> Stephen
>
> Also, I think we have too many options. We really should reduce the options in
> hbck so that it is more user friendly (e.g. the current implementation of
> -fixHdfsOverlaps would almost certainly create a hole; it does not make sense to
> run it alone, it should always run with the -fixHdfsHoles option, and very
> likely with the -fixMeta option)
>
[jira] [Resolved] (HBASE-14397) PrefixFilter doesn't filter all remaining rows if the prefix is longer than rowkey being compared
[ https://issues.apache.org/jira/browse/HBASE-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mikhail Antonov resolved HBASE-14397.
-------------------------------------
    Resolution: Fixed

> PrefixFilter doesn't filter all remaining rows if the prefix is longer than
> the rowkey being compared
> ---------------------------------------------------------------------------
>
>                 Key: HBASE-14397
>                 URL: https://issues.apache.org/jira/browse/HBASE-14397
>             Project: HBase
>          Issue Type: Improvement
>          Components: Filters
>    Affects Versions: 2.0.0
>            Reporter: Jianwei Cui
>            Assignee: Jianwei Cui
>            Priority: Minor
>             Fix For: 2.0.0, 1.3.0
>
>         Attachments: HBASE-14397-trunk-v1.patch
>
> The PrefixFilter will filter the rowkey as:
> {code}
> public boolean filterRowKey(Cell firstRowCell) {
>   ...
>   int length = firstRowCell.getRowLength();
>   if (length < prefix.length) return true; // ===> return directly if the prefix is longer
>
>   if ((!isReversed() && cmp > 0) || (isReversed() && cmp < 0)) {
>     passedPrefix = true;
>   }
>   filterRow = (cmp != 0);
>   return filterRow;
> }
> {code}
> If the prefix is longer than the current rowkey, PrefixFilter#filterRowKey
> will filter the rowkey directly without comparing, so it won't set the
> 'passedPrefix' flag even when the current row is larger than the prefix.
> For example, if there are three rows 'a', 'b' and 'c' in the table, and we
> issue a scan request as:
> {code}
> hbase(main):001:0> scan 'test_table', {STARTROW => 'a', FILTER =>
> "(PrefixFilter ('aa'))"}
> {code}
> the region server will check all three rows before returning. In our
> production environment, a user issued a scan with a PrefixFilter. The prefix
> was longer than the rowkeys of the following millions of rows, so the region
> server continued to check rows until it hit a rowkey longer than the prefix.
> This makes it easy for the client to time out. To fix this case, it seems we
> need to compare the prefix with the rowkey every several rows, even when the
> prefix is longer.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
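The fix suggested at the end of the description — compare the row against the prefix even when the row is shorter — can be sketched as a standalone filter. This is an illustration of the idea under that assumption, not the actual HBASE-14397 patch:

```java
// Sketch of the proposed PrefixFilter fix: even when the row is shorter than
// the prefix, compare it against the prefix truncated to the row's length, so
// the scan can notice it has passed the prefix range and stop.
public final class PrefixFilterSketch {
    private final byte[] prefix;
    private final boolean reversed;
    private boolean passedPrefix; // once true, the scan can terminate

    public PrefixFilterSketch(byte[] prefix, boolean reversed) {
        this.prefix = prefix;
        this.reversed = reversed;
    }

    // Returns true if the row should be filtered out.
    public boolean filterRowKey(byte[] row) {
        int len = Math.min(row.length, prefix.length);
        int cmp = compare(row, prefix, len);
        if ((!reversed && cmp > 0) || (reversed && cmp < 0)) {
            passedPrefix = true; // beyond the prefix range: safe to stop
        }
        // Filter when the row differs from the prefix or is too short to match.
        return cmp != 0 || row.length < prefix.length;
    }

    public boolean passedPrefix() { return passedPrefix; }

    // Unsigned byte comparison over the first 'len' bytes of each array.
    private static int compare(byte[] a, byte[] b, int len) {
        for (int i = 0; i < len; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return 0;
    }
}
```

With the 'a'/'b'/'c' example above and prefix 'aa', row 'b' now sets passedPrefix, so the region server would not scan on to 'c'.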
HBCK options to disable master maintenance threads
} else if (cmd.equals("-disableBalancer")) {
  setDisableBalancer();
} else if (cmd.equals("-disableSplitAndMerge")) {
  setDisableSplitAndMerge();

In HBCK, we will either use the options above to disable master maintenance work, or the master maintenance tasks are disabled during repair.

I think we should always disable master maintenance work during online HBCK, because the balancer moving regions around, or regions being split or merged, during an online check would have unexpected side effects.

What do you think?

Thanks
Stephen

Also, I think we have too many options. We really should reduce the options in hbck so that it is more user friendly (e.g. the current implementation of -fixHdfsOverlaps would almost certainly create a hole; it does not make sense to run it alone, it should always run with the -fixHdfsHoles option, and very likely with the -fixMeta option).
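Stephen's point — that -fixHdfsOverlaps should never run without -fixHdfsHoles (and likely -fixMeta), and that any repair should disable the balancer — could be expressed as implied options in the parser. A hypothetical sketch of that idea (this is not hbck's actual option parser, and the field names are invented):

```java
// Hypothetical hbck-style options where a repair flag implies its
// prerequisites, so -fixHdfsOverlaps cannot be run alone and create holes.
public final class HbckOptions {
    boolean fixHdfsOverlaps, fixHdfsHoles, fixMeta, disableBalancer;

    public void parse(String... args) {
        for (String cmd : args) {
            switch (cmd) {
                case "-fixHdfsOverlaps":
                    fixHdfsOverlaps = true;
                    fixHdfsHoles = true; // implied: merging overlaps can open holes
                    fixMeta = true;      // implied: meta must match the new layout
                    break;
                case "-fixHdfsHoles":
                    fixHdfsHoles = true;
                    break;
                case "-fixMeta":
                    fixMeta = true;
                    break;
                case "-disableBalancer":
                    disableBalancer = true;
                    break;
                default:
                    throw new IllegalArgumentException("unknown option: " + cmd);
            }
        }
    }

    // Mirrors the shouldDisableBalancer() idea from the thread: any repair
    // implies disabling the balancer during the run.
    public boolean shouldDisableBalancer() {
        return fixHdfsOverlaps || fixHdfsHoles || fixMeta || disableBalancer;
    }
}
```

Implied options would also shrink the user-facing surface, which addresses the "too many options" complaint.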
Re: Hangout on Slack?
Here's new link: https://apache-hbase.slack.com/shared_invite/NTI1OTc5NzQ4ODctMTQ2NjQ0Nzc0NC01ZDg1YjIyZjcw Sorry can't do anything about expiration, that's slack's policy: no link active more than 48 hours. On Mon, Jun 20, 2016 at 9:14 AM, Sean Busbey wrote: > Appy, could you make another link? if there's a "no-expiration" option > that would be best. > > On Fri, Jun 17, 2016 at 7:23 PM, Apekshit Sharma > wrote: > > Or you can join the team using this link: > > > https://apache-hbase.slack.com/shared_invite/NTIxMTE4NTQwMzYtMTQ2NjIwOTMwMi1kMzc2YzkwYTJm > > It expires on Sunday. > > > > On Fri, Jun 17, 2016 at 4:43 PM, Apekshit Sharma > wrote: > > > >> Created slack team apache-hbase.slack.com. > >> It has two channels: users and dev. > >> I still have to figure out how to allow guests to join 'users' group. > >> I have sent invites to some people to seed the group, i think you should > >> be able to add others. If not, please let me know. > >> > >> On Thu, Jun 16, 2016 at 3:20 PM, Dima Spivak > wrote: > >> > >>> +1. Even with all the bots, I feel really lonely in the #hbase IRC. :) > >>> > >>> -Dima > >>> > >>> On Thu, Jun 16, 2016 at 3:15 PM, Apekshit Sharma > >>> wrote: > >>> > >>> > Brining it up again, because I really feel that we should do this. > It'll > >>> > make communicating with community so much easier, both broadcasts and > >>> 1-1 > >>> > pings (with people not in same org). Inside the slack group, we'll > also > >>> be > >>> > able to create separate channels for users and dev. For those who > >>> haven't > >>> > tried Slack yet, I am fairly certain that you'll like it. > >>> > Unless someone says otherwise, I'll go ahead and do this tonight. > >>> > We'll post a redirect message on existing IRC. I'll update the hbase > >>> book > >>> > too. 
> >>> > Thanks > >>> > > >>> > -- Appy > >>> > > >>> > > >>> > > >>> > On Sun, May 15, 2016 at 9:19 PM, Stack wrote: > >>> > > >>> > > On Mon, Apr 25, 2016 at 4:05 PM, Apekshit Sharma < > a...@cloudera.com> > >>> > > wrote: > >>> > > > >>> > > > ... > >>> > > > Anyways, let's revive the old tradition because it will > certainly be > >>> > > useful > >>> > > > to hang out in a room for real-time discussions. > >>> > > > > >>> > > > >>> > > > >>> > > Just to day that there are signs of life over in IRC over last few > >>> days. > >>> > > Suggest we nurture and then suggest move to Slack if wanted (Heard > an > >>> > > argument on friday that Slack has lower barrier to entry... Do > others > >>> > > believe this?) > >>> > > > >>> > > Thanks, > >>> > > St.Ack > >>> > > > >>> > > > >>> > > > >>> > > > -- Appy > >>> > > > > >>> > > > >>> > > >>> > > >>> > > >>> > -- > >>> > > >>> > Regards > >>> > > >>> > Apekshit Sharma | Software Engineer, Cloudera | Palo Alto, > California | > >>> > 650-963-6311 > >>> > > >>> > >> > >> > >> > >> -- > >> > >> -- Appy > >> > > > > > > > > -- > > > > -- Appy > > > > -- > busbey > -- -- Appy
[RESULT] Re: [VOTE] First release candidate for HBase 1.2.2 (RC0) is available
This vote fails with

* +1 (binding): 2
* +1 (non-binding): 1

I'll spin up a new RC soon with a longer voting period.

On Tue, Jun 14, 2016 at 11:43 PM, Sean Busbey wrote:
> Hi folks!
>
> I'm happy to announce that the first release candidate of HBase 1.2.2 is
> available for download at:
>
> https://dist.apache.org/repos/dist/dev/hbase/hbase-1.2.2RC0/
>
> As of this vote, the relevant md5 hashes are:
>
> hbase-1.2.2-bin.tar.gz: 5E 5E 0C A1 EB 50 98 00 54 36 8E 9B 71 B8 36 C5
> hbase-1.2.2-src.tar.gz: ED 16 3C 50 58 24 4F 24 64 19 30 CA 07 34 F2 C1
>
> Maven artifacts are also available in the staging repository
>
> https://repository.apache.org/content/repositories/orgapachehbase-1140/
>
> All artifacts are signed with my code signing key 0D80DB7C, available in
> the project KEYS file:
>
> http://www.apache.org/dist/hbase/KEYS
>
> These artifacts correspond to commit hash
>
> e48943455819437667424bebd0f80b7ac80fa493
>
> which signed tag 1.2.2RC0 currently points to:
>
> https://s.apache.org/hbase-1.2.2RC0-tag
>
> HBase 1.2.2 is the second maintenance release in the HBase 1.2.z line,
> continuing on the theme of bringing a stable, reliable database to the
> Hadoop and NoSQL communities. This release includes 57 resolved issues
> since the 1.2.1 release.
>
> Notable fixes include:
>
> * [HBASE-15811] - Batch Get after batch Put does not fetch all Cells
> * [HBASE-15698] - Increment TimeRange not serialized to server
> * [HBASE-15234] - ReplicationLogCleaner can abort due to transient ZK issues
> * [HBASE-15645] - hbase.rpc.timeout is not used in operations of HTable
> * [HBASE-15622] - Superusers does not consider the keytab credentials
> * [HBASE-15856] - Cached Connection instances can wind up with
>   addresses never resolved
> * [HBASE-15873] - ACL for snapshot restore / clone is not enforced
>
> The full list of issues can be found in the CHANGES.txt file included in
> the release and online at:
>
> https://s.apache.org/hbase-1.2.2-jira-releasenotes
>
> Please take some time to verify the release[1], try out the release
> candidate, and vote on releasing it:
>
> [ ] +1 Release this package as HBase 1.2.2
> [ ] +0 no opinion
> [ ] -1 Do not release this package because...
>
> Vote will be subject to Majority Approval[2] and will close at
> 5:00PM UTC on Monday, June 20th, 2016[3].
>
> [1]: http://www.apache.org/info/verification.html
> [2]: https://www.apache.org/foundation/glossary.html#MajorityApproval
> [3]: to find this in your local timezone see:
> https://s.apache.org/hbase-1.2.2RC0-vote-close
Re: MutableQuantiles
MetricMutableHistogram and the related classes that were ports of Hadoop's classes have been removed; they are no longer used. The histograms Hadoop supplies were very slow, so we use MutableHistogram instead. See: HBASE-15222 On Mon, Jun 20, 2016 at 5:19 AM, Lars George wrote: > BTW, I am looking at 1.2 branch, though here the Hadoop one does > exactly the same as what the HBase one does. Where do I see the > difference? Master looks the same too. Are you referring to the > histogram classes? > > On Mon, Jun 20, 2016 at 1:36 PM, Lars George > wrote: > > Ah thanks Andy. It seemed mostly a copy (with some internal > > modification). Now, where is that used at all? > > > > On Sun, Jun 19, 2016 at 7:06 PM, Andrew Purtell > > wrote: > >> We have additional functionality that the Hadoop supplied one does not, > importantly the ability to dump counts by latency bucket rather than > percentile measures at the moment. The former can be used to calculate > mathematically meaningful percentile measures over the whole fleet and over > longer timescales, the latter cannot. > >> > >> > >>> On Jun 19, 2016, at 9:36 AM, Lars George > wrote: > >>> > >>> Hi, > >>> > >>> As per https://issues.apache.org/jira/browse/HBASE-6409 we rolled our > >>> own class. Is that still needed? Since 2012 lot's has changed and we > >>> should have all in place to use the Hadoop supplied one? > >>> > >>> Just curious. > >>> > >>> Cheers, > >>> Lars >
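Andrew's point in the quoted thread is that per-server latency bucket counts can be summed across a fleet and then turned into a mathematically meaningful percentile, while precomputed per-server percentiles cannot be averaged. A rough sketch of that idea (the bucket bounds, counts, and class name here are made up for illustration; this is not HBase's actual histogram internals):

```java
import java.util.Arrays;

public class BucketMerge {
    // Hypothetical latency bucket upper bounds, in milliseconds.
    static final long[] BOUNDS = {1, 5, 25, 100, 500};

    // Bucket counts merge across servers by simple addition.
    static long[] merge(long[][] perServer) {
        long[] total = new long[BOUNDS.length];
        for (long[] counts : perServer) {
            for (int i = 0; i < counts.length; i++) {
                total[i] += counts[i];
            }
        }
        return total;
    }

    // Upper bound of the bucket that contains the q-quantile sample.
    static long percentile(long[] counts, double q) {
        long n = Arrays.stream(counts).sum();
        long seen = 0;
        for (int i = 0; i < counts.length; i++) {
            seen += counts[i];
            if (seen >= q * n) {
                return BOUNDS[i];
            }
        }
        return BOUNDS[BOUNDS.length - 1];
    }

    public static void main(String[] args) {
        // A busy server (1000 ops) and an idle one (10 ops): averaging their
        // individually computed p99s would weight both equally, which is
        // meaningless; summing the buckets first is not.
        long[] busy = {100, 800, 90, 9, 1};
        long[] idle = {9, 1, 0, 0, 0};
        long[] fleet = merge(new long[][] {busy, idle});
        System.out.println("fleet p99 bucket: " + percentile(fleet, 0.99) + " ms");
    }
}
```

The merged counts yield a fleet-wide (or longer-timescale) percentile directly; there is no analogous merge for two already-computed p99 numbers, which is the argument for exposing bucket dumps.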
[jira] [Created] (HBASE-16073) update compatibility_checker for jacc dropping comma sep args
Sean Busbey created HBASE-16073: --- Summary: update compatibility_checker for jacc dropping comma sep args Key: HBASE-16073 URL: https://issues.apache.org/jira/browse/HBASE-16073 Project: HBase Issue Type: Task Components: build, documentation Reporter: Sean Busbey Priority: Critical The japi-compliance-checker has a change in place (post the 1.7 release) that removes the ability to give a comma-separated list of jars on the CLI. We should switch to generating descriptor XML docs, since those will still be supported, or update to use the expanded tooling suggested in the issue: https://github.com/lvc/japi-compliance-checker/issues/27 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
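For reference, a japi-compliance-checker XML descriptor is a small file naming the version under test and the jars to scan, which would replace the comma-separated jar list. A sketch along these lines (the jar paths are hypothetical; check the tool's documentation for the exact flag names used to pass the old and new descriptors):

```xml
<version>
    1.2.1
</version>

<archives>
    /path/to/hbase-client-1.2.1.jar
    /path/to/hbase-common-1.2.1.jar
</archives>
```

One descriptor per side (old release, new release candidate) would then be handed to the checker instead of jar lists on the command line.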
Re: Hangout on Slack?
Appy, could you make another link? if there's a "no-expiration" option that would be best. On Fri, Jun 17, 2016 at 7:23 PM, Apekshit Sharma wrote: > Or you can join the team using this link: > https://apache-hbase.slack.com/shared_invite/NTIxMTE4NTQwMzYtMTQ2NjIwOTMwMi1kMzc2YzkwYTJm > It expires on Sunday. > > On Fri, Jun 17, 2016 at 4:43 PM, Apekshit Sharma wrote: > >> Created slack team apache-hbase.slack.com. >> It has two channels: users and dev. >> I still have to figure out how to allow guests to join 'users' group. >> I have sent invites to some people to seed the group, I think you should >> be able to add others. If not, please let me know. >> >> On Thu, Jun 16, 2016 at 3:20 PM, Dima Spivak wrote: >> >>> +1. Even with all the bots, I feel really lonely in the #hbase IRC. :) >>> >>> -Dima >>> >>> On Thu, Jun 16, 2016 at 3:15 PM, Apekshit Sharma >>> wrote: >>> >>> > Bringing it up again, because I really feel that we should do this. It'll >>> > make communicating with the community so much easier, both broadcasts and >>> 1-1 >>> > pings (with people not in same org). Inside the slack group, we'll also >>> be >>> > able to create separate channels for users and dev. For those who >>> haven't >>> > tried Slack yet, I am fairly certain that you'll like it. >>> > Unless someone says otherwise, I'll go ahead and do this tonight. >>> > We'll post a redirect message on existing IRC. I'll update the hbase >>> book >>> > too. >>> > Thanks >>> > >>> > -- Appy >>> > >>> > >>> > >>> > On Sun, May 15, 2016 at 9:19 PM, Stack wrote: >>> > >>> > > On Mon, Apr 25, 2016 at 4:05 PM, Apekshit Sharma >>> > > wrote: >>> > > >>> > > > ... >>> > > > Anyways, let's revive the old tradition because it will certainly be >>> > > useful >>> > > > to hang out in a room for real-time discussions. >>> > > > >>> > > >>> > > >>> > > Just to say that there are signs of life over in IRC over last few days. 
>>> > > Suggest we nurture and then suggest move to Slack if wanted (Heard an >>> > > argument on Friday that Slack has lower barrier to entry... Do others >>> > > believe this?) >>> > > >>> > > Thanks, >>> > > St.Ack >>> > > >>> > > >>> > > >>> > > > -- Appy >>> > > > >>> > > >>> > >>> > >>> > >>> > -- >>> > >>> > Regards >>> > >>> > Apekshit Sharma | Software Engineer, Cloudera | Palo Alto, California | >>> > 650-963-6311 >>> > >>> >> >> >> >> -- >> >> -- Appy >> > > > > -- > > -- Appy -- busbey
Re: [VOTE] First release candidate for HBase 1.2.2 (RC0) is available
Friendly reminder that this vote closes in about 1 hour. +1 * checked sigs and sums * checked source against commit hash * checked source builds binaries * compatibility report looks good[1]. No IA.Public impact. [1]: https://home.apache.org/~busbey/hbase/1.2.2-RC0/1.2.1_1.2.2RC0_compat_report.html On Tue, Jun 14, 2016 at 11:43 PM, Sean Busbey wrote: > Hi folks! > > I'm happy to announce that the first release candidate of HBase 1.2.2 is > available for download at: > > https://dist.apache.org/repos/dist/dev/hbase/hbase-1.2.2RC0/ > > As of this vote, the relevant md5 hashes are: > > hbase-1.2.2-bin.tar.gz: 5E 5E 0C A1 EB 50 98 00 54 36 8E 9B 71 B8 36 C5 > hbase-1.2.2-src.tar.gz: ED 16 3C 50 58 24 4F 24 64 19 30 CA 07 34 F2 C1 > > Maven artifacts are also available in the staging repository > > https://repository.apache.org/content/repositories/orgapachehbase-1140/ > > All artifacts are signed with my code signing key 0D80DB7C, available in > the project KEYS file: > > http://www.apache.org/dist/hbase/KEYS > > These artifacts correspond to commit hash > > e48943455819437667424bebd0f80b7ac80fa493 > > which signed tag 1.2.2RC0 currently points to: > > https://s.apache.org/hbase-1.2.2RC0-tag > > HBase 1.2.2 is the second maintenance release in the HBase 1.2.z line, > continuing on the theme of bringing a stable, reliable database to the > Hadoop and NoSQL communities. This release includes 57 resolved issues > since the 1.2.1 release. 
> > Notable fixes include: > > * [HBASE-15811] - Batch Get after batch Put does not fetch all Cells > * [HBASE-15698] - Increment TimeRange not serialized to server > * [HBASE-15234] - ReplicationLogCleaner can abort due to transient ZK issues > * [HBASE-15645] - hbase.rpc.timeout is not used in operations of HTable > * [HBASE-15622] - Superusers does not consider the keytab credentials > * [HBASE-15856] - Cached Connection instances can wind up with > addresses never resolved > * [HBASE-15873] - ACL for snapshot restore / clone is not enforced > > > The full list of issues can be found in the CHANGES.txt file included in > the release and online at: > > https://s.apache.org/hbase-1.2.2-jira-releasenotes > > Please take some time to verify the release[1], try out the release > candidate, and vote on releasing it: > > [ ] +1 Release this package as HBase 1.2.2 > [ ] +0 no opinion > [ ] -1 Do not release this package because... > > Vote will be subject to Majority Approval[2] and will close at > 5:00PM UTC on Monday, June 20th, 2016[3]. > > [1]: http://www.apache.org/info/verification.html > [2]: https://www.apache.org/foundation/glossary.html#MajorityApproval > [3]: to find this in your local timezone see: > > https://s.apache.org/hbase-1.2.2RC0-vote-close -- busbey
Successful: HBase Generate Website
Build status: Successful

If successful, the website and docs have been generated. If failed, skip to the bottom of this email.

Use the following commands to download the patch and apply it to a clean branch based on origin/asf-site. If you prefer to keep the hbase-site repo around permanently, you can skip the clone step.

git clone https://git-wip-us.apache.org/repos/asf/hbase-site.git
cd hbase-site
wget -O- https://builds.apache.org/job/hbase_generate_website/263/artifact/website.patch.zip | funzip > 71c8cd5b1fe55b97becec228c13df6dd9d966f95.patch
git fetch
git checkout -b asf-site-71c8cd5b1fe55b97becec228c13df6dd9d966f95 origin/asf-site
git am --whitespace=fix 71c8cd5b1fe55b97becec228c13df6dd9d966f95.patch

At this point, you can preview the changes by opening index.html or any of the other HTML pages in your local asf-site-71c8cd5b1fe55b97becec228c13df6dd9d966f95 branch.

There are lots of spurious changes, such as timestamps and CSS styles in tables, so a generic git diff is not very useful. To see a list of files that have been added, deleted, renamed, changed type, or are otherwise interesting, use the following command:

git diff --name-status --diff-filter=ADCRTXUB origin/asf-site

To see only files that had 100 or more lines changed:

git diff --stat origin/asf-site | grep -E '[1-9][0-9]{2,}'

When you are satisfied, publish your changes to origin/asf-site using these commands:

git commit --allow-empty -m "Empty commit" # to work around a current ASF INFRA bug
git push origin asf-site-71c8cd5b1fe55b97becec228c13df6dd9d966f95:asf-site
git checkout asf-site
git branch -d asf-site-71c8cd5b1fe55b97becec228c13df6dd9d966f95

Changes take a couple of minutes to be propagated. You can verify whether they have been propagated by looking at the Last Published date at the bottom of http://hbase.apache.org/. It should match the date in the index.html on the asf-site branch in Git.

If failed, see https://builds.apache.org/job/hbase_generate_website/263/console
Re: MutableQuantiles
BTW, I am looking at 1.2 branch, though here the Hadoop one does exactly the same as what the HBase one does. Where do I see the difference? Master looks the same too. Are you referring to the histogram classes? On Mon, Jun 20, 2016 at 1:36 PM, Lars George wrote: > Ah thanks Andy. It seemed mostly a copy (with some internal > modification). Now, where is that used at all? > > On Sun, Jun 19, 2016 at 7:06 PM, Andrew Purtell > wrote: >> We have additional functionality that the Hadoop supplied one does not, >> importantly the ability to dump counts by latency bucket rather than >> percentile measures at the moment. The former can be used to calculate >> mathematically meaningful percentile measures over the whole fleet and over >> longer timescales, the latter cannot. >> >> >>> On Jun 19, 2016, at 9:36 AM, Lars George wrote: >>> >>> Hi, >>> >>> As per https://issues.apache.org/jira/browse/HBASE-6409 we rolled our >>> own class. Is that still needed? Since 2012 lot's has changed and we >>> should have all in place to use the Hadoop supplied one? >>> >>> Just curious. >>> >>> Cheers, >>> Lars
[jira] [Created] (HBASE-16072) CRUD actions stucked when using spark1.6 manipulate hbase1.2.1
benbenqiang created HBASE-16072: --- Summary: CRUD actions stucked when using spark1.6 manipulate hbase1.2.1 Key: HBASE-16072 URL: https://issues.apache.org/jira/browse/HBASE-16072 Project: HBase Issue Type: Bug Components: API Affects Versions: 1.2.1 Environment: spark1.6 hbase1.2.1 Reporter: benbenqiang Priority: Critical -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: MutableQuantiles
Ah thanks Andy. It seemed mostly a copy (with some internal modification). Now, where is that used at all? On Sun, Jun 19, 2016 at 7:06 PM, Andrew Purtell wrote: > We have additional functionality that the Hadoop supplied one does not, > importantly the ability to dump counts by latency bucket rather than > percentile measures at the moment. The former can be used to calculate > mathematically meaningful percentile measures over the whole fleet and over > longer timescales, the latter cannot. > > >> On Jun 19, 2016, at 9:36 AM, Lars George wrote: >> >> Hi, >> >> As per https://issues.apache.org/jira/browse/HBASE-6409 we rolled our >> own class. Is that still needed? Since 2012 lot's has changed and we >> should have all in place to use the Hadoop supplied one? >> >> Just curious. >> >> Cheers, >> Lars
[jira] [Created] (HBASE-16071) The VisibilityLabelFilter should not count the "delete cell"
ChiaPing Tsai created HBASE-16071: - Summary: The VisibilityLabelFilter should not count the "delete cell" Key: HBASE-16071 URL: https://issues.apache.org/jira/browse/HBASE-16071 Project: HBase Issue Type: Bug Affects Versions: 2.0.0 Reporter: ChiaPing Tsai Priority: Trivial

The VisibilityLabelFilter will see and count the "delete cell" if scan.isRaw() returns true, so the (put) cell will be skipped if it has a lower version than the "delete cell". The critical code is shown below:

{code:title=VisibilityLabelFilter.java|borderStyle=solid}
public ReturnCode filterKeyValue(Cell cell) throws IOException {
  if (curFamily.getBytes() == null
      || !(CellUtil.matchingFamily(cell, curFamily.getBytes(), curFamily.getOffset(),
          curFamily.getLength()))) {
    curFamily.set(cell.getFamilyArray(), cell.getFamilyOffset(), cell.getFamilyLength());
    // For this family, all the columns can have max of curFamilyMaxVersions versions. No need to
    // consider the older versions for visibility label check.
    // Ideally this should have been done at a lower layer by HBase (?)
    curFamilyMaxVersions = cfVsMaxVersions.get(curFamily);
    // Family is changed. Just unset curQualifier.
    curQualifier.unset();
  }
  if (curQualifier.getBytes() == null
      || !(CellUtil.matchingQualifier(cell, curQualifier.getBytes(), curQualifier.getOffset(),
          curQualifier.getLength()))) {
    curQualifier.set(cell.getQualifierArray(), cell.getQualifierOffset(),
        cell.getQualifierLength());
    curQualMetVersions = 0;
  }
  curQualMetVersions++;
  if (curQualMetVersions > curFamilyMaxVersions) {
    return ReturnCode.SKIP;
  }
  return this.expEvaluator.evaluate(cell) ? ReturnCode.INCLUDE : ReturnCode.SKIP;
}
{code}

[VisibilityLabelFilter.java|https://github.com/apache/hbase/blob/d7a4499dfc8b3936a0eca867589fc2b23b597866/hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityLabelFilter.java]

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-16070) Mapreduce Serialization class do not Interface audience
ramkrishna.s.vasudevan created HBASE-16070: -- Summary: Mapreduce Serialization class do not Interface audience Key: HBASE-16070 URL: https://issues.apache.org/jira/browse/HBASE-16070 Project: HBase Issue Type: Bug Reporter: ramkrishna.s.vasudevan Assignee: ramkrishna.s.vasudevan Fix For: 2.0.0 KeyValueSerialization, ResultSerialization and MutationSerialization do not have an InterfaceAudience annotation. They are exposed interfaces and should be Public, if I am not wrong. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-16069) Typo "trapsparently" in item 3 of chapter 87.2
li xiang created HBASE-16069: Summary: Typo "trapsparently" in item 3 of chapter 87.2 Key: HBASE-16069 URL: https://issues.apache.org/jira/browse/HBASE-16069 Project: HBase Issue Type: Bug Components: documentation Reporter: li xiang Assignee: li xiang Priority: Minor In Chapter 87.2. Coprocessor Implementation Overview ... 3. Call the coprocessor from your client-side code. HBase handles the coprocessor trapsparently. ... Correct "trapsparently" into "transparently" -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-16068) Procedure v2 - use consts for conf properties in tests
Matteo Bertozzi created HBASE-16068: --- Summary: Procedure v2 - use consts for conf properties in tests Key: HBASE-16068 URL: https://issues.apache.org/jira/browse/HBASE-16068 Project: HBase Issue Type: Sub-task Components: proc-v2, test Affects Versions: 1.1.5, 1.2.1, 2.0.0, 1.3.0 Reporter: Matteo Bertozzi Assignee: Matteo Bertozzi Priority: Trivial Fix For: 2.0.0, 1.3.0, 1.2.2, 1.1.6 replace the hardcoded properties string conf.set("foo.key", v) in the tests with the use of the configuration property constants that we already have -- This message was sent by Atlassian JIRA (v6.3.4#6332)