Re: [DISCUSS] Make AsyncFSWAL the default WAL in 2.0
All you have to do is stick around long enough. Hadoop 0.20-append v2 :-)

> On May 11, 2016, at 9:46 PM, Stack wrote:
>
> On Wed, May 11, 2016 at 7:53 PM, 张铎 wrote:
>
>> I think at that time I will start a new project called AsyncDFSClient which will implement the whole client side logic of HDFS without using reflection :)
>
> Haven't I seen this movie before? (smile)
> St.Ack
Re: [DISCUSS] Make AsyncFSWAL the default WAL in 2.0
On Wed, May 11, 2016 at 7:53 PM, 张铎 wrote:

> I think at that time I will start a new project called AsyncDFSClient which will implement the whole client side logic of HDFS without using reflection :)

Haven't I seen this movie before? (smile)
St.Ack
Re: [DISCUSS] Make AsyncFSWAL the default WAL in 2.0
I think at that time I will start a new project called AsyncDFSClient which will implement the whole client side logic of HDFS without using reflection :)

2016-05-12 10:27 GMT+08:00 Andrew Purtell:

> If Hadoop refuses the changes before we release, we can change the default back.
Fixed: hbase.apache.org HTML Checker
Fixed If successful, the HTML and link-checking report for http://hbase.apache.org is available at https://builds.apache.org/job/HBase%20Website%20Link%20Ckecker/42/artifact/link_report/index.html. If failed, see https://builds.apache.org/job/HBase%20Website%20Link%20Ckecker/42/console.
Re: [DISCUSS] Make AsyncFSWAL the default WAL in 2.0
If Hadoop refuses the changes before we release, we can change the default back.

> On May 11, 2016, at 6:50 PM, Gary Helmling wrote:
>
> I don't think it's right to block this by myself, since I'm clearly in the minority. Since others clearly support this change, have at it.
>
> But let me pose an alternate question: what if HDFS flat out refuses to adopt this change? What are our options then with this already shipping as a default? Would we continue to endure breakage due to the use of HDFS private internals? Do we switch the default back? Do we do something else?
>
> Thanks for the discussion.
Re: [DISCUSS] Make AsyncFSWAL the default WAL in 2.0
> I was trying to avoid the below oft-repeated pattern at least for the case of critical developments:
>
> + New feature arrives after much work by developer, reviewers and testers, accompanied by fanfare (blog, talks).
> + Developers and reviewers move on after getting it committed, or it gets hacked into a deploy so it works in a frankenstein form.
> + It sits in our code base across one or more releases marked as optional, 'experimental'.
> + The 'experimental' blemish discourages its exercise by users.
> + The feature lags, rots.
> + Or, the odd time, we go ahead and enable it as default in spite of the fact it was never tried when experimental.
>
> Distributed Log Replay sat in hbase across a few major versions. Only when the threat of our making an actual release with it on by default did it get serious attention, where it was found flawed and is now being actively purged. This was after it made it past reviews, multiple attempts at testing at scale, and so on; i.e. we'd done it all by the book. The time in an 'experimental' state added nothing.

Those are all valid concerns as well. It's certainly a pattern that we've seen repeated. That's also a broader concern I have: the farther we push out 2.0, the less exercised master is.

I don't really know how best to balance this with concerns about user stability. Enabling by default in master would certainly be a forcing function and would help it get more testing before release. I hear that argument. But I'm worried about the impact after release, where something as simple as a bug-fix point release upgrade of Hadoop could result in runtime breakage of an HBase install. Will this happen in practice? I don't know. It seems unlikely that the private variable names being used, for example, would change in a point release. But we're violating the abstraction that Hadoop provides us, which is what guarantees such breakage won't occur.

> Yes. 2.0 is a bit out there, so we have some time to iron out issues is the thought. Yes, it could push out delivery of 2.0.

Having this on by default in an unreleased master doesn't actually worry me that much. It's just the question of what happens when we do release. At that point, this discussion will be ancient history and I don't think we'll give any renewed consideration to what the impact of this change might be. Ideally it would be great to see this work in HDFS by that point, and for that HDFS version this becomes a non-issue.

> I think the discussion here has been helpful. Holes have been found (and plugged), the risk involved has gotten a good airing out here on dev, and in spite of the back and forth, one of our experts in good standing is still against it being on by default.
>
> If you are not down w/ the arguments, I'd be fine not making it the default.
> St.Ack

I don't think it's right to block this by myself, since I'm clearly in the minority. Since others clearly support this change, have at it.

But let me pose an alternate question: what if HDFS flat out refuses to adopt this change? What are our options then with this already shipping as a default? Would we continue to endure breakage due to the use of HDFS private internals? Do we switch the default back? Do we do something else?

Thanks for the discussion.
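The fragility concern above can be illustrated in the abstract. The following is a hedged Python sketch, not actual HDFS or HBase code — the class and field names (`PublicClientV1`, `_target_nodes`, etc.) are invented for illustration. Reaching past a public API to a private member works only until an internal rename, and no compatibility guarantee prevents that rename, even in a point release:

```python
class PublicClientV1:
    """Stands in for a library class; only its public API is supported."""
    def __init__(self):
        self._target_nodes = ["dn1", "dn2"]  # private implementation detail

class PublicClientV2(PublicClientV1):
    """A 'point release': public API unchanged, private field renamed."""
    def __init__(self):
        self._pipeline_nodes = ["dn1", "dn2"]

def peek_nodes(client):
    # Reflection-style access to a private member, bypassing the public API.
    # Works against V1; raises AttributeError at runtime against V2.
    return getattr(client, "_target_nodes")

peek_nodes(PublicClientV1())    # returns the node list
# peek_nodes(PublicClientV2())  # would raise AttributeError
```

Nothing in the "upgrade" above changed documented behavior, which is why such breakage would not show up in a compatibility report.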
Re: [VOTE] First release candidate for HBase 1.1.5 (RC0) is available
+1

- Checked sums and signatures. Signing key needs updating from your favorite hkp server (I used keys.gnupg.net) and then won't show as expired.
- Unpacked tarballs, layouts look good. Spot checked LICENSE and NOTICE files.
- Built from source with release auditing enabled, passed (7u79)
- Unit test passes (7u79)
- Loaded 1M keys with LTT (10 readers, 10 writers, 10 updaters (20%)), no unexpected messages in the log, latencies in the ballpark, all keys verified.

On Sun, May 8, 2016 at 9:23 PM, Nick Dimiduk wrote:

> *** Please note that my key expired since the previous release. I have updated its expiration, pushed to pgp.mit.edu, updated the KEYS file linked below, and attempted to force an update on id.apache.org. I don't know how long it will take for people.apache.org to refresh. ***
>
> *** Please note that this voting window is slightly shorter than the customary one week so that we have time for an RC1 before HBaseCon, if necessary. ***
>
> I'm happy to announce the first release candidate of HBase 1.1.5 (HBase-1.1.5RC0) is available for download at https://dist.apache.org/repos/dist/dev/hbase/hbase-1.1.5RC0/
>
> Maven artifacts are also available in the staging repository https://repository.apache.org/content/repositories/orgapachehbase-1136/
>
> Artifacts are signed with my code signing subkey 0xAD9039071C3489BD, available in the Apache keys directory https://people.apache.org/keys/committer/ndimiduk.asc and in our KEYS file http://www-us.apache.org/dist/hbase/KEYS.
>
> There's also a signed tag for this release at https://git-wip-us.apache.org/repos/asf?p=hbase.git;a=tag;h=92323e8e630e46d277ab2e8ebd34b91ab5d597d5
>
> The detailed source and binary compatibility report vs 1.1.4 has been published for your review, at http://home.apache.org/~ndimiduk/1.1.4_1.1.5RC0_compat_report.html
>
> HBase 1.1.5 is the fifth patch release in the HBase 1.1 line, continuing on the theme of bringing a stable, reliable database to the Hadoop and NoSQL communities. This release includes over 20 bug fixes since the 1.1.4 release. Notable correctness fixes include HBASE-15234, HBASE-15295, HBASE-15325, HBASE-15622, and HBASE-15645.
>
> The full list of fixes included in this release is available at https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310753&version=12335058 and in the CHANGES.txt file included in the distribution.
>
> Please try out this candidate and vote +/-1 by 23:59 Pacific time on Thursday, 2016-05-12 as to whether we should release these artifacts as HBase 1.1.5.
>
> Thanks,
> Nick

--
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein (via Tom White)
[jira] [Resolved] (HBASE-15818) Shell create ‘t1’, ‘f1’, ‘f2’, ‘f3’ wrong number of arguments
[ https://issues.apache.org/jira/browse/HBASE-15818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matteo Bertozzi resolved HBASE-15818.
-------------------------------------
    Resolution: Invalid

closing, I think it is just the wrong quote character '

> Shell create ‘t1’, ‘f1’, ‘f2’, ‘f3’ wrong number of arguments
> -------------------------------------------------------------
>
>                 Key: HBASE-15818
>                 URL: https://issues.apache.org/jira/browse/HBASE-15818
>             Project: HBase
>          Issue Type: Bug
>          Components: shell
>    Affects Versions: 2.0.0, 1.2.1
>            Reporter: Matteo Bertozzi
>            Priority: Critical
>             Fix For: 2.0.0, 1.3.0, 1.2.2
>
> Unable to create a table with multiple families (as suggested by the Examples); also the shell is exiting. (I only tested 1.2 and 2.0 and have the problem; 1.1 seems to be ok.)
> {noformat}
> hbase(main):001:0> create ‘t1’, ‘f1’, ‘f2’, ‘f3’
> ERROR: wrong number of arguments (0 for 1)
> Examples:
> hbase> create 't1', 'f1', 'f2', 'f3'
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
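The failure above comes from typographic (curly) quotes pasted into the shell: `‘t1’` is not a valid string literal, so `create` ends up invoked without the arguments it expects. A hedged Python sketch of the kind of normalization that sidesteps the paste problem (the function name is illustrative, not HBase shell internals — the actual fix for a user is simply to retype the command with straight quotes):

```python
# Map typographic quote characters to their plain ASCII equivalents
# before a pasted command line is handed to an interpreter.
SMART_QUOTES = {
    "\u2018": "'",  # ‘ left single quotation mark
    "\u2019": "'",  # ’ right single quotation mark
    "\u201c": '"',  # “ left double quotation mark
    "\u201d": '"',  # ” right double quotation mark
}

def normalize_quotes(command: str) -> str:
    """Replace curly quotes with straight ASCII quotes."""
    return command.translate(str.maketrans(SMART_QUOTES))

normalize_quotes("create ‘t1’, ‘f1’")  # → "create 't1', 'f1'"
```

Word processors and some web pages substitute curly quotes automatically, which is how they end up in pasted shell commands in the first place.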
Re: [VOTE] First release candidate for HBase 1.1.5 (RC0) is available
+1

- Deployed onto a 5-node container cluster from binary tarball. Did some sanity checks of the HBase shell and web UI, all looks good.
- Ran IntegrationTestBigLinkedList in loop mode with 1 billion nodes; passed without a problem.

-Dima

On Wed, May 11, 2016 at 9:37 AM, Matteo Bertozzi wrote:

> +1
>
> - compiled from source and ran a few unit tests: TestAdmin*, Test*Master*, Test*Region*
> - inspected the binary
> - started hbase from both source and binary
> - a few commands from the shell: create/disable/enable/drop/split, put/get/scan, snapshot/clone_snapshot
> - ran PerformanceEvaluation with random write/read with autosplit
> - ran a simple bulkload and checked the data
> - clicked around the webui
> - checked the logs for anything strange
>
> On Wed, May 11, 2016 at 8:26 AM, Nick Dimiduk wrote:
>
>> A reminder, everyone, that this vote is scheduled to conclude in ~40 hours.
[jira] [Created] (HBASE-15818) Shell create ‘t1’, ‘f1’, ‘f2’, ‘f3’ wrong number of arguments
Matteo Bertozzi created HBASE-15818:
---------------------------------------

             Summary: Shell create ‘t1’, ‘f1’, ‘f2’, ‘f3’ wrong number of arguments
                 Key: HBASE-15818
                 URL: https://issues.apache.org/jira/browse/HBASE-15818
             Project: HBase
          Issue Type: Bug
          Components: shell
    Affects Versions: 2.0.0
            Reporter: Matteo Bertozzi
            Priority: Critical
             Fix For: 2.0.0, 1.3.0, 1.2.2


Unable to create a table with multiple families (as suggested by the Examples); also the shell is exiting. (I only tested 1.2 and 2.0 but it may be in every version.)

{noformat}
hbase(main):001:0> create ‘t1’, ‘f1’, ‘f2’, ‘f3’
ERROR: wrong number of arguments (0 for 1)
Examples:
hbase> create 't1', 'f1', 'f2', 'f3'
{noformat}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Created] (HBASE-15817) Backup history should mention the type (full or incremental) of the backup
Ted Yu created HBASE-15817:
------------------------------

             Summary: Backup history should mention the type (full or incremental) of the backup
                 Key: HBASE-15817
                 URL: https://issues.apache.org/jira/browse/HBASE-15817
             Project: HBase
          Issue Type: Improvement
            Reporter: Ted Yu
            Assignee: Ted Yu
            Priority: Minor
         Attachments: 15817.v1.txt

[~cartershanklin] performed a full backup followed by an incremental backup. In the output of backup history, one of the backups is full and one is incremental. But which one?

{code}
[vagrant@hbase ~]$ hbase backup history
ID         : backup_1462900419633
Tables     : SYSTEM.CATALOG;SYSTEM.SEQUENCE;MY_INDEX;SYSTEM.FUNCTION;SYSTEM.STATS;TEST_DATA
State      : COMPLETE
Start time : Tue May 10 17:13:39 UTC 2016
End time   : Tue May 10 17:13:53 UTC 2016
Progress   : 100
ID         : backup_1462900212093
Tables     : SYSTEM.CATALOG;SYSTEM.SEQUENCE;MY_INDEX;SYSTEM.FUNCTION;SYSTEM.STATS;TEST_DATA
State      : COMPLETE
Start time : Tue May 10 17:10:12 UTC 2016
End time   : Tue May 10 17:11:30 UTC 2016
Progress   : 100
{code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Created] (HBASE-15816) Provide client with ability to set priority on Operations
churro morales created HBASE-15816:
--------------------------------------

             Summary: Provide client with ability to set priority on Operations
                 Key: HBASE-15816
                 URL: https://issues.apache.org/jira/browse/HBASE-15816
             Project: HBase
          Issue Type: Improvement
    Affects Versions: 2.0.0
            Reporter: churro morales
            Assignee: churro morales

First round will just be to expose the ability to set priorities for client operations. For more background: http://mail-archives.apache.org/mod_mbox/hbase-dev/201604.mbox/%3CCA+RK=_BG_o=q8HMptcP2WauAinmEsL+15f3YEJuz=qbpcya...@mail.gmail.com%3E

Next step would be to remove AnnotationReadingPriorityFunction and have the client send priorities explicitly.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Created] (HBASE-15815) Region mover script sometimes reports stuck region where only one server was involved
Ted Yu created HBASE-15815:
------------------------------

             Summary: Region mover script sometimes reports stuck region where only one server was involved
                 Key: HBASE-15815
                 URL: https://issues.apache.org/jira/browse/HBASE-15815
             Project: HBase
          Issue Type: Bug
    Affects Versions: 1.1.2
            Reporter: Ted Yu
            Priority: Minor

Sometimes we saw the following in output from the region mover script:

{code}
2016-05-11 01:38:21,187||INFO|3969|140086696048384|MainThread|2016-05-11 01:38:21,186 INFO [RubyThread-7: /.../current/hbase-client/bin/thread-pool.rb:28-EventThread] zookeeper.ClientCnxn: EventThread shut down
2016-05-11 01:38:21,299||INFO|3969|140086696048384|MainThread|RuntimeError: Region stuck on hbase-5-2.osl,16020,1462930100540,, newserver=hbase-5-2.osl,16020,1462930100540
{code}

There was only one server involved. Since the name of the region was not printed, it makes debugging hard to do.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
Failure: hbase.apache.org HTML Checker
Failure If successful, the HTML and link-checking report for http://hbase.apache.org is available at https://builds.apache.org/job/HBase%20Website%20Link%20Ckecker/41/artifact/link_report/index.html. If failed, see https://builds.apache.org/job/HBase%20Website%20Link%20Ckecker/41/console.
Successful: hbase.apache.org HTML Checker
Successful If successful, the HTML and link-checking report for http://hbase.apache.org is available at https://builds.apache.org/job/HBase%20Website%20Link%20Ckecker/40/artifact/link_report/index.html. If failed, see https://builds.apache.org/job/HBase%20Website%20Link%20Ckecker/40/console.
Successful: hbase.apache.org HTML Checker
Successful If successful, the HTML and link-checking report for http://hbase.apache.org is available at https://builds.apache.org/job/HBase%20Website%20Link%20Ckecker/39/artifact/link_report/index.html. If failed, see https://builds.apache.org/job/HBase%20Website%20Link%20Ckecker/39/console.
Re: [VOTE] First release candidate for HBase 1.1.5 (RC0) is available
+1

- compiled from source and ran a few unit tests: TestAdmin*, Test*Master*, Test*Region*
- inspected the binary
- started hbase from both source and binary
- a few commands from the shell: create/disable/enable/drop/split, put/get/scan, snapshot/clone_snapshot
- ran PerformanceEvaluation with random write/read with autosplit
- ran a simple bulkload and checked the data
- clicked around the webui
- checked the logs for anything strange

On Wed, May 11, 2016 at 8:26 AM, Nick Dimiduk wrote:

> A reminder, everyone, that this vote is scheduled to conclude in ~40 hours.
Re: [VOTE] First release candidate for HBase 1.1.5 (RC0) is available
A reminder, everyone, that this vote is scheduled to conclude in ~40 hours.

On Sun, May 8, 2016 at 9:23 PM, Nick Dimiduk wrote:
> *** Please note that my key expired since the previous release. I have
> updated its expiration, pushed to pgp.mit.edu, updated the KEYS file
> linked below, and attempted to force an update on id.apache.org. I don't
> know how long it will take for people.apache.org to refresh. ***
>
> *** Please note that this voting window is slightly shorter than the
> customary one week so that we have time for an RC1 before HBaseCon, if
> necessary. ***
>
> I'm happy to announce the first release candidate of HBase 1.1.5
> (HBase-1.1.5RC0) is available for download at
> https://dist.apache.org/repos/dist/dev/hbase/hbase-1.1.5RC0/
>
> Maven artifacts are also available in the staging repository
> https://repository.apache.org/content/repositories/orgapachehbase-1136/
>
> Artifacts are signed with my code signing subkey 0xAD9039071C3489BD,
> available in the Apache keys directory
> https://people.apache.org/keys/committer/ndimiduk.asc and in our KEYS
> file http://www-us.apache.org/dist/hbase/KEYS.
>
> There's also a signed tag for this release at
> https://git-wip-us.apache.org/repos/asf?p=hbase.git;a=tag;h=92323e8e630e46d277ab2e8ebd34b91ab5d597d5
>
> The detailed source and binary compatibility report vs 1.1.4 has been
> published for your review, at
> http://home.apache.org/~ndimiduk/1.1.4_1.1.5RC0_compat_report.html
>
> HBase 1.1.5 is the fifth patch release in the HBase 1.1 line, continuing
> on the theme of bringing a stable, reliable database to the Hadoop and
> NoSQL communities. This release includes over 20 bug fixes since the
> 1.1.4 release. Notable correctness fixes include HBASE-15234,
> HBASE-15295, HBASE-15325, HBASE-15622, and HBASE-15645.
>
> The full list of fixes included in this release is available at
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310753&version=12335058
> and in the CHANGES.txt file included in the distribution.
>
> Please try out this candidate and vote +/-1 by 23:59 Pacific time on
> Thursday, 2016-05-12 as to whether we should release these artifacts as
> HBase 1.1.5.
>
> Thanks,
> Nick
Successful: HBase Generate Website
Build status: Successful

If successful, the website and docs have been generated. If failed, skip to the bottom of this email.

Use the following commands to download the patch and apply it to a clean branch based on origin/asf-site. If you prefer to keep the hbase-site repo around permanently, you can skip the clone step.

git clone https://git-wip-us.apache.org/repos/asf/hbase-site.git
cd hbase-site
wget -O- https://builds.apache.org/job/hbase_generate_website/226/artifact/website.patch.zip | funzip > c9ebcd4e296a31e0da43f513db3f5a8c3929c191.patch
git fetch
git checkout -b asf-site-c9ebcd4e296a31e0da43f513db3f5a8c3929c191 origin/asf-site
git am --whitespace=fix c9ebcd4e296a31e0da43f513db3f5a8c3929c191.patch

At this point, you can preview the changes by opening index.html or any of the other HTML pages in your local asf-site-c9ebcd4e296a31e0da43f513db3f5a8c3929c191 branch, and you can review the differences by running:

git diff origin/asf-site

There are lots of spurious changes, such as timestamps and CSS styles in tables. To see a list of files that have been added, deleted, renamed, changed type, or are otherwise interesting, use the following command:

git diff --name-status --diff-filter=ADCRTXUB origin/asf-site

To see only files that had 100 or more lines changed:

git diff --stat origin/asf-site | grep -E '[1-9][0-9]{2,}'

When you are satisfied, publish your changes to origin/asf-site using this command:

git push origin asf-site-c9ebcd4e296a31e0da43f513db3f5a8c3929c191:asf-site

Changes take a couple of minutes to be propagated. You can then remove your asf-site-c9ebcd4e296a31e0da43f513db3f5a8c3929c191 branch:

git checkout asf-site && git branch -d asf-site-c9ebcd4e296a31e0da43f513db3f5a8c3929c191

If failed, see https://builds.apache.org/job/hbase_generate_website/226/console
[jira] [Created] (HBASE-15814) Miss important information in Document of HBase Security
Heng Chen created HBASE-15814:
---------------------------------

     Summary: Miss important information in Document of HBase Security
         Key: HBASE-15814
         URL: https://issues.apache.org/jira/browse/HBASE-15814
     Project: HBase
  Issue Type: Bug
  Components: documentation
    Reporter: Heng Chen

I deployed a secure cluster recently, and found that important information is missing from http://hbase.apache.org/book.html#security, such as configurations like:

{code}
<property>
  <name>hbase.regionserver.kerberos.principal</name>
  <value>hbase/_h...@your-realm.com</value>
</property>
<property>
  <name>hbase.regionserver.keytab.file</name>
  <value>/etc/hbase/conf/hbase.keytab</value>
</property>
<property>
  <name>hbase.master.kerberos.principal</name>
  <value>hbase/_h...@your-realm.com</value>
</property>
<property>
  <name>hbase.master.keytab.file</name>
  <value>/etc/hbase/conf/hbase.keytab</value>
</property>
{code}

I found a more detailed document at http://www.cloudera.com/documentation/enterprise/5-5-x/topics/cdh_sg_hbase_authentication.html

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
Re: Pre-commit builds only run the tests of the changed module, not all modules being affected
On Tue, May 10, 2016 at 11:02 PM, Sean Busbey wrote:
> Always running hbase-server is equivalent to always running the build
> at root. It takes hours. We should not do that.

Agree.

> An opt-in mode for Yetus that runs tests in all modules that depend on
> a changed module would make sense for projects that have poor module
> independence.
>
> That said, every time this comes up, I ask the same question and no
> one ever seems to have an answer:
>
> Why does the module that changed not have sufficient tests of its
> surface that the use in other modules is properly covered?

Good question. That's where we should be going.

Thanks Sean,
St.Ack

> On Tue, May 10, 2016 at 8:24 PM, Ted Yu wrote:
> > Contributor / committer should watch out for Jenkins builds after his /
> > her patch gets integrated.
> >
> > TestRpcMetrics is in the hbase-server module.
> > I think the QA bot should always run tests in the hbase-server module.
> > This can be done by adding hbase-server to CHANGED_MODULES (if it is not
> > included) in dev-support/hbase-personality.sh
> >
> > On Tue, May 10, 2016 at 7:57 PM, Phil Yang wrote:
> >
> >> Hi all,
> >>
> >> Recently I did some optimization of metrics in HBASE-15742. The patch
> >> only changed the hbase-hadoop2-compat module, and the pre-commit build
> >> only ran the tests in this module. However, some other modules import
> >> this module, and some tests failed but we didn't know before committing.
> >> I think we should run tests for all modules that import the changed
> >> modules in pre-commit builds.
> >>
> >> Any ideas?
> >>
> >> Thanks,
> >> Phil
Re: Pre-commit builds only run the tests of the changed module, not all modules being affected
Always running hbase-server is equivalent to always running the build at root. It takes hours. We should not do that.

An opt-in mode for Yetus that runs tests in all modules that depend on a changed module would make sense for projects that have poor module independence.

That said, every time this comes up, I ask the same question and no one ever seems to have an answer:

Why does the module that changed not have sufficient tests of its surface that the use in other modules is properly covered?

On Tue, May 10, 2016 at 8:24 PM, Ted Yu wrote:
> Contributor / committer should watch out for Jenkins builds after his /
> her patch gets integrated.
>
> TestRpcMetrics is in the hbase-server module.
> I think the QA bot should always run tests in the hbase-server module.
> This can be done by adding hbase-server to CHANGED_MODULES (if it is not
> included) in dev-support/hbase-personality.sh
>
> On Tue, May 10, 2016 at 7:57 PM, Phil Yang wrote:
>
>> Hi all,
>>
>> Recently I did some optimization of metrics in HBASE-15742. The patch
>> only changed the hbase-hadoop2-compat module, and the pre-commit build
>> only ran the tests in this module. However, some other modules import
>> this module, and some tests failed but we didn't know before committing.
>> I think we should run tests for all modules that import the changed
>> modules in pre-commit builds.
>>
>> Any ideas?
>>
>> Thanks,
>> Phil
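The opt-in expansion discussed in this thread could be sketched as a small helper for a personality script. This is only an illustration: the `DEPENDENT_MODULES` map below is hand-written and hypothetical, not derived from HBase's actual poms, and Yetus's real personality API is not modeled here.

```shell
#!/usr/bin/env bash
# Sketch: expand a changed-module list to also include modules that are
# known to depend on them. deps["A"]="B C" means B and C depend on A.
# The map is illustrative only (hypothetical, not HBase's real build graph).
declare -A DEPENDENT_MODULES=(
  ["hbase-hadoop2-compat"]="hbase-server"
  ["hbase-common"]="hbase-client hbase-server"
)

expand_changed_modules() {
  local -a result=("$@")
  local m
  for m in "$@"; do
    # Append any modules recorded as depending on the changed module.
    # shellcheck disable=SC2206
    result+=(${DEPENDENT_MODULES[$m]:-})
  done
  # De-duplicate while preserving first-seen order, one module per line.
  printf '%s\n' "${result[@]}" | awk '!seen[$0]++'
}

# With this in place, a patch touching only hbase-hadoop2-compat would
# also schedule hbase-server for testing.
expand_changed_modules hbase-hadoop2-compat
```

A real implementation would populate the map from the Maven reactor (e.g. via `mvn -amd`, which builds a module and everything that depends on it) rather than hard-coding it, which is what makes the mode practical only as opt-in for projects with poor module independence.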